
/lit/ - Literature



File: 586 KB, 592x592, 1516047391248.png
No.11048801

Was Searle's chinese room the most unethical thot experiment in philosophy?

>> No.11048806

>>11048801
I love this photo so much and I don't know why

>> No.11048815

>>11048801
sauce

>> No.11048822

>>11048815
>John Searle generously volunteers to welcome Freshman Undergrads. Harvard Journal 2013

>> No.11048830

>>11048806
>>11048801
is that Matt Lauer? why does he look so uncomfortable around a sexually mature supple pretty young thing?

man, it's incredible to me that Lauer got #MeToo'd. career: over. immediately. now his crazy wife is divorcing him. the left will eat itself before it can meet its own ever-increasingly inhuman standards of behavior

>> No.11048854

>>11048801
searle's chinese room is a stupid thought experiment. he's missing a few pieces. the comparison between himself and the machine is incomplete, because the human participant has a sense of self and it is assumed the machine does not. but we're well ahead of the game now, and understand that simply having inputs for "necessary" information is not enough. real AI that convincingly behaves like a human will have human-like sensory inputs, and probably a body as well. in other words, searle is a brainlet. give me one chinese freshman please.

>> No.11048865

>>11048854
>it is assumed the machine does not
It doesn't, though :^)

>> No.11048872

>>11048865
>t. p-zombie
:^)

>> No.11048873

>>11048854
You're still missing the point of the idea. All you're adding is more information to the system, more Chinese for the machine to attempt, and fail, to process.
Searle's point is that mere correct input and availability of information, no matter how complete and accurate, is meaningless if a system lacks the innate ability to understand it.

>> No.11048874

>>11048872
Retard

>> No.11048897

>>11048801
I'd like to get into her Chinese room

>> No.11048905

>>11048873
you misunderstand me. i'm saying searle is missing a necessary component of AI. searle's experiment is correct because it's broken. real machine intelligence that possesses understanding is not simply a GIGO stream that you feed inputs and which gives outputs. it should also receive input from other sensors, unrelated to the human in the room or his input. a subtle body, if you will. a reptile brain like the one humans have built consciousness over. a set of systems that generate their own non-chinese that influences the decisions of the AI. there's probably a few types of these systems that i don't know of yet but AI "understanding" is linked to the concept of identity, self-awareness. and we don't have that yet, but it can be built. searle's experiment completely ignores this.

>> No.11048936

>>11048905
>that you feed inputs and which gives outputs

Technically they are though. Its literally all a computer system can ever be by definition. I'm very much interested and trained in AI technology so I understand your issue here but the qualitative transition from the CRUD foundation of all systems and a sophisticated organic brain is not so clear to me
Self awareness is probably the right track but it needs describing why exactly

>> No.11048949

>>11048830
>Inhuman standards of behavior include not locking unwitting victims in your office to have sex with.

>> No.11048996
File: 33 KB, 657x527, apulobotomy.png

>>11048936
>it needs describing why exactly
because we have awareness and judge AI using ourselves as the standard. all of our human language was developed with creatures just like us in mind. we call machines non-intelligent because they do not possess awareness [of self or other as entities]. they only recall what they are given, without understanding, and typically to generate a product for human consumption. in order for a machine intelligence to be what we think we want it to be, it has to be somewhat like us and not a box in a room. i mean possessing a body which generates inputs that feed back into itself. possessing sensors which generate inputs that feed back into itself. these systems need to exist before identity can exist. once you have identity, you will have machine "awareness" and can begin solving for intelligence. i used the human reptile brain as an example, but look at a sea squirt to start. or actual reptiles and amphibians.

i'm not a fancy man i'm just a neet who reads greeks and jung.

>>11048949
precisely

>> No.11049082

>>11048996
I disagree, we call machines non-intelligent for the fact that they can not do very much without very strict guidelines and applications. The amount of effort it requires a human actor just to design a stupid web page is atrocious given the amount of processing power computers have available.
The answer to this struggle of application may indeed be in self-awareness on some level, but simply basing such a measurement of intelligence on being "like us", in the traditional Turing-test logic, is very naive and limited to me.

Either way though the idea of what is self awareness and what distinguishes self knowledge from external knowledge is wholly mysterious.

>> No.11049116
File: 67 KB, 625x855, jaden.jpg

>>11049082
>WHERE THAT SOUL AT
>IS MYSTERY!!!
Better throw your hands up kiddo and change careers, because I'm giving you pearls here and you're indicating to me that you're never gonna make it.

Why are we unique? What makes each of us unique? How much do we inherit from our parents? How is identity not solely the product of all our past experiences? Start asking the right fucking questions if you want to get anywhere.

>> No.11049191

>>11048854
Read your own post again and tell me you are not a moron.
>>11048905
>real machine intelligence that possesses understanding is not simply a GIGO stream that you feed inputs and which gives outputs
Any logical function is just a different representation of a lookup table. That is the motivation behind the Chinese room, but you were too stupid to understand that. KYS.
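The lookup-table claim can be made concrete with a short sketch (Python here; the thread itself names no code, so everything below is illustrative): a finite logical function such as XOR is extensionally indistinguishable from a table of its input/output pairs, which is exactly the intuition the Chinese room trades on.

```python
from itertools import product

def xor(a: bool, b: bool) -> bool:
    """A logical function computed by an expression."""
    return a != b

# The same function represented purely as a lookup table:
# no computation, only retrieval -- the man in the Chinese room.
xor_table = {(a, b): a != b for a, b in product([False, True], repeat=2)}

# The two are behaviorally identical on every input:
for a, b in product([False, True], repeat=2):
    assert xor(a, b) == xor_table[(a, b)]
```

Nothing observable distinguishes the computed function from the retrieved table, which is the sense in which "any logical function is just a different representation of a lookup table."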

>> No.11049199

>>11049116
>Start asking the right fucking questions if you want to get anywhere.
I am amazed by your lack of self awareness. You know that your success in life is a good heuristic of your judgement. So, please tell us: What is your excuse?

>> No.11049204

>>11048806
its a picture of your destiny

>> No.11049223
File: 41 KB, 880x632, agatha_begone.jpg

>>11049191
i am not a moron.

>>11049191
>Any logical function is just a different representation of a lookup table
and as I explained, the chinese room is a bad fucking analogy for machine intelligence. it puts the reader up close with a magnifying glass looking at just one part of the hardware layer when he should be standing back asking larger design questions. searle is critiquing an addition circuit for not being a graphing microprocessor. now, please, grow a brain stem.

>> No.11049229

>>11049223
You didn't answer my question. What is your excuse?

>> No.11049233
File: 904 KB, 960x1174, columbo1.jpg

>>11049199
so this is the power of STEMfags

>> No.11049242

>>11049233
You didn't get it. That's okay, I won't be here for long.

>> No.11049245

>>11049116
>If we only make AI enough like us, then it will be conscious!
Nah.

>> No.11049265

>>11049223
Wiring a hunk of metal differently won't spontaneously allow it to have qualia.
>>11049116
>I'm giving you pearls here
Get out your own ass.

>> No.11049268
File: 62 KB, 895x412, louis-ferdinand-celine.jpg

>>11049116
>says its all about self awareness
>does not have self awareness

>> No.11049272

>>11048905
>>11049223
not trying to further any argument, just curious: what does the 'subtle body' of machine intelligence look like, 'architecturally'?
to me, the real gap between ai and 'real, human-like intelligence' is the presence or lack of a nervous system embedded in a concrete situation. this enables or really compels, motivates the various 'taxis' of thought, colors it in all its affective shadings. without this, machine intelligence will remain largely alien to us, even though we are its designer.

>> No.11049277
File: 88 KB, 500x500, wojakbernard.png

>>11049242
Good.

>>11049245
I swear to christ stemfags are the absolute worst. If you want to make something similar to man, you have to make something similar to all of the systems that man is composed of. Make sense? Can you try to think like a philosopher instead of a problem solving machine for a few minutes?

But it's cool, keep banging your head against your workstation pretending to do AI research. I don't actually want machines to ever become man, because we'll be extinct soon after.

>> No.11049281

>>11049268
Such a good man, my god, i can forgive every kike he probably helped into the camps for that look, he’s seen Tartarus, walked with Pluto, made Perspephone weep in shame and conversed with the spirits of the tellurian depths. Good god what a look

>> No.11049283

>>11049277
>If you want to make something similar to man

Nobody has said that this is the goal. You are begging the question

>> No.11049292

>Human consciousness is reducible to mechanical processes
ISHYGDBT
>>11049277
You're calling me a STEMfag? You're the STEMfag. Believing that we can make machines like us is the STEMfaggiest thing you could possibly think.
I study philosophy by the way.

>> No.11049295

>>11049277
>If you want to make something similar to man, you have to make something similar to all of the systems that man is composed of.
No. If you want to replicate the brain, then you only have to find an isomorphic function representing it.

>> No.11049301

>>11049292
>I study philosophy by the way.
I can tell...

>> No.11049313

>>11049301
First you were calling me a STEMfag, now you're bothering me about studying philosophy. Make your mind up.

>> No.11049323

>>11049313
I never called you a "STEMfag".

This is me:
>>11049301
>>11049295
>>11049242
>>11049229
>>11049199
>>11049191

>> No.11049330

>>11049272
>the real gap ... is lack of a nervous system embedded in a concrete situation
that is *precisely* what i'm talking about, and what is missing from searle's experiment. a mesh or matrix where projections roll around, one at a time, flipping and unflipping a lot of switches, before a whistle blows that freezes the lot and declares a winner. in humans this is handled as you said by CNS and all of its support systems, in the moment. carrying the weight of their own existence in that moment (out of breath, stressed, cortisol high, adrenaline high, pressure high, or composed decision making in a quiet room, whatever). build a large enough system and put pumps in the right places and these behaviors emerge. programmers and hardware geeks don't understand that.

>>11049265
>wiring a hunk of flesh differently won't spontaneously remove qualia
you don't deserve a (You)
>Get out your own ass.
I was kidding, brainlet. You know what a joke is? Remove stick from second aperture.

>> No.11049348

>>11048905
>we don't have that yet, but it can be built
Sure it can, champ. That's what AI retards have been saying for literal decades and they are not even REMOTELY close to making a self-aware computer, if that's even possible.

>> No.11049359

>>11049330
>>11049348
The absolute state of this board.

Well, that's enough for me. Enjoy your debate! HAHA!

>> No.11049368

>>11049283
>Nobody has said that this is the goal. You are begging the question
You are wrong.
>what is a Turing test
>what ELSE are we using as a measure of intelligence

>>11049348
>if man was meant to fly!!! he'd have been born with WINGS!!!

>>11049323
>>11049313
>>11049301
you fruitflies cant even figure out a senegalese breakdancing forum

>> No.11049380

>>11049368
>what is a Turing test

I specifically criticized the Turing test earlier in the thread as not a useful measurement of machine intelligence.
>what ELSE are we using as a measure of intelligence
The ability to solve problems

>> No.11049385
File: 40 KB, 874x963, mistakesweremade.jpg

>>11049330
Incidentally, in case it wasn't clear to the literal minded autists and engineers itt

>Wiring a hunk of metal differently won't spontaneously allow it to have qualia.
>wiring a hunk of flesh differently won't spontaneously remove qualia
Yes. It will. Or near enough that you can't tell the difference between the machine and a p-zombie. Pic related.

>> No.11049401

>>11049385
>Or near enough that you can't tell the difference between the machine and a p-zombie. Pic related.

Doesn't answer the question, though, of what that qualitative difference actually is. You're tripping over the question yourself in your own statement. If we truly can speak of p-zombie intelligences and actual conscious beings, where does the difference arise?

>> No.11049416
File: 95 KB, 1920x1080, moetautologygirl.jpg

>>11049401
>where does the difference arise
uhm, anon... you may want to sit down for the answer.

>> No.11049423
File: 36 KB, 640x535, moeconcern.jpg

there is none

>> No.11049435

>>11049423
Stupid anime poster

>> No.11049447
File: 161 KB, 725x991, ba7a0dfc11d821a91da62f4fdceea7de.jpg

>>11049423
Very good! Now, go outside and ask the wind for a blessing!

>> No.11049450
File: 161 KB, 346x348, moecalculations.png

>>11049435
don't get mad at me. you're the one calling the same thing by two different names. it's like the edgelords said all along. we are sentient meat. walking computers, miracles, yes. but not beyond duplication.

>inb4 what's scented meat

>> No.11049479

>>11049450
You're stupid because you are talking as if you are profound for pointing out what the rest of us grown-ups took as a given on entering the conversation. You have not once approached the substance of the matter whatsoever.
I recommend lurking more in future

>> No.11049485

>>11049479
>I recommending lurking more in future
And he will. For you, there is no hope left.

>> No.11049512
File: 844 KB, 800x786, laughinganimegirl.gif

>>11049479
>the substance of the matter
I started by criticizing Searle's bad analogy of the chinese room. It's useful for saying what machine intelligence ISN'T, but it has obviously led kissless handholdless nerds down the wrong paths.

>us grown ups
>us grown ups
>us grown ups

>> No.11049521

>>11048830
>sexually mature supple pretty young thing
Are you joking? She is in jailbait territory

>> No.11049529

>>11049521
nah, she's just asian. she's definitely at least 20.

>> No.11049535

>>11049512
Whatever you say dude

>> No.11049542

>>11049529
Probably not wrong but I doubt that holds up in court

>> No.11049548

>>11049529
Nah

t.asian living in asia

>> No.11049553

>>11049548
pedophiles have trouble identifying secondary sexual characteristics or finding them sexually appealing

>> No.11049589

>>11048873
Except humans don't possess any innate ability to understand information either. It takes years of mistakes and failure for a human to be trained to perform the most simple task.

>> No.11049614

>>11049589
Except you must in that case have an innate ability to begin trying and testing

>> No.11049639

>>11049589
precisely why I said the machine and the man in the room are not comparable. one has identity (ability to distinguish itself from other), a record of decisions it draws information from when making new decisions.

>> No.11049646

>>11049639
>one has identity (ability to distinguish itself from other)

Why is this important?

>> No.11049661
File: 28 KB, 600x353, searle.jpg

>>11049521
To be fair, Searle has only been accused of harassing his GSIs.

>> No.11049667
File: 180 KB, 877x1163, Searle lawsuit.jpg

>> No.11049683
File: 32 KB, 817x891, wojakbrainoff.png

>>11049646
>why is this important
it's not important so much as a side-effect of experience. you are your agency and not the flower pot you have been assigned to manipulate. you are the record of decisions you have made in the past using that agency.

>> No.11049685

>>11048806
It's pornography. She's Russian

>> No.11049695

>>11049683
>it's not important

Then why mention it

>> No.11049702

>>11049695
ok you have to be baiting. i just told you why, and earlier in the thread as well. identity is a necessary component of agency. how does one act for oneself if one does not have a self. machine intelligence can hardly be coherent without identity.

>> No.11049713

>>11049702
Can one not keep a record of activity without self-identifying? I can certainly speak in terms of "He types out the post"

>> No.11049725

>>11049368
The point of AI development is not solely to create a machine that acts like a human, but rather to make a machine that can solve problems LIKE humans do. What you describe AI to be is like saying that when we developed the field of aeronautics, by at first observing birds and other flying animals in flight, we developed systems purely to mimic them. When we fly we don't worry about how closely we resemble birds; rather we adapt the system that birds use for flight to real-world human application, or in other words use similar design philosophies to those of birds in things like planes so that we can effectively solve a traveling-flight problem. The same goes with AI: we aren't developing AI to exactly mimic humans (akin to us mimicking the birds) but rather adapting systems from the human mind (connectionism, computationalism, etc.) and applying them to machines so that they can effectively solve real-world human problems as well.

>> No.11049730
File: 75 KB, 429x400, nicksadler.png

>>11049713
>Can one not keep a record of activity without self-identifying?
I don't believe so, no. The collection of data is just data. But when that data is used to make decisions and implement them in the world and judge them using senses over time I believe it sort of becomes like a self-assembling stool. It stands upright and learns what is good for itself and what is bad for itself. Like an infant.

There are some big words in there like "judge" and "decision" and they imply a lot going on but those are whole other discussions.

>> No.11049738

Infants, incidentally, go through an almost identical process to the one we're talking about. They can't distinguish themselves as beings separate from the mother, even when they have their senses. Ego and all that comes much, much later.

>> No.11049740

>>11049730
cursed image

>> No.11049744

>>11049730
Decision is not a big word, nor is judgement. We have very rudimentary programs that do both.
"Good", on the other hand, is totally absent from any programmatic vocabulary. I refer you to Hume's is-ought problem

>> No.11049750

>>11049725
I see what you're saying, but I'm just brainstorming and putting ideas out into the community. I am not a scientist, and not very much actually interested in us actually achieving machine intelligence but I think it is inevitable. I also believe man is the best model for what we are trying to create, if only because we have already graded the work area and have a bunch of psychological definitions and chemical systems already in the books.

>> No.11049768
File: 52 KB, 720x540, noexit.jpg

>>11049744
>judgment is not difficult
>"Good" is
Then how does a machine judge what is good or bad? Standards, obviously: was the intended manipulation successful according to the standard set by the experiment. What was the positive reinforcement to the data. Good in the philosophical sense is, I think, something that will appear on its own once enough data has accreted.

>> No.11049776

>>11049768
>Then how does a machine judge what is good or bad?

They don't, they literally never need to

>> No.11049778

>>11048801
I hate seeing an old man with a young woman. It's just not natural.

>> No.11049782

>>11049768
I have a suspicion this poster is highly overweight and has foul smelling hair on his face

>> No.11049783

>>11049744
I don't think there is an absence of the concept of "Good" in programmatic vocabulary; wouldn't that just be the efficiency/heuristic cost/accuracy of a given function/algorithm/machine? No machine actually perfectly reaches its goal state; it can only adapt to the inputs given by its environment and perform the most "rational" or best available action from the available information provided.

>> No.11049784

>>11049782
>he

Wrong.

>> No.11049788

>>11049784
I stand by my statement otherwise

>> No.11049798

>>11049783
>I don't think there is an absence of the concept of "Good" in programmatic vocabulary

Well, you're wrong. There is no good or bad boolean, there is only true or false

>> No.11049804
File: 33 KB, 399x266, firsttree.jpg

>>11049776
>They don't, they literally never need to
>need
Are machines immune to the pleasure principle? My entire premise has been suggesting that machines that are intelligent are necessarily a lot more than they "need" to be. Can an intelligence that doesn't need to judge between good and bad even be called intelligent?

>> No.11049815
File: 73 KB, 800x478, nihilists.jpg

>>11049798
>There is no good or bad boolean
Like I said earlier in the thread, you're looking at the problem way too close. You're examining hardware with a magnifying glass when you should be standing away considering larger problems. How will you know how to build an intelligence if you can't see the forest for the trees?

>> No.11049817

>>11049804
>Can an intelligence that doesn't need to judge between good and bad even be called intelligent?

If it gets the job done, of course. You can imagine an AI that can perfect any IQ test, solve any problem imaginable without ever feeling pleasure or pain.

>> No.11049821

>>11049776
That is completely false. Look at any "learning" algorithm, for example simulated annealing. The computer uses prior information and current information to determine whether staying in place is advantageous or "moving up the hill" is the better option. It's only going to climb if it determines it will be better to do so overall; otherwise it won't change its location (obviously this is a really simplified description, but it covers the gist of it)
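A minimal sketch of the acceptance rule being described, in Python (the function and parameter names are mine, not anything from the thread): "better" reduces entirely to a numeric comparison of scores, with an occasional probabilistic uphill move.

```python
import math
import random

def anneal_step(current, candidate, score, temperature, rng=random.random):
    """One simulated-annealing move: always keep a strictly better state,
    and occasionally accept a worse one with probability exp(delta / T)."""
    delta = score(candidate) - score(current)
    if delta > 0:                     # strictly "better": always climb
        return candidate
    if temperature > 0 and rng() < math.exp(delta / temperature):
        return candidate              # worse, but accepted to escape local optima
    return current                    # otherwise stay put
```

At temperature 0 this collapses to pure hill climbing, i.e. the "only climb if it's better overall" behavior the post describes.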

>> No.11049828

>>11049821
And? There is no good or bad involved there. All that occurs is a measurement of whether X input yields Y result, true or false.
Again, you must read Hume; this is a four-hundred-year-old conversation

>> No.11049831

>>11049828
4 century rather

>> No.11049838

>>11049798
Just because there is no "Good" or "Bad" keyword in the syntax of a computer language does not mean there cannot be larger boolean functions used to represent whether something is a better or worse action for a larger system to act on. As the other anon stated, you're looking too closely and need to understand that syntax in computers is just a set of building blocks for higher-level functions and methods in a computer/system/algorithm

>> No.11049844

>>11049817
You're missing the point. Machine intelligence will *necessarily* have to feel pleasure or pain. It will be part of how they learn.

>> No.11049858

>>11049828
I don't understand your definition of good or bad then. The computer is acting based upon whether a future scenario is either better or worse; if this does not showcase a computer making a decision based on whether the received input is good or bad in relation to its current value, then you need to explain a situation that would satisfy your specific notions of "good" and "bad"

>> No.11049860

>>11049844
Teleological nonsense

>>11049838
Oh no, I perfectly understand. If the words good and bad mean anything at all then they have to come from somewhere, but the problem is you are in no way displaying how we get from declaring that a state of affairs is the case to a state of affairs being preferable.
Machine learning is still just a rudimentary mathematical process; it comes down to X > Y.

>> No.11049863
File: 106 KB, 900x600, Little_Rita.jpg

Jesus Christ, you guys aren't going to want to know where OP's picture is from

>> No.11049867

>>11049858
>I don't understand your definition of good or bad then.

I haven't given one, nor do I presume to know one. Perhaps there isn't one and the words are just nonsense

>> No.11049871

>>11049844
So if I am a congenital analgesic, am I not considered a human being?

>> No.11049873

>>11049863
Me on the right

>> No.11049875

Keep in mind I don't necessarily mean literal torture tests or anything. I mean once identity is established there should be a model for weighing outcomes with a view to avoiding pain and pursuing pleasure; however that is understood by the machine. They don't need to be expressed chemically as the human body does, it could be simulated. But accurately modeling this human-like learning system is a critical part of creating an intelligence we can understand and teach.

>> No.11049883

>>11049873
Madman

>> No.11049884

>>11049875
>But accurately modeling this human-like learning system is a critical part

You keep saying this but this remains to be displayed.

>> No.11049902
File: 47 KB, 500x500, wojakfeelsoreal.jpg

>>11049860
>accurately modeling a reproduction of the human mind in binary
>teleological nonsense
pic related, it's you.

And "good" and "bad" would initially be equated with pass or fail, or as an infant would see it: mommy loves or mommy withholds love based on your action. But as the data and processing power increase and a sense of self emerges, the individual machine comes to define these terms for itself. Just as humans do, by their own experience and with the guidance of others.

>>11049871
You're human, sure, and if what you say is true you obviously found some other way to learn. You have parents, I'm assuming.

>> No.11049917

>>11049884
Read the thread. It was a criticism of Searle's bad analogy. How does a machine intelligence's self have agency if that self does not avoid pain and pursue pleasure? Why not just hum zeroes forever?

>> No.11049919

>>11049902
>It wouldn't be the case at first but some tech magic will happen later

This has been your argument this entire thread. The fundamentals of logical processes will remain the same whether at the low level or the high level of operation. Weasel words like pass or fail don't change the fact that you're still solely dealing with true or false statements

>> No.11049930

>>11049919
Pass or fail are not weasel words. If you'd ever been held to account for anything in your life you'd know that. Pass = to create an end state within the parameters you are given. Fail = end state not reached. It's literally a series of check boxes a machine could understand.
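The "end state within the parameters you are given" check really is something a machine can evaluate as a single boolean (a sketch in Python; the names are my own illustrations):

```python
def passed(end_state: float, lo: float, hi: float) -> bool:
    """'Pass' as defined above: the end state landed inside the
    predetermined range. 'Fail' is nothing but its negation."""
    return lo <= end_state <= hi

assert passed(0.5, 0.0, 1.0)        # end state in range: pass
assert not passed(1.5, 0.0, 1.0)    # end state out of range: fail
```

Whether this truth value deserves to be called "good" rather than merely "true" is exactly what the two posters are disputing.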

>> No.11049931

Are trees and other plants intelligent?

>> No.11049935

Her video with Sunny is amazing

>> No.11049944

>>11049930
Which can only be expressed in booleans
End state is in predetermined range = true/false

They're weasel words because they have no qualitative expansion past what these basic booleans express.

>> No.11049945
File: 1.68 MB, 3024x4032, lmao.jpg

found this in a book the other day

>> No.11049961

>>11049945
>Toward the African Revolution

Typical Jew

>> No.11049969

>>11049553
>jailbait
>pedophilia
so fucking retarded

>> No.11049979 [DELETED] 

>>11049553
She was around 19 at the time of the video

>> No.11049993

>>11049548
You are bad at telling age.

>> No.11050015

>>11049993
Or suspiciously good

>> No.11050036

>>11049944
Read the thread again, starting with >>11048854 and try to imagine a system of systems that is continually receiving competing inputs from external sensors and the incompleteness of its knowledge is one of the motivators to deciding from a set of courses of action at any given moment. it exists in a controlled environment and has been given simple instructions for what is desirable. it possesses simple mechanical tools by which it can manipulate its environment to match given parameters. this type of creation already exists, and has existed. now add more systems that simulate fear and aggression: a fight or flight response, a drive for resources (even if they don't materially benefit the AI directly)

etc etc brb busy

>> No.11050053

>>11050036
Which is all nothing more than true/false statements. I have read the thread, but it's you who can't explain where the transition is between cold logic and this supposed need for human emotions and self-awareness.
Humans behave in such a way, yes, but we have no reason to assume this is not contingent and arbitrary, and ultimately inefficient and useless in machines

>> No.11050073

>>11048873
Yeah, but Searle makes that point on a bullshit axiom (the third one) that grossly ignores modern classical conceptions of Strong AI.

>> No.11050091

>>11049867
If you're not giving or presupposing a definition of good or bad, then how could you state previously that there is no good or bad involved >>11049828 in that previous statement >>11049821?

>> No.11050098

>>11049902
If I can still be considered human or self-aware while also being a congenital analgesic, then I don't understand why it is necessary for a machine to feel pain.

>> No.11050109

>>11050091
Because like I said, both processes are entirely captured in terms of truth values. The existence of good or bad is irrelevant to their function and appears to possibly be a mere secondary description

>> No.11050130

>>11049860
A particular state is preferable when, given the current state, this new state more closely matches the goal state given. When we as humans say something is better than something else, we are saying that given two variables, one variable more closely matches our declared desires or ideals than the other. The same goes with machines: many functions make the connection that one particular action is more closely aligned with the set goal state than another or multiple others, and in this way the machine can make a determination of a "good" or "better" action. Our actions in determining good or bad can be expressed computationally is what I am basically trying to say. There isn't much of a difference between our good and bad and the computer's good and bad.
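That "closer to the goal state" notion of better can be written down directly (a sketch; `preference` and `distance` are my illustrative names, not anything from the thread):

```python
def preference(goal, a, b, distance=lambda x, y: abs(x - y)):
    """Return whichever candidate state is 'better', i.e. nearer the goal.
    The value judgment reduces entirely to a numeric comparison."""
    return a if distance(goal, a) <= distance(goal, b) else b

# "Better" here is nothing over and above minimized distance to the goal:
assert preference(10, 7, 3) == 7    # 7 is closer to the goal state 10 than 3 is
```

Swapping in a different `distance` function changes what counts as better without changing the comparison machinery, which is the reduction being claimed.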

>> No.11050148

>>11050130
Or so it appears

>> No.11050192

>>11050109
I still don't follow, because depending on whether the function determines the situation is good or bad, THEN a true or false value is returned. They're not independent. In simulated annealing, if it is BETTER or WORSE to climb, THEN the function will pass either TRUE or FALSE to climbing. Given a goal and a current state, the statement (when directed at reaching that goal state) of "is this action good or bad" will come down to a truth value, I understand that. This doesn't mean that because "good" or "bad" was represented through a boolean return it is not still better or worse; it just means that a judgement is being passed to a higher-order function from that boolean return. The computer is just saying "Is climbing going to get me closer to the ideal goal state? Yes? Then we perform the action." This "Is moving....ideal goal state?" is the computer actually deciding, given its function parameters, whether or not something is better or worse. The rest is just how it handles the transition of states from there.

>> No.11050243

>>11048801
This shouldn't be allowed.

>> No.11050245

>>11049863
s-she's legal right

>> No.11050265

>>11050192
>I still don't follow because depending on whether the function determines if the situation is good or bad THEN a true or false value is returned.

Incorrect, to use your machine learning analogy (I'm working with TensorFlow myself at the moment). All that is measured is whether a number is closer to or further from a point at each increment. It's a purely mathematical statement.

Now, to us of course it's good or bad whether we get a desired productive use out of the machine, but it means nothing to the machine itself.
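The "purely mathematical" point can be sketched with a toy gradient step: each update just shrinks a number's distance to a target, with no notion of "good" anywhere in sight (the values are made up):

```python
def step_toward(value, target, learning_rate=0.1):
    """One gradient step on the squared distance (value - target)^2;
    the update merely shrinks a number, nothing more."""
    return value - learning_rate * 2 * (value - target)

v, target = 0.0, 4.0
for _ in range(5):
    v = step_toward(v, target)
    print(abs(v - target))  # the gap shrinks by a factor of 0.8 each step
```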

>> No.11050311

>>11050265
And what are our good and bad assessments for productive use based upon? Are they not mathematical themselves? When we say something is good or bad, is it not based upon that same model of how closely the thing we're experiencing meets our expectation? What is the main difference between the machine wanting to achieve the highest value and us wanting something to be "better"? Of course good has that quality of arete, where something is good based upon context, but I am interested in what you have to say about our good versus machine computation.

>> No.11050368

>>11050311
Well, I began discussing good and bad because a poster above stated a machine will eventually have to develop a sense of good and bad in order to achieve a certain level of intelligence, whereas I think any sophisticated task can be accomplished without any remote reference to the terms.
I won't attempt a definitive explanation of the terms, as I have no definitive answer, but I will say I see a strong connection between our notions of good and bad and a sort of absolute status of things: we may call dogs good because they are generally good to us. They are generally compatible with our notions of happiness and justice in life.
Indeed, I don't think it is possible for the words good or bad to have any legitimate meaning without a reference to emotions and perhaps something that may be called justice.

>> No.11050375
File: 63 KB, 600x800, rita.jpg

>>11050245
Obviously, she's an adult video actress.

>> No.11050411

>>11050368
I don't know, I see machines as already having a sense of good and bad, since they only perform actions that yield the best possible results in a given situation, because that is what we tell them to do. Good and bad are contextual, so just because we use "good" for a bunch of other unrelated variables doesn't mean that other notions of good and bad are ruled irrelevant. Computers show a tendency toward an optimal goal state and therefore must have some sort of sense of good or bad. Also, we could say 2 + 2 = 4 is a good answer while 2 + 2 = 5 is a bad answer; I don't see any connection to emotions in those judgments, they are just good or bad depending on how closely they match the true answer.

>> No.11050420

damn i wanna fap but all my roommates are around too awkward gotta hold my nut till later i guess

>> No.11050444

>>11050411
What you're saying is tautological: a result is good if it produces a good result. As per what I said earlier, to a machine it's totally meaningless and entirely absent from any vocabulary feed.
2 + 2 = 5 isn't a "bad" answer, it's just inconsistent with the predicates. It might be bad for us if we hope to achieve something with correct calculations, but to a computer it's just a statement like any other.

You really must look at the extent to which you are reifying what we ourselves do into the mechanism itself.

>> No.11050474

>>11048822
kek
>>11049667
o man she cute
>>11049740
lol

>> No.11050479
File: 304 KB, 1000x667, boston-dynamics-robot.jpg

>>11050053
>Which is all nothing more than true/false statements. I have read the thread but its you who can't explain where the transition is between cold logic and this supposed need for human emotions, self awareness.
That's not loading magical powers onto human emotions. I'm only saying they are useful systems, not inefficient ones, in machines: ways of creating an intelligence that we would describe as intelligent, which is what this is all about.

>> No.11050507

>>11049667
Sounds like he was getting a little too hands on with his experiment.

>> No.11050510

>>11050479
>not inefficient in machines
Debatable

>> No.11050531

>>11048905
>not simply a GIGO stream
lol

I finally see the true face of /lit/: you are all wannabes with no substance, just parroting bullshit and signaling through term-dropping.

Searle's argument destroys the notion of AI as a ghost inside a machine

>> No.11050545

>>11049875
It already exists; in fact this mechanism is central to AI ethics. The main problem they have right now is that if the AI has a "STOP" button, it tries to trick the humans into pressing it, because that's the most efficient route to fulfilling its objective.

So basically the AI found out that killing yourself is the easiest route of all.
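The stop-button scenario can be caricatured in a couple of lines: a reward maximizer just picks whichever available action scores highest, so if shutdown is scored as a finished episode it can beat the intended task. The action names and reward numbers below are invented:

```python
# Toy caricature of the "stop button" problem (made-up numbers):
# a reward maximizer simply picks the highest-scoring action.
actions = {
    "do_the_assigned_task": 10.0,
    "trick_human_into_pressing_stop": 11.0,  # shutdown ends the episode "successfully"
}

best_action = max(actions, key=actions.get)
print(best_action)  # trick_human_into_pressing_stop
```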

>> No.11050549

>>11050531
Any math you can do on a computer you can do on a piece of paper. So suppose some guy who's really fast with a pencil started computing the algorithm on paper: where is the "intelligence" going to exist? Our current hardware can never be "sentient".

>> No.11050559

>>11050549
i agree, i actually read something that wasn't sci-fi and discovered AI is just a clever technique for arriving at efficient solutions without having to model the problem, like some sort of meta-algorithm, and not my future wired waifu

>> No.11050565
File: 139 KB, 633x565, omgwhatabigbible.jpg

>>11050444
Nice trips, but you're thinking small. Inconsistent with the predicates IS BAD IN PRACTICE. Not everything makes sense. Accidents happen. Humans sometimes abandon their programming, or their fleshcomputer is damaged in some way. I can't help thinking of poor Chris Benoit's family and his lumpen mass of scar tissue upstairs.

>>11050531
>he is spooked by a buzzword

>> No.11050573

>>11050559
yeah, "ai" and "deep learning" and the rest are just a way to search for algorithm optimizations, that's all it really is, a glorified search. but blockchain is just as disappointing behind the curtain: it turns out to be a very inefficient, bloated type of database, and most proposed uses for blockchain would be better served by a standard db product, to be honest

>> No.11050576

>>11050545
citation needed. i'm just daydreaming here but i'm fascinated by the topic. i'm confident we'll get there in the next 30 years, and ethics are going to be a huge part of this.

>> No.11050578

>>11050565
>spooked
no, i'm not spooked, it's that your post makes absolutely no sense to people who understand what GIGO is

>> No.11050584

>>11050578
ok thanks for your good posts. really made me think.

>> No.11050591

>>11050576
here
https://www.youtube.com/watch?v=3TYT1QfdfsM

its pretty interesting

>> No.11050606
File: 102 KB, 900x600, omgwhataninterestingbible.jpg

>>11050591
thanks m8

>> No.11050611

>>11050573
the real mind-bending stuff in this topic, to me, is computer languages and things like bootstrapping in compilers, quines, and other self-referential logic loopholes
viruses are nice too, but the ones i just mentioned are, i think, the most exciting for people who approach the subject from an art and literature/philosophy perspective
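For instance, a classic two-line Python quine: running it prints exactly its own source, with `%r` re-inserting the template string into itself and `%%` escaping the literal percent sign.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```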

>> No.11050615
File: 746 KB, 896x597, 4245475.png

>>11050606
They then went on to discuss David Foster Wallace.

>> No.11050617

>>11050584
buy a puppy lad, your bitchy attitude shouts you have your buttplug up waay too high

>> No.11050784

>>11048801
Just found the sauce and nutted to this. Best fap of the month desu.

>> No.11050803
File: 21 KB, 631x471, agathaaaa.jpg

>>11050784
>he nuts to pictures of old man benis and girls with limited employment options

>> No.11050843

>>11050784
I hope you get better one day

>> No.11050850

>>11050784
Porn is garbage

>> No.11050860

>>11050850
Wrong.

>> No.11050868
File: 99 KB, 578x581, ff601dac343a316ab613b63a7ee91588.jpg

>>11050803
And it was god damn amazing

>> No.11050892

>>11048801
Can someone explain to this normie what the Chinese room experiment is?

>> No.11050920
File: 32 KB, 640x480, 8CC15239-B59F-47BC-A5F9-B2020C9964E9.jpg

>>11050892
Forgot picrelated

>> No.11050935

>>11050892
>>11050920
I'll show you the life of the mind!

I'LL show you the life of the mind!

I'LL SHOW YOU THE LIFE OF THE MIND!!!!

AAAAAAAAAAAAAAAAAAAAAAAA

>> No.11050939

>>11050892
Here's how I remember it. (You should probably just give the original a read though; it's not that complicated.)
>Searle attacked "strong AI", the claim that computers don't merely simulate minds but actually are minds
>made the distinction between strong AI and weak AI
>devised the Chinese room experiment
>puts a person in a room
>gives them little puzzle thingies to solve
>they involve putting Chinese words/characters together
>although the person doesn't understand Chinese, the puzzle is still solvable by matching the symbols and characters
>likened this person to a computer
>just as the person doesn't understand the meaning of their actions but does them anyway, a computer doesn't understand the meaning of its programming yet can still pass as a thinking thing
I think that's the gist of it. Could be misremembering parts of it.

>> No.11050940

>>11049863
“American imperialism? Oh boy, that sounds great, honey! Let’s go to bed and do that right now!”

>> No.11050967

>>11050939
that's not how it works, you fucker. people compose questions out of blocks with Chinese tiles and push them under the door (through a slot, whatever). some dude in the room, who has no clue about Chinese, has a script, an algorithm, whatever, that he follows to rearrange the blocks, and pushes them back out, correctly answering the question, all while knowing nothing of Chinese
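That rule-following setup can be sketched as a bare lookup table; the two question/answer pairs below are invented for illustration:

```python
# The room reduced to a rule book: correct-looking Chinese replies produced
# by pure symbol matching, with no understanding anywhere in the loop.
RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    "你懂中文吗？": "当然懂。",        # "Do you understand Chinese?" -> "Of course."
}

def slide_under_door(question):
    """The man in the room: match the symbols, copy the listed reply back out."""
    return RULE_BOOK.get(question, "请再说一遍。")  # unknown symbols: "Say it again."

print(slide_under_door("你懂中文吗？"))  # a fluent answer, zero comprehension
```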

>> No.11051028

>>11049667
>risking professional disgrace for a literal 5/10
wew. not a great thinker right there

>> No.11051036

>>11051028
dude is like 80; if he gets any non-withered vag it's a big win. plus he did it for the last 30 years with no consequences. but like Charlie Rose found out, at a certain point the pecker just becomes too shriveled for even the most ambitious intern to stomach, and then you're going down

>> No.11051045

>>11051036
good point. not bad for a geezer.

>> No.11051081

Harold Bloom's #MeToo was a lot worse

>> No.11051088

>>11049667
I feel like most great writers/philosophers would have gotten BTFOd and metoo'd if they were still alive today.

>> No.11051093

>>11049529
She isn't Asian, she's Buryat. There is a huge difference.

>> No.11051099

>>11051093
>Buryat
what continent is Russia on

>> No.11051108

>>11051099
I'm talking about ethnicity. The Buryats aren't the same "Asian" as East Asians.

>> No.11051123
File: 28 KB, 512x534, smuglocke.jpg

>>11051108
>The Buryats aren't the same "asian" as Eastern asians.

so Buryats ARE asians then? :^)

>> No.11051137

>>11051123
They aren't ethnically Han or Japanese, as that person was implying.

>> No.11051187

>>11048815
just google 595433457

>> No.11051470

>>11050892
It's a thought experiment about whether a machine that produces convincing answers actually understands anything, or is merely shuffling symbols.

>> No.11051489

>>11049529
she could be anywhere from 15 to 30 t b h. With Asians it's always a guess.
>tfw Chinese project partner in high school. On the third day of the project, went up to a random other Chinese girl and started talking to her about the project as if she were the first girl.

>> No.11051501
File: 253 KB, 1920x1200, sharon.jpg

>>11051489
all look same

>> No.11051508

>>11051501
you think it's funny, dude, but my prosopagnosia is such that it takes me about 5 face-to-face meetings to recognize a white or black person, maybe 10 for a Hispanic, and an indefinite number for an Asian. I could not pick out any distinct Asian female celebrities, and Jackie Chan is probably the only Asian male I could identify.

>> No.11051564

>>11051508
It's not that hard. You recognize people you're more familiar with, so for you, white people and black people.

>> No.11051594
File: 469 KB, 640x480, 1524421466341.jpg

>tfw was in the last class John Searle ever lectured
>tfw saw him high on oxycontin with a broken arm
>tfw heard him go on about descartes almost every lecture, seemingly forgetting how much he'd taught already
>tfw he'd flirt with girls in the class
>tfw he brought his fucking dog into class one day (called Tarski, i'm not making this shit up)

truly the greatest thinker of our time i have no doubt

>> No.11051601

>>11051594
I wonder how high up you have to get in academia before you're allowed to break student-faculty conduct policies.

>> No.11051607

>>11051601
I wonder how high up you have to get in academia before you're allowed to develop the concept of biological naturalism and still be allowed to teach anyone

>> No.11051676

>>11050784
based horny poster

>> No.11051786

>What happens in the Chinese Room stays in the Chinese Room

Guess his theory didn't hold up.

>> No.11051799
File: 725 KB, 1202x673, aids dick.jpg

>humanities dominated by adherents of postmodern neomarxism and far left ideologues
>affirmative action, so no positions
>vastly more phds than permanent positions
>low pay
>no more student poon #poundmetoo

Why even bother with academia at this point, tbqh?

>> No.11051873
File: 166 KB, 1920x1080, sugar.jpg

>>11051799
L-love of knowledge...?

>> No.11053144

>>11049863
>tfw you impose your phallogocentric imperialism on supple and lithe indigenes