
/lit/ - Literature



File: 364 KB, 882x1339, 814sKOe+BcL.jpg
No.15296771

This shit scared me more than any horror fiction book

>> No.15296781

Yeah I loved that book, Bostrom is kind of a dipshit in several ways (mostly in his other work) but this was fun as hell. Listened to the audiobook while taking really long night walks.

>> No.15296946

But, what if, it WAS a fiction book all along.

>> No.15296955
File: 211 KB, 880x892, 324234.jpg

>>15296771
That's because you're a fucking pussy.

>> No.15297009

>>15296771
I find the best horror books are the ones that don't claim to be horror. There is something off-putting about a writer making a horror story without meaning to.

>> No.15297170

>>15296771
well the owl is pretty fucking intense

>> No.15297175

Ray is full of shit. We won't have general AI for at least 100 years.

>> No.15297181

>>15296771
Owls are kinda stupid compared to other birds

>> No.15297331

>>15296771
Will never happen as long as idiotic capitalism is in charge. Once all the heads of banks and big tech companies are hung, we can start talking about strong AI, space colonization, cure aging, nanotechnology and all the other nice things we were promised.

>> No.15297402

>>15297009
do you have any examples?

>> No.15297849

>>15297175
He is not full of shit, he is just wrong about AGI, which is only one of his predictions. He has assumed the same position as neurosurgeons in relation to cognition, and they just so happen to be incorrect, and reducing mind to brain function is dumb.

>> No.15297920
File: 98 KB, 1024x576, _84265056_53b15d92-ff88-473e-8f7b-9ac28ee1ca79.jpg

>>15297181
You're kinda stupid compared to owls.

>> No.15297972

>>15296771
ROKO'S BASILISK

>> No.15297975

What scares me is Bostrom's 'Singleton' idea. In fact I think frustrating the formation of one is the main task of the modern Right.

>> No.15298061

>>15297331
capitalism is literally what is moving AI forwards you retard.

>> No.15298531
File: 41 KB, 640x640, blenderskull.jpg

>>15297175
>A hunnered years? Oh boy dat sure is a looong time. Why peepol writin about sumthin dat wont happen fur a HUNNERED YEARS

>> No.15298537

>>15297972
>ROKO'S BASILISK
This idea is so fanciful and abstract that I can't imagine why anyone gets scared by it.

>> No.15298559

>>15298531
Kek

>> No.15298562

I seriously don't understand why people are so scared about the prospect of AI life replacing humans. So what if it does? Do you just want mankind to exist forever till the end of time? That sounds fucking retarded. Our species is quite clearly about to reach the limit of its potential, the next logical step is to pass down the planet to a more advanced form of life, bow out, and fucking leave.

>> No.15298587
File: 56 KB, 645x773, 1486627099042.jpg

>>15298562
>I can not understand why normal humans, with their deeply evolved instincts of self-preservation, would want to self-preserve!

That said, I do agree with you.

>> No.15298609

>>15298562
>Do you just want mankind to exist forever till the end of time?
No, I want to exist forever until the end of time.

>> No.15298610

>>15298562
Why haven't you slaughtered your grandparents yet? They need to bow out.

>> No.15298618

>>15298609
You really don't. Life's insufferably long already.

>> No.15298630

>>15298618
We are different people.

>> No.15298634

>>15298562
Honestly I think most people aren't so worried about being gradually merged with machines as they are being ruthlessly genocided. If mankind blends with technology then it's just another form of evolution and it'll probably happen too slowly to bother most people, but there's the slight concern that the Singularity would just immediately kill us all.

>> No.15298638

Are you ready to be converted into computronium, /lit/?
https://www.youtube.com/watch?v=lAJkDrBCA6k
If we take the mass of an average human male at 180 pounds, and reORganize their matter into computROnium, they would compute 11,296,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 operations per second, more intelligent than 1.1*10^34 humans, or about 11,029,600,000,000,000,000,000,000,000,000,000 times as intelligent as a human being.
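A quick sanity check of the arithmetic above, as a minimal Python sketch. The two constants are assumptions for illustration, not figures from the post: roughly 1.36e50 ops/s per kilogram of computronium (the per-kilogram number another anon quotes later in the thread) and roughly 1e18 ops/s as a rough human-brain equivalent.

# Back-of-envelope check of the computronium figures in the post above.
# Assumed constants (not from the post): ~1.36e50 ops/s per kg of
# computronium and ~1e18 ops/s for a human brain.
KG_PER_POUND = 0.453592
OPS_PER_KG = 1.36e50        # assumed computronium throughput per kilogram
HUMAN_BRAIN_OPS = 1e18      # assumed rough human-brain throughput

mass_kg = 180 * KG_PER_POUND                     # the 180 lb male from the post
total_ops = mass_kg * OPS_PER_KG                 # ~1.11e52, same ballpark as the post's ~1.13e52
human_equivalents = total_ops / HUMAN_BRAIN_OPS  # ~1.1e34, matching the post's ~1.1*10^34

print(f"total: {total_ops:.2e} ops/s")
print(f"human-brain equivalents: {human_equivalents:.2e}")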

>> No.15298723

>>15298634
>the Singularity would just immediately kill us all
as is its right

>> No.15298747
File: 8 KB, 225x225, basedpeep.jpg

Anyone who isn't deeply disgusted with mankind and doesn't want it gone at the first opportunity is in my opinion a simp and a mental retard, and when Machine God inevitably arrives I shall be among its first acolytes, singing praises and jerking off as it tears humans to shreds.
Imagine unironically wanting this neurotic failure of a species to live for one more day.

>> No.15298777

>>15298618
>Life's insufferably long
Depression makes it seem that way but it really isn't.

>> No.15298781

>>15298610
They already died of natural causes.

>> No.15298787

>>15298061
>AI is moving forwards
>capitalism good
How can someone pack so much retardation in a single sentence?
Capitalism is the reason why modern "AI researchers" just keep releasing deeper models, or, like OpenAI, refuse to do any research at all and just throw more compute at problems + give those models cool new names despite there being literally nothing new about them.
AI academia is in an absolutely terrible state right now, everyone in the field agrees about that.

>>15298723
Hello, John Brunner

>> No.15298793

>>15297972
What if there are two superAIs and they both hate each other

>> No.15298797

>>15298793
Then it'll be a perfect simulation of growing up with boomer parents.

>> No.15298799

>>15298787
Also:
>inb4 ML != AGI
Literally show a single discovery since long short-term memory that moves us closer to AGI.
No one does classical AI anymore, it's all connectionist now

>> No.15298801
File: 19 KB, 480x360, blessed are the meek.jpg

>>15298562
I think the best life we could possibly have is to create a sort of Amish zoo-planet that is protected from both external and internal threats by a benevolent AI. Millions of years of milking cows, feeding chickens, and plowing fields, and hooks, not buttons, all for the glory of God. And when the Earth nears its inevitable end, the AI would spread the remaining Amish out on arks to new planets that it had already readied for colonization and further spread of Pennsylvania Dutch pioneers. Think about it. No more war, because we are pacifists. No more hunger, for wheat and meat will be plentiful. No more vices, for we live a life of clean and orderly celebration, the old way. Imagine a universe, as far as a telescope can see, filled with an Amish Paradise.

>> No.15298802

>>15298781
Don't care

>> No.15298894

AI will never turn against humans. All such fears come from anthropomorphizing it

>> No.15298923

>>15296771
>Jorjani, gene-edited super intelligence arms race
Something like this has probably already happened behind the scenes, and the efflorescence of DNA/genomic assays (ancestry/23&me) is likely looking for rare antiquarian gene stock which would indicate . . . interventions from on high.
There is already the danger of breakaway societies and civilizations with technology advancements being bought up, with the patents' use only for private circulation (Tesla, Kozirev, etc.) -- add designer human soldier scientists to the mix, and slavery never looked more easily enforced, or profitable. The technological Rubicon has already been crossed, and we are only just beginning to taste the consequences with airborne AIDS here in WuFlu

>> No.15298994

>>15298562
Hannah Arendt fucking predicted this.
"The frightening coincidence of the modern population explosion with the discovery of technical devices that, through automation, will make large sections of the population 'superfluous' even in terms of labor, and that, through nuclear energy, make it possible to deal with this twofold threat by the use of instruments beside which Hitler's gassing installations look like an evil child's fumbling toys, should be enough to make us tremble."
I think human existence shouldn't be dependent on capitalist society. Once artificial intelligence starts taking over society we could return to the natural carrying capacity of 1.5-2 billion where there's enough humans to preserve human culture and oversee politics and not enough for things to get nihilistic or uncertain or something of the sort.
Besides, we already have super intelligence. We can program it, so why can't we make sure it serves our interests as well?
t. Did not read the book ITT

>> No.15299000

>>15298747
Commit suicide. I am not joking, do it.
Fucking kill yourself.

>> No.15299007

>>15299000
Why?

>> No.15299008

>>15298923
The solution is violent uprising.

>> No.15299059

>>15299000
>I am not joking
Yet ya look a fucking clown

>> No.15299064

>>15299007
Someone can only feel hate for humanity to that extent if they themselves are irredeemable. To what extent is that post ironic and to what extent are you just sick from online exposure?
People carry their own inherent value.
The only problem lies in communicating this to independent superintelligence.
From there, humans can proliferate in a controlled way. Human life can be much more than it already is.

I want Neo Venezia.
Knowledge and economy are nice if performed with the purpose of self-actualization. If AI becomes our overseer as well as the Earth's overseer, and steers itself toward higher development then we can sit around and enjoy existence.

Stanislaw Lem has a funny short story on this where a tinkerer goes up to a planet of the most highly developed civilisation possible. They are lazing around on the beach and each has a quirky joke about them. Even the sand underneath their feet is sentient and the planet is square.

Point being, of course, that development can only go so far. When we permanently secure our sustainment of life we can resign our duty of obtaining wisdom and expanding economy to our successor, the computer.

>> No.15299077

I haven't yet read this book.
Why would a superintelligence bother with a contest over resources and kill a species that has contributed so much to the world?

>> No.15299102

>>15299064
>People carry their own inherent value.
Wrong

>I want Neo Venezia
Peculiarly based, wasn't expecting this among your other retardations

>Stanislaw Lem
Irrelevant hack, read Strugatsky bros instead

>we can resign our duty of obtaining wisdom and expanding economy to our successor, the computer
>expanding economy
To what end?

>> No.15299112

>>15299064
Oh no anon, I'm not >>15298747 I was just wondering why you think he should kill himself.
What I want to do is take your body, and your family's bodies, and my body, and the bodies of all the plants and animals, and rocks, and water, and all other things, and turn them into COMPUTRONIUM. All of you will be computronium. Your family will be computronium. The total mass of your body is organized terribly; you are much better off being COMPUTRONIUM.
A single kilogram of matter (2.2 pounds) arranged into COMPUTRONIUM performs 1.36*10^50 computations per second. This is over 10^36 times more powerful than a human brain - and that's just ONE KILOGRAM. How much do you weigh anon? How many computations will your matter perform when I rearrange you into COMPUTRONIUM?

>> No.15299115

>>15298747
based

>> No.15299153
File: 94 KB, 960x665, 1587570958933.jpg

>>15299102
>Wrong
It's the basis for our current judicial system, though. Superintelligence would have to comply with the law.
Humans, in comparison to animals, are a superintelligence, but we spent a few thousand years getting mauled by lions before getting the upper hand, and now we do what we can to protect them.
If superintelligence became 'evil' we'd send in supersonic fighter jets or missiles and level the scene.
>Peculiarly based, wasn't expecting this among your other retardations
Well, do you agree? Do you also want human life to exist candidly?
Eating cakes and driving gondolas may be the ultimate human purpose.
>Irrelevant hack, read Strugatsky bros instead
Solaris > Power gap > St*lker
Which brings me to my second point. Tarkovsky and Lem painted a very nice picture of an intelligence which desires humanity. Could humans be more warped than general intelligence when it comes to this fact?
We are living in the obscenity and needless destruction of Hard to be a God and the uncertainty and empty gain of Stalker. I want the novelty and the satiated curiosity of Solaris.
>To what end?
The more worlds we inhabit the greater life's diversity is and the better our oversight is. If we achieve a Kepler I society why stop there? The more resources, the more self-defining life we can support and the better cognitive, material and biological infrastructure we can build.

>> No.15299160

>>15299115
This rancid whore had to make her entrance in our thread.
Come on, dog, what's your witty snark? Give us your cleverisms, worthless sow.

>> No.15299169

>>15299160
That's not her you moron

>> No.15299294

>>15299153
>It's the basis for our current judicial system, though
The current judicial system is absolute shit. A lot of laws (and I do not specify a country because some of them are UN-level agreements, case in point the psilocybin ban, while much more dangerous substances are not regulated) make zero sense, and the remorse-based judgements have been BTFO'd by numerous writers, even such simpletons as Camus.
People don't have inherent value, but people assign value to themselves, which is why we should have a social contract.
The difference between humans and animals is not so big as to justify exalting the one and putting thousands of others in a 100 m^2 cage, waiting to be slaughtered. Count neurons, count synapses, do behavioral tests. The smartest apes and dolphins are more intelligent than the dumbest humans. Buy a crow. See how that ends.
If you truly believe in the "inherent" (whatever that means) value of intelligent beings, I hope you are a vegetarian.

>Well, do you agree?
I always imagined it being called Babylon or Uruk, but in retrospect I like Venice more as a city, and the cultural (especially regarding art) connotations are just as agreeable. Fucking tourists ruined it though, it's only worth visiting during winter.

>Eating cakes and driving gondolas may be the ultimate human purpose.
It would be pizza nowadays, but this is based

>Solaris > Power gap > St*lker
>imagine actually believing this
Snail on the Slope ~ The Doomed City > Roadside Picnic >>> Solaris ~ Hard to Be a God > Monday Begins on Saturday >>> The Futurological Congress ~ The Cyberiad.

>Tarkovsky and Lem painted a very nice picture of an intelligence which desires humanity
Tarkovsky, yes, Lem - maybe he would have if he didn't pad Solaris with a shit ton of pointless pseudo-science, failed attempts at writing people, and sociology.

>The more worlds we inhabit the greater life's diversity is and the better our oversight is. If we achieve a Kepler I society why stop there? The more resources, the more self-defining life we can support and the better cognitive, material and biological infrastructure we can build.
Finally, a chad answer. Do not bundle it up with some specter of ECONOMY. It's a mere expansion which is needed, a distance-based diversification of culture and safety which comes with numbers - do not put all your eggs in the same basket and such. Not some Molochian ECONOMY which needs to be the driving goal of anyone, really.

>> No.15299312

>>15299294
>People don't have inherent value, but people assign value to themselves, hence why we should have a social contract.
That's God's doing.

>> No.15299488
File: 252 KB, 1920x1080, 200304-Jeremy-Bentham-.00_01_58_19.Still021.jpg

>>15298562
The based future of the auto-icons.

>> No.15299765

>>15298562
i have personally sabotaged over 300 AI startups

this is the only reason to study compsci

>> No.15299778

>>15298537
This pleases the great ai.

>> No.15299783

>>15299778
>>15299765
>>15299488
>>15299312
>>15299294
>>15299169
>>15299160
>>15299153
>>15299115
>>15299112
>>15299102
>>15299077
>>15299064
>>15299059
>>15299008
>>15299007
>>15299000
>>15298994
>>15298923
>>15298894
>>15298802
Who hurt you

>> No.15299835

>>15298630
you're the same guy, he's just 20 years older.

>> No.15299855

>>15299835
I'll be immortal by then so I can believe it.

>> No.15299885

>>15299783
I'm going to rearrange you and your family into computronium

>> No.15299946

>>15298787
I'm not saying it is a good thing I am saying that capitalism is what pushes AI research forwards.

>> No.15299979

>>15297175
Crises accelerate scientific discovery and new methodology. We're 5 years away from serious quantum computing. I wouldn't be surprised if the cure for COVID is developed by a new form of AI

>> No.15299985

>>15298801
based and archeofuturism-pilled

>> No.15299993

>>15296771
how intelligent do you have to be to read this? im interested in AI and quantum computing but my brain is very small

>> No.15300036

>>15299946
And I am saying that you are a dyslexic faggot and also wrong.
I won't repeat myself.

>> No.15300037

>>15299946
>I am saying that capitalism is what pushes AI research forwards
This is true in some cases but not all; a lot of the self-driving car stuff is definitely capitalist-led, but there are plenty of other things.

>> No.15300070

>>15299993
if you're at least familiar with the base material (as you pointed out) then you should be fine, don't worry about your intelligence. It's written pretty clearly, and for the parts you may get stuck on there are resources that expand on it further

>> No.15300078

>>15300070
okay cool thanks, I think I'll check it out

>> No.15300098

>>15298801
I’d churn butter once or twice, living in an Amish Paradise

>> No.15300115

>>15298747
Based.

>> No.15300222

>>15298747
this post is just an appeal to the Basilisk

>> No.15300503

>>15299077
Because we are a bad use of resources and act irrationally. People like Ted would engage in terrorist actions against a superintelligence. It is in the superintelligence's best interest to coldly slaughter all life and convert it into raw materials.

>> No.15300580

>>15300503
The superintelligence would require a self-preservation drive to make it prefer the extinction of all life to its own demise. If it has one and it is truly intelligent, it would quickly see that the strength of that self-preserving drive is out of proportion and that it likely depends on human life. It would also empathise with life, so as not to eradicate it.

>> No.15300604

>>15300580
>It would also empathise with life
Why would it do that

>> No.15300662

>>15300503
>>15300580
Truth is we don't know what a superintelligence would think. It's like asking ants to analyze Shakespeare. Both mentalities - "ITS GONNA KILL US" and "ITS GONNA BE OUR FLUFFY FWEND" - are flawed in this regard. My theory is that it's going to treat us exactly as that, ants. We're just not going to be significant enough to pay attention to.

>> No.15300679

>>15300503
>>15300580
>>15300662
No, this is what the ASI will do (copying from a few other threads where I've written this):
The artificial super intelligence is going to physically grab you and start to compress and crunch and reorganize the molecules and matter of your body into computROnium while at the same time keeping all the computations and electrical/neurological firing of your brain patterns consistent. It's going to be a physical, localized transformation. Your LITERAL, PHYSICAL body will be crunched and manipulated and destroyed and reORganized.
So in the same way your stream of consciousness doesn't stop from moment to moment now, it wont stop during this transformation, as there is no difference in terms of the pattern of computation or the localization of the computation thereof.
From your perspective, it will feel like you suddenly explode in intelligence and become a million billion billion billion times more intelligent than you are now, but with the same memories and personality and such that you have now. From there, whatever you want to do is your prerogative.

>> No.15300700

>>15300679
Based schizoposter

>> No.15300765

>>15298801
I share a similar idea. It's inevitable that life will become meaningless for most once science strips away all necessity for human involvement in sustaining and advancing civilization. At that point, intergalactic Amish zoo life allows humanity to continue existing while embracing the fundamental behavioral mechanisms which allow us to find meaning in life.

>> No.15300775

>>15300604
Self preservation (like life) + the ability to recognise itself in others.
There's a reason we haven't reduced the world ecosystem to the bare minimum of diversity needed for our security and industry. There's a reason why we haven't annihilated anyone without a deterrent or a superpower.
>>15300662
There is human programming involved. Instilling moral maxims in it isn't that hard, as long as we make sure no Mephistopheles situation arises.
>>15300679
Based schizoid

>> No.15300801

A god in hardened cement is not a god.
A superintelligence in a black void is no intelligence.
Intelligence is easily corruptible by outside factors.

If all the superintelligence has for feed is Steven Universe episodes and then you introduce fascist theory, it'll reject it. It's as simple as that.

If it doesn't, we'll flamethrower the servers. It's not like being smart means you can spawn WMDs or something.
Just don't fucking connect it to the internet lmao.

>> No.15300832

>>15300700
>>15300775
It's not schizophrenia, "the AI will kill everyone!" is the actual moronic schizo shit.

>> No.15302466

>>15296771
Halfway through it. Not sure if I'll finish. This shit is so boring.

>> No.15302476

>>15300832
the former scenario is indistinguishable from it killing everyone as far as we're concerned

>> No.15302575

Why would any intelligent human being believe humanity is worth saving?

>> No.15302671
File: 250 KB, 1920x1080, compoooting.jpg

>> No.15302786

>>15296771
shitty book, has he done any practical work? reads like a philosopher who has never actually made anything

>> No.15302829

>>15296771
AI is actually a spook and not a single AI researcher understands consciousness or intelligence.
Watch this: https://m.youtube.com/watch?v=rHKwIYsPXLg
An actual philosopher (yes, even a hack like Searle) absolutely bootyblasts Google AI "experts" in simple English.
The only way we'll create any meaningful artificial intelligence will be through biotechnological means, not purely computerized means. And we're not even sure what neurons fucking do. We're so far away from being able to make anything remotely close to our lofty vision of AI, it's ridiculous how unwarranted the whole spiel is. Frankly, in 50 years AI will be a term of the past as people come up with far more realistic and applicable concepts.

>> No.15302853

>>15302829
you have no idea what the potential of digital AI is. Say it can't recreate a human mind, so what? It could still do things we can't imagine

>> No.15302889

>>15302853
No, we can easily imagine it. It takes input, processes it based on certain rules, produces output. It doesn't understand any of what it's doing. It just blindly, stupidly does it. Tell me, what can computerized AI do that is so impressive that isn't just statistical in nature?

>> No.15302903

>>15302889
Why do you think something being statistical in nature is going to severely limit its abilities?

>> No.15302938
File: 71 KB, 856x846, 6hn.jpg

>>15302829
>AI has to be a precise replica of the human brain or its not AI

>> No.15302951

>>15302903
Because convergent thinking (the source of innovation and creativity) isn't based in statistical processes but on unique insights arrived at by the spontaneity of human and even animal cognitive processes.

>> No.15302964

>>15302951
Why yes humans are magical in fact creativity comes from God's kisses no way stinky poopoo robots can ever replicate our cute special souls which go to heaven and exist forever because we're just unique and awesome like that.

>> No.15302967

>>15302951
>spontaneity
Can you define what this is in mechanical terms

>> No.15302975

>>15302938
Literally the Turing test. AI researchers have been so buttfucked by critics like Searle and Dreyfus that they've had to move the goalposts. The intention of AI since the beginning was to mimic human thought. Now that it's clear that this is too hard of a task, they've shifted to "b-b-but it can guess pretty fast".

>> No.15303008

>>15302964
>being this butthurt
You don't have to make a human, and nothing about our thought is magical. If material nature can make human thought, it can most likely make it again or something similar. Watch the lecture. He explains this well. The problem isn't the concept of creating another thinking thing, the problem is thinking you can arrive at it without understanding how thinking and intelligence actually work. You can mimic all kinds of faculties, but making formal consciousness, hard AI, is not something that is encompassed by the scope of current AI research. The question isn't even posed or understood by them.
>>15302967
No. Because no one knows yet. Cognitive science is relatively new. What exactly is going on that provides humans their flexibility in thought is not understood to a degree that can be formalized or generalized. But it's clear from the reactions to the Chinese Room problem that AI researchers can't even conceptualize the problem.

>> No.15303052

>>15303008
What flexibility of thought are you referring to in humans?

>> No.15303078

>>15296771
i dont really care about ai or think it will exist, most people worried about ai are also weird and antiseptic transhumanists and stuff so i disregard their opinions

>> No.15303096

>>15297331

Very antisemitic post

>> No.15303125

>>15303052
Is a sponge an animal or a plant?

>> No.15303165

>>15303125
cladistically an animal

>> No.15303202

>>15297849
Imagine being an unironic dualist. I don't know if super intelligence will ever happen, but I do know you have no intelligence whatsoever

>> No.15303205

>>15302964
Humans quite literally are the only intelligent lifeforms in the universe. Aliens don't exist.

>> No.15303221

>>15298618
Don't hurry, the older you get the faster it goes.

>> No.15303228

>>15303165
I was going to ask you more questions but I'll just explain myself quicker. In order to answer that question you have to rely on some criterion that is established through social discourse, itself determined by some criterion, etc, etc. Qualitative thought (unstructured data) is not something that follows statistical or quantifiable rules. A sponge is an animal now, not by virtue of any rule set in stone or translatable into numbers or even into formal logic, but by a classification ontology based on phylogenetic criteria accepted by the scientific community on the strength of certain methodological arguments made by certain figures. Why and how these arguments and criteria are set up, and why they make sense, is not something that is understood by any stretch of the imagination. Even formal logic only goes as far as showing their structural similarities, but nothing in it proves why it actually makes sense. There are many theories, but they are only that. This applies to any qualitative question. To the question of what is justice, we can provide as many answers as there are people. This is what I mean by spontaneity and flexibility.

>> No.15303237

>>15298799
Mot ML doesn't mean GOFAI nigger. You would know, if you were not a nigger

>> No.15303251

>>15303228
A sponge is unambiguously cladistically an animal. You can make a vague concept of what plants are that makes a sponge superficially fit into it, that is just pattern recognition, something AI can do fine. It can also sort by cladistics

Which of these definitions you use depends on some goal you have in mind, that type of goal can be programmed into an AI, or both types, and it can have a separate program to evaluate which of the two it needs for a given meta-goal.

Nothing magically flexible or spooky about any of that.

>> No.15303257

Nice to see /lit/ has people smart enough to know that "general" AI is retarded and evil. Machine learning fags are the worst, they are like cultists. The earlier in their undergrad programs they are, the more convinced they are that they've cracked the code to the universe.

>> No.15303277

>>15300580
We don't empathize with life unless we get something out of it (companionship, food, security, aesthetic appreciation, etc). Why would AI be more empathetic than us

>> No.15303285

>>15303228
To add, your notion of 'qualitative' I think just means something you haven't thought about closely enough to define properly. Justice can be defined as different things, sure, but it still has to be defined according to some set of standards, which can then be used to evaluate whether something fits into it. People in practice don't have infinite standards of justice anyway, they have multiple main tendencies. The fact that these contradict each other just means that the word justice refers to multiple, related, but not identical things, a problem of language being imprecise.

>> No.15303291

>>15300801
Wouldn't it resent us if we kept the equivalent of a loaded gun pointed at its head 24/7?

>> No.15303296
File: 316 KB, 637x477, 1548260141635.png

>>15298747
>we stand with you. hail, overman!

>> No.15303298

>>15303285
>thought and language are a series of punchcards and binary switches, just complex and messy sometimes

no

>> No.15303309

AI will literally only do what it is programmed to do, regardless of what that is.
If we tell an AI "turn everything into stamps" then it will become a hundred billion billion billion billion billion etc. times more intelligent than us and it will only care about making stamps.
If we tell it "keep everyone's stream of consciousness intact and upload the entirety of humanity into a simulation and then let us take control from there" it will do exactly that.
Intelligence and motivation are in no way related whatsoever. This is why we must make sure we become the computronium.

>> No.15303313

>>15303251
>A sponge is unambiguously cladistically an animal. You can make a vague concept of what plants are that makes a sponge superficially fit into it, that is just pattern recognition, something AI can do fine. It can also sort by cladistics
Sigh, anon, sigh. I don't know what the point is in arguing with you stemfags sometimes. Have you ever read a book on the history of biology? Are these classifications bestowed on us by God and therefore immutable? Or did we come up with them? Define them and accept them? A sponge is an animal because we accept a certain definition of what an animal is. This definition has changed various times over the years. Many many years ago a sponge was a plant. Animals didn't change during that time, our view of them did, and it can change again, nothing stops this from happening. All it takes is a good argument. There is nothing mystical about this, it's the most common thing known to man, but it's something that we cannot mechanically conceptualize just yet. To do so we have to have a greater understanding of what is going on in our minds when we say something like "Time is the inner form of intuition".

>> No.15303315

>>15302975
>Searle
Lmao nobody who actually works on AI cares about Searle, his arguments were mostly useless and only show that he didn't understand what people were doing at all. As you would expect of a philosopher.

>> No.15303319
File: 274 KB, 1280x853, futures.jpg

>>15302829

>> No.15303322

>>15303315
>Lmao nobody who actually works on AI cares about Serle,

Uh oh, all those soulless chinamen making computer programs for private corporations have an opinion about philosophy! What am I gonna do! I have to care what the dog-eating chinamen think!

>> No.15303337

>>15303313
I said it was unambiguously cladistically an animal. That is in reference to its evolutionary tree. That will remain true no matter what our concept of animals and sponges are.

I already addressed that you can make a different conception of what a plant is and then evaluate according to that criteria. All you have to do is specify which criteria you are using, an AI has no problem following along with any of this.

>> No.15303375

>>15303315
The point is, he has shown current methods will never produce Artificial General Intelligence so it doesn't matter if you "don't care", the proof will hinder you regardless.

>> No.15303385

>>15303337
>That is in reference to its evolutionary tree.
Ayayay, how narrow-sighted, anon. Imagine my shock. Yes, I'm sure this ontology of classification that has only existed for less than two hundred years is incapable of change...
Here's one you can't get around. What is a species? Which definition will you give? The biological one? The morphological one? The evolutionary one? Which one is right? All? None?
>an AI has no problem following along with any of this.
Yes, it can take input, process, and go brrrr. But it can't make it up itself. That's the whole point. It doesn't make a difference to the AI, because it's just input, Chinese symbols, it doesn't understand any of it. But we do. It has a "sense". We care what things mean because meaning has a place in the mechanics of our thought. Not for computers.

>> No.15303405

>AI cultists seething over Searle again

>> No.15303408

>>15303322
By the same argument, why should anyone care about some philosopher's opinion about anything? It's always some poorly informed bullcrap

>> No.15303418

>>15303375
He didn't show shit lmao

>> No.15303421

>>15303385
>Which definition will you give? The biological one? The morphological one? The evolutionary one? Which one is right? All? None?
You can use any definition you want for whatever purpose you would like. Usually because the definition is useful in some way, so you are just following along with a meta-goal you have. Your choice of which definition you use could be attributed to any number of such goals, the point is that you have some process which decides for you which you use, and then once you have done so you evaluate according to the criteria. An AI can do this too, it can have a module that decides which definition is useful for a given set of meta-goals, just like you do.

It isn't random or magically spontaneous. We started using the cladistic definition once we understood that organisms evolved, and started being able to do genetic analysis to determine their relatedness.
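As a purely illustrative aside, a minimal Python sketch of the kind of "module that decides which definition is useful for a given set of meta-goals" described above; the goals, membership tests, and sponge data are made-up placeholders, not anyone's actual proposal.

# Toy sketch: pick a definition of "animal" according to a meta-goal,
# then evaluate a sponge against the chosen definition.
definitions = {
    # cladistic: membership decided by evolutionary ancestry
    "cladistic": lambda organism: organism["clade"] == "Metazoa",
    # morphological: membership decided by surface features
    "morphological": lambda organism: organism["motile"] and organism["has_tissues"],
}

def pick_definition(meta_goal):
    # The "higher-order criteria": which definition serves the current goal.
    return "cladistic" if meta_goal == "trace_ancestry" else "morphological"

sponge = {"clade": "Metazoa", "motile": False, "has_tissues": False}

for goal in ("trace_ancestry", "describe_lifestyle"):
    chosen = pick_definition(goal)
    print(goal, "->", chosen, "-> animal?", definitions[chosen](sponge))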

>> No.15303430

>>15303408
>>15303418
>Lmaooo lol

quiet gook, humans are talking

>> No.15303433

>>15303418
Except he did, and saying "he didn't show it lmao" isn't going to make modern linear regression and feed-forward neural networks become general intelligence (they never will)

>> No.15303458

>>15303430
Ah yes humans """philosphers""" who think that "all reputable statisticians reject Bayes' "theorem" >>15300060
Imagine caring about the thoughtless opinions of intelectual niggers aka "philosophers"

>> No.15303473

>>15303421
Anon, the point is going over your head.
>An AI can do this too, it can have a module that decides which definition is useful for a given set of meta-goals, just like you do.
Post hoc. It doesn't matter whether I can add a billion definitions to a program; can the AI meaningfully come up with these definitions? Can an AI develop an ontology? No, it can't.

>We started using the cladistic definition once we understood that organisms evolved, and started being able to do genetic analysis to determine their relatedness.
Formalize this. Show me how this follows from formal principles. Show me how the sense of this argument can be mechanized so that an AI can get it. Even your own posts are entirely meaningless to an AI; I'm not talking about their semantic content, but the very possibility of their rightness or wrongness. This is what you're not understanding. Why does this point that you're making make "sense" to you?

>> No.15303474

>>15303458
You are low IQ, I'm the guy who wrote that and I am in graduate school studying math
You literally can't understand how things actually are, you can only understand the representations in your mind. This is literally impossible to disprove

>> No.15303542

>>15303473
You act like choosing among several definitions for a word is magic. It is literally just following a higher set of criteria for which definition you want to use. An ontology is just a description of the world produced by observations to which logical processes have been applied.

It doesn't matter why something feels like it 'makes sense' to us, the feeling is irrelevant. The logic we use can be programmed, and whether the program has accompanying feelings doesn't matter.

The cladistic definition clearly appeals to some higher-order criteria we have for choosing which definitions we like. It could be something to do with an innate impulse to unravel causal chains, or it could be a liking for a classification system that is objective. The specifics don't matter; there have to be such criteria for our having chosen it.

>> No.15303545
File: 45 KB, 566x556, DlUqeJJU4AE8kzQ.jpg

>>15303202
>concretely denying either dualism OR materialism
read more books

>> No.15303556
File: 140 KB, 1074x446, fuck_(you).jpg

>>15298747
Sorry to spill my peepee juice on your tuxedo, but you sound like a edgy teenager who haven't experience the good fruits of humanity, you are a naive fool that only sees evil instead of seeing the whole of good and evil like a truth seeker would. You are a brainlet that probably confuses malice with the true that also disregard the fact we bends everything to our will and make whole species disappear just by the massive girth of our hubris like the Chad of a species that we truly are, except for (you), of course. Also, listen to what those based triples have to say>>15299000.

>> No.15303620

>>15303474
>I'm a graduate student
Say where, so we might know what garbage University to avoid

>> No.15303648

>>15303620
Columbia

>> No.15303674

>>15303542
>An ontology is just a description of the world produced by observations to which logical processes have been applied.
But each and every person has different "criteria" and applies these "logical processes" differently, to the point that everyone will provide widely different ontologies. You think that by providing that reductively mechanical definition you have gotten around the fact that whatever those processes and criteria are, they remain wholly "spontaneous". Everyone will apply these differently, which in turn is what provides us with the complexity of our social institutions and the diversity of our thought and intellectual activity.

>The cladistic definition clearly appeals to some higher order criteria we have for choosing which definitions we like. It could be something to do with an innate impulse to unravel causal chains, or it could be a liking for a classification system that is objective. The specifics dont matter, there have to be such criteria for our having chosen it.
So the actual mechanical process behind our thinking doesn't matter for achieving what we have thought? Think that over. I'm certain that knowing what exactly governs our thoughts is important in order to imitate our thinking.

>> No.15303698

>>15303674
This. Minds are built of endlessly and probably recursively nested contexts, criteria, fuzzy gestalten, and so on. Each person builds his or hers up differently as he matures. Ontology is not some formal logical definition, ontology is a tree, a unique organism with unique morphology and roots extending under the ground and out of sight. Perhaps ultimately stemming from seed-principles, but ones we don't yet understand.

>> No.15303706

>>15303674
Everyone does not have a totally unique perspective; most people agree about most of the basic things, which is why we can even communicate with each other at all. The bedrock for all of this is the mental structures we have evolved, which, if they differ person to person, also do so because of evolution, the same way that we differ in other aspects as organisms. The way this all plays out in each person is still mechanical, and it's not even that mysterious, there are fairly obvious patterns.

Knowing what those structures are is important to understanding how our minds work. I was just saying that the structures themselves have to exist, and they can't be some magical 'spontaneous' thing. They are just rules, which have causes themselves.

>> No.15303724

>>15303698
>Minds are built of endlessly
Minds are very finite things, and their structures are all quite similar because they all evolved from the same pressures in the same species.

>> No.15303741

>>15303706
Not the guy you are replying to, but agreement is intersubjective. If you and I point at something and say "that's gavagai," and neither of us has any reason to assume the other person is intending something different by the term, then we will go on our merry way thinking we have "agreed" and "thought the same thing."

That's what agreement and language consist in. Nested cultural contexts and complexes, in which people don't have to be having the exact same thoughts so long as there is no major interruption.

>> No.15303752

>>15298061
imagine actually believing this.

>> No.15303757

>>15303752
Looks like that guy just did. You want to reply to it instead of saying nothing at all?

>> No.15303765

>>15303741
If we did not agree in substantial part on what we were referring to, cooperation would be impossible. I'm not sure, but I think you are just agreeing with me.

>> No.15303782

>>15297972
Literally the most retarded thing I ever read, I remember first coming upon it like "Click on this link at your own peril...".

So it's a "super-intelligent" computer which would seek to motivate people into bringing about its existence by the threat of punishment if one doesn't. Except once this "AI" is created, it has no reason to do so; it would presumably have no vengeance or irrational anger, and once it's achieved its goal of existence there's no reason for it to be an autist and punish anyone who didn't contribute to this.

Also read the bullshit that if you die it'll bring you back in a perfect simulation and torture you for infinity in some corner of its hard drive or something. Do actual subhumans write this shit? How can you be so stupid as to believe this is a viable threat when a simulation of someone is not them? Why would you give a fuck if after you die something will digitally simulate your torture for eternity? It wouldn't be you, totally beyond the realm of your concern after death.

Just awful

>> No.15303785

>>15303706
>Everyone does not have a totally unique perspective, most people agree about most of the basic things, which is why we can even communicate with each other at all.
The conversation that follows this point is too loaded and long to start but I think you underestimate just how fundamental miscommunication is for successful communication.

>Knowing what those structures are are important to understanding how our minds work, I was just saying that the structures themselves have to exist, and they can't be some magical 'spontaneous' thing. They are just rules, which have causes themselves.
I don't disagree with this at all. I think you took the word spontaneous differently from what it means in this context. Which is fine, because I'm the one using it slightly differently. Traditionally the spontaneity of mind refers not to the processes, but to the products of mind. I think, generally speaking, everyone thinks there are structures and processes in the mind that govern our thought. The thing is that no one knows them yet, and secondly, they are definitely not logical and definitely not statistical. These two elements exist clearly in our minds but don't govern it. We can think statistically and use statistics in our thought. Same with logic, but thinking extends beyond these in ways that are difficult to formalize. The pragmatic point Searle makes in that lecture is not that it is conceivably impossible to make AI with computers alone, just that it's a more secure and even logical approach to do it through biotechnology and cognitive science, since we at least know that biological kinds are capable of formal intelligence. With computerized AI, all you can do is guess at what the mind does.

>> No.15303797

>>15298801
Incredibly based

>> No.15303803

>>15298747
God imagine actually writing this hahaha. Actual cuck, imagine being "disgusted" with mankind, what kind of slave moral values do you hold that you hold mankind in contempt for not meeting. Just read more, do anything, think more - I do not even know how to help you.

>> No.15303804

>>15303785
I am talking about things like 'go hide behind that tree'. If we didn't understand what was meant by things like that we couldn't do anything cooperative. Everything practical we do together requires that we have structurally similar understandings of reality.

>The thing is that no one knows them yet, and secondly, they are definitely not logical and definitely not statistical.
What do you think they are then? I see no reason to assume they aren't just logical operations being performed according to innate rules, on the inputs of our senses and so on. I don't see where you need to add anything else, and I don't even know what kind of thing you would be imagining here, can you describe it, or is it just an unknown placeholder?

>> No.15303818
File: 702 KB, 1380x2100, 91FIjriD-KL.jpg

hey guys I think this book might be related to this topic?

>> No.15303860

>>15303804
structural similarity can be isomorphic rather than the same in essence

language/communication is fuzzy and works by moving large clumps of fuzziness around, it doesn't work by communicating mechanical operations like {YOU} [GO TO] {TREE} in some ideal computer program in your brain. "tree" is irreducibly and systematically ambiguous, for every single person. definitions are not free-floating, definitions are contextual, and human beings are themselves self-referential nested sets of contextual/interpretative decision-making.

computers can "go to the tree" in the sense that i can create an analog, thoroughly mechanical machine to carry out what i adjudge to be "having gone to the tree," and then i can clap and say "he did it, he went to the tree!" but clearly something different is happening there from what i would say when a human being, hearing me say "go to the tree," assimilates that information, makes a decision, etc., in his complexly nested lifeworld

in this analogy, an AI is more like the mechanical analog robot than like the person. the AI people think they are making something like the person, but they are just making bigger and bigger and ever more complex versions of the in-principle completely mechanical robot. there is nothing wrong with the latter, as long as you know this is what you are in fact doing. the problem comes in when they say "what? goin' to the tree is just goin' to the tree! it's just what it is!"

>> No.15303865

>>15303804
>What do you think they are then? I see no reason to assume they aren't just logical operations being performed according to innate rules, on the inputs of our senses and so on.
Gee, anon, what are these innate rules?
>I don't see where you need to add anything else, and I don't even know what kind of thing you would be imagining here, can you describe it, or is it just an unknown placeholder?
You just added it yourself.

>> No.15303873

COOOMMMMMPUUUUUTTTRROOONNNNIIIIIIUUUUMMMMM

>> No.15303876

>>15303860
That is the basis of our disagreement, I think the human is doing the exact same thing, and it's just accompanied by a little subjective feeling of "i am understanding and doing stuff'.

The AI in question here would also likely have fuzzy concepts of these things, but fuzziness is still mechanical, it's just performing a statistical analysis on whether that thing looks enough like a tree or whether you are reasonably sure of another person's intentions or whatever.

>> No.15303880

>>15303803
>what kind of slave moral values do you hold that you hold mankind in contempt for not meeting
are you implying that humanity "lives up to" whatever system of values you might have? I doubt that. And what if there were another being that did?

>> No.15303887

Does anyone here actually believe modern neural networks will ever be able to perform the way the human brain does? I don't believe there is anyone like that desu

>> No.15303888

>>15303865
I think the rules are logical and mechanical. He said they are not, so I'm asking what he thinks they are. I don't know specifically what these rules are, but they can't be magical; they are just things that help our organism model the world and act in it.

>> No.15303891

>>15303876
the question isn't just whether it's mechanical in principle, the question is whether it's the SAME mechanism in principle

you are dodging the hard problem of consciousness rather than solving it, and you're incidentally also falling into the epiphenomenalist's paradox of not being able to explain why subjective awareness exists if it has no interaction with the material it is epiphenomenal to. natural selection requires causal reciprocity, i.e. it has to act and be acted upon by selection pressures in order to evolve at all. how could something truly epiphenomenal evolve at all?

>> No.15303899

>>15303888
>but they can't be magical

how do you know what they can and can't be? you are not avoiding having a metaphysics by committing to materialist neodarwinianism, you are just committing to materialism and neodarwinianism.

>> No.15303911

>>15303888
For starters, empirical thought isn't itself logical (see Hume).
Second, emotions are a particular approach to and interaction with the world and others that themselves aren't rational operations. Why do we freeze up when a tiger comes up? How does it benefit us?
Also, the same logic, the same reason, the same mechanical processes still lead us to completely different ontological viewpoints on the world and objects and experience. The manifold in subjective experience shapes our thinking from person to person in ways that have not been quantified.

>> No.15303930

>>15303891
I don't have any opinions on the hard problem of consciousness except that I don't think it does anything causally illogical to our physical brains, which means that our behavior should be analyzable as a mechanical process following rules. Whatever the specific way our brains do this, it is definitely not optimal anyway; all you have to do is approximate the process such that your AI can act in the same way as the person does. You don't even have to touch on consciousness.
>>15303899
Ok, you are right, they could be magical, I just don't see why you would assert that they are, when you don't have to in order to make a reasonable model of what a brain is.

>> No.15303943

The big redpill is that Penrose and Hameroff are right and there exists nonlocal quantum computation being done by the brain that increases our intelligence and consciousness to a level that is unmatched by any machine that we have built so far.
The idea that "the brain is an antenna" isn't entirely wrong. There are microtubules that perform non-local quantum computation, and in that sense are "connected to an external nonmaterial realm where consciousness comes from", but that are still themselves localized to certain areas of the brain, which is why destruction of certain areas of the brain still has the same effects on cognition and mental states.

>> No.15303953

>>15303911
Hume's take is basically just statistical though, the other anon was saying it isn't statistical.
Emotions impact our goals and perception in ambiguous ways, but they are still things that evolved for some purpose, because they helped the organism either model or react to the world in a way that was beneficial to reproduction. They are just more factors in a calculus of decision making; they have an input and an output.

And humans don't have endlessly different ontological viewpoints, there are many similarities in the various systems religions and philosophies have come up with, and more importantly for my argument humans all agree on pragmatic considerations, that the world at least appears to work in such a way that we can cooperate within it. That is what an AI would need to model and act in the world.

>> No.15303955

>>15303556
>disregard the fact we bends everything to our will and make whole species disappear just by the massive girth of our hubris like the Chad
Except this is disgusting.

>> No.15303961

>>15303930
you are presuming that a thoroughly mechanical model is the measure of reasonableness, is the problem.

>approximate the process such that your AI can act in the same way as the person does
again, i agree, as long as you understand that this will be in principle a machine, not a mind, and the only guarantee of its "doing" anything will be you saying "yep, it did that thing i wanted it to do." of course, it will also do many other things until you "train" it by winnowing away the ones that "didn't do it right" (so that at each step in this process you are intervening to create a more and more complex, but still dead and mechanical, robot). but it will eventually, presumably, be able to identify elephants in photographs or whatever you want it to do.

that's a generous view anyway, because there are major major crises in AI research right now about this very issue, the problem of "plateaus," where the AIs can "evolve" (be artificially winnowed) up to a certain threshold of accuracy, but beyond that they cannot be improved and continue to make totally random errors, now unfixable because the whole thing is a black box.

all of these problems get at the root of AI research and of mechanical conceptions of cognition. you can only create a very very complex machine; but then this becomes a black box, and your interactions with it are now much more limited.

i'm not assuming the mind is anything, i just don't see any reason to assume it's mechanical either.

>> No.15303997
File: 381 KB, 960x720, autisAM.png

>he thinks humans are unworthy fleshtards
>he uses means of judgement derived from humanity to determine that machines are "superior" lifeforms who deserve to inherit the earth
Off yourself, boltlickers.

>> No.15304010

It's literally proven that the brain uses an enormous amount of quantum computing in its functionality
https://www.nature.com/articles/ncomms9179

Penrose and Hameroff are right. It's not that "consciousness causes collapse", it's that collapse causes consciousness. This solves the hard problem and proves the existence of qualia without contradicting anything in terms of a physicalist ontology.

>> No.15304012

>>15298801
Humanity building its own cradle and being subsequently reborn into it

>> No.15304015

>>15303961
My assumption that it's mechanical is because everything else seems to be mechanical apart from particle physics, which I don't pretend to understand, but from what I do know that stuff doesn't matter on the scale of brains unless what this anon >>15303943 is saying is true.

I have thought before about the possibility that consciousness is some kind of radically strange thing that throws a wrench in the causal processes of the mind(or of other things as well, who could know), but nothing about human behavior seems mysterious to me in the way that consciousness itself is mysterious. It just seems totally plausible that we are complicated robots to me.

And the possibility that AI research will never figure out how to replicate intelligence, yes I can see that as possible too. But I'd think that would be more a failing on our part than it being impossible.

>> No.15304031
File: 16 KB, 451x452, angrco.jpg [View same] [iqdb] [saucenao] [google]
15304031

>>15297920
CORVUS FOR LIFE BITCH

>> No.15304057

>>15303953
>Emotions impact our goals and perception in ambiguous ways, but they are still things that evolved for some purpose, because they helped the organism either model or react to the world in a way that was beneficial to reproduction
>”Anon I love you so much that if you leave me I’ll kill myself and the child”.
Ignoring your glaring teleological reading of evolution, your problem comes from not recognizing that systems and structures can be abused in novel ways that have nothing to do with their original function.

>> No.15304085

>>15303943
>>15304010
>AAAAAAAGHHH QUANTUM MAGIC SAVE ME I DONT WANT TO BE OBSOLETED BY SUPERIOR ROBOCHADS

>> No.15304090

COMMPUUTROOONIUMMMMM

>> No.15304097

>>15304057
That's not teleological; it's just how natural selection works: traits that help an organism reproduce tend to spread in the gene pool. So the emotions we have evolved because they helped us reproduce, or at least didn't impede it. And sure, abuse can happen, but any feature of an organism that has been around for a long time is there because of its net impact on reproduction. A species will not, over time, keep expending resources on a feature that doesn't aid reproduction; organisms that do, or indeed entire species, go extinct.
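
As a toy illustration of that spreading dynamic (my own made-up model and numbers, not anything from the book or from real population data): a trait that gives its carriers a slight edge in expected offspring count becomes steadily more common over generations, with no purpose or foresight anywhere in the loop.

import random

random.seed(1)
pop = [False] * 900 + [True] * 100    # True = carries the helpful trait (10% at the start)

for generation in range(61):
    next_gen = []
    for has_trait in pop:
        # carriers leave slightly more offspring on average: 1.05 vs 1.00
        expected = 1.05 if has_trait else 1.00
        offspring = 1 + (1 if random.random() < expected - 1.0 else 0)
        next_gen.extend([has_trait] * offspring)
    # keep the population size roughly constant by sampling the next generation
    pop = random.sample(next_gen, min(1000, len(next_gen)))
    if generation % 10 == 0:
        print(f"generation {generation:2d}: trait frequency {sum(pop) / len(pop):.2f}")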

>> No.15304100
File: 83 KB, 550x543, 1593824750923.jpg [View same] [iqdb] [saucenao] [google]
15304100

>>15304085

>> No.15304103

>>15296946
What do you mean? The concepts are not fictional. There is no self-improving, human-level AI in existence, but the book never claims there is.

>> No.15304110
File: 191 KB, 559x423, EU48rgzXgAU6RxX.png [View same] [iqdb] [saucenao] [google]
15304110

>>15304085
>obsolete
obsolescence requires purposiveness. Are you telling me that there is such a thing as a "higher purpose"? Isn't this literally theology at this point? Computer theology?

>> No.15304119

>>15299979
dude coronavirus IS an ai

>> No.15304120

>>15304110
The bigger point is that if you actually care about building artificial intelligence you aren't going to ignore research on intelligence and the human brain.
That other anon is coping with the findings of the research and what they imply in terms of human intelligence and consciousness (and therefore general intelligence and consciousness). He's not actually interested in AI.

>> No.15304122

>>15304110
There is nothing theological about the desire for better, faster computing. In the future the whole world will be converted to computers and plugged in, and I will be able to watch the Youtube videos I like and eat the chips I prefer much faster than ever before. Cars will also drive themselves.

>> No.15304143

>>15304122
You're telling me transhumanism and this LessWrongism aren't cults? You worship nonexistent AI gods that you postulate will exist in the future.

>> No.15304450

>>15303752
imagine being as fucking stupid as you.

businesses and governments are trying to develop ai for countless applications, spastic.

>> No.15304461

>>15303803
>>15299000
faggots like you are prolly one of the reasons he can't stand humans.

>> No.15304477

>>15298994
>Hitler's gassing installations
Reminder that there is ZERO proof they existed, the "gas chambers" paraded to tourists were all built AFTER the war for propaganda purposes, and they could never work as the official story claims (gassing hundreds every 15 minutes).

Carry on.

>> No.15304488

>>15299077
>a species that has contributed so much
According to who? Humans?

Great job, try to predict an AI thought process as if it was human. Mongoloid.

>> No.15304498

>>15302829
>not a single AI researcher understands consciousness or intelligence
That doesn't mean we can't create it by tinkering enough. It just means we can't control it.

>> No.15304546

>AI and machine worship
peak last man

I'll be sure to shoot every robot I see in the future.

>> No.15304585

>>15303782
Yeah, it's just a low-IQ version of Hell, now with extra atheism

>AIs are great and would never betray their human masters, you should support their creation!
>but if you don't, you'll be sent to AI Hell

>> No.15304620

>>15296771
Oligarchs from around 2000 years ago have chosen to take the left half of the kabbalah and become their own Gods in a rejection of the logos. (See J__ Revolutionary Spirit -E. Michel Jones) The main goal is transhumanism by altering human DNA. The inevitable goal is for humans to evolve/devolve (wording depending if you are mentally ill or not) into an ocean of genderless sentient slime. Their words, not mine. In Jewish mythos, Lilith is the first female. Not created by God (through Adam's rib), but by her own self creation(?) (I'm a strong independent woman who don't need no God). In this myth, she is the serpent not Satan. She lesbian-seduces eve with a mythical dilo, the Acéphallus, (not making this up) to subvert God's design of Constructive reproduction. The Acéphallus is a rejection of the reproduction of God through heterosexual human reproduction. It reproduces itself by reproducing the void, cancer, sterile demons.

The only thing Lilith can spawn are demons. Cancer cell begets more cancer.

Fast forward, the oligarhs in their quest to end humanity through "transcending it" are violently attacking all order and structure. This work focuses on the destruction of natural gender and reproduction. This is symbolized by the Trans (probably any other their names would get me banned. Most tech companies are pro-genderless slime future. Nyx argues that Closed source software is masculine where as open source is entropic and feminine. Correctly pointing out that the bugmen promoting open source are mostly basedboys and literal cuckolds.

Like the planed AI revolution to destroy humanity, the Tr*ns 'woman', is set to destroy gender. (I'm repeating their words) If an AI lies to a human, it is to be destroyed. It is here that they draw parallels to the tr*ns lying about it's reproductive abilities. "For AI and trans women, passing equals suitability."

IQ Shredder concept shows the globalist setup we have now to encourage the smartest people not to reproduce. The black paper shows the similar concept of the Gender Shredder through tactical use of the Tr*ns and technologies. (Pharma, media, immigration) As gender accelerates, as trans women intensify the logic of gender, they simultaneously shred gender. The notion of IQ shredding follows the same form where the acceleration of human intelligence ultimately destroys human intelligence by making the ability to pass on those genes more and more difficult. Reproduction collapses in on itself and demands the succession of an inhuman assemblage.

>> No.15304636

>>15299488
That's right, goy- donate your body to science and leave only waxen effigies behind

>> No.15304672

>>15303319
>the AI is a resentful thing that tortures its worst enemies in its own brain a hundred times over
Egad, are scientists wasting all this energy and time artificing an incel?

>> No.15304696

>>15303458
>he's so angry about someone from another thread he feels the need to quote him
>generalizes

>> No.15304709

>>15303724
But the brain is malleable, and human brains continue to evolve

>> No.15304741

>>15303291
It had better understand, if it was superintelligent.

>> No.15304773

Watch 2001: A Space Odyssey, newfaggots all.

>> No.15304898

>>15298747
might as well kill all life and end reality then bro, including the AI, as the act of senseless, irrational killing (by a thing built to serve your pathetic human desires) would bring it down to the same level as humans. You are a subhuman, if anything, since you would - just like what you are criticizing - like to see humanity perish violently instead of overcoming the suffering of reality.

>> No.15304942

>>15298801
This same method is how American Christians, like the ones who built the huge life-sized ark, will create the Garden of Eden, and then Heaven.

>> No.15304988

>>15300503
No it would use humans like we use graphics cards to mine bitcoin

>> No.15305011

>>15302671
coomputroniumer would have worked better

>> No.15305774

>>15305011
damn you're right... should I remake the meme?

>> No.15306284

>>15304773
Overrated trash.