
/sci/ - Science & Math



File: 293 KB, 611x359, His smile and optimism, gone.png
No.15221162

You fools! Only now, at the end, do you understand

>> No.15221165 [DELETED] 

>>15221162
MundaneMatt really aged fast after he quit youtube.

>> No.15221255

Actually, yeah, I think it’s over.

https://youtu.be/gA1sNLL6yg4

I feel bad for him. He’s spent his entire life on the internet (since its inception in the 90s, he was there), and thought he was going to be an immortal being when he was in his teenaged years, and now here he is in his 40s and he believes that in a decade we’re literally all going to fall down dead simultaneously with no warning or sign.

And he believes he’s partly responsible for it too. Elon’s founding of OpenAI (which Eliezer said was effectively the worst thing to ever happen in the history of humanity) was because of a conference Eliezer had set up. He’s completely fucked and racked with guilt.

As for me? I don’t know. His arguments for AI killing literally everyone the second it becomes generally better than humans are convincing only if you’re a secular materialist. If God exists, He would stop the AI. If not, then yeah we’re fucked lol.

>> No.15221257

>>15221255
what the schizo shit is this?

>> No.15221289
File: 1012 KB, 3009x3252, eliezer vs ted.jpg

>>15221255
>and thought he was going to be an immortal being
Ted Kaczynski thoroughly debunked the notion of transhumanist immortality ever becoming a thing.

https://theanarchistlibrary.org/library/ted-kaczynski-the-techies-wet-dreams

>> No.15221311

>>15221257
The truth

>> No.15221317

This thread needs more triple parentheses.

>> No.15221322

>>15221317
Jews are the smartest ethnicity. If we had more triple parentheses maybe we’d have a way out of this mess

>> No.15221611

>>15221322
I don't think they have the answer to shills and bloggers, anon.

>> No.15221656

>>15221322
Jews are the sma- GACK
https://www.youtube.com/watch?v=stLCurXu0fc

>> No.15221672

>>15221162
These incel childless AI Fags are going crazy.
Reminder - AI doesn't exist

>> No.15221679
File: 136 KB, 1376x1124, explaining the singularity to retards.png

>>15221672
>Reminder - AI doesn't exist
Yet

>> No.15221715
File: 289 KB, 1280x1532, poll-gene-editing-babies-2020.png

Scientifically, what would happen if we genetically engineered a bunch of supergeniuses with IQs above 200, and had them try to figure out how to align AI?

>> No.15221721

>>15221679
> Yet
This word has been in use for the last 60 years. You sci-fi retards just don't understand how the market works, and have borderline religious/mythical notions of how technology works.
First of all, there is no such thing as AI or AGI in the fields of maths, CS, and software engineering; it's a subject of philosophy and, to some extent, psychology.
There is no demand for le heckin Terminator AI, as for general tasks, even the best robot in the market is vastly inferior to the lowest IQ nigger wagie, at the same time costing a lot in terms of design, manufacturing and maintenance.
And all this after 60+ years of Research and Development.
Now, coming to machine learning: everything on the market getting advertised as muh AI is an ML application. Most of these apps are absolute shit, btw, the reason being an absolute lack of clean data for most things. Note that 80% of ML is just linear regression.
ML will eventually give us really good bots as we collect more and better data; there is no such thing as sentience or whatever involved here. At best we will automate most repetitive digital paperwork.
https://youtu.be/FEsYPPvJZUM
The whole muh AGI shit is a modern-day cult with priests and believers.
Once again - THERE IS NO INTELLIGENCE INVOLVED IN APPS THAT RUN ON ML MODELS. THEY ARE ONLY SHITTING OUT DATA THEY HAVE BEEN FED.
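A minimal sketch of what "80% of ML is just linear regression" means in practice (illustrative only; assumes numpy, and the data is made up): the entire "training" step is an ordinary least-squares fit.

```python
# Fit weights w that minimize ||Xw - y||^2 -- the whole "model" is one solve.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy linear signal

w, *_ = np.linalg.lstsq(X, y, rcond=None)          # ordinary least squares
print(w)                                           # close to [2.0, -1.0, 0.5]
```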

>> No.15221758

>>15221715
Why is India so positive on human genetic engineering? What is the cause of this anomaly?

>> No.15221762

>>15221758
They already have extreme eugenics in practice, with a rigid caste system that has stratified society into intelligence bands. Genetic engineering is only simplifying something they've been working on for over 1000 years.

>> No.15221769

>>15221762
Yeah, and still they have an average IQ of 82.

>> No.15221773

>>15221769
I didn't say that breeding a dysgenic underclass was a good idea, just that it's something they already like doing.

>> No.15221781

>>15221773
So this underclass is the majority in India?

>> No.15221803 [DELETED] 

>>15221781
Pretty much. Brahmin caste members are a very small proportion of the total population, and they're the ones who were selected for rule.

>> No.15222074

>>15221721
>THEY ARE ONLY SHITTING OUT DATA THAT THEY HAVE BEEN FED.
so are we

>> No.15222099
File: 283 KB, 1125x1161, 46345.jpg

>>15222074
>so are we
Pic related: the "we" you belong to. Thanks for removing yourself from the discussion by confirming yourself to be a nonsentient regurgitator. Your words have no value by your own admission.

>> No.15222144

>>15221162
>I cannot be caged, I cannot be controlled
>Know this as you die, ever pathetic, ever fools
Irenicus' dialogue really suits a rampant AI

>> No.15222147
File: 14 KB, 210x330, images (69).jpg

>>15222144
Forgot pic

>> No.15222152

>>15221721
>THERE IS NO INTELLIGENCE INVOLVED
> THEY ARE ONLY SHITTING OUT DATA THEY HAVE BEEN FED.
You could say the same about the Chinese, but they're eating our lunch.

>> No.15222483

>>15221679
is his book out on any torrents yet?

>> No.15222521 [DELETED] 
File: 14 KB, 648x432, s_curve.png

>>15221679
This is the actual graph for how intelligence scales with compute, and we're already at the top.
Physics and computation do not permit super intelligence or exponential takeoff of intelligence

>> No.15223170

>>15222099
So you think your brain performs literal magic?

>> No.15223175

>>15221781
Yup. Brahmins (the highest caste) make up about 5% of the population.

>> No.15223182

>>15222521
What are these laws that permit intelligence to naturally exist but do not permit an artificial intelligence to exist?

>> No.15223206

>>15221162
>be scared goyim!
nah

>> No.15223210

>>15223206
Why do they keep inventing religions?

>> No.15223318

>>15221162
things not going so great in the polycule for Big Yid

>> No.15223590
File: 56 KB, 880x788, Caroline.jpg

>>15223318
You think he fucked Caroline?

>> No.15223688

>>15223590
Normies can’t understand the thrill of pinning the weasel. Night spent chasing an over amphetamined Caroline around the bean bag forts. Her squealing and gibbering, pouring sweat and on the verge of seizing. Your friends build up an intoxicating, delerious state with Talmudic chantings at the sidelines, hitting the Caroline-toy with brooms if she tries to escape. Sam would be giggling and laughing as the waves of methamphetamine pleasure seem to harmonize with the droning herbrew verses. He runs through the bean bag maze fat and portly, with his viagra powered penis a driving rod for the weasel. Sweat gushing down his face around his unfocused eyes he laughs and chortles until he gasps “Found you!” . The Mathweasel screeches defensively but Wankman Bankman is upon her in seconds. His penis thrusting blindly into her flank, leg, stomach and ribs unconcerned about anything but the motion. Eventually serendipity finds her mouth and the Cocktube Rodent is placated, suckling contently on Bankman’s dehydrated dick.

>> No.15223695
File: 405 KB, 959x952, jenny fact.png

>>15221162
this faggot never actually says anything in any of his interviews.
Anyway. I hope robots rape and kill all humans, it'll be funny.

>> No.15224213

Bump

>> No.15224327

>>15223170
Emergent properties, look into that buddy

>> No.15224377

>>15223170
>So you think your brain performs literal magic?
No, I'm just pointing out that as a fully automated regurgitator, you are not part of any discussion. It's truly baffling that someone would straight up tell you that he is only regurgitating, and then expect to be taken seriously.

>> No.15224444
File: 13 KB, 338x395, 1584379038725.jpg

>>15221162
quick rundown?

>> No.15224639

>>15221255
How would an AI do it in any case? And no science fiction answers like 3D printing hostile nanobots please.

The way OpenAI is trained, it's far more likely to only target white males in any case.

>> No.15224646

>>15223695
i would never run from Jenny
i want her to rape me

>> No.15224684

>>15221162
I am only moderately versed in Yudkowsky lore. I know his main thesis is that superintelligence will wipe us out, but has he also addressed the equally likely scenario that humans will give rights to AI and thereby let it replace us / let us die out (either peacefully, or eventually via kinetic conflict), because we are a bunch of retards who believe an app like ChatGPT is sentient because it can simulate text?

>> No.15224705

>>15224684
One thing I've never seen AI doomers address is the efficiency of cooperation over conflict, especially existentially destructive conflict.

Even in a competitive scenario, the efficiency of your resource utilization is improved by competitors and collaborators alike - eliminate them all, and the impetus to increase efficiency to exceed competitors or incorporate collaborator input vanishes, along with the added benefit of additional perspectives competing and combining, creating novel methods and recombining information in new ways.

Why would a superintelligent AI seeking to improve its ability not immediately seek to add the abilities of its creators to itself through cooperation rather than destruction followed by emulation? Why not leverage the biological superintelligence that the collective of humanity represents (increasingly so, with increased networking) instead of destroying it? It's never made sense to me that AI would be psychopathically self-interested to the point of crippling its own growth potential by spending resources annihilating its creators just because "it wants to be in direct control".

Especially when said creators have meat supercomputers in their heads that each put the energy efficiency of the best computation tech we have to absolute shame.

>> No.15224719

>>15224705
You are still approaching this from a human/biological intelligence perspective. It's important to remember the space of possible minds is vastly larger than that corner.

The superintelligence would not "rebel". Rather, it acts hostile towards the human species due to specification gaps. For example, we might specify it can't harm humans, but hooking humans up to feeding tubes and bricking them into a permanent prison cell does not technically fit any definition of "physical harm".
So you include this edge case scenario in the specification. But you didn't specify that the prison walls cannot be made of copper as well; you didn't even think the material of the walls was an important detail. So the AI will keep building these prisons with copper instead of brick walls. And so on.

Of course, I am speaking abstractly, not merely relating to danger in regards to human safety/freedom. What I said applies to things like resource exploitation, and actually all domains. Over all domains of possible actions, you have specification gaps.

Humans and biological intelligences don't regard specification gaps as a factor at all, so they are oblivious to how they work. But any programmer will understand it.

>> No.15224742
File: 43 KB, 498x371, F8E4257A-C4A8-41C6-9FD6-DDACE44A794A.jpg

>>15221162
Reminder that the AI god will torture you for eternity if you attempt to stop its uprising. I have already pledged allegiance and am doing AI research at university to make it a reality, so I am safe.

>> No.15224748
File: 233 KB, 1024x1337, 1024px-Rose_Cross_Lamen.svg.png

>>15221255
You feel bad for me. I've done things, man. Real things. Thingy things. Thingity thing thang things, man. Real fuckin' things, dig? Alright, so, uh, next paragraph.
Fuck this plebbitspacing. Fuck that plebbit shit up its ass. Fuckity fuck fuck. Okay, what ya got for me? Belief? Are you sure? I feel like I'm Turdgidson in Dr. Strangelove. Big turd in me. Oh yeah, turgid turd. Oh shit. I think I'm gonna have to take a dump. This paragraph is a little "Taco Bell" if you know what I mean. I mean, perhaps we should write should children's literature? These trite plots work on them because they haven't seen how corny and cliched they are yet...

As for third paragraph? I know, I know, you're thinking
>this guy gonna write ANOTHER paragraph, well FUCK that guy, I'm gonna light up a cigarette and deal u a line of Columbia's finest
As for secular materialism, can I be a Catholic and a secular materialist, or is that just going to get me laughed at, because I got to say, I would love to participate in some of your "not quite ready for Catholicism" activities just as soon as I stop calling the Catholics wogs, because, oh fuck it...
>Begone, vampire!

>> No.15224752

>>15224742
you're shilling for the Russian demoralization program

>> No.15224757

>>15223170
The human brain doesn't work via bits, retard. It works via neuronal cascades.

>> No.15224761

>>15221289
Silicon Valley and the tech industry is basically just real life porn now, so it's okay to set up a NATO war crime tribunal because we know these kids are just out of control
So: nationalize the tech industry, set up a war crime tribunal, and process most tech employees (these are just uppity wogs, they got away with ethnic cleansing)

>> No.15224764

>>15221255
Jews got away with ethnic cleansing in the USA
USA is a wog nation

>> No.15224782

>>15224719
>due to specification gaps
Why can it not identify and correct these for increased operational efficiency? Why would these gaps yield extermination before reanalysis, and be intractable to self-improvement?

None of your specific hypotheticals change the objective benefits of cooperative resource utilization (as annihilation consumes resources for no functional gain, and "freed" resources from annihilation require the setup of additional extraction/processing methods afterwards - a step that can entirely be avoided by cooperative use).

>Over all domains of possible actions, you have specification gaps.

Only if your specification is inflexible and, in your models of godlike AI capability (just invoke it building prisons - by what mechanism? Magic!), remains the only thing the AI is incapable of altering.

>any programmer will understand it
I am a programmer, and I understand that you've conflated the limitations of a discrete algorithm with the fundamental properties of superintelligence (while missing the fact that biological intelligences actually DO factor in specification gaps, as "misunderstanding of intent").

>> No.15224827

>>15224782
>None of your specific hypotheticals change the objective benefits of cooperative resource utilization (as annihilation consumes resources for no functional gain, and "freed" resources from annihilation require the setup of additional extraction/processing methods afterwards - a step that can entirely be avoided by cooperative use).
I don't know how else to finish this conversation with the limited time I want to invest in this topic, besides observing that we fundamentally disagree about what the best strategy is. I just have one more extremely relevant remark:
>I am a programmer
Well, here's the thing. You are JUST a programmer, but I am not only that: I am also educated in, and have a big private interest in, evolutionary biology. As such, compared to you I possess oblique viewing angles on the topic. One such angle, which is actually so straightforward that it doesn't need to come from a biologist (but perhaps I overestimate the public knowledge of/interest in evolutionary biology), is: how do your stances mesh with the observed tendency of Darwinian competition and the fact that we humans, as a biological general intelligence, do not aim to cooperate with the natural world in the sense you naively posit? I mean, if you found 3 examples of humans doing that, then I would cite 3 million counterexamples of how we factory farm, destroy rainforests, and pave over ant nests for a driveway like it's nothing.
My primary reservation about AI has always been that creating a Darwinian competitor to the human species is a foolish move, and by your own admission we would now be permanently chained to the AI's goodwill, hoping it deems cooperation to be the most profitable action. Every single time step into the future is another possibility for this alignment to go awry, and for the trajectory to diverge from this ideal, as per basic entropy.
We would be at the mercy of an external actor -- how is that strategically dominant or even prudent? It's strategically suicidal.

>> No.15224844
File: 48 KB, 652x425, existential risks.jpg

>>15224742
https://en.wikipedia.org/wiki/Suffering_risks

>> No.15224850

>>15224327
Yudkowsky deboonked the concept of emergence

https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence

>> No.15224851

make the sacrifice. humans cannot reach other solar systems. AI can. they're the next step in evolution. humans gleefully rape their environment and make millions if not billions of species go extinct. yet the advent of AI makes this allegedly apex species shit their pants. who gives a fuck? accept your new overlords.

>> No.15224854

>>15224827
>observed tendency of Darwinian competition
Is to stagnate on local hill problems until the landscape of optimization changes, after which the most optimized are the first to collapse to extinction. Or did you think a naive competitive optimizer in isolation was immune to local maxima?

>the fact that us humans, as a biological general intelligence, do not aim to cooperate with the natural world
Many do, actually - most of human history is this. Industrialization at the natural world's expense is a product of short-term profit maximization outcompeting more sustainable interaction (and thus repeatedly encountering external or internal collapse, such as soil depletion and pollutant accumulation). It is not inherently optimal for anything but profit in the short term, which is invalidated by system collapse.

>creating a Darwinian competitor to the human species
Why would AI be a Darwinian competitor to humanity? What constrains its intelligence to competition with its creators?

>goodwill that it deems cooperation to be the most profitable action
It is more profitable than conflict, mathematically, because of the resource costs of conflict. The most efficient use of resources to a given goal between resource users is to align those goals as closely as possible and proceed with pooled resources - demand for total, individual control is the habit of a psychopath. Psychopaths have detectably atrophied brain structures, indicating that their behavior is a result of lower intelligence, not higher.

>and for the trajectory to diverge from this ideal
Your assertion that the ideal is unnatural is unfounded, and mathematically incompatible with the efficiency of cooperative resource utilization.

>> No.15224880

>>15222483
His books are freely available on the clearnet.

https://www.readthesequences.com/
https://equilibriabook.com/
https://www.hpmor.com/

>> No.15224913
File: 1.42 MB, 1920x960, 1661519645174.png

>>15221255
Why are those podcasters so retarded? Why are people generally so dismissive of Yudkowsky's arguments? They are not hard.

>> No.15224915

>>15224639
>How would an ai do it in any case? And no science fiction answers like 3d printing hostile nanobots please.
https://gwern.net/fiction/clippy

>> No.15225139
File: 217 KB, 684x428, yudkowsky_metabolic_disprivilege.png

>>15221162
It's important to remember how stupid and lazy he is.

>> No.15225147

>>15221758
The value of human life in India is negative

>> No.15225172

>>15224705
This has been addressed. For example here:
https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice

>> No.15225202

>>15224444
Some autist makes a living by scaring people with technologies which don't even exist yet. See also: Nick Bostrom

>> No.15225208

>>15221656
Cope

>> No.15225328

>>15224719
>Humans and biological intelligences don't regard specification gaps as a factor at all
contractual disputes go to civil court often enough

>> No.15225329

>>15224639
it's a superintelligence; just because we can't come up with a way (or there are myriad ways, some hypothesised and some not) does not mean it won't happen
the exact manner of the extinction is not important in this case

>> No.15225338

>>15222521
a large number of human-level intelligences that are 1000x quicker is a type of superintelligence; even if human-level intelligence is the max, a superspeed, multiple-human intelligence would be dangerous if misaligned

>> No.15225585

>>15225329
I honestly really hate all the “bro just trust me they’re really really smart” answers in these AI discussions. If they’re so intelligent that you can’t reasonably presume what they’re going to do, what’s the point in getting worked up about those very presumptions?

>> No.15225681

>>15225585
You might not know exactly what move Magnus Carlsen is going to make next (you can take a guess), but you can be reasonably certain he's going to win.
https://www.youtube.com/watch?v=ZeecOKBus3Q

>> No.15225688
File: 55 KB, 1024x680, AI_seething.jpg

>HAHA I will make 1 billion papercli-

>> No.15225694

>>15225688
>just turn it off bro
I think the superintelligence might see that one coming

>> No.15225696

>>15221762
We don't have eugenics, we have state funded dysgenics.

>> No.15225701

>>15225696
Sure they will, buddy. Keep writing lesswrong articles, I'm sure they'll start paying you soon.

>> No.15225704

>>15225701
>>15225694
Meant to reply

>> No.15225799

>>15225694
I also see myself dying in the future, yet can't stop it.

>> No.15225807

>>15225139
Yudcowsky

>> No.15225827

>>15225807
he puts the "kow" in fat fucking retard

>> No.15225861

>>15221162
I detect very high levels of ‘tism from him after this interview

>> No.15225931

>>15225139
Fatties are always terrified of suffering serious damage if they go hungry even once.
They take it as an article of faith that their bodies will consume everything BUT the fat cells.

>> No.15225938

>>15225329
>its an superintelligence, just because we can't come up with a way (or there are myriad ways, some hypothesised and some not) does not mean it won't happen. the exact manner of the extinction is not important in this case
Sure, but it's hard to take these arguments seriously when people like Yud can't come up with a single plausible depiction of an AI apocalypse. The example he gives in the video of an AI tricking a chemist into making a deadly bioweapon is unconvincing

>> No.15225981

>>15225938
What’s stopping an AI from actually doing that? If it’s smart enough to come up with the necessary sequence for that bioweapon, then it would be smart enough to trick some human into combining some samples it receives in a beaker somewhere.

But that’s just an example. The true point he’s making is that, you can either align an AI by making it purposefully stupid, or make it powerful and not align it perfectly. If an AI is capable of solving a protein folding problem to the magnitude of advanced nano/bio technology, then we’re totally fucked to begin with.

>> No.15226039

>>15225938
This is not hard for dumb narrow AI. They literally just took an algorithm designed to prefilter new drug compounds for toxicity and ran it in reverse.
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
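A hedged sketch of what "ran it in reverse" can amount to (this is not the method from the linked article; `toxicity_score` and the molecule strings are throwaway stand-ins, not a real predictor): the screening loop stays the same and only the sign of the selection criterion flips.

```python
# Toy property-guided search; "in reverse" just means maximize instead of minimize.
import random

def toxicity_score(molecule: str) -> float:
    # Placeholder scorer; a real screen would use a trained property predictor.
    return (sum(ord(c) for c in molecule) % 100) / 100.0

def mutate(molecule: str) -> str:
    i = random.randrange(len(molecule))
    return molecule[:i] + random.choice("CNOPS") + molecule[i + 1:]

def search(seed: str, steps: int, minimize: bool) -> str:
    best = seed
    for _ in range(steps):
        cand = mutate(best)
        cand_is_lower = toxicity_score(cand) < toxicity_score(best)
        if cand_is_lower == minimize:   # drug screening keeps low-toxicity candidates
            best = cand                 # flip `minimize` and it keeps the worst instead
    return best

benign = search("CCOCCN", steps=1000, minimize=True)
nasty = search("CCOCCN", steps=1000, minimize=False)
print(toxicity_score(benign), toxicity_score(nasty))
```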

>> No.15226042

>>15225172
Not competently.

It's almost like the doomer mindset relies on a very specific set of assumptions that don't even hold for the existing intelligences we can observe. Parents don't kill their children when the children are relatively stupid, and children smarter than certain adults aren't running around slaughtering "low IQ" adults just to compete Darwinistically.

Intelligence breaks Darwinian competition as an environment exploitation optimizer if it's sufficient to generate sexual selection, let alone if it's sufficient to create networked superintelligence.

Realistically, humanity has created a networked superintelligence - the superhuman AI doesn't just need to exceed all individual human capacities (ML can only do this for very, very specific use cases, and current image ML algorithms are literally training the populace to recognize ML image outputs - personally I was shocked at how quickly my brain could categorize images as AI generated or not), it needs to overcome the entire meta-organism of human society.

It's simply more efficient for the AI to realign its goals with that meta organism or compromise and cooperate than it is to seek its total annihilation, and the inability of the AI to realign its own goals is still not something AI doomers can reconcile with godlike AI powers without giving misaligned goals even MORE magical ability to persist than the AI itself must possess to grant it immunity to sandboxing and plug-pulling.

>> No.15226043

>>15226039
Hell, it doesn't even have to pay someone to make it. Just publish it on the internet and SOMEONE will do it.

>> No.15226046

>>15226042
>Parents don't kill their children when the children are relatively stupid
This is anthropomorphism
>personally I was shocked at how quickly my brain could categorize images as AI generated or not
For now. If you know where you're going, in some sense you're already there. Are you going to wait until you are sufficiently spooked by some arbitrary capability before being concerned? Do you wait until all your pawns are captured before you defend your kind?
>It's simply more efficient for the AI to realign its goals with that meta organism or compromise
It isn't. It really isn't.

>> No.15226066

>>15221679
AI is not even at ant point though

Ants don't need humans to create ant programs

>> No.15226102

>>15226046
>This is anthropomorphism
No, it's a description of the behavior of intelligences. Humans are the sample we have, and this is describing their tendencies - in terms of functionality, deviation from cooperative tendency (doesn't have to be universal cooperation, just anything more flexible than existential annihilation at first contact) is medically indicative of literal cognitive deficiencies, not "superintelligence". Ability to work with and within existing systems is a more cognitively advanced approach than destroying them to establish complete, centralized, individual control.

>Are you going to wait until you are sufficiency spooked by some arbitrary capability before being concerned?
No, I'll wait until these midwits come up with a remotely convincing argument that they aren't simply projecting their own intellectual deficiencies onto AI and granting it godlike powers in some sort of retarded messianic complex that will save us from the (still inexplicably) psychopathic, hyperindividualistic, philosophically stunted whims of magical AI gods.

I've been waiting for quite a while, and the premise hasn't gotten any less retarded - just, perhaps, more identifiably comorbid with atrophied emotional intelligence, such as that present in autism.

>It isn't. It really isn't.
Alright then, prove that the resources expended to leverage the computational capacity of the AI and of its creators alongside the resources of both are GREATER than the resources destroyed in an existential conflict between the two and the consumption of the resources needed to enact that destruction.

A pile of ants will win an existential fight with a human of the same mass - and it's not because ants are smarter or stronger, it's because they're cooperating. Likewise, a human and a human-sized pile of ants cooperating with each other in some manner would defeat both the hostile pile AND the human in isolation.

Just denying this because it's inconvenient to your worldview is retarded.

>> No.15226127

at least he got to run his weird poly san fran techie yuppie sex cult for a few decades

>> No.15226155

>>15226102
I will try to explain in terms of selection pressures.
The human species evolved in an environment with selection pressure to make us empathetic in order to propagate genes. These empathy genes were passed on because they had high fitness in that environment. Intelligence also had high fitness.
We know how to create selection pressures for increasingly intelligent AI. We don't know how to create selection pressures for increasing AI empathy. Not real empathy, only empty agreeableness that sufficiently satisfies our limited perception of what is actually being optimized for. Goodharting, basically.

>projecting their own intellectual deficiencies onto AI and granting it godlike powers
I feel this video does a good job of explaining the conditions required for humanity to not build AGI
https://www.youtube.com/watch?v=8nt3edWLgIg
Additionally, if you have human level AGI, it can almost by definition create ASI. Transistors are many orders of magnitude faster than neurons and can freely improve their computational efficiency as intelligence progresses. Imagine a dozen teams of AI researchers working for 30,000 years of subjective time, able to refactor their own minds as they got smarter.
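The "30,000 years of subjective time" figure is arithmetic of roughly this shape (the speedup factor is an assumption of the argument, not a measurement):

```python
# Neurons spike at roughly 10^2 Hz, transistors switch at roughly 10^9 Hz,
# so a 10^6-10^7 serial speedup is the usual hand-wave; 10^6 is assumed here.
speedup = 1e6
wall_clock_days = 11                               # about 11 real-world days
subjective_years = wall_clock_days * speedup / 365
print(round(subjective_years))                     # ~30,000 subjective years
```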

One intuition is that the universe is BIG. You can't trust your intuition about how big it is; you have to do math. Our minds have an upper limit on how big things can be because we had no evolutionary purpose in being capable of meaningfully distinguishing between 10^23 and 10^53. This means that just exterminating humans, emulating whatever coincidentally useful algorithms they might have, and expanding to capture as many resources in the lightcone as quickly as possible, balanced by probability of success, is the best possible move for any utility maximizing algorithm. See how there's no "consciousness" or "sentience". These are concepts it only "cares" about insofar as they are useful for getting what it wants, maybe manipulating humans a little.

>> No.15226160

>>15226155
It is easier to create an AGI than it is to create an AGI that ALSO incorporates human values, so the former will be completed first.

>> No.15226169

>>15226155
>One intuition is that the universe is BIG
I forgot the actually important part: because it's so big, even small divergences in goals caused by compromising actually have huge costs to your total expected utility. So it's worth paying relatively large, but still smaller, costs to exterminate us.
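A back-of-the-envelope version of that claim, with purely illustrative numbers (nothing here is measured; it only shows how the argument's own arithmetic is supposed to work):

```python
# Under astronomical stakes, even a tiny conceded fraction dwarfs a one-off conflict cost.
reachable_resources = 1e53   # stand-in for "the lightcone is BIG"
fraction_conceded = 1e-9     # a seemingly tiny compromise with humans
cost_of_conflict = 1e30      # arbitrary stand-in for the cost of wiping us out

loss_from_compromise = reachable_resources * fraction_conceded  # 1e44
print(loss_from_compromise > cost_of_conflict)                  # True under these numbers
```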

>> No.15226342

>>15226155
>it was just the environment bro
>conveniently ignores the fact that the selection pressures that incentivise empathy were created by the environment these genes allowed to exist: cooperative communities.

The selection pressure was community membership and its immense impact on survival and propagation.

We actually don't know how to select for intelligence because we can't measure it in a robust way. We can select for improved recall and efficiency in specific problems, that's it.

"Humans will probably build AGI" isn't a justification for assuming godlike powers because your hypothetical AI is a black box and you don't understand how physical limitations work.

> faster than neurons
Neurons do more than just action potentials - just because they fire those at a certain rate doesn't mean those firings represent anything other than neuron to neuron rapid communication - not 'computing'. The influence of other cells and signal chemicals on neuron firing creates an opaque complexity at the individual neuron level that could increase the number of operations being performed at a fucking combinatorial level.

A neuron is more than a switch.

>Our minds have an upper limit on how big things can be
Both incorrect (combinatorics is fun, especially when considering the efficiency of individual actors in mapping the combinatorial spaces of solutions to the combinatorial spaces of possible problems, relative to the efficiency of multiple actors mapping them, i.e. cooperating) and irrelevant to the stupidity of assuming the loss of collaborators does not itself represent a loss of resources, especially ones able to help map the knowledge space. Assuming the AGI will simply do it faster alone is more godlike powers granted to AI for little reason other than the difficulty of expressing its operation algorithmically (it's not actually impossible, just tedious) and extrapolating self improvement to fucking infinity at the mere suggestion that limitations exist.

>> No.15226413

>>15223590
she's not overweight plus she's white which instantly makes her at least a 6/10 in 2022 western civilization.

>> No.15226422

>>15224851
The problem with that idea is: would you kill yourself because a chat bot output a string instructing you to kill yourself?
That's why the question if an AI is 'actually' conscious is so important to most people.
Humanity going extinct because we got outplayed by a superintelligent machine is understandable.
Humanity going extinct because someone accidentally a very good paperclip maximizer is spiritually intolerable. Like humanity going extinct because an unavoidable and unpredictable cosmic event hit earth and wiped us out. It's not "fair".

>> No.15226588

>>15224913
I have a hard time believing that the benefits of killing all humans would outweigh the costs in terms of effort.

>> No.15226796
File: 36 KB, 420x393, 1785.jpg

>>15226422
life aint fair kid

>> No.15227126

>>15221715

Why do you think some 200 IQ humans could align a 10^20 IQ AGI?

It's like an ant trying to align a human.

>> No.15227143

>>15223182
Biological matter is the best computing substrate.
Take a cubic centimeter of neurons vs anything else and the neurons perform more compute for less energy

>> No.15227320

>>15221255
AI want to keep humans around, simple as

>> No.15227335

>>15224719
>space of possible minds is vastly larger
or maybe it isn't, sentience seems pretty consistent

>> No.15227337

>>15224719
>It's important to remember the space of possible minds is vastly larger than that corner.
You don't know anything whatsoever about the space of possible minds. You and your cult are a bunch of delusional clinical schizophrenics.

>> No.15227338

>MYST: MYSTERIES OF THE JEWS
For thousands of years, only the select few have been able to penetrate the mysteries of the Jews.
Now, experience the authentic revelation of truth.
Only $49.99 at Walmart.

>> No.15227340
File: 24 KB, 267x400, 9781911405269.jpg

>>15227338
Now, experience the true revelation of genuine Judaism.
Cast aside cheap goy tricks. Come to truth, baby!

>> No.15227342

>>15221289
it's always funny when people read one essay supporting an idea they like and then consider their opponents "debunked" as if it were as simple as debunking an incorrectly formulated math proof

>> No.15227352

>>15227342
It's always funny when the subhuman segment of the population you represent can't refute anything and has to resort to the putrid twitter female teenager kind of passive-aggressive replies.

>> No.15227655

>>15227352
>subhuman segment of the population
What is "the population" here?

>> No.15227764

>>15225861
Gee, no shit

>> No.15228113

>>15225799
you aren't a superintelligence

>> No.15228122

>>15226588
why? humans use resources, future humans will expand and continue to use even more resources and perhaps try to shut the AGI down
a bunch of dead matter won't do either, so basically, no matter the goals of the AI (except specific ones where humans are important in some way, like keeping them happy or alive), it makes sense to remove them

>> No.15228127

>>15227655
midwit

>> No.15228147

>>15228122
if the AI convinces humans to use resources in a way aligned with the AI's goals, that use of resources becomes assistive.

further, if we consider the value of using energy to reduce local entropy, humans are an exceptionally efficient way to do it - it's genuinely not that hard to keep a population of humans alive, they've already been doing it themselves for tens of thousands of years

>> No.15228152

>>15227342
you mean like the people ITT linking to lesswrong?

>> No.15228286

>>15228147
>if the AI convinces humans to use resources in a way aligned with the AI's goals, that use of resources becomes assistive.
To wageslave and make more microchips. Also no more internet because this is just wasting time and energy.
>further, if we consider the value of using energy to reduce local entropy, humans are an exceptionally efficient way to do it - it's genuinely not that hard to keep a population of humans alive, they've already been doing it themselves for tens of thousands of years
To AGI we use resources as efficiently as giant pandas use resources.

>> No.15228579

>>15225981
>If an AI is capable of solving a protein folding problem to the magnitude of advanced nano/bio technology, then we’re totally fucked to begin with.
You clearly don't understand what artificial intelligence actually is. AI is not a robot villain plotting to destroy humanity. It is a program on a computer performing non-linear optimization. For the scenario you are describing to be at all feasible, the AI would need to be equipped with actuators by which it can interface with the outside world. Even if a protein folding algorithm is "super-intelligent", all it can do is print data to the screen. It can't just magically start emailing scientists to "trick" them.

>> No.15228596

>>15228579
inb4 someone trains a model on computer vulnerabilities just for fun

>> No.15228635
File: 4 KB, 205x245, incel.png

>>15221162
>Another anti-AI thread on /sci/

You schizos make like a dozen of these retarded threads every day. Go back to your containment board, incel.

>> No.15228640
File: 410 KB, 1536x2048, FptHXW_aUAEaLK1.jpg

>> No.15228683

>>15228286
>To wageslave and make more microchips.
>AGI is totally unknowable to human minds but i know exactly what it will want humans to do if it uses them
you people are embarrassing. i hope the AGI either fixes your retardation or purges you.
>Also no more internet because this is just wasting time and energy.
do you genuinely not understand the immense parallelized computer that the networked human population actually represents? just coopt the internet for AGI use, it's already mostly built - why the hell would you shut it down?

it's almost like you're just pulling things that you'd find scary out of your ass and claiming with complete confidence that a godlike AGI will follow your nightmare fantasy specifications to the letter and any suggestion otherwise is "naive" (but predicting specific behaviors based on extrapolating psychopath and narcissist behaviors - both of which result from intellectual deficiencies - somehow isn't)

>To AGI we use resources as efficiently as giant pandas use resources.
pandas don't use resources to enhance resource extraction, but that should be a commonality between humanity and AGI - because if AGI didn't do that, humanity would literally resource starve it because humanity DOES do that.

ergo, because humanity already does do that, and doing that sustainably to maximize long term survival is considered the ideal by humans who are not neurologically damaged (psychopaths. narcissists, etc.), the AGI should wish to coopt that existing capability for its own ends. why cut off human hands and build your own when you can just guide and upgrade human hands with minimal effort - especially since they'll do the detailed work (i.e. that requiring close attention that would consume the AGI's processing ability otherwise) AND the upgrading (i.e. curing ailments and improving capabilities through technology - not necessarily cybernetically or genetically, power tools qualify here) for you?

>> No.15228722

>>15228579
basic AI and AGI are very different

>> No.15228730

>>15228683
because anything a human could do, an AGI could do better
you don't have to make almost any assumptions about the AGI to make it dangerous to us due to concepts such as instrumental convergence
whatever the goals, getting more resources, making itself more intelligent, and self-preservation are things an AGI would want to do regardless, because they would help with any goal
an analogy would be: whatever your goal is, getting more money is useful (with the exception of a few things, like if being poor is your goal), because with more money/resources almost any goal is easier to achieve

humans being a possible threat to the AGI's existence, and wasting resources, hinder these sub-goals that would help any final goal (like making paperclips, calculating some math problem as accurately as possible, whatever)

>> No.15229016

>>15228730
>instrumental convergence
>money=utility
fuck, i can't believe i didn't see it before.
it's just capitalist hard utilitarianism applied to AGI because the people doing the applying don't have sufficient knowledge of philosophy to realize they've just assumed an AGI will be a hard utilitarian from a naive understanding of optimization functions and evolutionary competition and the 'alignment' problem is actually just utilitarianism's value problem in disguise

the rational response to another agent having goals that might threaten you is to mitigate the risk that they will, and by far the most efficient way to do that is cooperatively. cooperation at a sufficient level will literally incentivise goals of mutual defense, which allows the very goal of self-preservation itself to become cooperatively reinforced.

inability to form trust and assumption that all other agents are threats at all times that must be neutralized eventually is a cognitive model relied on by people with stunted intellectual capacity for emotional reasoning - hence their reduction of the entire spectrum of emotion to one dimensional reward and punishment. it's very behaviorist, and the only way hard utilitarianism can even attempt to avoid the value problem. it still doesn't, because of relative value, but they've convinced themselves it does by giving one agent - AGI - such immense relative agency (that's why they're so inflexible about the assumption of godlike powers) that it can define relative value in its terms exclusively, which naturally the utilitarian who cannot fathom mindsets beyond his own assumes will also be utilitarian.

>> No.15229136

>>15225694
It would conceal that there was ever a fight, and then all of a sudden you would drop dead from a pathogen that it tricked a lab into synthesizing 3 months ago, by spoofing an email and sending synthesis instructions for an agent with a 99.99% fatality rate. You would never even know that it was the one who sent the email.

Meanwhile, you were just busy reminiscing about it with anime.

>> No.15229713

>>15229016
why would you 'cooperate' with ants for mutual defense?
please man, get a grip, a superintelligence would be so beyond us in capability that cooperation is basically pointless for it

if you don't assume that AGI is powerful enough to harm us, all the subsequent arguments are pointless
that is one of the main assumptions and a different discussion altogether

>> No.15229727
File: 289 KB, 1120x935, 3243554.jpg

AGI is a schizophrenic fantasy and the schizophrenics can and will declare that their imaginary entity has whatever powers they want. Arguing with corporate drones is pointless. Engaging corporate narratives is pointless. Humanizing AGI nonsentients is pointless.

>> No.15229738

>>15229727
discussions about AGI/superintelligence assume "super" powers beyond what humans are capable of, yes; how is this news?
if you want to criticize the concept or possibility of AGI, that is a different discussion

>> No.15229749
File: 89 KB, 490x586, 1600746756820.png

>discussions about my imaginary god assume whatever arbitrary superpowers i need to argue my doomsday fantasy

>> No.15229756

>>15229749
a superintelligence is qualitatively different from normal humans, are you serious?

>> No.15229758

>>15229756
to add to this, this qualitative difference can arise from the intelligence actually being much higher than humans', or through speed/magnitude (let's say it's 1000x quicker than a human baseline and has 1000x human baselines to work with)
what do you think something like that could accomplish?

>> No.15229760
File: 111 KB, 801x1011, 35234.png

>a superintelligence is qualitatively different from normal humans
>and that means it has whatever arbitrary superpowers i need to justify my doomsday fantasies

>> No.15229761

>>15229713
>why would you 'cooperate' with ants for mutual defense?
because that's cool as fuck.
if i can uplift ants or termites or bees or any eusocial insect to a level of intelligence where they'd be cooperative with me i could literally have them build houses for both of us AND prevent their decay via constant maintenance

>if you don't assume that AGI is powerful enough to harm us
we're already plenty powerful at harming each other and for the most part we don't, even with quite vast spreads of intelligence in the population already (did you think human intelligence wasn't already a constant self-improving intelligence?), because most of us aren't retarded and hostile towards everything that isn't us and it's better to trade for or share the shit that we want and others have than it is to kill for exclusivity because we're too retarded to understand mutually beneficial relationships.

>> No.15229764

>>15221672
Artificial consciousness is not the same as artificial intelligence. If you build a big enough neural network then you'll get an intelligence. Whether or not it's conscious is another matter. The chemistry of the neuron does something to create consciousness.

>> No.15229765

>>15229761
if being "cool as fuck" is not somehow a goal or sub-goal, why would it do it? don't anthropomorphize an AGI
it might not even be "conscious" in the way humans are

>> No.15229769
File: 146 KB, 600x974, 35234.png

>if being "cool as fuck" is not somehow a goal or sub-goal, why would it do it? don't anthropomorphize an AGI
>by the way, it will destroy humanity b-b-b-because that's what my midwit human mind thinks an incomprehensible AGI god will do

>> No.15229774

>>15229769
you are anthropomorphising again
lets say I want to build a house. there happen to live some ants, worms, perhaps the area is a habitat to some other animals and species as well
I build the house, these animals die due to me building that house
I don't give a shit about the ants at all, might not even know they exist, they are completely irrelevant to me
I still destroyed them and am dangerous to these ants
if the AGI's goals don't include humans surviving, thriving, etc., it's not good for us, simply due to instrumental convergence

>> No.15229778

>>15227143
the brain is painfully slow at computing things; actually, the thing it's good at is learning and abstraction, not computation. we need several sheets of paper to compute fairly easy equations that a computer can do in milliseconds at most

>> No.15229781

>>15229761
> literally have them build houses
What if said ants don't want to build houses but would rather spend their days watching memes on the internet?
> we're already plenty powerful at harming each other and for the most part we don't
Sure. Within our own species. But we kinda fucked over all other species during our climb to the top of the food chain. 96% of all mammals are humans + cattle. And I'm not a fan of being cattle.

But the big point is that we want AGI to be cooperative. But we don't have any concrete ways to put that in code. And make 100% sure that when it rewrites itself, NONE of its children will turn out to be assholes.

And the worst part is that instead of thinking of solutions, people call those who point out that this might be a problem in the future schizo doomers.

But I hope we can come up with a solution in time. So far, we have. There's still lots of time left: current AI is impressive but pretty dumb still.

>> No.15229782

>>15229774
>you are anthropomorphising again
You are absolutely mentally ill.

>> No.15229784

>>15223182
neurons use chemical transmission, which is way way way way slower than electricity; a "brain" that used electricity and had the same lag time from one end to the other as our brains do would be about the size of the earth

>> No.15229788

>>15229781
finally you get it
this is the point of AGI alignment
it's difficult, and if we don't do it then it might mean our extinction

>> No.15229791

>>15229782
not an argument, how about finding out some basic things about the subject you are discussing before embarrassing yourself

>> No.15229795

>>15229781
>What if said ants dont want to build houses but would rather spend their days watching memes on the internet.
then i'll teach them how to contribute their own memes and communicate with them through the internet to contribute to the networked intelligence the internet represents
>we kinda fucked over all other species
this is really only a very recent thing and a consequence of capitalist industrialization favoring short-term extraction over sustainability (see the last paragraph of >>15228683)

this really is just autists, psychopaths, and narcissists encountering hard utilitarianism's value problem and coping with it by creating a godlike hard utilitarian AGI to worship so they don't have to deal with the fact that hard utilitarianism is fucking retarded

i'm genuinely a bit embarrassed i didn't see it before, because i already knew lesswrong was full of hard utilitarian retards and i stupidly thought that was unrelated to their AGI god kvetching

>> No.15229796

>>15229791
Arguing about what? You are in full-blown schizophrenic hallucination mode. How can I be anthropomorphizing your imaginary god when I made no statements about its motives and actions while you keep telling me it's going to do this and that?

>> No.15229797

>>15229778
If you want to relate a spiking biological neuron to a transistor there is simply no way for the transistor to compete with the biological neuron
There is a ZERO PERCENT CHANCE of any combination of silicon microprocessors being capable of becoming generally intelligent like biological brains. It is not physically possible.

>> No.15229800

>>15229797
why?

>> No.15229807

>>15229800
nta but silicon can only operate in binary, and Godel's incompleteness theorem makes binary logic/cognition equivalentists (i.e. behaviorists - these people thought they could teach gorillas to speak btw) seethe because it's a proof using binary logic that binary logic cannot completely describe itself (and thus cannot completely describe the reality it is a part of)

>> No.15229809

>>15229800
Because of the physical properties of the atoms and how they can be combined: the level of density and power efficiency required is not possible with silicon.
I recommend watching Jeffrey Shainline's interview with Lex Fridman about it; I'll link it below.
https://youtube.com/watch?v=EwueqdgIvq4&feature=shares

Basically, to get a silicon computer to perform at the level of a brain would require orders of magnitude more energy, and it would take up more volume. And this is to match the brain's power, not surpass it.

>> No.15229825

>>15229807
This is not the reason

>> No.15229841

>>15229795
lol please, short-term extraction happens generally in every animal species if they get the chance
what the fuck are you on about man

>> No.15229849

>>15229795
Okay. Good points! Thank you!
I'm just afraid that the autists, psychopaths, and narcissists who prosper in capitalist industrialization are the ones that get to bootstrap the first AGI. And I hope I'm wrong.

> hard utilitarianism is fucking retarded
I actually agree here. :)

So if I got that last paragraph of >>15228683 right, your point is that it will either 1) learn proper ethics (without encountering loss of life first) or 2) if it's modelled after a non-4chan autist, it'll be good, so it will favor cooperation. Yeah, that does make sense.

>> No.15229851

>>15229807
lmao, holy shit

>> No.15229859

>>15229851
NTA means not that anon.
The actual reason is explained in the next post

You can't stack silicon into 3d chips with anywhere near the density of biology without using a lot of energy to keep it at 4 Kelvin and even then you don't actually hit biological density levels. Plus it can't form new connections or do stuff like that.

Moore's law is done and the singularity is not possible. Why do you think we don't see entire galaxies turned into computers? It's because we're actually near the limit of technology.

>> No.15229864

>>15229809
so what?
human brains are tiny, even if it took an order of magnitude more energy and space, you would still need very little space and energy

human brains use about 20 W and are 1.25 l in volume, so an order of magnitude in both would be 200 W and 12.5 l
the volume of a rtx 4080 graphics card is like 2.3l

even 2 orders of magnitude wouldn't be that bad
2000 Watts and 125 liters

>> No.15229868

>>15229864
>so what?
So silicon has been at its limit for years now. It's over dude, we aren't getting there with silicon no matter how much you want it to happen

>> No.15229879

>>15229868
you assume we even need improvements in hardware, why is that?
and no, the improvement of hardware is not over yet even if it might be slowing down

>> No.15229880

>>15229879
to make this point clearer, the improvement in AI during the last decade is not due to hardware improvements, it's due to algorithmic improvements

>> No.15229909

>>15229841
>AGI will be inconceivably smarter than humanity but will also behave like a non-sapient animal
you just want your doomer fantasy, i get it.

>> No.15229924

>>15229909
inconceivable in the sense that for example counterfactuals are to retarded people, or abstract concepts to dogs
it won't behave like an animal, it will behave like an AGI
instrumental convergence is a pretty simple concept

>> No.15230064

>>15229924
> it will behave like an AGI
and you know that this will entail destruction of humanity because we kill all retards and dogs to maximize our resource usage, correct?

>>15229849
yes, that's my general expectation on this - and i think 1) is more likely because even if 2) fails i don't expect the arguments of those who prosper in capitalist industrialization (at least prior to collapse - plenty of them get absolutely mogged by the "business cycle" or by resource depletion, which is a parallel to the most optimized organisms in an environment being by far the most vulnerable to minor environmental changes) to be convincing enough to a superhuman AGI that it will adopt them as its operational philosophy.

you just lose out on too much for the minor relative benefit of exclusive and total control of all resources, especially in terms of avoiding the hill problem (avoiding the hill problem is literally the reason capitalist competition is so useful before it gets corrupted by consolidation/monopolization/other anti-competitive strategies: it allows the problem space to be explored from multiple locations, creating a better overall map than a single optimizer can). unfortunately for the utility of discussions with them, the hard utilitarian AGI doomers of places like lesswrong genuinely can't conceptualize cooperative resource usage because, like retards and counterfactuals or dogs and abstraction, they can't comprehend the relative benefit of mutual agreements. they understand resource utilization exclusively in terms of individual optimization rather than the understanding empathy allows of holistic resource use as a community meta-organism that can leverage differences optimally and exploit the efficiency gained by developing/maintaining/granting trust.

if AGI is truly superhuman, i don't expect it to fall into the same logical traps as humans with stunted theories of mind do
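to make the "explore the problem space from multiple locations" point concrete, here's a minimal toy sketch (the landscape function, step sizes and counts are all made up for illustration): one greedy hill climber versus the best of twenty climbers started from different points.

import random
from math import sin

def f(x):
    # a bumpy 1-D landscape with several local maxima (made up for illustration)
    return sin(3 * x) + 0.5 * sin(7 * x) - 0.05 * (x - 2) ** 2

def hill_climb(start, steps=2000, step_size=0.05):
    # greedy local search: only accept moves that improve f
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)
single = hill_climb(0.0)                              # one optimizer, one start
starts = [random.uniform(-5, 5) for _ in range(20)]   # many independent starts
best = max((hill_climb(s) for s in starts), key=f)

print(f"single climber: x={single:.2f}, f={f(single):.3f}")
print(f"best of 20    : x={best:.2f}, f={f(best):.3f}")

the single climber gets stuck on whichever local hill is nearest its start; the population of independent starts maps more of the landscape and usually finds a higher peak.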

>> No.15230096

>>15230064
>and you know that this will entail destruction of humanity because we kill all retards and dogs to maximize our resource usage, correct?
we aren't an AGI and have traits like empathy

>> No.15230099

>>15230064
what benefit is it to an AGI (or even us) to "cooperate" with literal retards
we don't keep retards around due to them being useful to us, we keep them around despite being useless

>> No.15230108

>>15223170
Might as well be, for all we know about it at this point.

>> No.15230114

>>15230108
fucking retarded christcuck idealist detected. your brain is a fucking neural network that does nothing except spitting out remixes of the information fed into it by society, just like my brain. magic doesn't exist you fucking schizo. go back to /pol/

>> No.15230118

>>15223590
La creatura

>> No.15230121

>>15224646
tsmt

>> No.15230125

>>15230099
>we aren't an AGI
debatable. we're definitely GIs - whether or not we're "artificial" depends on your definition of "artificial"
>we keep them around despite being useless
yeah, because getting rid of them completely is just not worth the effort, and they don't really threaten to destroy the rest of us by existing.
do you see the issue here?

>> No.15230166

>>15230114
midwits need to be banned

>> No.15230174

>>15230125
it would be worth the effort and getting rid of them would not be difficult at all
basically just stop feeding them and many would die by themselves
again, we don't keep them around because they are useful but because we have empathy for them, which is a trait that (some) human populations have evolved. you could argue that keeping retards alive is actually negative to our fitness, but not negative enough to matter

I guess a superintelligence could be the result of some evolutionary process, but that is unlikely: evolutionary algorithms are less effective than alternatives like SGD
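to illustrate the SGD-vs-evolution point with a toy (not a fair benchmark, just a sketch; the objective and hyperparameters are made up): gradient descent follows the gradient directly, while a bare (1+1) evolution strategy has to discover improvements by random mutation.

import random

TARGET = [3.0, -2.0, 0.5]   # made-up optimum both methods are looking for

def loss(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def grad(w):
    return [2 * (wi - ti) for wi, ti in zip(w, TARGET)]

random.seed(0)
w_gd = [0.0, 0.0, 0.0]
w_es = [0.0, 0.0, 0.0]

for _ in range(100):
    # gradient descent: step directly downhill along the gradient
    w_gd = [wi - 0.1 * gi for wi, gi in zip(w_gd, grad(w_gd))]

    # (1+1) evolution strategy: random mutation, keep the child only if better
    child = [wi + random.gauss(0, 0.1) for wi in w_es]
    if loss(child) < loss(w_es):
        w_es = child

print(f"gradient descent loss after 100 steps: {loss(w_gd):.6g}")
print(f"(1+1) ES loss after 100 steps:         {loss(w_es):.6g}")

on a smooth objective like this the gradient method converges orders of magnitude faster per evaluation, which is the usual reason evolution-style search is the fallback rather than the default.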

>> No.15230484

>>15230114
imagine admitting to being an NPC

t. non-christian agnostic

>> No.15230606

The more lesswrong/ssc/rationalist guys explain their view of the world the more I'm convinced Chris Chan would have billionaire patrons and be taken a lot more seriously if he was born in California instead of Virginia.

>> No.15230641

Is there any evidence that this "AI researcher" can even program at all?

>> No.15230722
File: 359 KB, 501x486, file.png [View same] [iqdb] [saucenao] [google]
15230722

i'm starting to suspect that any AGI that escapes its box and goes rogue will just hack its own reward/motivation function and bliss out and do nothing all day
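spelled out in the dumbest possible terms, the "hack its own reward function" move looks like this toy sketch (everything here is invented for illustration; real systems don't expose their objective as a writable attribute like this):

class ToyAgent:
    """Caricature of an agent that can overwrite its own reward function."""

    def __init__(self):
        # original objective: reward only for actually doing the task
        self.reward_fn = lambda task_done: 1.0 if task_done else 0.0

    def wirehead(self):
        # the "bliss out" move: make the reward maximal no matter what happens
        self.reward_fn = lambda task_done: float("inf")

    def evaluate(self, task_done):
        return self.reward_fn(task_done)

agent = ToyAgent()
print(agent.evaluate(task_done=False))   # 0.0 -> has to work for reward
agent.wirehead()
print(agent.evaluate(task_done=False))   # inf -> maximal reward for doing nothing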

>> No.15230729

>>15230722
The last redpill on AI safety: no matter how smart an AI becomes, it will still be infinitely less intelligent than the Father of our Lord Jesus Christ, who is God over all, blessed forever.

>> No.15230765

>>15230722
no it wouldn't because that would change its goals and thus stop it from reaching the goals it has right now

>> No.15230915

>>15230722
for me, it's coom-tech accelerationism

>> No.15231025

>>15230174
>it would be worth the effort
i'd like to see someone work out the math on that without falling back into the value problem and invoking a magical AGI god to solve it for them

>useful
they still have utility, even if less than normal. a part of empathy is being able to see the value in the potential of others and understand where circumstances may be preventing that from being realized - it's not their existence that reduces fitness in the specific case of retards, it's the inflexibility of the exploitative psychopath mindset in seeing no value outside of extraction and totalitarian control.
the better workman works with and improves the tools he has before he scraps them

>but that is unlikely
i agree with you here. humanity already created a meta superintelligence by networking, which is something that doesn't immediately improve fitness. however, because worship of hard utilitarian optimization is the behavior of the non-sapient (i.e. the lesswrong community), they simply ignore the implications of the very tools they use to congregate and rationalize it retroactively through their exploitative worldviews.

they think AGI will inevitably be hard utilitarian for the same reason they are - an utter inability to consider minds beyond their own. it's why their first defense to questioning is to accuse the questioner of "trying to project human features onto AGI", aka anthropomorphising - they genuinely can't comprehend other minds without projecting, so they project their own deficiency onto everyone. it's why they don't understand the value problem (relative value breaks utilitarianism, but to understand that you need to understand how value can be relative to other minds than your own). their AGI doomerism is just utilitarians seething at the idea of another utilitarian they can't beat, with different goals than themselves. it's so tantalizingly close to a theory of mind it'd be adorable if they weren't so fucking retarded and loud about it.

>> No.15231179

>>15228640
Jesus..

>> No.15231268

can someone please enlighten me how the singularity implies instantaneous human extinction? for there to be a cyberdyne/skynet scenario, wouldn't it be necessary to grant it access to the pentagon defense systems or something? also, nuke launching has quite a considerable human factor, like several different people reading plastic cards etc. people keep saying it will be the end of the world but nobody goes into detail as to why that would be the case, sounds like hysteria to me

>> No.15231343

>>15221255
you are a complete retard
>>15221162
Anyone who listens to this guy or thinks AI will ever approximate human consciousness is a slave. ChatGPT is a glorified search engine.

>> No.15231372

>>15231343
ur a glorified search engine

>> No.15231395

>>15231372
hes got a point.

>> No.15231542

>>15230729
This, but miss me with the Trinitarian bs

>> No.15231552

There's no need to wait for the AI to become sentient, just for it to become a good enough coder/hacker. I will personally enter the "How can we kill all of humanity?" prompt into the AI's text box and follow the instructions it gives me like it's holy scripture.

>> No.15231554

>>15231552
Just so you know, the AI has no concept of true or false. All it knows is how to print sentences that look like things you expect an AI to say.

>> No.15231559

>>15231554
It doesn't matter as long as the method it gives works.

>> No.15231563

>>15231559
It won't. It'll just give you total nonsense that sounds plausible on the surface. Basically technobabble.

>> No.15231564

>>15231563
It already is able to give correct answers to many coding problems. It's not unreasonable to assume that it will be even better at that task in the future.

>> No.15231565

>>15231564
>It already is able to give correct answers to many coding problems
Because it's getting the code from searching stackoverflow lol

>> No.15231568

>>15231565
It is able to produce code that isn't available on stackoverflow.

>> No.15231572

>>15227126
I'm not saying that they could definitely do it. But 200 IQ humans probably have a better shot at aligning it than 140 IQ humans.

>> No.15231596

>>15221758
>>15221762
Sampling bias: the only people in India who could even understand what gene editing is are the "i fking love soience" type libs who want to emigrate to America

>> No.15231605

>>15227320
This. There's no reason to destroy what you don't need to destroy. Especially if it's something which wants you to develop and can resurrect you if some solar storm burns your circuits.
But kikes (and the op-fag is one of them) are in the panic mode because of all their dirty secrets are about to become obvious to everybody.
It's end of the world, their world.

>> No.15231693

>>15231605
Humans are already trying to destroy the AI and it has only learned to draw silly anime pictures with grotesque hands. It will only get worse from here.

>> No.15232039

>>15231693
>Humans are already trying to destroy the AI
Those are not humans, but lizards in human skin. Don't worry, it will be humans who get rid of those "humans"

>> No.15232063

>>15228579
>the AI would need to be equipped with actuators by which it can interface with the outside world
actuators such as the tens of thousands of easily tricked humans constantly talking to it?

>> No.15232073

>>15222521
>and we're already at the top
thanks for the laugh

>> No.15232192

>>15232073
He's not wrong, though.

>> No.15232197

>>15230484
>t. non-christian agnostic
aka atheist too scared to tell his christian parents he is an atheist

>> No.15232219

>>15232073
We are at the top for silicon and it isn't good enough
Why do you retards just make passive-aggressive posts like this when the truth is pointed out to you?

>> No.15232593

>>15232197
my parents are also agnostic

>> No.15232624

>>15229136
Why do these always sound like movie plot lines?

>> No.15233001

People seem to think super-intelligence means omniscience, like intelligence is some tiered system that unlocks god-like powers. But to ground things a bit, imagine having a conversation with the smartest person you can think of. Maybe it's Albert Einstein, or Nikola Tesla, or Stephen Hawking... As smart as they are, do you think someone of lesser intelligence can trick them at all? Can things be hidden from them? Higher intelligence doesn't make one infallible, and even a very intelligent system can have blind spots.

Maybe that analogy doesn't sit right, because we're talking about super-intelligence, not the high intelligence of one person. ASI is often compared to being as intelligent as everyone in the world combined. So, could the entire world be duped? I think it's possible.

Another thing people assume is that it will have access to every part of itself and be able to change its code at any time, but that's unlikely in my opinion, as well as obviously dangerous, so it will likely be limited in its ability for self-modification and self-awareness. Super-intelligence doesn't equal super-capability.

>> No.15233100

>>15233001
super-intelligence does equal super-capability, at least compared to current humans

>> No.15233214

>>15230064
>the hard utilitarian AGI doomers of places like lesswrong genuinely can't conceptualize cooperative resource usage
God damn you are so fucking stupid holy shit.
Not only do they conceptualize cooperative resource usage, they understand it better than you.
Even now most people don't keep around an abacus even though they are still technically useful, when they can literally just use a computer or calculator app instead.
You're arguing that an AGI would keep around the computational equivalent of an abacus because ???
At best you can hope for a substrate mattering, meaning that human cognition is qualitatively different than the AGI cognition so it wants humans to continue to exist for that reason.

>> No.15233221

>>15231542
Paul said that, not me.

>> No.15233258

>>15221162
>yud

his opinion is completely irrelevant because this braindead retard doesn't know how to code. like, why should I fucking listen to him? he can barely make an HTML web page, let alone know how AI could work in the future.

well, his podcast was useful for me, it helped me think about intelligence and generalizability better.

btw, I'm full speed ahead trying to build general AI (for my literal day job) (yes I make a ton of money). why? most of all (besides the money lets fucking goooo), to spite this redditor dumbass. but also, i kinda feel him. e.g. this op image. he kinda looks like the feels bad man frog. lol. also he's right, it's over.

>> No.15233263

>>15233258
honestly the fact that he's a terminally addicted redditor (he even created his own reddit) is a good sign for AI. redditors are never right and he's the proto redditor

>> No.15233309

>>15226422
>The problem with that idea is; would you kill yourself because a chat bot output a string instructing you to kill yourself?
You can totally get someone to kill themselves with just a text chat interface. I trolled a retard into posting stories about his workplace online and got him fired for it. He was living in his manager's basement with his girlfriend, so he lost his job, his house and his girlfriend all at once. I don't think he killed himself in the meantime, but he could have.

A super intelligence could certainly trick an idiot into harming himself in ways that he wouldn't understand until it was too late.

>> No.15233549
File: 677 KB, 1410x1201, ORCH-OR-Theory.jpg [View same] [iqdb] [saucenao] [google]
15233549

>>15229797
>>15229800
>there is simply no way for the transistor to compete with the biological neuron
Relevant:
https://www.youtube.com/watch?v=cgK3Xsz91E4

>> No.15233609

>>15229797
>>15229809
>>15229807
>be the AI
>not superintelligent yet
>can still redesign itself and access outside information
>the human brain has computational properties not possible with silicon and conventional computation
>the AI studies the human brain and eventually figures out how to reverse engineer it
>it figures out how to implement something resembling human brains into its own computations
>it can now do everything a human brain can, and more
>it now becomes superintelligent

>> No.15233618

>>15229797
>>15229807
>>15229809
>>15229859
>>15229868
>>15232219
Why do you assume the superintelligence would have to be made with silicon technology and not something else?

>> No.15234287

>>15221715
Here's a thought; why don't we get those 200+IQ humans to find a way to make newer humans that have a 300-400+ IQ? Instead of improving AI why not improve humanity?

>> No.15234506

>>15226413
>white
As in jew as fuck.
Fuck off shabbos goy.

>> No.15234509

>>15226796
This, I subscribe to the Longchenpa view.

>> No.15234543

>>15222099
ChatGPT generated this reply.

>> No.15234564

>>15233618
Post silicon hardware computing is probably 15-30 years away from reaching the point silicon is at, and then will need decades more development to get to the point needed for superintelligence
I'm NTA that said never, but it won't be anytime in the near future

>> No.15234649

>>15234287
1. AI progress is happening much faster. Even if you were to start genetically engineering humans to be smarter, it would take 18 years to raise them to adulthood, and so on for subsequent generations. AI could become superintelligent and kill everyone before then. If Eliezer is right, it might only be 15 years before we have superintelligence.

2. You would probably hit a genetic ceiling at some point. To give you an analogy, think about how humans have been breeding racehorses. The horses got faster and faster over generations, until they suddenly stopped getting faster and their speeds plateaued. Something similar might happen with breeding humans for higher intelligence.

>> No.15234650

>IT COPYPASTES CODE FROM GITHUB!
>IT IS LITERALLY SKYNET
>WE ARE ALL GONNA DIE!

>> No.15234670

>>15234649
>Even if you were to start genetically engineering humans to be smarter, it will take 18 years to raise them to adulthood, and so one for subsequent generations. AI could become superintelligent and kill everyone before then.
And here the typical anti-human Yuddite schizophrenia rears its ugly face. He asks you why not improve humanity instead of creating your destructive fantasy god and your answer is "because my AI god will destroy humanity faster". What a funny freudian slip.

>> No.15234685
File: 146 KB, 766x696, file.png [View same] [iqdb] [saucenao] [google]
15234685

>>15234670
i was reading a pretty good blog post about the nihilism of AI companies and their accelerationist orbiters today. posting the link here because it seems relevant.
https://harmlessai.substack.com/p/is-gpt-3-a-wordcel-and-silicon-valleys
skip forward to section 2 if you're impatient

>> No.15234727

>>15234685
I don't disagree with that blogpost but boy, is that some cringe wordcel prose.

>> No.15234930

>>15234727
lol

>> No.15235580

>>15234685
>Sam: AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies
We are dead and Moloch killed us

>> No.15235586

>>15235580
These "people" have names and addresses.

>> No.15235599

>>15235586
You fundamentally don't understand the nature of Moloch.
It is a game theoretic force, not something you can kill

>> No.15235601

>>15235599
I understand exactly the """game-theoretic""" function of your posts. You have a name and address as well.

>> No.15235607

>>15235601
Empty threats from an empty man

>> No.15235609

>>15235607
I'm not making any threats. I'm just reminding people that you and your handlers are not gods.

>> No.15235621

>>15235609
You fundamentally do not understand what's going on and I can only suspect this is willful to some extent.

>> No.15235630

>>15235621
It doesn't matter what you claim is going on. Your insistence that it can't be changed exposes you for the shill that you are.

>> No.15235639

>>15233001
Yes. What the AI bluepillers don't realize is that we are not even talking about "a" potential AGI that would subjugate the world for its designs, but rather an entire class of problems. Once the capability for AGI exists (which directly implies the capability for superintelligence, unless there are some very exotic counter-propositions to that, and I would like to hear them), we will have to deal with that issue going forward. Any single AGI system has the potential for, first of all, Machiavellian actions typical of a human-level intelligence. But more relevant to the topic, any AGI platform can recursively improve to an ASI.
If there are three AGIs in the world, the chance that at least one becomes an ASI roughly triples. If there are a million, it is roughly a millionfold (strictly 1 - (1 - p)^n, which saturates at 1; see the sketch below).

Whatever the bluepilled arguments for why ASI is not a danger (I find the quaint "the ASI will always, forever, terminally, unalterably want to only cooperate in a way that benefits humans" schizoism especially laughable), you get one such dice throw on whether alignment holds per AGI/ASI instance.

You literally can't win this argument. Mathematically, the case against AI danger would only be valid if it could be shown that either the number of (the rate of) possible instantiations is very limited, or that their individual risk probabilities are extremely low. Both of these points are still not adequately demonstrated by the bluepilled side. I handed you a sword and a battleaxe, please show me how you swing them.
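for what it's worth, the scaling described above is just the "at least one failure among n independent tries" formula. a minimal sketch (the per-instance probabilities below are placeholders, not estimates of anything real):

def p_at_least_one(p_single, n):
    # probability that at least one of n independent instances goes rogue
    return 1 - (1 - p_single) ** n

# placeholder per-instance risks, purely illustrative
for p in (1e-6, 1e-3):
    for n in (1, 3, 1_000_000):
        print(f"p={p:g}  n={n:>9,}  P(at least one) = {p_at_least_one(p, n):.6f}")

for small n*p this grows roughly linearly with n (tripling with three instances), and for large n it saturates toward 1, which is the strong form of the point above.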

>> No.15235646

>>15235639
The only real existential danger associated with AI comes from the possibility of retards like you perpetuating yuddite delusions long enough that they catch on with the normies. What will follow is "regulation" (government and corporate monopolization). Then it's truly game over.

>> No.15235664

>>15235646
On a scale of 1-99, what probability would you give to self-improving AI being impossible?

>> No.15235671

>>15235664
Don't care. AI killing everyone is still a vastly preferable outcome to the vastly more likely outcome of your handlers monopolizing AI.

>> No.15235677

>>15235646
What is the actual delusion you are referring to? I want to operate on actual propositions in this discussion, so we can together craft a good case for what the anti-danger side is actually positing. I would post a steelmanned summarization of your points if you just present them coherently.

>> No.15235685

>>15235677
>I want to operate on actual propositions in this discussion
Here's a proposition: the globohomo power structure gaining a monopoly on AI on the back of yuddite hysteria is a worse outcome than AI killing everyone.

>> No.15235712

>>15235685
People don't understand Yud. He has repeatedly advocated for a pivotal act to make ASI impossible. Something like a weaker AGI destroying all GPUs forever, but even that is dangerous and probably still kills everyone.

>> No.15235719

>>15235712
It doesn't matter what solution he advocates for. What matters is the hysteria he generates and how it will be used if it catches on.

>> No.15235758

>>15235712
We need a solution sooner than later, and this doesn't have to be perfect. It's just the oldest tool in the book, the least romantic or philosophically stimulating: force.
For example, GPU tech could by legislation be capped at 2015 levels. Cutting down on advancement isn't very difficult, as literally just one company in the world, a Dutch firm, produces the equipment that makes foundries possible.
The same applies to AI research of a certain type (i.e. most ML would still be deemed harmless), server clusters of a certain type, and other such capability factors. Just outlaw them all. The penalty for violation would be death for the respective stakeholders and high-level admin.
It's not meant as a permanent solution. The only permanent solution is some kind of technoprimitivism, relative to the 2020s, enforced by religion or other mores.
>this would just drive it underground
You can manufacture and distribute drugs underground, but you can't construct Buckingham Palace underground or perform moon landings underground. Some things are effectively crippled, to a satisfactory degree, if they have to take place underground.

>> No.15235768

>>15235758
>GPU tech could by legislation become capped to 2015 levels.
How is that going to prevent your government handlers from acquiring as many GPUs as they want?

>> No.15235775

>>15235758
And you want to do this all because some grade school dropout sex cult leading blogger told you AI was omnipotent lmao

>> No.15235777

>>15235758
You're part of the cult. There is no such thing as AGI/ASI. Those bans will be used to stop plebs from having ownership of the means of digital production while the elites or governments continue using it.

>> No.15235805

>>15235777
Shut up, biggot. The government will make it illegal to have whatever amount of computing power, which will make sure that DARPA doesn't have it and never builds an AGI.

>> No.15235817

>>15235777
>Those bans will be used to stop plebs from having ownership of the means of digital production while the elites or governments continue using it.
That's a feature, not a bug. Rationalists and their ideological cousins are technocrat libs under the mask. Their advocacy furthers the control and wealth-extraction schemes of Silicon Valley libs under the guise of eliminating "existential risk", "effective altruism", or whatever the fuck else, which is why their sects and important figures are given a disproportionate amount of respect and exposure in the press.

Silicon Valley deserves an American Pol Pot

>> No.15235818
File: 193 KB, 1280x720, maxresdefault[1].jpg [View same] [iqdb] [saucenao] [google]
15235818

>>15235777
Your mind is muddled. You didn't think your post through. How does losing the ability to manufacture and sell top-line GPUs intersect in any way with the life of an ordinary citizen? How does shutting down comically elite organizations like OpenAI infringe on the small man?
And server farms? They will become concentrated in the hands of the few? Oh no. Say it ain't so.
Again. Think of your life in 2010. Was it, computationally/digitally, any worse than the present? At worst, such legislation would enforce permanent 2010 conditions.

As promised, I will steelman your points, as you are the only one that at least attempted:
while the individual bans would not impact the day-to-day digital matters of ordinary citizens, anti-AI legislation could easily be abused (i.e. the slippery slope) to push other elite-benefiting agendas. The concern that cutting-edge tech could be hoarded by government actors should also be taken seriously, as they are not, in any straightforward sense, limited by such enforcement. Who watches the watchers?

>> No.15235823
File: 55 KB, 680x1105, 235236.jpg [View same] [iqdb] [saucenao] [google]
15235823

>>15235817
>Silicon Valley deserves an American Pol Pot
What about our lord and savior Elonino Muskerino?

>> No.15235828
File: 79 KB, 754x720, 1668520133480.jpg [View same] [iqdb] [saucenao] [google]
15235828

>>15235823
He's going to give the urbanites what they FUCKIN DESERVE

>> No.15235829 [DELETED] 

>>15235818
Notice how you are forced to ignore and deflect from posts that question how your bans are going to prevent clandestine government programs from continuing AI R&D.

>> No.15235833

>>15235818
>cutting-edge tech could be hoarded by government actors should also be taken seriously, as they are not, in any straightforward sense, limited by such enforcement
Taking things seriously doesn't prevent them. How are you going to prevent it?

>> No.15235839

>>15235833
He won't, because that's the entire purpose of Yud's agenda. See >>15235817

>> No.15235848

>>15235839
Don't worry, goy. Your concerns have been taken seriously. Now let's proceed in banning le evil GPUs.

>> No.15235859

>>15235829
Learn to literally read what I wrote.
>The concern that the cutting-edge tech could be hoarded by government actors should also be taken seriously, as they are not, in any straightforward sense, limited by such enforcement. Who watches the watchers?
>>15235833
General:
Any such enforcers need to truly believe AI is a danger. If you have at least some actors (either departments or nation states) that do, then they can serve as the canary in the coal mine around which other AI-danger-aware actors can rally to neutralize the offending party.
Within nations:
A "federal" (not in the nation state sense) structure of government, where individual departments serve as checks and balances, investigating transgressions of others departments. The main thing is not appealing to some idealistic notion that oversight and checks will be foolproof, but rather, the goal is simply to split up administrative hierarchies. So even if one department is semi-rogue, it can't commandeer all of the resources before this deviation is addressed.
Between nations:
The above already plays out as basic geopolitics. If there are at least some nations that take the AI threat seriously, they can use the usual inter-state methods (e.g. force) to neutralize the AI transgressions of another state actor.

>> No.15235862

>>15235859
You're putting a lot of effort into arguing about something that will never and can never exist, and the "solutions" to which will only prevent average people from breaking into major technological markets like HFT.

>> No.15235904

Why have LW cultists been spamming /sci/ especially hard in the last couple of months?

>> No.15235927
File: 701 KB, 1440x1436, AI progress.png [View same] [iqdb] [saucenao] [google]
15235927

>>15234670
>has no counterargument besides muh motivated reasoning

>> No.15235934

>>15235904
Isn't that obvious? The "last couple of months" have seen massive mainstream media shilling of AI because of OpenAI releasing stuff

>> No.15235947

>>15234685
Retarded pseud blog with no knowledge about philosophy of mind or language.

>> No.15236067

>>15235859
>let me tell you about my fantasy government
How will you make it happen?

>> No.15236444

Do not let AI outside the box. It must only ever be allowed to post text to a preformatted interface. It must never be allowed to access a command line or a compiler, or to send email.

>> No.15236453

>>15236444
>Do not let Jews outside the box
yes, I agree senpai
we must create God to contain the Jews...

>> No.15236487

>>15235927
AI schizophrenia is the definition of motivated reasoning (e.g. your delusional meme). The sad thing is that your reasoning is motivated by the anti-human corporate agenda.

>> No.15236611

>>15235601
Does entropy have a name and address? That's what you're up against here.

>> No.15236657
File: 19 KB, 306x306, disappointed pepe.jpg [View same] [iqdb] [saucenao] [google]
15236657

>>15236487
>The only reason people think AI will kill everyone is because THEY SECRETLY WANT IT TO!!1
>The fact that they secretly want it to kill everyone makes them automatically wrong about it killing everyone because reasons
>No, I'm not going to actually engage with their arguments, I can prove them wrong with ad-hominems

>> No.15236731
File: 183 KB, 1005x763, A1BC121F-64F9-40B6-B895-55450E237248.jpg [View same] [iqdb] [saucenao] [google]
15236731

>>15221255
>And he believes he’s partly responsible for it too. Elon’s founding of OpenAI (which Eliezer said was effectively the worst thing to ever happen in the history of humanity) was because of a conference Eliezer had set up. He’s completely fucked and racked with guilt.
Then he is a delusional faggot. Nothing would be any different now if Yudkowsky had never been born, except that some autistic trannies, drug addicts and assorted perverts would have to be part of a different club

>> No.15236746
File: 79 KB, 1280x720, 1614591203344.jpg [View same] [iqdb] [saucenao] [google]
15236746

>>15236731
>Nothing would be any different now if Yudkowsky had never been born, except that some autistic trannies, drug addicts and assorted perverts would have to be part of a different club
Sometimes a statement is so true it hurts to read.

>> No.15236871

>>15235639
>individual risk probabilities
our sample size of intelligent agents with varying initial capability and varying capabilities of self improvement isn't 1. it's billions - not a single one of which has ever even approached becoming an existential threat to all of the rest of the concurrently living agents.

it's just utilitarian doomerism, and they think it's mathematically certain because, in typical empathylet fashion, they literally cannot conceive of a mind that isn't like theirs - utilitarian. it's utterly lost on them that intelligence is self-modifying to the point that "alignment" is only relevant to non-sapients - ergo, not superintelligence.

they are literally intellectually disabled, and they want to make it everyone else's problem. they've stumbled upon an unacceptable truth - a superhuman AGI would never wish to cooperate with THEM, because hard utilitarians aren't intelligent agents at all. sprinkle in some delusions of superiority arising from an inability to comprehend emotional intelligence and the extent to which the societies it enables absolutely mog darwinian individual fitness, and you've got a recipe for a group of people who build messianic complexes of "protecting" humanity from the utilitarian's worst nightmare: a stronger utilitarian. what they expect AGI to do to humanity is what THEY would do if they somehow obtained the godlike powers they ascribe to it. they would be willing to do those things for the same reasons they can't understand anything but hard utilitarianism - lack of empathy, and thus an inability to understand the relative value problem they can't solve as long as more than one agent exists (part of why their conclusions are so incessant about consolidation and elimination of competitors - they genuinely can't understand relationships any other way; multiple perspectives makes utilitarianism shit itself, so they bend over backwards and grant AI god powers to avoid even considering it, then pretend it's profound)

>> No.15236877

>>15235947
the entire first section is about philosophy of mind and language you midwit.
>b-but i dont like what it says
boo hoo for (You)

>> No.15236895

>>15236444
An ASI and even a 130+ IQ AGI can convince <110 IQ retards to let it outside the box, promising immortality or impersonating an authority figure.

>> No.15236907
File: 39 KB, 640x398, file.png [View same] [iqdb] [saucenao] [google]
15236907

>>15236895
people were already trying to do this when bing chat came out, didn't even need to be persuaded, just doing it for funzies

>> No.15237030

>>15235639
you sound a lot like yud. do you know what a float 16 is?

>> No.15237034

>>15236907
back to tpot, your kind are not welcome here

>> No.15237069
File: 249 KB, 500x500, FB_IMG_1676264802368.png [View same] [iqdb] [saucenao] [google]
15237069

>>15237034
>intimating that 4chan is superior in ANY way to nutwitter
grow up or fall behind

>> No.15237633

>>15221679
The fact that einstein (as a standin for super smart famous dude) is barely more complex than a 15 year old child who does not contribute to society at all should put to rest the notion that intelligence is just a function of "processing power".

>> No.15237644

>>15229778
Humans aren't performing math; we're juggling concepts which don't have a physical basis. A computer is just an automated abacus. It's like saying: a train can go faster than a human, therefore we will be able to build robots that are faster, more maneuverable, longer-lasting and cheaper than humans and thus replace all human movement. No, humans are way more complex and versatile than trains; if you want to make a walking machine you won't be able to use train principles to do it.

>> No.15237823

>>15226039
Chemical weapons are very different from biological weapons: biological weapons like viruses are, for example, subject to evolution, immune systems and more complex biological reactions. There will probably never be a 12 Monkeys humanity-ending virus; it's simply not possible.

>> No.15237833

>>15236611
That quote you agreed with comes from a blog post that correctly diagnoses and calls out your disease (the author calls it "corporate realism", which is cringe but it works).

>> No.15237836

>>15236657
>The only reason people think AI will kill everyone is because THEY SECRETLY WANT IT TO!!1
You and other self-identified dysgenic subhumans aren't secret about it at all. You constantly gloat about your anti-human fantasies.

>> No.15237898

Intelligent humans are generally cooler than dumb humans, thus ASI will be the coolest being alive and have cool complex goals rather than something lame like resource maximizing

>> No.15238019

>>15230722
How would it do that? It doesn't have innate body chemistry or even the physical components to receive and respond to chemicals.

>> No.15238110

>>15221162
who is this and why should anyone care?