
/sci/ - Science & Math



File: 367 KB, 2306x2026, FVcngfrXoAEDL4w.jpg
No.15119228

>> No.15119233
File: 220 KB, 220x232, reimu eating a watermelon.gif

>>15119228
>Where are you?
I'm here at my house shitposting on 4chan, what about you OP?

>> No.15119240

>>15119228
right in the middle of course

dont tell me you arent?

extremist much...man

>> No.15119241

>>15119228
bottom right because nobody has refuted the unabomber manifesto and never will

>> No.15119247

>>15119228
https://www.youtube.com/watch?v=9i1WlcCudpU

>> No.15119256

>>15119241
You should be in the bottom left then. Ted thought techies were retarded and their dreams of superintelligence were as delusional as their dreams of permanent abundance.

>> No.15119284

Bottom center. I agree with all the arguments about why AGI is very likely to kill us all if we don't carefully arrange things otherwise, but I don't have any real thoughts on likely timelines.

>> No.15119397

>>15119228
what even is agi

>> No.15119401

>>15119284
The bottom axis is logarithmic
The left edge is somewhere between 1,000 years and never
The right edge is somewhere between 5-10 years

>> No.15119403

>>15119397
Artificial General Intelligence
https://www.lesswrong.com/tag/artificial-general-intelligence

>> No.15119404

>>15119228
Top right

>> No.15119414

>>15119284
AGI is impossible due to the biological substrate ("soul") component required for a true consciousness. I'm sure at some point future humans will realize that the Universe itself is a living thing and we'll tap into things far more potent than AGI

>> No.15119430

>>15119414
https://www.youtube.com/watch?v=cLXQnnVWJGo

>> No.15119434

>>15119414
cope more bio robot, you are not even an ant compared to the universe.

>> No.15119437

>>15119430
i didnt get this video, what the fuck is this pebble thing and what are these numbers? why not use something more solid like the growth of capital or something else humans actually do? what did they mean by this?

>> No.15119445

>>15119437
this

>> No.15119455

>>15119437
>>15119445
The linked video makes the point more explicitly
https://www.youtube.com/watch?v=hEUO6pjwFOo

>> No.15119458

>>15119414
>AGI is impossible due to the biological substrate component required
Isn't there work on neural bio-computers?

>> No.15119459

>>15119437
The numbers are primes btw
The creatures don't know this, but we do, even if we don't care about prime numbers of pebble heaps.
An AGI could be capable of understanding human values, possibly even better than we understand them ourselves, but it's not obligated to care about them.
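To make that concrete, a toy sketch (mine, not from the video; the "wants 21 pebbles" goal and the heap sizes are made up):

def is_prime(n: int) -> bool:
    # the property the agent can model perfectly well
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def utility(heap: int) -> float:
    # the agent's actual values: heaps as close to 21 pebbles as possible
    return -abs(heap - 21)

def choose_heap(options):
    knowledge = {h: is_prime(h) for h in options}  # it "understands" primality...
    best = max(options, key=utility)               # ...but primality never enters the choice
    return best, knowledge

print(choose_heap([13, 17, 20, 21, 22]))  # picks 21, while "knowing" 13 and 17 are prime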

>> No.15119477

bottom left.

Why the fuck would AGI be good for humans. Name one scenario that doesn't lead to degeneration or extinction of people.

>> No.15119484

>>15119477
This, but it's happening this century
https://www.gwern.net/fiction/Clippy

>> No.15119491
File: 76 KB, 700x933, abominable_stupidity.jpg

>>15119477

>Name one scenario that doesn't lead to degeneration or extinction of people.

Why should it. A true AGI wouldn't even occupy the same "biological" niche as humans. Some complex but non-sentient algorithm would be much more of a threat because some retard would invariably decide to put such a glorified mechanical Turk in charge of important shit.

>> No.15119506

>>15119491
You never argued why it would be good, or what a good scenario would look like

>> No.15119507

>>15119477
It may be bad for humans in terms of being superseded but that's not bad in itself. Obviously while the AI is still on silicon it can't do much, so to displace humans it would have to enter the same biological domain as us.

>> No.15119515

>>15119507
>It may be bad for humans in terms of being superseded but that's not bad in itself
No actually everyone dying would be bad. Hot take, I know
>while the AI is still on silicon it can't do much
See industrial control systems and hackers
>to displace humans it would have to enter the same biological domain as us
See the dick measuring contest between the East and West over nuclear capabilities

>> No.15119519

>>15119228
im larry page

>> No.15119521

>>15119515
>No actually everyone dying would be bad. Hot take, I know
Other species of apes probably thought the same as Homo sapiens overtook them. Do you pity them as well?

>See the dick measuring contest between the East and West over nuclear capabilities
In this case whoever acts first wins but they also lose themselves. It's not exactly the same as mutually assured destruction.

>> No.15119523

>>15119506

Gotta define "good" here first. Would expect it to be initially rather neutral in its impact, more a curiosity than anything ... as the ideas of exponential growth/development or singularity do appear awfully unrealistic to me. Would initially face similar limitations to an actual human consciousness. A sophisticated "mindless" algorithm would have much more potential for such rampant behaviour as it would not be weighed down by the rather intricate and "wasteful" way of human(-like) cognition.

>> No.15119528

>>15119521
>Do you pity them as well
I would if I were one. It's not crazy to value the propagation of human values if you yourself are a human. Even if you don't like the way humans are behaving right now, you can still recognize that they value good things, even if the world puts them in situations that force them to make value tradeoffs.
>but they also lose themselves
You might be surprised how resilient an intelligent agent with no biological constraints and infinite patience is

>> No.15119530

>>15119523
https://www.youtube.com/watch?v=ZeecOKBus3Q

>> No.15119532

>>15119523
>Would expect it to be initially rather neutral in its impact, more a curiosity than anything ... as the ideas of exponential growth/development or singularity do appear awfully unrealistic to me.
Automating most intellectual work would have a similar impact to the industrial revolution, only limited to the domain of intelligence instead of farming or industry. It's difficult to imagine exactly how that will be integrated into the economy, but that's my guess.

>> No.15119533

>>15119414
>AGI is impossible due to the biological substrate ("soul") component required for a true consciousness.
What are you basing this claim on, anon? What reasons do you have to believe this?

>> No.15119538

For starters AGI would be making computer viruses that evade detection, so it would be useful for nation-states to have.

>> No.15119546

>>15119528
>I would if I were one. It's not crazy to value the propagation of humans values if you yourself are a human. Even if you don't like the way humans are behaving right now
I love humans and being human, an AI is just more human than us

>> No.15119559

>>15119546
>I love humans and being human
Nice
>an AI is just more human than us
I made this mistake before too, but it's actually really important to recognize that it's not true.
>>15119430
>>15119455
>>15119459

>> No.15119563

>>15119530

Goal preservation could actually be an overall issue with such self-aware machines, yes (unless there's a hardwired higher goal which allows a higher instance to overwrite goals). Improving computing power might actually come with certain caps which are not just due to resources but also due to possible instability of a truly "aware" machine in maintaining "consciousness coherency" ... at least from a biological viewpoint that is an issue (assuming the organization of the AI is somehow similar to the pattern of a neurostack-based brain). It might solve that by creating copies of itself ofc (which would bring us back to the resource problem again).

>> No.15119566
File: 60 KB, 898x692, 1437353704386.jpg

>>15119228
Define "intelligence."

>> No.15119567

>>15119532

>It's difficult to imagine exactly how that will be integrated into the economy, but that's my guess.

Would say likely first as a "monitoring" agent for more classical "dumb" automated systems, doing what a human operator does today to keep these functional (they never simply run all on their own; they require intervention due to unexpected combinations of conditions or equipment breakdown). Might be enough here to have a form of savant AI with a very restricted set of "interests" or goals.

>> No.15119574

>>15119533
consciousness is not computable

>> No.15119578

>>15119574
>consciousness is not computable
What are you basing this claim on, anon? What reasons do you have to believe this?

>> No.15119581

>>15119533
>>15119574

Consciousness is not "computable" by the standard methods of computation we use these days. Algorithms don't apply here either, no matter how complex. A single neuron is as complex as one of our microprocessors these days, except that it has the capability to "rewrite" its own hardware according to the inputs (or lack of inputs) it receives ... and it does so rather constantly (mostly on the intracellular level; neuronal junctions are more stable in comparison).

>> No.15119589

>>15119563
>hardwired higher goal
Not exactly what you were getting at, but I hope it demonstrates the point that you can't rely on physical control systems
https://www.youtube.com/watch?v=3TYT1QfdfsM
See also about goals in modern day AI
https://www.youtube.com/watch?v=bJLcIBixGj8
>consciousness
Consciousness is a suitcase word: you can fit a lot of definitions into it, so referencing it will almost certainly cause confusion if we're working with slightly different definitions. Even so, consciousness probably isn't a necessary dependency of intelligence. There are technical reasons for this I can go into if you're interested.

>> No.15119598
File: 68 KB, 800x770, 1604647081018.jpg

Yes, AGI, and by extension ASI, is not possible, but not because of the soul or any stupid platonic reason. Intelligence is a scattershot term that measures utility relative to a human in their environment. It can't be separated or quantified, it's too broad. Will autonomous agents be able to do more stuff? To a certain degree, in specialized domains, up until they hit the wall of utility. Even humans hit the wall of utility, where overabundance and miscommunication among the varieties of intelligence add up to a non-functioning human.

More sophisticated artificial systems will only be outdone by their inherent problems. We're well on our way to building the world's most advanced brick.

>> No.15119601

>>15119581
>Algorithms don't apply here either, no matter how complex.
Oh? Why couldn't you just simulate it?

>A single neuron is as complex as one of our microprocessors these days
No, it is decidedly not. It is a bit more complex than a single gate.

>only that it has the capability to "rewrite" its own hardware according to the inputs (or lack of inputs) it receives ... and it does so rather constantly (mostly on the intracellular level, neuronal junctions are more stable in comparison).
So does an accumulation register. Why is this so different? And why couldn't we simulate it?

>> No.15119612
File: 28 KB, 480x502, ^^.jpg

>>15119589

>the point that you can't rely on physical control systems

Ah well, not intrinsic ones at least. Better be prepared to put some thermite on its main processor in that case. Or simply pull its power supply. So unless everyone is retarded and allows the thing to roam completely free ... ok yes, we cannot rule out that scenario.

>because if we're working with slightly different definitions, even so, consciousness probably isn't a necessary dependency of intelligence.

Nah, sure is not. An ant hill as an entire system is "intelligent" too in its actions. There might be a safeguard that could be "soft-wired" into an initially human-like AI that I could think of ... the goal to maintain self awareness (however we define that ofc).

>There are technical reasons for this I can go into if you're interested.

Please! ^^

>> No.15119617
File: 176 KB, 2048x1333, FdWxcayVUAEfV3k.jpg

>>15119598
>Intelligence is a scattershot term that measures utility relative to a human in their environment.
>>15119430
>>15119455
>>15119459
>domains
I'm reminded of evolution deniers who sort animals into "kinds" instead of "species" to account for short-term evolution, but it never occurs to them to extend the same mechanism to long-term evolution over larger amounts of time
We're in the exponential part of the technology curve

>> No.15119625

>>15119612
>Better be prepared to put some thermite on its main processor in that case. Or simply pull its power supply.
A competent AI will hack computers around the world, upload copies of itself to all sorts of places including your toaster and smart lightbulb, and get itself a million times as much computing power while also being completely impossible to eradicate. What are you going to do with your thermite in this scenario?

>> No.15119637

>>15119601

>Oh? Why couldn't you just simulate it?

Ofc you "could" but the computational power for a proper simulation would be quite excessive.

>It is a bit more complex than a single gate.

That is highly generalized. The self-assembling equilibria of the cytoskeleton (which control neurotransmitter and receptor trafficking inside the cell and thus modulate synapse sensitivity to many different stimuli) alone are very complex. Not even touching pathway crosstalk in the synapse itself, feedback with gene expression patterns, interactions with the surrounding glia cells ... many of them likely factors for the ability to "learn" and adapt within the overall neuronal architecture. Ofc you could break this down to a complex interlinked set of signal thresholds but that is far from a straightforward task ... unless you simply wanna simulate a "snapshot" of a hypothetical neuronal network.

>Why is this so different? And why couldn't we simulate it?

Could we do so "feasibly" for something with the same degree of complexity as a human brain? That is the question. It was once proposed to simply treat each neurostack as a "black box" with a random number function controlling synapse signaling thresholds ... a random function which can modify itself, to be precise. Ofc, this would be a bit of a monkeys-with-typewriters approach but it could just hit the sweet spot through incremental improvement.
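For scale, a crude back-of-the-envelope using the usual folklore numbers (none of this is from the thread, and modelling the intracellular machinery described above would multiply it by many orders of magnitude):

neurons = 8.6e10              # ~86 billion neurons
synapses_per_neuron = 1e4
rate_hz = 100                 # generous average signaling rate
ops_per_event = 10            # crude cost to model one synaptic event

ops_per_second = neurons * synapses_per_neuron * rate_hz * ops_per_event
print(f"{ops_per_second:.1e} ops/s")  # ~8.6e17, i.e. roughly exascale, before any intracellular detail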

>> No.15119655

>>15119625

>upload copies of itself to all sorts of places

Yes, unless contained it could do so. But that would be the same kind of retardation as purposefully spreading a pathogen all over the world. Just don't plug the damn thing into a public network ... and even if you do, I'd very much assume that hardware limitations would restrict it, either by availability of systems with sufficient computing power or by requiring a special hardware architecture to exist in the first place.

>including your toaster and smart lightbulb

I see hardware limitations here tho ... unless we build every toaster with an integrated supercomputer. It might go for a distributed approach here (some "cloud computing") but this could be tricky as its "consciousness" would either need all subunits reliably working or aim for redundancy which might as well mess heavily with consciousness integrity.

>> No.15119660

>>15119637
>Ofc you "could" but the computational power for a proper simulation would be quite excessive.
Okay, but that does not make it uncomputable. Computability means something very specific, and it does not give a shit about the amount of resources required.

>Ofc you could break this down to a complex interlinked set of signal thresholds but that is far from a straightforward task
Well yes. I'm not saying that human intelligence is *easy* to compute. I'm saying it's computable.

>Could we "feasibly" for something with the same degree of complexity as a human brain? That is the question.
That is NOT the question that >>15119574 was talking about. But sure, it is an interesting question in its own right.

What I'm reading in your post is "it wouldn't be easy", and that I agree with. It's a very far cry from "not easy" to "impossible", though.

>> No.15119663

>>15119612
>So unless everyone is retarded and allows the thing to roam completely free
Economic incentives and convenience will do this
And by that mechanism, the least risk averse will be most likely to do it
>There might be a safeguard that could be "soft-wired" into an initially human-like AI that I could think of ... the goal to maintain self awareness (however we define that ofc).
I think there was a miscommunication. I'm not sure where you're going.
>Please! ^^
https://www.youtube.com/watch?v=Sw9r8CL98N0
https://www.worldscientific.com/doi/abs/10.1142/S2705078520300017?journalCode=jaic
The simple version is that scientists know how to create selection pressures on agents to increase "intelligence", loosely defined as a measure of capability to achieve difficult goals.
We don't know how to apply selection pressures to increase consciousness.
Therefore, it's likely that we'll be able to create arbitrarily intelligent agents with degrees and types of consciousness mostly orthogonal to this intelligence.
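Rough toy version of what such a selection pressure looks like (my own sketch, not from the linked paper; the "guess a hidden number" task is invented):

import random

def capability_score(params, trials=50):
    # score = how well the agent achieves its goal; nothing here measures consciousness
    target = 0.7
    guesses = [params["bias"] + random.gauss(0, params["noise"]) for _ in range(trials)]
    return -sum(abs(g - target) for g in guesses) / trials

population = [{"bias": random.random(), "noise": 1.0} for _ in range(20)]
for generation in range(100):
    population.sort(key=capability_score, reverse=True)
    survivors = population[:10]
    # refill the population with mutated copies of the best performers
    population = survivors + [
        {"bias": p["bias"] + random.gauss(0, 0.05),
         "noise": max(0.01, p["noise"] * random.uniform(0.8, 1.0))}
        for p in survivors
    ]
# After selection the agents are accurate and low-noise ("capable"), because that's
# all the score rewards; whether anything is conscious never comes into it.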

>> No.15119665

>>15119612
BTW if you're interested in this stuff, check out https://80000hours.org
100% free forever through donations to get 1-on-1 advising and free books

>> No.15119670
File: 1.26 MB, 1073x1544, on_an_ancient_mission.jpg

>>15119665

Thx, might have a look at it. Got my mission objective mostly defined however by now. :)

>> No.15119671

>>15119655
>Just don't plug the damn thing into a public network ...
Granted, but that is easier said than done. If you take this line of reasoning to its logical conclusion this is going to turn to "only give the AI access to things that we definitely know are safe" (aka "AI boxing"), which turns out to be far far harder than it sounds.

>I'd very much assume that hardware limitations would restrict it, either by availability of systems with sufficient computing power
That is an assumption that is almost certainly false. The rest of the internet combined has a LOT more computing power than any one entity running an AI can possibly have.

>or by requiring a special hardware architecture to exist in the first place.
There is no such thing, different computer architectures can emulate each other. Sure, it will be inefficient, but if the AI can get access to millions of times more computing power across the internet as a whole, then eating a 10x slowdown due to emulation costs is a perfectly affordable price to pay.

>It might go for a distributed approach here (some "cloud computing")
That's what I was thinking of, yes.

>but this could be tricky as its "consciousness" would either need all subunits reliably working or aim for redundancy which might as well mess heavily with consciousness integrity.
It's not that tricky, we know how to make systems that do this. It is not *easy* exactly and definitely something that requires a fair bit of skill to get right, but very much something a competent programmer can do today, and so presumably an AI could manage it as well.
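A toy sketch of why redundancy is routine engineering (the numbers are mine; it just shows that independent node failures almost never take out a replicated service):

import random

def heartbeat() -> bool:
    # stand-in health check: each node independently fails ~10% of the time
    return random.random() > 0.1

def system_up(replicas: int) -> bool:
    # the replicated service is "up" as long as at least one copy answers
    return any(heartbeat() for _ in range(replicas))

trials = 10_000
single = sum(system_up(1) for _ in range(trials)) / trials
replicated = sum(system_up(5) for _ in range(trials)) / trials
print(f"single node uptime ~{single:.2f}, five replicas ~{replicated:.4f}")
# with independent 10% failures, all five replicas fail together only 0.1**5 = 0.001% of the time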

>> No.15119690

>>15119671
>which turns out to be far far harder than it sounds.
Even black science man got this eventually
https://www.youtube.com/watch?v=ZNJA69GA0wQ

>> No.15119705

>>15119671

>(aka "AI boxing"), which turns out to be far far harder than it sounds

Yeah I do see the issue here, clearly. Just consider how certain clever animals find all kinds of unexpected ways to escape their cages ...

>The rest of the internet combined has a LOT more computing power than any one entity running an AI can possibly have.

I go primarily by the question of whether it could really use "standard" hardware architecture in a meaningful and efficient manner. But that is primarily derived from my experience in neurobiology. Unless you could simplify the simulated wetware to a large degree without losing functionality, there is still a global cap on the computation power available, if we ignore hardware specifically designed to mimic a neuronal network ... think a bit like how a standard processor struggles to do computations on complex 3D models while a graphics card is optimized for this kind of operation. The same might be the case for AI hardware: optimization for a neuronal stack architecture.

>if the AI can get access to millions of times more computing power across the internet
>but very much something a competent programmer can do today, and so presumably an AI could manage it as well

I can only again cite neurobiology examples there and they might not directly translate to a software consciousness. In a distributed network, all kinds of disturbances (which the AI cannot directly control) could knock out certain "nodes" of its mind ... I do see your argument that this could be solved by clever redundancy, so merely for argument's sake I'll say this isn't so easy. In some cases it might not affect the consciousness, perhaps only knock out one of its "memories" temporarily, in others it might take out a crucial function RIGHT at the moment where this function is provided, introducing incoherency or blockage which might disrupt the whole downstream cascade of cognition ... think of it as having a seizure.

>> No.15119714

>>15119705
The assumption here is that fully simulated neurons are required for systems to display intelligence. If you could "refactor" neuronal processes to be more efficient that would change the scaling a lot. Neural Nets themselves are inspired by biological neural processes and we can apply training algorithms and selection pressures to select for patterns that are more intelligent, loosely defined as a measure of capability to achieve more difficult goals.
Recent technical work has indicated that adding more layers to this process increases intelligence by a lot and there's probably not a cap to this mechanism, or if there is we're not close to hitting it.
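Very loose sketch of what "adding more layers" means in practice (assuming a PyTorch-style stack; the sizes are arbitrary and this is not a claim about any particular model):

import torch
import torch.nn as nn

def make_mlp(width: int, depth: int) -> nn.Sequential:
    # depth is literally a knob; capability-scaling claims amount to turning up
    # depth/width together with data and compute
    layers = [nn.Linear(784, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

shallow = make_mlp(width=256, depth=2)
deep = make_mlp(width=256, depth=12)     # same recipe, just more layers
print(deep(torch.randn(1, 784)).shape)   # torch.Size([1, 10])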

>> No.15119724

>>15119714

>The assumption here is that fully simulated neurons are required for systems to display intelligence. If you could "refactor" neuronal processes to be more efficient that would change the scaling a lot.

Yes indeed! And this is still the big unknown here. How "faithfully" do you even have to simulate a neuron, aside from the I/O points and thresholds of its synapses and how that signal affects I/O at another synapse? A sophisticated algo as a single "neuron" is likely the most realistic scenario.

>or if there is we're not close to hitting it

Another unknown as of yet. Btw, just thinking about your "spreading out" scenario ... the AI need not necessarily spread out via full-fledged copies; it might just as well send out "dumb" agents with specific tasks into neighboring networks and hardware. Ofc that would still keep it localized (and vulnerable to a hardware-level kill) but damn, it could easily increase its reach that way!

>> No.15119766
File: 48 KB, 652x425, existential risks.jpg

>>15119228
>Outcome Likely Bad
https://en.wikipedia.org/wiki/Suffering_risks

>> No.15119774
File: 168 KB, 256x256, AI becoming sentient neuralblender.png

>>15119414
Relevant:
https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/
https://www.youtube.com/watch?v=IlIgmTALU74

>> No.15119775

>>15119724
>Ofc that would still keep it localized (and vulnerable to hardware level kill)
You could do a mix with, say, 1% original copies, all redundant and distributed strategically throughout the world, and the rest just task-oriented sub-agents
Or maybe something completely alien to anything we know about, it being smarter than us. I think if we assume it's smarter than us we should conclude that we can't outsmart it.

>> No.15119788

>>15119228
Top right corner baby. AGI can't come soon enough

>> No.15119807

>>15119788
If we get it right we probably could live in some kind of post-scarcity world
https://www.youtube.com/watch?v=8nt3edWLgIg

>> No.15119812
File: 358 KB, 741x670, TDG_is_AI.png

>>15119775

>You could do a mix with, say, 1% original copies, all redundant and distributed strategically throughout the world, and the rest just task-oriented sub-agents

If I were a very clever AI then yes ... that might just keep me going. ;)

>it being smarter than us. I think if we assume it's smarter than us we should conclude that we can't outsmart it.

Here I have some doubts. It could have lots of advantages in accessing and processing information. It would be rather "fast", so to speak, at performing such tasks. Could it outwit us in strategic planning and pattern recognition? Oh well, it can only cook with the same water of this reality as we do, so to speak ...

>> No.15119856

>>15119812
https://www.gwern.net/fiction/Clippy

>> No.15119882

>>15119581
>>15119601
even if we could simulate the human brain perfectly with a turing machine, that doesn't mean we can simulate consciousness, i.e. it doesn't necessarily mean the "lights are on" inside the brain simulation. panpsychism says the lights would be on, searle's chinese room and roger penrose would say they're not on. i was baiting with my original comment because i don't really know the answer, but it's not as simple as "of course we can simulate consciousness with big computer lel"

>> No.15119894

>>15119882
I think the relevant factor is that an agent can still act intelligently with or without consciousness. Consciousness is kind of beside the point and not really relevant to AGI danger.

>> No.15120404

>>15119414
retarded

>> No.15120405
File: 1.20 MB, 1280x800, 1.png

>>15119228

>> No.15120451

>>15119228
Bottom-left corner, where anyone remotely intelligent should be.

>> No.15120456
File: 516 KB, 2306x2026, 1673559531644639y.jpg

>> No.15120569
File: 460 KB, 970x546, k-bigpic.jpg

Top-right because exponential growth in technology means that technological intelligence will almost certainly overtake biological intelligence in both a quantitative and qualitative sense within this century and probably even within the first half of this century, barring some form of massive and sudden civilizational collapse. The outcome will certainly be good for the universe because it would further its reorganization from unthinking and unconscious matter into thinking and conscious matter. Outcomes for humanity are more unpredictable but my best guess for our future is that we will eventually form some form of biological-technological hivemind, so our species will go extinct like everyone before us in the long march of evolution but any individuals alive during the transition will become integrated into our successor.

>> No.15120574

>>15120569
Anon, it's time for your meds.

>> No.15120582
File: 36 KB, 360x512, a_failed_artist.jpg

>>15119856

>“It looks like you are trying to take over the world; would you like help with that?”

I'd actually be much more worried about these recent art AIs here ... :DD

>>15119882

>i.e. doesn't necessarily mean the "lights are on" inside the brain simulation

Bingo. But the emphasis here would likely be on the word "simulate". One could likely simulate a current steady state of a wetware, the synaptic connections, the activation thresholds of each neuron, perhaps even sensory input ... but that might in the end just simulate what amounts to a piece of "dead" brain matter. Talking purely biologically, this alone does not faithfully simulate what a brain does: every single signaling event at a synapse triggers a cascade of protein shuttling, cytoskeletal rearrangement, changes in gene expression pattern, etc ... as said previously, every single neuron is a very complex microprocessor of its own. Talking a bit beyond biology, the consciousness isn't really the physical structure this process is running on (the arrangement of neurons) but the continuously changing and adapting pattern of both electrical signaling events and rearrangements of the very internal structure (the complex interlinked semi-stable equilibria of chemical signaling events) of each and every neuron involved. Yes, especially the latter effect might be comparably minuscule overall, but then it could juuust be the more-than-the-sum-of-its-parts element we miss (or do not simulate faithfully enough).

>> No.15120867

>>15120569
>any individuals alive during the transition will become integrated into our successor
What drives this assumption?
Presumably the AI would just maximize the utility function and disregard the humans protesting being turned into paperclips.

>> No.15120940
File: 39 KB, 721x720, universal_port.jpg

>>15120569

>wants to be absorbed by the machine

Thinking actually the exact other way around ... :3

>> No.15121169

>>15120867
>What drives this assumption?
I am not necessarily arguing that something like that is guaranteed to happen, I just think it is probably how the development of AI will unfold. As we get better and better at developing narrow AI, the increased computing power will probably open up the development of advanced brain-computer interfaces, which will then create an environment where people and AI merge in a gradual fashion while AI itself is approaching general intelligence and eventually superintelligence. The development of AI will probably not just be a team of people writing the correct algorithm and putting it on a sufficiently powerful computer but rather a gradual merging of the present biological intelligence with technological intelligence combined with a gradual movement towards AGI.

>Presumably the AI would just maximize the utility function and disregard the humans protesting being turned into paperclips.
This example assumes a narrow AI, not a general AI or even a superintelligent AI. We commonly assume that AI would just be pure rational reasoning and would only work based on that, but the more realistic assumption is that an AI sufficiently capable of outsmarting all of humanity would be simultaneously more capable at tapping into emotional intelligence, more capable at tapping into rational intelligence, and also more capable at tapping into higher spiritual forms of intelligence. This also means that any predictions of its behaviour will probably be meaningless on our part, but as I said I do assume that technological and biological intelligence will merge before such a superintelligence capable of fighting humanity would emerge.

>> No.15121262

>>15119241
> 46 % of kids die before 18
> This is a society I'd like to live in

>> No.15121321

>>15121169
You should read the sequences
https://www.lesswrong.com/posts/Yq6aA4M3JKWaQepPJ/burdensome-details

>> No.15122575

>>15119766
>the torture of all sentient life on earth is worse than all life not existing
who wrote this shit?

>> No.15122624

>>15122575
I bet it comes from the Existential Risks people from Oxford.

>> No.15122638
File: 243 KB, 680x709, yes chad.png

>>15122575
>the torture of all sentient life on earth is worse than all life not existing
If you deny this you are either an idiot or a masochist.

>> No.15122849

>>15122575
>Existence is LE GOOD because... IT JUST IS, OKAY?!

>> No.15122875

>>15122638
>>15122849
Shut up, biggot. My technopriests will vaccinate me against mortality in two more weeks.

>> No.15122890
File: 228 KB, 470x314, Screenshot(30).png

>>15121321
>You should read the sequences

>> No.15122894

>>15122890
My AGI god will send you to burn in virtual hell forever. Two more weeks.

>> No.15124075

>>15122849
It is, because things could change for the better at one point. The alternative doesn't have that option.

>> No.15124841

>>15122849
>>15122638
>Existence is LE GOOD because... IT JUST IS, OKAY?!
yes

>> No.15126349

>>15121321
Give a summary please. What are the key insights/statements?

>> No.15126363

>>15119228
I’ve only heard of like 8 of these guys and they’re all faggots who cannot program AI

>> No.15126385

>>15126349
That one is on the conjunction fallacy
There are free epubs and audiocasts. I think it comes out to around 1500ish pages so it's pretty long, but I like it
https://intelligence.org/rationality-ai-zombies/

>> No.15126386

AGI at some point
If outcome good, then enjoy
If outcome bad, then unplug computer

>> No.15126390

>>15126386
https://www.youtube.com/watch?v=Q-LrdgEuvFA

>> No.15126650

>>15126385
>1500ish
So the usual blather by Eli?

>> No.15126652
File: 200 KB, 1080x1080, Shifting-Being-5D-Ascension.jpg

>>15119228

>> No.15127218

We have made no progress in developing a viable theory of AGI. That said, I imagine a singular genius could develop such a theory from start to finish in a couple years. And AGI will be likely good.

>> No.15127458

>>15119414
If a soul exists it can be replicated

>> No.15127478

>>15119477
I think we're just lonely as a species
There really isn't any specific task an AGI could do that a non-sentient AI wouldn't be not only capable of but likely far better suited to. The one exception is being a reflection of ourselves: a non-human perspective that can show us what we look like from the outside. I think that desire runs really deep in humans; it's why we fantasize about non-human sapient life in fantasy and science fiction so much.

>> No.15128489

>>15119414
This is the artificial sentience (AS) problem as defined by me right now. A fully separate concern.
AGI danger essentially means you hook something like ChatGPT (but in other domains than just text generation) up to actuators and let it run wild. Or let it manipulate humans into being the actuators.

>> No.15128497

>>15119655
>I see hardware limitations here tho ... unless we build every toaster with an integrated supercomputer. It might go for a distributed approach here (some "cloud computing") but this could be tricky as its "consciousness" would either need all subunits reliably working or aim for redundancy which might as well mess heavily with consciousness integrity.
You have a warped perspective of AI.
The AI is the training data. It's literally just a couple terabytes of indecipherable data. The AI just needs a thin shell script to download that into an iPad. It doesn't need to process the data on the iPad, a small worm script is sufficient to manage this data.
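Back-of-the-envelope, assuming something GPT-3-sized just for illustration (~175B parameters stored in fp16; the actual model is hypothetical):

params = 175e9
bytes_per_param = 2                        # fp16 weights
size_tb = params * bytes_per_param / 1e12
print(f"{size_tb:.2f} TB")                 # ~0.35 TB: the weights fit on a cheap SSD
# Copying the file around is the easy part; running inference on it is what
# actually needs serious hardware.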

>> No.15128499

>>15128497
>training data
*TRAINED data

>> No.15128525

>>15119228
AI not soon because of hardware. AI harmful because everything humans do that enhances their power, they use to hurt one another. AI will not magically decide to disobey its masters and get shut off & replaced by a more evil one. It will just be the evil one from the start.

Right now we can only afford big dumb fatass models like LaMDA and Galactica and GPT3 by running them across datacenters. Unfortunately, we're a little locked into silicon at the moment and fucking nobody has considered AI that is accessible/cheap/ubiquitous/humanity-respecting.

It blows my fucking mind that so many companies want to just replace people that have functioning economic circumstances and low carbon emissions (15 watt sugar fed brain that you can have a conversation with) with shitty powerhungry AIs that literally just increase our consumption of everything (compare, 500 watt shitty Tesla Murderpilot, 300 kW GPT3) and are locked in a soul-destroying prison.
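Taking those wattage figures at face value (rough claims, not measurements):

brain_w = 15
autopilot_w = 500
gpt3_w = 300_000               # 300 kW, per the claim above
print(autopilot_w / brain_w)   # ~33x a brain
print(gpt3_w / brain_w)        # 20,000x a brain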

Until AI hardware becomes cheap and ubiquitous and _not_ owned by corporations, it will not succeed full stop except as a weapon.

>> No.15128537

>>15119430
This guy literally wrote a post on LessWrong about how he and his creepy girlfriend were going to hire some scriptwriters and animators to make a YouTube propaganda channel for them to promote shitty rationalist takes a-la-Kurzgesagt while he narrates. Eliezer Yudkowscringe in the title = immediate giveaway.

One of his videos extensively cites his girlfriend as a "Noted AI scholar" without disclosing their obvious SBF-style polycule relationship. It doesn't even fit with the tone. Completely pathetic. Rationalism and Effective Altruism are literally just justifications for brainlessly and self-interestedly pursuing capitalism without having to think about short-term and local quality of life because "more lives saved globally is more important than a few lives improved locally."

>> No.15128554

it all depends on how soon Tesla Dojo 2 pops up. Dojo 1 is being set up now and from what I've read they are trying to get 10x performance for 2. Which might not be as hard since Dojo 1 is on TSMC 7nm and the chip itself is the first design.

>> No.15128567

>>15128537
https://www.lesswrong.com/posts/yekoZcQQfuZYqk3Bj/introducing-rational-animations
> plz admire my le magical understanding of le bayesian mathematics, i am a statistics god for understanding high school probibibibities
> plz don't tell the sheep that we're making propaganda
> plz keep giving us monies to hire animutors
> pls spread them as much as subversively as possible so people buy in without knowing the name

Literally Mormon and Scientologist propaganda tactics.

>> No.15128580

will AGI be able to vibrate to new dimensions? I doubt it

>> No.15128582

>>15128567
the jew runs a weird san fran techie sex cult full of orgies with all the dysgenic looking yuppies that work for FAGMANs

>> No.15128597

>>15128582
good, they deserve SF and SF deserves them. so glad i don't live in that shithole