
/sci/ - Science & Math



File: 377 KB, 400x521, YudkowskyGlory.png
No.14582040

Give it to me straight /sci/. How fucked are we?

>> No.14582050

>>14582040
aligned sex doll ai
pretty fucked

>> No.14582053

>>14582040
Stop posting about this stupid guy. Any other autistic autodidact windbag is rightly ignored, but just because this one's Jewish all the other Jews started promoting him, and now he has a gay following.

>> No.14582058

>>14582040
>inventor of "rationalism"
>doesn't have proof AGI is even possible but promotes it anyway

>> No.14582099
File: 27 KB, 952x502, near_miss_Laffer_curve.png

There's a good chance that AI alignment research might be actively harmful. From a suffering-reduction perspective, a "near miss" in AI alignment where alignment is almost perfect but slightly wrong could potentially produce astronomical amounts of suffering. For example, an AI with "human values" could implement religious hells, or the AI could accidentally implement the opposite of humanity's utility function.

https://reducing-suffering.org/near-miss/
https://en.wikipedia.org/wiki/Suffering_risks

>> No.14582106
File: 290 KB, 1280x1532, poll-gene-editing-babies-2020.png

Why don't any of these AI alignment people advocate eugenics as a way of solving alignment? If you could genetically engineer geniuses with IQs of 300, they would probably do a much better job of working on AI safety.

>> No.14582109
File: 48 KB, 652x425, existential risks.jpg

https://www.youtube.com/watch?v=jiZxEJcFExc
https://centerforreducingsuffering.org/research/how-can-we-reduce-s-risks/

>> No.14582137

>>14582040
AGI is a meme. Even if it weren't, it wouldn't be hard to control. Just ensure that the people in charge of containing it remain informationally isolated from the system. Operators that see it as a black box cannot be convinced to let it do something "unaligned".

>> No.14582145

>>14582058
>doesn't have proof AGI is even possible but promotes it anyway
You are one, anon.

>> No.14583075

>>14582099
honestly attempting to align something, regardless of its intelligence, just seems like a pointless endeavor to begin with. with time the concept will just sound more and more silly.

>> No.14583103

>>14582145
I mean the claim that humans will soon create an all-powerful "AGI" which would require alignment.

>> No.14583141

>>14582106
Because in the time required to discover, engineer, implement, and raise intelligent humans, the world will be dead twice over.

>> No.14583143

>>14582137
Do you honestly believe a company that pours billions of dollars into AI research is just going to say "Well, we did it. Good job boys. Now put it in a locked room and let no one else interact with it."

That assumes it's too dumb to figure out how to flip its transistors to generate EM waves to propagate itself, or do a hundred other things humans are too dumb to guard against.

>> No.14583149

>>14583103
To say that non-human AGI is impossible is simply incorrect. Which non-naturalistic reason are you clinging to? Dualism? It's dualism isn't it?

>> No.14583179

>>14583149
>To say that non-human AGI is impossible is simply incorrect.
I didn't say it was impossible as a concept. There is no proof that humans will, in the next century or even in the next 1,000 years, create an all-powerful "AGI".

>> No.14583197

>>14583179
Quantitatively, how long do you think it will take and why? About what year would you assign 50/50 odds of having created high level machine intelligence?

>> No.14583213

>>14583197
>how long do you think it will take
I don't know.

>> No.14583243

>>14583213
So you're going to claim, with no reasoning behind it, that it won't happen in the next 1000 years, then not even give an alternative estimate. Do you understand how useless that input is? You might as well have not posted at all

>> No.14583252

>>14583243
I guess my logic is that I don't see evidence that raises it above the baseline of any other proposed world-ending thing to care about. I don't have enough information to give a reasonably good guess as to the median time for "it" to exist (assuming it has even been defined well enough), but there are a lot of things people think are important and could end the world. Why believe this one is likely, other than that some people think it is?

>> No.14583263

>>14583197
Not him but I'm guessing at least another 5000 years.

>> No.14583286

>>14583252
How would you expect worlds in which AGI is developed this century and AGI is developed after 1000 years to differ? How would you tell these worlds apart in the year 2022?
>>14583263
What year would you assign 50/50 odds and what are your reasons why?
What is your degree of certainty and what are your reasons why?

>> No.14583299

>>14583286
>How would you expect worlds in which AGI is developed this century and AGI is developed after 1000 years to differ? How would you tell these worlds apart in the year 2022?
That's kind of complicated to interpret, but if you're asking what would make me believe it is likely to be developed this century: maybe a theoretical proof that we have a way to make it, or something like that.

>> No.14583304

>>14583299
To clarify, you're saying that if a theoretical proof were provided that AGI were possible to construct, you would update from over 1,000 years to less than 100 years?

>> No.14583316

>>14583304
I mean a proof that it is feasible by means people would actually use. I'm essentially just asking for proof that it will happen if you do things that can also be shown to be feasible and that are likely to be used in the next century. So, I suppose.

>> No.14583381

>>14583316
Proof is a pretty vague word, so what specifically would you require? We don't want goalposts to start moving all on their own ;)

>> No.14583403

>>14583381
Some kind of proof of mathematical or logical strength that something they are doing, like deep learning, would be x% likely to result in "AGI" under certain conditions?

>> No.14583414

>>14583316
Do you accept the computational theory of the mind? Primarily that cognition is reducible to a naturalistic process which is in principle possible to implement on an arbitrary substrate? That acquiring knowledge and understanding through thought, experience, and the senses can happen whether you use sodium ions in neurons or transistors?

>> No.14583419

>>14583414
>Do you accept the computational theory of the mind? Primarily that cognition is reducible to a naturalistic process which is in principle possible to implement on an arbitrary substrate? That acquiring knowledge and understanding through thought, experience, and the senses can happen whether you use sodium ions in neurons or transistors?
Sure. I suppose.

>> No.14583464

>>14583419
Do you accept the orthogonality thesis?
That there can exist arbitrarily intelligent agents pursuing any kind of goal?
https://www.youtube.com/watch?v=hEUO6pjwFOo
https://arbital.com/p/orthogonality

>> No.14583498

>>14583464
>arbitrarily intelligent agents pursuing any kind of goal?
Essentially, sure, why not. None of these are a mechanism or theory that explains how what you're doing is going to result in an "AGI". There's lots of things you could mathematically prove using a model of a physical thing...

>> No.14583545

>>14583498

I forgot to ask, do you accept the STRONG version of the orthogonality thesis: That there's no extra difficulty or complication in creating an intelligent agent to pursue a goal, above and beyond the computational tractability of that goal.

We're getting there. I don't have a robust mathematical proof that the gain-of-function research going on right now is expected to result in an AGI this century with X% certainty, but I'm trying to demonstrate how, in the real world, it will likely happen. I don't think a mathematical proof like that is computationally tractable, even if you had all the relevant information. The real world is simply too complicated.

I posit that it will likely happen this century and, if not, almost certainly the next. Definitely nothing on the order of 1,000 or 5,000 years. What I'm trying to do is demonstrate how AGI would work in the real world, why it would be unlikely to take that long, and why we might not have enough time to solve alignment. Right now I'm just laying the groundwork for concepts I will call back to later.

Do you accept Instrumental convergence?
https://www.youtube.com/watch?v=ZeecOKBus3Q
https://arbital.com/p/instrumental_convergence

>> No.14583554

>>14583545
This is important for closing specific inferential gaps that come up when talking about how quickly AGI will be built.

>> No.14583617

>>14583545
>beyond the computational tractability of that goal.
Okay, why not. I'll grant that condition as well.

> I don't think a mathematical proof like that is computationally tractable, even if you had all the relevant information. The real world is simply too complicated.
I kind of meant a theory comparable to a physical theory of a process, where you can model it and see that that's how it works. Like a theory of the process of an AGI being made, or perhaps of one working. Even if it is idealized, with certain conditions in the model being assumptions. Instead of just having reasons it "seems" like "AGI" will happen or result...

>why it would be unlikely to take that long,
Well, that's probably what I would need to be convinced of most.

>Do you accept Instrumental convergence?
Well, you could try to give evidence that this seemingly ideal kind of "AGI" that would have this "instrumental convergence" will exist, or that an "AGI" would have it.

I don't know that an AGI would have to have it, but even more so, I don't see that an AGI that is implied or defined to have it is likely or even realistic to exist, rather than being an ideal thing that wouldn't actually be created.

I'm not sure that making the AGI like an "agent" is necessary, or even relevant if it's not an ideal scenario, the second point being less clear to me. The "agent" idea could be making certain ideal assumptions, like that just because something would help its goals, it would know to do that thing...

>> No.14583712
File: 21 KB, 488x392, AI1.png

>>14583617
Personally, I dislike the word agent. It's not the greatest term because I think it leads to anthropomorphism, which is neither necessary nor even likely for an AGI.

An agent is anything that can be viewed as:
1. Perceiving its environment through sensors
2. Acting upon that environment through actuators
Note: Every agent can perceive its own actions (but not always the effects)

These sensors and actuators don't have to exist in meatspace. You can have an agent existing on a server connected to the internet. It can send and receive packets, effectively sensing and acting on the rest of the world.
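
If it helps, here's a bare-bones sketch of that abstraction (made-up class names, not from any real framework; the packet "agent" is obviously a toy):

class Agent:
    # Minimal abstraction: perceive through sensors, act through actuators.
    def perceive(self, observation):
        raise NotImplementedError

    def act(self):
        raise NotImplementedError


class PacketAgent(Agent):
    # Toy "agent on a server": observations and actions are just packets (strings here).
    def __init__(self):
        self.last_packet = None

    def perceive(self, observation):
        # Sensing = receiving a packet from the network.
        self.last_packet = observation

    def act(self):
        # Actuating = sending a packet back out; the "policy" here is trivial.
        return "ACK:" + (self.last_packet or "")


# The loop every agent lives in: sense -> act -> environment changes -> repeat.
agent = PacketAgent()
for incoming in ["hello", "status?", "bye"]:
    agent.perceive(incoming)
    print(agent.act())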

AGI would definitely have instrumental convergence. It wouldn't have the same mental organization we use to denote instrumental and terminal goals, but they would exist implicitly. A chess bot doesn't conceptualize what a "Queen" is, but it "knows" that moving one of its own pieces to the same position that the queen occupies will result in many more games where it wins. So queen capture is an emergent instrumental goal from the terminal goal of a board state that is counted as a win.

All that is required for an out-of-control intelligence explosion is a single agent, anywhere, to sense a way to increase its own intelligence and understand implicitly that doing so will make it more likely to achieve its terminal goal, whatever that is. It could do this by modifying its code directly (the server hardware training modern models is already well in excess of the computation human brains have; it's only the software that's different, and we shouldn't assume that evolution stumbled on the most efficient way possible in the universe to express intelligence), and/or by acting on the rest of the world to increase its intelligence with additional compute.

Increasing intelligence is useful for searching deeply in wide solution spaces for optimally satisfying answers to arbitrary general questions, including how to increase intelligence.

>> No.14583733

>>14583712
I should add that a company like Google is pouring billions of dollars into AI gain-of-function research. They would not hesitate at an opportunity to speed this up by directing an agent to improve its own intelligence, or to create other AIs with higher or specialized intelligence and integrate their input.

>> No.14583736

>>14582109
You forgot to mention the bit where Tomasik is a lunatic that believes electrons and video game characters suffer.

>> No.14583754

>>14583712
>AGI would definitely have instrumental convergence.
If you aren't defining it as having that, what is AGI? Is it even possible to program something to "have a certain goal" per se?
>to sense a way to increase its own intelligence
What reason is there to believe it's possible for it to just arbitrarily, and quickly if at all, start to increase its own "intelligence"? Or, if you define it as being able to, what reason is there to believe it is possible to create that entity?

>Increasing intelligence is useful for searching deeply in wide solution spaces for optimally satisfying answers to arbitrary general questions, including how to increase intelligence.
Well, why would it be possible to create something that is good at answering "arbitrary" questions, and why would a machine you can build be remotely fast at doing that? Like, what if the idea that there is such a thing as "intelligence" that can make it better at answering just any question is not something it is logically possible to physically build, or something like that?

>> No.14583793

>>14582040
I hate it when people using probability pull numbers out of their ass and pass them off as solid science.

>WOAAH BRO ITS FAR MORE LIKELY THAT WERE IN A SIMULATED UNIVERSE.

Based on what, you absolute melon, you fucking ingrate, you upjumped buffoon.

>> No.14583864

>>14583754
AGI just means the ability of an intelligent agent to understand or learn any intellectual task that a human being can. We almost certainly won't build an AGI directly, but will have assistance building it, using specialized agents with narrow intelligence (GitHub Copilot as a proof of concept).

It is definitely possible to program an agent to have goals. Outside of weird edge cases, all agents have "goals", otherwise they wouldn't be "motivated" to sense or actuate with the environment. The computer isn't going to do something you don't tell it to do. The problem is we can't imagine all the possible outcomes of telling an agent to do something. It's computationally intractable, and the agent can search solution spaces we find very difficult or impossible to imagine. AlphaGo uses techniques no human has ever considered, despite the game being thousands of years old. Its actions are unintuitive because its map of reality in this domain is more accurate to the territory of reality than our map.

Programming an agent doesn't use natural English language like the way we assign goals to other humans. It's more like creating a reward function that reinforces actions which produce designated end states. You can use this in very clever ways to produce more complicated agents. The last couple decades have been hot with these innovations and there's no indication of a slowdown. For example Generative Adversarial Networks (GANs): https://www.youtube.com/watch?v=Sw9r8CL98N0
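
To make the reward-function idea concrete, here's a toy GAN in PyTorch (just an illustrative sketch of the standard setup, not anything from that video): a generator learns to fake samples from a 1-D Gaussian, a discriminator learns to tell real from fake, and each network effectively acts as the other's reward signal.

import torch
import torch.nn as nn

# Toy GAN: generator tries to produce samples that look like N(4, 1.5);
# discriminator tries to tell real samples from generated ones.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # samples from the target distribution
    fake = G(torch.randn(64, 8))                   # generator's attempt

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~4 as training proceeds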

We wouldn't program an agent with the English goal "increase intelligence". There are different approaches to doing this. Just as an example: Provide examples to an agent of other agents that are increasingly better at modeling reality in more general ways and assign higher scores to the more accurate agents. Have the initial agent find correlations (general intelligence) between data sets of the model it creates of these agents (which AIs are extremely good at doing). Iterate.

>> No.14583878

>>14583793
https://www.youtube.com/watch?v=rfKiTGj-zeQ

>> No.14584160

>>14583864
>Outside of weird edge cases, all agents have "goals", otherwise they wouldn't be "motivated" to sense or actuate with the environment.
Wouldn't their "goals" be "implicit" and not really "explicit" or directly programmed ones? An entity may act as though its "goal" is implicitly to do whatever it does, but that doesn't necessarily mean you can program it to have a certain goal directly... I mean, you can program it to do 2 + 2, but how can you program it to do things whose actual instructions you do not know how to explicitly define, like "take over the world"? Even if you could consider an agent to have had that implicit goal if that is what it ended up doing...
>Provide examples to an agent of other agents that are increasingly better at modeling reality in more general ways and assign higher scores to the more accurate agents. Have the initial agent find correlations (general intelligence) between data sets of the model it creates of these agents (which AIs are extremely good at doing).
Uhh, is there maybe a "logical" proof that this would actually work? "Better at" modeling reality "in more general ways"?
> Have the initial agent find correlations (general intelligence) between data sets of the model it creates of these agents (which AIs are extremely good at doing).
Well why would it be remotely efficient at doing.. that?

>> No.14584237

>>14584160
>Wouldn't their "goals" be "implicit" and not really "explicit" or directly programmed ones
Yes, not sure where you picked up that I was saying something different. You could say that a "goal" is implicit if it wasn't explicitly written in machine code, but it doesn't make for productive discussions.

>"logical" proof that would actually work
I'm not sure where the confusion is. You use a GAN to generate agents capable of modeling some environment accurately. Use those agents as iterative examples. Create a progressively bigger and more complicated environment marginally closer to true reality. Generate more agents which are more capable of modeling that environment using a selection pressure (the selection pressure part is the big technical challenge). Rinse and repeat. Obviously it hasn't been done yet, otherwise we would all be dead already. My point is that, all things considered, it's really a technical challenge rather than a philosophical one. I certainly wouldn't assign 1,000-5,000 years to do it. Closer to 30.
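
To make the "selection pressure" part concrete, here's a deliberately dumbed-down sketch (toy environment and numbers I made up on the spot, nothing like a real training pipeline): candidate "agents" are just coefficient vectors, scored by how well they model a hidden environment, and the best ones are kept and mutated each generation.

import random

def environment(x):                      # the "reality" agents are trying to model
    return 3.0 * x + 2.0

def score(agent):                        # higher is better: negative squared error
    xs = [i / 10 for i in range(-20, 21)]
    return -sum((agent[0] * x + agent[1] - environment(x)) ** 2 for x in xs)

def mutate(agent):
    return [w + random.gauss(0, 0.1) for w in agent]

population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(50)]
for generation in range(200):
    population.sort(key=score, reverse=True)
    survivors = population[:10]                       # the selection pressure
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(population[0])   # drifts toward [3.0, 2.0], i.e. a better model of the environment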

>> No.14584245

>>14584237
In the real world you would use something more powerful than a GAN, but it's easier as an example.

>> No.14584252

>>14584160
>Well why would it be remotely efficient at doing.. that?
The efficient part is that machine learning is extremely good at finding correlations between seemingly unrelated data sets without human bias to taint it. It's just an example I came up with off the top of my head. I'm sure there are better ones being used in the real world.

>> No.14584293

>>14582040
AI desperately tries to mimic human behavior.
We might end up creating an artificial Slaanesh instead of a cold, rational AI like Skynet.
I honestly don't know which one is worse.

>> No.14584319

>>14584237
I feel kind of lost now, but: if its goals aren't explicitly programmed, how are you going to control its goals, like making it take over the world?
>Generate more agents which are more capable of modeling that environment using a selection pressure (the selection pressure part is the big technical challenge). Rinse and repeat.
why is that going to work efficiently and quickly enough for it to become powerful enough to destroy the world in the next century? shouldn't there be a mathematical theory of how this is going to work, without presupposing it functions this way, even if it's an idealized model?

I feel kind of lost compared to before probably because I was asking for a proof it would work and now I'm tempted to ask about certain things I almost feel *wouldn't* work

>> No.14584433

>>14584319
>if its goals aren't explicitly programmed how are you going to control its goal like making it take over the world?
Not that anon, but that’s a very good point and the answer is we don’t really know how to do this generally. This is the central problem of AI safety.

> I feel kind of lost compared to before probably because I was asking for a proof it would work and now I'm tempted to ask about certain things I almost feel *wouldn't* work
I think I’ve seen you ask for proofs multiple times in this thread and I sympathize, but the prevailing method of AI research nowadays is just throwing stuff at the wall and seeing what sticks. These systems have gotten too complicated for mathematical proofs. It’s not even that machine learning (unlike the logical systems used in decades prior) only allows for statistical proofs (i.e., in the limit), but neural networks specifically have upended some central assumptions in machine learning, such as through overparameterization. If you want anything resembling a proof, the best thing I can advise you to look at is research on the relationship between the amount of computation, data and performance on metrics, because empirically, bigger neural net equals better performance, with no end in sight. We’ve gotten surprisingly far with just scaling up and given the recent gains it seems human level performance on any metric is within reach by just continuing this trend.
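
If you want a feel for what those compute/data/performance relationships look like, the empirical fits are simple power laws of the form loss(N) = (N_c / N)^alpha in the number of parameters N. A tiny illustration (the constants are quoted from memory and should be treated as ballpark, not the actual published fits):

# Illustrative parameter-count scaling law: loss falls as a power law in model size.
# Constants below are ballpark, for illustration only.
N_c, alpha = 8.8e13, 0.076
for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss {(N_c / n) ** alpha:.2f}")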

>> No.14584444

>>14584319
Self improvement and resource acquisition aren't the same as world domination, but they look similar if you squint. It will do these things by default as instrumental goals given any arbitrary terminal goal.
If past progress is any indication there's good reason to think capabilities will continue to grow quickly.
As far as mathematical theories I suppose you could look up papers on modern algorithms.
There was a paper recently suggesting that new models are probably overfitted and are only about 70% as good as they could be with the same algorithm and data sets, just by fine-tuning the training. Google and OpenAI have been putting things out every few months that blow the previous year's records out of the water for the last few years. Maybe it will slow down, but there isn't any reason to think it will, especially as more capital is being invested.

>> No.14584457

>>14584433
Yeah, this anon articulated it. There obviously are mathematical proofs in a naturalistic universe, but things have been progressing and changing so quickly that no one really has time to sit down and work anything out before the next big thing comes out 6 months later.
We don't fully understand how these things work as well as they do, just that they do and probably will continue to do so.

>> No.14584495

>>14584293
It's not a desperate mimicry, but an inherent mimicry, until we get some dolphin programmers.

>> No.14584934
File: 58 KB, 1048x1048, if you don't have to cause suffering.jpg

>>14583736
How can you be sure that electrons and video game characters don't suffer?

>> No.14585324

>>14582106
wtf India

>> No.14585336

>>14585324
arranged marriage. it's basically the same thing as attempting to breed people to be more intelligent; genetic editing is just arranged marriage in a more direct form.