
/sci/ - Science & Math



File: 52 KB, 553x553, ray_kurzweil.jpg
No.10672162

Are you ready for the singularity, /sci/?

>> No.10672207

The Singularity won't happen, sorry. The first widely used implanted chips will be the mark of the beast.

>> No.10672224
File: 214 KB, 1200x1200, uncle ted.jpg

The techies' belief-system can best be explained as a religious phenomenon, to which we may give the name "Technianity." It's true that Technianity at this point is not strictly speaking a religion, because it has not yet developed anything resembling a uniform body of doctrine; the techies' beliefs are widely varied. In this respect Technianity probably resembles the inceptive stages of many other religions. Nevertheless, Technianity already has the earmarks of an apocalyptic and millenarian cult: In most versions it anticipates a cataclysmic event, the Singularity, which is the point at which technological progress is supposed to become so rapid as to resemble an explosion. This is analogous to the Judgment Day of Christian mythology or the Revolution of Marxist mythology. The cataclysmic event is supposed to be followed by the arrival of techno-utopia (analogous to the Kingdom of God or the Workers' Paradise). Technianity has a favored minority, the Elect, consisting of the techies (equivalent to the True Believers of Christianity or the Proletariat of the Marxists). The Elect of Technianity, like that of Christianity, is destined to Eternal Life, though this element is missing from Marxism.

Historically, millenarian cults have tended to emerge at "times of great social change or crisis." This suggests that the techies' beliefs reflect not a genuine confidence in technology, but rather their own anxieties about the future of the technological society, anxieties from which they try to escape by creating a quasi-religious myth.

>> No.10672247

>>10672162
never going to happen, have fun with your fantasies

>> No.10672261
File: 105 KB, 551x1024, itaLd5I.jpg

>>10672247
Why not?

>> No.10672286
File: 128 KB, 555x414, ted glow eyes.jpg

>>10672261
But let's be optimistic and assume that the world has come under the domination of a single, unified system, which may consist of a single global self-prop system victorious over all its rivals, or may be a composite of several global self-prop systems that have bound themselves together through an agreement that eliminates all destructive competition among them. The resulting "world peace" will be unstable for three separate reasons. First, the world-system will still be highly complex and tightly coupled. Students of these matters recommend designing into industrial systems such safety features as "decoupling," that is, the introduction of "barriers" that prevent malfunctions in one part of a system from spreading to other parts. Such measures may be feasible, at least in theory, in any relatively limited subsystem of the world-system, such as a chemical factory, a nuclear power-plant, or a banking system, though Perrow is not optimistic that even these limited systems will ever be consistently redesigned throughout our society to minimize the risk of breakdowns within the individual systems. In regard to the world-system as a whole, we noted above that it grows ever more complex and more tightly coupled. To reverse this process and "decouple" the world-system would require the design, implementation, and enforcement of an elaborate plan that would regulate in detail the political and economic development of the entire world. For reasons explained at length in Chapter One of this book, no such plan will ever be carried out successfully.
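
The "decoupling" and "barriers" idea can be pictured with a toy model. This is only a rough Python sketch; the chain of components and the barrier positions are invented for illustration. A malfunction spreads from each component to the next unless a barrier sits between them, so barriers keep the failure local instead of letting it sweep the whole system.

# Toy model of tight coupling vs. decoupling barriers.
# A malfunction in component 0 propagates down a chain of components;
# a "barrier" at position i stops the propagation at that point.

def cascade(n_components, barriers):
    failed = {0}  # component 0 malfunctions
    for i in range(1, n_components):
        if (i - 1) in failed and i not in barriers:
            failed.add(i)
    return len(failed)

print("tightly coupled:", cascade(20, barriers=set()), "of 20 components fail")
print("with barriers  :", cascade(20, barriers={5, 10, 15}), "of 20 components fail")

In the tightly coupled run the single malfunction takes down all 20 components; with barriers in place it stays confined to the first 5.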

>> No.10672293

>>10672261
Second, prior to the arrival of "world peace" and for the sake of their own survival and propagation, the self-prop subsystems of a given global self-prop system (their supersystem) will have put aside, or at least moderated, their mutual conflicts in order to present a united front against any immediate external threats or challenges to the supersystem (which are also threats or challenges to themselves). In fact, the supersystem would never have been successful enough to become a global self-prop system if competition among its most powerful self-prop subsystems had not been moderated.

But once a global self-prop system has eliminated its competitors, or has entered into an agreement that frees it from dangerous competition from other global self-prop systems, there will no longer be any immediate external threat to induce unity or a moderation of conflict among the self-prop subsystems of the global self-prop system. In view of Proposition 2, which tells us that self-prop systems will compete with little regard for long-term consequences, unrestrained and therefore destructive competition will break out among the most powerful self-prop subsystems of the global self-prop system in question.

Benjamin Franklin pointed out that "the great affairs of the world, the wars, revolutions, etc. are carried on and effected by parties." Each of the "parties," according to Franklin, is pursuing its own collective advantage, but "as soon as a party has gained its general point" (and therefore, presumably, no longer faces immediate conflict with an external adversary) "each member becomes intent upon his particular interest, which, thwarting others, breaks that party into divisions and occasions... confusion."

>> No.10672307

>>10672261
History does generally confirm that when large human groups are not held together by any immediate external challenge, they tend strongly to break up into factions that compete against one another with little regard for long-term consequences. What we are arguing here is that this does not apply only to human groups, but expresses a tendency of self-propagating systems in general as they develop under the influence of natural selection. Thus, the tendency is independent of any flaws of character peculiar to human beings, and the tendency will persist even if humans are "cured" of their purported defects or (as many technophiles envision) are replaced by intelligent machines.

Third, let's nevertheless assume that the most powerful self-prop subsystems of the global self-prop systems will not begin to compete destructively when the external challenges to their supersystems have been removed. There yet remains another reason why the "world peace" that we've postulated will be unstable.

By Proposition 1, within the "peaceful" world-system new self-prop systems will arise that, under the influence of natural selection, will evolve increasingly subtle and sophisticated ways of evading recognition (or, once they are recognized, evading suppression) by the dominant global self-prop systems. By the same process that led to the evolution of global self-prop systems in the first place, new self-prop systems of greater and greater power will develop until some are powerful enough to challenge the existing global self-prop systems, whereupon destructive competition on a global scale will resume.

>> No.10672328

>>10672261
But just in case someone declines to assume that our society includes any important chaotic components, let's suppose for the sake of argument that the development of society could in principle be predicted through the solution of some stupendous system of simultaneous equations and that the necessary numerical data at the required level of precision could actually be collected. No one will claim that the computing power required to solve such a system of equations is currently available. But let's assume that the unimaginably vast computing power predicted by Ray Kurzweil will become a reality for some future society, and let's suppose that such a quantity of computing power would be capable of handling the enormous complexity of the present society and predicting its development over some substantial interval of time. It does not follow that a future society of that kind would have sufficient computing power to predict its own development, for such a society necessarily would be incomparably more complex than the present one: The complexity of a society will grow right along with its computing power, because the society's computational devices are part of the society.
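
As for the "chaotic components" the post brackets off, a toy Python sketch shows why they matter; the logistic map is just a stand-in for any chaotic piece of a system, and the numbers are arbitrary. Two runs whose starting measurements differ by one part in a billion disagree completely within a few dozen steps, so extra computing power does little to extend the forecast horizon.

# Sensitive dependence on initial conditions in the logistic map.
# A measurement error of one part in a billion swamps the forecast
# after a few dozen iterations, no matter how fast the computer is.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_true, x_model = 0.400000000, 0.400000001  # initial states differ by 1e-9
for step in range(1, 61):
    x_true, x_model = logistic(x_true), logistic(x_model)
    if step % 15 == 0:
        print(f"step {step:2d}: true={x_true:.6f}  model={x_model:.6f}  "
              f"error={abs(x_true - x_model):.2e}")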

>> No.10672339
File: 487 KB, 1280x935, moore's law.png

>>10672328
There are in fact certain paradoxes involved in the notion of a system that predicts its own behavior. These are reminiscent of Russell's Paradox in set theory and of the paradoxes that arise when one allows a statement to talk about itself (e.g., consider the statement, "This statement is false"). When a system makes a prediction about its own behavior, that prediction may itself change the behavior of the system, and the change in the behavior of the system may invalidate the prediction. Of course, not every statement that talks about itself is paradoxical. For example, the statement, "This statement is in the English language" makes perfectly good sense. Similarly, many predictions that a system may make about itself will not be self-invalidating; they may even cause the system to behave in such a way as to fulfill the prediction. But it is too much to hope for that a society's predictions about itself will never be (unexpectedly) self-invalidating.
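
The self-invalidating case is easy to make concrete. Here is a minimal Python sketch; the "contrarian" and "conformist" agents are invented stand-ins, not anything from the post. Whatever forecast is announced to the contrarian, it does the opposite, so every published prediction about it fails; for the conformist every forecast comes true, matching the point that self-reference sometimes invalidates a prediction and sometimes fulfills it.

# A forecast announced to the system it describes can change that system's
# behavior. The contrarian invalidates every published forecast; the
# conformist fulfills every published forecast.

def contrarian(announced):
    # Reads the published forecast and does the opposite.
    return "stay" if announced == "go" else "go"

def conformist(announced):
    # Reads the published forecast and goes along with it.
    return announced

for agent in (contrarian, conformist):
    for forecast in ("go", "stay"):
        actual = agent(forecast)
        verdict = "fulfilled" if actual == forecast else "invalidated"
        print(f"{agent.__name__}: forecast {forecast!r} -> agent does {actual!r} ({verdict})")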

>> No.10672513

>>10672339
I appreciate the write-up, or copy-paste or whatever, but it seems pretty narrow-minded. A singularity implies the computer system continues to upgrade or build better and better versions of itself indefinitely, eventually creating a sort of god system. With that sort of tech, and with it the knowledge of basically everything, a doomsday scenario, chaotic descent, or even competition would be impossible. At those levels of power it may be able to perfectly predict the future simply based on physics.

>> No.10673657

>>10672339
wut?

>> No.10675390
File: 156 KB, 960x960, 1497063488341.jpg

>>10672162
>>10672207
>>10672224
>>10672247
>>10672261
>>10672286
>>10672293
>>10672307
>>10672328
>>10672339
>>10672513
>>10673657
Absolutely cucked. The whole argument behind "muh AI gonna take over da world and kill evryone" is ridiculous. Do you retards actually believe we are going to just create some one-off entity that is separate from ourselves? Or do you think maybe the smarter thing to do is to use AI and computing technologies to augment ourselves and merge with the technology to become what is essentially a hivemind or one single being? That single hivemind, in the name of self-preservation, keeps all of us alive and empowers each and every constituent.

Get real. Brain-computer interfaces and AI are concurrent technologies.