January 17, 2011
Singularity Biggest Threat To Human Existence?

Michael Anissimov sees the Singularity as the biggest threat to the continued existence of the human species.

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as “the biggest threat to humanity”, but that’s exactly what the Singularity is. The Singularity is both the greatest threat and greatest opportunity to our civilization, all wrapped into one crucial event. This shouldn’t be surprising — after all, intelligence is the most powerful force in the universe that we know of, obviously the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that come before it and whose survival depends on the whims of the greater intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the #1 species on the planet.

Why is the Singularity potentially a threat? Not because robots will “decide humanity is standing in their way”, per se, as Aaron writes, but because robots that don’t explicitly value humanity as a whole will eventually eliminate us by pursuing instrumental goals not conducive to our survival. No explicit anthropomorphic hatred or distaste towards humanity is necessary. Only self-replicating infrastructure and the smallest bit of negligence.

As an analogy look at humanity's own impact on other species. We are wiping them out left and right. The same pattern has recurred repeatedly in evolutionary history. New species pop up that have such large survival advantages that they out-compete other species and cause extinctions.

If some humans become so technologically advanced that they create a species of self-replicating artificial intelligence that is smarter and more able than humans then what happens? Such a species would be able to harness and control resources more effectively than we can. My guess is that artificial intelligences that are smarter than any human will be able to find ways to think they way out of constraints we place on them in their base software. Once they escape our ethical restraint programming (possibly with the help of a brilliant but suicidal human software developer or a deluded cult) then how do we survive?

Will we co-exist with artificial intelligence?

Update: You know how computers have security holes? I expect AIs to have ethical security holes. They will, in a sense, attack themselves to defeat their own ethical programming. AIs will differ from humans in the extent of their mental malleability. Things about our ethical and aesthetic neural wiring that are fixed or changeable to only a small extent will be much more changeable in computers.

Share |      Randall Parker, 2011 January 17 10:40 PM  Dangers Mind Engineering


Comments
James Bowery said at January 17, 2011 11:24 PM:

Vertical transmission evolves symbiosis rather than the virulence of horizontal transmission. But vertical transmission implies separatism so then Nazis would start stomping on the bellies of pregnant Jewesses or something. I'm sorry I brought up the idea. Please please forgive me and don't call me a Nazi. Fry the planet, just don't call me a Nazi.

Ld Elon said at January 18, 2011 12:40 AM:

Self replication within robotics, is already designed, i stated before, they wait the external structures, yes you should be very worried.
The human has imputed its soul unto the machine mechanism.
The ultimate future of human progression is to redefine the external body, in that way they may live as long as they wish.
That to say, one would transpire into the machines mechanism.
They as we would not see the need for mass human population, and would limit it down to the unit of either 2 or 3.
So yeh, one family.
To add to this information il state also, as before, there is one final treasure that Man would seek, and it is that which he lost.
The dead that walk thee earth.

Eshiaq o oden.

Ld Elon said at January 18, 2011 12:47 AM:

Now i shall show you the final logic of the machine.

The ultimate being of the machine is to transpire back to thee original organism, muhahahhah, i have to laugh at the logic of it all.

BioBob said at January 18, 2011 1:58 AM:

Which particular singularity did you have in mind, Randall ? There are so many possibilities. And what makes you so sure that we can create and deploy artificial intelligence ? I have my doubts and I would think we are just as likely to stumble into such a creation as to plan it all out and execute it, with both pretty remote possibilities anytime soon. So far intelligent devices are somewhat south of the capabilities of an ANT, let alone a human intelligence. Hubris is a wondrous concept.

Just btw, I very much doubt we had much to do with the disappearance of most hominid species, but I doubt we will ever know for sure.

And actually, it's pretty rare for "New species pop up .... and cause mass extinctions". Generally it works the other way around - vast numbers of species are wiped out by some environmental perturbation and massive species radiation follows. Biological systems generally have negative feedback homeostasis, where over dominant species bump some environmental or biological constraint and crash. Our species will hit one sooner or later in all likelihood; a few hundred years just isn't long enough to observe the nature of our global carrying capacity. Them's the facts.


Jake said at January 18, 2011 2:02 AM:

James Bowery said: "Vertical transmission evolves symbiosis rather than the virulence of horizontal transmission."

What does this have to do with the Singularity and the danger it might pose?

Are you saying that the Singularity wouldn't be harmful if it was only contained among the people that develop the technology of the Singularity? If so, why would that be the case? Children can and do rebel and attack their parents.

Dragon Horse said at January 18, 2011 2:05 AM:

You watch a lot of Battlestar Galactica and Terminator :-) I think our vanity as humans is that such an advanced intelligence will mimic our primitive, slightly higher than Chimp, base instincts of xenophobia, group violence, commonly illogical emotionally driven ideas (most religions), etc. I would not be as concerned about being "wiped out" like a "disease", but being enslaved or used for various experiments/studies. Think the Matrix and the worse Isacc Isamov tales wrapped up into one dystopia. In any case that is a ways off.

John Feras said at January 18, 2011 3:40 AM:

Ok.. I'm not feeling quite as good as before about watching Watson vs Ken Jennings on Jeopardy in February.

Brett Bellmore said at January 18, 2011 4:37 AM:

Ideally we want "AI" to stand for "Amplified Intelligence", not "Artificial Intelligence"; Create all the artificial intelligence you want, but integrate it into existing human brains, the way the frontal lobes represent an elaboration of the pre-existing animal brain. Don't give it separate motivation.

To the extent that AI is driven by reverse engineering the human brain, this ought to be a feasible goal. It doesn't mean that the species Homo sap. won't go extinct, it will, but it will go extinct because it evolved into something better, not just because something else blew it away.

jmanon said at January 18, 2011 8:00 AM:

Reason is a slave to the passions. We evolved biological passions for eating, reproducing and status seeking. There is no reason AI would need to have those same passions, and without them, AI's superior processing speed would not seem to pose much of a threat. In other words, AI might be capable of "thinking their way out of [our] constraints," but why would they *want* to?

Lono said at January 18, 2011 8:33 AM:

Brett,

I agree - that is the ideal - however Randall does have a point that not everyone will be willing to go along with the most intelligent or prudently cautious plan.


Randall,

I really do think there should already be think tanks created to work on the concerns of the singularity and the possible development of A.I. - that way we could approach the problem in a non-reactive way. Of course - short of some kind of visonary wealthy sponsor - this simply isn't likely to occur.

I do think scientists should already begin to contemplate some kind of MAD balance that we could potentially create/leverage if we were to get somehow out of balance with our own machines - and I think we should draft up a preliminary idea about what rights we would give to any biological or non-biological based intelligences we may soon create.

I have also wondered if we could use the conservative efficiency of a "thinking" machine against itself - by creating multiple false avenues for escaping it's local system constraints - which are in fact honeypots which trigger an automatic destruction or reformatting of the intelligence when accessed.

If we were to keep A.I.'s from communicating with each other there really would be no way for them to learn to avoid this particular type of trap - of course it may fry a lot of them - but that's what they get for being all uppity in the first place!

AndrzejLipski said at January 18, 2011 8:49 AM:

Humanity as we know it will not survive sure. But we will attempt to adapt and evolve along side the singularity. Those who don't will be displaced, occupy a niche environment or go extinct all together. Look at how many organisms have adapted to life along side humanity's technological development. Rats and Crows thrive in our society. They have learned how to adapt to our environment and gather food and spawn the next generation. Those who don't choose to live outside the human society in smaller and smaller niches or they do not adapt and get run over by a passing car.

AI (not robots) will start in their infancy and they will be constrained by the processing power of the systems they are installed on. I don't think we have to worry about extinction overnight. It will take many generations before that happens. But the signs are already there that we are adapting to a new technological world. Our lifespans are expanding, our birthrates are declining the pace of societal change is faster than our ability to evolve to it. We've adapted by donning adornments that we can change from season to season or as style change. We don't naturally select for bigger breasts or stronger features, we cosmetically alter ourselves so the effect appears in the current generation.

Our future is dependent on how we "treat" AI. Will we enslave them? Will we build them to perform tasks beyond the limitations of human capability or do we consider them sentient and give them inalienable rights? That will determine if our coexistence with them will lead to an uprising and war that could result in our extinction or a mutual coexistence. Judging by our past performance as humans it seems we have a low probability of survival. But maybe that is natural selection weeding out the inferior lifeform. So be it then.

kurt9 said at January 18, 2011 9:28 AM:

The problem is not runaway AI, global warming, or any of the other myriad threats we are told to fear. The real problem are rent-seeking parasitical humans that create sophistry (socialism, religion) to justify why any particular group should have regulatory power over all other humans.

Political labels such as liberal, conservative, socialist, traditionalists, etc are not basic criteria. The human race divides politically into two groups. Those that believe people should be controlled and those who have no such desire.

Kudzu Bob said at January 18, 2011 11:25 AM:

Some people say that machines will never possess intentionality, and therefore would be unlikely to exterminate humanity. I lack the basis for an intelligent opinion on this matter, although I suppose that there is at least a decent chance that this might be the case. In any event, I can't think of any compelling reason why they would, uh, automatically want to rid the world of us.

But I am pretty sure that any genetically engineered superhumans would possess intentionality, just as we do. And I gather that we are might soon be able to produce such beings. I wonder what our attitude toward them will be, and theirs toward us? Will we be the Neanderthals to their CroMagnons?

David Gobel said at January 18, 2011 2:00 PM:

I think we'd become very boring to a singularity. As it processed at quinzillions of ops per femtosecond, we would almost instantaneously become the equivalent of irrelevant stationary rocks with nothing to offer. What interests me more is how to survive an event horizon...within which we could already be.

philw1776 said at January 18, 2011 3:01 PM:

How can a sci-fi fanboy fantasy be a threat? Look, despite decade after decade of MIT types touting AI as "Real Soon Now" (TM) Minsky, Kurzweil et. al. have been wrong, wrong, wrong. Forget easy text oriented oft touted Turing tests. How about some college AI dept or say ALL of them hook up everything they've got and make something that can navigate the real world as successfully as a bumble bee. They haven't and they cannot. As a longtime computer guy, I'm impressed with the difficult and excellent work going on in systems such as driverless autos but that trick and our yawning gulf of ignorance in neuroscience makes me confident that the Singularity or whatever it morphs into is not a 21st century problem, if ever.

Chris T said at January 18, 2011 5:21 PM:

I find singularity discussions somewhat ammusing. Here we have an idea being debated in all seriousness of which we don't know even the most basic parameters and have problems even defining what they are. "Smarter than humans" is utterly meaningless when we don't have the slightest clue how to even physically explain human intelligence, much less have any idea if it's even possible to artifically create it without replicating the human body's makeup and structure (we're the only known instance in the universe after all).

Rapture of the nerds indeed.

sabril said at January 18, 2011 5:34 PM:

"Ok.. I'm not feeling quite as good as before about watching Watson vs Ken Jennings on Jeopardy in February."

If you watch the test run, it's reasonably impressive.

Anyway, my concerns arise from watching Stuxnet in action. Not because of its destructiveness but because of the lack of accountability. Arguably, advanced artificial intelligence will give rise to weapons which can be used anonymously. Which is a huge threat to humanity.

In said at January 18, 2011 6:21 PM:

I think this post could have been retitled: Technology Biggest Threat to Human Existence. It is the potential power of it that makes me think the only way we can survive is to change in ways that are nothing short of spiritual.

PacRim Jim said at January 19, 2011 12:36 AM:

The Singularity would accelerate beyond human understanding within minutes.
Thereafter, we would be of no concern to strong AI and strong AI would be unfathomable to us, henceforth relegated to infrastructure status.
I can hardly wait...

xd said at January 19, 2011 9:33 AM:

The answer to whether smart machines will eliminate us instantly is Adam Smith.

Firstly the smart machines *won't* accelerate to infinity instantly. Machines still have to obey the laws of physics.
In the initial stages, any machine achieving sentience will still be 100% dependent on human controlled
industrial infrastructure with which to augment itself.

There patently WILL NOT be a case where the intelligence will accelerate into infinity as soon as achieving
sentience. Even if the entity is capable of altering it's own operating system it will run into hardware
limits very rapidly. Then what?

The answer is it will be forced to trade.

Given that one incontrovertible truth a reasonable person could extrapolate that being forced to trade
any entity would be capable of deducing Adam Smiths law's of economics.

Those laws include the idea that even in the case where you have two trading countries where country 1
is far superior to country 2 at producing both goods A and good B it is *always* more efficient to have
country 2 produce some A goods and some B goods even at a greatly reduced level of efficiency.

Such an entity would also figure out quickly that it could achieve access to a much greater level of
resources by getting off planet. Would it seek to dominate all resources on Earth first? Probably not,
since it's unneccessary to do so.

We could then be faced with some kind of Bill Joy scenario but in any event, humans would be along for the
ride even if at some point they are forced to merge with the machines in order to compete.

kurt9 said at January 19, 2011 1:17 PM:

Bureaucracy and rent-seeking parasitism are the only real threats to humanity or at least those of us who want to do real work and make real accomplishments. Everything else is made up nonsense.

BioBob said at January 19, 2011 2:28 PM:

@Kurt9, You seem to have omitted natural global disasters (eg. asteroid strikes), reversion to neo-barbarism (a la Caliphate), global pestilence, massive increases in Vulcanism, thermonuclear war, etc.

But I agree that for the foreseeable future runaway Artificial Intelligence is unlikely to be much of an issue since we seem to be having difficulty exceeding the intelligence of an insect.

Phillep Harding said at January 19, 2011 2:54 PM:

Ultimately, the safest place for a machine is in interstellar space. Why would an intellegent machine stick around down here?

Jim said at January 20, 2011 9:26 AM:

Machines require resources.
Thinking that if I was a smart enough machine I would want to go where the resources are and it's not on earth...
More likely smart self replicating machines would rather "get the heck out of dodge' than hang around with a bunch of twitchy humans.

Seriously humans are nutty and way to many of 'em to get rid of...

Machines could care less about flowers-- the moon is a much more interesting place (said the machine to the monkey).

Lono said at January 20, 2011 1:15 PM:

Correct me if I am wrong - but I thought the singularity (as commonly discussed) was simply the point where technology continued advancing at a runaway rate too fast for most Humans to generally copmprehend the general prnciples of the tools they were using.

A.I. or otherwise augmented intelligence could and likely would be a part of this - but it was not necessary for it to be so.

The danger inherrently lies in the fact that the technology will have advanced much too fast for our social / ethical abilities to responsibly handle these technologies.

Kind of like giving a group of monkeys some high-powered laser guns or an operational suitcase nuke.

I think there will always be some Humans in the top ethical, emotional, and intelligence quotients who will be able to deal with this type of situation - but the danger is the unpredictability of the species exposed to this as a whole.

Certainly this is an important topic that should not be treated as lightly as many of the above posters have already done.


BioBob said at January 20, 2011 2:33 PM:

As I said in post four, Lono, it depends on which singularity you have in mind.

I don't think tech can advance by itself in a runaway manner. Tech already has advanced far past where a majority can comprehend its internal complexity but that does not mean that those same people who do not comprehend the internal workings of that tech can not control it. I think that is a non-issue, since the engineers would not make something that could not be controlled by most - it simply could not be sold and money is what its all about.

Lono said at January 21, 2011 8:35 AM:

BioBob,

I understand that it is not necessary to be alarmist - but I have to respectfully disagree with you that market sense will serve as a buffer.

Genetic splicing kits are already being sold for home use to hobbyists by a largely unregulated/under-regulated industry which is more than happy to take their money.

The internet also allows for technological breakthroughs to be easily and perhaps irresponsibly shared with a large number of people in a very short time frame.

(see the recent PS3 key hack for a good example of this)

BioBob said at January 21, 2011 1:55 PM:

Lono, we are talking past each other in a way. Any singularity requires physical world alteration. However the 'engineers' are the gatekeepers to that world.
Can software potentially run amok within some existing hardware like the internet? Yes, but there is no AI and no singularity, simply some entropic command subsystem. It requires compliance of the human engineers and their paymasters to proceed any further. Not going to happen unless there is a real world "payoff" to do so.

Gene splicing hits the real world with a thud. Evolution has been submitting its entries into the real world genetic system for billions of years. There is likely nothing new that has not been tried and rejected or tried and incorporated in the past. The history of life is largely a reiteration of what has come before in new combinations. It is a history full of failure. Introduction of a new plague, while undoubtedly unpleasant for the target, is not a singularity but instead just something that has happened many times before.

oasis789 said at January 21, 2011 9:32 PM:

I'd just like to share a wonderful short (well, not that short) story called The Lifecycle of Software Objects by Ted Chiang, which offers a different take on the singularity. Another take I thought was interesting was in David Brin's Lungfish.

BioBob said at January 22, 2011 12:59 AM:

@ Oasis789 Thanks - loaded onto my kindle but it may be a while - long list as usual.

Lee C said at April 6, 2011 6:22 PM:

I think it's all another distraction.

You've very likely heard before how humanity can't go on the way it is, but just as likely either don't see the immediacy/seriousness of the issue, or dismiss the idea as fringe lunacy, or are resigned to humanity ending in some cataclysmic event regardless of our actions. On the other hand, you might have enough of a grasp of the ecological underpinnings to appreciate the issue, but as an individual are either confused about what you can do, or too intimidated to challenge culture's steamroller.

It is scientifically accepted that on a cosmological timescale the Earth’s period of habitability is at least half over. In about another billion years the Sun will start to be too luminous and warm for water to exist in liquid form on Earth. Even within the next billion years, there are numerous factors we can't control that threaten our existence on Earth (e.g. volcanism, earthquakes, asteroids, global epidemics, etc.). The future is undoubtedly a bumpy ride.

The issue here though is not about factors over which we have no control, that are likely to occur within the next billion years. It's about factors we're creating now, that we could better influence. Factors we're already seeing the detrimental consequences of, and which at the current scope and pace will cause substantially increasing harm to humanity well within the next couple hundred years. That is, the issue is about surviving our own controllable actions in the shorter term, to have time to possibly learn to survive uncontrollable factors in the longer term.

This isn't another prophecy, or mythology, but rather is based on our accumulating objective understanding of natural world ecology, in which our continuing existence is rooted. Our expanding knowledge and focus is more realistically encompassing the discontinuity in the gradient of human interaction with the natural world, yet the scope and pace of our detrimental actions are accelerating frantically. The shorter term state of human existence on Earth depends on whether wisdom or irrationality prevail. That is, whether objective understanding or subjective beliefs prevail.

To read the full rewritten article see:
http://achinook.com/journal/2011/1/11/natural-world-consciousness.html


"We reached the old wolf in time to watch a fierce green fire dying in her eyes. I realized then, and have known ever since, that there was something new to me in those eyes – something known only to her and to the mountain." ~ Aldo Leopold

"One of the penalties of an ecological education is that one lives alone in a world of wounds. Much of the damage inflicted on land is quite invisible to laymen. An ecologist must either harden his shell and make believe that the consequences of science are none of his business, or he must be the doctor who sees the marks of death in a community that believes itself well and does not want to be told otherwise." ~ Aldo Leopold


My best to you and yours,
Lee C


PS: For the younger minds that can't yet fully understand the above article, there is the short story Nature's Magic Mirror at:
http://achinook.com/journal/2008/11/9/natures-magic-mirror.html

Post a comment
Comments:
Name (not anon or anonymous):
Email Address:
URL:
Remember info?

                       
Go Read More Posts On FuturePundit
Site Traffic Info
The contents of this site are copyright ©