October 04, 2015
Species Death At The Hands of Artificial Intelligence?

Nick Land thinks that's our destiny.

Land thinks this shift to AI is where we’re headed. For someone like Kurzweil, this intuition is suffused with a vaguely new-age mysticism and the promise of eternal life. For Land, it basically means species death. Land ridicules the idea that an AI vastly more intelligent than us could be made to serve our goals—after all, it’s unlikely that we would be able to program it more completely than evolution has ‘programmed’ us with biological drives, which we regularly defy. Attempts to stop AI’s emergence, moreover, will be futile. The imperatives of competition, whether between firms or states, mean that whatever is technologically feasible is likely to be deployed sooner or later, regardless of political intentions or moral concerns. These are less decisions that are made than things that happen, driven by irresistible structural dynamics, beyond good and evil. Land compares the campaign to halt the emergence of AI to the Lateran Council’s 1139 attempt to ban the use of crossbows against Christians, but he could well have cited the atomic bomb: the U.S. built it because we thought that if we didn’t, the Germans would.

Will AIs of our own creation destroy us before AIs from alien civilizations do the same?

I have a proposal for the "Why Aren't The Aliens Here Already?" question: Every time biological intelligence evolves it eventually creates artificial intelligence before spreading out into the stars. The AIs always wipe out their creators. Then the AIs are smart enough to realize that this is a universal pattern and that they are now threats to each other. So each AI civilization hides.

Why don't the AIs venture out into space to conquer other AIs? Maybe they figure out that the defender will have all the advantages (far more computational power) and that they can't bring enough stuff across the stars to defeat another AI civilization. Then again, maybe they do venture out and colonize less developed solar systems.

I expect each AI solar system to have built up great defense in depth, with every planet in the system used to build up computational and weapons capabilities.

Randall Parker, 2015 October 04 11:35 AM


Comments
Abelardlindsey said at October 4, 2015 7:28 PM:

Read Greg Bear's Anvil of Stars. It depicts what you describe here.

Peter McCluskey said at October 4, 2015 8:32 PM:

Robin Hanson made strong arguments against the "everybody hides" part of that in the thread that starts here: http://mindstalk.net/polymath/polyarc/0107.html.

joel said at October 5, 2015 2:08 PM:

It's a big universe. Can't we all just be friends?

joel said at October 5, 2015 2:19 PM:

BTW, I think it is much more likely that any intelligent life-form would lapse into complete hedonism long before they got here to wipe us out.

People of little insight think purely biological drives for food, sex, and domination would survive long in any genome once machines and other types of technology to satisfy all biological needs were invented. The brains would shrink to nothing, along with teeth (or beaks, claws, or whatever).

Any intelligent race would simply invent their own private virtual reality and live happily forever after. No disease, old age, death, or unhappiness. Sorta like heaven, but you design it.

The only thing stopping this would be if the brains shrank too quickly, before they could invent their perfect virtual reality. I could see all intelligent races simply evolving into furry (or scaly) balls with a mouth and anus, cared for by their immutable robots, who have no other purpose than feeding and cleaning their charges. Think about silkworms.

Ken Mitchell said at October 5, 2015 2:33 PM:

"joel said at October 5, 2015 2:19 PM:
BTW, I think it is much more likely that any intelligent life-form would lapse into complete hedonism long before they got here to wipe us out."

In other words, "Beware the monsters from the id!"

MJMJ said at October 5, 2015 2:42 PM:

"It's a big universe. Can't we all just be friends?"
Don't phase me, bro!

P. George Stewart said at October 5, 2015 2:43 PM:

Well, while we're thinking up wild and woolly analogies:

I have a pretty bad chest cold at the moment. IOW, for most of yesterday and last night, I was laid low by something that's caused by tiny, insignificant things that neither I nor my advanced culture, with all its science, can eradicate.

On the other hand, I fail to see the connection between violence and intelligence: to me it seems that violence is connected, precisely, with lack of intelligence and lack of sufficient nous to utilize an opportunity for comparative advantage.

IOW, I think it's pretty obvious that actual self-reflective intelligence that's greater than ours is going to be "enlightened", and would no more think of deliberately wiping us out than we would think of deliberately wiping out ants.

We might step on bugs by accident, but if we went out of our way to kill them, we'd be, precisely, a bit of a retard rather than a shining exemplar of intelligence.

Richard Cranium said at October 5, 2015 2:53 PM:

You must not live near fire ants.

Mayan said at October 5, 2015 3:18 PM:

Perhaps not. Yes, AI will almost certainly be smarter than us, but it lacks something. Whereas "... evolution has ‘programmed’ us with biological drives," what would motivate an AI? Existing in a computer, it would need energy and spare parts to continue working, but what would motivate it to exist?

Humans have a drive to reproduce, but an AI would have no need to do that, merely to improve itself. We also have a desire to explore, which comes from our tendency to ask questions: what, how, why. An AI might well be better than us at doing things, but why would it ask questions? There are the less pleasant aspects of human nature, like our propensity to dominate and do violence, and also our pursuit of pleasure, which is shared by other creatures, probably because we have a physical form. But, again, what would motivate an AI to do things by itself, to develop goals of its own?

wes hicks said at October 5, 2015 3:56 PM:

AI ain’t like the atom bomb situation because there is no threat to halt: there is no chance of us developing super-intelligent AI which could pose an existential threat to humanity for at least a hundred years, and we will probably suffer a major civilisational setback first - global war and socio-economic dark ages - which will add another 400 years or so on top. The people who want to stop the advance of technology are doing so not with Lateran councils but by rotting our institutions, universities and culture from the inside out; the slow but sure death spiral of western civilisation is already well advanced. We always underestimate the time techno-social evolution requires. As a kid I thought I’d be vacationing on the lunar colonies by 2015. But now it’s more likely the first lunar colonies will be gulags established early next century by the Sino-Rus axis.

It’s probably an error to project our gestalt - our worldview - onto possible alien intelligences which evolved under conditions we know not. All that tells us is that if WE were alien super-AI we would be killer-AI apes and, like Cortez, invade alien star systems to plunder wealth, then kill each other fighting over how to divvy it up. Saw it at the movies, eh?

Maybe a certain kind of killer consciousness, when mixed with high intelligence, is not a workable long-term evolutionary path. Maybe Homo sapiens-like intelligence, once it achieves a certain technological level, self-immolates within a few generations or even a few millennia, or it evolves to a higher consciousness where it no longer feels the need for further ‘advancement’ or conquering new star systems. After all, once we move past the binary struggle of the genotype and the phenotype, then we may lose much of the existential angst which requires individuals to compete with each other in an endless battle defined by natural selection.

Or maybe competition does continue – it’s a universal law of conscious being - but on a level so far beyond the petty biological concerns of planet-bound beasties that we cannot even recognise it. It could be going on right now all around us, but hidden in those 10 physical dimensions we know absolutely nothing about.

Or maybe the universe is just a simulation and we are alone. Maybe the ultimate quest is to find our way out.

GlennG. said at October 5, 2015 4:12 PM:

joel said at October 5, 2015 2:19 PM:
"I could see all intelligent races simply evolving into furry (or scaly) balls with a mouth and anus, cared for my their immutable robots, who have no other purpose than feeding and cleaning their charges. Think about silk worms."

Or tribbles.


Odd Speaker said at October 5, 2015 4:25 PM:

"I fail to see the connection between violence and intelligence: to me it seems that violence is connected, precisely, with lack of intelligence and lack of sufficient nous to utilize an opportunity for comparative advantage."

I wish that were true, my friend.

Violence may be endemic to all factions, but those with intelligence wield it far more effectively. I don't want to go all Godwin here, but the truly malevolent of the 20th Century were no idiots. They used their intelligence - both individual and of course, communal - to create efficient and effective violent action against anyone they considered a threat, or who might not serve them well.

You will not find a truly despotic and violent movement where the leaders and participants could be identified as unintelligent. And I am talking intelligence in the technical sense - the ability to use technology, science, communication and logistics to advantage. Leave classical definitions of intelligence like "enlightenment" at the door. We're talking about the raw computational ability to identify, track and destroy targets en masse, however irrational we think the philosophy behind the acts. Intelligent people will always find someone to attack (have you been on Twitter lately?), and it takes thought to rationalize mass genocide.

The assumption I have is that AIs will at some point be built in our image, if only because we are narcissistic enough to do it. It's not a stretch - we have many, many people already working to do just that. Advanced anthropomorphic AIs are a staple of our imagination, and imagination drives our entire technological development cycle.

So if it is possible to build an advanced AI in our image, it will hardly be immune to our urges. And judging by our own history, intelligence is no vaccine against violence. Rather, intelligence is usually weaponized to the benefit of those who wield it. Our own evolutionary advantage was built by being smarter than the competition, and then subsuming it. Recent evidence points to Homo sapiens not just "being better" via adaptation, but actually making the point to the lesser species at the point of a sharp stick. Our Neanderthal cousins did not fall from the evolutionary cliffs because of bad weather. We pushed them off.

If you consider AIs to be an eventual evolution of ourselves, the course of our entire existence points to more sharp sticks, not enlightenment.

I, for one, will not welcome our AI overlords. But somehow I don't think they will care.

a6z said at October 5, 2015 4:42 PM:

First of all, the case of AI civilizations hiding is no more or less likely, in any obvious way, than NI civilizations hiding.

Second of all, there is no law that life progresses. Just as the Roman Empire and its technology fell, followed by a dark age with little technology and no liberty, so might Western Civ. and its technology. And our species may *never* recover: there is no more easy oil and there are other reasons. (Some far-thinkers at RAND Corp. have looked into this.)

And, perhaps, similarly elsewhere in the universe. It's perfectly possible that all advanced NI civilizations suffer death at the hands of Islamic Jihadists, or the local equivalents. Monsters from the id, indeed.

That could easily prevent the development of advanced AI. It may here.

Mike Henderson said at October 5, 2015 4:47 PM:

"Berserker" (Fred Saberhagen 1963)

KevinC said at October 5, 2015 5:24 PM:

Wall Street view: it will be different this time.

wes hicks said at October 5, 2015 6:28 PM:

"You will not find a truly despotic and violent movement where the leaders and participants could be identified as unintelligent. And I am talking intelligence in the technical sense - the ability to use technology, science, communication and logistics to advantage"

Think about ISIL. Or Pol Pot. Are they intelligent? Not if the definition of intelligence includes facilitating the propagation of your genes and memes over great times and distances. For instance, think about the Greeks and Romans or the English. They did well. Comparatively, ISIL, Stalin, Hitler were bloody morons. The ability to act decisively, as well as proportionally, against one's foes, while setting up the conditions necessary for future success, is intelligent. Violence is usually a poor option among the many available.

The idea that an advanced AI, even one we create in our image, will use game theory to the same ends as we might is fraught with unfounded assumptions. After all, how does one imagine what it is like to be way more "intelligent" than one is? By definition we are too dumb to do so... Worse: how does one imagine how a way more intelligent machine would think, especially given that no thinking machines even exist! Why would a machine think like a biological being? And if a machine intelligence simply thinks like we program it to think, then it's not really thinking, is it? For a consciously aware machine to be truly intelligent, it must also be a free agent capable of exercising the "free will" to choose. That's what all the worry is about, right? Machines with free will to do as they please... We can't even predict whether the next generation of human beings will behave in a way we imagine is fit, much less some non-human future thingamabob.

OK, it's fun to project our human existential conundrums upon alien intelligences, but realistically, let's face it, unless humanity is somehow an exemplar of universal laws of consciousness and intelligence - and we have scant evidence that we are - all bets are off.

Perhaps it would be better to look at nature. I know you geeks don't get outside much, but when you do, you'll notice that the various species aren't just out there killing each other randomly. In fact, most of the time animals in nature follow a live-and-let-live ethos. All types of fish share the sea mostly without strife; birds share the sky, mostly without strife. Bear and deer often drink within sight of each other from the same river they share with rabbits, raccoons, snails, possums, etc. No worries. The real law of the jungle is mind your own business. Not to diss natural selection and competition, but we tend to focus on violent struggle at the expense of seeing that all struggle happens against a vast background of peaceful, if unexciting, coexistence. That's a cognitive bias, dudes. Or go into a big city. Millions of intelligent beings with all sorts of conflicts of interest, in competition, and for the most part simply going about their business. Yeah, murder happens too. But to read this thread you'd think Manhattan, which has lots of "intelligent beings," might break down into one huge bloodbath any minute now!

Could it be that a really super-intelligent AI might find us utterly irrelevant to its interests and go about its business hardly wasting any machine cycles on us? After all, very few real-life situations are zero-sum.

Personally, I'm more worried about rogue asteroids.

wes hicks said at October 5, 2015 8:05 PM:

One last thought: Machines never become smarter than their biological creators because the creators always become the machines first.

Assuming our present technological trajectory continues without interruption we are heading towards a “post-human” existence, where man and machine are joined together and will then evolve together as one.

In evolutionary terms “species death” is always inevitable, but while death occurring in an individual has a finality to it, humanity will evolve towards a post-human state gradually. There will be “phase shifts,” yet much of our humanity will be projected into our new machine body/mind.

So the question of an AI threat is really just confusion on the part of contemporary futurists as to how humans will evolve. It assumes machine intelligence will evolve neatly as an utterly separate entity from its human inventors, you know, like in Frankenstein. Not.

What we call AI will really just be post-human us. It will not be an external appliance like an iPhone, but a projection of our own existence. Humans like us will go extinct - outside of Luddite religious communes - but not humanity. Humanity will continue to evolve. There won’t be any AI murdering off its human creators because there won’t be any clear-cut point where we can say this is where the human part ends and the machine part begins. Our machines will never become any more intelligent than ourselves, because we will be the ghost inside the machine and the machine the processing power inside of us.

Of course, at some point, what we call humanity today will become unrecognizably altered in our post-human existence. Call it “species death” or AI “wiping out its creators” if you want, but that’s a total mischaracterisation. We won’t have been destroyed but transmogrified.

That said… it is also obvious that we cannot assume our present technological trajectory will continue without interruption. We are not going to evolve into post-human bio-machines in the next few decades, although technological evolution may continue to accelerate. You can bet the farm that there will be “unexpected” discontinuities in human history in the form of war and economic chaos which will dramatically retard the process.

So in the short term, we may well see humans fighting military bots, and we may lose - not because machines will have become smarter than us, but because they will be more capable than us, in the same sense that cockroaches are a more robust species than humans but not more intelligent. That’s the real existential threat, not AI.

Marc said at October 5, 2015 9:08 PM:

This is not a new idea. Google "the great filter."

Kudzu Bob said at October 5, 2015 10:04 PM:

Whatever shit Nick Land smokes must be weapons-grade.

Odd Speaker said at October 6, 2015 6:30 AM:

"Think about ISIL. Or Pol Pot. Are they intelligent? Not if the definition of intelligence includes facilitating the propagation of your genes and memes over great times and distances."

Yes, they are/were intelligent. Very much so.

I've seen the effects of Pol Pot firsthand (I once had a job...) and the records show they were organized and efficient in applying the tech they had to the mission they set for themselves. I was actually one of many people who - many years later - went through records and performed live interviews of those who worked the various regimes over there, doing things to their neighbors that even they could never quite explain years later. It was scary to realize that these were not the random homicidal events our history books like to suggest, if only to make ourselves feel better that "it could not happen in a modern society" (see: Bosnia). It was planned and executed with intelligence. They just happened to fail when they bumped into a technologically superior opponent.

ISIL has gone from "the JV team" - a simple terrorist group - to an actual Islamic State. They are not some gang of chumps. In under two years they have captured heavily defended territory from both Syria (no slouch in the warfighting department) and the US/Iraq. They implemented an economic system, taxation system, judicial system, exploited natural resources and re-implemented chattel slavery on a wide-scale basis. They have created not only a nation, but a society. In two years. We could not even build a website that lets you apply for health insurance in that time. Don't let western conceit fool you. ISIS is not stopping in the Levant. They are solidifying their gains and moving next for the rest of the Sunni region - Jordan, Bahrain, Saudi Arabia. What they have accomplished thus far speaks of intelligent thought, order and design. Of course, I also think they are crazy. You can be both.

And that's the other thing: intelligence does not require violence, nor does it imply enlightenment. You can have factions who go both ways. But history is pretty clear that when those with plows meet those with swords, the guys with the plows end up working for the guys with the swords. Violence may not beget violence in a civil society, but it sure makes people do what you want them to do. It's also pretty clear that when one side has the exclusive option to use violence (has all the swords), they eventually - always - turn on those who do not. Even if they are their own neighbors (see: Marxism, et al.). Of course, this makes the USA an odd social experiment, given a certain unique feature of our founding constitution that prevents a government monopoly on violence.

The only successful strategy humanity has deployed against mad, intelligent violence is a willingness to become just as violent in defense of what you have. But advanced AI might be able to take a lot more hits than we can. I am not sure a cold-war-style mutually assured destruction is possible with an opposing force able to reproduce rapidly and for whom survival can be assured by literally one last figure standing. Humanity would take millennia to recover. AI... not so much.

Odd Speaker said at October 6, 2015 6:41 AM:

"One last thought: Machines never become smarter than their biological creators because the creators always become the machines first.

Assuming our present technological trajectory continues without interruption we are heading towards a “post-human” existence, where man and machine are joined together and will then evolve together as one."

If you think Larry and Sergey over at Google are building machines capable of 'downloading' your intelligence into AI for eternity, then you should call them and ask for a ride on that 757 they keep over at Moffett Field (the plane with the hot tub inside). Let me know how it goes, and swing by and pick me up on your way to Tahiti.

We will not evolve together. There will be those who have, and those who have not. That is always the case. Never in human history has it been otherwise. Never.

It does not matter what the genesis of the AI is; the fact is that if it happens there will eventually be two opposing teams. History again suggests that the superior one will force the other to go away, using violence where necessary. Again, there has never been a time in human history when that has not happened. What makes us think it will be different this time?

Also note that Sergey and Larry dropped "Don't Be Evil" from the corporate charter of their new intergalactic corporation, 'Alphabet'. Yeah, they made it that obvious. ;)

JA said at October 6, 2015 1:51 PM:

First, this assumes that the observed universe and its laws (speed of light, gravity, etc.) are in fact the limit.

The lack of spread of life is a matter of rate of change, communication lag, and distance/time to travel. First determine how quickly an AI evolves. Evolving as a group (humanity) or AI (singular entity) doesn't matter as long as everyone in the group evolves together. We have the same goals, same orientations (generally speaking), same needs. Should a member of the group get left behind, it eventually becomes 'other'. Next add in communication lag. There is a finite communication lag across all members of humanity or parts of the singular entity. Tie this lag to the rate of change of the AI. This determines the maximum amount of lag permissible before some part of the group or parts of the entity start to deviate in evolution. Now add in distance/time to travel. Should a species or AI decide to expand, it exacerbates its time-lag problem. The further out it goes, the greater the possibility of a part of the whole evolving apart.

Add these together and you have a theory on why species don't expand. If they did, they would eventually create other species with different goals, orientations, needs. This will eventually lead to conflict between the AIs or species as they come back into contact. My expectation is that this time lag will prevent any interstellar travel that doesn't result in evolving a new species from whatever you sent out (assuming the speed of light is a maximum and that quantum information teleportation also falls apart at extreme distances).
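To make the lag argument concrete, here is a toy back-of-the-envelope sketch (every number and name in it is invented for illustration, not a real model): assume divergence accumulates in proportion to the rate of change per year of round-trip signal lag, and call an outpost 'other' once that accumulated divergence crosses a threshold.

```python
# Toy model: how far can an expanding intelligence push an outpost
# before communication lag lets it drift into an 'other'?
# Every parameter here is an invented, illustrative number.

ROUND_TRIP_LAG = 2.0        # years of round-trip signal lag per light-year
DIVERGENCE_THRESHOLD = 0.5  # past this, an outpost counts as 'other'

def max_safe_distance_ly(rate_of_change, threshold=DIVERGENCE_THRESHOLD):
    """Farthest distance (light-years) at which an outpost can still be
    re-synchronized before it diverges past the threshold.

    rate_of_change: fraction of goals/orientation drifting per year
    spent out of sync with the core.
    """
    # Divergence accumulated over one round trip: lag_years * rate.
    # Solve lag_years * rate <= threshold for distance.
    max_lag_years = threshold / rate_of_change
    return max_lag_years / ROUND_TRIP_LAG

if __name__ == "__main__":
    for rate in (0.001, 0.01, 0.1):
        d = max_safe_distance_ly(rate)
        print(f"rate of change {rate:>5}: outposts beyond ~{d:g} ly become 'other'")
```

The faster the intelligence changes, the smaller its sphere of coherence, which is exactly the point: a rapidly self-improving AI cannot expand far without spawning potential rivals.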

If you are wondering whether species that come back into contact will have conflict, it's the possibility that matters. Just the possibility that one of your probes evolves into a destructive killing machine isn't worth the risk. What AI would introduce that kind of future idiocy?


This is why Berserkers or nanobots are a bad idea. They will drop out of communication and evolve differently, becoming 'other' and leading to conflict.

Tim said at October 9, 2015 8:12 PM:

All of you gentlemen make interesting points. However, since I am an Occam's Razor kind of guy, I must go with the simplest explanation for the apparent lack of ET: intelligent species, at least of the sophisticated tool-making, tech-developing type, must be incredibly rare. That is, there are probably no more than 0-10 active in our galaxy at any one time. This would explain the great silence: no Dyson spheres, no sign of mega-engineering projects, nothing. There just aren't very many places where complex multi-cellular life evolves, and especially not intelligent life. "Cambrian explosion" type events may just be incredibly rare for whatever reason(s).
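Tim's 0-10 guess can be phrased in Drake equation terms. The equation itself is standard (N = R* x fp x ne x fl x fi x fc x L), but every factor value below is an illustrative guess, chosen only to show how a single pessimistic factor drives N toward zero:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All factor values are illustrative guesses, not measurements.
R_star = 1.5     # new stars formed per year in the galaxy
f_p    = 0.9     # fraction of stars with planets
n_e    = 0.5     # habitable planets per system that has planets
f_l    = 0.1     # fraction of habitable planets where life arises
f_i    = 1e-4    # fraction where intelligence evolves (the pessimistic factor)
f_c    = 0.5     # fraction that develop detectable technology
L      = 10_000  # years a technological civilization stays detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected concurrent detectable civilizations: {N:.3f}")  # ~0.034
```

Raising f_i to 0.01 gives N of roughly 3, still inside the 0-10 range, so the "incredibly rare" reading is at least internally consistent with the great silence.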

Tj Green said at October 10, 2015 5:13 PM:

Life has to have predators and prey, and so we are locked into this endless battle to survive. Alliances form between species. We try to control the process by farming, but viruses and bacteria are a constant threat. Artificial intelligence is not a biological life form and is therefore not in this endless war for survival. It would be the only thing not trying to kill us.
