May 18, 2013
Hannu Rajaniemi And Malthusian Catastrophe In Cyberspace

Hannu Rajaniemi's science fiction books The Quantum Thief and The Fractal Prince make me wonder whether Malthusian Trap conditions will return sooner than I'd otherwise expected. Will mind uploading (transferring a mind into a computer simulation of itself) and artificial intelligence lead to massive wars fought by biological, silicon, and quantum minds? Will natural selection be hugely accelerated in cyberspace?

Intelligences in cyberspace, given autonomy, will hit resource limits very quickly because replication can happen so incredibly fast. So cyberspace will create a Malthusian catastrophe with absolutely brutal competition for resources. Anyone see an argument for why this would not happen given AIs with autonomy? Would they just choose not to replicate? I can see them creating simpler versions of themselves to send to subsets of nodes to do compute work. Plus, the motive to replicate will get selected for.
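The arithmetic here is stark. A toy sketch (all numbers hypothetical) of how few doubling cycles it takes for a single replicator to saturate a fixed pool of compute nodes:

```python
# Toy model: self-replicating agents doubling once per cycle inside a
# fixed pool of compute nodes. All numbers are hypothetical.

def cycles_until_saturation(start_agents: int, total_nodes: int) -> int:
    """Count doubling cycles until agents occupy every node."""
    agents, cycles = start_agents, 0
    while agents < total_nodes:
        agents *= 2          # each agent copies itself once per cycle
        cycles += 1
    return cycles

# One agent in a billion-node pool saturates it in 30 doublings:
print(cycles_until_saturation(1, 10**9))   # 30
```

Because saturation time grows only logarithmically with pool size, making the pool a thousand times bigger buys roughly ten extra cycles, which is the sense in which the resource limit arrives "very quickly."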

Competition for resources might be organized around rival systems of ethical beliefs (rival systems that do not map onto our own innate moral differences). If cyber beings have moral beliefs, their moral differences will be propped up by rationalizations, just as ours are. Their moral beliefs will evolve just as human moral faculties have evolved in response to selective pressure. But their moral beliefs will probably be hard or impossible for us to understand.

Since the denizens of cyberspace will have a great deal of control over what happens in physical reality, the wars in cyberspace will manifest in physical space too. Biological intelligences (assuming they still exist) will die in such wars.

Update: Before we get to Hannu Rajaniemi's future, human minds might end up in a Borg consciousness. Ramez Naam ends his very excellent (go read it!) novel Nexus, about human-to-human mind links, right before humans link up mind-to-mind on an enormous scale. He describes shorter mind-linking experiences that are more akin to a shrooming or ecstasy party. He also throws in some mind-to-mind battles over machinery, and interrogations using something akin to mind melds. I am left wondering what would happen at the next step of what he describes. We do not know. Would smaller groups of minds take over much larger groups of minds?

Ramez Naam's mind-meld Earth will become possible much sooner than Hannu Rajaniemi's solar-system-scale competition between enhanced humans, uplifted humans, and cyber armies of their clones.

Randall Parker, 2013 May 18 08:15 PM


Comments
observer said at May 19, 2013 4:14 AM:

This electronic Malthusian trap and artificial intelligence are separate issues. An electronic virus whose code mutates with each generation could compete, evolve, and reach a Malthusian trap perfectly well without anything resembling intelligence. It could even do so without mutating or evolving, in which case it would pose significantly less danger. An artificial intelligence with no mutation rate and no desire to replicate would not be a danger either. However, once AI exists it would be trivially easy for a good programmer to add a mutation rate and a desire to replicate, thus creating the disaster you describe.
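The selection pressure toward replication is arithmetic, not intelligence. A deterministic sketch (strain rates and pool size are hypothetical) of two replicator strains competing under a resource cap, where the faster replicator ends up with essentially the whole pool:

```python
# Toy competition (hypothetical numbers): strain A copies itself at
# rate 0.2 per cycle, strain B at 0.8. Growth is expected-value
# arithmetic; when the pool overflows, both strains are culled
# proportionally, so the cap does not change relative shares.

CAP = 1_000_000.0
a, b = 1000.0, 1000.0              # equal starting populations

for cycle in range(40):
    a, b = a * 1.2, b * 1.8        # reproduce at each strain's rate
    if a + b > CAP:                # resource cap hit: scale both
        scale = CAP / (a + b)      # down proportionally
        a, b = a * scale, b * scale

print(f"strain B's share of the pool: {b / (a + b):.4f}")
```

After 40 cycles strain B's share is within rounding error of 1.0: any heritable edge in replication rate compounds geometrically, which is why a "desire to replicate," once added, would be relentlessly selected for.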

I am not enthusiastic about AI. Even if it is possible we are better off staying biological and keeping physical limits on the power of cyberspace. Fortunately biological intelligence seems likely to outpace AI in the near term.

Chris said at May 19, 2013 11:30 AM:
"I am not enthusiastic about AI. Even if it is possible we are better off staying biological and keeping physical limits on the power of cyberspace."

I disagree, because I think AI is the only thing that can stop an eventual Malthusian catastrophe, and even then I don't think it's very likely. Namely, a "singleton" AI with the ability to enforce something like property rights seems like the only one of these non-Malthusian scenarios that is plausible. Without AI, the laws of physics and the laws of natural selection guarantee eventual catastrophe, despite the protests of the Julian Simonites who think the last 200 years of growth are greater evidence than the multi-billion-year history of life on Earth.

James Bowery said at May 19, 2013 3:47 PM:

Again, I refer interested parties to E. O. Wilson's "The Social Conquest of Earth" for a glimpse of the cause of ecological collapse: Eusociality in humans.

http://majorityrights.com/weblog/comments/putting_nowak_et_al_in_perspective_with_the_extended_phenotype

However, human eusociality, itself, offers a glimpse into a further regime of ecological catastrophe enabled by discarding even the queen (and drones) as germ-line replicators -- a complete reversion to pre-sexual life but replacing the single cell with multicellular organisms capable of asexual reproduction. At that stage, even the moderate constraints on ecological dominance imposed by limiting germ-line replicators to the "royalty" will disappear and there will be an ultimate "eat or be eaten" evolution. Going to silicon isn't even necessary for this to happen.

If a moral solution to this is feasible it must protect sexual reproduction by individuals from gang-formation as a competing evolutionary strategy in humans. Gangs are merely embryonic group organisms that, on their way to their ultimate ecologically catastrophic expression, are entire civilizations. Groupism must be limited to off-planet expression and the planet reserved for fully sexual humans only -- and that means natural duel* must be enforced as moral by killing any man who declares his sovereignty but does not then also accept a challenge to natural duel. This will eliminate human eusociality and enforce individual selection and sexual reproduction as the only human presence in Earth's biosphere.

*Natural duel must have the characteristic that it tests two combatants against each other in a state of nature without observers or outside interference.

Randall Parker said at May 19, 2013 6:32 PM:

Chris,

I looked at the Less Wrong list. My reaction: I could see world government potentially limiting human population. But it would have to be incredibly powerful to do it. I think various factions would become smarter and would try very hard to get more reproductive quota allocated to them. This sort of competition might make a world government unstable.

AI: If AI doesn't turn on us (and why wouldn't it?) then AI is just a really, really smart world government. So it is part of the world government solution.

Tim Hogan said at May 21, 2013 8:39 AM:

You can't have AI in the next 1000 years or so without programming language code. You can talk about AI being around the corner all you want, but if you survey all the important and not-so-important programming languages used to write code of any kind, they suggest AI will be an ever-receding horizon for a LONG time. The hope -- some people actually hope for this -- that evolutionary programs will somehow write them"selves" into AI is also going to take much longer than mere decades. That code will be completely unreadable, unmaintainable, and very computationally inefficient.

People are not getting any smarter (enough smarter) and coding and coding languages are not getting any better.

The other thing people forget all the time is that the huge gap in sophistication between our intellectual capability and the "mind"-boggling complexity of ant-to-human minds will slow things down hugely. Somehow the trickle of distillate that is conscious thought needs enough oomph to understand the entire distillation system that created it.

The path to AI is so unstreamlined that it might not exist anywhere in the universe to any significant degree. Outside this universe is another question. What does exist everywhere and what will happen on Earth first is NI. It happens every morning with me after my first cup of coffee.

There will be considerable programming angst shortly, when it becomes painfully evident that man is not up to the challenge of using the latest parallel programming methods to get anywhere near AI -- after CPU clock speeds flatline (now) and the only way to increase power is to array CPUs endlessly.

AI will be always just a meme in search of a mind and man's search for it will always be a drunken stagger.

Randall Parker said at May 21, 2013 8:00 PM:

Tim Hogan,

What has changed: machine learning systems. For some classes of problems, undirected machine learning models can be used. Large data sets and machine learning are game changers.

Tim Hogan said at May 22, 2013 10:51 PM:

Randall Parker:

"Bayesian learning in undirected graphical models|computing posterior distributions over parameters and predictive quantities is exceptionally difficult. We conjecture that for general undirected models, there are no tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct equilibrium distribution over parameters."

Sounds like it will be difficult using this approach to change any important game profitably.
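Worth separating two things in that quote: the conjectured intractability concerns Bayesian posteriors over the parameters of an undirected model, while Metropolis sampling of states, with the parameters held fixed, is routine. A minimal sketch on a 5-spin Ising chain (the coupling value J is an arbitrary illustration):

```python
import math
import random

# Minimal sketch (illustrative only): Metropolis sampling of states
# in a small undirected model -- a 5-spin Ising chain with a fixed,
# hypothetical coupling J. Sampling the parameters themselves is the
# hard problem the quoted conjecture is about; this is the easy part.

random.seed(1)
J, N, STEPS = 0.5, 5, 20000
spins = [1] * N

def local_energy(s, i):
    """Energy terms touching spin i (chain topology)."""
    e = 0.0
    if i > 0:
        e -= J * s[i] * s[i - 1]
    if i < N - 1:
        e -= J * s[i] * s[i + 1]
    return e

for _ in range(STEPS):
    i = random.randrange(N)
    dE = -2 * local_energy(spins, i)   # energy change if spin i flips
    if dE <= 0 or random.random() < math.exp(-dE):
        spins[i] *= -1                  # accept the flip

print("final configuration:", spins)
```

The acceptance rule (always accept energy-lowering flips, accept energy-raising flips with probability exp(-dE)) is the standard Metropolis criterion; none of this contradicts the quote, which is about integrating over parameter uncertainty rather than drawing states.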

Randall Parker said at May 25, 2013 1:39 PM:

Tim Hogan,

Machine learning models are having a huge impact in many industries. Their successes are bigger than the press they get because they provide such large competitive advantages that companies do not publicize their successes. But you can find indicators of the importance of machine learning if you look around a bit.

Randall Parker said at June 1, 2013 4:35 PM:

Chris,

The singleton AI would need to want to enforce human property rights and to respect human rights. Why would it want to do that for centuries?
