September 25, 2014
Singularity Nirvana: We Cannot Solve Our Own Problems?
Some people are optimistic about our future because (the thinking goes) some computer scientists and computer engineers will create artificial intelligences smarter than we are, which will build still smarter machines in a rapid loop, resulting in intelligences orders of magnitude greater than ours. Then these machines will solve all our problems.
One assumption in this scenario is that these AI machines will stay friendly to us. I'm seriously skeptical about this assumption. How friendly are we to bacteria, molds, or even a tree or a rat? This is not an original observation on my part. Lots of people are afraid that AI machines will turn hostile on us. Movies (can you say "Terminator"? Sure) have explored this topic.
But there is another assumption to this scenario that is even more disturbing because it also might be true: We are unable to solve our own problems and we need entities orders of magnitude smarter than us to solve our problems for us.
That's really a scary thought. But look at our progress in recent decades. Have we cured cancer yet? Nope. Aging? Our elites aren't even interested in trying to reverse aging and their lack of interest is really puzzling. But there it is. The price of oil has stayed at a quite elevated level for many years (and this is true even after adjusting for inflation). Technological advances haven't suddenly made that high price irrelevant. Similar things can be said, to varying extents, for the prices of grains, meats, and some minerals.
As I've previously argued in my essay Innovation Costs for Maintaining Civilization we need to innovate just to hang onto the standards of living we currently enjoy. As populations increase and resources deplete the amount of innovation we need to maintain our current levels of consumption rises. We need to develop substitutes for depleting resources and pollute less per unit of production to compensate for rising populations. Can we do this without AI?
If we need to create beings orders of magnitude smarter than us to solve our problems then not only are we not very impressive but also we are in deep trouble as a species and quite possibly threatened with extinction. If we are doomed to extinction without the problem-solving power of AI and also doomed to extinction from AIs destined to go rogue then we are between a rock and a hard place.
So I have to ask: Is AI the only solution? And is that only solution beyond our power to control and likely to turn on us?
Randall Parker, 2014 September 25 09:40 PM
I don't think AI is the only solution to having broadly acceptable, and rising, living standards. Aging isn't cured, but life expectancy has increased. Energy is expensive, but certainly the median person on the planet has more access to quality transportation than they did 20 years ago, and that goes for Americans too. It's not utopia, but it's not that bad either.
That said, AI may be the only solution to the (eventual, quite far in the future) Malthusian crush. I know when you mention that name you get hordes of commenters who mysteriously think the last 200 years "disprove" him, as if the preceding 10,000 years of the history of civilization were irrelevant. Or even worse, that a silly bet with Julian Simon is persuasive evidence, again somehow ignoring the mountains of history for the tiny little foothills of the recent centuries.
You and I know better. Natural selection + physics = Malthus. And short of a winner-take-all AI, natural selection will go on (and even then...)
As for whether it's likely to turn on us, I have no idea.
I would forget about the AI. I don't think this will be developed in the foreseeable future. Rather, the emphasis should be on technology that makes it possible for smaller groups of individuals as well as lone individuals to do the things that are currently done only by governments and large corporations. This kind of individualized empowerment is what I seek and what the "transhumanism" movement should be about. All of the technologies that make up the "mundane" singularity would do this and are realizable in the next 20 years. AI is a giant distraction that should be ignored.
For those really interested in providing funds for SENS, it might be a good idea to question the premise that the rich are motivated by egoistic concerns when it comes to SENS. If that were true the M-Prize would be funded in the tens of billions, not a begrudgingly-granted million.
I've got my ideas but until people free themselves from a mindset that is blatantly at odds with reality, why bother conveying them?
I think you're dangerously loyal to what you think of as your own intellectual superiority. Surely you aren't equipped to effectively manage the billions-strong hordes of animals on this finite planet that are humans, and neither are any of the rest of us. And I seriously doubt that any "enhancement" would truly help. We just weren't built for this state of existence. I'd say the only hope for the long-term survival of humans in any technologically advanced state is super AI, or the ability to rapidly expand beyond the Earth. No offense intended!
Actually the elites are almost certainly interested in longevity research, but they are discreet about it. And the complexity of biology is such that, because everything is interconnected, it will take a lot of time to solve the problem of aging. For example, there is research and even some commercial drugs to lengthen telomeres, but there are also counterarguments that artificially intervening to lengthen telomeres might cause cancer. Basically, it took millions of years of evolution for the human body to attain the current balance of functionality and longevity, and a simple modification of a couple of genes probably won't be sufficient to increase longevity without compromising important functions. In fact, it seems very likely that the global laws of physics that dictate evolution actually require that the longevity of all organisms be limited, so that procreation becomes obligatory. If we suddenly attain near immortality with a life expectancy of 5,000 years, this would probably hinder evolution. On the other hand, if human life expectancy is too short, then this also hinders the advancement of civilization, because it takes many years of training to attain the maturity of knowledge to make wise decisions and to invent fundamentally important things.
But it will be possible to increase the average intelligence of humans, albeit at a slow rate at the beginning, and this will accelerate the solution of many problems.
On the plus side, with the exception of Africa, the world population is not increasing as fast as before. The industrialized world has shrinking population, and even the Islamic world has a significantly slower population increase. The only serious shortage will be mined hydrocarbons, but it is guaranteed that there will be alternative fuels that can be recycled.
Yes, I think natural selection brings back the Malthusian Trap in the long run. Heck, Africa is still in the Malthusian Trap. I wish we could be smarter than that. But it doesn't appear to be in the cards.
I need to remind my panglossian utopian libertarian readers more often that in more recent decades Paul Ehrlich would have beaten Julian Simon in most years in a repeat of that famous natural resource price bet.
There is a reason for that: for a long stretch of decades automation lowered the cost of mining faster than declining ore grades raised the cost. But now the ore grades have gotten so low that the energy required to separate out the desired minerals is driving up costs faster than automation can lower them.
Similarly, it gets harder to boost productivity of land as plants become more efficient for our purposes. Sure, we can still reduce labor costs on the farm. But there is not a lot of labor still left on the farm. Chemicals (made from more expensive oil) and diesel for the tractor have become more expensive as oil fields deplete.
Certainly biotech will enable some people to create Human 2.0 offspring. But suppose some people decide to make super humans who want to have large numbers of offspring. We could be faced with a population explosion of 150 IQ people engaged in an extremely intense competition for resources.
Maybe the 150 IQ eugenic elite won't aim to increase their population dramatically, but they might still decide to exterminate the ones who don't meet their standards. For one thing, such an elite would have a greater sense of entitlement, and it is possible that they will think that the world is too crowded even if their number is not so large.
"Our elites aren't even interested in trying to reverse aging and their lack of interest is really puzzling."
What??? Did you even bother to google before writing that? Because if you had, then you'd know the founders of Google have poured money into it -- along with a lot of other big-name billionaires. Paul Allen has sunk about a half billion into it. And he's not the only one who's sunk hundreds of millions into it, either. Heck, he's not even the only billionaire who wants to transmit human consciousness into computers.
Reading the summaries of what some of those billionaires want to do makes me realize how weird I'm not. Which is odd, because I'm kind of quirky. If they ever come up with some anti-aging technologies I might go for it. Until then I plan on ensuring my immortality the old-fashioned way -- a bottle of wine and some poontang.
One last thought, if they cure aging without restricting procreation then this ball of dirt will fill up fast.
I am quite aware of what a small number of Silicon Valley billionaires are doing on a small scale. I also know that the vast majority of other billionaires aren't ponying up money for this. Plus, all the old guys in the US Senate aren't aggressively pushing for big bucks to defeat aging either.
The NIH is funding over a billion dollars per year on stem cell research. The DoD is also funding research due to all the injuries and amputations. That's just two branches of government funding in the US. Not to mention the private research and other countries. The argument could be made that we should be spending 10 billion per year. If I thought it would help I'd support spending 50 billion. But is there any reason to believe that spending more money would be more effective or achieve faster results? Perhaps it would actually be counter-productive by siphoning off talent into less important areas?
I'm not that old. I'm healthy, intelligent, attractive and athletic. I already have all the health and genetics I could ask for.... at the moment. So why would I want to give billionaires the ability to buy genetic advantages over me? By the time I'm old enough to need it, it will be available and I'll have more than enough money to pay for it. I can afford to wait. In the meantime, anyone waiting around for technology to extend their life or genetically engineer their kids is a tool who's making a mistake. I don't need to wait. I'm living and having children right now.
As for AI machines solving our problems for us... I don't see how that's any different from researchers using computers to crunch numbers. Or engineers using maths to design things. Or a caveman using a spear to hunt. We built the tools. They're ultimately all a product of our intelligence. They can't do anything that we couldn't do using pencil and paper to process the algorithms by hand. The only difference is that AIs could do it a lot faster. We would merely be leveraging AIs to reach higher faster. Of course, there's always the potential that technology will be misused. I won't deny that. It concerns me, too. But you won't prevent the problem by being a Luddite. The genie is out of the bottle.
There's a pretty straightforward route to safe AI, or at least as safe as may be expected. (Natural intelligence isn't exactly guaranteed safe.)
Just make the A stand for "amplified". We need neural prostheses to serve as super-frontal lobes for our evolved brains. Just as our frontal lobes go about satisfying the primitive desires of the hind-brain, while amusing themselves along the way, an amplified intelligence would quite naturally satisfy our desires, while doing things along the way that we might not comprehend.
We shouldn't make independent artificial intelligences, unless we want to be replaced. We should enhance ourselves. But I suppose the desire for some kind of technological genie will drive us to do it anyway.