November 16, 2010
Will Perpetually Young Become Extremely Risk Averse?
Seth Shostak thinks perpetual youth is a recipe for stagnation and extreme cocooning, with people staying home in virtual reality out of fear:
Regrettably, you might not find any groceries. Farming is one of the most dangerous jobs around, and any farmer who lives long enough to fear riding in a car has had a more-than-even chance of being killed in the back forty. Incidentally, that's about the same death rate as mining coal, so we'll need to get those wind turbines built if you want electricity at home.
Here's the problem in a nutshell: if we extend human lifetimes a lot -- to millennia, rather than centuries -- all the small risks you heedlessly take every day will have a devastating cumulative impact. Most jobs will become unattractive, because just about any occupation becomes, eventually, a deadly occupation. We'll automate nearly everything we can, and stay at home immersed in a virtual world.
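Shostak's cumulative-risk arithmetic is easy to check. Here is a minimal sketch (the 0.03% annual accidental-death rate is an illustrative assumption, not a figure from his piece): a small constant yearly risk that barely registers over an 80-year life becomes substantial over a millennium.

```python
def survival_probability(annual_death_risk, years):
    """Probability of surviving `years` independent years, each
    carrying the same annual risk of a fatal accident."""
    return (1.0 - annual_death_risk) ** years

# Illustrative assumption: a 0.03% annual chance of accidental death.
# Over a normal 80-year span the cumulative risk stays small...
print(1.0 - survival_probability(0.0003, 80))    # roughly 2.4%
# ...but over a millennium it compounds to about one in four.
print(1.0 - survival_probability(0.0003, 1000))  # roughly 26%
```

The point cuts both ways: the same arithmetic shows that even millennia-long lives survive everyday risk levels more often than not, so "devastating cumulative impact" depends heavily on which annual rate you assume.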
First of all, he's missing just how much interaction and change will be possible using virtual worlds. Second, what's wrong with having robots do all the dangerous work? That seems like a feature, not a bug.
Third, I question the extent to which people will avoid risk-taking. Humans aren't that rational. Look at the risk-taking junkies now jumping out of airplanes, kayaking, snowboarding, and speeding around country roads. They find the prospect of extreme thrills in risky endeavors alluring. Absent some gene therapy to dampen their desire for risk, I expect rejuvenation will make them more prone to take risks, not less. Who is more likely to do extreme skiing? Someone with a young body or an old body?
I've previously explored this issue in my posts Will Longer Lives Make People More Risk Averse? and Will Eternal Youthfulness Lead To Less Ambition?
It could be one answer to the Fermi paradox, though I believe the bigger risk is taking no risks at all. Here is what I wrote elsewhere regarding the Fermi paradox and risk aversion:
Could being overcautious itself be an existential risk that significantly outweighs the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them either to advance so slowly that the chance of a fatal natural disaster striking before sufficient technology is developed to survive it rises to 100%, or to stop advancing altogether because they cannot prove anything 100% safe before trying it, and thus never take the steps necessary to become less vulnerable to naturally occurring existential risks.
Good news for Shostak: there's a high probability his wish for a painful death will come true. Best of luck on that!
Just reducing death rates as much as we have has resulted in people being much more cautious. Imagine the early jet aircraft tests being conducted today with their high fatality rate!
The main problem with long life will be that people with old, inferior ideas will keep hanging around. To use an example I could be wrong about, I believe that Karl Marx and John Maynard Keynes' theories were fundamentally flawed (for very different reasons) and that the world will be better off when their believers in the Boomer generation die off. But if that never happens we could be stuck with a large population of Marxists and Keynesians for a very long time. This will hold back social progress.
But if we had people around from the late 1800s we'd have people who predate Keynes who would support a gold standard. So this sort of thing cuts both ways. Some new ideas are monumentally bad.
I personally have become extremely risk averse as I've gotten older. A lot of it has to do with finally understanding that I'm running on very delicate hardware that is easily destroyed and (currently) unrepairable.
However, I've thought a lot about this topic, and I can see a very simple scenario where risk aversion does not happen: when I have proper backups and runnable instances.
If I know I can (and will) be restored from last night's backup, I will worry a lot less about dying right now. Better yet, if I fork a copy of myself to do a dangerous task, whichever of us ends up doing it wouldn't worry a whole lot about death, knowing that the other copy was still safe and functional.
I've talked about this with other people, but for the most part I run into the brick wall of "neither of my copies would ever do that, we would both want to live". I personally have no issues with shutting myself off so long as I know other copies exist and that the information drift between copies isn't too large.
Young people often take risks because of their hormone makeup. I could actually see eternal youth, depending on where you stop the clock, creating a society of huge risk-takers.
Besides, what makes Shostak think that that kind of behavior won't be manipulatable if it actually becomes a problem? Psychoactive drugs make billions for big pharma.
>But if we had people around from the late 1800s we'd have people who predate Keynes who would support a gold standard. So this sort of thing cuts both ways. Some new ideas are monumentally bad.
This is also something that eternal youth could help with. Most people who can change their political opinions at all do so when they're young, say in their 20s. If we stop aging there, we may have a huge group of people who are actually paying attention to this kind of debate and are capable of using reason rather than blindly reacting on gut feelings (not to say I think many people will actually use reason... tribal banter is huge even in high school). Older people are set in their opinions not because they're older and wiser, but because their instinct at that age is to avoid risk.
>A lot of it has to do with finally understanding that I'm running on very delicate hardware that is easily destroyed and (currently) unrepairable.
I don't think you were any less capable of understanding that when you were young. I think it just bothers you more now.
What's so bad about risk avoidance? Thinking more is definitely risk avoidance. As people become eternal, they'll care more about the environment and politics.
And fixed ideas happen because people think: "Oh, why would I waste my time learning this new thing? I probably only have 10 more years to live, so why bother?" Being eternal would make them rethink that. Also, there are people who are always willing to improve themselves. They wouldn't die, and after some centuries they'd be able to outshine even the fixed-ideas types.