August 09, 2011
Singularity 2045?

Ray Kurzweil predicts that a breakthrough in artificial intelligence will enable the emergence of near-immortal transhumanists by 2045.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity (our bodies, our minds, our civilization) will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

The underlying assumption here is that artificial intelligence will speed up the rate of technological innovation by orders of magnitude. But is that really true? Look at the trend today: computer power is still following the pattern of doublings, albeit with more difficulty. The shift toward using more CPU cores to keep the doublings going does not work for all problems.
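
To see why, consider Amdahl's Law: once even a small fraction of a workload is inherently serial, piling on cores stops helping. Here is a rough sketch (the 5% serial fraction is an assumed figure for illustration, not a measurement of anything in particular):

    # Amdahl's Law: overall speedup from n cores when a fraction of the work is serial.
    def amdahl_speedup(n_cores, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    # Assume, for illustration, that 5% of the work cannot be parallelized.
    for cores in (1, 2, 4, 8, 16, 64, 1024):
        print(cores, "cores ->", round(amdahl_speedup(cores, 0.05), 1), "x speedup")
    # The speedup saturates near 1/0.05 = 20x no matter how many cores are added.

That ceiling is why adding cores is not the same thing as the old clock-speed doublings.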

A contrarian view is that, in spite of the repeated doublings of computer power, the rate of technological innovation slowed starting in the early 1970s, the low-hanging technological fruit has all been picked, and we are being held back by depleting natural resources. For signs of the latter, look at the big shift in the long-run trend in natural resource prices. The Julian Simon-Paul Ehrlich bet on natural resource prices stopped coming out in Simon's favor starting in 1994.

The rosier Singularity view seems based on a few assumptions:

  • Humans will create artificial intelligence.
  • Humans (in some form) will survive the creation of artificial intelligence.
  • Artificial intelligences will be far more productive than human minds.
  • The nature of physical reality is such that large numbers of innovations remain to be found.
  • Artificial intelligences will be capable of finding the remaining innovations fairly quickly.

While I expect the computer industry will create artificial intelligences, I'm less sure we will survive once they become far smarter and more powerful than us. Yes, I expect they'll be more productive.

What I'm least sure about: how much computing power do we need to turn currently extremely hard problems into easily solved problems? Consider that Moore's Law of computer power doubling every couple of years or so has been running for decades. Yet, for example, it hasn't enabled plasma physicists to come up with workable fusion reactor designs. Nor has it enabled the solar photovoltaics industry to suddenly come up with ways to drop the cost of photovoltaics by orders of magnitude. Nor has it prevented the big run-up in commodities prices as Asia industrialized and ore concentrations declined.
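
To put rough numbers on the problem: suppose a hard problem requires brute-force search through 2^n possibilities (a hypothetical stand-in for "extremely hard"). Moore's Law doublings then buy only about one extra unit of problem size every couple of years. A back-of-the-envelope sketch, with assumed figures for today's compute budget:

    import math

    # Hypothetical brute-force problem whose cost grows as 2**n operations.
    # Assumed figures: 1e12 ops/sec available today, a one-day compute budget,
    # and capacity doubling every 2 years (Moore's Law style).
    def years_until_tractable(n, ops_per_sec=1e12, budget_seconds=86400):
        ops_needed = 2.0 ** n
        ops_affordable_now = ops_per_sec * budget_seconds
        if ops_needed <= ops_affordable_now:
            return 0.0
        doublings_needed = math.log2(ops_needed / ops_affordable_now)
        return 2.0 * doublings_needed  # two years per doubling

    for n in (60, 80, 100, 120):
        print("n =", n, "->", round(years_until_tractable(n), 1), "years of waiting")
    # Each +20 added to n costs roughly another 40 years of doublings.

Exponential compute chasing exponentially hard problems is close to a stalemate, which is one way to see why decades of doublings have not, by themselves, handed us fusion reactor designs or dirt-cheap photovoltaics.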

Some substantial (and I suspect growing) portion of all innovation goes toward just trying to keep running in place. How to grow enough food as world population grows? How to manage water more carefully as aquifers get drained? How to use oil more efficiently as the remaining oil is in harder-to-reach places?

Maybe things get worse for years until the Singularity suddenly ushers in a Golden Age. But even if that's true, we have to live through the intervening years before we reach the Singularity.

Randall Parker, 2011 August 09 08:40 AM  Trends Technological Advance


Comments
Jonathan said at August 9, 2011 9:40 AM:

"Yet, for example, it hasn't enabled plasma physicists to come up with workable fusion reactor designs."

They are working on it. My money is on Focus Fusion!

Food said at August 9, 2011 9:58 AM:

Kurzweil is an annoying self-promoter, not much different from a televangelist. He hand-waves all of the real issues in favor of naive extrapolations, and his predictions inevitably amount to, "Look! That curve goes asymptotic!"

If you think we haven't been through singularities already, consider trying to have a conversation with someone from the 10th century about the stock market.

Another factor working against exponential acceleration in technology: patent lawyers. Rimshot!

David Gobel said at August 9, 2011 10:08 AM:

Moore's (transistors), Gilder's (bandwidth) and Metcalfe's (nodes) laws comprise a continuation of the implosive involutions of society that began literally with the printing press. Thinking non-linearly, here are a few results of the action of the above laws:

Because of the desire for the utility gained via the above laws, people want to carry that utility around. This has resulted in vastly improved batteries, which in turn provide enough kWh per unit mass to propel a Tesla, a Volt and a Nissan Leaf. So a major unforeseen value of the laws is compact portable power and decision electronics sufficient to gradually overturn the transportation food chain.

The laws are also being applied to biomedicine, which until now has been either a random walk/oral tradition or a trial-and-error Monte Carlo process. The laws as applied to medicine are accelerating improvements much faster than the laws would be expected to proceed (perhaps a catch-up effect), and are now providing brute-force fact finding that will destroy the current mediface within 20 years and replace it with deterministic-per-person pre-emptive medicine.

The laws are making it possible to jump the rails from transistors and fiber as enablers to adding proteins and genes to produce innovations. This possibility space makes the phrase "vast opportunity" seem beggarly.

The laws are making fundamental value delivery so cheap that it is now possible to pay for primary medical care at THE SAME price as the co-pay if you have insurance. This is a blowoff phenomenon in the FIRE (finance, insurance, real estate) economy. It now costs more to pay for insurance than to get actual medical care in the vast majority of doctor visits (see concierge medicine and high deductible catastrophic insurance).

The laws are undermining the entire waterfront of global, national and local institutions by making them moot (the postal service, for instance).

My general position on the subject of the singularity is this: just as in physics nothing survives an event horizon, rather than worrying about the singularity we need to pay more attention to the event horizon that surrounds it, or it will not be survived.

Jehu said at August 9, 2011 10:34 AM:

Make no mistake, we're in a race. The horses of computing technology, nanotechnology, and biotechnology are in a race with the paler horses of debt, depletion, dysgenics, and depression. Will the center hold long enough for one of the first three horses to totally transform the race?
That's the real question.

bbartlog said at August 9, 2011 11:16 AM:

@Jehu: Nice poetic summation... but I quibble to note that technological forces can ultimately transform and triumph even if the center doesn't hold. Sometimes you get better results when the falcon cannot hear the falconer.

Rankine said at August 9, 2011 1:24 PM:

Debt is proving to be a problem from Japan to Europe to the US. Dysgenics and a decline in demographic quality are happening more in Europe and North America, since Japan has very limited immigration. The race riots in Britain are a foretaste of what is coming to countries with skankier immigration policies and low quality immigrants.

As oil prices continue to skyrocket toward $200 a barrel and higher, we see that it is shrinking supplies of oil, not demand, that drive the markets. Doom is coming, as peak oil demonstrates all too clearly.

Abelard Lindsey said at August 9, 2011 2:12 PM:

Forget about this singularity jazz.

Moore's Law might end later this decade (at either the 10 or 7 nm design rule), unless self-assembly-chemistry-based molecular electronics are developed. That development would extend Moore's Law for an additional 1-2 decades. Once the molecular level is reached (probably around 2030 or so), that will be the end. The real action is in biotechnology and bio-engineering. This has its own cost/performance rule called Carlson's Curves. Bio-engineering is where the real developments will be over the next 50 years. This will also give us biological immortality (SENS, for example). Bio-engineering will likely give us comprehensive rejuvenation and regeneration by mid-century (2050). Electronics and computers will plateau out around 2030-2040, but will be dirt cheap. Even biomedicine will be dirt cheap by mid-century or so. All of these technologies progress in an incremental fashion, and will continue to do so.

That's it, guys. No AI/uploading stuff. I think we can get AI. But it will be based on organic molecular electronics that will not be too dissimilar to our brains. However, it will take a long time in pursuing many architectures that will lead to dead-ends before we are successful. I see AI as a second half of this century thing. I think uploading is unlikely.

The wild cards are the development of fusion power (Helion, Tri-Alpha, EMC2, Focus Fusion, LENR) and breakthrough propulsion (Woodward-Mach Effect). None of these wildcards need sophisticated computer technology. They only need rigorous experimentation, which they are getting right now.


If we don't get fusion, it's no problem, because we always have Kirk Sorensen's LFTRs to fall back on, which do not need any fundamentally new technology. It's only an engineering project. There is enough thorium and uranium in the Earth to power civilization for the next billion years or so.

PacRim Jim said at August 9, 2011 4:15 PM:

2045?
Well, it looks like I'll miss the train to (relative) immortality.
And here I stand on the platform, ticket in hand, but the train won't arrive for 34 years, give or take a pseudo-Singularity.
Those of you who manage to survive until then, have a good trip.
The good news is that, once you get your life extended, more research during that bonus period will extend your life even further.
Ultimately, your biostasis will become permanent.
Jackpot!

Brett Bellmore said at August 9, 2011 6:38 PM:

"Yet, for example, it hasn't enabled plasma physicists to come up with workable fusion reactor designs."

We invented working fusion fuel pellets back in the early 50's. We can build a power plant around them, conventional engineering, any time the politics stops being in the way. The problem isn't plasma physics, it's that we're not willing to build fusion plants the way the physics is telling us they need to be built. Pulsed, and really, really big.

Peter said at August 9, 2011 8:49 PM:

Four more assumptions here, Randall:

6) Intelligence is an algorithm.

7) Free will in the intellect does not exist.

8) Causal processes underlie the ability to reason.

9) Known physics can fully model the human mind.

Nanonymous said at August 9, 2011 9:40 PM:

As long as reliance is on computers as we know them, there will be no AI and no singularity. I think AI folks underestimate the complexity of brain function by many, many, many orders of magnitude. Down at the level that matters (cellular physiology), each individual living cell is infinitely more complex than any of the supercomputers today.

Mthson said at August 10, 2011 12:20 AM:

Re Randall: "Artificial intelligences will be far more productive than human minds."

It just occurred to me how low a bar that is. I'd say 85% of humans in wealthy countries are incompetent. Even most of humankind's greatest thinkers are sub-human retards when they comment on matters related to HBD (human bio-diversity). And that's just in the most intelligent countries... globally, including the retarded countries, I'd say at least 97% of humans are incompetent.

It's only a matter of time before AI outperforms us all, but if we can just raise AI to a normal incompetent level, that'd be transformative.

Lono said at August 10, 2011 9:01 AM:

Mthson,

Totally agreed!

In fact - I think it is more likely we will be able to create a digital/mechanical copy of a Human before we will have the insight to create a self-aware A.I.

If they model such an artificial Human after one of the more competent and intelligent individuals of our species - giving them (or "it") the ability to expand and increase their memory on demand while reducing or eliminating their need to sleep - such enhancements alone will lead to significant breakthroughs due to massive increases in productivity per unit manufactured.

I think with exercise - and a little luck - I may make that deadline - but I presume Kurzweil has come to the reasonable conclusion that he - himself - may not make the cutoff.

(so I guess it will be in bad form when we all laugh because his prediction was based - yet again - on nothing but hot air)

Abelard Lindsey said at August 10, 2011 9:15 AM:

"It just occurred to me how low a bar that is. I'd say 85% of humans in wealthy countries are incompetent. Even most of humankind's greatest thinkers are sub-human retards when they comment on matters related to HBD (human bio-diversity). And that's just in the most intelligent countries... globally, including the retarded countries, I'd say at least 97% of humans are incompetent."

This is the point that even the singularity/transhumanist people don't get. We don't need sentient AI in order to automate manufacturing and wealth creation. We just need better robots and manufacturing processes. I think we will get this around 2030. This is the whole point, because it will allow self-interested groups to decouple from the rest of humanity and to go out into space (or, initially, the oceans) to do their own thing. This has always been the name of the game for me ever since I joined the milieu in the mid-'80s. It was in this context that we all got excited over Eric Drexler's concepts for nanotechnology.

Rollory said at August 10, 2011 11:23 AM:

In order to get to 2045, we have to get through 2020 and 2030. I am not convinced the USA will do so. If the USA fails to survive as an integrated and stable political unit, I am not convinced the economic and social capital will be available to continue technological progress.

The debt situation is pushing us ever closer to an Argentine- or Soviet-style collapse. Transhumanism doesn't arise in the former Soviet states or anything like them.

David Gobel:
It isn't the event horizon that is the problem, it is the gravitational tidal effects. With a big enough event horizon - galaxy mass, say - you could go right through it without even noticing.

Troy Riser said at August 10, 2011 11:51 AM:

"According to his [Kurzweil's] calculations, the end of human civilization as we know it is about 35 years away."

35 years, is it? Can't get much more precise than that, especially when discussing something as variable-laden and complex as technological progression and the dialectic of human development in toto. While I am neither scientist nor prophet myself, it seems to me all of this futuristic speculation implies a more or less uninterrupted trajectory along evidently predictable, pattern-discernible lines, a causal chain leading inexorably to the stated conclusion, even to the specific year when all of this is supposed to come about.

Color me skeptical. And where's my flying jet car, anyway? I thought for sure we would have flying jet cars by 2011, based upon my calculations. And while we're at it, where's my phased plasma rifle in the 40 watt range?

Does anyone but me remember 'The Amazing Criswell', the Z-list celebrity friend of 'Plan 9 From Outer Space' director Ed Wood who would make fantastical future predictions? Same thing here. The only thing that can be predicted with any certainty is the end of everything at some point. I can't give you a date.

jdelphiki said at August 10, 2011 11:55 AM:

Sci-fi writer Charles Stross covered that in one of his singularity books, Accelerando. In it, the solar system is dismantled down to the molecular level to support the processing power necessary to maintain the post-singularity AI reality's needs. The AIs' nanobots create a humongous "Matrioshka Brain" out of all the raw materials (planets, asteroids, etc.), all of it layered in spheres up to and including the heliopause of the sun's influence. It's pretty cool reading if you're into that kind of thing.

In the case of the declining resources mentioned in the article, one would suppose that the AIs would also make breakthrough innovations in converting matter into processing power, much the way that Stross describes it.

don said at August 10, 2011 11:58 AM:

"The future is always oversold"

AnonyMouse said at August 10, 2011 12:40 PM:

As an aspiring sci-fi writer, I've spent a lot of time thinking about the AI problem: that if we do develop artificial intelligence, it will kill off humanity or enslave it a la The Matrix and The Terminator, or make humanity stagnate, a la Asimov's Spacer society from his Robot novels.

And it's one I think I've solved, at least for the purposes of my fiction. If we do eventually develop true artificial intelligence and it's smarter than we are, it probably won't have much reason to stick around Earth, and probably won't care about humans at all. I figure it'll develop space travel for itself and fly off to explore the universe. So no AI problem... but also little chance for humans to take advantage of that extreme intelligence.

Vince said at August 10, 2011 1:27 PM:

Abelard Lindsey,

I disagree completely with your earlier comments. Working in neuroscience and neural engineering, it's clear to me your ideas are juxtaposed. When viewed over long time-frames, it's somewhat clear that entities whose advance is defined by self-organizing and/or evolutionary mechanisms are increasing in rate. We can look briefly at the shift from biological evolution (mean generation time/human = 30 years) to cultural evolution (mean generation time/human

Kurzweil claims this is because entities use the outputs of such evolutionary mechanisms as inputs, thus allowing acceleration as every generation is built by better designs: an attractive idea. Intel experiences this as they use their latest designs to bootstrap the design and routing of a microchip, which I believe is an NP-hard problem.


Biotechnology is a very fast-moving and relevant field (hell, I love it!), but it's certainly an intermediate one. Ultimately, we are stuck with the legacy of the design decisions made by evolution: genetic tweaks will not be able to change this proposition significantly. Kurzweil continues to use the red-blood-cell example, which I'm sick of hearing, but consider that your average ion channel is using 10^5 to 10^8 times more energy than the thermodynamic minimum. As Laughlin has noted:

"How fast can this molecule switch, and how much energy is needed? The motor protein kinesin cycles conformation approximately 100 times per second, hydrolyzing 1 ATP per cycle and does considerable mechanical work. Freed from heavy mechanical work, ion channels change conformation in roughly 100 µs. In principle, therefore, a single protein molecule, switching at the rate of an ion channel with the stoichiometry of kinesin, could code at least 10^3 bit per second at a cost of 1 ATP per bit. We find that a molecular system, the chemical synapse, transmits at only 5% of the rate of the single molecule but at 10^4 times the cost per bit."

Are you going to design and encode this genetically? Even so, we're still not close to the thermodynamic minimum of k_B T. We're limited by our legacy: DNA as an encoding system is still predicated on what can be carried out with our carbon biochemistry and the closed set of reactions you can perform on it. That said, such a system can be designed...


Ultimately, as I noted, biotechnology will be transient to molecular nanotechnology of the K. Eric Drexler type. Once we can manipulate matter at that level, at that granularity, biology as we currently know it is superfluous. It could be augmented in a set of ways that is just impossible given the limits of carbon biochemistry. AI will be improving all the while in the background as a facilitator, as it already does.

In terms of information and computational limits, you're also quite off IMHO. The increase in computation still has some room to run, but that argument involves the Bekenstein bound, quantum computation, etc., and is best left for next time!

Joe Mama said at August 10, 2011 1:36 PM:

I will welcome our new robotic overlords with much glee.

bfwebster said at August 10, 2011 2:09 PM:

I took CS courses in AI both as an undergrad (mid-70s) and as a grad student (1980ish), so I've been following the field for about 35 years. The track record for predictions of breakthroughs in AI is not very good, though most practitioners are more cautious now than they were during the first 30+ years of AI research (which dates back to the 1950s). Because of that, I tend to take most AI breakthrough predictions with quite a bit of salt. ..bruce..

Abelard Lindsey said at August 10, 2011 3:22 PM:

"Ultimately, as I noted, biotechnology will be transient to molecular nanotechnology of the K. Eric Drexler type. Once we can manipulate matter at that level, at that granularity, biology as we currently know it is superfluous."

The reason for my earlier comments is that I'm not convinced molecular nanotechnology is even possible. In fact, I am almost convinced that "dry" nanotechnology (by this I mean the vacuum-phase nano-mechanical approach) is not possible at all. This leaves "wet" nanotechnology (solution-phase processes), which is really biotechnology on steroids. Probably I should have been clearer in defining biotechnology or bio-engineering as any kind of solution-phase nanotechnology, rather than limiting it to known DNA-based biology.

In any case, I think that solution-phase "wet" nanotechnology will be capable of accomplishing all of the feats described in Drexler's first book, "Engines of Creation". It will most certainly be capable of doing more than DNA-based biology as we know it now and it should enable the manufacturing necessary to do the kinds of things we want to do as independent self-interested groups. This is the bottom line as far as I'm concerned.

In short, I don't believe "dry" nanotech is possible. But it's not necessary for getting where we want to go.

Wolf-Dog said at August 10, 2011 9:15 PM:

All of the brain functions can be emulated by computers. It will take more time, but it will ultimately be emulated neuron by neuron, atom by atom. Then the productivity of all manufacturing and R & D will increase dramatically, and it is very possible that life will become much more comfortable for humans. Unfortunately most of us will be dead by then.

SMSgt Mac said at August 10, 2011 9:26 PM:

LOL.
Read Penrose's "The Emperor's New Mind".
'AI'? First come up with a decent definition of real 'I'.

Vince said at August 11, 2011 12:16 AM:

Abelard Lindsey, fair enough. I disagree with you, but respect the position. We're falling on opposing sides of the classic Drexler-Smalley debate.


SMSgt Mac,

Roger Penrose is a fantastic mathematical physicist, but his entry into neuroscience is suspect. I've always looked at it as brilliant mental masturbation. Having previously conjectured that quantum gravity is based on 4D manifolds, he sought to show how it might provide a physical computational fabric that can decide the 4D classification problem, which is undecidable by Turing machines. Thus, human consciousness isn't computable by a Turing machine, etc. We don't seem to need quantum mechanics in neuroscience, thank god.
