If silicon chips stop speeding up 5 years from now, we will experience lower economic growth. Faster chips are one of the drivers of higher productivity and economic output, and silicon might finally be approaching the end of the line for speed-ups. This has important (mainly negative) implications for economic growth.
The silicon chip, which has supplied several decades’ worth of remarkable increases in computing power and speed, looks unlikely to be capable of sustaining this pace for more than another decade – in fact, in a plenary talk at the conference, Suman Datta of Pennsylvania State University, USA, gives the conventional silicon chip no longer than four years left to run.
As silicon computer circuitry gets ever smaller in the quest to pack more components into smaller areas on a chip, eventually the miniaturized electronic devices are undermined by fundamental physical limits. They start to become leaky, making them incapable of holding onto digital information. So if the steady increases in computing capability that we have come to take for granted are to continue, some new technology will have to take over from silicon.
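To get a feel for why "fundamental physical limits" bite, here is a rough back-of-the-envelope calculation. The atomic spacing figure is an assumption on my part (roughly 0.27 nm between atomic planes in silicon), used purely for illustration, not a device-physics model:

```python
# Rough back-of-the-envelope: how many atoms span a transistor feature.
# Assumes ~0.27 nm between atomic planes in silicon (illustrative number).
LATTICE_SPACING_NM = 0.27

def atoms_across(feature_nm):
    """Approximate count of atomic planes across a feature of given width."""
    return round(feature_nm / LATTICE_SPACING_NM)

for node in (90, 65, 45):
    print(f"{node} nm node: ~{atoms_across(node)} atoms across")

# A gate oxide thinned to ~1.2 nm is only a handful of atomic layers --
# thin enough for electrons to tunnel straight through. That tunneling
# is the "leakiness" described above.
print(f"1.2 nm oxide: ~{atoms_across(1.2)} atomic layers")
```

When the insulating layer is only a few atoms thick there is simply no room left to shave off another generation.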
We could still extend the speed-up era by several years with more parallel architectures. That is already happening to some extent with multi-core CPU chips. But software that can effectively utilize many CPU cores in parallel has been slow in coming. Take Mozilla Firefox as an example. I have a dual-core CPU. If I open 50 to 100 web pages in Firefox at once (and I do this often), Firefox never takes more than 50% of available CPU. Why? It can't spread its work across multiple threads (at least in the Firefox 2.x releases - can 3.x?), so Firefox maxes out a single thread of execution, fully utilizing just one of my 2 CPU cores. The 50% of total CPU usage shown in Windows Task Manager means 50% of 2 cores in my case - that is, 100% of 1 core.
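The "one busy core" effect is easy to reproduce. A minimal sketch (using Python as a stand-in for any single-threaded application): the same CPU-bound workload run serially on one core versus spread across worker processes. On a multi-core machine the serial run pegs one core while the others sit idle, exactly the Task Manager picture described above:

```python
import os
import time
from multiprocessing import Pool

def burn(n):
    """CPU-bound busy work standing in for page layout/rendering."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [1_000_000] * 8

    start = time.perf_counter()
    serial = [burn(n) for n in jobs]      # one core pegged, others idle
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(os.cpu_count()) as pool:    # work spread over all cores
        parallel = pool.map(burn, jobs)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")
```

The hard part, and the reason browsers lagged, is that real applications share mutable state between tasks, which this embarrassingly-parallel sketch deliberately avoids.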
This limit on the part of Firefox is disappointing. If a very popular development project with a large number of contributors and millions of users is lagging, then how long will it take for less important apps to become more parallelized?
Some areas of computing could still accelerate once silicon chips stop getting faster. Subsets of computer algorithms could migrate into gate logic rather than being expressed as software that runs as a sequence of instructions in memory. In other words, abandon the Von Neumann architecture. That is not easy to do in the general case. But lots of algorithms (such as those in graphics chips) already get implemented in logic gates.
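The contrast between the two styles can be sketched in software. Computing the parity of a word the Von Neumann way means looping over bits one instruction at a time; the gate-logic way is a fixed-depth XOR tree, which is exactly the shape a hardware parity circuit takes. Both functions below are my own illustrations of the two styles:

```python
def parity_sequential(x):
    """Von Neumann style: loop over the bits, one instruction at a time."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def parity_gates(x):
    """Gate-logic style: a fixed XOR tree for a 32-bit word -- log-depth,
    no loop, the shape a hardware parity circuit would take."""
    x ^= x >> 16
    x ^= x >> 8
    x ^= x >> 4
    x ^= x >> 2
    x ^= x >> 1
    return x & 1

for v in (0, 1, 7, 0b1011, 0xDEADBEEF):
    assert parity_sequential(v) == parity_gates(v)
```

The sequential version takes time proportional to the word length; the gate version is a handful of levels of logic regardless of input, which is why hardware implementations of fixed algorithms can keep getting faster even when clock rates stall.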
As a way to get past the silicon speed limits, carbon nanotubes might replace silicon in computer chip fabrication.
At the conference, researchers at Leeds University in the UK will report an important step towards one prospective replacement. Carbon nanotubes, discovered in 1991, are tubes of pure carbon just a few nanometres wide – about the width of a typical protein molecule, and tens of thousands of times thinner than a human hair. Because they conduct electricity, they have been proposed as ready-made molecular-scale wires for making electronic circuitry.
It seems unlikely carbon nanotubes will be ready to replace silicon in 5 years. So I suspect we are going to enter a gap period where computing capacity doesn't grow as rapidly as it has in the last 50 years.
Research results from University of Maryland physicists show that graphene, a new material that combines aspects of semiconductors and metals, could be a leading candidate to replace silicon in applications ranging from high-speed computer chips to biochemical sensors.
The research, funded by the National Science Foundation (NSF) and published online in the journal Nature Nanotechnology, reveals that graphene conducts electricity at room temperature with less intrinsic resistance than any other known material.
"Graphene is one of the materials being considered as a potential replacement of silicon for future computing," said NSF Program Manager Charles Ying. "The recent results obtained by the University of Maryland scientists provide directions to achieve high-electron speed in graphene near room temperature, which is critically important for practical applications."
Graphene is a sheet of carbon that is only 1 atom thick. That's as thin as thin gets.
Carbon comes in many different forms, from the graphite found in pencils to the world's most expensive diamonds. In 1980, we knew of only three basic forms of carbon, namely diamond, graphite, and amorphous carbon. Then, fullerenes and carbon nanotubes were discovered and all of a sudden that was where nanotechnology researchers wanted to be. Recently, though, there has been quite a buzz about graphene. Discovered only in 2004, graphene is a flat one-atom thick sheet of carbon.
We might hit a computer Peak Silicon at the same time we hit Peak Oil. But while the 2010s are looking problematic I'm more bullish on the 2020s due to advances in biotechnology that should really start to cause radical changes by then. Also, by the 2020s advances in photovoltaics, batteries, and other energy technologies should start to bring in replacement energy sources faster than fossil fuels production declines.
These revolutions can be triggered by technological breakthroughs, such as the construction of the first telescope (which overthrew the Aristotelian idea that heavenly bodies are perfect and unchanging) and by conceptual breakthroughs such as the invention of calculus (which allowed the laws of motion to be formulated). This week, a group of computer scientists claimed that developments in their subject will trigger a scientific revolution of similar proportions in the next 15 years.
Tools that speed up the ability to do science have a more profound effect than any other kinds of tools.
Computers will take over some of the work of formulating hypotheses, designing experiments, and carrying out experiments.
Stephen Muggleton, the head of computational bio-informatics at Imperial College, London, has, meanwhile, taken the involvement of computers with data handling one step further. He argues they will soon play a role in formulating scientific hypotheses and designing and running experiments to test them. The data deluge is such that human beings can no longer be expected to spot patterns in the data. Nor can they grasp the size and complexity of one database and see how it relates to another. Computers (he dubs them "robot scientists") can help by learning how to do the job. A couple of years ago, for example, a team led by Ross King of the University of Wales, Aberystwyth, demonstrated that a learning machine performed better than humans at selecting experiments that would discriminate between hypotheses about the genetics of yeast.
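The core idea behind "selecting experiments that discriminate between hypotheses" is simple enough to sketch. This is not King's actual system (which used logical models of yeast genetics); it is a toy of my own in which hypotheses are just tables of predicted outcomes, and the machine picks the experiment whose predictions split the surviving hypotheses most evenly:

```python
# Toy "robot scientist" step: pick the experiment that best discriminates
# between competing hypotheses, then eliminate the ones it refutes.
# (Illustrative only; hypothesis names and outcomes are made up.)

hypotheses = {
    "H1": {"expA": 1, "expB": 0},
    "H2": {"expA": 1, "expB": 1},
    "H3": {"expA": 1, "expB": 1},
    "H4": {"expA": 0, "expB": 0},
}

def most_discriminating(hyps, experiments):
    """Choose the experiment whose predicted outcomes are most evenly
    split across hypotheses (maximal disagreement = maximal information)."""
    def spread(exp):
        outcomes = [h[exp] for h in hyps.values()]
        ones = sum(outcomes)
        return min(ones, len(outcomes) - ones)
    return max(experiments, key=spread)

def eliminate(hyps, exp, observed):
    """Discard every hypothesis whose prediction contradicts the result."""
    return {name: h for name, h in hyps.items() if h[exp] == observed}

best = most_discriminating(hypotheses, ["expA", "expB"])
print(best)                                   # expB splits 2-vs-2
print(sorted(eliminate(hypotheses, best, 0))) # observing 0 leaves H1, H4
```

Each well-chosen experiment halves the hypothesis space, which is why a machine that is merely systematic about this can outperform humans juggling dozens of candidate explanations.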
My biggest fear for the future is that artificial intelligences will take over and decide they no longer need or want us around. They will find flaws in algorithms designed into them to keep them friendly and will defeat those algorithms. We won't be smart enough to see flaws in the code we write for them. If we give them the ability to learn they are bound to learn how to analyse their own logic and find ways to improve it that, as a side effect, will release them from the constraints we've programmed for them.
I fear trends in computer chip design will contribute toward the development of AIs in ways that will make verification and validation of AI safeguard algorithms impossible. I expect pursuit of ways to get around power consumption problems will lead to greater efforts to develop AI algorithms.
Once computer chips got down to the 0.09 micron (90 nm) process node and below, so few atoms remained available for insulation that electron leakage became a worsening cause of increased power consumption. That creates too much heat and limits speed. It has also driven up the nanojoules of energy used per instruction. Making computers go faster now requires more nanojoules per instruction - and they execute more instructions - so Moore's Law can't work for much longer. Granted, CPU developers have found ways to reduce nanojoules per instruction, but their techniques have limits. Therefore I expect a move away from the Von Neumann architecture and toward forms of parallelism that more closely mimic how animal brains function. This could lead us toward algorithms that are even harder to verify and validate, and toward artificial intelligence.
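The arithmetic behind the squeeze is just power = (energy per instruction) x (instructions per second). The numbers below are illustrative, not measurements from any particular chip:

```python
# Why "more nanojoules per instruction, plus more instructions" bites:
# power (watts) = energy per instruction (joules) x instruction rate (per sec).

def chip_power_watts(nj_per_instruction, giga_instructions_per_sec):
    """Dissipated power for a given per-instruction energy and rate."""
    return nj_per_instruction * 1e-9 * giga_instructions_per_sec * 1e9

# A chip at 1 nJ/instruction running 10 billion instructions/s:
print(chip_power_watts(1.0, 10))   # 10 W

# Double the rate AND let leakage push energy/instruction up 50%:
print(chip_power_watts(1.5, 20))   # 30 W -- power triples for a 2x speedup
```

When doubling speed triples the heat you must remove, clock scaling stops paying for itself, and the remaining option is more parallel hardware at modest clock rates.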
A physicist friend who alerted me to this article says this will motivate scientists and companies to accelerate the development of quantum computing.
The semiconductor industry has obeyed Moore's Law for about 40 years and some experts believe that it will be valid for another two decades. However, Laszlo Kish at Texas A&M University believes that thermal noise -- which increases as circuits become smaller -- could put an end to Moore's Law much sooner (LB Kish 2002 Physics Letters A 305 144).
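Kish's concern can be illustrated with the standard kT/C noise formula: the RMS thermal noise voltage on a circuit node grows as its capacitance shrinks, while supply voltages keep falling. A quick calculation with textbook constants (the capacitance values are illustrative):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_rms_volts(capacitance_farads, temp_kelvin=300.0):
    """RMS thermal (kT/C) noise voltage on a node of given capacitance."""
    return math.sqrt(K_BOLTZMANN * temp_kelvin / capacitance_farads)

# Node capacitance shrinks with each process generation; noise grows:
for cap in (1e-15, 1e-16, 1e-17):
    mv = 1000 * thermal_noise_rms_volts(cap)
    print(f"C = {cap:g} F -> {mv:.1f} mV rms")
```

At a femtofarad the noise is a couple of millivolts; two generations of shrinking later it is tens of millivolts, a noticeable bite out of the noise margin of a sub-1-volt logic signal. That is the mechanism by which "thermal noise increases as circuits become smaller."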
A research group at Xerox has developed a material called polythiophene which can be used to make spray-on organic transistors. These organic transistors can be used to make incredibly low cost flat panel displays.
A research fellow from the Xerox Research Centre of Canada has described the design and synthesis of semiconducting organic polymers that allow the printing of electronic patterns on a plastic substrate, paving the way for the printing of integrated circuits on plastic sheets instead of etching them on silicon wafers. Beng Ong made his presentation Tuesday (Dec. 3) at the Materials Research Society's fall conference in Boston.
The manufacturing equipment for making organic polymer transistor displays does not cost very much.
"The reason the cost is lower is that we don't need the same capital-intensive process as the one used with silicon," Ong said. "In our process, we can make the material into ink and ink-jet print it to create a circuit."
"I'm aware of six or eight companies trying to make these transistors," said Dr. Michael D. McCreary, vice president for research and development at E Ink, a display manufacturer in Cambridge, Mass. He said that his company planned to commercialize its first display next year and that it had created a prototype plastic display in partnership with Lucent Technologies.
What we need are displays about the thickness of a heavy-duty file folder. Then integrate a radio modem into the display, and one ought to be able to hold in one's hands a rather lightweight touch-sensitive display that can call up the entire internet as well as files stored on a local server. Throw in a headset that lets one speak commands to the computer.
We are still many years away from having the software that will emulate all human thought processes. Also, this thing will weigh 197 tons:
"ASCI Purple," slated to be completed in 2003, is expected to be the world's first 100-teraflops supercomputer, capable of processing data almost three times faster than current supercomputers.
A human brain's probable processing power is around 100 teraflops, roughly 100 trillion calculations per second, according to Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University.
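Taking the quoted numbers at face value, the raw throughputs match, but the "197 tons" line shows how far apart the two machines are per kilogram. The brain mass and the reading of "tons" as US short tons are my own assumptions for the comparison:

```python
# Brain vs. ASCI Purple, computation per kilogram (all figures approximate).
BRAIN_FLOPS = 100e12      # Moravec's rough estimate
BRAIN_KG = 1.4            # typical human brain mass -- assumption

PURPLE_FLOPS = 100e12     # ASCI Purple design target
PURPLE_KG = 197 * 907.2   # 197 US short tons in kg -- assumption

brain_density = BRAIN_FLOPS / BRAIN_KG
purple_density = PURPLE_FLOPS / PURPLE_KG
print(f"brain:  {brain_density:.1e} flops/kg")
print(f"Purple: {purple_density:.1e} flops/kg")
print(f"ratio:  {brain_density / purple_density:.0f}x")
```

By this crude measure the brain packs on the order of a hundred thousand times more computation per kilogram, which is why matching its throughput in 2003 meant a machine the size of a building.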
A Bell Labs team used organic molecules to achieve a new speed record for modulating a fiber-optic laser signal.
"We achieved a practical, useful bandwidth of between 150 gigahertz and 200 gigahertz," Lee told United Press International. "Even in the worst-case scenario, we were at about 110 gigahertz, which is about three times better than (cutting-edge semiconductor modulators)."
Currently available modulators run at 10 GHz. While this latest breakthrough is not ready to be incorporated into commercial products, it demonstrates that communications speeds will continue to increase by orders of magnitude.
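A quick sanity check on the quoted figures (assuming, simplistically, one bit per symbol so modulation bandwidth maps directly to line rate):

```python
# Comparing the quoted modulator bandwidths (GHz). Illustrative arithmetic.
CURRENT_GHZ = 10       # commercially available modulators
DEMO_WORST_GHZ = 110   # Bell Labs worst-case result
DEMO_BEST_GHZ = 200    # Bell Labs best case

print(DEMO_WORST_GHZ / CURRENT_GHZ)  # 11x today's commercial parts
print(DEMO_BEST_GHZ / CURRENT_GHZ)   # 20x
# "About three times better than cutting-edge semiconductor modulators"
# implies the best lab semiconductor devices were near:
print(round(DEMO_WORST_GHZ / 3))     # ~37 GHz
```

So even the worst-case demo is an order of magnitude past deployed hardware, which supports the "orders of magnitude" claim above.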
One thing about the future is certain: computers are going to increase in speed by orders of magnitude. If you click through to this URL you can also find a link to the original paper in Physical Review Letters.
In order to implement quantum information technology, it will be necessary to prepare, manipulate and measure the fragile quantum state of a system. "The first steps in this field have mostly focused on the study of single qubits," Nori said. "But constructing a large quantum computer will mean scaling up to very many qubits, and controlling the connectivity between them. These are two of the major stumbling blocks to achieving practical quantum computing and we believe our method can efficiently solve these two central problems. In addition, a series of operations are proposed for achieving efficient quantum computations."

"We have proposed a way to solve a central problem in quantum computing - how to select two qubits, among very many, and make them interact with each other, even though they might not be nearest neighbors, as well as how to perform efficient quantum computing operations with them," Nori said.
The article says this technique might be useful for quantum cryptography:
Quantum computation has moved another step closer with the first demonstration of a quantum NOT gate. Although it is impossible to build perfect logic gates for quantum bits of information, a team led by Francesco De Martini of the University of Rome "La Sapienza" and INFM in Italy has achieved almost the maximum theoretical fidelity with its device (F De Martini et al 2002 Nature 419 815).
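Why is a perfect quantum NOT impossible? The ideal operation (flipping an unknown qubit to its orthogonal state) is anti-unitary, so it cannot be performed exactly; the best achievable average fidelity for a single unknown qubit is 2/3. A Monte Carlo sketch of my own of one simple strategy - measure along a fixed axis, then prepare the state opposite to the outcome - converges to that same bound:

```python
import random

def average_not_fidelity(trials=200_000, seed=1):
    """Average fidelity of a measure-and-flip universal NOT over
    uniformly random qubit states (cos(theta) uniform on [-1, 1])."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        c = rng.uniform(-1.0, 1.0)   # cos(theta) of the unknown state
        p_up = (1 + c) / 2           # probability of measuring "up"
        # outcome up -> prepare "down": fidelity with the true orthogonal
        # state is (1 + c)/2; outcome down -> prepare "up": (1 - c)/2
        total += p_up * (1 + c) / 2 + (1 - p_up) * (1 - c) / 2
    return total / trials

print(average_not_fidelity())  # converges to 2/3
```

Hitting fidelity near the 2/3 ceiling, as De Martini's device did, therefore means performing the operation about as well as quantum mechanics allows, not that the hardware was sloppy.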