Harvard chemistry professor Alán Aspuru-Guzik has developed software that can find better organic electronic materials.
A chemical compound designed with the aid of a Harvard-created computer program has turned out to be one of the best organic electronic materials to date. This new material, an organic semiconductor, could be used to make new electronics such as colorful displays that roll up. It's an important proof of principle for using computers to aid materials design.
The article also mentions a computational model used to develop faster-charging batteries, which A123Systems is now in the process of commercializing.
Why does this matter? If more advances in batteries, photovoltaic materials, and other key areas of materials science could be achieved with computational experiments rather than with time- and labor-intensive physical experiments, then the rate of advance in many fields would accelerate. How much will this happen as successive generations of computers go through further doublings in speed? Are we more limited by a lack of computational power? By a lack of sufficient quantity and quality of software to run the virtual experiments? Or by insufficient knowledge of the underlying physical laws, so that we do not know what to code into a simulation in the first place?
The great hope of the Singularitarians is that the creation of artificial intelligence (AI) will enable a sustained, dramatic acceleration of technological advance. But if so, mainly how? Primarily by the AI formulating and physically testing more hypotheses? More by the AI writing more and better software to simulate and test hypotheses? Or by the AI designing new generations of computers more rapidly, so that computational power increases faster? You might say "yes" to all of those. But surely they are not all equally important. So which bottleneck will AI do the most to break through?
On the one hand, I worry that AI will inevitably turn on us. On the other hand, I wonder whether AI is really capable of delivering on the biggest promises made for it. My skepticism has two sources:
Each doubling in computational power seems to yield less net gain in utility (at least per person in industrialized countries) than the doubling before it, even though each doubling delivers twice as much additional capacity as its predecessor. You have orders of magnitude more computational power at your beck and call than you did 20 years ago. Do you feel much richer? Returns on computational power appear to be sharply diminishing. Will we reach some threshold of computational power (or software sophistication) where the returns swing up dramatically?
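The diminishing-returns claim above can be made concrete with a toy model. This is purely illustrative and not fitted to any data: assume utility grows sub-logarithmically in compute, say u(c) = sqrt(log2 c) (my choice of function, not anything from the post). Then each successive doubling adds a smaller utility increment than the one before it, even though it adds twice as many raw cycles:

```python
import math

def utility(compute: float) -> float:
    """Toy sub-logarithmic utility model (an illustrative assumption):
    u(c) = sqrt(log2 c)."""
    return math.sqrt(math.log2(compute))

# Utility gain from each successive doubling of compute: 2 -> 4 -> ... -> 2048.
gains = []
for k in range(1, 11):
    before, after = 2.0 ** k, 2.0 ** (k + 1)
    gains.append(utility(after) - utility(before))

# Each doubling is twice the size of the previous one in raw cycles,
# yet under this model it yields a smaller utility increment.
assert all(later < earlier for earlier, later in zip(gains, gains[1:]))
print([round(g, 3) for g in gains])
```

Under this assumed curve, utility at 2^k units of compute is simply sqrt(k), so the k-th doubling adds sqrt(k+1) - sqrt(k), which shrinks as k grows. Whether real-world utility actually follows such a curve is exactly the open question.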
It could be that I'm underestimating the net gain from higher computational power, because some of that added utility compensates for other things going wrong that would otherwise lower our quality of life. For example, computers lower the cost of resource extraction even as extraction becomes more expensive due to the declining quality of remaining mineral deposits and fossil fuels. If that is happening, however, computers need to do a much better job of producing solutions to resource limitations. Hence the importance of computers doing science to develop better batteries, photovoltaic materials, nuclear reactor designs, and the like.
If we are going to reach a point where the returns on new generations of computers rise sharply, why would that happen? My guess: software that writes its own new versions with much higher quality and at a far faster rate than humans can. But if that happens, will the bigger gain come from automation or from a resulting much faster rate of useful scientific discovery? More automation by computers seems the more certain bet. Whether AI can solve the problem of a lack of low-hanging fruit remains to be seen.
Randall Parker, 2011 August 27 01:09 PM | Computing AI