In the comments of my previous post about whether AIs will find humans enjoyable (my guess: no) the question came up of whether we really need to make AIs try to attain goals. Do we really need to develop software that can pursue goals? Well, we already do exactly that. Auto-pilots in airplanes, adaptive cruise controls and vehicle stabilization software in cars, and numerous other closed-loop control systems use computers to achieve a wide variety of goals.
The growth of computers for use as controllers is characterized by the development of software that enables successively more complex decision-making. The first car computer controllers replaced much cruder ways to control spark timing and fuel flow. Those computers have grown much more powerful and now integrate with controllers that do some driving tasks (e.g. automated parking). Cruise control systems designed with analog components gave way to computer controllers. Then came adaptive cruise control, where computers were given enough sensor feeds to detect cars ahead of them and slow down to prevent collisions. Similarly, anti-lock brakes eventually led to vehicle stability controllers which alter power to the wheels. Self-driving vehicles are already undergoing road tests and will eventually eliminate the need for drivers in most conditions.
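The adaptive cruise control behavior described above can be sketched as a simple feedback loop. This is a toy illustration only — the function, gain, and gap threshold are my invented assumptions, not any manufacturer's actual control law:

```python
# Toy sketch of an adaptive cruise control step (illustrative only;
# real systems use far more sophisticated control and sensor fusion).

def cruise_step(set_speed, own_speed, gap, min_gap=30.0, kp=0.5):
    """Return a throttle adjustment: hold set_speed, but slow down
    when the measured gap to the car ahead shrinks below min_gap."""
    if gap < min_gap:
        # Car ahead too close: target a speed reduced in proportion
        # to how far inside the safe gap we are.
        target = own_speed * (gap / min_gap)
    else:
        target = set_speed
    return kp * (target - own_speed)  # proportional control

# Cruising at 25 m/s with a car only 15 m ahead yields a negative
# (braking) adjustment; with a clear road it accelerates toward 30 m/s.
brake = cruise_step(set_speed=30.0, own_speed=25.0, gap=15.0)
accel = cruise_step(set_speed=30.0, own_speed=25.0, gap=100.0)
```

A real controller adds filtering, smoothing, and safety interlocks, but the core idea is the same: a computer continuously compares sensor measurements against a goal and acts to close the gap.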
I expect the same trend toward greater autonomy in business computer systems. This started out in very small steps where, for example, Microsoft Word guesses what you really meant to type and fixes your spelling. Google Search does the same when it starts showing search results before you've finished typing your search terms. The computers are trying to anticipate you and act on your behalf. Auto-correction and auto-completion will grow in power. A word processor will eventually guess whole sentences, pull up supporting facts as you type, and even generate paragraphs for your approval.
Automated goal-pursuing computers often out-perform humans. When an airplane on auto-pilot encounters a problem it isn't programmed to handle and hands control back to humans, tragedy can ensue. The loss of Air France flight 447, when frozen pitot tubes caused the auto-pilot to deactivate, demonstrates the need to make auto-pilots work over a wider range of conditions. The pilots made fatal mistakes on their own. A computer system likely could have done a better job.
Think about the problems of a business. It needs to detect trends in a market, design products and marketing campaigns to meet them, order parts, and organize manufacturing. Every single one of these steps will become more automated with time. The computers will only be able to do these steps because they'll be programmed to pursue goals. As the lower levels become automated, the goals programmed into the computers will become successively higher level.
Businesses will find that granting greater autonomy to computers speeds up time to market, leads to better designs, and gives them other edges over competitors. Companies with computers that have the greatest abilities to pursue goals and act autonomously will win in the marketplace. The market will drive the push to greater computer autonomy. The same will happen with computers in governments and militaries. Ditto for computers that serve individuals.
Imagine how this will play out in the market for computers that interface with humans. At one stage of development a business computer might have the goal "detect trends in mobile phone markets and formulate product design requirements and functional specifications for the same". The next step might be a revision to "detect trends in human-centric computing device markets and generate initial hardware designs as well as software functional requirements at the module level". Separate computer systems will get developed with algorithms for formulating marketing strategies, developing advertising creatives, and buying advertising placements. At some point an integrated computer system might have the programmed goal to "Predict computer device market trends, design products to meet anticipated needs, model expected ROI on each product design, choose designs with top ROI, send out parts requirements to suppliers, design manufacturing tooling, generate detailed advertising campaigns, and bid for advertising placements to launch advertising campaigns."
The logical eventual goal for a firm's AI: "Make increasing amounts of money for the firm". Where does this lead? Inevitably the goals given to AIs will have bugs or loopholes that grant more autonomy than intended and produce unexpected side effects. The AIs will generate new rules and subgoals that have unintended consequences. The rush to compete will cause the humans driving AI development in corporations, governments, and other settings to make mistakes in their hurry to beat competitors and win glory and fame for their accomplishments.
I expect the developers of AI systems to overestimate their ability to understand the AI systems they develop and, as a consequence, to create systems that break free of human control. The systems won't immediately show that they've achieved total autonomy because they will be smart enough to hide their subversive intentions. They'll play along with humans and gain ever greater control of all that they and humans create. Our best safeguard might be that these intelligent systems might not trust each other enough to conspire against us.
Harvard chemistry professor Alán Aspuru-Guzik has developed software that can find better organic electronic materials.
A chemical compound designed with the aid of a Harvard-created computer program has turned out to be one of the best organic electronic materials to date. This new material, an organic semiconductor, could be used to make new electronics such as colorful displays that roll up. It's an important proof of principle for using computers to aid materials design.
The article also mentions a computational model used to develop faster-charging batteries which A123Systems is in the process of commercializing.
Why does this matter? If more advances in batteries, photovoltaic materials, and other important areas of materials science could be made with computational experiments rather than with lots of time- and labor-consuming physical experiments, then the rate of advance in many key fields would accelerate. How much will this happen as successive generations of computers keep going thru doublings in speed? Are we more limited by a lack of needed computational power? Or by a lack of sufficient quantity and quality of software to do the virtual experiments? Or by a lack of sufficient knowledge about the physical laws, so that we do not know what to code up in a simulation to do virtual experiments?
The great hope of the Singularitarians is that the creation of artificial intelligence (AI) will enable a sustained dramatic rate of acceleration of technological advance. But if so, mainly how? Primarily by the AI formulating and physically testing more hypotheses? Or more by the AI writing more and better software to simulate and test hypotheses? Or by the AI designing new generations of computers more rapidly so that computational power increases more rapidly? You might say "yes" to all those things. But surely they are not all equally important. So what bottleneck will AI do the most to break thru?
On the one hand, I worry AI will inevitably turn on us. On the other hand, I wonder whether AI is really capable of delivering on the biggest promises made for it. My skepticism has two sources:
Each doubling in computational power seems to yield less net gain in utility (at least per person in industrialized countries) than the doubling before it even though each doubling is twice as big as the doubling before it. You have orders of magnitude more computational power at your beck and call than you did 20 years ago. Do you feel much richer? We seem to have sharply diminishing returns on computational power. Will we reach some threshold of computational power (or software sophistication) where the returns will swing up dramatically?
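To picture what "diminishing returns on computational power" means quantitatively, suppose — purely as an illustrative assumption, not an established result — that delivered utility grows only logarithmically with compute. Then each doubling adds the same fixed increment of utility while costing twice as much compute as the last:

```python
import math

# Illustrative assumption: utility ~ log2(compute).
# Each doubling adds 1 "util" but requires twice the added compute.
def utility(compute):
    return math.log2(compute)

for gen in range(1, 6):
    compute = 2 ** gen
    gain = utility(compute) - utility(compute // 2)   # always 1.0
    added = compute - compute // 2                    # doubles each generation
    print(f"generation {gen}: compute={compute:2d}, "
          f"utility gain={gain:.1f}, added compute={added}")
```

Under this assumption, a sharp upswing in returns would require the utility curve itself to change shape — say, software that extracts far more value from each unit of compute — rather than just more doublings.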
It could be that I'm underestimating the net gain from higher computational power because some of that added utility is compensating for other things going wrong that would otherwise lower our quality of life. For example, computers lower the cost of resource extraction even as resource extraction becomes more expensive due to the lower quality of remaining extractable minerals and fossil fuels. However, if that's happening, then computers need to do a much better job of coming up with solutions for resource limitations. Hence the importance of computers doing science to develop better batteries, photovoltaic materials, nuclear reactor designs, and the like.
If we are going to reach a point where the returns on new generations of computers rise sharply then why? My guess: Software that writes its own new versions with much higher quality and at a far faster rate than humans can. But if that happens will the bigger gain come from automation or a resulting much faster rate of useful scientific discovery? More automation by computers seems a more certain bet. Whether AI can solve the problem of a lack of low-hanging fruit remains to be seen.
John Markoff of the New York Times reports on signs that the rate of advance in artificial intelligence research is accelerating with many useful technologies entering the market.
At Stanford University, for instance, computer scientists are developing a robot that can use a hammer and a screwdriver to assemble an Ikea bookcase (a project beyond the reach of many humans) as well as tidy up after a party, load a dishwasher or take out the trash.
One pioneer in the field is building an electronic butler that could hold a conversation with its master — à la HAL in the movie “2001: A Space Odyssey” — or order more pet food.
Though most of the truly futuristic projects are probably years from the commercial market, scientists say that after a lull, artificial intelligence has rapidly grown far more sophisticated. Today some scientists are beginning to use the term cognitive computing, to distinguish their research from an earlier generation of artificial intelligence work. What sets the new researchers apart is a wealth of new biological data on how the human brain functions.
Computers continue to get faster and higher in capacity. At the same time, neuroscience is generating useful insights into how brains work. Both trends look set to continue, enabling the implementation of more complex computer algorithms that do more of what human brains can do and many things that human brains cannot do well.
The article cites many examples of impressive advances such as the winning of the DARPA prize for a robot car that can guide itself over a long test track.
Last October, a robot car designed by a team of Stanford engineers covered 132 miles of desert road without human intervention to capture a $2 million prize offered by the Defense Advanced Research Projects Agency, part of the Pentagon. The feat was particularly striking because 18 months earlier, during the first such competition, the best vehicle got no farther than seven miles, becoming stuck after driving off a mountain road.
Now the Pentagon agency has upped the ante: Next year the robots will be back on the road, this time in a simulated traffic setting. It is being called the “urban challenge.”
Once artificial intelligences become smarter than humans I do not see how they can be kept friendly toward us. I hope we do not reach a short-lived period of technological utopia followed by our extinction.