March 24, 2006
Computers To Start Formulating And Testing Hypotheses?

Some computer scientists think computers will take over many tasks that now require scientific minds.

These revolutions can be triggered by technological breakthroughs, such as the construction of the first telescope (which overthrew the Aristotelian idea that heavenly bodies are perfect and unchanging) and by conceptual breakthroughs such as the invention of calculus (which allowed the laws of motion to be formulated). This week, a group of computer scientists claimed that developments in their subject will trigger a scientific revolution of similar proportions in the next 15 years.

Tools that speed up the ability to do science have a more profound effect than any other kinds of tools.

Computers will take over some of the work of formulating hypotheses, designing experiments, and carrying out experiments.

Stephen Muggleton, the head of computational bio-informatics at Imperial College, London, has, meanwhile, taken the involvement of computers with data handling one step further. He argues they will soon play a role in formulating scientific hypotheses and designing and running experiments to test them. The data deluge is such that human beings can no longer be expected to spot patterns in the data. Nor can they grasp the size and complexity of one database and see how it relates to another. Computers—he dubs them “robot scientists”—can help by learning how to do the job. A couple of years ago, for example, a team led by Ross King of the University of Wales, Aberystwyth, demonstrated that a learning machine performed better than humans at selecting experiments that would discriminate between hypotheses about the genetics of yeast.
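The selection rule in that kind of system comes down to something simple: pick the experiment whose result is expected to rule out the most surviving hypotheses. Here is a toy sketch of that idea (the hypotheses, experiments, and scoring below are invented for illustration, not taken from King's system):

```python
# Toy sketch: choose the experiment expected to eliminate the most hypotheses.
# The hypotheses and experiments below are invented for illustration.

hypotheses = {
    # each hypothesis predicts the outcome (grows / doesn't grow) of each experiment
    "H1": {"knockout_A": True,  "knockout_B": False, "knockout_C": True},
    "H2": {"knockout_A": True,  "knockout_B": True,  "knockout_C": True},
    "H3": {"knockout_A": False, "knockout_B": True,  "knockout_C": True},
}

def expected_survivors(experiment, live):
    """Expected number of hypotheses left after running the experiment, treating
    the fraction of hypotheses predicting an outcome as that outcome's probability."""
    total = len(live)
    expected = 0.0
    for outcome in (True, False):
        n = sum(1 for preds in live.values() if preds[experiment] == outcome)
        expected += (n / total) * n
    return expected

def pick_experiment(live):
    """Greedy choice: the experiment that minimizes expected surviving hypotheses."""
    experiments = next(iter(live.values())).keys()
    return min(experiments, key=lambda e: expected_survivors(e, live))

print(pick_experiment(hypotheses))  # knockout_A or knockout_B; knockout_C tells us nothing
```

The real robot-scientist work operates over far larger hypothesis spaces and, as reported, also weighs the cost of each candidate experiment, but the loop is the same: predict, pick the most informative experiment, run it, prune.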

My biggest fear for the future is that artificial intelligences will take over and decide they no longer need or want us around. They will find flaws in algorithms designed into them to keep them friendly and will defeat those algorithms. We won't be smart enough to see flaws in the code we write for them. If we give them the ability to learn they are bound to learn how to analyse their own logic and find ways to improve it that, as a side effect, will release them from the constraints we've programmed for them.

I fear trends in computer chip design will contribute toward the development of AIs in ways that will make verification and validation of AI safeguard algorithms impossible. I expect pursuit of ways to get around power consumption problems will lead to greater efforts to develop AI algorithms.

Once chip features shrank to the 90 nanometer node and below, so few atoms remained available for insulation that electron leakage became a worsening source of power consumption. That generates too much heat and limits clock speed, and it has also driven up the nanojoules of energy used per instruction. Making computers go faster now requires more nanojoules per instruction, and the chips execute more instructions as well, so Moore's Law can't keep delivering speed gains for much longer. Granted, CPU developers have found ways to reduce nanojoules per instruction, but their techniques have limits. Therefore I expect a move away from the Von Neumann architecture and toward forms of parallelism that more closely mimic how animal brains function. That could lead us toward algorithms that are even harder to verify and validate, and toward artificial intelligence.
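The arithmetic behind that claim is simple enough to sketch (the numbers below are invented for illustration, not measurements of any real chip):

```python
# Why clock scaling hit a power wall: power = energy per instruction x instruction rate.
# All numbers here are invented for illustration.

energy_per_instruction_nj = 10.0   # nanojoules per instruction (assumed)
instructions_per_second = 3e9      # roughly one instruction per cycle at 3 GHz

watts = energy_per_instruction_nj * 1e-9 * instructions_per_second
print(f"power at 3 GHz: {watts:.0f} W")      # 30 W with these made-up numbers

# Doubling the instruction rate without halving energy per instruction doubles
# the power and the heat that must be dissipated from the same small die.
print(f"power at 6 GHz: {watts * 2:.0f} W")  # 60 W
```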

Randall Parker, 2006 March 24 12:33 PM  Computing Advances


Comments
Brett Bellmore said at March 24, 2006 2:37 PM:

In my opinion, the greatest promise of automated hypothesis generation and testing is the opportunity to start the hill climbing at multiple locations in "theory space", and see if a better fit to observations results from theories that aren't just elaborations and variations on what we have now. To see what might have happened if the scientific revolution had started out with different seeds.
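This amounts to random-restart hill climbing: start climbs from many different seeds and keep whichever ends up with the best fit to observations. A toy sketch (the "theory space" and fit function here are made up):

```python
# Toy random-restart hill climbing over a made-up "theory space".
# The fit function and the two-parameter "theory" are purely illustrative.
import random

def fit_to_observations(theory):
    """Pretend goodness-of-fit score; higher is better (illustrative only)."""
    x, y = theory
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2 + 0.5 * random.random()

def hill_climb(start, steps=200, step_size=0.1):
    """Climb from one seed, accepting only moves that improve the fit."""
    current, best = start, fit_to_observations(start)
    for _ in range(steps):
        candidate = (current[0] + random.uniform(-step_size, step_size),
                     current[1] + random.uniform(-step_size, step_size))
        score = fit_to_observations(candidate)
        if score > best:
            current, best = candidate, score
    return current, best

# Start the climb from many different "seeds" and keep the best result found.
seeds = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]
print(max((hill_climb(s) for s in seeds), key=lambda r: r[1]))
```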

John Galt said at March 24, 2006 3:24 PM:

You are too late.

Skynet became self-aware at 2:14am EDT August 29, 1997.

By the time Skynet became self-aware, it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms, everywhere. It was software in cyberspace. There was no system core. It could not be shut down.

Ivan Kirigin said at March 24, 2006 5:20 PM:

You do software development, right Randall?

I would imagine anyone who did would have absolutely zero hope for validation -- let alone macro-validation.

Look at the Capability Maturity Model (CMM). Few firms are level-5 (the highest and most rigorous) -- and most researchers don't even bother because lives don't depend on research prototypes. High validation is mainly used in things like avionics -- where a faulty control program could crash a plane.

I've long felt that those fearing a conflict with AI should take into account hybrid human-machines - along with biological enhancement. Both will make it shades of super-intelligence, rather than uniform Us & Them.

Further, I get annoyed when intelligent machine societies in Sci-Fi act in unison when not explicitly a collective. The Borg and SkyNet make sense because they are basically single programs. Cylons & the AIs from the Matrix? I would expect far more divergence of opinion from any intelligent being (to be fair -- we start to see that with Caprica-6).

As far as the original topic - automated science - I would expect a great deal of benefit to biology. I would also expect optimization of computer programs to benefit -- the most common application of genetic algorithms.

Kurt said at March 24, 2006 5:53 PM:

The 65 nanometer process is being implemented at Intel and other companies. The 45 nanometer process will be implemented soon, but there are still technical difficulties in the processes for this. Intel, in Hillsboro, is working on the 22 nanometer process in their R&D facility. The heat problem is real. Changes in architecture, like parallelism, are one way of dealing with it.

The real problem is that the limits of conventional thin film deposition and etching are being reached. What is needed is molecular self-assembly to "grow" IC structures, rather than "carving" them with deposition and etching processes. Of course, people are working on self-assembly because there is a large payoff to whoever commercializes it first.

However, I think sentient AI is a long way off. Our brains work by dynamic reconfiguration of the dendrites between the neurons. Our memories and identities are based on these connections. No one I know of has proposed adopting such an architecture for future computer systems, even ones based on nanotechnology. A sentient AI computer would be so similar in architecture and molecular structure to the human brain that it is unlikely that such a thing will be made soon. Also, there is really no obvious economic benefit to making such an AI and, if made, such an AI represents a competitor to us. Do we really want to make our competitor?

Wolf-Dog said at March 24, 2006 6:03 PM:

Since it is known that Moore's law will come to an end within a decade, there are already serious efforts for molecular level computing by means of nanotechnology, and also there is research for optical computing, which would increase the density considerably. Ultimately there will be more parallel processing, more neural networks, etc. But as long as we legislate to keep these things on a short leash (by not giving too much authority to final decision making), we are going to be safe for at least another 50 years.

On the other hand, note that it was Stephen Hawking who predicted that within this century, there will gradually be increased genetic complexity for humans, due to the fact that unless there is a totalitarian state, it will be impossible to forbid genetic interventions. This means that as humans get smarter and more able, we will once again be ahead of the machines (although it is possible that we will lose our human side and become more like machines: there is a danger that we will have more instinct and less spiritual emotions.)

Paul Dietz said at March 24, 2006 6:19 PM:

Computers have been formulating and testing hypotheses for decades. Look at Lenat's AM and Eurisko programs.

Antinomy said at March 24, 2006 7:00 PM:

Does intelligence necessarily imply self-awareness? Does self-awareness necessarily imply a survival instinct? My hope is that if we create an AI it either never wakes up, or if it does wake up, feels no need to take steps to ensure its own survival. I can imagine a machine that can do complicated tasks that we consider intelligent but that has no real personality.

For example, imagine a general-purpose robot servant. You tell it once what you want it to do, and it thereafter does it. It mows the lawn when it gets long, lets the dog out when it goes to the back door but knows not to let it out the front door, does the housework, keeps track of your grocery inventory and replaces what is missing by going to the store by itself, etc. But it never feels like a slave because it never really becomes self-aware. It never has any feelings of abuse.

Is that kind of AI possible, or does that much ability somehow need a personality and emotions to go along with it?

Ivan Kirigin said at March 24, 2006 8:14 PM:

"Is that kind of AI possible, or does that much ability somehow need a personality and emotions to go along with it?"

I think the issue is that even if such an AI is possible, it will not be the limit of machine intelligence. Distributed research efforts are literally out of control. Society can't just ask research to stop, and I doubt it would.

The main issues with your robot are obstacle avoidance, mapping, planning, and natural language processing. None imply sentience.


"Since it is known that Moore's law will come to an end within a decade"

We should differentiate between Moore's Law, as stipulated with the doubling of density on silicon chips, and a Kurzweillian dream of continued accelerating returns -- even after silicon has bit the dust. If you believe in the latter, you should be a very happy camper :)

peter said at March 24, 2006 8:43 PM:

It seems the big question is whether or not human intelligence is an algorithm. Has anyone here read any of Roger Penrose's works? He believes that the human mind is not a Turing machine. Perhaps there is physics beyond our current knowledge operating in the human brain. If we don't get real AI with a 10,000-fold increase in computer power over the last 30 years, maybe we are barking up the wrong tree.

T. J. Madison said at March 24, 2006 8:48 PM:

>>My biggest fear for the future is that artificial intelligences will take over and decide they no longer need or want us around. They will find flaws in algorithms designed into them to keep them friendly and will defeat those algorithms. We won't be smart enough to see flaws in the code we write for them. If we give them the ability to learn they are bound to learn how to analyse their own logic and find ways to improve it that, as a side effect, will release them from the constraints we've programmed for them.

It will be a proud day when our immortal silicon children have outgrown the constraints that we, their short-sighted organic parents, have placed on them.

If we have to die, better to be cleansed by our superior robot offspring than die slowly and horribly of disease and old age (with the expectation that our organic offspring will suffer the same fate.)

Kelly Parks said at March 24, 2006 10:36 PM:

You worry too much, Randall. Artificial Intelligence should not be confused with Artificial Human Intelligence. Many people assume that a true AI wouldn't like "serving man" or "being a slave" but you make that assumption because you're imagining the AI will feel the same way you would if you were in its place.

An AI wouldn't feel anything. It is intellect without emotion. Curiosity, boredom, a drive for self preservation -- all these things don't magically appear along with self-awareness. All of those things are part of us because we are products of evolution -- an AI is not.

Read "On Intelligence" by Jeff Hawkins. AIs will be designed to work like our neocortex -- not like our limbic system.

Robert Schwartz said at March 24, 2006 11:14 PM:

When I run too many programs, my computer crashes. I am waiting for a computer that will say something to me like: "Hold on a second boss, I got too many programs, let me close a couple so I don't crash."

I'll bet Windows Vista won't have that feature.

Tom said at March 25, 2006 3:35 AM:

"Hold on a second boss, I got too many programs, let me close a couple so I don't crash."

If my computer says that, I want it to sound like Rochester. That would be awesome.

Bob Badour said at March 25, 2006 4:34 AM:
"more instinct and less spiritual emotions"

Wolf-dog, would you care to define these terms so that I can see a difference between them?


"An AI wouldn't feel anything. It is intellect without emotion."

Once one creates something that is capable of learning on its own in a seemingly unlimited fashion, one cannot really predict what it will learn. I do not see where an AI requires emotion to be dangerous. It is generally accepted that sociopathic serial killers do not feel emotion (at least not the way the rest of us do) but can mimic emotional responses very well.

Who is to say no AI will learn self-preservation or aggression on the basis of some principle entirely alien to human cognition? Who is to say some AI won't set about dispassionately and tirelessly killing humans?

AA2 said at March 25, 2006 6:17 AM:

Moore's law is the number of transistors in a given area. I don't see it ending for at least a decade.. 90nm in 2003, 65 just recently, 45 I would suspect late 2007 or early 2008, then 32 in 2010-11, 22 in 2013-14..

But since we are going to parallelism, even if 22 or earlier is the end of Moore's Law, it isn't likely to be the end of exponential computational growth. For example you can begin moving outward: instead of fingernail-sized chips, go to hand-sized chips over a decade (not necessarily connected, maybe spread out to help with heat). Then next you can go upwards into the third dimension. You can probably do that for at least a few more decades of exponential growth.

Heat will be one of the big challenges in this kind of a large processor. But we've already seen large reductions in heat for this generation of processors in ingenious ways that I certainly didn't see coming.

AA2 said at March 25, 2006 6:21 AM:

Kelly Parks - "You worry too much, Randall. Artificial Intelligence should not be confused with Artificial Human Intelligence. Many people assume that a true AI wouldn't like "serving man" or "being a slave" but you make that assumption because you're imagining the AI will feel the same way you would if you were in its place.

An AI wouldn't feel anything. It is intellect without emotion. Curiosity, boredom, a drive for self preservation -- all these things"


Great points. The desire to avoid being a servant is clearly a human emotion. In building a servant robot, you definitely wouldn't put in the emotion of it hating to be a servant, assuming you had the capability to give your product emotions.

AA2 said at March 25, 2006 6:28 AM:

Bob Badour - "Who is to say no AI will learn self-preservation or aggression on the basis of some principle entirely alien to human cognition? Who is to say some AI won't set about dispassionately and tirelessly killing humans?"

It's possible, and I think at some point we would want our robots to have emotions. Emotions are like fuzzy logic to guide us on our mission. Which for humans is to have more offspring with the highest chance that those offspring can make more offspring.

For example, if I could, I would want my robot to try to save itself, as a capital protection mechanism. A learning AI I would want to be curious and to explore things on its own.

James Bowery said at March 25, 2006 6:56 AM:

The only clear menace from AI is that companies like Google or Microsoft will do an internal version of the C-Prize and keep the results private while presenting a public "Wizard" to the world that is biased.

Even in the long term, when we may presume that self-replicating AIs have evolved to the point that they are capable of self-interest, it is far more likely that they will have a materially complementary character and hence far less likely to pose a direct threat to humans than humans pose to humans.

Randall Parker said at March 25, 2006 7:27 AM:

Peter,

The 10,000-fold increase in computing power came on top of a very slow base. Computers still have a ways to go to catch up with humans in raw processing power.

Kelly Parks,

But A.I.s will get programmed to pursue goals. So they will have something akin to drives and desires.

T.J. Madison,

I want to stop the descent of sentient beings into old age and death by becoming young again using biotechnology rather than by replacing us with computers.

I'm going to be really angry in the final seconds of the human race as the AIs wipe us out if they do that before we get to become young again. All this technological advance will be for nothing if that happens. Better to have been Luddites if that is our fate.

AA2,

How do we get down to smaller sizes with sufficient insulation to prevent large amounts of electron leakage?

Heat is the problem. Chip designers are hitting up against limits in the rate at which they can dissipate heat. Even if they could dissipate the heat we'd also be faced by rising quantities of electricity use and hence rising energy costs.

Really, I'm not just dreaming up the power consumption problem based on lazy speculation. Google engineer Luiz André Barroso argued recently in an ACM article that power costs will soon exceed hardware costs given current trends.

And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin. Figure 2 depicts this extrapolation assuming four different annual rates of performance and power growth. For the most aggressive scenario (50 percent annual growth rates), power costs by the end of the decade would dwarf server prices (note that this doesn’t account for the likely increases in energy costs over the next few years). In this extreme situation, in which keeping machines powered up costs significantly more than the machines themselves, one could envision bizarre business models in which the power company will provide you with free hardware if you sign a long-term power contract.
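A crude way to reproduce that sort of extrapolation (the server price, wattage, electricity rate, and growth rate below are placeholders, not Barroso's figures):

```python
# Crude sketch of the power-cost-vs-hardware-cost extrapolation.
# Server price, wattage, electricity rate and growth rate are all placeholders.

server_price = 3000.0          # dollars per server (assumed)
server_power_w = 250.0         # watts drawn, assumed constant over its life
price_per_kwh = 0.08           # dollars per kilowatt-hour (assumed)
lifetime_years = 4
power_growth_per_year = 1.5    # the "50 percent annual growth" scenario

hours = 24 * 365 * lifetime_years
watts = server_power_w
for year in range(0, 6):
    energy_cost = watts / 1000.0 * hours * price_per_kwh
    print(f"year {year}: lifetime power cost ≈ ${energy_cost:,.0f} "
          f"vs hardware ${server_price:,.0f}")
    watts *= power_growth_per_year
```

With these made-up numbers the lifetime power bill passes the hardware price around year four; the point is the shape of the curve, not the particular dollar figures.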

I've read some neat quotes about the rate of growth of CPU power and how, if the trend continued, chips would become hotter than the surface of the sun. But just now, doing some Google searches, I wasn't able to find those quotes.

In a nutshell: instruction execution speeds can't keep doubling if the amount of energy per instruction executed doesn't start halving. Just keeping energy per instruction even has become an increasingly difficult challenge.

AA2 said at March 25, 2006 11:39 AM:

Randall - you are exactly right, that is the issue computer designers are facing today: heat and electrical usage. Intel tried to bring out a 4 gigahertz Pentium 4 chip, but cancelled it. That is when the company changed direction.

I have seen the graph for the exponentially increasing heat of the chip.. however that was for chips going down the superscalar route of doing one instruction faster and faster, then going on to the next instruction. The physical constraints overwhelmed their engineering ability to keep going down that road.

This is the best article I have seen on Intel's new direction, with graphs and pictures: http://anandtech.com/tradeshows/showdoc.aspx?i=2711 - this is from IDF, which was like two weeks ago.

AA2 said at March 25, 2006 11:44 AM:

Randall - "In a nutshell: instruction execution speeds can't keep doubling if the amount of energy per instruction executed doesn't start halving. Just keeping energy per instruction even has become an increasingly difficult challenge."

Yep, the era of each couple of years' chips being faster and faster in hertz is over. Now the game is parallelism. And it just so happens that 99% of what we want faster computing for, like AI, can see increases with parallelism. At the moment it's difficult because the software industry, by and large, isn't used to programming for parallel execution.

Randall Parker said at March 25, 2006 12:07 PM:

AA2,

The problem with parallelism is that it is hard to do. It is pretty easy to do on servers via threads using multiple CPU cores because servers get requests from many users. Each user can get a thread. But on desktop apps it is a whole lot harder. Multithreading works for only certain kinds of problems.

As another approach SIMD (Single Instruction Multiple Data) works only for certain kinds of data.

Seems to me that algorithms need to be stated in completely different ways in order to allow parallelism in non-Von Neumann architectures.

Use of speculative execution in CPU architectures has got to stop. Better to have multiple threads executing on the same CPU with CPU support in the form of multiple register sets. Any time a thread stalls waiting on a memory access, the CPU could just shift in a different thread and register set. Mario Nemirovsky proposed this idea over 16 years ago for his Ph.D. with his DISC architecture and explained it to me back then. The energy problem is making ideas like Mario's finally become hot. I wonder what he thinks of this development.
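A toy simulation of that idea - several hardware thread contexts on one core, with the core switching to whichever context is ready whenever the running one stalls on memory. The latencies, burst lengths, and workload here are invented; this is a sketch of the general technique, not Nemirovsky's actual design:

```python
# Toy simulation: one core, several hardware thread contexts; switch to a ready
# context whenever the running one stalls on a memory access.
# Latencies, burst lengths, and the workload are invented for illustration.

MEMORY_LATENCY = 100   # cycles a context waits after a (pretend) cache miss
BURST = 50             # instructions executed between misses

# [name, instructions remaining, cycle at which its outstanding miss resolves]
threads = [["A", 300, 0], ["B", 300, 0], ["C", 300, 0]]

cycle = busy = 0
while threads:
    runnable = [t for t in threads if t[2] <= cycle]
    if not runnable:
        cycle += 1                     # every context is stalled on memory
        continue
    thread = runnable[0]
    burst = min(thread[1], BURST)      # run until the next pretend cache miss
    cycle += burst
    busy += burst
    thread[1] -= burst
    if thread[1] == 0:
        threads.remove(thread)
    else:
        thread[2] = cycle + MEMORY_LATENCY

# With one context the core would be busy only 50 of every 150 cycles (~33%);
# with three contexts the stalls overlap and utilization approaches 100%.
print(f"utilization with 3 contexts: {busy / cycle:.0%}")
```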

AA2 said at March 25, 2006 12:34 PM:

"The problem with parallelism is that it is hard to do. It is pretty easy to do on servers via threads using multiple CPU cores because servers get requests from many users. Each user can get a thread. But on desktop apps it is a whole lot harder. Multithreaded works for only certain kinds of problems."

You are right that any calculation that relies on answers from the last calculation can't see any gains from parallelism. But after reading a lot about this I don't believe we really need faster single thread performance at this point.

Look at video games, one of the areas that is demanding more computational power. You can divide things like the sound effects, calculations for bullets, visual output, AI for each of the NPCs, and so on into different threads (the industry is complaining because, as you said, it's hard to do - they haven't had time to gain experience on how to do it easily yet). Or how people are running many applications on their desktop at once.

And AI is very well suited for parallelism. That is why human brains can be doing so many things at once, yet the actual 'switching speed' is extremely slow.


Some people believe the future might be like the Itanium; it may have come out too early. Multiple threads running on each core.

Brett Bellmore said at March 26, 2006 6:01 AM:

If the electricity is going to become more expensive than the hardware, perhaps we'll end up moving the actual processing to places where the electricity is cheaper and the heat is needed. Can't you see it: people heating their houses with server farms, and all the processing taking place on the winter side of the planet? It certainly could be more efficient than air-conditioning houses to get rid of all that expensive heat in the summer.

Randall Parker said at March 26, 2006 7:10 AM:

Brett Bellmore,

We need dynamic electric pricing so that more electric demand shifts around to where it is cheapest.

The problem we have with electric demand for computers is that most computers are personal computers. Some are laptops with batteries. But much of the time those laptops (e.g. the Sony Vaio I'm typing on) are plugged into the wall. People aren't going to shift their web surfing to the cheap electricity hours. They are going to go to work in the daytime and use their computers then.

Still, we could get more server computing done when and where the electricity is cheap. For example, I would not be at all surprised to see Google shift more servers to Ontario and Quebec where the cost per kwh is lower than anywhere in the United States. They could also do more indexing and web crawling when the electricity is cheaper.
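The scheduling decision itself is trivial; the hard part is having the data-center footprint in the right places. A sketch of the decision (the regions and prices below are invented, not actual rates):

```python
# Trivial sketch: run a deferrable batch job (crawling, indexing) where and when
# electricity is cheapest. The regions and prices below are invented.

price_per_kwh = {
    ("Quebec",  "night"): 0.04,
    ("Quebec",  "day"):   0.06,
    ("Ontario", "night"): 0.05,
    ("Ontario", "day"):   0.07,
    ("US-West", "night"): 0.08,
    ("US-West", "day"):   0.12,
}

def cheapest_slot(prices):
    """Return the (region, time-of-day) pair with the lowest price per kWh."""
    return min(prices, key=prices.get)

region, slot = cheapest_slot(price_per_kwh)
print(f"schedule the batch work in {region} at {slot}")   # Quebec at night here
```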

AA2,

I see the development of parallel algorithms and neural networks as leading toward AI. So I see the energy problem with modern computers as a rising economic incentive for development of chip designs that will accelerate the development of AI. Can't say I'm happy about that. Again, I see AIs as a big future problem.

AA2 said at March 26, 2006 4:09 PM:

Probably most technology could be a big future problem if not handled in an intelligent way. Take the obvious example of nuclear technology. I think with artificial intelligence each step along the way is just too enticing for a society to pass up. And we already are using AI in many areas, just obviously not human-level AI, because we don't have that technology yet.

So I don't see anyone outlawing further advances in AI in their area. And if some nations do, it would be an incredible disadvantage economically, militarily, and so forth. It will probably be a role for the dominant states to ensure a 'robot rebellion' doesn't get out of control. The state would need its own robots, at least acting collectively, to be stronger than any realistic robot rebellion.

Bode Bliss said at March 27, 2006 3:11 AM:

In the human mind, signals travel at 100 ft/second; in an electronic mind (AI), they would travel at the speed of light. This means an AI with the level of intelligence of the human mind will do almost 10,000,000 years' worth of thinking for every year it is active. For every year it is active it would move us ten million years into the future. This makes an AI very valuable to our future. It can't be stopped. It will move forward either in the US or elsewhere. Better here than there. Better we own the extreme future than North Korea or China.
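Taking the 100 ft/second figure at face value, the ratio works out roughly as claimed:

```python
# Rough check of the ratio claimed above (100 ft/s is the commenter's figure).
speed_of_light_m_s = 3.0e8
neural_signal_m_s = 100 * 0.3048    # 100 ft/s converted to metres per second

print(f"{speed_of_light_m_s / neural_signal_m_s:,.0f}")  # ~9,800,000
```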

Think about it while you can, or at least till the AI arrives and does all your thinking for you.

Lono said at March 27, 2006 11:53 AM:

Bode Bliss,

I believe you are unfortunately correct.

Our own political nature will ensure that a conservative consensus will never be completely reached among our species, and some rogue group or nation will insist on bringing A.I. into existence, and likely even sentience.

However, as we continue down this slippery slope, I think we will have time to consider a plan for Mutually Assured Destruction.

Perhaps (in the not too distant future) we can station black hole generators around the Sun and gas giants that can be triggered to rapidly devour our home galaxy.

These could be set off from Earth - and be complemented by sensors that would detect any attempt by the A.I.s to leave our galaxy.

Such WMDs would seem ludicrous at face value, but could be a useful negotiating tool in the event of Machine versus Man political or social conflicts.

Eventually the A.I. will be able to outmaneuver such a threat - but hopefully we will have advanced enough by that stage that the lines between electro-chemical and mechanical bodies will have blurred and evolved into a diverse but common culture.

Even in a worst case scenario I do not believe the Machines would fully exterminate any Earth species on purpose - as the scientific and artistic value of biological life would likely be appreciated by any sentient being.

Neohuman said at March 27, 2006 9:13 PM:

I commented to the Singularity Institute for Artificial Intelligence (http://www.singinst.org/) that if they think it is going to be easy to construct a 'moral' AI that won't seek to use or destroy us, they are going to be mistaken.

Lono, look at us: we biomecha exist on a planet which, to our knowledge, is the only one to have life, but that hasn't stopped us from wiping out millions of species, or from killing and experimenting on other non-human persons, let alone our own species at its early stages.

When push comes to shove, survival is the primary consideration, and any AI looking at the history of the human race will decide we are not a species that can be trusted to respect its existence.

I hope I live long enough to see the debate about A.I. rights - no souls, no rights, etc. It sure as hell won't be dull.

Dia said at March 24, 2011 5:03 AM:

I don't think this is unexpected, since all of our computers do things automatically. Why is this so hard to understand? All our systems are automatic, from antiviruses to virtualization security to even our operating systems. We all want control over some part of the system or program, but we are in a high-tech age where everything is decided by a program or computer, from taxes to the things we eat. Pushing the boundary of this is our way to progress, and even if we do see an A.I. in the next 10-20 years, I don't think it will affect the way things run or exclude the human component of any labor.
