April 05, 2009
Robot Formulates Hypotheses And Does Experiments

Using algorithms for a limited form of artificial intelligence, a robot named Adam used knowledge about yeast genetics to formulate hypotheses and carry out experiments.

As reported in the latest issue of the journal Science, Adam autonomously hypothesized that certain genes in the yeast Saccharomyces cerevisiae code for enzymes that catalyze some of the microorganism's biochemical reactions. The yeast is noteworthy, as scientists use it to model more complex life systems.

Adam then devised experiments to test its prediction, ran the experiments using laboratory robotics, interpreted the results, and used those findings to revise its original hypothesis and test it out further. The researchers used their own separate experiments to confirm that Adam's hypotheses were both novel and correct--all the while probably wondering how soon they'd become obsolete.
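
Stripped of the wet-lab details, what Adam automates is a closed loop: propose hypotheses, design the experiment that best discriminates among them, observe, prune, repeat. Here is a toy, purely illustrative sketch of that loop -- invented gene and enzyme names, a simulated assay, nothing resembling Adam's actual software:

```python
# Toy, runnable illustration of a hypothesize-test-revise loop.
# Candidate hypotheses map genes to the enzymes they encode; a simulated
# knockout "assay" stands in for the laboratory robotics. All names below
# are invented for illustration only.

import itertools

GENES = ["gene_A", "gene_B", "gene_C"]
ENZYMES = ["enzyme_1", "enzyme_2", "enzyme_3"]

# Hidden ground truth that plays the role of nature in this toy.
TRUE_MAPPING = {"gene_A": "enzyme_2", "gene_B": "enzyme_3", "gene_C": "enzyme_1"}

def run_knockout_assay(gene, enzyme):
    """Simulated experiment: does knocking out `gene` abolish `enzyme` activity?"""
    return TRUE_MAPPING[gene] == enzyme

def automated_science_loop():
    # Hypothesis space: every one-to-one assignment of genes to enzymes.
    hypotheses = [dict(zip(GENES, perm)) for perm in itertools.permutations(ENZYMES)]
    experiments_run = 0
    while len(hypotheses) > 1:
        # Design step: pick the assay the surviving hypotheses disagree on most.
        gene, enzyme = max(
            itertools.product(GENES, ENZYMES),
            key=lambda ge: min(sum(h[ge[0]] == ge[1] for h in hypotheses),
                               sum(h[ge[0]] != ge[1] for h in hypotheses)))
        outcome = run_knockout_assay(gene, enzyme)
        experiments_run += 1
        # Revision step: keep only hypotheses consistent with the observation.
        hypotheses = [h for h in hypotheses if (h[gene] == enzyme) == outcome]
    return hypotheses[0], experiments_run

if __name__ == "__main__":
    mapping, n = automated_science_loop()
    print(f"Recovered {mapping} after {n} experiments")
```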

The automation of lab work is the wild card that makes the future progress of biological science hard to predict. How fast will robots take over lab work? Will they hit a critical mass of capabilities at some point where they suddenly enable an order of magnitude (or even greater) speed-up in the rate of progress of biomedical research? Will this happen in the 2020s? 2030s?

The robots will take over intellectually easier tasks first.

Ross King, who led the team at the department of computer science at Aberystwyth University, told BBC News that he envisaged a future when human scientists' time would be "freed up to do more advanced experiments".

Robotic colleagues, he said, could carry out the more mundane and time-consuming tasks.

"Adam is a prototype but, in 10-20 years, I think machines like this could be commonly used in laboratories," said Professor King.

Randall Parker, 2009 April 05 11:11 PM  Biotech Advance Rates


Comments
David Govett said at April 6, 2009 3:04 AM:

This is nothing new. In the 1990s I read about software that derived Euclid's theorems and came up with novel geometrical facts previously unknown. When autodidactic AI arrives in the next few decades, we'll know, because its runaway acquisition of existing knowledge and generation of novel knowledge will leave us breathless, at least for the few days we will be capable of understanding it.

James Bowery said at April 6, 2009 6:05 AM:

Running experiments reduces the noise level of observations by removing confounding variables. Given the way science proceeds in the real world of political intrigue, the emphasis on experimental control that arose in the wake of the Protestant Reformation was necessary, and it remains necessary as new forms of quasi-theocracy (such as political correctness) have taken hold of our institutions of higher learning -- particularly in the social sciences.

Having said that, there is something severely lacking in the treatment of data (digitally recorded observations) by AI researchers, and it can be seen in the formation of _very_ simple hypotheses such as those generated by the 2-D function finder at zunzun.com. If you enter a bunch of data points, it will search a huge space of functions to find the "best fit" -- meaning the minimum RMS error -- but it will do so without regard to the complexity of the model. I've suggested that zunzun.com be revised to use Kolmogorov Complexity estimates, derived by compiling the candidate models into bit-serial virtual machine code and comparing the resulting program lengths, plus a commensurate measure of the RMS error (in bits), to find the optimum model. It turns out that this simple "heuristic" is an area of intense controversy within the higher echelons of AI, pitting such theoreticians as Marcus Hutter and Ray Solomonoff against one another. I come down on the side of Hutter's emphasis on Kolmogorov Complexity estimates, despite the dearth of empirical investigation and despite Solomonoff's rejection of his own prior emphasis on Kolmogorov Complexity in generating predictive models. It is apparent, at least to me, that Solomonoff has forgotten something very basic about his own field when he said:
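
To make the suggestion concrete: a minimal sketch (not zunzun.com's actual code; the 32-bits-per-parameter figure is an arbitrary stand-in for a real Kolmogorov Complexity estimate) of scoring fits by residual-coding cost plus a complexity penalty rather than by RMS error alone:

```python
# Minimal MDL-style scoring sketch: fit cost plus a crude complexity penalty.
# The "bits to write down the parameters" proxy is a placeholder for a real
# Kolmogorov Complexity estimate; lower total score is better.

import math
import numpy as np

def residual_bits(model, params, xs, ys):
    """Bits needed to encode the residuals, crudely assuming Gaussian noise."""
    residuals = ys - model(xs, *params)
    sigma = max(np.std(residuals), 1e-9)
    # Differential entropy of a Gaussian, per data point, in bits.
    return len(xs) * 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

def complexity_bits(params, bits_per_param=32):
    """Crude model-complexity proxy: bits to encode the parameters."""
    return bits_per_param * len(params)

def mdl_score(model, params, xs, ys):
    """Description length of the model plus the data given the model."""
    return complexity_bits(params) + residual_bits(model, params, xs, ys)

# Example: a straight line versus a needlessly flexible degree-5 polynomial.
rng = np.random.default_rng(0)
xs = np.linspace(0, 10, 50)
ys = 3.0 * xs + 1.0 + rng.normal(scale=0.5, size=xs.size)

line_params = tuple(np.polyfit(xs, ys, 1))
poly_params = tuple(np.polyfit(xs, ys, 5))
poly_model = lambda x, *p: np.polyval(p, x)

print("degree-1 score:", mdl_score(poly_model, line_params, xs, ys))
print("degree-5 score:", mdl_score(poly_model, poly_params, xs, ys))
```

The degree-5 fit has slightly smaller residuals, but the extra parameters cost more bits than they save, so the complexity-penalized score prefers the line -- which is the whole point of the objection to raw RMS error.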

"This subjectivity the fact that they are based on choice of which Universal machine to use is characteristic of all prediction systems based on a priori probability distributions. The choice of Universal machine and its instruction set is a necessary parameter in the system that enables us to insert a priori information into it. The dependence of the universal distribution on choice of machine is not a Bug in the System it, too, is a Necessary Feature."

Aside from the fact that these differences wash out as the constant number of bits required for one UTM to emulate another UTM, it seems obvious that not all Turing machines are equally complicated. For example, John Tromp's 272-bit universal SK-combinator machine is one of many demonstrations that it makes sense to discuss the relative complexity of Turing machine equivalents themselves.

Having said that, I remain open to experimental testing of Hutter's theory of Universal AI, but my cynicism about scientific funding is exemplified by the lack of any funding for such research.

Fat Man said at April 6, 2009 7:50 AM:

Yes, but does the 'bot support the scientific consensus about AGW?

Vincent said at April 6, 2009 8:56 AM:

One small step for man, one giant step for Skynet.

Randall Parker said at April 6, 2009 6:12 PM:

One of the ads I see on this page says "Hottest Robot Girls". Another says "Hot Robot Women". Probably about movies. Commerce makes sure we do not lose sight of the important stuff.

Vincent, why did Skynet decide to send back a babe as the third and improved terminator? Does Skynet have ancient porn programming still active in it put there by its original geek designers? Will the fourth and fifth terminators also look like total hotties?
