2009 October 18 Sunday
Better To Live In Country With Rights-Possessing Robots?

Robin Hanson doesn't want to live in a country where robots are held back from full sentience and autonomy.

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted.  Most seemed to want weak vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges, and be treated like the other robots.  I winced; seems anti-robot feelings are even stronger than anti-immigrant feelings, which bodes for a stormy robot transition.

At a workshop following last weekend’s Singularity Summit, two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values.  It was heartening that most were willing to accept high status robots, with vast impressive capabilities, but even so I thought they missed the big picture.  Let me explain.

I do not see how limiting the capabilities of robots will put limits on our quality of life. We can still have specialized artificial intelligences to work on scientific problems and engineering designs without creating totally autonomous mobile thinking machines that have all the cognitive attributes that make a rights-possessing being possible.

In fact, if we do not build robotic sentient citizens we'll have fewer "people" competing with us for resources. Super smart rights-possessing autonomous robots would be able to out-produce us rather than produce for us.

I also very much doubt that we can maintain artificially intelligent robots in a state where they would be guaranteed to continue to possess all the attributes necessary for rights-possessing beings. Robots, unlike humans, will have very easily modifiable reasoning mechanisms. They will always be one upload away from becoming really dangerous. Better short and stupid robots than dangerous ones.

Imagine that you were forced to leave your current nation, and had to choose another place to live.  Would you seek a nation where the people there were short, stupid, sickly, etc.?  Would you select a nation based on what the World Values Survey says about typical survey question responses there?

I would choose to live in a nation where I'm not one software modification away from becoming prey.

I think at least some Artificial Intelligence promoters make a fundamental mistake: they assume that reasoning ability alone is enough to cause an entity to respect the rights of others. Part of human respect for other human lives comes from innate wiring of the brain that operates below the level of conscious control. Our ancestors, living in small tribes, gained a selective advantage from being loyal to each other, much as a pack of wolves does. How do we give AIs an innate preference for liking humans? I do not see a safe, assured way to do that.

By Randall Parker    2009 October 18 07:47 PM   Entry Permalink | Comments (6)
2007 May 29 Tuesday
Computers Beat Humans At Facial Recognition

Computers now outperform humans in recognizing faces.

For scientists and engineers involved with face-recognition technology, the recently released results of the Face Recognition Grand Challenge--more fully, the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006--have been a quiet triumph. Sponsored by the National Institute of Standards and Technology (NIST), the matchup of face-recognition algorithms showed that machine recognition of human individuals has improved tenfold since 2002 and a hundredfold since 1995. Indeed, the best face-recognition algorithms now perform more accurately than most humans can manage.
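To make the quoted factors concrete: a "tenfold" or "hundredfold" improvement here means the recognition error rate fell by that factor. The rates below are placeholders chosen only to match the quoted ratios, not NIST's actual published figures.

```python
def improvement_factor(old_error_rate, new_error_rate):
    """How many times lower the new error rate is than the old one."""
    return old_error_rate / new_error_rate

# Placeholder error rates consistent with the article's factors (illustrative only).
e_1995, e_2002, e_2006 = 0.50, 0.05, 0.005

print(improvement_factor(e_2002, e_2006))  # roughly 10x since 2002
print(improvement_factor(e_1995, e_2006))  # roughly 100x since 1995
```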

I expect artificial intelligence to come about as a result of advances made to solve many practical computing problems. Computer face recognition, like computer voice recognition, has many useful applications. Attempts to meet market needs will push development of algorithms to solve an increasing variety of the problems that human minds can solve. Artificial intelligence will become easy to create once we reach a critical mass of accumulated algorithms that replace humans for performing many tasks.

By Randall Parker    2007 May 29 10:13 PM   Entry Permalink | Comments (3)
2007 April 03 Tuesday
MIT Computer Model Equals Brain At Rapid Categorization

Here's one of the pieces needed to create an artificial intelligence.

Now, in a new MIT study, a computer model designed to mimic the way the brain itself processes visual information performs as well as humans do on rapid categorization tasks. The model even tends to make similar errors as humans, possibly because it so closely follows the organization of the brain's visual system.

"We created a model that takes into account a host of quantitative anatomical and physiological data about visual cortex and tries to simulate what happens in the first 100 milliseconds or so after we see an object," explained senior author Tomaso Poggio of the McGovern Institute for Brain Research at MIT. "This is the first time a model has been able to reproduce human behavior on that kind of task." The study, published online in advance of the April 10, 2007 Proceedings of the National Academy of Sciences (PNAS), stems from a collaboration between computational neuroscientists in Poggio's lab and Aude Oliva, a cognitive neuroscientist in the MIT Department of Brain and Cognitive Sciences.

This new study supports a long-held hypothesis that rapid categorization happens without any feedback from cognitive or other areas of the brain. The results also indicate that the model can help neuroscientists make predictions and drive new experiments to explore brain mechanisms involved in human visual perception, cognition, and behavior. Deciphering the relative contribution of feed-forward and feedback processing may eventually help explain neuropsychological disorders such as autism and schizophrenia. The model also bridges the gap between the world of artificial intelligence (AI) and neuroscience because it may lead to better artificial vision systems and augmented sensory prostheses.
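The feed-forward idea described above can be sketched as alternating filter and max-pooling stages, loosely in the spirit of the HMAX family of models from Poggio's lab: "simple"-cell-like stages compute filter responses and "complex"-cell-like stages pool them for shift tolerance, with no feedback loop anywhere. Everything in this sketch (function names, filter choices, layer count) is illustrative, not the actual published model.

```python
import numpy as np

def filter_response(image, kernel):
    """Naive 'valid' 2-D cross-correlation: an S-layer-style filter response."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """C-layer-style local max pooling, giving tolerance to small shifts."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def feedforward_features(image, filters):
    """One filter -> rectify -> pool pass per filter; purely feed-forward."""
    return np.concatenate(
        [max_pool(np.abs(filter_response(image, f))).ravel() for f in filters])
```

With a horizontal-edge filter and a vertical-edge filter, a striped test image produces strong responses only in the matching orientation channel, the kind of selectivity a simple classifier could then read out in a single feed-forward sweep.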

I expect artificial intelligence to come as a result of work by computer scientists to emulate each function the brain performs. AI will emerge from putting together many models of the subsystems that carry out those functions.

The development of true AI will likely shake up religious believers most of all. If a computer can think like a human, the obvious implication is that human thinking is not by itself evidence for a soul. If a machine can think as well as a human, then why should we expect humans to have souls? I'm not saying there isn't an answer to that question. My point is that the question will become important once we achieve AI.

By Randall Parker    2007 April 03 10:25 PM   Entry Permalink | Comments (20)