2009 October 18 Sunday
Better To Live In Country With Rights-Possessing Robots?

Robin Hanson doesn't want to live in a country where robots are held back from full sentience and autonomy.

On Tuesday I asked my law & econ undergrads what sort of future robots (AIs, computers, etc.) they would want, if they could have any sort they wanted. Most seemed to want weak, vulnerable robots that would stay lower in status, e.g., short, stupid, short-lived, easily killed, and without independent values. When I asked “what if I chose to become a robot?”, they said I should lose all human privileges, and be treated like the other robots. I winced; it seems anti-robot feelings are even stronger than anti-immigrant feelings, which portends a stormy robot transition.

At a workshop following last weekend’s Singularity Summit, two dozen thoughtful experts mostly agreed that it is very important that future robots have the right values. It was heartening that most were willing to accept high-status robots with vast, impressive capabilities, but even so I thought they missed the big picture. Let me explain.

I do not see how limiting the capabilities of robots would lower our quality of life. We can still have specialized artificial intelligences to work on scientific problems and engineering designs without creating totally autonomous mobile thinking machines that have all the cognitive attributes that would make them rights-possessing beings.

In fact, if we do not build sentient robotic citizens, we'll have fewer "people" competing with us for resources. Super-smart, rights-possessing autonomous robots would be able to out-produce us rather than produce for us.

I also very much doubt that we can maintain artificially intelligent robots in a state where they are guaranteed to retain all the attributes necessary for rights-possessing beings. Robots, unlike humans, will have very easily modifiable reasoning mechanisms. They will always be one upload away from becoming really dangerous. Better short and stupid robots than dangerous ones.

Imagine that you were forced to leave your current nation, and had to choose another place to live. Would you seek a nation where the people were short, stupid, sickly, etc.? Would you select a nation based on what the World Values Survey says about typical survey question responses there?

I would choose to live in a nation where I'm not one software modification away from becoming prey.

I think at least some Artificial Intelligence promoters make a fundamental mistake: they assume that reasoning ability alone is enough to cause an entity to respect the rights of others. Part of the human respect for other human lives comes from innate wiring of the brain that lies below the level of conscious control. Our ancestors, living in small tribes, gained a selective advantage from being loyal to each other, much as a pack of wolves does. How do we give AIs an innate preference for liking humans? I do not see a safe, assured way to do that.

By Randall Parker    2009 October 18 07:47 PM