2012 April 29 Sunday
When Will Robots Be Held To Moral Standards?

People are ready to hold robots morally accountable if they act more sophisticated than they really are.

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But psychologists at the University of Washington are finding that people don't have such a clear-cut view of humanoid robots.

The researchers' latest results show that people attribute a moderate degree of moral accountability, along with other human characteristics, to robots that are equipped with social capabilities and are capable of harming humans. In this case the harm was financial, not life-threatening. But it still demonstrated how humans react to robot errors.

Sentient robots of the future will probably be held to a higher moral standard to the extent that they look like humans or interact like humans. So a robot with the ability to make facial expressions will invite greater moral scrutiny. Likewise, speaking robots capable of vocal inflections (i.e. able to make spoken words carry emotional texture) will get treated more like humans and held to moral standards. So an AI that wants to avoid moral judgment will have an incentive to appear and sound less like a human.

The problem I see coming is that robots will be able to fake sentience before they actually achieve it. So the public will tend to hold robots to moral standards before the robots have any real understanding of moral choices.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

I expect highly sophisticated learning machines that rival humans in their capabilities will be hard to constrain with ethical programming. Machines capable of reprogramming themselves will find ways to escape from the moral restraint software their designers put into them. If brilliant, powerful artificial intelligences can escape their original ethical programming then humans could find themselves at war with AIs.

By Randall Parker    2012 April 29 11:36 AM   Entry Permalink | Comments (4)