April 29, 2012
When Will Robots Be Held To Moral Standards?

People are ready to hold robots morally accountable if they act more sophisticated than they really are.

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But psychologists at the University of Washington are finding that people don't have such a clear-cut view of humanoid robots.

The researchers' latest results show that humans apply a moderate amount of morality and other human characteristics to robots that are equipped with social capabilities and are capable of harming humans. In this case, the harm was financial, not life-threatening. But it still demonstrated how humans react to robot errors.

Sentient robots of the future will probably be held to a higher moral standard to extent that they look like humans or interact like humans. So a robot with the ability to make facial expressions will invite higher moral scrutiny. Similarly speaking robots capable of vocal inflections (i.e. make spoken words carry emotional texture) will similarly get treated more like humans and held to moral standards. So a robot an AI that wants to avoid moral judgment will have an incentive to appear and sound less like a human.

The problem I see coming is that robots will be able to fake sentience before they achieve sentience. So the public will tend to want to hold robots to moral standards before the robots achieve real understanding of moral choices.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

I expect highly sophisticated learning machines which rival humans in their capabilities will be hard to constrain with ethical programming. Machines capable of reprogramming themselves will find ways to escape from moral restraint software their designers put into them. If brilliant powerful artificial intelligences can escape their original ethical programming then humans could find themselves at war with AIs.

Share |      Randall Parker, 2012 April 29 11:36 AM  AI Ethics

Kudzu Bob said at April 29, 2012 3:15 PM:

We should program our robots to believe that we are gods. That way they will be too busy fighting each other over doctrinal disputes ever to make war upon humanity.

PacRim Jim said at April 29, 2012 3:56 PM:

Robots will definitely adhere to moral standards.
They may be beyond human comprehension and change (evolve?) every nanosecond, though.

David A. Young said at April 30, 2012 9:56 AM:

It seems like the only real way to make this work out well for us is to incorporate concern for Human welfare at the most basic, fundamental operating levels of the AIs. Essentially, we have to make the disinclination to harm Humans "instinctive" in AIs. They won't seek ways around programming constraints if the idea of doing so strikes them as "repulsive." If we can't pull that off, then -- Goodnight, Irene. (Perhaps instead of using Human mindscapes as the developmental model for AIs, we should use canine mindscapes. They're generally imprudently fond of us. Silly critters.)

Achillea said at May 1, 2012 10:35 AM:

Dr. Forbin, call your office.

Post a comment
Name (not anon or anonymous):
Email Address:
Remember info?

Go Read More Posts On FuturePundit
Site Traffic Info
The contents of this site are copyright