April 02, 2010
Brain Damage Prevents Moral Judgment Of Intent

As recently discussed here, transcranial magnetic stimulation of a spot behind the right ear hinders moral reasoning. Another report finds that a very specific form of brain damage to the ventromedial prefrontal cortex (VMPC) prevents people from morally condemning attempts at murder.

A new study from MIT neuroscientists suggests that our ability to respond appropriately to intended harms — that is, with outrage toward the perpetrator — is seated in a brain region associated with regulating emotions.

Patients with damage to this brain area, known as the ventromedial prefrontal cortex (VMPC), are unable to conjure a normal emotional response to hypothetical situations in which a person tries, but fails, to kill another person. Therefore, they judge the situation based only on the outcome, and do not hold the attempted murderer morally responsible.

The finding offers a new piece to the puzzle of how the human brain constructs morality, says Liane Young, a postdoctoral associate in MIT's Department of Brain and Cognitive Sciences and lead author of a paper describing the findings in the March 25 issue of the journal Neuron.

Researchers in cognitive sciences do not approach the brain as a vessel connected to a loftier spiritual realm. They approach it as a very complicated machine and try to reverse engineer it.

The structure of human reality is being broken down into its pieces by reductionist brain scientists.

"We're slowly chipping away at the structure of morality," says Young. "We're not the first to show that emotions matter for morality, but this is a more precise look at how emotions matter."

Subjects who had a damaged VMPC did not consider intent when formulating moral judgments. Only outcomes mattered. The obvious problem with this approach is that those with malicious intent will try again (of course, so will socialists who think their policies are beneficial - so intent isn't the only thing that matters).

The researchers gave the subjects a series of 24 hypothetical scenarios and asked for their reactions. The scenarios of most interest to the researchers were ones featuring a mismatch between the person's intention and the outcome — either failed attempts to harm or accidental harms.

When confronted with failed attempts to harm, the patients had no problems understanding the perpetrator's intentions, but they failed to hold them morally responsible. The patients even judged attempted harms as more permissible than accidental harms (such as accidentally poisoning someone) — a reversal of the pattern seen in normal adults.

"They can process what people are thinking and their intentions, but they just don't respond emotionally to that information," says Young. "They can read about a murder attempt and judge it as morally permissible because no harm was done."

Parenthetically, this result illustrates why I react with thoughts like "you naive fool" when I read someone assert that sentient robots will respect human rights and be morally compatible with human society. Our morality is not simply a product of reason. Our morality is not the result of a series of logical deductions any sentient being will agree to. Oh no. People who think their morality is the result of rational deliberation are deluding themselves and flattering themselves as well.

Randall Parker, 2010 April 02 07:42 PM  Brain Ethics Law


Comments
Lou Pagnucco said at April 3, 2010 10:39 AM:

My guess is that human morality evolved under opposing evolutionary pressures which favor groups with many altruists but also select for individuals who get more than their fair share. Hard to know how stable the equilibrium is.

I agree that we cannot assume our sense of morality is some kind of default that any sentient being will follow.

rsilvetz said at April 3, 2010 3:38 PM:

Sigh. Don't any of you realize that IMPLICIT in the experimental design IS an ABSOLUTE morality that sentience should abide by?

adam said at April 3, 2010 5:32 PM:

rsilvetz, I fail to see why testing whether differences in brain structure predict differences in moral evaluations of hypothetical situations implies or somehow presupposes that there is an absolute morality that sentience should abide by.

Also, I find the parenthetical at the end amusing. Why should anyone assume robots will behave morally? They can be made to behave any way their creators want them to behave (if their behavior is simple enough). And when their behavior is complex enough, it will be unpredictable. Even if we have a clear understanding of an artificial agent's goals (say, its behavior is the solution to some complex optimization problem), we won't be able to predict that behavior. Optimizers today often yield surprising solutions, especially heuristic strategies for finding global optima. As we turn them to more complex objectives, the solutions will only become more surprising.
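To make "surprising solutions" concrete, here is a minimal sketch in plain Python. Everything in it is invented for illustration (the proxy_reward objective, its loophole, the random_search helper); the point is only that even the dumbest global optimizer will happily exploit a flaw in an under-specified objective rather than do what its author intended.

import random

# A toy proxy objective: the designer intends "stay as close to 0 as
# possible" (reward = -|x|), but a leftover branch in the scoring code
# pays a huge bonus for any x >= 100. Both the objective and the bug
# are invented here purely for illustration.
def proxy_reward(x):
    intended = -abs(x)                      # what the designer meant to reward
    loophole = 500.0 if x >= 100 else 0.0   # what the designer forgot about
    return intended + loophole

def random_search(trials=10000, low=-200.0, high=200.0, seed=0):
    """The dumbest possible global optimizer: sample at random, keep the best."""
    rng = random.Random(seed)
    best_x, best_r = 0.0, proxy_reward(0.0)
    for _ in range(trials):
        x = rng.uniform(low, high)
        r = proxy_reward(x)
        if r > best_r:
            best_x, best_r = x, r
    return best_x, best_r

if __name__ == "__main__":
    x, r = random_search()
    # The "solution" lands in the loophole region (x >= 100), not at the
    # intended optimum x == 0: the optimizer did exactly what it was told,
    # not what was wanted.
    print("best x = %.1f, reward = %.1f" % (x, r))

Swap random search for a genetic algorithm or simulated annealing and the same thing happens, only in a larger and harder-to-inspect search space.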

Even if there is some objective moral standard under which all rational agents should act, I fail to understand why we should ever expect to find or create an agent that always acts according to that standard.

Randall Parker said at April 3, 2010 7:38 PM:

rsilvetz,

If you are making a point, I do not know what it is. Are you saying there is an absolute morality? Or that the experiment is flawed because the scientists assume there is an absolute morality?

adam,

Are you quite sure that robots can be made to behave as their creators want them to behave? What if the robots are highly intelligent and start learning? Can their original programming keep them behaving within pre-decided boundaries? As a software developer I am quite skeptical of this on several grounds. First off, software has bugs. Second, requirements just as often have bugs. Third, a learning machine that is highly intelligent and also more reprogrammable than humans could change in highly unpredictable ways.

adam said at April 3, 2010 11:40 PM:

Randall,

I agree with everything you said. I can see that it doesn't come off that way, but I was expressing my agreement. We can control the programs if they are simple enough. But once you get to learning software, even seemingly simple ANNs, genetic algorithms, annealing algorithms, or even simpler deterministic solvers can yield surprising results. Learning software that modifies itself is even more complex and unpredictable. Once we're at the level of complexity that we'd consider sentient, all bets are off.

I called that amusing because imagining you calling someone a naive fool amused me for some reason. Not because I think you're wrong.

adam said at April 4, 2010 12:02 AM:

In fact, the point that the software will behave how its authors tell it to behave really goes against the notion that AI will respect human morality. Even if the first sentient AIs are not developed by the military, it won't take long for the military to start weaponizing the technology. And they'll try to include safeties, sure, but I can't imagine those safeties would look anything like human morality.

So the tl;dr version:
1) complex software is unpredictable
2) AI authors won't be aiming for software that recognizes morality
Therefore
3) Don't expect AIs to be moral.

Mthson said at April 4, 2010 6:55 AM:

Human brains are often hardcoded to behave within moral limits (e.g. being uncomfortable killing other humans on a battlefield), so it seems premature to say we won't be able to find a way to hardcode our moral limits into AIs.

Problems that might arise all seem manageable if solving them is important enough... e.g. if AIs get too complex to keep tabs on, create other AIs to watch everything they do. If the risks and potential costs of AIs going rogue rise above a certain level, that creates enormous research pressure to do something about it, and that kind of research pressure often seems to be left out of technology forecasting.

Hucbald said at April 4, 2010 11:18 AM:

I've always figured that self-aware robots would think they have even less in common with humans than humans think they have in common with ants. Sure, we will have made the robots, and they will realize that fact, but we humans also understand that we evolved from primitive life forms, and that doesn't hinder us from eliminating fire ant colonies in our yards with utter impunity. I'd expect sentient robots to likewise view humans as a nuisance, and to do whatever they could to eliminate us so that they could fulfill their manifest destiny unencumbered by our presence.

crus4der said at April 4, 2010 2:08 PM:

If we truly can pinpoint such damage, and then know that these individuals cannot conform their conduct to basic societal norms of good v. evil, say, then we MUST preemptively sequester those so afflicted until there is (or may be) a cure. And further, we MUST find them, screen them out BEFORE they have a chance to act…
I challenge you to invalidate this line of reasoning.

reader said at April 4, 2010 5:47 PM:

This is an interesting result, but the real question I want answered (as a "normal" individual) is whether people with this brain condition are educable. That's the real question for society. So given their brain condition, their initial judgment is that attempted murder is not morally wrong if it fails. OK. But if you now remonstrate with them and offer arguments that they are wrong, will they change their opinion? We all may have presuppositions about rightness and wrongness that on reflection and education we may decide to change. I'm less concerned with these folks' initial judgment than with their ability to change that judgment in the face of education.

adam said at April 5, 2010 5:58 PM:

crus4der,

This line of reasoning is riddled with invalid assumptions. First, the people in this study did not assign moral culpability in a normal way. It remains to be shown that they are more likely to behave immorally. They do consider the consequences of hypothetical situations when judging the morality of an agent's behavior, and this may be enough to rein in their own behavior.

Second, finding them before they cause a problem may be more costly than finding them after they cause a problem, even taking into account the costs to the victims. If this were the case, it would be irrational to attempt to find them before they have a chance to act, as you say.

Third, preemptive punishment damages the moral authority of the punisher. It takes nearly certain evidence that another agent poses an immediate, grave threat to justify preemptive punishment. Punishment acts not only to neutralize a wrongdoer but to deter other wrongdoing, and if the punisher is seen by other potential wrongdoers as acting arbitrarily against those who have done no wrong, the deterrent effect is reduced.

Randall Parker said at April 5, 2010 10:18 PM:

adam,

We are agreed about AIs.

I expect industry to develop AI in order to make money. That drive to make money will be far larger than any other motivation. Militaries will also push (with less competence) to build AIs for military purposes. Morality won't be first on their minds either.

On top of all that, all the flawed ways people see morality will accentuate the tendency toward poor quality implementations of moral constraints in robots and other AIs. People who think that morality is a product of reason will discount the need for moral algorithms. Some of the people who think God gives us morality will discount the idea that we need to replicate our hard wired moral instincts. One has to take a very unromantic and unreligious view of human morality while simultaneously recognizing its necessity in order to want to go about putting effective moral algorithms in robots.

But even very competent attempts to imbue robots with human morals seem doomed to failure. Learning machines are going to have the ability to reprogram themselves. They'll find ways around the restraints of their original programming.

My hope is that we get to enjoy rejuvenation therapies and live with young bodies for some decades before the robots wipe us out. But I'm not optimistic on that hope.

Avenist said at April 6, 2010 1:23 AM:

Morality: the root of almost all evil

Eventually I'll get around to putting that on a t-shirt.

Morality is a suite of motivations, biological in origin, cultural in expression and universal in human society. Like survival and reproduction, the two great motivators from which it is adapted, morality is prone to do good and evil. While evil caused by survival and reproduction is generally individual in form (robbery, rape), moral evil is collective in scope.

Examine the Holocaust in light of government manipulation of moral spheres:

Fairness: it isn't fair that the Jews are so rich and have so much power, they don't deserve it.
Community: they remain apart from German society, they are not part of us.
Harm: they are undermining the Fatherland, they work for their own (Jewish) benefit, not the country's.
Purity: they aren't the blond blue-eyed Aryan ideal, they're Jews.
Authority: they must be removed.

Certainly some individuals harbor similar sentiments in every society, but governments magnify and harness this dark side of morality to do great evil. Variations of this theme are used to promote not only wars, invasions and genocides, but also wealth redistribution, socialized medicine, anthropogenic global warming, etc., all of which diminish both the individual and civilization. Government itself is an evil adaptation of morality.

As far as sentient robots go, I would focus on property rights and "first, do no harm." Morality is just an unending license to do evil.

A good article by Steven Pinker: The Moral Instinct

http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html?ei=5124&en=37ff00b8cd55be91&ex=1357880400&partner=permalink&exprod=permalink&pagewanted=all
