Many people flatter themselves that they've got firm, unchanging, and incorruptible moral compasses. Yet people can be easily swayed to adopt different moral positions by what role they think they are playing.
CORVALLIS, Ore. – An individual’s sense of right or wrong may change depending on their activities at the time – and they may not be aware of their own shifting moral integrity – according to a new study looking at why people make ethical or unethical decisions.
Focusing on dual-occupation professionals, the researchers found that engineers had one perspective on ethical issues, yet when those same individuals were in management roles, their moral compass shifted. Likewise, medic/soldiers in the U.S. Army had different views of civilian casualties depending on whether they most recently had been acting as soldiers or medics.
One wonders: As assorted occupations get automated out of existence and people shift into other occupations, what is the net effect on moral perspectives? What ethical positions are people becoming more likely to take because of growth in some occupations? Which moral positions are becoming rarities as factories get automated, or as functions previously done by people meeting face-to-face are now done on the phone or through web pages?
Mere hints to a person about which role to adopt as their perspective caused them to take different ethical positions.
The researchers conducted three different studies with employees who had dual roles. In one case, 128 U.S. Army medics were asked to complete a series of problem-solving tests, which included subliminal cues that hinted they might be acting as either a medic or a soldier. No participant said the cues had any bearing on their behavior – but apparently they did. A much larger percentage of those in the medic category than in the soldier category were unwilling to put a price on human life.
In another test, a group of engineer-managers were asked to write about a time they either behaved as a typical manager, engineer, or both. Then they were asked whether U.S. firms should engage in “gifting” to gain a foothold in a new market. Despite the fact such a practice would violate federal laws, more than 50 percent of those who fell into the “manager” category said such a practice might be acceptable, compared to 13 percent of those in the engineer category.
Are more people thinking like managers? Do they compensate for their managerial ethics by becoming more altruistic in other areas? Or does managerial ethical thinking pervade their ethical calculations in other aspects of their lives?
Do you find your ethical positions more influenced by online communities where you play a role? Do you have more or less contact with humans than you did in your job 10 years ago? Do you sense your ethical perspective shifting? If so, in what directions?
One thing I see changing: As people work with, and chat online with, people from distant places, they are growing their in-groups. There is less local focus and more recognition of the need to form and maintain relationships with people in distant places, and to incorporate the perspectives and interests of distant groups into one's own moral calculations.
If you don't feel empathy for someone, do you fail to recognize them as human? I think it depends on what you think is the full range of variation people can have and still be natural humans. My own view is that the natural full range of what constitutes humanity is incredibly broad, and I can hold a very, very low opinion of someone and still think them quite human.
"When we encounter a person, we usually infer something about their minds. Sometimes, we fail to do this, opening up the possibility that we do not perceive the person as fully human," said lead author Lasana Harris, an assistant professor in Duke University's Department of Psychology & Neuroscience and Center for Cognitive Neuroscience. Harris co-authored the study with Susan Fiske, a professor of psychology at Princeton University.
Social neuroscience has shown through MRI studies that people normally activate a network in the brain related to social cognition -- thoughts, feelings, empathy, for example -- when viewing pictures of others or thinking about their thoughts. But when participants in this study were asked to consider images of people they considered drug addicts, homeless people, and others they deemed low on the social ladder, parts of this network failed to engage.
In defense of some of these reactions to drug addicts and the homeless: Imagine you felt so much compassion for each drug addict you took them into your home and tried to care for them. Would they stop using drugs? Probably not. There is something quite adaptive about suppressing empathy toward hopeless cases.
I'm reminded of how city dwellers are sometimes criticized for passing by someone in trouble that a country dweller would stop to help. But one of the important differences between city and country dwellers is the much higher number of people in need a city dweller is going to encounter. In order to function in a city a greater level of callousness seems necessary. Being parsimonious about your empathy makes the most sense for those who have a larger list of potential candidates for their empathy.
It is a lot more rewarding to successfully help someone than to fail in your charity. When presented with someone who has low odds of turning their life around, the felt desire to help is actually counterproductive. If you spend a great deal of effort trying to help someone whose problems are intractable, you effectively waste effort and resources that could instead help a larger number of people whose problems are both tractable and smaller in terms of the time and money needed.
For this latest study, 119 undergraduates from Princeton completed judgment and decision-making surveys as they viewed images of people. The researchers sought to examine the students' responses to common emotions triggered by images such as:
-- a female college student and male American firefighter (pride);
-- a business woman and rich man (envy);
-- an elderly man and disabled woman (pity);
-- a female homeless person and male drug addict (disgust).
After imagining a day in the life of the people in the images, participants next rated the same person on various dimensions. They rated characteristics including the warmth, competence, similarity, familiarity, responsibility of the person for his/her situation, control of the person over their situation, intelligence, complex emotionality, self-awareness, ups-and-downs in life, and typical humanity.
Participants then went into the MRI scanner and simply looked at pictures of people.
The study found that the neural network involved in social interaction failed to respond to images of drug addicts, the homeless, immigrants and poor people, replicating earlier results.
The difference between pity and disgust is interesting. An elderly body is the fate of everyone, and so far it cannot be fixed. Becoming elderly is not seen as a moral failing. But becoming a drug addict (rightly or wrongly) is widely seen as a moral failing. It makes sense that people are more disgusted by those who make wrong moral choices.
I am as concerned about counterproductive empathy as I am by deficiency of empathy. I think empathy is a necessary attribute for humans to maintain a civilized society. But it is not an unalloyed good. It has to be tempered and empathetic feelings are not a substitute for rational thought.
Here is the research paper for this report.
Imagine you had the choice to divert a runaway boxcar onto a different track, and that doing so would kill one person while saving five lives. Would you pull the lever? About 90% of participants in a study chose to kill one to save five.
EAST LANSING, Mich. — Imagine a runaway boxcar heading toward five people who can’t escape its path. Now imagine you had the power to reroute the boxcar onto different tracks with only one person along that route.
Would you do it?
That’s the moral dilemma posed by a team of Michigan State University researchers in a first-of-its-kind study published in the research journal Emotion. Research participants were put in a three-dimensional setting and given the power to kill one person (in this case, a realistic digital character) to save five.
The results? About 90 percent of the participants pulled a switch to reroute the boxcar, suggesting people are willing to violate a moral rule if it means minimizing harm.
I'd hate to face this choice in real life. I think if I didn't know any of them I'd kill one to save five. Imagine you would face no legal repercussions from either choice and didn't know any of the people involved. What would you do?
Those who didn't pull the switch were more emotionally aroused.
Of the 147 participants, 133 (or 90.5 percent) pulled the switch to divert the boxcar, resulting in the death of the one hiker. Fourteen participants allowed the boxcar to kill the five hikers (11 participants did not pull the switch, while three pulled the switch but then returned it to its original position).
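The breakdown reported above can be sanity-checked with a quick calculation (a trivial sketch; the counts are exactly those quoted from the study):

```python
# Participant counts reported in the MSU boxcar study
total = 147
pulled = 133                 # diverted the boxcar, killing the one hiker
never_pulled = 11            # let the boxcar kill the five hikers
pulled_then_reversed = 3     # pulled the switch but returned it to its original position

allowed_five_to_die = never_pulled + pulled_then_reversed
assert pulled + allowed_five_to_die == total  # 133 + 14 = 147

print(f"{100 * pulled / total:.1f}%")  # → 90.5%
```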
The findings are consistent with past research that was not virtual-based, Navarrete said.
The study also found that participants who did not pull the switch were more emotionally aroused. The reasons for this are unknown, although it may be because people freeze up during highly anxious moments – akin to a soldier failing to fire his weapon in battle, Navarrete said.
I'd like to see a larger study done that controls for sex, age, ethnicity, citizenship of different nations, level of education, type of education, personality type, and varying degrees of autism. What factors have impact on what choices people make?
Also, I'd love to see this controlled for who is on the two train tracks. Would someone let five unknown die to save their wife or mother? Their brother? Their best friend? A powerful or rich figure? A beautiful woman? A small child?
If you need to trust people in a job then hire the least imaginative. Creative people are more likely to cheat for money when they are deceived into thinking they can get away with it.
WASHINGTON -- Creative people are more likely to cheat than less creative people, possibly because this talent increases their ability to rationalize their actions, according to research published by the American Psychological Association.
Creative rationalizations. Yes, if you are going to do something you will need to rationalize, it is best to be good at rationalizing. Of course, an employer with a lot of money could hire a few creative types just to come up with the rationalizations.
Sure, creative people are great at coming up with new solutions to problems. But can you trust them?
"Greater creativity helps individuals solve difficult tasks across many domains, but creative sparks may lead individuals to take unethical routes when searching for solutions to problems and tasks," said lead researcher Francesca Gino, PhD, of Harvard University.
Gino and her co-author, Dan Ariely, PhD, of Duke University, conducted a series of five experiments to test their thesis that more creative people would cheat under circumstances where they could justify their bad behavior. Their research was published online in APA's Journal of Personality and Social Psychology.
The researchers used a series of recognized psychological tests and measures to gauge research subjects' creativity. They also tested participants' intelligence. In each of the five experiments, participants received a small sum for showing up. Then, they were presented with tasks or tests where they could be paid more if they cheated. For example, in one experiment, participants took a general knowledge quiz in which they circled their answers on the test paper. Afterward, the experimenter told them to transfer their answers to "bubble sheets" – but the experimenter told the group she had photocopied the wrong sheet and that the correct answers were lightly marked. The experimenters also told participants they would be paid more for more correct answers and led them to believe that they could cheat without detection when transferring their answers. However, all the papers had unique identifiers.
So giving people the opportunity to cheat in low-stakes games is a way to discover whom you can rely on when you really need honesty.
You don't need to fear cheating from people just because they are smart. As long as an intelligent person isn't creative, they are just as trustworthy as a dumb person who isn't creative.
The results showed the more creative participants were significantly more likely to cheat, and that there was no link between intelligence and dishonesty – i.e., more intelligent but less creative people were not more inclined toward dishonesty.
But surely some creative people are honest and good. What additional element of personality determines whether smart creative people will cheat?
John Alford and John Hibbing, noted researchers on the biological basis of political orientation, have joined with a few other researchers in a Plos One report on evidence that rightward leaning people appear to have a stronger disgust reflex.
Disgust has been described as the most primitive and central of emotions. Thus, it is not surprising that it shapes behaviors in a variety of organisms and in a variety of contexts—including Homo sapiens politics. People who believe they would be bothered by a range of hypothetical disgusting situations display an increased likelihood of displaying right-of-center rather than left-of-center political orientations. Given its primal nature and essential value in avoiding pathogens, disgust likely has an effect even without registering in conscious beliefs. In this article, we demonstrate that individuals with marked involuntary physiological responses to disgusting images, such as of a man eating a large mouthful of writhing worms, are more likely to self-identify as conservative and, especially, to oppose gay marriage than are individuals with more muted physiological responses to the same images. This relationship holds even when controlling for the degree to which respondents believe themselves to be disgust sensitive and suggests that people's physiological predispositions help to shape their political orientations.
The report has an intro with a pretty interesting survey of what is known about the biological basis for political orientation. Here's an excerpt:
Disgust has been referred to as “the most visceral of all basic emotions” and the lust-disgust axis is often seen as the original building block of all emotions. The role of disgust in the avoidance of disease, one of the primary sources of mortality over the centuries, makes it essential to survival. Numerous connections between disgust responses and social behavior have been identified. The foundation for hypothesizing a connection between disgust response and political behavior more specifically is anchored in the groundbreaking work of Haidt and colleagues. On the basis of numerous large N surveys, Haidt reports that people on the left make judgments primarily on the basis of two “moral foundations”: harm avoidance and a desire for fairness/equity. People on the political right, on the other hand, display similar attention to harm avoidance and fairness but demonstrate additional concerns for purity, in-group/loyalty, and authority/structure. Interestingly, these differences in moral foundations hold up across cultures, a finding consistent with the work of Schwartz on cross-cultural similarity in the relationship between political orientations and patterns of values, as well as work on the relationship between political orientations and personality traits across cultures. This nuanced view of differentially weighted decision considerations is the basis for expecting people on the right to be more likely to emphasize purity/disgust as a foundation for moral and political orientations.
What I want to know: Once offspring genetic engineering becomes possible will the population as a whole shift left or right? Or will the population splinter into 2 or more factions that are more strongly their pure type? (e.g. leftists with even stronger desires for equality of outcomes and right-wingers with even stronger desires for loyalty or authority). In other words, will humanity splinter into mutually incomprehensible or hostile factions made so by genetic differences that cause deep differences in moral natures?
In situations where people were given awards they did nothing to earn, the sense of getting an unjustified advantage caused them to act more altruistically. They probably wanted to dampen the feeling of malicious envy in others.
“In anthropology, they say if you are envied, you might act more socially afterward because you try to appease those envious people,” van de Ven says—by sharing your big catch of fish, for example. They wanted to know if these observations from anthropology held up in the psychology lab.
In experiments, he and his colleagues made some people feel like they would be maliciously envied, by telling them they would receive an award of five euros—sometimes deserved based on the score they were told they’d earned on a quiz, sometimes not. The researchers figured the deserved prize would lead to benign envy, while the undeserved prize would lead to malicious envy. Then the volunteer was asked to give time-consuming advice to a potentially envious person.
People who had reason to think they’d be the target of malicious envy were more likely to take the time to give advice than targets of benign envy.
In another experiment, an experimenter dropped a bunch of erasers as the volunteer was leaving; those who thought they’d be maliciously envied were more likely to help him pick them up.
None of this is terribly surprising. The researchers previously found that envy comes in a benign form that causes those who experience it to want to improve themselves. Basically, success inspires attempts to become more successful. But malicious envy causes people to want to bring down others.
In previous research, Niels van de Ven of Tilburg University and his colleagues Marcel Zeelenberg and Rik Pieters had figured out that envy actually comes in two flavors: benign envy and malicious envy. They studied people who showed these two kinds of envy and found that people with benign envy were motivated to improve themselves, to do better so they could be more like the person they envied. On the other hand, people with malicious envy wanted to bring the more successful person down.
Note that a person who focuses on feeling malicious envy misses the opportunity to motivate themselves to become more successful. Benign envy is more adaptive in most cases.
You can see from this why political class warriors who want to raise taxes or regulate an industry try to argue that their targets do not deserve their success. They want to bring out that feeling of malicious envy.
COLUMBUS, Ohio – College students who exhibit narcissistic tendencies are more likely than fellow students to cheat on exams and assignments, a new study shows.
The results suggested that narcissists were motivated to cheat because their academic performance functions as an opportunity to show off to others, and they didn't feel particularly guilty about their actions.
"Narcissists really want to be admired by others, and you look good in college if you're getting good grades," said Amy Brunell, lead author of the study and assistant professor of psychology at Ohio State University at Newark.
"They also tend to feel less guilt, so they don't mind cheating their way to the top."
But narcissism is made up of a few components. It was the most strongly exhibitionist narcissists who were most likely to cheat. They wanted to draw attention to themselves with high grades.
"We found that one of the more harmless parts of narcissism -- exhibitionism -- is most associated with academic cheating, which is somewhat surprising," she said.
Exhibitionism is the desire to show off, to make yourself the center of attention.
The two other dimensions of narcissism -- the desire for power and the belief you are a special person -- were not as strongly linked to academic dishonesty.
I'd like to know what percentage of famous actors are narcissists. Are successful musicians more or less likely to be narcissists? With music part of the draw is the love of music. Another part, especially for guys, is getting laid. So exhibitionism might play a smaller role in motivating guys to become famous musical performers.
A variation of my standard question on genetics and human traits: Once offspring genetic engineering becomes feasible will parents choose more or less narcissistic children? Will future generations be more or less exhibitionist on average?
One theory -- the conventional wisdom in political science -- sees drug attitudes as primarily coming from people's political ideology, level of religious commitment, and personality, for example, openness to experience.
The other theory, proposed by the researchers and driven by ideas from evolutionary psychology, holds that drug attitudes are really driven by people's reproductive strategies.
So then slutty girls and pick-up artist guys are the biggest advocates of drug legalization? I'd be curious to know whether this connection between views on promiscuity and drugs holds equally well for men and women. The evolutionary psychology viewpoint also holds that women and men have different and conflicting reproductive strategies. Also, how do views about drug legalization change upon marriage and upon the birth of a baby?
When the Penn researchers questioned almost 1,000 people in two subject populations, one undergraduate and one Internet-based, a clear winner emerged between the competing theories: Differences in reproductive strategies are driving individuals' different views on recreational drugs.
While many items predict to some extent whether people are opposed to recreational drugs, the most closely related predictors are people's views on sexual promiscuity. While people who are more religious and those who are more politically conservative do tend to oppose recreational drugs, in both study samples the predictive power of these religious and ideological items was reduced nearly to zero by controlling for items tracking attitudes toward sexual promiscuity.
If people use too many drugs they'll reproduce like bunnies. This has got to be stopped. Look at what happened when bunnies were introduced into Australia.
"This provides evidence that views on sex and views on drugs are very closely related," said Kurzban, associate professor in the Department of Psychology and director of the Pennsylvania Laboratory for Experimental Evolutionary Psychology at Penn. "If you were to measure people's political ideology, religiosity and personality characteristics, you can predict to some degree how people feel about recreational drugs. But if, instead, you just measure how people feel about casual sex, and ignore the abstract items, the predictions about people's views on drugs in fact become quite a bit better."
Didn't Alice see a bunny in Wonderland? In the follow-up to Alice In Wonderland does she become a single mom with kids by 3 different guys? "Go ask Alice, I think she'll know." I can hear Grace Slick singing.
Once genetic tests can at least partially predict sexual reproduction strategies, will genetic tests also predict one's views on drug legalization? Will the legalizers or anti-legalizers make more babies and win the field of public opinion in the long run by Darwinian natural selection?
As recently discussed here, transcranial magnetic stimulation of a spot behind the right ear hinders moral reasoning. Another report finds that a very specific form of brain damage to the ventromedial prefrontal cortex (VMPC) prevents people from morally condemning attempts at murder.
A new study from MIT neuroscientists suggests that our ability to respond appropriately to intended harms — that is, with outrage toward the perpetrator — is seated in a brain region associated with regulating emotions.
Patients with damage to this brain area, known as the ventromedial prefrontal cortex (VMPC), are unable to conjure a normal emotional response to hypothetical situations in which a person tries, but fails, to kill another person. Therefore, they judge the situation based only on the outcome, and do not hold the attempted murderer morally responsible.
The finding offers a new piece to the puzzle of how the human brain constructs morality, says Liane Young, a postdoctoral associate in MIT's Department of Brain and Cognitive Sciences and lead author of a paper describing the findings in the March 25 issue of the journal Neuron.
Researchers in cognitive sciences do not approach the brain as a vessel connected to a loftier spiritual realm. They approach it as a very complicated machine and try to reverse engineer it.
The structure of human reality is being broken down into its pieces by reductionist brain scientists.
"We're slowly chipping away at the structure of morality," says Young. "We're not the first to show that emotions matter for morality, but this is a more precise look at how emotions matter."
Subjects who had a damaged VMPC did not consider intent when formulating moral judgments. Only outcomes mattered. The obvious problem with this approach is that those with malicious intent will try again (of course, so will socialists who think their policies are beneficial - so intent isn't the only thing that matters).
The researchers gave the subjects a series of 24 hypothetical scenarios and asked for their reactions. The scenarios of most interest to the researchers were ones featuring a mismatch between the person's intention and the outcome — either failed attempts to harm or accidental harms.
When confronted with failed attempts to harm, the patients had no problems understanding the perpetrator's intentions, but they failed to hold them morally responsible. The patients even judged attempted harms as more permissible than accidental harms (such as accidentally poisoning someone) — a reversal of the pattern seen in normal adults.
"They can process what people are thinking and their intentions, but they just don't respond emotionally to that information," says Young. "They can read about a murder attempt and judge it as morally permissible because no harm was done."
Parenthetically, this result illustrates why I react with thoughts like "you naive fool" when I read someone assert that sentient robots will respect human rights and be morally compatible with human society. Our morality is not simply a product of reason. Our morality is not the result of a series of logical deductions any sentient being will agree to. Oh no. People who think their morality is the result of rational deliberation are deluding themselves and flattering themselves as well.
CAMBRIDGE, Mass. — MIT neuroscientists have shown they can influence people's moral judgments by disrupting a specific brain region — a finding that helps reveal how the brain constructs morality.
To make moral judgments about other people, we often need to infer their intentions — an ability known as "theory of mind." For example, if a hunter shoots his friend while on a hunting trip, we need to know what the hunter was thinking: Was he secretly jealous, or did he mistake his friend for a duck?
Previous studies have shown that a brain region known as the right temporo-parietal junction (TPJ) is highly active when we think about other people's intentions, thoughts and beliefs. In the new study, the researchers disrupted activity in the right TPJ by inducing a current in the brain using a magnetic field applied to the scalp. They found that the subjects' ability to make moral judgments that require an understanding of other people's intentions — for example, a failed murder attempt — was impaired.
Feeling all morally indignant and judgmental? Is this moral indignation wearing you out with stress and strain? Perhaps a magnet is what you need to escape from the burdens of strong feelings of morality.
Transcranial magnetic stimulation (TMS) in 500 msec bursts is enough to alter moral judgments.
In one experiment, volunteers were exposed to TMS for 25 minutes before taking a test in which they read a series of scenarios and made moral judgments of characters' actions on a scale of 1 (absolutely forbidden) to 7 (absolutely permissible).
In a second experiment, TMS was applied in 500-millisecond bursts at the moment when the subject was asked to make a moral judgment. For example, subjects were asked to judge how permissible it is for someone to let his girlfriend walk across a bridge he knows to be unsafe, even if she ends up making it across safely. In such cases, a judgment based solely on the outcome would hold the perpetrator morally blameless, even though it appears he intended to do harm.
In both experiments, the researchers found that when the right TPJ was disrupted, subjects were more likely to judge failed attempts to harm as morally permissible. Therefore, the researchers believe that TMS interfered with subjects' ability to interpret others' intentions, forcing them to rely more on outcome information to make their judgments.
Could interrogators use TMS to convince, say, a captured spy that it's okay to divulge state secrets? Might nations or rogue corporations kidnap engineers and scientists and use TMS to convince them that they are morally wrong to try to hold back commercial secrets or military secrets?
Do you want to alter your own moral judgment or someone else's? If so, over what issue or behavior?
Neurolaw: the use of neuroscience in legal settings. Scare you any?
In the article "Neurolaw," in the inaugural issue of WIREs Cognitive Science, co-authors Walter Sinnott-Armstrong and Annabelle Belcher assess the potential for the latest cognitive science research to revolutionize the legal system.
Neurolaw, also known as legal neuroscience, builds upon the research of cognitive, psychological, and social neuroscience by considering the implications for these disciplines within a legal framework. Each of these disciplinary collaborations has been ground-breaking in increasing our knowledge of the way the human brain operates, and now neurolaw continues this trend.
I think one of the ways neuroscience is going to be used is to show that we can't trust human memories in many settings. It isn't just going to be about ascertaining what a person knows or has done. It'll also be about whether we can trust what a person believes they witnessed.
How accurate will brain scans become at detecting deception? Will brain scans be able to detect whether, say, a person's reaction to a picture of a crime scene shows they've been there?
One of the most controversial ways neuroscience is being used in the courtroom is through 'mind reading' and the detection of mental states. While only courts in New Mexico currently permit traditional lie detector, or polygraph, tests, there are a number of companies claiming to have used neuroscience methods to detect lies. Some of these methods involve electroencephalography (EEG), whereby brain activity is measured through small electrodes placed on the scalp. This widely accepted method of measuring brain electrical potentials has already been used in two forensic techniques which have appeared in US courtrooms: brain fingerprinting and brain electrical oscillations signature (BEOS). Brain fingerprinting purportedly tests for 'guilty knowledge,' or memory of a kind that only a guilty person could have. Other forms of guilt detection, using functional magnetic resonance imaging (fMRI), are based on the assumption that lying and truth-telling are associated with distinctive activity in different areas of the brain. These and other potential forms of 'mind reading' are still in development but may have far-reaching implications for court cases.
If neuroscience is used in the legal system, will its use be voluntary?
"Some proponents of neurolaw think that neuroscience will soon be used widely throughout the legal system and that it is bound to produce profound changes in both substantive and procedural law," conclude the authors. "Other leaders in neurolaw employ a less sanguine tone, urging caution so as to prevent misuses and abuses of neuroscience within courts, legislatures, prisons, and other parts of the legal system. Either way we need to be ready to prevent misuses and encourage legitimate applications of neuroscience and the only way to achieve these goals is for neuroscientists and lawyers to work together in the field of neurolaw."
I expect dictatorships to use brain scans and other neurotech in courts and in police investigations a lot more aggressively. China will use it more than the United States.
Okay folks, what do you make of this? Some people do not feel tempted to lie.
CAMBRIDGE, Mass. – A new study of the cognitive processes involved with honesty suggests that truthfulness depends more on absence of temptation than active resistance to temptation.
Using neuroimaging, psychologists looked at the brain activity of people given the chance to gain money dishonestly by lying and found that honest people showed no additional neural activity when telling the truth, implying that extra cognitive processes were not necessary to choose honesty. However, those individuals who behaved dishonestly, even when telling the truth, showed additional activity in brain regions that involve control and attention.
The study is published in Proceedings of the National Academy of Sciences, and was led by Joshua Greene, assistant professor of psychology in the Faculty of Arts and Sciences at Harvard University, along with Joe Paxton, a graduate student in psychology.
"Being honest is not so much a matter of exercising willpower as it is being disposed to behave honestly in a more effortless kind of way," says Greene. "This may not be true for all situations, but it seems to be true for at least this situation."
Do you know people who seem to default to lying to a point where it is counterproductive for them? I come across this tendency and I suspect imaging of their brains would show the opposite of what scientists found in honest people above. Some people are more disposed to dishonesty or violence or thievery or other unethical activities.
Others do not even feel tempted to unethical behavior. The people who live more ethical lives mostly do not fight an uphill battle to act ethically.
When someone is accused of committing a crime, it is the responsibility of impartial third parties, generally jurors and judges, to determine if that person is guilty and, if so, how much he or she should be punished. But how does one’s brain actually make these decisions? The researchers found that two distinct areas of the brain assess guilt and decide penalty.
This work is the joint effort of Owen Jones, professor of law and of biology, and René Marois, a neuroscientist and associate professor of psychology. Together with neuroscience graduate student Joshua Buckholtz, they scanned the brains of subjects with a highly sensitive technique called functional magnetic resonance imaging or fMRI. Their goal was to see how the brain was activated when a person judged whether or not someone should be punished for a harmful act and how severely the individual should be punished.
The right dorsolateral prefrontal cortex decides whether to convict. Surely potential jurors should undergo testing of their right dorsolateral prefrontal cortexes to make sure they work within acceptable ranges. Then the amygdala and other parts of the brain decide how much punishment to dole out.
The researchers found that activity in an analytic part of the brain, known as the right dorsolateral prefrontal cortex, tracked the decision of whether or not a person deserved to be punished but, intriguingly, appeared relatively insensitive to deciding how much to punish. By contrast, the activity in brain regions involved in processing emotions, such as the amygdala, tracked how much subjects decided to punish.
“These results raise the possibility that emotional responses to criminal acts may represent a gauge for assessing deserved punishment,” said Marois.
“There are long-running debates about the proper roles in law of ‘cold’ analysis and ‘hot’ emotion,” said Jones. “Our results suggest that, in normal punishment decisions, the distinct neural circuitries of both processes may be jointly involved, but separately deployed.”
Neuroscientists will discover much more about the inner workings of brains and how they differ. Those discoveries will likely lead to the development of ways to measure how well people judge. This won't just be a measure of how smart each person is. The ability to judge - especially to judge human behavior - using many types of evidence has got to be a rather complex skill, and the ability to do that judging varies greatly between people. Ideally a jury should be made up of people with exceptional skill at judging.
If you are going to get judged by a jury of your peers should it be a jury of people whose right dorsolateral prefrontal cortexes are similar to your own?
If you can't stand for some people to have more than others then you'll be miserable. Sure glad I'm tolerant of rich people.
To add some ammo to these explanations, Napier and Tost conducted a series of surveys on political attitudes of Americans and citizens of 8 Western countries, using previously collected data. Their results affirmed the "conservatives are happy, liberals are mad" findings of previous polls, but income, education, religion and other demographic variables couldn't explain the happiness gap.
However, when the authors instead grouped people by their "rationalisation of inequality," the differences between conservatives and liberals dissolved. Republican or Democrat, people not bothered by social or economic disparities tend to be happy.
This trend held for non-Americans, as well. Right-wingers in the Czech Republic, Germany, New Zealand, Norway, Slovakia, Spain, Sweden and Switzerland were all happier than liberals, on average.
My guess is that genetic differences account for a substantial fraction of the observed difference. Maybe the resistance to inequality was selected for because for most of human evolution having more stuff upped the odds that a person would create offspring who would create offspring. Nowadays the reproductive fitness advantage of having more stuff is much less or non-existent. Poorer people are creating more progeny than richer people. Yet this trait remains in some people.
My guess is that when we know far more about how our genetic differences cause cognitive differences we are going to discover that many of our political differences flow from our genetic differences. This could make disagreements stir up stronger passions because people will lose faith in an important (and incorrect) idea that helps legitimize the institutions of society: The idea that most policy disagreements can eventually be resolved by reaching consensus as a result of debate. If the opposing side holds their views because they are wired up by their genes to have ethical preferences that differ from one's own then one can always expect to disagree with one's opponents on key issues.
Even though people will know that certain of their preferences and beliefs come as a result of their genes, my guess is that people will still cling strongly to those preferences and beliefs. Knowing that one can never convince the opposition of the rightness of one's viewpoint could make people less willing to argue and more willing to just try to seize more power to get their genetically driven preferences turned into policy.
Update: My subject title is not meant to imply that rationalization was required to accept inequality while not being required to object to it. Whether one needs to do more rationalization to accept inequality than to object to it remains an open question. But even if one requires more cogitating to accept inequality that does not imply that inequality is bad. It could be that accepting inequality is wise. After all, the countries that tried to stamp it out impoverished themselves. But seeing acceptance of inequality as the correct choice requires creating a pretty sophisticated mental model of the world.
University of Chicago researchers find that children 7 to 12, while undergoing functional magnetic resonance imaging (fMRI) brain scans, show patterns of brain activity similar to adults' when watching animated videos of people experiencing pain.
The programming for empathy is something that is "hard-wired" into the brains of normal children, and not entirely the product of parental guidance or other nurturing, said Decety. Understanding the brain's role in responding to pain can help researchers understand how brain impairments influence anti-social behavior, such as bullying, he explained.
For their research, the team showed 17 typically developed children, ages seven to 12, animated photos of people experiencing pain, either received accidentally or inflicted intentionally. The group included nine girls and eight boys.
While undergoing fMRI scans, children were shown animations made from three photographs of two people, of whom only the right hands or right feet were visible.
The photographs showed people in pain caused accidentally, such as when a heavy bowl was dropped on their hands, and situations in which the people were hurt intentionally, such as when a person stepped deliberately on someone's foot. They were also shown pictures without pain and animations in which people helped someone alleviate pain.
The scans showed that the parts of the brain activated when adults see pain were also triggered in children.
"Consistent with previous functional MRI studies of pain empathy with adults, the perception of other people in pain in children was associated with increased hemodynamic activity in the neural circuits involved in the processing of first-hand experience of pain, including the insula, somatosensory cortex, anterior midcingulate cortex, periaqueductal gray and supplementary motor area," Decety wrote.
However, when the children saw animations of someone intentionally hurt, the regions of the brain engaged in social interaction and moral reasoning (the temporo-parietal junction, the paracingulate, orbital medial frontal cortices and amygdala) also were activated.
Suppose this sort of scanning was carried out with a much larger group of children. Would some small fraction of them show deficiencies in their reaction to seeing others suffer pain?
What I'd like to see: Do brain scans on a few hundred children to measure their empathy and other forms of reaction and then follow the children as they grow up and enter adulthood. Can future psychopaths or criminals be identified via brain scans?
In a review to be published in the May 18 issue of the journal Science, Jonathan Haidt, associate professor of psychology at the University of Virginia, discusses a new consensus scientists are reaching on the origins and mechanisms of morality. Haidt shows how evolutionary, neurological and social-psychological insights are being synthesized in support of three principles:
1) Intuitive primacy, which says that human emotions and gut feelings generally drive our moral judgments;
I see a lot of moral rationalizing where people try to come up with rational arguments to justify moral judgments they made for other reasons. How can I tell? When presented with flaws in logical reasoning about morality most people try to restructure their logic to keep the same conclusion rather than change to a different conclusion.
It perhaps says something about the lingering effects of the Enlightenment period in the West that most people feel a need to construct rational-sounding arguments to justify their moral beliefs. Or maybe the rationalizing serves the primary purpose of building arguments to persuade others?
2) Moral thinking is for social doing, which says that we engage in moral reasoning not to figure out the truth, but to persuade other people of our virtue or to influence them to support us; and
I agree with this point:
3) Morality binds and builds, which says that morality and gossip were crucial for the evolution of human ultrasociality, which allows humans - but no other primates - to live in large and highly cooperative groups.
It is worth noting in this context that people who join political parties choose most of their political positions after they join their party. They find out from other members what position they should take on a variety of issues. My interpretation: Political parties are like tribes and people behave in them in ways similar to how earlier humans behaved in tribes. People choose a political party which seems to share some values and styles of cognition. Then they demonstrate loyalty to their political tribe by subscribing to its myths.
"Putting these three principles together forces us to re-evaluate many of our most cherished notions about ourselves," says Haidt, whose own research demonstrates that people generally follow their gut feelings and make up moral reasons afterwards.
Well, it only forces some of us to re-evaluate. Others are happy to ignore anything that challenges the myths they want to believe.
Conservatives draw on more subsystems in the brain's moral processing.
Haidt argues that human morality is a cultural construction built on top of - and constrained by - a small set of evolved psychological systems. He presents evidence that political liberals rely primarily on two of these systems, involving emotional sensitivities to harm and fairness. Conservatives, however, construct their moral understandings on those two systems plus three others, which involve emotional sensitivities to in-group boundaries, authority and spiritual purity.
When offspring genetic engineering becomes possible I expect parental choices to produce bigger differences in how people morally reason. Conservative-leaning people will make their children morally reason even more strongly in the conservative style. The liberals will do likewise. So the size of the center will shrink. This will lead to deeper political divisions and perhaps civil war in some countries and wars between countries.
I also expect offspring genetic engineering to produce more other styles of moral reasoning including ones that are rare today and others that do not exist at all today. Who knows, maybe genetic engineering will move libertarianism up in the ranks of moral reasoning styles.
Using functional magnetic resonance imaging (fMRI), researchers in the United States, Germany, and elsewhere have started taking scans of the brains of psychopaths while the patients view horrific images, such as photographs of bloody stabbings, shootings, or evisceration. When normal people view these images, fMRI scans light up to indicate heavy brain activity in sections of the emotion-generating limbic system, primarily the amygdala, which is believed to generate feelings of empathy. But in psychopathic patients, these sections of the amygdala remain dark, showing greatly reduced activity or none at all. This phenomenon, known as limbic underactivation, may indicate that some of these people lack the ability to generate the basic emotions that keep primitive killer instincts in check.
Should we use information from brain scans and other measurement methods to identify people to preemptively target before they commit crimes? Some day scientific measures will probably allow us to calculate different odds for each person on whether the person will kill or rape or molest children or otherwise violate the rights of others. How should we use the future ability to perform those odds calculations? I think the answer depends on a number of factors:
1) The cost of the preemptive action for us and those who feel its effects.
2) The efficacy of the preemptive action. How much would a given preemptive action reduce the odds of a person committing rape, assault, theft, etc.?
3) The avoided costs of whatever might be prevented. The costs depend on the potential crime(s) that a given person has a propensity to commit. But then what price tag to put on, say, a rape avoided?
4) The accuracy of the odds prediction. How high would the odds and the accuracy of the odds have to be to make you think the odds warrant action by the state against a currently innocent person?
5) The costs of identification of threats. Brain scans, blood tests, gene tests, and other tests will cost money to perform.
What sorts of preemptive actions might we use? I can think of a lot of actions aside from preemptive imprisonment. For example:
A) Talk therapy. But would it help?
B) Drugs or other treatments that reduce violent behavior. Note that the power of these treatments will go up as biotechnology and medicine advance.
C) Exile. This can be from a country or a region or specific neighborhoods. For example, imagine an island to ship potential pedophiles to where there are no children.
D) Tracking bracelets. For example, track when a potential pedophile goes near a playground or school. Or track when a potential murderer is parked along a street at night watching.
E) Warn the neighbors. That way they can arm and otherwise protect themselves appropriately.
F) Outlaw the creation of offspring that carry genes that'll make them high risks to become murderers, rapists, etc. This intervention requires the existence of technology for offspring genetic engineering. That technology will come in 10 or at most 20 years.
Are you philosophically opposed to all preemption guided by the results of brain scans, genetic tests, and other methods of measurement? Or do you see it as valuable and worthwhile under some circumstances?
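The five factors above amount to an expected-value calculation: the harm a preemptive action is expected to avoid, minus what the action and the screening cost. Here is a minimal sketch of that weighing in Python. All numbers are hypothetical assumptions chosen for illustration, not estimates from any real study.

```python
# Toy expected-value comparison of preemptive actions.
# Every number below is a made-up assumption for illustration only.

def net_benefit(p_offense, harm_cost, efficacy, action_cost, screen_cost):
    """Expected harm avoided by a preemptive action, minus its costs.

    p_offense:   predicted probability the person offends (factor 4)
    harm_cost:   cost of the crime if it happens (factor 3)
    efficacy:    fraction by which the action cuts that probability (factor 2)
    action_cost: burden of the action on the person and society (factor 1)
    screen_cost: cost of the brain scan / gene test itself (factor 5)
    """
    return p_offense * efficacy * harm_cost - action_cost - screen_cost

# Hypothetical comparison for someone scored at a 5% offense probability:
# a tracking bracelet (cheap, moderately effective) vs. preemptive
# imprisonment (very effective, very costly).
bracelet = net_benefit(0.05, 1_000_000, 0.60, 20_000, 5_000)
prison = net_benefit(0.05, 1_000_000, 0.95, 500_000, 5_000)
print(bracelet, prison)
```

Under these assumed numbers the cheaper, less effective action comes out ahead, which is why the menu of milder interventions above matters: the calculation is as sensitive to the cost of the action as to its efficacy.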
Consider the following scenario: someone you know has AIDS and plans to infect others, some of whom will die. Your only options are to let it happen or to kill the person.
Do you pull the trigger?
Most people waver or say they could not, even if they agree that in theory they should. But according to a new study in the journal Nature, subjects with damage to a part of the frontal lobe make a less personal calculation.
The logical choice, they say, is to sacrifice one life to save many.
Who can read that line without flashing on Wrath Of Khan?
Spock: Don't grieve, Admiral. It is logical. The needs of the many outweigh...
Kirk: ...the needs of the few.
Spock: ...Or the one. I never took the Kobayashi Maru test until now. What do you think of my solution?
With the right form of brain damage you too could think like a Vulcan. Elective neurosurgery anyone?
Conducted by researchers at the University of Southern California, Harvard University, Caltech and the University of Iowa, the study shows that emotion plays an important role in scenarios that pose a moral dilemma.
If certain emotions are blocked, we make decisions that – right or wrong – seem unnaturally cold.
The scenarios in the study are extreme, but the core dilemma is not: should one confront a co-worker, challenge a neighbor, or scold a loved one in the interest of the greater good?
The core dilemma would also confront a starship captain or an away team leader. Or, hey, the core dilemma might even confront a Hollywood director who has a cast member who needs to go into rehab. What to do? Kill off the character so that the rest of the cast members can survive.
A reduction in empathy and passion will put a person on the path of Vulcan logic?
A total of 30 subjects of both genders faced a set of scenarios pitting immediate harm to one person against future certain harm to many. Six had damage to the ventromedial prefrontal cortex (VMPC), a small region behind the forehead, while 12 had brain damage elsewhere, and another 12 had no damage.
The subjects with VMPC damage stood out in their stated willingness to harm an individual – a prospect that usually generates strong aversion.
“Because of their brain damage, they have abnormal social emotions in real life. They lack empathy and compassion,” said Ralph Adolphs, Bren Professor of Psychology and Neuroscience at Caltech.
“In those circumstances most people without this specific brain damage will be torn. But these particular subjects seem to lack that conflict,” said co-senior author Antonio Damasio, director of the Brain and Creativity Institute and holder of the David Dornsife Chair in Neuroscience at USC.
But if you get your VMPC damaged will you get the occasional urge to engage in pon farr?
The idea of damaging my VMPC would only become appealing to FuturePundit if it also conferred a lifespan 3 times longer than a normal human's.