2012 September 03 Monday
The Future Does Not Include Brain Privacy

I hear Mick Jagger singing "these days it's all secrecy, no privacy". Do not take the privacy of your own thoughts for granted.

A team of security researchers from Oxford, UC Berkeley, and the University of Geneva say they were able to deduce digits of PINs, birth months, areas of residence, and other personal information by presenting 30 headset-wearing subjects with images of ATMs, debit cards, maps, people, and random numbers in a series of experiments.

Imagine how an interrogation system could be constructed to show assorted pieces of information along with questions in order to measure how the brain reacts.
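As a rough sketch of how such a system might score responses — assuming EEG epochs time-locked to each displayed item are already available, and using an entirely invented sampling rate, response window, and effect size — the probe whose presentation evokes the strongest average response gets flagged:

```python
import numpy as np

fs = 256                                       # assumed sampling rate (Hz)
window = slice(int(0.3 * fs), int(0.5 * fs))   # assumed 300-500 ms response window

def mean_response(epochs: np.ndarray) -> float:
    """Average amplitude inside the response window, across trials."""
    return float(epochs[:, window].mean())

def guess_digit(epochs_by_digit: dict) -> int:
    """Pick the digit whose presentation evoked the largest average response."""
    scores = {d: mean_response(e) for d, e in epochs_by_digit.items()}
    return max(scores, key=scores.get)

# Fake data: one second of EEG per trial, 40 trials per candidate digit,
# with digit 7 getting a slightly elevated response inside the window.
rng = np.random.default_rng(0)
epochs_by_digit = {d: rng.normal(0.0, 1.0, (40, fs)) for d in range(10)}
epochs_by_digit[7][:, window] += 0.5
print(guess_digit(epochs_by_digit))  # most likely prints 7
```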

By Randall Parker    2012 September 03 09:16 PM   Entry Permalink | Comments (8)
2012 March 26 Monday
Eye Movement Analysis Detects Liars

University at Buffalo researchers find that an automated system for analyzing eye movements has high accuracy for detecting lies.

Results so far are promising: In a study of 40 videotaped conversations, an automated system that analyzed eye movements correctly identified whether interview subjects were lying or telling the truth 82.5 percent of the time.

That's a better accuracy rate than expert human interrogators typically achieve in lie-detection judgment experiments, said Ifeoma Nwogu, a research assistant professor at UB's Center for Unified Biometrics and Sensors (CUBS) who helped develop the system. In published results, even experienced interrogators average closer to 65 percent, Nwogu said.

Imagine people using hidden cameras in a conversation with a spouse or in a job interview. They could analyze the images later.

Do you know anyone you'd like to subject to lie detection analysis? For what reason?

By Randall Parker    2012 March 26 10:55 PM   Entry Permalink | Comments (17)
2010 June 24 Thursday
Brain Scans Predict Sunscreen Lotion Use

Picture mom worrying about whether the kids will put on sunscreen at the beach if she's not around. Well, should she go along to make sure they do it? Not if she just happens to have a brain scanner down in the basement. Put the kids into the scanner, show them a public service announcement on the importance of skin protection to avoid skin cancer, and read what the scanner reports about activity in the medial prefrontal cortex. With a healthy level of activity in the medial prefrontal cortex, the odds are high that the oil will get spread all over the skin.

"Half the money I spend on advertising is wasted; the trouble is I don't know which half." — John Wanamaker, 19th-century U.S. department store pioneer

In a study with implications for the advertising industry and public health organizations, UCLA neuroscientists have shown they can use brain scanning to predict whether people will use sunscreen during a one-week period even better than the people themselves can.

"There is a very long history within psychology of people not being very good judges of what they will actually do in a future situation," said the study's senior author, Matthew Lieberman, a UCLA professor of psychology and of psychiatry and biobehavioral sciences. "Many people 'decide' to do things but then don't do them."

The new study by Lieberman and lead author Emily Falk, who earned her doctorate in psychology from UCLA this month, shows that increased activity in a brain region called the medial prefrontal cortex among individuals viewing and listening to public service announcement slides on the importance of using sunscreen strongly indicated that these people were more likely to increase their use of sunscreen the following week, even beyond the people's own expectations.

People are such liars. Brain scans predict behavior more accurately than what people say.

"From this region of the brain, we can predict for about three-quarters of the people whether they will increase their use of sunscreen beyond what they say they will do," Lieberman said. "If you just go by what people say they will do, you get fewer than half of the people accurately predicted, and using this brain region, we could do significantly better."

No need for expensive functional magnetic resonance imaging equipment to check out the kids in the basement. Cheaper scanners will do the job in the future.

"Given that there are emerging technologies that are relatively portable and approximate some of what fMRI can do at a fraction of the cost, looking to the brain to shape persuasive messages could become a reality," Lieberman said. "But we're just at the beginning. This is one of the first papers on anything like this. There will be a series of papers over the next 10 years or more that will tell us what factors are driving neural responses."

This opens up all sorts of possibilities. Show the kids videos on the dangers of illicit drug use and watch their brains. Are they going to go off sneaking around to take hallucinogens? The brain scan will tell you. Show pubescent adolescents a video on the dangers of teen sex. Are they going to sneak off and do the wild thing? You'll know. Break out the chastity belt if the scans do not look promising.

By Randall Parker    2010 June 24 12:02 AM   Entry Permalink | Comments (1)
2009 December 09 Wednesday
Promise Breakers Found Via Brain Scans

Some day you won't be able to escape into the privacy of your own mind.

Despite the ubiquity of promises in human life, we know very little about the brain physiological mechanisms underlying this phenomenon. In order to increase understanding in this area, neuroscientist Thomas Baumgartner (University of Zurich) and economists Ernst Fehr (University of Zurich) and Urs Fischbacher (University of Konstanz) carried out a social interaction experiment in a brain scanner where the breach of a promise led both to monetary benefits for the promise breaker and to monetary costs for the interaction partner. The results of the study show that increased activity in areas of the brain playing an important role in processes of emotion and control accompany the breach of a promise. This pattern of brain activity suggests that breaking a promise triggers an emotional conflict in the promise breaker due to the suppression of an honest response.

Furthermore, the most important finding of the study enabled the researchers to show that "perfidious" patterns of brain activity even allow the prediction of future behavior. Indeed, experimental subjects who ultimately keep a promise and those who eventually break one act exactly the same at the time the promise is made – both swear to keep their word. Brain activity at this stage, however, often exposes the subsequent promise breakers.

The ability to detect promise breaking is a subset of lie detection.

What I'm wondering: can some people train with brain scanners and learn how to fool the scanners into showing seemingly honest activity while they are really deciding to deceive and break promises?

By Randall Parker    2009 December 09 08:35 PM   Entry Permalink | Comments (1)
2009 October 15 Thursday
Brain Probes Measure Language Processing Delays

Patients who were to undergo brain surgery to treat seizures had electrodes inserted in their brains to help the surgeons. But some scientists collected data from those electrodes and found that three major aspects of language processing start sequentially and can each be detected as they begin.

A study by researchers at the University of California, San Diego School of Medicine reports a significant breakthrough in explaining gaps in scientists' understanding of human brain function. The study – which provides a picture of language processing in the brain with unprecedented clarity – will be published in the October 16 issue of the journal Science.

"Two central mysteries of human brain function are addressed in this study: one, the way in which higher cognitive processes such as language are implemented in the brain and, two, the nature of what is perhaps the best-known region of the cerebral cortex, called Broca's area," said first author Ned T. Sahin, PhD, post-doctoral fellow in the UCSD Department of Radiology and Harvard University Department of Psychology.

The study demonstrates that a small piece of the brain can compute three different things at different times – within a quarter of a second – and shows that Broca's area doesn't just do one thing when processing language. The discoveries came through the researchers' use of a rare procedure in which electrodes were placed in the brains of patients. The technique allowed surgeons to know which small region of the brain to remove to alleviate their seizures, while sparing the healthy regions necessary for language. Recordings for research purposes were then made while the patients were awake and responsive. The procedure, called Intra-Cranial Electrophysiology (ICE), allowed the researchers to resolve brain activity related to language with spatial accuracy down to the millimeter and temporal accuracy down to the millisecond.

I wonder why the temporal accuracy of these probes is only 1 millisecond. Oscilloscopes and logic analyzers can measure finer granularity than that. Anyone have any idea why the temporal measurement accuracy is limited to 1 millisecond?

So what would it take to optimize the brain to start lexical, grammatical and articulatory computations sooner than 200, 320, and 450 milliseconds after a word is presented?

"We showed that distinct linguistic processes are computed within small regions of Broca's area, separated in time and partially overlapping in space," said Sahin. Specifically, the researchers found patterns of neuronal activity indicating lexical, grammatical and articulatory computations at roughly 200, 320 and 450 milliseconds after the target word was presented. These patterns were identical across nouns and verbs and consistent across patients.

Do any humans do the processing sooner than these numbers? Perhaps high IQ people are quicker?

By Randall Parker    2009 October 15 11:30 PM   Entry Permalink | Comments (6)
2009 July 23 Thursday
Brain Scans Detect Which Of 8 Tasks Subjects Are Doing

By scanning the brains of people doing 8 different tasks, scientists can predict most of the time which mental task a person is working on.

New research by neuroscientists at UCLA and Rutgers University provides evidence that fMRI can be used in certain circumstances to determine what a person is thinking. At the same time, the research suggests that highly accurate "mind reading" using fMRI is still far from reality. The research is scheduled to be published in the October 2009 issue of the journal Psychological Science.

In the study, 130 healthy young adults had their brains scanned in an MRI scanner at UCLA's Ahmanson–Lovelace Brain Mapping Center while they performed one of eight mental tasks, including reading words aloud, saying whether pairs of words rhyme, counting the number of tones they heard, pressing buttons at certain cues and making monetary decisions. The scientists calculated how accurately they could tell from the fMRI brain scans which mental task each participant was engaged in.

"We take 129 of the subjects and apply a statistical tool to learn the differences among people doing these eight tasks, then we take the 130th person and try to tell which of the tasks this person was doing; we do that for every person," said lead study author Russell Poldrack, a professor of psychology who holds UCLA's Wendell Jeffrey and Bernice Wenzel Term Chair in Behavioral Neuroscience.

Just how many kinds of tasks can be distinguished? Could scientists detect the difference between someone reading a comedy versus someone reading a political tract that advocates for armed insurrection? My guess is that lots of different mental tasks will end up looking similar.

The scientists can guess the task being done 80% of the time.

"It turns out that we can predict quite well which of these eight tasks they are doing," he said. "If we were just guessing, we would get it right about 13 percent of the time. We get it right about 80 percent of the time with our statistical tool. It's not perfect, but it is quite good — but not nearly good enough to be admissible in court, for example.

This capability is far from a general mind reading tool.

"Our study suggests that the kinds of things that some people have talked about in terms of mind reading are probably still pretty far off," Poldrack said. "If we are only 80 percent accurate with eight very different thoughts and we want to figure out what you're thinking out of millions of possible thoughts, we're still very far away from achieving that."

But will fMRI ever make a good lie detector? Have you come across any fMRI lie detector studies?

Update: The researchers also find that a given function in the brain involves neural connections across multiple regions of the brain.

“You can’t just pinpoint a specific area of the brain, for example, and say that is the area responsible for our concept of self or that part is the source of our morality,” says Hanson. “It turns out the brain is much more complex and flexible than that. It has the ability to rearrange neural connections for different functions. By examining the pattern of neural connections, you can predict with a high degree of accuracy what mental processing task a person is doing.“

The findings open up the possibility of categorizing a multitude of mental tasks with their unique pattern of neural circuitry and also represent a potential first, early step in developing a means for identifying higher-level mental functions, such as 'lying' or abstract reasoning. They potentially also could pave the way for earlier diagnosis and better treatment of mental disorders, such as autism and schizophrenia, by offering a means for identifying very subtle abnormalities in brain activity and synchrony.

I'm curious to know whether people who do a function well (e.g. math) use more or less of the brain than people who do the same function poorly.

By Randall Parker    2009 July 23 07:28 AM   Entry Permalink | Comments (0)
2007 February 08 Thursday
Brain Scans Predict Choices

Dr. John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences and colleagues have recently shown that using brain scans they can predict with fairly high accuracy which of two choices test subjects will choose when deciding to add or subtract two numbers.

To address the question of whether intention might be reflected in prefrontal cortical activity, the researchers in the new work used functional magnetic resonance imaging (fMRI) to assess brain activity while subjects concentrated on their choice of intended mental action, but prior to execution of the action. Specifically, subjects were free to choose between adding or subtracting two numbers and were asked to hold in mind their intention until numbers were presented on a screen, along with a choice of outcomes (one of which was correct for the addition choice, one correct for the subtraction choice). Subjects then selected the correct answer according to their planned task, revealing their intended action.

The researchers found that during the delay between the subjects' choice of task and execution of the task, it was possible to decode from activities in two regions of the prefrontal cortex which of the two actions (addition or subtraction) individuals had chosen to pursue. Different patterns of activity were seen during actual execution of the task, showing that regionally distinct neural substrates were involved in task preparation and execution. Decoding of intentions was most robust when activity patterns in the medial prefrontal cortex were taken into account, consistent with the idea that this region of the brain participates in an individual's reflection on his or her own mental state.

Are you ever bothered that this sort of research takes all the mystery out of life? Do you start seeing humans as less lofty, and noble intentions as no better than the most criminal and vicious intentions?

Their computer model, which analyzes brain scans, can predict the subject's choice 70% of the time.

Our secret intentions remain concealed until we put them into action - so we believe. Now researchers have been able to decode these secret intentions from patterns of their brain activity. They let subjects freely and covertly choose between two possible tasks - to either add or subtract two numbers. They were then asked to hold in mind their intention for a while until the relevant numbers were presented on a screen. The researchers were able to recognize the subjects' intentions with 70% accuracy based on their brain activity alone - even before the participants had seen the numbers and had started to perform the calculation.
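The decoding step itself — telling an "add" intention from a "subtract" intention by the spatial pattern of delay-period activity — can be illustrated with a toy nearest-centroid classifier on simulated voxel patterns (all sizes and signal levels are invented; the study's actual method and 70% figure are described above):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 50  # invented sizes

# Simulated delay-period activity patterns, labeled by the chosen intention
# (0 = add, 1 = subtract). The signal strength is invented.
intent = rng.integers(0, 2, n_trials)
signal = np.where(intent[:, None] == 0, 0.1, -0.1)
patterns = signal + rng.normal(0.0, 1.0, (n_trials, n_voxels))

def decode(train_X, train_y, test_X):
    """Nearest-centroid decoder: assign each test pattern to the closer class mean."""
    add_mean = train_X[train_y == 0].mean(axis=0)
    sub_mean = train_X[train_y == 1].mean(axis=0)
    d_add = np.linalg.norm(test_X - add_mean, axis=1)
    d_sub = np.linalg.norm(test_X - sub_mean, axis=1)
    return (d_sub < d_add).astype(int)

# Train on the first half of the trials, test on the second half.
half = n_trials // 2
pred = decode(patterns[:half], intent[:half], patterns[half:])
print(f"decoding accuracy on fake data: {(pred == intent[half:]).mean():.0%}")
```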

Imagine an algorithm that analyzes brain scans and can detect the intention to lie. Such a capability would make a great lie detector. Another use? Operating robotic prostheses.

Intentions exist in a network of neurons.

The study also reveals fundamental principles about the way the brain stores intentions. "The experiments show that intentions are not encoded in single neurons but in a whole spatial pattern of brain activity", says Haynes. They furthermore reveal that different regions of the prefrontal cortex perform different operations. Regions towards the front of the brain store the intention until it is executed, whereas regions further back take over when subjects become active and start doing the calculation. "Intentions for future actions that are encoded in one part of the brain need to be copied to a different region to be executed", says Haynes.

Whenever I think of brain scans done by governments I think of Mick Jagger singing "These days it's all secrecy, no privacy".

By Randall Parker    2007 February 08 11:05 PM   Entry Permalink | Comments (3)
2006 June 28 Wednesday
Pictures Of Eyes Make People More Honest

People are more likely to make unsupervised cash donations to a box in exchange for tea, coffee, and milk when a poster over the box has a picture of human eyes looking down.

The researchers say the eye pictures were probably influential because the brain naturally reacts to images of faces and eyes. It seems people were subconsciously cooperating with the honesty box when it featured pictures of eyes rather than flowers.

They also say the findings show how people behave differently when they believe they are being watched because they are worried what others will think of them. Being seen to co-operate is a good long-term strategy for individuals because it is likely to mean others will return the gesture when needed.

Details of the experiment, believed to be the first to test how cues of being watched affect people's tendency for social co-operation in a real-life setting, are published today, Wednesday June 28, in the Royal Society journal Biology Letters.

An honesty box is a system of payment which relies on people's honesty to pay a specified price for goods or services - there is no cashier to check whether they are doing so.

For this experiment, lead researcher Dr Melissa Bateson and her colleagues Drs Daniel Nettle and Gilbert Roberts, of the Evolution and Behaviour Research Group in the School of Biology and Psychology at Newcastle University, made use of a long-running 'honesty box' arrangement.

This had been operating as a way of paying for hot drinks in a common room used by around 48 staff for many years, so users had no reason to suspect an experiment was taking place.

An A5 poster was placed above the honesty box, listing prices of tea, coffee and milk. The poster also featured an image banner across the top, and this alternated each week between different pictures of flowers and images of eyes.

The eye pictures varied in the sex and head orientation but were all chosen so that the eyes were looking directly at the observer.

Each week the research team recorded the total amount of money collected and the volume of milk consumed as this was considered to be the best index available of total drink consumption.

The team then calculated the ratio of money collected to the volume of milk consumed in each week. On average, people paid 2.76 times as much for their drinks in the weeks when the poster featured pictures of eyes.

Lead author of the study, Melissa Bateson, a Royal Society research fellow based at Newcastle University, said: "Our brains are programmed to respond to eyes and faces whether we are consciously aware of it or not.

"I was really surprised by how big the effect was as we were expecting it to be quite subtle but the statistics show that the eyes had a strong effect on our tea and coffee drinkers."

Those nations with massive posters of dictators staring down on every street probably have lower crime rates as a result.

This result seems to have all sorts of obvious immediate applications. Parents could put posters of eyes in rooms where their kids might be tempted to misbehave when the parents are not around. Posters of eyes could be put up in bus and train stations to see if they deter pickpockets. Posters of eyes in workplaces might make people less likely to slack off.

One obvious direction for further research would be to try different kinds of faces and facial expressions to see if some faces make people work harder, treat callers more politely on technical support calls, or otherwise perform better and more honestly in work situations.

Would people in workplaces feel more stressed when eyes in posters look down upon them?

In public places such as town squares, train stations, and airports which have video surveillance cameras (aka CCTVs) would the cameras be more effective in deterring crime if combined with a poster of eyes mounted above them to emphasize to people that they are being watched?

By Randall Parker    2006 June 28 08:42 PM   Entry Permalink | Comments (4)
Brain Scan Lie Detectors Come To Market

Higher accuracy lie detection technology is coming to market.

Two companies plan to market the first lie-detecting devices that use magnetic resonance imaging (MRI) and say the new tests can spot liars with 90% accuracy.

No Lie MRI plans to begin offering brain-based lie-detector tests in Philadelphia in late July or August, says Joel Huizenga, founder of the San Diego-based start-up. Cephos Corp. of Pepperell, Mass., will offer a similar service later this year using MRI machines at the Medical University of South Carolina in Charleston, says its president, Steven Laken.

Both rely in part on recent research funded by the federal government aimed at producing a foolproof method of detecting deception.

Lie detection will become a huge market. It will change personal relationships, marriages, the criminal justice system (I love tools that can exonerate the innocent), the hunt for terrorists, and raise honesty in business dealings.

Want to settle an argument where one party does not trust the other's claims? Even better, how about those arguments where both sides say the other is lying? The solution (assuming you don't mind the 90% accuracy rate) is quite affordable.

No Lie MRI plans to charge $30 a minute to use its device. Cephos has not yet set a price.

Have any disagreements with suspected liars that would be worth at least $30 to verify truth or dishonesty?

Be on the look-out for VeraCenters.

No Lie MRI will debut its services this July in Philadelphia, where it will demonstrate the technology to be used in a planned network of facilities the company is calling VeraCenters. Each facility will house a scanner connected to a central computer in California. As the client responds to questions using a handheld device, the imaging data will be fed to the computer, which will classify each answer as truthful or deceptive using software developed by Langleben's team.

Temple University radiologist Scott Faro sees lie detectors as great money savers.

"People say fMRI is expensive," Faro continues, "but what's the cost of a six-month jury trial? And what's the cost to America for missing a terrorist? If this is a more accurate test, I don't see any moral issues at all. People who can afford it and believe they are telling the truth are going to love this test."

The more parties to a disagreement, the less of a problem the 90% accuracy rate becomes. Ask several employees in a company or suspected members of a terrorist ring some hard questions. See where they all line up in terms of their answers and the fMRI machine's assessments.
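A back-of-envelope illustration of why asking more people helps, under the generous assumption that each person's test errs independently with probability 0.1 (real errors would be correlated, so treat this as an upper bound on the benefit):

```python
from math import comb

def p_majority_wrong(n_tests: int, accuracy: float = 0.9) -> float:
    """Probability that more than half of n independent tests give the wrong verdict."""
    p_err = 1.0 - accuracy
    return sum(comb(n_tests, k) * p_err**k * accuracy**(n_tests - k)
               for k in range(n_tests // 2 + 1, n_tests + 1))

for n in (1, 3, 5, 7):
    print(n, f"{p_majority_wrong(n):.4f}")
# 1 -> 0.1000, 3 -> 0.0280, 5 -> 0.0086, 7 -> 0.0027
```

On that optimistic independence assumption, seven tests bring the chance of a wrong majority below one percent.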

The US federal government prevents private companies from using the cost savings of lie detection. This'll become an incentive to move work offshore when business needs place a very high value on honesty and trustworthiness.

No Lie MRI's plans to market its services to corporations will likely run afoul of the 1988 Employee Polygraph Protection Act, which bars the use of lie-detection tests by most private companies for personnel screening. Government employers, however, are exempt from this law, which leaves a huge potential market for fMRI in local, state, and federal agencies, as well as in the military.

I wonder if lie detection will be allowed in divorce cases? "Have you disclosed all your sources of income and all assets?" Or how about "Have you ever done illegal drugs while you had custody of the kids?"

By Randall Parker    2006 June 28 04:25 PM   Entry Permalink | Comments (15)
2006 January 31 Tuesday
Brain MRI Better Than Polygraph For Lie Detection?

I'm reminded of Mick Jagger singing "These days it's all secrecy, no privacy". So much for the privacy of your own thoughts.

Traditional polygraph tests to determine whether someone is lying may take a back seat to functional magnetic resonance imaging (fMRI), according to a study appearing in the February issue of Radiology. Researchers from Temple University Hospital in Philadelphia used fMRI to show how specific areas of the brain light up when a person tells a lie.

"We have detected areas of the brain activated by deception and truth-telling by using a method that is verifiable against the current gold standard method of lie detection--the conventional polygraph," said lead author Feroze B. Mohamed, Ph.D., Associate Professor of Radiology at Temple.

Dr. Mohamed explained how the standard polygraph test has failed to produce consistently reliable results, largely because it relies on outward manifestations of certain emotions that people feel when lying. These manifestations, including increased perspiration, changing body positions and subtle facial expressions, while natural, can be suppressed by a large enough number of people that the accuracy and consistency of the polygraph results are compromised.

"Since brain activation is arguably less susceptible to being controlled by an individual, our research will hopefully eliminate the shortcomings of the conventional polygraph test and produce a new method of objective lie detection that can be used reliably in a courtroom or other setting," Dr. Mohamed said.

Dr. Mohamed and colleagues recruited 11 healthy subjects for the study. A mock shooting was staged, in which blank bullets were fired in a testing room. Five volunteers were asked to tell the truth when asked a series of questions about their involvement, and six were asked to deliberately lie. Each volunteer was examined with fMRI to observe brain activation while they answered questions either truthfully or deceptively. They also underwent a conventional polygraph test, where respiration, cardiovascular activity and perspiration responses were monitored. The same questions were asked in both examinations, and results were compared among the groups.

"With fMRI, there were consistently unique areas of the brain, and more of them, that were activated during the deceptive process than during truth-telling," Dr. Mohamed said. In producing a deceptive response, a person must inhibit or conceal the truth, which activates parts of the brain that are not required for truth-telling. Thus, fewer areas of the brain are active when telling the truth.

Fourteen areas of the brain were active during the deceptive process. In contrast, only seven areas lit up when subjects answered truthfully.

By studying the images, investigators were able to develop a better picture of the deception process in the brain. The increased activity in the frontal lobe, especially, indicated how the brain works to inhibit the truth and construct a lie.

Will some people with special intellectual talents be able to develop the ability to fool a functional MRI scan?

By Randall Parker    2006 January 31 09:39 PM   Entry Permalink | Comments (1)
2004 December 01 Wednesday
Brain Scans Show More Of Brain Activated For Lies Than For Truths

Will brain scans be able to always detect a lie?

CHICAGO – When people lie, they use different parts of their brains than when they tell the truth, and these brain changes can be measured by functional magnetic resonance imaging (fMRI), according to a study presented today at the annual meeting of the Radiological Society of North America. The results suggest that fMRI may one day prove a more accurate lie detector than the polygraph.

"There may be unique areas in the brain involved in deception that can be measured with fMRI," said lead author Scott H. Faro, M.D. "We were able to create consistent and robust brain activation related to a real-life deception process." Dr. Faro is professor and vice-chairman of radiology and director of the Functional Brain Imaging Center and Clinical MRI at Temple University School of Medicine in Philadelphia.

The researchers created a relevant situation for 11 normal volunteers. Six of the volunteers were asked to shoot a toy gun with blank bullets and then to lie about their participation. The non-shooters were asked to tell the truth about the situation. The researchers examined the individuals with fMRI, while simultaneously administering a polygraph exam. The polygraph measured three physiologic responses: respiration, blood pressure and galvanic skin conductance, or the skin's ability to conduct electricity, which increases when an individual perspires.

The volunteers were asked questions that pertained to the situation, along with unrelated control questions. In all cases, the polygraph and fMRI accurately distinguished truthful responses from deceptive ones. fMRI showed activation in several areas of the brain during the deception process. These areas were located in the frontal (medial inferior and pre-central), temporal (hippocampus and middle temporal), and limbic (anterior and posterior cingulate) lobes. During a truthful response, the fMRI showed activation in the frontal lobe (inferior and medial), temporal lobe (inferior) and cingulate gyrus.

Overall, there were regional differences in activation between deceptive and truthful conditions. Furthermore, there were more areas of the brain activated during the deception process compared to the truth-telling condition.

Dr. Faro's study is the first to use polygraph correlation and a modified version of positive control questioning techniques in conjunction with fMRI. It is also the first to involve a real-life stimulus. "I believe this is a vital approach to understand this very complex type of cognitive behavior," Dr. Faro said. "The real-life stimulus is critical if this technique is to be developed into a practical test of deception."

Because physiologic responses can vary among individuals and, in some cases, can be regulated, the polygraph is not considered a wholly reliable means of lie detection. According to Dr. Faro, it is too early to tell if fMRI can be "fooled" in the same manner.

However, these results are promising in that they suggest a consistency in brain patterns that might be beyond conscious control.

"We have just begun to understand the potential of fMRI in studying deceptive behavior," Dr. Faro said. "We plan to investigate the potential of fMRI both as a stand-alone test and as a supplement to the polygraph with the goal of creating the most accurate test for deception."

Dr. Faro's co-authors on this paper were Feroze Mohamed, Ph.D., Nathan Gordon, M.S., Steve Platek, Ph.D, Mike Williams, Ph.D., and Harris Ahmad, M.D.

Faro wants money from national security agencies for larger studies.

Will fMRI stand alone as a test for deception? Dr. Faro admits he's not yet sure: "The polygraph looks at only peripheral stimulus as the end result of a long chain of primary central areas of activation of the brain. We're now getting to the origin of the activation."

Dr. Faro called the results "promising" and said he hopes to gain the interest of major organizations such as the Department of Homeland Security, the National Security Agency or the CIA to help fund further research and larger group studies using the same methods, but he says the technology is expensive.

"It's probably going to be used on the academic side to understand psycho-social behavior, and on the criminal side, it's going to be used for major criminals," said Dr. Faro. "We're looking at areas of tremendous concern with terrorism, where the expense is minimal compared to the potential disaster. Looking at industrial or business-related crimes, certainly Martha Stewart could afford this test if she was truly interested."

Imagine people negotiating a huge business deal demonstrating their sincerity by agreeing to a brain scan while being asked questions about whether they intend to go through with a deal in good faith. Questions about each of the commitments in the contract could be asked. Will that give an advantage to superficial people who are sincere but prone to changing their minds? How much do intentions matter?

Update: One thought: If one dreams up a lie well ahead of time (like days or weeks) will its recall look more like a truth on a brain scan? If one can fantasize the lie and make it into something like a real memory it might require less mental effort to recall and/or construct than a lie made up on the spot. Therefore in a brain scan it might look more like a regular memory.

By Randall Parker    2004 December 01 12:48 AM   Entry Permalink | Comments (12)
2004 February 26 Thursday
Voice Stress Lie Detectors Do Not Work

Voice-stress lie detectors appear to be useless.

"We tested one of the more popular voice-stress lie detection technologies and got dismal results, both in the system's ability to detect people actually engaged in deception and in its ability to exclude those not attempting to be deceptive," said Mitchell S. Sommers, an associate professor of psychology in Arts & Sciences at Washington University in St. Louis.

"In our evaluation, voice-stress analysis detected some instances of deception, but its ability to do so was consistently less than chance — you could have gotten better results by flipping a coin," Sommers said.

Sommers' research was supported by and conducted in conjunction with the Department of Defense Polygraph Institute (DODPI), located in Fort Jackson, S.C. Findings were presented at the World Congress of International Conference of Psychophysiology in July 2002. An academic paper on the study is under review for journal publication.

Sommers' study assessed the ability of Vericator, a computer-based system that evaluates credibility through slight variations in a person's speech, to detect deception in a number of different scenarios. Participants were questioned using different forms of interrogation and under conditions inducing various levels of stress.

...

"Voice-stress analysis is fairly effective in identifying certain variations in stress levels in human speech, but high levels of stress do not necessarily correlate with deception," Sommers said. "It may someday be possible to refine voice-stress analysis so that it is capable of distinguishing among various sources of stress and accurately identifying those that are directly related to deception. However, all the research that I've seen thus far suggests that it's wishful thinking, at best, to suggest that current voice-stress analysis systems are capable or reliably detecting deception."

My guess is that a high resolution image processing system that analyzed facial muscle changes would have a better chance of working. Take Paul Ekman's research into his Facial Action Coding System (FACS), develop an automated means of using it, and it might be possible to build a useful lie detector.
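A sketch of the back half of that idea — treating automatically coded facial action unit intensities as features for a deception classifier — on entirely made-up data. The hard part, turning raw video into FACS action unit scores, is assumed away here, and whether any action units actually shift during deception is the open empirical question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_clips, n_action_units = 80, 20  # invented sizes

# Pretend an automated FACS coder has already reduced each video clip to a
# vector of mean action unit intensities; producing those from raw video is
# the hard part and is not shown here.
deceptive = rng.integers(0, 2, n_clips)
au_features = rng.normal(0.0, 1.0, (n_clips, n_action_units))
au_features[deceptive == 1, :3] += 0.5  # assume a few action units shift slightly when lying

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, au_features, deceptive, cv=5).mean()
print(f"cross-validated accuracy on fake data: {accuracy:.0%}")
```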

By Randall Parker    2004 February 26 11:35 AM   Entry Permalink | Comments (4)