Human memory and human judgment cannot be trusted. Doctored video has caused people to believe they saw something they never saw.
Associate Professor Dr Kimberley Wade from the Department of Psychology led an experiment to see whether exposure to fabricated footage of an event could induce individuals to accuse another person of doing something they never did.
In the study, published in Applied Cognitive Psychology, Dr Wade found that almost 50% of people shown fake footage of an event they witnessed first hand were prepared to believe the video version rather than what they actually saw.
Scary thought: You get arrested for something you didn't do and then humans are called to testify as witnesses. Not Vulcans, oh no. Lowly, flawed, mistaken, gullible, foolish humans. The very thought is enough to make me ill. Lesson to take home: Innocent people should move to planet Vulcan lest they get convicted of crimes they didn't commit.
You can't trust half the eyewitnesses.
In a game involving the use of fake money, participants were deceived into believing that a person sitting next to them had cheated. Shown a doctored video, people were willing to claim they saw the cheating when it supposedly first occurred, even though the cheating never happened.
One third of the subjects were told that the person sitting next to them was suspected of cheating. Another third were told the person had been caught on camera cheating, and the remaining group were actually shown the fake video footage. All subjects were then asked to sign a statement only if they had seen the cheating take place.
Nearly 40% of the participants who had seen the doctored video complied. Another 10% of the group signed when asked a second time by the researchers. Only 10% of those who were told the incident had been caught on film but were not shown the video agreed to sign, and about 5% of the control group who were just told about the cheating signed the statement.
Think about that: 5% didn't even need to see a doctored video to attest to something they never saw.
The brain’s store of willpower is depleted when people control their thoughts, feelings or impulses, or when they modify their behavior in pursuit of goals. Psychologist Roy Baumeister and others have found that people who successfully accomplish one task requiring self-control are less persistent on a second, seemingly unrelated task.
In one pioneering study, some people were asked to eat radishes while others received freshly baked chocolate chip cookies before trying to solve an impossible puzzle. The radish-eaters abandoned the puzzle in eight minutes on average, working less than half as long as people who got cookies or those who were excused from eating radishes. Similarly, people who were asked to circle every “e” on a page of text then showed less persistence in watching a video of an unchanging table and wall.
The article reports you can keep up your willpower by keeping up your blood sugar. Also, exercising your willpower appears to make it stronger.
These claims seem consistent with my own daily experience. I have to let some things slide in order to get through dealing with people and chores in other areas.
The writers of this New York Times article, Sandra Aamodt, editor of Nature Neuroscience, and Sam Wang, a professor of biology and neuroscience at Princeton, have a new book that sounds interesting: “Welcome to Your Brain: Why You Lose Your Car Keys but Never Forget How to Drive and Other Puzzles of Everyday Life.”
The research literature on brain metabolism and willpower is fascinating. For example, people who try to suppress facial expressions while watching a movie have lower blood sugar as a result. That lower blood sugar means less glucose available for the brain to do additional cognitive processing.
Self-control literally requires energy. Subjects asked to suppress facial reactions (e.g. smiles) while watching a movie have lower blood glucose levels afterward, suggesting higher energy consumption. Control subjects (free to react however they wanted) had the same blood glucose levels before and after the movie, and performed better than the suppression group on a subsequent Stroop task. Restoring glucose levels with sugar-sweetened lemonade (rather than artificially sweetened beverages containing no glucose) also improves performance. Self-control failures happen more often in situations where blood glucose is low. In a literature review, Gailliot et al. show that lapses of cognitive, behavioral, and emotional control are systematically associated with hypoglycemia and with hypoglycemic individuals. Thought suppression, emotional inhibition, attention control, and refraining from criminal behavior are all impaired in individuals with low blood glucose (Gailliot & Baumeister, 2007).
Matthew T. Gailliot and Roy F. Baumeister of Florida State University wrote a pretty good review of the relationship between glucose and willpower in 2007: The Physiology of Willpower: Linking Blood Glucose to Self-Control (PDF format).
Attention control is a pervasive and basic form of self-control. Executive control can dictate and choose what information is noticed and processed by the mind, as opposed to letting salience and the environment dictate. In a review of the literature on self-regulation, Baumeister et al. (1994) observed that attention appears to be the front line of control for many problem behaviors, and loss of attention control is a precursor to self-control failure in many different domains. Controlling attention requires self-control because attention automatically orients toward various stimuli in the environment (e.g., Shiffrin & Schneider, 1977). It takes self-control to override these automatic responses so as instead to remain focused on any single task or stimulus (Muraven & Baumeister, 2000). Consistent with the idea that attention control depletes the same energy needed for self-control, research has shown that people are less able to exert self-control after having controlled their attention and that they are less able to control their attention after having exerted self-control (e.g., Vohs et al., 2005; Gailliot & Baumeister, in press; Gailliot et al., 2006; Vohs & Faber, 2004).
One study found evidence consistent with the idea that attention control requires a relatively large amount of glucose (Gailliot, Baumeister, et al., 2007). Participants watched a 6-min video that required them either to control their attention by ignoring certain stimuli appearing on the screen or to watch the video as they would normally, thus not trying to control their attention. Among participants who had controlled their attention, glucose dropped after having watched the video. Among participants who watched without controlling attention, glucose levels did not change. The implication is that controlling attention resulted in the consumption of relatively large amounts of glucose. To be sure, it is more difficult to watch the video while controlling attention than while watching it without such effort, and participants’ ratings of task difficulty confirmed that difference. Self-control requires effort, and that makes it difficult. From this and similar studies, it is not easy to infer whether the differences are due to self-control per se or to the general difficulty of performing the task.
They report that diabetics and those with hypoglycemia have poorer attention control.
If you want to maintain good attention control and willpower then eat your breakfast. Also, eat low glycemic index food.
A New York Times article reports on lots of recent research which demonstrates human minds become much less efficient when multi-tasking.
Several research reports, both recently published and not yet published, provide evidence of the limits of multitasking. The findings, according to neuroscientists, psychologists and management professors, suggest that many people would be wise to curb their multitasking behavior when working in an office, studying or driving a car.
These experts have some basic advice. Check e-mail messages once an hour, at most. Listening to soothing background music while studying may improve concentration. But other distractions — most songs with lyrics, instant messaging, television shows — hamper performance. Driving while talking on a cellphone, even with a hands-free headset, is a bad idea.
I've yet to get a cell phone in large part because I view it as an automated interrupt generator device. Ditto for a Blackberry. I sometimes wear earplugs at work so as to avoid hearing conversations. I need concentration time to get real work done. Oh, and voice mail? Why give someone a place to record a message to tell you something that they'll just repeat once they finally get ahold of you? I know people who do exactly that. They are driven by a deep instinctive need to communicate and be heard.
What I want: An automated device that will take spoken language in an email or a conversation and turn it into text. One can read text in less time than it takes to listen to a slow voice mail or, for that matter, a slow person in a meeting. What, you mean zone out on what someone is saying and then read the transcript? Yes, exactly. Besides, such a device could provide a useful history of what got said and agreed to in a meeting.
Younger people lose their brain speed advantage when interrupted.
Recently completed research at the Institute for the Future of the Mind at Oxford University suggests the popular perception is open to question. A group of 18- to 21-year-olds and a group of 35- to 39-year-olds were given 90 seconds to translate images into numbers, using a simple code.
The younger group did 10 percent better when not interrupted. But when both groups were interrupted by a phone call, a cellphone short-text message or an instant message, the older group matched the younger group in speed and accuracy.
How much productivity is lost by instant messaging and cell phone conversations? I watch people all the time fielding calls in office settings from someone who would not otherwise be calling them if the caller just didn't have a cellphone. The vast bulk of such conversations (at least from the side I hear) could have happened much later or never at all.
At Microsoft each interrupt costs an extra 15 minutes before a full return to work once the interrupt is handled.
In a recent study, a group of Microsoft workers took, on average, 15 minutes to return to serious mental tasks, like writing reports or computer code, after responding to incoming e-mail or instant messages. They strayed off to reply to other messages or browse news, sports or entertainment Web sites.
“I was surprised by how easily people were distracted and how long it took them to get back to the task,” said Eric Horvitz, a Microsoft research scientist and co-author, with Shamsi Iqbal of the University of Illinois, of a paper on the study that will be presented next month.
Years ago (15 or 20?) Tom DeMarco and Timothy Lister published a book called Peopleware arguing that workplace interruptions cost each computer programmer a substantial amount of time (I faintly recall 30 minutes) per interruption. They based this on consulting work they'd done in companies where they did tests of programmer coding speed combined with having programmers keep logs of every time they got interrupted. In DeMarco and Lister's experience some companies generated so many interruptions (e.g. phone calls, nearby conversations, intercom announcements) that in the course of a normal workday the programmers had 0 hours of useful work time.
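The arithmetic behind that "0 hours of useful work" claim is simple enough to sketch. Here is a minimal illustration, assuming (as I faintly recall from Peopleware) roughly 30 minutes of lost concentration per interruption; the figures are illustrative, not measurements:

```python
# Illustrative arithmetic only: the 30-minute recovery figure is a rough
# recollection of DeMarco and Lister's claim, not a measured value.
def deep_work_hours(workday_hours, interruptions, recovery_minutes=30):
    """Hours of focused time left after each interruption costs a recovery period."""
    lost = interruptions * recovery_minutes / 60.0
    return max(0.0, workday_hours - lost)

# With 16 interruptions in an 8-hour day, nothing useful remains:
print(deep_work_hours(8, 16))   # 0.0
# Even 6 interruptions eat 3 of 8 hours:
print(deep_work_hours(8, 6))    # 5.0
```

At two interruptions an hour the whole day evaporates, which matches DeMarco and Lister's observation about interruption-heavy companies.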
Interrupts are bad. A study based on surveys and interviews with office workers put the cost of interrupts at $650 billion per year.
The productivity lost by overtaxed multitaskers cannot be measured precisely, but it is probably a lot. Jonathan B. Spira, chief analyst at Basex, a business-research firm, estimates the cost of interruptions to the American economy at nearly $650 billion a year.
That's a lot of potatoes.
What we need: Multiple phone numbers (or perhaps an extra digit on each phone number) with priorities assigned on each cell phone. Let callers send a priority signal with each call. If you are busy you could ignore priority levels below 1 or below 2. For example, parents could tell their kids to use 1 when it is a matter of life and death, 2 when they are stuck somewhere, and so on.
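The filtering logic for that scheme is trivial, which is part of the appeal. Here is a minimal sketch; the function name and the convention that lower numbers mean more urgent (1 = life and death, as in my example) are my own hypothetical choices:

```python
# Hypothetical sketch of priority-signaled calls: the caller attaches a
# priority digit and the phone rings only when that priority is at or
# above the owner's current threshold (lower number = more urgent).
def should_ring(call_priority: int, busy_threshold: int) -> bool:
    """Ring only for calls at least as urgent as the current threshold."""
    return call_priority <= busy_threshold

# Deep in work: accept only priority 1 ("life and death"):
print(should_ring(1, 1))  # True
print(should_ring(2, 1))  # False
# Relaxed mode: also let through "stuck somewhere" (priority 2) calls:
print(should_ring(2, 2))  # True
```

The hard part isn't the code, of course; it's getting carriers and callers to adopt the convention.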
We also need more structure in communications so that responses get captured and sorted into categories and fields. Also, I really want to be able to send an email with a marker that says "Bring this email back to my attention if it does not produce a response from the first 3 people on the To: list in N days." Make it so that one can more easily track whether tasks are getting done and answers are getting generated.
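The core of that "bring it back to my attention" marker is just a deadline check. A minimal sketch of the rule, with the function name and boolean flag invented for illustration:

```python
# Hypothetical sketch of the follow-up marker: resurface an email when no
# reply from the first three To: recipients has arrived within N days.
from datetime import date, timedelta

def needs_followup(sent: date, today: date, n_days: int,
                   first_three_replied: bool) -> bool:
    """True when the N-day deadline has passed with no qualifying reply."""
    return (not first_three_replied) and today >= sent + timedelta(days=n_days)

# Sent Jan 2 with a 5-day window and no replies:
print(needs_followup(date(2008, 1, 2), date(2008, 1, 9), 5, False))  # True
print(needs_followup(date(2008, 1, 2), date(2008, 1, 4), 5, False))  # False
```

A mail client could run this check on every flagged thread once a day and move the stale ones back to the top of the inbox.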
Also see my previous posts Brains Can Not Process Two Tasks In Parallel and Human Brains Limited Parallel Processing Capabilities.
Vanderbilt University neuroscientists Paul E. Dux and René Marois used functional magnetic resonance imaging of human brains to discover how the brain responds to the need to perform two tasks at once. They discovered parts of the brain that are bottlenecks which serialize the processing of information for multiple tasks.
To overcome this limitation, Dux and Marois rapidly sampled brain activity using fMRI while subjects were performing two demanding tasks. Evaluation of the data produced by this rapid sampling method allowed them to characterize the temporal pattern of activity in specific brain areas.
The two tasks consisted of pressing the appropriate computer key in response to hearing one of eight possible sounds and uttering an appropriate syllable in response to seeing one of eight possible images. Different senses and motor responses were enlisted in order to ensure that any interference between the two tasks was not specific to a particular sensory or motor modality, but instead originated at a central information-processing bottleneck.
The results revealed that the central bottleneck was caused by the inability of the lateral frontal and prefrontal cortex, and also the superior frontal cortex, to process the two tasks at once. Both areas have been shown in previous experiments to play a critical role in cognitive control.
"We determined these brain regions responded to tasks irrespective of the senses involved, they were engaged in selecting the appropriate response, and, most importantly, they showed 'queuing' of neural activity--the neural response to the second task was postponed until the response to the first was completed," Dux said.
"Neural activity seemed to be delayed for the second task when the two tasks were presented nearly simultaneously – within 300 milliseconds of each other," Marois said. "If individuals have a second or more between tasks, we did not see this delay."
What I'd like to know: Do higher IQ people have an enhanced ability to process two tasks at once? Or do they serialize just as strictly but finish processing each task more quickly?
When you do two tasks at once your response to stimuli for each task gets slowed for as much as a second. So all those people driving around with cell phones are at greater risk for causing an accident.
"I'm Australian, and it's illegal there, so I'm trained not to," Dux said. "Even so, I would never do it. Dual-task costs can be up to a second, and that's a long time when you're traveling at 60 miles per hour."
It would be really handy to have a greater capacity to process two or three or more problems at once. It would also be really handy to have a much larger short term memory. One of the challenges of future post-human genetic engineering is to develop DNA sequences that code for brains that can handle more problems at once.
I am expecting the use of genetic engineering on offspring to make easy the expansion of short term memory working set size. Plenty of people have bigger memory working set sizes. We'll be able to compare their genetic sequences to those of lesser minds and identify the best genes to tweak for bigger short term memories. But will we discover genetic variations that increase the ability of human minds to do many tasks at once? Do such variations exist in the human population? Or are the architectural changes needed to allow parallel processing on major problems too big for such genetic variations to come into existence naturally?
If someone, somewhere hadn't thought to make team uniforms the same color, we might be stuck watching NBA finals or World Cup soccer matches with only two players and a ref.
It is that color coding, Johns Hopkins University psychologists have now demonstrated, that allows spectators, players and coaches at major sporting events to overcome humans' natural limit of tracking no more than three objects at a time.
"We've known for some time that human beings are limited to paying attention to no more than three objects at any one time," said Justin Halberda, assistant professor of psychological and brain sciences in the university's Zanvyl Krieger School of Arts and Sciences.
"We report the rather surprising result that people can focus on more than three items at a time if those items share a common color," he said. "Our research suggests that the common color allows people to overcome the usual limit, because the 'color coding' enables them to perceive the separate individuals as a single set."
Employers could increase the productivity of employees by reducing the cognitive overload of needless distractions. Instead, office workers have to listen to more than 3 conversations at a time over cubicle walls.
If you know which color you are supposed to keep track of, your mind can focus on that quite well.
Knowing that color is the key to making sense of large numbers of objects "informs our understanding of the structure of visual cognition and reveals that humans rely on early visual features to attend large sets in parallel," Halberda said. "Ongoing work in our lab is revealing which other features humans might use."
Halberda and Feigenson reached their conclusion by asking Johns Hopkins undergraduate volunteers to view a series of colored dots flashing onto a black computer screen. The subjects were asked to estimate the number of dots in one randomly selected set on each trial.
Half the time, the subjects were told in advance whether to pay attention to, say, just the red dots or just the green ones. Otherwise, the subjects were required to store as much information as possible in visual memory from what they saw briefly onscreen.
Some sets contained as many as 35 dots and subjects viewed the sets for less than one half second, which Halberda points out "is too short to allow the subjects to actually count the dots." Subjects were very accurate when told in advance which set to pay attention to, regardless of how many different colors were present, revealing that humans are able to select a set that shares a common color. Subjects were also very accurate at enumerating a color subset when asked after the flash of dots so long as the flash contained three or fewer colors.
"We found that humans are unable to store information from more than three sets at once," Halberda said. "This places an important constraint on how humans think about and interact with sets in the world."
Just forget about spectator sports with 4 teams playing against each other at once. 3 teams competing at once are within the limits of what human spectators can track. But more than that does not work.
Maybe the real limit is 2 teams because people also have to keep track of the set of things they deem necessary for watching sports. There'd be no mental room for beer and food if 3 teams competed.
So then do space aliens with greater mental capacities routinely watch sports involving a dozen teams? Also, when human minds get genetically engineered to track more than 3 sets of things at once will human sports also change to bring more teams onto the field?
It's readily apparent that handling two things at once is much harder than handling one thing at a time. Spend too much time trying to juggle more than one objective and you'll end up wanting to get rid of all your goals besides sleeping. The question is, though, what makes it so hard to process two things at once?
Two theories try to explain this phenomenon: "passive queuing" and "active monitoring." The former says that information has to line up for a chance at being processed at some focal point of the brain, while the latter suggests that the brain can process two things at once – it just needs to use a complicated mechanism to keep the two processes separate. Recent research from MIT points to the former as an explanation.
Yuhong Jiang, Rebecca Saxe and Nancy Kanwisher, in a study to be published in the June issue of Psychological Science, a journal of the American Psychological Society, examined the brain activity involved in multitasking. They gave people two simple tasks. Task one was identifying shapes; for some subjects, task two was identifying letters, for others it was identifying colors. The subjects were forced to switch from one task to the other in either one and a half seconds or one tenth of a second. When they had to switch faster, subjects would take as much as twice as long to respond as when switching more slowly.
Using MRI technology, Jiang, Saxe and Kanwisher examined subjects' brain activity while performing these tasks. They observed no increase in the sort of activity that would be involved in keeping two thought processes separate when subjects had to switch faster. This suggests that there are no complicated mechanisms that allow people to perform two tasks at once. Instead, we have to perform the next task only after the last one is finished.
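The passive-queuing account above can be captured in a toy model: one central processor handles responses serially, so a second stimulus arriving before the first response is finished has to wait its turn. The 300 ms processing time below is a placeholder chosen to echo the Dux and Marois numbers quoted earlier, not a parameter from either study:

```python
# Toy model of "passive queuing" at a central bottleneck. Times are in
# milliseconds; the per-task processing duration is made up for illustration.
def response_times(gap_ms, processing_ms=300):
    """Return (RT1, RT2) when two stimuli arrive gap_ms apart and each
    needs processing_ms of serial central processing."""
    rt1 = processing_ms
    start2 = max(gap_ms, rt1)              # second task waits for the bottleneck
    rt2 = start2 + processing_ms - gap_ms  # measured from its own stimulus onset
    return rt1, rt2

print(response_times(100))   # (300, 500): second response delayed by queuing
print(response_times(1000))  # (300, 300): no delay with a long gap
```

The model reproduces the qualitative pattern: responses to the second task slow down only when the two stimuli arrive close together, and the delay vanishes once the gap exceeds the processing time of the first task.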
I am looking forward to the day when it becomes possible to genetically engineer minds to have bigger working memories and other cognitive enhancements. Given that some people already have larger working memories than others, once we find out the cause of that difference we will probably be able to genetically engineer offspring to have bigger working memories, and perhaps do the same for ourselves. Abilities that do not already exist in the population (such as some types of parallel processing) will be more difficult to add. But if enhancements for parallel processing could be developed they would be very handy. The ability to do productive work while carrying on a demanding conversation would be particularly useful.