After a plane crash, where should the survivors be buried?
If you are considering where the most appropriate burial place would be, you are not alone. Scientists have found that around half the people asked this question answer it as if they were being asked about the victims, not the survivors.
Similarly, when asked "Can a man marry his widow's sister?" most people answer "yes" - effectively answering that it would indeed be possible for a dead man to marry his bereaved wife’s sister.
It is too much work to scan carefully for errors in every sentence we read and hear all day. Our sentence interpretation circuitry probably does some sort of comparison of the sentence against competing meanings and uses some words to influence the meaning assigned to other words. Our minds arrive at interpretations that make the definitions assigned to individual words fit the context of the words around them. So "his widow's sister" gets reinterpreted as something like "his dead wife's sister": widow and widower involve someone dying, and the man is assumed to be still alive since a question about his intentions is being asked.
EEG scans provide evidence suggesting that when we fall for these illusions our brains do not register the errors at all.
What makes researchers particularly interested in people's failure to notice words that don't actually make sense, so-called semantic illusions, is that these illusions challenge traditional models of language processing, which assume that we build our understanding of a sentence by deeply analysing the meaning of each word in turn.
Instead semantic illusions provide a strong line of evidence that the way we process language is often shallow and incomplete.
Professor Leuthold at the University of Glasgow led a study using electroencephalography (EEG) to explore what is happening in our brains when we process sentences containing semantic illusions.
By analysing the patterns of brain activity when volunteers read or listened to sentences containing hard-to-detect semantic anomalies - words that fit the general context even though they do not actually make sense - the researchers found that when a volunteer was tricked by the semantic illusion, their brain had not even noticed the anomalous word.
Semantic illusion experiments I'd like to see: first test a large number of people for IQ and then test their ability to detect semantic illusions. Do smarter people detect semantic illusions at a higher rate? Do some people have very intensive sentence interpretation machinery that enables them to detect semantic illusions at a rate disproportionate to their IQ? Do any subcategories of autistics have enhanced ability to detect semantic illusions?
Even if you could reallocate neurons in your brain to give yourself an enhanced ability to detect semantic illusions, it is not clear to me that's the best way to spend your neurons. If semantic illusions aren't causing you much misunderstanding, then assigning more neurons to spatial reasoning, mathematical calculation ability, or larger working memory might make more sense. Personally, I'd opt for larger working memory if I could enhance something in my brain. I want a bigger copy-and-paste buffer and stacks for putting thoughts onto when I get interrupted.
An excellent book by Daniel Kahneman, Thinking, Fast and Slow, has brought into mainstream discussion the insights that psychological researchers have developed about the automated subconscious mind (called system 1 in Kahneman's book) versus the rational conscious mind (system 2). We make a lot of mistakes by relying on the (rather flawed) heuristics that system 1 uses to very rapidly reach conclusions about problems the mind tries to solve. We could perform more effectively if we could better identify when we should put system 2 to work and if we could become more aware of when system 1 is basically planting ideas that system 2 incorrectly decides to accept.
But we do not have the mental capacity to solve all problems using system 2. We train our minds to apply techniques automatically (e.g. you don't pay that much attention to tying your shoe laces). We basically drill in skills to allow us to do things below the level of the conscious mind and use habits (see Charles Duhigg's new book The Power of Habit, which is also on my Kindle waiting to be read). Our many habits help to lighten our cognitive load so the conscious mind can (hopefully) focus mainly on what it is most needed for.
Turns out, there's some evidence that for some types of guesswork system 1 actually does a better job than system 2 at predicting what will happen.
The latest demonstration of this effect comes from the lab of Michael Pham at Columbia Business School. The study involved asking undergraduates to make predictions about eight different outcomes, from the Democratic presidential primary of 2008 to the finalists of American Idol. They forecast the Dow Jones and picked the winner of the BCS championship game. They even made predictions about the weather.
Here’s the strange part: although these predictions concerned a vast range of events, the results were consistent across every trial: people who were more likely to trust their feelings were also more likely to accurately predict the outcome.
When to trust your intuition? Click thru and read the details on that. It is a very important question. A related question: how to train yourself so your emotions provide better-quality signals about what to do?
Habits seem pretty similar to system 1, but maybe not always the same thing. Or, rather, system 1 might be many subsystems, some of which implement habits. When to use habits? What habits to develop? Which techniques to learn to enable system 2 to catch and correct system 1's bigger mistakes? These are the topics of cognitive research that I've become interested in. Given that our minds are flawed and also limited in capacity, how do we develop them to compensate for our flaws while making more effective use of the faster system 1 cognitive machinery?
For some people, the glass is always half full. Even when a football fan's team has lost ten matches in a row, he might still be convinced his team can reverse its run of bad luck. So why, in the face of clear evidence to the contrary, do some people remain so optimistic about the future?
In a study published today in Nature Neuroscience, researchers at the Wellcome Trust Centre for Neuroimaging at UCL (University College London) show that people who are very optimistic about the outcome of events tend to learn only from information that reinforces their rose-tinted view of the world. This is related to 'faulty' function of their frontal lobes.
People's predictions of the future are often unrealistically optimistic. A problem that has puzzled scientists for decades is why human optimism is so pervasive, when reality continuously confronts us with information that challenges these biased beliefs.
"Seeing the glass as half full rather than half empty can be a positive thing – it can lower stress and anxiety and be good for our health and well-being," explains Dr Tali Sharot. "But it can also mean that we are less likely to take precautionary action, such as practising safe sex or saving for retirement. So why don't we learn from cautionary information?"
I hear Eric Idle singing "always look on the bright side of life".
Human brains have assorted biases built into how they work that limit their ability to understand the world accurately. This is about more than just intelligence. However, I suspect genetic outliers exist who have fewer biases. If the outliers also have sufficient intelligence they make good stock market traders and good scientists.
Brain scans of children with attention-deficit/hyperactivity disorder (ADHD) have shown for the first time why people affected by the condition sometimes have difficulty in concentrating. The study, by experts at The University of Nottingham, may explain why parents often say that their child can maintain concentration when they are doing something that interests them, but struggles with boring tasks.
Using a 'Whac-a-Mole' style game, researchers from the Motivation, Inhibition and Development in ADHD Study (MIDAS) group found evidence that children with ADHD require either much greater incentives — or their usual stimulant medication — to focus on a task.
The research, funded by the Wellcome Trust, found that when the incentive was low, the children with ADHD failed to “switch off” brain regions involved in mind-wandering. When the incentive was high, however, or they were taking their medication, their brain activity was indistinguishable from that of a typically developing non-ADHD child.
So the kids are just tuned for zoning out waiting for interesting events to happen. I suspect this tendency was selected for in some environments. Becoming too easily engrossed could cause a hunter to miss some prey.
How can you avoid the risk that your kid will find Phil Collins entertaining and still find a way to make ADD/ADHD kids able to learn? My modest proposal: Make versions of the most popular video games that have educational content mixed in to them.
Trying to develop video games from scratch that will be sufficiently interesting to hold the attention of someone with attention deficit disorder seems like a zero-profit herculean task. Video games routinely take tens of millions of dollars to develop. Better to take games that have already succeeded and make variations of them that teach history, vocabulary, math, and other topics. An added benefit: even non-ADD kids could learn from top-notch video games that also did some teaching.
Some Harvard and MIT researchers found that the type of object a person holds influences their judgment about resumes, stories, and other information they are asked to evaluate.
The researchers conducted a series of experiments probing how objects' weight, texture, and hardness can unconsciously influence judgments about unrelated events and situations:
- To test the effects of weight, metaphorically associated with seriousness and importance, subjects used either light or heavy clipboards while evaluating resumes. They judged candidates whose resumes were seen on a heavy clipboard as better qualified and more serious about the position, and rated their own accuracy at the task as more important.
- An experiment testing texture's effects had participants arrange rough or smooth puzzle pieces before hearing a story about a social interaction. Those who worked with the rough puzzle were likelier to describe the interaction in the story as uncoordinated and harsh.
- In a test of hardness, subjects handled either a soft blanket or a hard wooden block before being told an ambiguous story about a workplace interaction between a supervisor and an employee. Those who touched the block judged the employee as more rigid and strict.
- A second hardness experiment showed that even passive touch can shape interactions, as subjects seated in hard or soft chairs engaged in mock haggling over the price of a new car. Subjects in hard chairs were less flexible, showing less movement between successive offers. They also judged their adversary in the negotiations as more stable and less emotional.
Nocera and his colleagues say these experiments suggest that information acquired through touch exerts broad, if generally imperceptible, influence over cognition. They propose that encounters with objects can elicit a "haptic mindset," triggering application of associated concepts even to unrelated people and situations.
We believe we have more conscious control of our opinions and decisions than we really do. We are regularly doing things for reasons unknown to us. The human mind is a very flawed instrument for reasoning.
Update: If you get too worked up by tactile sensations you could always get botox treatment in order to dampen your emotions. If you can't smile or frown you can't feel all that happy or sad.
Fernbach and the other researchers explored the degree to which people are overly focused on a single cause when pursuing two fundamental kinds of thinking — predicting the likelihood of an outcome and diagnosing the causes of an outcome.
They see these two kinds of thinking as flip sides of the same coin. Predicting outcomes calls for thinking forward from the cause of the outcome, such as predicting the likelihood that someone who goes on a diet will lose weight. But offering a diagnosis involves thinking backward from an outcome to the cause, such as diagnosing whether someone who lost weight dieted.
The researchers conducted three studies with medical professionals and Brown undergraduates. Their findings: In each case, the subjects considered alternative causes when they made diagnoses, but did not do so when making predictions.
I can see a way to try to use this result to think more productively: when trying to predict the future, list some possible paths. Then for each path, imagine you are in the future and a series of events caused developments to happen along that path. Think back from this imagined future vantage point and try to identify the causes of the outcome. If you do that for each possible outcome, you might shift your mind into a more backward-looking diagnostic mode.