It is possible for a massive tsunami to hit the North American West Coast. But the United States has a tsunami warning system that includes deep ocean buoys for tsunami detection and government labs that can send out messages to trigger coastal evacuations.
As the death toll rises into the tens of thousands in Asia and the number of homeless climbs above one million, OSU experts say many of the same forces that caused this disaster are at work elsewhere on the Pacific Ocean "ring of fire," one of the most active tectonic and volcanic regions of the world.
This clearly includes the West Coast of the U.S. and particularly the Pacific Northwest, which sits near the Cascadia Subduction Zone. Experts believe, in fact, that it was a subduction zone earthquake of magnitude 9 – almost identical in power to the sub-sea earthquake that struck Asia on Monday – that caused a massive tsunami around the year 1700 that caused damage as far away as Japan. And the great Alaska earthquake in 1964 caused waves that swept down the Northwest coast, causing deaths in Oregon and northern California.
Robert Yeats, professor emeritus of geosciences at OSU, agrees that the reason for the great loss of life in Sri Lanka, India, and other Asian countries was the lack of a tsunami warning system.
"That much loss of life wouldn't happen here for either a local or distant tsunami because of warning systems operated by the National Oceanic and Atmospheric Administration, with laboratories in Newport and Seattle," Yeats said. "NOAA would record the earthquake on seismographs and issue bulletins about the progress of a tsunami. Deep-ocean buoys off the Aleutian Islands and Cascadia would also record the passage of tsunami waves in the open ocean."
For a tsunami caused by a Cascadia earthquake, people on the coast would have about 15 minutes to get to high ground, Yeats said. Emergency managers of coastal counties have told residents about planning escape routes from a tsunami, and schools in Seaside, Ore. have had tsunami evacuation drills. Some coastal communities also give warnings through a siren for those vacationers who aren't keeping up with the news. Visitors to the coast should look for the blue and white tsunami warning signs on Highway 101 and some beach areas.
Enormously better methods for predicting earthquakes (assuming that it will some day be possible to predict earthquakes within a narrow range of time - a big if) could potentially be more valuable than tsunami warning systems. After all, if a future earthquake that will cause a tsunami could be predicted in advance, then the tsunami could be predicted in advance as well. The warning to move to high ground could then be announced days, weeks, months, or perhaps even years in advance. Most deaths from tsunamis would then likely come from thrill-seekers doing all sorts of risky things to experience a tsunami. Surfers would try to ride massive waves and voyeurs would try to ride out a tsunami in a tall building or tall tree. Tsunamis would produce less tragedy and more entertainment, watched via countless cameras located at every place the waves would wipe out.
Earthquakes and tsunamis are not the only natural disasters in need of better forecasting. Volcanic eruption prediction would also be of value, especially for the most extreme eruptions. If something like the Krakatoa eruption happened today, food production would be disrupted. The knowledge that the whole world would experience a temporary cooling for a few years could be used to grow and stockpile more food in advance. Also, volcanic eruptions can kill large numbers of people in the immediate vicinity. Plus, island volcanic eruptions can cause tsunamis. So volcanic eruption forecasting is a form of tsunami forecasting.
We also need an asteroid collision detection system. This is doable. It just requires the money to be spent to build more ground and satellite observatories to find all the asteroids. Also, larger asteroids can cause tsunamis. So asteroid detection is another form of tsunami forecasting.
I do not know enough geology to say how much money would have to be spent over how long a period of time to do research to develop useful earthquake or volcanic eruption forecasting capabilities. So I have no advice to offer on geological science policy. However, I think it is a lot easier to argue that not enough is being done to detect large asteroids that are on collision courses with Earth. We have the technology needed to discover asteroids at a much faster rate. So why not delay space exploration initiatives such as the return to the Moon and the trip to Mars and spend that money on asteroid detection satellites and ground observatories? Asteroid study is a valuable way to develop more understanding of the development of solar systems. So the money spent would yield scientific insights and also possibly some day save hundreds of millions or even billions of lives.
The genes that regulate brain development and function evolved much more rapidly in humans than in nonhuman primates and other mammals because of natural selection processes unique to the human lineage. Researchers reported their findings in the cover article of the Dec. 29, 2004, issue of the journal Cell.
"Humans evolved their cognitive abilities not due to a few accidental mutations, but rather, from an enormous number of mutations acquired through exceptionally intense selection favoring more complex cognitive abilities," said lead scientist Bruce Lahn, an assistant professor of human genetics at the University of Chicago and an investigator at the Howard Hughes Medical Institute.
"We tend to think of our own species as categorically different – being on the top of the food chain," Lahn said. "There is some justification for that."
From a genetic point of view, some scientists thought that human evolution might be a recapitulation of the typical molecular evolutionary process, he said. For example, the evolution of the larger brain might be due to the same processes that led to the evolution of a larger antler or a longer tusk. It's just a particular feature that is exaggerated in the human species.
"We've proven that there is a big distinction. Human evolution is, in fact, a privileged process because it involves a large number of mutations in a large number of genes," Lahn said. "To accomplish so much in so little evolutionary time – a few tens of millions of years – requires a selective process that is perhaps categorically different from the typical processes of acquiring new biological traits."
Generally speaking, the higher up the evolutionary tree, the bigger and more complex the brain becomes (after scaling to body size). But this moderate trend became a huge leap during human evolution. The human brain is exceptionally larger and more complex than the brains of nonhuman primates, including man's closest relative, the chimpanzee.
One way to study evolution at the molecular level is to examine changes of when and where proteins are expressed in the body. "But there are many challenges to study the evolution of protein expression. Instead, we chose to track structural changes in proteins," said graduate student Eric Vallender, lead author of the article along with former graduate student Steve Dorus, both of Lahn's laboratory.
Researchers examined the DNA of 214 genes involved in brain development and function in four species: humans, macaques (an Old World monkey), rats and mice. (Primates split from rodents about 80 million years ago; humans split from macaques 20 million to 25 million years ago; and rats split from mice 16 million to 23 million years ago.)
For each of these brain-related genes, they identified changes that altered the structure of the resulting protein, as well as those that did not affect protein structure. Only those genetic changes that alter protein structure are likely to be subject to evolutionary selection, Lahn said. Changes in the gene that do not alter the protein indicate the overall mutation rate – the background of random mutations from which evolutionary changes arise, known as the gene's molecular clock. The ratio of the two types of changes gives a measure of the pressure of natural selection driving the evolution of the gene.
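The selection measure described above can be sketched as a toy calculation. This is an illustrative sketch only - the function name and all the counts below are hypothetical, and real analyses of this kind (often called Ka/Ks or dN/dS tests) use codon-based statistical models - but it captures the logic: normalize each class of change by the number of sites where it could occur, then compare the protein-altering rate against the silent background rate.

```python
# Toy sketch of the selection measure described in the text: compare the
# rate of protein-altering (nonsynonymous) changes against the rate of
# silent (synonymous) changes, each normalized by the number of sites
# where that kind of change could occur. All numbers here are made up.
def selection_ratio(nonsyn_changes, nonsyn_sites, syn_changes, syn_sites):
    ka = nonsyn_changes / nonsyn_sites  # protein-altering rate
    ks = syn_changes / syn_sites        # background "molecular clock" rate
    return ka / ks  # ~1 suggests neutral drift; >1 suggests positive selection

# Hypothetical counts for a single gene:
print(selection_ratio(30, 600, 10, 200))  # 1.0 (neutral in this toy case)
```

A ratio well above 1 for a gene means protein-altering changes accumulated faster than the neutral background would predict, which is the signature of selection the researchers looked for.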
Researchers found that brain-related genes evolved much faster in humans and macaques than in rats and mice. Additionally, the human lineage has a higher rate of protein changes than the macaque lineage. Similarly, the human lineage has a higher rate than the chimpanzee lineage.
"For brain-related genes, the amount of evolution in the lineage leading to humans is far greater than the other species we have examined," Lahn said. "This is based on an extensive set of genes."
They argue that a significant fraction of genes in the human genome were impacted by this selective process. The researchers estimate there may have been thousands of mutations in thousands of genes that contributed to the evolution of the human brain. This "staggering" number of mutations suggests that the human lineage was driven by an intense selection process.
The study also revealed two dozen "outliers" – those genes with the fastest evolutionary rates in the human lineage. Of these, 17 are involved in controlling brain size and behavior, arguing that genes that affect brain size and behavior are preferential targets of selection during human evolution. Lahn and his colleagues now are focusing on these outlier genes, which may reveal more about how the human brain became bigger and better.
For two of these outliers, ASPM and Microcephalin, previous work from Lahn's group already has implicated them in the evolutionary enlargement of the human brain. Loss-of-function mutations in either ASPM or Microcephalin cause microcephaly in humans – a severe reduction in the size of the cerebral cortex, the part of the brain responsible for planning, abstract reasoning and other higher cognitive function.
One of the study's major surprises is the relatively large number of genes that have contributed to human brain evolution. "For a long time, people have debated about the genetic underpinning of human brain evolution," said Lahn. "Is it a few mutations in a few genes, a lot of mutations in a few genes, or a lot of mutations in a lot of genes? The answer appears to be a lot of mutations in a lot of genes. We've done a rough calculation that the evolution of the human brain probably involves hundreds if not thousands of mutations in perhaps hundreds or thousands of genes -- and even that is a conservative estimate."
It is nothing short of spectacular that so many mutations in so many genes were acquired during the mere 20-25 million years of time in the evolutionary lineage leading to humans, according to Lahn. This means that selection has worked "extra-hard" during human evolution to create the powerful brain that exists in humans.
Lahn further speculated that the strong selection for better brains may still be ongoing in the present-day human populations. Why the human lineage experienced such intensified selection for better brains but not other species is an open question. Lahn believes that answers to this important question will come not just from the biological sciences but from the social sciences as well. It is perhaps the complex social structures and cultural behaviors unique in human ancestors that fueled the rapid evolution of the brain. "This paper is going to open up lots of discussion," Lahn said. "We have to start thinking about how social structures and cultural behaviors in the lineage leading to humans differed from that in other lineages, and how such differences have powered human evolution in a unique manner. To me, that is the most exciting part of this paper."
Though I am skeptical that better brains are still being selected for today. In industrialized countries smarter people have fewer kids. So how are better brains still being selected for? I don't see it.
His research and speculations put Lahn into politically incorrect territory. The idea that human brains are still under selective pressure (which is extremely obvious in my view) opens up the possibility that different human environments have been and still are selecting for cognitively different humans. If humans are being selected to be better suited for each environmental niche that humans have occupied and currently occupy, then humans have less in common with each other than radical egalitarians would like us to believe.
These brain genes that Lahn found to have undergone so many changes leading to humans are excellent candidates for genes that cause differences in intelligence between humans. My guess is that within 10 years DNA sequencing costs will be so low that large scale comparisons of Lahn's genes of interest will be possible using humans with different measured levels of IQ. Then we will find out which genetic variations cause higher levels of intelligence and also different patterns and styles of thinking. For example, I fully expect genetic variations that cause conservative, left-liberal, and other categories of political attitudes will be found.
Also see my previous posts on Bruce Lahn's research on human brain genetics and brain evolution: Researchers Find Key Gene For Evolution Of Human Intelligence and Human Brain Size Regulating Gene To Be Inserted Into Mice and Genetic Causes Of Infidelity Found In Twins Study.
Update: Godless Capitalist has more coverage of this story.
Science writer James Oberg reviews the cases where humans have inadvertently caused earthquakes (yes, ways to trigger earthquakes have been discovered accidentally!) and examines the question of whether the damage from large earthquakes could be avoided by causing larger numbers of smaller earthquakes as a way to release tension in faults.
This theory "comes up every few years [but] ... is unfortunately fatally flawed in several ways," according to William Ellsworth, chief scientist of the U.S. Geological Survey's earthquake hazard team.
"First," he said in an e-mail, "it takes 1,000 earthquakes of [magnitude] 6 to release the tectonic forces that go into a [magnitude] 8.” But since those smaller quakes are still serious, it would be better to restrict the force of induced quakes to a magnitude 4 -- which would mean inducing a million smaller earthquakes in order to avoid suffering one giant one.
“Multiply that number by any reasonable estimate of what it would cost to induce one of them, and you are looking at costs far in excess of the expected losses” of the big quake, Ellsworth said.
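Ellsworth's 1,000-to-one and million-to-one figures follow from the standard energy scaling of earthquake magnitudes: radiated seismic energy grows by a factor of about 10^1.5 (roughly 31.6x) per whole magnitude unit. A quick sketch of that arithmetic:

```python
# Seismic energy scales as 10**(1.5 * M), so each whole magnitude step
# releases about 31.6 times more energy than the step below it.
def quakes_needed(big_m, small_m):
    """Number of quakes of magnitude small_m releasing the same total
    energy as one quake of magnitude big_m."""
    return round(10 ** (1.5 * (big_m - small_m)))

print(quakes_needed(8, 6))  # 1000
print(quakes_needed(8, 4))  # 1000000
```

Two magnitude units of difference means a factor of 10^3 = 1,000 in energy, and four units means 10^6 = one million, exactly the numbers Ellsworth cites.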
An additional problem with trying to initiate a small earthquake is that it might trigger an even larger quake.
Even if we accept Ellsworth's arguments there may still be some benefit (leaving aside the military angle) to be had from the ability to trigger earthquakes. How? Trigger the really big earthquakes as a way to make the timing of the most dangerous earthquakes more predictable.
Think about earthquakes from the standpoint of both human lives lost and economic disruption. If earthquake prediction methods could advance to the point where, say, an 8+ Richter scale earthquake could be known to be coming to Los Angeles in 5 years, there would be considerable value in being able to make that earthquake happen on a particular weekend. People could stay out of the weakest buildings, away from the bridges most likely to collapse, and otherwise away from anything that might kill them. Also, equipment and goods could be protected from destruction. Supermarkets alone could save a great deal of money just by taking glass containers off of shelves and putting cardboard around the bottles before a scheduled quake.
Rescue and repair workers could be on duty with vacations cancelled and extra workers brought in from other areas. Workers could be geared up with lots of extra equipment ordered in advance to fix electric power lines and other key structures that would fail in an earthquake. Freeways could be empty. No dangerous chemical rail cargoes would be passing through populated areas when the big quake hit. No jumbo jet would set down on a pitching tarmac and suffer a landing gear collapse. Weak water reservoirs could have their water levels lowered in advance. People could have extra bottled water and flashlights with fresh batteries and sleep outside in tents or cars. Tourists could stay away. Though perhaps so many thrill-seekers would show up to experience a quake that tourism would actually surge.
The ability to schedule earthquakes would allow them to be scheduled for the most convenient time of year. My guess is that the ideal time for an earthquake would be in the early summer (say mid June in the Northern Hemisphere) before temperatures get too hot. Then the days would be long and electricity (which may be knocked out in many areas) would be less needed for lighting and it would be easier to spend extended periods of time outdoors. In Southern California mid June would also be well away from the rainy season. So extra time spent outside would be easier to manage. In many different ways civilization could be braced for a really big quake scheduled to occur at some point over a fairly short range of time.
To make the scheduled triggering of large quakes a net benefit requires the ability to narrow the expected time of the next earthquake down to a period close enough in the future that people would accept the inevitability of the quake. There is no point in inflicting an earthquake on ourselves now if the quake otherwise wasn't going to happen for 20 years. Twenty years from now many buildings that would be wrecked by an earthquake today will either be reinforced or torn down. We will have better roads, better emergency response capabilities, and the like. Costs delayed will be costs avoided altogether in many cases.
In order to make earthquake triggering worthwhile another crucial capability needed would be the ability to trigger a quake to occur over a fairly short period of time. No one wants to be told that some time in the next 6 months an 8.0 quake is coming. Disrupting our lives for 6 months would be far too costly.
Note that precise prediction of the time of natural earthquakes would provide the same benefits as precise prediction of the time of human-triggered earthquakes but without the political and legal problems that come from human-triggered earthquakes. I have no idea whether geological scientists will ever achieve the ability to predict fairly exact times of natural earthquakes or of human-triggered earthquakes. My guess is that any such precision of prediction or of control lies decades into the future.
T cells are the weakest link in the immune systems of older people, according to a report by Eaton and colleagues in the December 20 issue of The Journal of Experimental Medicine. The authors show that old CD4 "helper" T cells cannot provide the stimulatory signals to B cells that prompt them to make antibodies. Old and young B cells, however, are equally effective if helped by young CD4 T cells. The authors think this may help explain why immunizations are less effective in the elderly.
B cell activation and antibody production are known to be impaired with age in both mice and humans, but it was not clear whether this defect is intrinsic to B cells or is a by-product of declining CD4 T cell helper functions.
Eaton and colleagues transferred young or old CD4 T cells into mice lacking their own CD4 T cells to determine which cells are responsible for the age-related decline in B cell function. They show that B cell activation and antibody production can be restored in old mice if they are infused with young CD4 T cells prior to immunization. On the flip side, young mice infused with old CD4 T cells developed antibody defects, even though their B cells were young. In other words, old B cells function like young ones if provided with signals from young helper T cells. While the mechanism is not completely clear, the authors show that old T cells can travel to the right location in the spleen of the mice, but have fewer of the surface proteins that send stimulatory signals to B cells.
We need the ability to rejuvenate T cells. We also need the ability to rejuvenate the thymus gland that plays a crucial role in immune response.
See my previous post Harmful Reduction In Immune System Cell Diversity With Age. Also, chronic stress more rapidly ages at least part of the immune system. See my post Chronic Stress Accelerates Aging As Measured By Telomere Length.
Mark Clayton of the Christian Science Monitor has written an excellent analysis of the carbon dioxide emissions of planned coal-powered electric generation plants.
China is the dominant player. The country is on track to add 562 coal-fired plants - nearly half the world total of plants expected to come online in the next eight years. India could add 213 such plants; the US, 72.
The rapid industrialization of China and to a lesser extent India, rising natural gas prices, concerns about energy security, and large coal reserves in the United States and China are all driving the shift toward the construction of electricity generators that burn coal.
It is worth noting that the Kyoto treaty does not place any CO2 emissions restrictions on India or China.
Without the development and deployment of CO2 sequestration technology these coal plants alone will boost world CO2 emissions by 14% in a mere 8 years.
Without such technology, the impact on climate by the new coal plants would be significant, though not entirely unanticipated. They would boost CO2 emissions from fossil fuels by about 14 percent by 2012, Schmidt estimates.
One thing that is striking about these numbers is the scale of world economic growth. Coal-fired electric plant construction alone in only 8 years can boost worldwide CO2 emissions by 14%. There is so much money sloshing around in the world that in a relatively short period of time a few hundred gigawatts of electric power plants can be constructed.
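As a sanity check on the scale involved, here is a back-of-envelope calculation. The plant counts come from the article, but the average plant size, capacity factor, emission factor, and world emissions baseline are all my own rough assumptions, so this only shows that the 14 percent estimate is the right order of magnitude, not that it is exact.

```python
# Back-of-envelope check of the ~14% figure. Plant counts are from the
# article; everything else below is an assumed round number.
plants = 562 + 213 + 72      # China + India + US coal plants cited above
avg_mw = 500                 # assumed average plant size in megawatts
capacity_factor = 0.75       # assumed fraction of hours at full output
kg_co2_per_kwh = 0.95        # typical for pulverized coal combustion

gw = plants * avg_mw / 1000  # total new capacity in gigawatts
tonnes_per_year = gw * 1e6 * 8760 * capacity_factor * kg_co2_per_kwh / 1000
baseline = 27e9              # rough world fossil CO2 emissions, tonnes/year

print(round(100 * tonnes_per_year / baseline))  # 10 (same order as 14%)
```

Getting roughly 10 percent from crude assumptions, against the article's 14 percent estimate, suggests the reported figure is plausible.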
The other point that strikes me is that if nuclear power were cheaper than coal (perhaps using a pebble bed modular reactor approach), then these plants would be getting built as nuclear plants and there would be no CO2 emissions concern in the first place.
Clayton comments that a US government effort named FutureGen to build a large scale technology demonstrator coal generation plant with CO2 sequestration technology is slipping its schedule due to lack of a push for it by the Bush Administration. Right he is. The schedule for the completion of the FutureGen plant has it being completed in 2018 but even that late date looks unlikely to happen.
Industry groups are becoming increasingly impatient with the US Energy Department's (DOE) handling of the FutureGen project because the government has failed to provide basic financial details and DOE's schedule for building the near-zero-emissions coal power plant is slipping.
Under DOE's plan, the department would spend $620-mil by 2018, industry would contribute $250-mil and foreign participants would kick in $80-mil toward the $950-mil facility. Part of that money would go toward a carbon sequestration system that would be used to store CO2 captured at the site.
I do not know whether we face a serious future problem with global warming. But I think it is prudent to spend the money to develop lots of energy-related technology to make it far cheaper to handle the problem should it become clear at some point in the future that CO2 emissions must be drastically decreased.
Over on the Crumb Trail blog the pseudonymous blogger back40 has reacted to my comments about how it would be cheaper to develop energy technologies that reduce CO2 emissions than to lay on a worldwide regulatory regime and he has provided details on the hundreds of trillions of dollars cost estimates made for some of the CO2 emissions reductions proposals. The underfunded DOE FutureGen proposal is a piddlingly small amount of money as compared to what it would cost to implement Kyoto. Even a very ambitious energy research effort on the scale of $10 billion per year (as Nobelist Richard Smalley argues) would still be orders of magnitude less than the costs of enforcing Kyoto starting now. Such an effort would produce cleaner technologies that would be cheaper than today's energy technologies. The market alone would move many of those technological developments into widespread use without any regulatory costs.
Even if we accept the contention of some of the global warming believers (and MIT climate scientist Richard Lindzen thinks the belief in global warming is religious and irrational) that we face a real problem from global warming in the future, that problem will not become hugely expensive to the human race for decades to come. Therefore, if we spend relatively smaller amounts of money (but still tens of billions of dollars) now on energy research, we will eventually be able to reduce CO2 emissions at orders of magnitude less cost than if we attempt to reduce CO2 emissions using today's technologies. A big energy research effort is the most rational and cost-effective response to the potential threat of global warming.
Update: The construction of coal-fired power plants is going to continue well beyond just the next 8 years discussed above. The US alone needs 1,200 300 megawatt power plants over the next 25 years.
Electricity demand will increase 53.4 percent over the next 25 years. Meeting this rising growth rate will require the construction of the equivalent of more than 1,200 new power plants of 300 megawatts each—the equivalent of about 65 plants each year. Coal will remain the largest single source of electricity—accounting for 51 percent of power generation in 2025. Clean coal technologies will help meet these needs, plus continue the decline in SO2 and NOx emissions already underway.
That need could be met by building about 330 AP1000-class 1,100 megawatt nuclear power plants (perhaps more if the nuclear plants have lower capacity utilization rates). Which would be cheaper? Coal CO2 emissions sequestration or nuclear power? Both costs are likely to drop in the future. It is hard to say which will cost less in the longer run.
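The nuclear equivalence above is simple nameplate-capacity arithmetic, which is why more reactors might be needed once capacity utilization differences are taken into account:

```python
# Nameplate-capacity check of the figures in the text. Capacity factors
# are ignored; real coal and nuclear plants run at different utilization.
coal_plants = 1200
coal_mw = 300
total_mw = coal_plants * coal_mw     # 360,000 MW of projected new capacity
nuclear_mw = 1100                    # one 1,100 MW (AP1000-class) reactor

print(total_mw)                      # 360000
print(round(total_mw / nuclear_mw))  # 327, i.e. roughly 330 reactors
```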
Scientists have discovered that a system in our brain which responds to actions we are watching, such as a dancer's delicate pirouette or a masterful martial arts move, reacts differently if we are also skilled at doing the move. The University College London (UCL) study, published in the latest online edition of Cerebral Cortex, may help in the rehabilitation of people whose motor skills are damaged by stroke, and suggests that athletes and dancers could continue to mentally train while they are physically injured.
In the UCL study, dancers from the Royal Ballet and experts in capoeira - a Brazilian martial arts form - were asked to watch videos of ballet and capoeira movements being performed while their brain activity was measured in a MRI scanner. The same videos were shown to normal volunteers while their brains were scanned.
The UCL team found greater activity in areas of the brain collectively known as the 'mirror system' when the experts viewed movements that they had been trained to perform compared to movements they had not. The same areas in non-expert volunteers' brains responded the same way regardless of which dance style they saw.
While previous studies have found that the system contains mirror neurons or brain cells which fire up both when we perform an action and when we observe it, the new study shows that this system is fine tuned to each person's 'motor repertoire' or range of physical skills. The mirror system was first discovered in animals and has now been identified in humans. It is thought to play a key role in helping us to understand other people's actions, and may also help in learning how to imitate them.
This research may have an incredibly important practical application: lazier ways to learn!
Professor Patrick Haggard of UCL's Institute of Cognitive Neuroscience says: "We've shown that the mirror system is finely tuned to an individual's skills. A professional ballet dancer's brain will understand a ballet move in a way that a capoeira expert's brain will not. Our findings suggest that once the brain has learned a skill, it may simulate the skill without even moving, through simple observation. An injured dancer might be able to maintain their skill despite being temporarily unable to move, simply by watching others dance. This concept could be used both during sports training and in maintaining and restoring movement ability in people who are injured."
It is still necessary to develop enough proficiency in a skill so that one will have a mind trained to learn from watching experts perform.
Dr Daniel Glaser of UCL's Institute of Cognitive Neuroscience says: "Our study is as much a case of 'monkey do, monkey see' as the other way round. People's brains appear to respond differently when they are watching a movement, such as a sport, if they can do the moves themselves.
"When we watch a sport, our brain performs an internal simulation of the actions, as if it were sending the same movement instructions to our own body. But for those sports commentators who are ex-athletes, the mirror system is likely to be even more active because their brains may re-enact the moves they once made. This might explain why they get so excited while watching the game!"
Deborah Bull, Creative Director at Royal Opera House (ROH2), says: "We are delighted to be working with Patrick Haggard, our Associate Scientist, on this fascinating area of research. As a former dancer, I have long been intrigued by the different ways in which people respond to dance. Through this and future research, I hope we'll begin to understand more about the unique ways in which the human body can communicate without words."
Videos are still greatly underutilized as a means of training and education. Videos of college courses on every subject ought to be widely and cheaply available. Surely governments spend enough money funding universities and schools that some of the teaching they fund can be recorded and made available for free download. Also, every type of performance training such as ballet and other forms of dance could have training sessions and performances recorded from many angles for use in schools. Then even children in rural areas could pursue types of training that are now accessible only to much more urbanized populations. Such recordings would also be of value to many city dwellers who cannot afford expensive lessons. The process of education is still too stuck in old formats of delivery. Education should become as modernized by technology as retail, communications, factories, and transportation have become.
A 6,000 kilometer region between the two Van Allen belts was previously thought to be a "safe zone" where lighter, smaller, cheaper, and less heavily shielded satellites could operate. But observations from a research satellite have found that during severe solar storms high energy particles spill into that zone.
A region of space around the Earth that was previously thought to be free of radiation actually teems with high-energy charged particles during solar storms. The news may dash scientists' hopes of sending up ultra-cheap, lightweight satellites that do not carry much protection against radiation.
Read the full article for the details.
Using a high-resolution version of functional magnetic resonance imaging (fMRI), the researchers observed that a structure in the brain important for emotional processing - the amygdala - lights up with activity when people unconsciously detect fearful faces.
Although the study was conducted in people who had no anxiety disorders, the researchers say that the findings should also apply to people with anxiety disorders.
“Psychologists have suggested that people with anxiety disorders are very sensitive to subliminal threats and are picking up stimuli the rest of us do not perceive,” says Dr. Joy Hirsch, professor of neuroradiology and psychology and director of the fMRI Research Center at Columbia University Medical Center, where the study was conducted. “Our findings now demonstrate a biological basis for that unconscious emotional vigilance.”
A part of the brain involved in anxious reactions responded to fearful pictures even when the pictures were flashed too quickly for the conscious mind to become aware of them.
In the study, the researchers presented images of fearful facial expressions, which are powerful signals of danger in all cultures, to 17 different subjects. None of the 17 volunteers had any anxiety disorders, but their underlying anxiety varied from the 6th to the 85th percentile of undergraduate norms, as measured by a well-validated questionnaire.
“These are the type of normal differences that would be apparent if these people got stuck in an elevator,” Dr. Hirsch says. “Some of them would go to sleep; some would climb the walls.”
While the subjects were looking at a computer, the researchers displayed an image of a fearful face onto the monitor for 33 milliseconds, immediately followed by a similar neutral face. The fearful face appeared and disappeared so quickly that the subjects had no conscious awareness of it.
But the fMRI scans clearly revealed that the brain had registered the face and reacted, even though the subjects denied seeing it. These scans show that the unconsciously perceived face activates the input end of the amygdala, along with regions in the cortex that are involved with attention and vision.
Brain activity varies with level of anxiety
The researchers also noticed that the amount of brain activity varied from person to person, depending on their scores on the anxiety quiz.
The amygdalas of anxious people were far more active than the amygdalas of less anxious people. And anxious subjects showed more activity in the attention and vision regions of the cortex, which manifested itself in faster and more accurate answers when the subjects were asked questions about the neutral face.
“What we think we’ve identified is a circuit in the brain that’s responsible for enhancing the processing of unconsciously detected threats in anxious people,” says Amit Etkin, the study’s first author. “An anxious person devotes more attention and visual processing to analyze the threat. A less anxious person uses the circuit to a lesser degree because they don’t perceive the face as much as a threat.”
Unconscious vs. conscious processing of fearful faces
In contrast to unconscious processing of fearful faces, the researchers found that when subjects looked at the fearful faces for 200 milliseconds, long enough for conscious recognition, a completely different brain circuit was used to process the information. And the activity in that circuit did not vary according to the subject’s level of anxiety.
“Our study shows that there’s a very important role for unconscious emotions in anxiety,” Etkin says.
This reminds me of claims decades ago that some movie theaters were supposedly splicing pictures of food in with movies using durations too short to be consciously registered but still long enough to make someone want to go buy some popcorn and candy. Well, can the flashing up of food pictures for periods too short to be registered by the conscious mind still manage to provoke a hunger pang just like these scary faces provoke the beginning of an anxiety reaction?
This technique could be used in movies to create anxious reactions to scary scenes. Though how well it worked would depend on each movie watcher's proneness to anxiety.
Also, could this technique be used in interrogations to increase the anxiety of a subject of interrogation? Would that help the interrogators succeed in getting useful information? Imagine the subject being left in a room to watch a seemingly soothing TV show that has 33 millisecond flashes of anxious faces spliced into the video stream.
Then there are the anxious and fearful people of the world: They shouldn't look at each other's faces. They probably feed off of each other's fear. They should have pictures of happy and relaxed people on walls near their work desks and at home.
Never mind the tablets - heart disease could be cut by 76% and men could expect to live more than six years longer if they simply ate the right meal once a day, doctors said yesterday.
Last year the British Medical Journal ran a paper advocating the "Polypill" - combining aspirin, folic acid and cholesterol-lowering and blood-pressure drugs - for everybody over 55. But an article in the Christmas issue says a "Polymeal", containing fish, wine, dark chocolate, fruits and vegetables, garlic and almonds, would achieve roughly the same effect.
Note that the diet (assuming it works) would do this by delaying heart disease by an even greater number of years. A delay in the onset of heart disease would then allow time for other diseases to kill before heart disease developed to the point of being fatal.
However, a diet that reduces the risk of heart disease probably reduces the risk of a number of other diseases as well. So this diet might provide an even larger benefit than these researchers project. On the other hand, the various components of the diet may provide benefits that are not additive to the extent that these researchers assume. So the benefit could be less than they project.
Note that in order to get the benefit you would need to follow the diet for years, decades even. Also, if you already have very low cholesterol and low blood pressure I would not expect much of a benefit.
Oscar Franco, a public health scientist at the University Medical Centre in Rotterdam, The Netherlands, and his colleagues suggest the "Polymeal" as a natural alternative to the "Polypill".
This wonder pill - a cocktail of six existing drugs - was proposed in June 2003 as a preventive pill which might slash the risk of heart attack or stroke in people over 55 years old by as much as 80%. The proposal was underpinned by an analysis of over 750 trials of the existing drugs.
The polypill is made up of aspirin, folic acid (a vitamin that lowers homocysteine in the blood), a statin (to lower cholesterol; e.g. Crestor or Lipitor), and three blood-pressure-lowering medicines. The Polypill brings with it a number of risks, including stomach bleeding from the aspirin and muscle problems from the statin.
The recommended daily "dosage" is 150 ml of wine, 100 g of dark (nondairy) chocolate, 400 g of fruits and vegetables, 2.7 g of fresh or frozen garlic, and 68 g of almonds, plus 114 g of fish four times a week.
Milk in chocolate is believed to counter the beneficial antioxidant effects of bittersweet chocolate, which is known to reduce blood pressure.
Garlic reduces cholesterol, and fresh produce cuts both that and blood pressure, the authors wrote.
My advice? I think anyone considering the Polymeal Diet ought to first consider the Ape Diet to lower cholesterol and inflammation. The Ape Diet has a fair amount of overlap with the Polymeal Diet but will probably exert a stronger cholesterol lowering effect. Though the two could probably be blended. One could eat the Ape Diet with a few of the elements from the Polymeal Diet that are not specifically called out in the Ape Diet.
The most important thing to keep in mind about diet improvements is that you should pursue sustainable improvements. If you think you will stay on a diet only for a few months before tiring of it then there isn't much point pursuing it. But if there are parts of one of these diets you think you could sustain without any feeling of constant sacrifice then certainly start doing those parts.
About the wine in the Polymeal Diet: It should be red wine in order to get the antioxidant resveratrol. My guess is that consuming dark grape juice may work just as well and using a lot of red wine vinegar may also deliver the resveratrol and other antioxidants found in red wine.
As for the chocolate: Some dark chocolates have a lot of their flavonoids removed during processing. The company that makes Doves chocolates has made an effort to preserve the flavonoids, and it is likely that Doves Dark Promises provides more benefit than most other dark chocolates. The authors' point about eating dark bittersweet chocolate has to be taken seriously. One would need to eat more milk chocolate to get the same amount of cocoa as dark chocolate contains. The less sweet the chocolate the better, because fewer calories would then need to be consumed to get the flavonoid compounds that are thought to be providing health benefits. However, I am still skeptical about the benefits of chocolate for the vascular system since the vasodilation effect is transitory according to accounts I've read. If anyone has more certain information about this then please post a comment.
As for the fish: Check out my favorite FDA table on fish mercury levels (and this fish mercury chart too) and avoid high mercury fish.
Update: Let me amend what I said above about grape juice and vinegar. Grape juice may not be a useful source of resveratrol.
The resveratrol content of wine is related to the length of time the grape skins are present during the fermentation process. Thus the concentration is significantly higher in red wine than in white wine, because the skins are removed earlier during white-wine production, lessening the amount that is extracted. Grape juice, which is not a fermented beverage, is not a significant source of resveratrol. A fluid ounce of red wine averages 160 µg of resveratrol, compared to peanuts, which average 73 µg per ounce. Since wine is the most notable dietary source, it is the object of much speculation and research.
Also, there are apparently at least two ways to make vinegar. One involves exposure to air and may destroy the resveratrol. I do not know which method of making vinegar is most common. But vinegar does not appear to be a reliable source of resveratrol. If you do not want to drink red wine there is at least one commercial red wine extract whose maker claims it preserves resveratrol.
Even among red wines the concentration of resveratrol can vary by more than an order of magnitude. Red grapes grown in colder conditions and for longer periods of fermentation have higher concentrations of resveratrol. If anyone comes across a table listing resveratrol concentration by wine brand and vintage please post a comment with a link to it.
By studying patients who developed abnormal hoarding behavior following brain injury, neurology researchers at the University of Iowa Roy J. and Lucille A. Carver College of Medicine have identified an area in the prefrontal cortex that appears to control collecting behavior. The findings suggest that damage to the right mesial prefrontal cortex causes abnormal hoarding behavior by releasing the primitive hoarding urge from its normal restraints. The study was published online in the Nov. 17 Advance Access issue of the journal Brain.
Comparison of magnetic resonance imaging (MRI) brain scans between patients who became abnormal collectors and patients who did not turned up the location that controls collecting behavior.
The UI team studied 86 people with focal brain lesions - very specific areas of brain damage – to see if damage to particular brain regions could account for abnormal collecting behavior. Other than the lesions, the patients' brains functioned normally and these patients performed normally on tests of intelligence, reasoning and memory.
A questionnaire completed by a close family member was used to identify problematic collecting and the behavior was classified as abnormal if the collection was extensive; the collected items were not "useful" or aesthetic; the collecting behavior began only after the brain injury occurred; and the patient was resistant to discarding the collected items.
The questionnaire very clearly split the patients into two groups – 13 patients who had abnormal collecting behavior and a majority (73 patients) who did not. Unlike normal collecting behavior such as stamp collecting, the abnormal collecting behavior of these patients significantly interfered with their normal daily life. Patients with abnormal collecting behavior filled their homes with vast quantities of useless items, including junk mail and broken appliances. Despite showing no further interest in the collected items, the patients resisted attempts to discard the collections.
To determine if certain areas of damage were common to patients who had abnormal collecting behavior, the UI researchers used high-resolution, three-dimensional magnetic resonance imaging to map the lesions in each patient's brain and overlapped all the lesions onto a common reference brain.
"A pretty clear finding jumped out at us: damage to a part of the frontal lobes of the cortex, particularly on the right side, was shared by the individuals with abnormal behavior," Anderson said. "Our study shows that when this particular part of the prefrontal cortex is injured, the very primitive collecting urge loses its guidance."
So then are stamp collectors and baseball card collectors slightly brain damaged? My guess is that there is a continuum of urges to collect with some people having stronger natural urges that have to find some outlet.
It seems likely to me that just as there are people who have excessive urges to collect, there are other people whose urge to collect is too weak to keep enough possessions around them to take care of themselves. On average, do wealthier people have stronger urges to collect possessions? Do people who buy things have stronger urges to collect objects than people who spend their money travelling and entertaining themselves? Seems plausible.
If a flu pandemic similar to the deadly one that spread in 1918 occurs, it may be possible to keep the pandemic in check through vaccinations, a new study suggests. The infamous 1918 pandemic killed up to 40 million people worldwide, but the virus strain was not unusually contagious compared to other infectious diseases such as measles, according to a new analysis by researchers at Harvard School of Public Health. However, the 1918 flu was quite lethal once contracted, believed to be 10 times more lethal than other pandemic strains.
Epidemiologists analyzed historical epidemic data in 45 US cities and found that the transmissibility of the 1918 strain as measured by the number of people infected by a single case was only about 2 to 4, making the strain about as transmissible as the recent SARS coronavirus. Since people with influenza can transmit the infection before the appearance of symptoms, strategies for transmission reduction, including vaccination, would need to be implemented more rapidly than measures for the control of the recent SARS outbreak. Isolation methods alone, such as those used for controlling SARS, would not be effective.
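To put those transmissibility numbers in perspective, a standard epidemic-model result says an outbreak stops spreading once more than 1 - 1/R0 of the population is immune, where R0 is the number of people infected by a single case. Here is a minimal sketch of what the reported R0 range of 2 to 4 implies for vaccination coverage; the threshold formula is the textbook SIR-model result, not a figure from this study:

```python
# Herd-immunity threshold implied by an estimated reproduction number R0.
# Standard SIR-model result: spread stops once more than 1 - 1/R0 of the
# population is immune (via vaccination or prior infection).
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (2.0, 3.0, 4.0):
    print(f"R0 = {r0:.0f}: need > {herd_immunity_threshold(r0):.0%} immune")
# R0 = 2 -> 50%, R0 = 3 -> 67%, R0 = 4 -> 75%
```

So even at the high end of the estimate, protecting roughly three quarters of the population before widespread exposure would, in this simple model, be enough to keep a 1918-class strain from sustaining itself.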
The analysis, "Transmissibility of 1918 Pandemic Influenza," by Christina Mills, doctoral student in the HSPH Department of Epidemiology, and faculty co-authors appears in the Dec. 16, 2004 issue of Nature.
World Health Organization officials have warned that we are closer now to another pandemic than in any other recent time because of the persistent bird flu epidemic in East Asia that threatens to jump to humans.
"Our study suggests that if you could vaccinate a reasonable number of people before exposure, a future flu epidemic could be controlled," said Mills. "But flu takes just a couple of days to become contagious in a person, unlike the almost week-long latency period we experienced with SARS. Isolation would be only partially effective. Rapid vaccination is essential."
"Before this study, estimates were all over the map on the transmissibility of pandemic flu," said co-author Marc Lipsitch, associate professor of epidemiology at HSPH. "Some thought it was so transmissible that vaccines would be unlikely to stop it. This study is optimistic, except we don't have the vaccine. It is now even more important to put resources into the development of vaccine technology, manufacture and distribution systems to make possible a rapid response to the next outbreak of an entirely new flu strain."
The problem with rapid vaccination as a response to a new strain with pandemic potential is that it takes 6 months to make a vaccine for a new virus strain and then the whole world has a capacity to make only 300 million doses. Worse yet, a completely new strain would require two doses per person for full immunity. We need vaccine manufacturing technology that is faster and more easily scalable.
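The capacity constraint above translates into a sobering fraction of humanity. A rough back-of-the-envelope calculation, using the dose and capacity figures from the paragraph above plus my added assumption of a world population of about 6.4 billion in 2004:

```python
# Rough arithmetic on worldwide flu vaccine capacity for a novel strain.
annual_dose_capacity = 300_000_000   # doses/year worldwide (from the text)
doses_per_person = 2                 # a completely new strain needs two doses
world_population = 6_400_000_000     # approximate 2004 figure (my assumption)

people_covered = annual_dose_capacity // doses_per_person
fraction = people_covered / world_population
print(f"People fully immunized: {people_covered:,}")   # 150,000,000
print(f"Share of world population: {fraction:.1%}")    # about 2.3%
```

That is nowhere near the coverage that a strain spreading with an R0 of 2 to 4 would demand, which is the case for faster and more scalable vaccine manufacturing technology.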
In the event of the emergence of a really dangerous flu strain, as things stand now we aren't going to get vaccines quickly enough to prevent the spread of the virus. Therefore societies would need to be quickly reorganized to reduce the rate of transmission of flu viruses. People would need to make many changes to how they carry out their daily routines in order to reduce the risk of exposure to influenza.
STANFORD, Calif. – For the first time, researchers at Stanford University School of Medicine have examined how kidneys change at a molecular level with the passage of time. What they found suggests that all human cells age in a similar way, supporting one theory about how cells grow old.
“Until now we really didn’t know what happens when people get old,” said Stuart Kim, PhD, professor of developmental biology and genetics, who led the study that is to be published in the November 30 issue of Public Library of Science Biology. “Our work suggests that there’s a common way for all tissues to get old.”
These findings are contrary to one model for how cells age. This theory holds that because organs have different groups of molecules, they follow different pathways as they age. If this were the case then the aging kidney would look quite different on the molecular level from an aging liver.
Instead the study findings support another model, which suggests that all cells in an animal peter out in the same way. If this were true then researchers would find the same molecular differences between old and young cells from all organs.
In the study, Kim and his group compared which genes are active in kidney cells from 74 people ranging in age from 27 to 92 years. They found 742 genes that become more active as the kidney ages and 243 genes that become less active.
They then did the same experiment using different types of kidney tissue, with one sample from the outer kidney, called the cortex, and the other from the inner kidney, called the medulla. Although these two tissues are both from the kidney, they are as different in function as cells from entirely different organs. The researchers found exactly the same genes varied in old and young samples from these two tissues.
The next obvious experiment would be to repeat this study with tissues from other organs and see if the same genes have changing levels of activity as tissue ages. Do some organs age at more rapid rates? Does this happen for everyone? One might expect some variability between humans due to genetic variants that accelerate aging in particular organs and also due to dietary habits and other habits that impose larger harm on certain organs (e.g. smoking on lungs or drinking on livers or fried meats on intestines).
Note that gene microarrays have gotten so powerful that these researchers were able to check the expression levels of all the known genes in a human cell. Chronological age is not always the same as age as measured by molecular genetic expression profile.
Kim and colleagues then isolated RNA transcripts from the samples to determine the activity of every gene, broken down by age and kidney section, through microarray analysis. Looking for differences in gene expression across the genome, they identified genes that showed a statistically significant change in expression as a function of age. Of 33,000 known human genes on the microarray, 985 showed age-related changes, most showing increased activity. These changes are truly age-regulated, the authors conclude, since none of the medical factors impacted the observed changes in gene expression.
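The core screening step here, finding genes whose expression tracks chronological age, can be sketched in a few lines. This is a toy illustration with invented expression data, not the study's actual pipeline (the real analysis ran proper significance tests across roughly 33,000 genes on microarrays and controlled for medical factors): for each gene, correlate donor age with expression level and flag strong trends.

```python
import math
import random

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 74 donors aged 27 to 92, as in the study; expression values are invented.
ages = [random.uniform(27, 92) for _ in range(74)]
expression = {
    "rises_with_age": [0.05 * a + random.gauss(0, 0.5) for a in ages],
    "no_age_trend":   [random.gauss(5.0, 0.5) for _ in ages],
}

# Flag genes whose expression co-varies strongly with donor age.
for gene, levels in expression.items():
    r = pearson_r(ages, levels)
    flag = "age-regulated" if abs(r) > 0.5 else "flat"
    print(f"{gene}: r = {r:+.2f} ({flag})")
```

The published analysis did far more than this, but the conceptual move is the same: treat age as a variable and ask which transcripts co-vary with it.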
Although cortex and medulla have different cell types and perform different functions, their genetic aging profile was very similar, suggesting a common aging mechanism operates in both structures. In fact, these mechanisms may function broadly, as most of the age-regulated kidney genes were also active in a wide range of human tissues. Other organisms appear to lack these changes, however, prompting the authors to argue that understanding aging in humans will require human subjects.
Most importantly, the genetic profile of the tissue samples correlated with the physiological and morphological decline of an aging kidney. An 81-year-old patient with an unusually healthy kidney had a molecular profile typical of someone much younger, while a 78-year-old with a damaged kidney had the profile of a much older person. Using the power of functional genomics, this study has identified a set of genes that can serve as molecular markers for various stages of a deteriorating kidney and predict the relative health of a patient compared to their age group. These gene sets can also serve as probes to shed light on the molecular pathways at work in the aging kidney, and possibly on the process of aging itself.
I'd love to see a longitudinal study where tissues are taken from a number of elderly people and assayed for gene expression to see if onset of diseases and mortality can be predicted from how far along the cells in a person seem to have aged according to gene expression levels.
One public policy implication of a developed ability to more accurately predict life expectancy is that government-funded old age retirement funds could (not saying they would) offer different ages of eligibility for retirement based on how old a person is measured to be at the molecular level. A highly aged 60-year-old could be offered early retirement based on a short life expectancy and a diminished ability to work. At the same time, a highly healthy and functional 70-year-old could be told they have to keep working or live off their own savings because all molecular indications are that they have years of healthy, vibrant life still left in their bodies. Given that governments face massive unfunded liabilities for old age entitlement programs, the ability to distinguish between aged elderly and relatively more youthful elderly could provide governments with a way to lighten their fiscal burdens.
You can read the full research paper free of charge.
Previous research has implicated the brain's opioid system in the development of alcohol-use disorders. The mu-opioid receptor, which is encoded by the OPRM1 gene, is the primary site of action for opiates with high abuse potential, such as opium and heroin, and may also contribute to the effects of non-opioid drugs, such as cocaine and alcohol. Findings published in the December issue of Alcoholism: Clinical & Experimental Research indicate that individuals with the G variant of the A118 polymorphism of the OPRM1 gene have greater subjective feelings to alcohol's effects as well as a greater likelihood of a family history of alcohol-use disorders.
"Alcohol releases endogenous opiates which, in turn, seem to influence the mesolimbic dopamine system," said Kent E. Hutchison, associate professor of psychology at the University of Colorado at Boulder and lead author of the study. "This system is involved in craving and the motivation to use alcohol and drugs. Thus, it is alcohol's effects on endogenous opioids that act as the gateway through which alcohol may influence this system."
"It is well known that alcohol dependence tends to run in families," said Robert Swift, professor of psychiatry and human behavior at Brown University and Associate Chief of Staff for Research at the Providence VA Medical Center. "The inheritance of alcoholism is complex, but there are suggestions that the opiate systems in the brain are involved. Our brains contain proteins, called enkephalins and endorphins, that act like morphine and other opiates derived from the poppy plant. Several researchers have shown that persons with a family history of alcoholism tend to have differences in blood levels of beta-endorphin, a natural opiate hormone, compared to persons without a family history of alcoholism. Children of alcoholics, who are not themselves alcoholics, have lower levels of beta-endorphin than do children of non-alcoholics. Also, when young adults with a family history of alcoholism drink alcohol, they increase their blood levels of beta-endorphin more than those without a family history of alcoholism."
A special protein called the mu-opioid receptor, which is located in the membranes of nerve cells, detects internal opiate neurotransmitters, such as beta-endorphin, that the brain uses to allow nerve cells to communicate with each other. Previous research has shown that the G variant of this gene has a slightly different receptor protein, which causes a big difference in how well the receptor connects with beta-endorphin. For example, the G variant receptor binds three times more tightly than the A variant to beta-endorphin, which means that a nerve cell with the G variant is more greatly affected by beta-endorphin. The net result is that dopamine cells, which play a role in motivation and reinforcement, become more stimulated.
For this study, participants comprised 38 students (20 male, 18 female) at the University of Colorado, 21 to 29 years of age, who indicated drinking patterns classified as moderate to heavy. Participants were either homozygous for the A allele (n=23) or heterozygous (n=15). Each received intravenous doses of alcohol that were designed to cause breath alcohol concentration (BAC) levels of .02, .04, and .06. Researchers measured subjective intoxication, stimulation, sedation, and mood states at baseline and at each of the three BAC levels.
Results indicate that individuals with the G allele had higher subjective feelings of intoxication, stimulation, sedation, and happiness across trials as compared to participants with the A allele.
The OPRM1 gene is likely one of dozens of genes that influence the odds that a person who drinks alcohol will eventually become an alcoholic. Once DNA sequencing costs fall a couple more orders of magnitude all the genes that influence the risk of alcoholism (and the risks of all other diseases - addictive or otherwise) will be identified in short order.
The identification of genes that are involved in addiction will eventually lead to the development of drugs that suppress and increase their expression and other drugs that work at the receptors the genes code for. As a result, addiction will become much easier to treat and even to cure.
"These findings provide empirical support for the widespread belief that powerful women are at a disadvantage in the marriage market because men may prefer to marry less accomplished women," said Stephanie Brown, lead author of the study and a social psychologist at the U-M Institute for Social Research (ISR).
For the study, supported in part by a grant from the National Institute of Mental Health, Brown and co-author Brian Lewis from UCLA tested 120 male and 208 female undergraduates by asking them to rate their attraction and desire to affiliate with a man and a woman they were said to know from work.
"Imagine that you have just taken a job and that Jennifer (or John) is your immediate supervisor (or your peer, or your assistant)," study participants were told as they were shown a photo of a male or a female.
After seeing the photo and hearing the description of the person's role at work in relation to their own, participants were asked to use a 9-point Likert scale (1 is not at all, 9 is very much) to rate the extent to which they would enjoy going to a party with Jennifer or John, exercising with the person, dating the person and marrying the person.
Brown and Lewis found that males, but not females, were most strongly attracted to subordinate partners for high-investment activities such as marriage and dating.
"Our results demonstrate that male preference for subordinate women increases as the investment in the relationship increases," Brown said. "This pattern is consistent with the possibility that there were reproductive advantages for males who preferred to form long-term relationships with relatively subordinate partners.
"Given that female infidelity is a severe reproductive threat to males only when investment is high, a preference for subordinate partners may provide adaptive benefits to males in the context of only long-term, investing relationships---not one-night stands."
This poses a problem not just for bright, motivated, and successful women but also for the gene pool of the human race. The genetic characteristics that make women bright, motivated, and successful are getting selected against more often than was the case in the past. In Western industrial countries, when women had far fewer opportunities to rise in status hierarchies and earn high incomes, the women whose genetic potential was being suppressed appeared in more subordinate roles and hence were more attractive to successful men.
Of course there are limits to just how far male attraction to lower status females will go - especially for marriage. One reason is simple: money. Higher status females will bring more money to the marriage, or the ability to work fewer hours to make the same amount of money as a lower status female makes working full time. So time for mommy duties can be greater with higher status females who are willing to work part-time. These facts have got to cross the minds of a lot of men who are looking at prospects.
Another point: This study used photos and no verbal interaction. In conversations brighter women may do better at raising the interest of men. They can be funnier and do a better job of picking up on cues about interests. Also, looks matter, and contrary to popular stereotypes the smarter women may have an edge there: intelligence and body symmetry are positively correlated, and body symmetry is also correlated with attractiveness. So brighter women are probably more attractive on average.
Professor Lee Ellis and colleagues at Minot State University in North Dakota have found that mothers of homosexual daughters used thyroxine and amphetamines during pregnancy at higher frequencies than mothers of heterosexual daughters.
The researchers found that the mothers of homosexual women were at least five times more likely to have taken synthetic thyroid medications during pregnancy than mothers of heterosexual women, and eight times more likely to have used amphetamine-based diet pills such as Dexedrine and diethylpropion.
They also found evidence that some drugs have the opposite effect during pregnancy, reducing the probability of homosexual offspring. Mothers of heterosexual males were 70 per cent more likely to have taken drugs to combat nausea than those of male homosexuals.
The effect of the drug use is most pronounced in the first trimester of pregnancy. Since it is generally believed that sexual orientation of the brain is set during the first trimester this aspect of the result is not surprising and it can even be seen as strengthening the likelihood that this result will hold up when a follow-up study is conducted on a larger number of subjects.
I predict that in the future, as the influences of various drugs on sexual orientation become known, women will intentionally use drugs to increase the odds that their children will be born with the sexual orientation that they prefer. Will lesbian women use sex selection techniques to ensure that their offspring will be daughters? Also, will they choose to have lesbian or heterosexual daughters?
Since women are the ones who carry babies in pregnancy, and hence (at least in Western societies) have the bigger say over what goes on in their bodies during pregnancy, my guess is that female homosexuality will be selected for more often than male homosexuality. I expect male homosexuality to become more rare once means to decrease its likelihood are identified. I am not sure whether female homosexuality will become more or less common. My guess is that it will become less common as well.
For other work by Lee Ellis see my post Obesity Being Selected For In Modern Society?
Update: Given that amphetamines and synthetic thyroid hormone are products of mid-20th century industrial civilization, this result strongly suggests that industrial civilization has caused a higher incidence of lesbianism.
What else has industrial civilization done to alter, on average, human biological development? Certainly drug abuse and alcohol abuse cause an increase in the incidence of an assortment of defects. Medicine is allowing more defective babies to survive pregnancy and early childhood. Medicine, public health measures (e.g. sterile water), and higher living standards (more food, better shelter) are also reducing the selective pressure caused by infectious diseases. What is being selected for instead? Chemical pollutants might be causing other changes that influence fertility, cognitive ability, behavior, and degree of masculinity and femininity.
There are two issues that need to be separated here: First, how are elements of modern environments (e.g. specific drugs, quantity and quality of food) altering development of individuals as reported above? Second, what changes in natural selective pressure are happening as a result of these changes and what genetic variations are being selected for or against? Both of those questions will get many more answers in the years ahead.
One point can be made about the above case: If being a lesbian reduces fertility (anyone know?) then any genetic variant that increases the likelihood that one will take amphetamines or thyroid hormone is (probably very slowly) being selected against. Also, any genetic variant which makes it more likely that a female fetus will become homosexual in response to amphetamine or thyroid hormone exposure is also (again, probably very slowly) being selected against.
French scientists Charlotte Faurie and Michel Raymond of the University of Montpellier have found, in a cross-cultural comparison of rates of left-handedness and violence, that lefties occur at higher rates in more violent societies.
Among the Jula (Dioula) people of Burkina Faso, the most peaceful tribe studied, where the murder rate is 1 in 100,000 annually, left-handers make up 3.4% of the population. But in the Yanomami tribe of Venezuela, where more than 5 in 1,000 meet a violent end each year, southpaws account for 22.6%. Faurie and Raymond report their findings in the Proceedings of the Royal Society.
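As a rough sanity check on the two figures above, here is a minimal Python sketch (using only the rates quoted in this post) of how large the gap between the two societies is:

```python
# Left-handedness rates from the two societies cited above
# (figures from Faurie & Raymond's cross-cultural comparison).
jula_lefty_rate = 0.034      # Jula of Burkina Faso, murder rate ~1 per 100,000/yr
yanomami_lefty_rate = 0.226  # Yanomami of Venezuela, >5 per 1,000 die violently/yr

ratio = yanomami_lefty_rate / jula_lefty_rate
print(f"{ratio:.1f}x more left-handers in the most violent society")
```

That works out to a more than six-fold difference in left-handedness rates between the least and most violent societies studied.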
Many sports are kinda like violence under civilized rules. Well, sportspeople are more often left-handed than the overall population.
And the ratio of left-handers to right-handers is higher in successful sportspeople than it is in the general population, suggesting there is definite advantage to favouring the left hand or foot in competitive games, such as tennis.
Statistical evidence links several auto-immune diseases, such as inflammatory bowel syndrome and ulcerative colitis, with left-handedness.
Given the downsides, there would have to be strong selective pressures to cause left-handedness to happen anyway. So the need to discover a selective pressure favoring left-handedness already existed before this latest report was published.
As any schoolboy could tell you, winning fights enhances your status. If, in prehistory, this translated into increased reproductive success, it might have been enough to maintain a certain proportion of left-handers in the population, by balancing the costs of being left-handed with the advantages gained in fighting.
Suppose that, as this study suggests, the adaptive value of the ability to fight and win fights differed between societies so strongly that it selected for different frequencies of genetic variations that influence handedness. The existence of differences in selective pressures for competence in human-human violence is especially important because such differences would not have acted only on genes that influence the tendency to be left-handed. A difference in selective pressure for violence would also have produced differences in the frequencies of many other genetic variations that influence other aspects of the capacity to commit violent acts (e.g. tendency to impulsiveness, ease of angering, muscle strength, and other cognitive and physical qualities).
When hamburgers are cooked at high temperatures, heterocyclic amines (HCAs) are formed. HCAs are carcinogens and are probably one of the reasons why red meat consumption raises the incidence of cancer. Well, some new research provides evidence that the addition of just 1% potato starch to hamburgers reduces carcinogen formation by an order of magnitude.
Overall, the burgers treated with potato starch developed the lowest total quantity of HCAs in their crust, just 5.5 nanograms per gram of cooked meat, Skog's team reports in an upcoming issue of the Journal of Agricultural and Food Chemistry. The next best performer was a fructose-glucose mix, which yielded roughly 8 ng/g. By contrast, the untreated burgers ended up with an HCA content of 60 ng/g, and the one laced with only salt and TPP yielded HCAs at 16 ng/g.
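To put the reported numbers side by side, here is a small Python sketch computing the reduction factor of each treatment against the untreated burgers (all figures taken from the excerpt above):

```python
# HCA levels (ng per gram of cooked meat) reported for each treatment.
hca_ng_per_g = {
    "untreated": 60.0,
    "salt + TPP": 16.0,
    "fructose-glucose": 8.0,
    "potato starch": 5.5,
}

baseline = hca_ng_per_g["untreated"]
for treatment, level in hca_ng_per_g.items():
    factor = baseline / level
    print(f"{treatment}: {level} ng/g, {factor:.1f}x lower than untreated")
```

The potato starch figure works out to roughly an 11-fold reduction, which is the "order of magnitude" claimed above.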
Lower temperature cooking by boiling meats would probably reduce HCA formation by a comparable amount and perhaps even by more.
Previous research has demonstrated that mixing cherries into hamburgers also reduces formation of heterocyclic aromatic amines (HAAs also referred to as heterocyclic amines or HCAs above) during cooking.
In the study, MSU researchers found that ground beef patties containing 15 percent fat and 11.5 percent tart cherry tissue had "significantly" fewer HAAs when pan fried, compared to patties without cherry tissue added. The overall HAA reduction ranged from nearly 69 percent to 78.5 percent. The reduction is "clearly due to cherry components functioning as inhibitors of the reaction(s) leading to HAA formation," according to the journal article. Measurements done during the study showed that the fat content of cherry patties was lower than that of regular patties, but the moisture content was greater, thereby verifying early findings.
Among food samples cooked to a well-done, non-charred state, bacon strips had almost 15-fold the HCA content (109.5 ng/g) of the beef, whereas no HCAs were detected in the fried tempeh burgers.
The total amounts of HCAs in the smoke condensates were 3 ng/g from fried bacon, 0.37 ng/g from fried beef and 0.177 ng/g from fried soy-based food.
Scientists in Japan have made the first device that can convert solar energy into electricity and then store the resulting electric charge. The "photocapacitor" designed by Tsutomu Miyasaka and Takurou Murakami at Toin University in Yokohama could be used to power mobile phones and other hand-held devices (Appl. Phys. Lett. 85 3932).
The cells can be arranged in series to produce 12 volts.
The cells can also be connected to form larger, more powerful cells. Conventional capacitors that are charged using electricity can produce a voltage that is no greater than the input, or charging voltage, of one of the cells in a connected series. In contrast, the photocapacitor, like conventional batteries, can produce voltage equivalent to the collective input of photocapacitors connected in series. The researchers' prototype produces 0.7 volts. Connecting 18 cells would yield 12 volts, which is the output of a car battery, said Miyasaka.
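The series arithmetic in the excerpt is approximate but checks out; a trivial sketch using only the prototype figures quoted above:

```python
cell_voltage = 0.7   # volts per photocapacitor cell (prototype figure above)
cells_in_series = 18

total_voltage = cell_voltage * cells_in_series
print(f"{total_voltage:.1f} V")  # roughly the 12 V output of a car battery
```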
The thickness of the photocapacitor depends on the thickness of the electrodes, and could be made narrower than one millimeter, said Miyasaka.
The scientists expect to make these cells practical to use within just 2 years. They are working on making the cells into a flexible plastic material.
If this stuff turns out to be cheap to make then imagine applying it as a coating to a hybrid car to allow car batteries to recharge while parked.
Peccell Technologies, Inc. (Peccell), a developer of film-type dye-sensitized solar cells (DSCs), has achieved a high voltage of over 4V - equivalent to that of a lithium ion battery - under illumination. The electrode of the DSC is made of titanium oxide paste, which is based on fine particles of titanium oxide supplied by Showa Denko KK (SDK). The paste is applicable to both film-type DSCs for portable applications and conventional glass-substrate DSCs.
Peccell, based in Yokohama City and led by Prof. Tsutomu Miyasaka, Department of Biomedical Engineering, Faculty of Engineering, Toin University of Yokohama, was established in March 2004 for the development of DSC-related technologies.
Dr. Jean-Marie Andrieu and Dr. Wei Lu have demonstrated that a vaccine prepared from a patient's own blood can keep HIV viral load down far enough to allow CD4 immune cell counts to rise.
French researchers reported Sunday that an AIDS vaccine designed to treat the disease, rather than prevent it, has scored an initial success by suppressing the virus for up to a year among a small group of patients who tried it.
The vaccine reduces viral load but does not wipe out the virus entirely.
The vaccine was tested in Brazil on 18 volunteers who were already infected with HIV, the virus that causes AIDS, but who were not yet taking any antiviral drugs. After four months, the level of HIV in their bloodstreams had been reduced an average of 80 percent.
The vaccine would cost $4000-$8000 per year and have to be taken yearly. That would be considerably cheaper than anti-retroviral drugs and would probably have far fewer side effects than the drugs.
Cells called monocytes were extracted from volunteers’ blood, grown in laboratory conditions and transformed into dendritic cells, which alert the immune system to possible infections. These dendritic cells were loaded with a chemically-inactivated preparation of HIV taken from the same person, and then transfused back into the trial volunteer.
In eight out of the 18, viral load fell by more than 90% (a one-log reduction) when measured one year after treatment, along with stable or rising CD4 counts. The other 10 also had reductions in viral load, but these were not sustained. Across the whole group, there was a statistically significant reduction of viral load over the course of the year and an increase in CD4 counts of around 100 cells/mm3 which returned over one year to baseline values. This contrasted with the six months before treatment, in which CD4 counts fell by an average of 100 cells.
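The "one-log reduction" phrasing above maps to percentages as follows; a small helper sketch, assuming the standard log10 convention for viral load:

```python
def log_reduction_to_percent(logs):
    """Convert a log10 drop in viral load to a percent reduction."""
    return (1.0 - 10.0 ** -logs) * 100.0

# A one-log (10-fold) drop is a 90% reduction, matching the excerpt;
# two logs would be 99%, three logs 99.9%.
for logs in (1, 2, 3):
    print(f"{logs}-log drop = {log_reduction_to_percent(logs):.1f}% reduction")
```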
While this obviously does not cure existing HIV carriers or prevent infections it will probably reduce treatment costs, reduce side effects (which can be debilitating and life-threatening), and reduce the burden and nuisance on patients of treating themselves.
Since wider spread use of a vaccine will reduce anti-viral drug use it will also reduce the rate at which drug-resistant HIV strains get selected for. So the anti-viral drugs will last longer for those who need them.
A vaccine that prevents new infections and a treatment that would totally wipe out HIV in a body would both greatly reduce treatment costs. Optimally effective medical treatments don't just make people healthier. They are also almost always much cheaper than treatments that attempt to manage and control a medical condition without curing it entirely.
David Stephens, a biologist at the University of Minnesota, thinks some people are hobbled by excessive impulsiveness because their ancestors benefitted from the behavior.
The new experiments were modeled on how animals encounter and exploit food clumps. The jays encountered one clump at a time and obtained some food from it. Then they had to decide whether to wait for a bit more from the same clump or leave and search for another clump. Not surprisingly, the birds still acted impulsively, preferring items they could get quickly. They considered only the size and wait time for their next reward--never a reward beyond that, even though it may have been bigger.
What did surprise Stephens was that the birds that went for the immediate reward were able to "earn" as much or more food in the long run as birds that were forced to wait for the larger reward or to follow a mixed strategy. The reason, he said, was that in the wild, animals aren't faced with an either-or choice of "small reward now or big reward later." What happens is that when they find a small bit of food, they don't wait; they just go back to foraging, and they may find lots of little rewards that add up to more than what they would get if they had to hang around waiting for bigger and better.
"Animals, I think, come with a hardwired rule that says, 'Don't look too far in the future,'" Stephens said. "Being impulsive works really well because after grabbing the food, they can forget it and go back to their original foraging behavior. That behavior can achieve high long-term gains even if it's impulsive."
The work may apply to humans, he said, because taking rewards without hesitation may have paid off for our foraging ancestors, as it does for blue jays and other foragers. Modern society forces us to make either-or decisions about delayed benefits such as education, investment and marriage; the impulsive rules that work well for foragers do more harm than good when applied in these situations.
"Impulsiveness is considered a big behavior problem for humans," said Stephens. "Some humans do better at binary decisions like 'a little now or a lot later' than others. When psychologists study kids who are good at waiting for a reward, they find those kids generally do better in life. It looks as though this is a key to success in the modern world, so why is it so hard for us to accept delays? The answer may be because we evolved as foragers who encountered no penalties for taking resources impulsively."
We are no longer in the environments our ancestors evolved in. We have changed and continue to change our environments. Since natural selection takes many generations to adjust a species to environmental changes humans just are not adapted to the environments we are creating with technology. So Stephens' argument strikes me as very plausible.
Humans are going to have to adapt themselves and their offspring to modern environments. Successful adaptation will require the development of drugs, gene therapies, and stem cell therapies to adjust brains to be more adaptive and compatible with modern environments.
Commuting is the least enjoyable activity for women? I bet women would enjoy commuting a lot more if they regularly car-pooled with close friends.
The technique, described in the Dec. 3 issue of Science, provides insight into what people actually enjoy and what kinds of factors affect how happy we are with our lives.
Some of the findings confirm what we already know while others are counter-intuitive. The researchers assessed how people felt during 28 types of activities and found that intimate relations were the most enjoyable, while commuting was the least enjoyable.
More surprisingly, taking care of their children was also among the less enjoyable activities, although people generally report that their children are the greatest source of joy in their lives.
"When people are asked how much they enjoy spending time with their kids they think of all the nice things---reading them a story, going to the zoo," said University of Michigan psychologist Norbert Schwarz, a co-author of the Science article. "But they don't take the other times into account, the times when they are trying to do something else and find the kids distracting. When we sample all the times that parents spend with their children, the picture is less positive than parents expect. On the other hand, we also find that people enjoy spending time with their relatives much more than they usually assume."
General reports of what people enjoy may also differ from descriptions of how people actually feel in a specific situation because many people hesitate to report socially inappropriate feelings. This is less of a problem when they report on specific episodes. "Saying that you generally don't enjoy spending time with your kids is terrible," Schwarz said, "but admitting that they were a pain last night is quite acceptable." The new Day Reconstruction Method provides a better picture of people's daily experiences by improving accurate recall of how they felt in specific situations.
By illuminating what kinds of activities, under what conditions and with what partners, are most likely to be linked with positive or negative feelings, the method has potential value for medical researchers examining the emotional burden of different illnesses and the health consequences of stress, according to Schwarz, who is a research scientist at the U-M Institute for Social Research (ISR).
For the study, researchers analyzed questionnaires completed by a convenience sample of 909 working women. Participants answered demographic and general satisfaction questions and were asked to construct a short diary of the previous day: "Think of your day as a continuous series of scenes or episodes in a film," the directions began. After participants developed their diary, they answered a series of structured questions about each episode, including when it started and ended, what they were doing, where they were, with whom they were interacting, and how they felt. The study builds on earlier work on Americans' use of time, initiated by ISR economist F. Thomas Juster.
The average number of daily activities participants reported was 14.1 and the average duration of each episode was 61 minutes.
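As a quick consistency check, the two reported averages roughly account for a full waking day; a back-of-envelope sketch using only those figures:

```python
episodes_per_day = 14.1    # average number of reported episodes
minutes_per_episode = 61   # average episode duration in minutes

hours_accounted = episodes_per_day * minutes_per_episode / 60
print(f"{hours_accounted:.1f} hours of the day accounted for")
```

That comes to a little over 14 hours, which is about what you would expect a diary of waking time to cover.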
In addition to intimate relations, socializing, relaxing, praying or meditating, eating, exercising, and watching TV were among the most enjoyable activities. Commuting was the least enjoyable activity, with working, doing housework, using the computer for e-mail or Internet, and taking care of children rounding out the bottom of the list.
Interactions with friends and relatives were rated as the most enjoyable, followed by activities with spouses or significant others, children, clients or customers, co-workers and bosses. At the bottom of the list: activities done alone.
Personal characteristics such as trouble maintaining enthusiasm (an indicator of depression) or a poor night's sleep exerted a pervasive influence on how people felt during daily activities. Features of the current situation such as the identity of partners in an interaction or the level of time pressure experienced at work exerted a powerful effect.
But general life circumstances---such as how secure people think their jobs are, or whether they are single or married---had a relatively small impact on their feelings throughout the day. These factors were closely linked with how satisfied people said they were with their lives in general, but had little influence on how positive they felt during specific activities.
"It's not that life circumstances are irrelevant to well-being," notes Schwarz. "On the contrary, we found that people experience large variations in feelings during the course of a normal day. This variation highlights the importance of optimizing the allocation of time across situations and activities. If you want to improve your well-being, make sure that you allocate your time wisely."
Unfortunately, that's not easy. When the researchers examined the amount of time spent on various activities, they found that people spent the bulk of their waking time---11.5 hours---engaged in the activities they enjoyed the least: work, housework and commuting.
Women do not like using the computer to surf the internet. No wonder most blog comment posters appear to be men. But web surfing is a solitary activity in most cases, and activities done alone ranked at the bottom of the list. No wonder women do not like it.
On average, the 900 women gave "Intimate relations" a positive score of 5.10, compared to 4.59 for socializing. Housework scored 3.73, which was better at least than working at 3.62 and commuting with a lowly score of 3.45.
As for who the women preferred to be with, friends clearly won out with a positive score of 4.36. Children landed in the middle, after relatives and spouses.
The tool, called the Day Reconstruction Method (DRM), assesses how people spend their time and how they feel about, or experience, activities throughout a given day. In a trial of the new technique, a group of women rated the psychological and social aspects of a number of daily activities. Among these women, relaxing with friends was one of the most enjoyable activities, and the least enjoyable was commuting.
DRM is described by Daniel Kahneman, Ph.D., a professor of Psychology and Public Affairs at Princeton University, and colleagues in the December 3, 2004, issue of Science. Dr. Kahneman was joined in this study by his Princeton University colleague, economist Alan B. Krueger, and psychologists David A. Schkade of the University of California, San Diego, Norbert Schwarz of the University of Michigan, and Arthur A. Stone of Stony Brook University.
"Current measures of well-being and quality of life need to be significantly improved," says Richard M. Suzman, Ph.D., Associate Director of the National Institute on Aging (NIA), which in part funded the research. "In the future I predict that this approach will become an essential part of national surveys seeking to assess the quality of life. The construction of a National Well-Being Account that supplements the measure of GNP with a measure of aggregate happiness is a revolutionary idea." The NIA is part of the National Institutes of Health (NIH) at the U.S. Department of Health and Human Services.
The new tool will be used in an effort to calculate a "national well-being account," a measure similar to economic gauges such as the gross national product. The research team is working with the Gallup Organization to pilot a national telephone survey using the new method.
"The potential value is tremendous," said Princeton economist Alan Krueger, who worked with psychologist Daniel Kahneman, a Nobel laureate, on the study. "Right now we use national income as our main indicator of well-being, but income is only a small contributor to life satisfaction. Ultimately, if our survey is successful and generates the type of data we hope, we would like to see the government implement our method to provide an ongoing measure of well-being in addition to national income."
I will bet that using this method men will eventually be shown to find solitary activities to be more pleasurable than women find them.
So how to make women happier? More sex, obviously. Drugs that increase erotic feelings and the joy of sex would greatly help raise the average level of happiness. But aside from that? Housework will become more automated in the future as new materials and robots reduce the need for cleaning. Also, computer networks might eventually reduce the need for commuting. But that would also reduce the amount of at-work socializing that women can engage in. So will home office work make women more or less happy?
How to make children easier to raise and less time-consuming? Genetic engineering to make them more well-behaved. Do women with poorly behaved children get less joy out of raising them? Probably.
Scientists led by Walter Koch, Ph.D., director of the Center for Translational Medicine in the Department of Medicine in Jefferson Medical College of Thomas Jefferson University in Philadelphia, used a virus to insert the gene for a protein called S100A1 into failing rat hearts.
“In contrast to other gene therapy strategies geared to overexpressing a gene,” says Dr. Koch, who is W.W. Smith Professor of Medicine at Jefferson Medical College, “because this protein is reduced in heart failure, simply bringing the protein level back to normal restored heart function.” Dr. Koch and his co-workers report their findings December 1, 2004 in the Journal of Clinical Investigation.
S100A1, which is part of a larger family of proteins called S100, binds to calcium and is primarily found at high levels in muscle, particularly the heart. Previous studies by other researchers showed that the protein was reduced by as much as 50 percent in patients with heart failure. A few years ago, Dr. Koch and his co-workers put the human gene that makes S100A1 into a mouse, and found a resulting increase in contractile function of the heart cell. The mice's hearts worked better and had stronger beats.
Dr. Koch’s Jefferson team now examined whether it could make failing hearts normal again. The researchers – 12 weeks after they simulated a heart attack in the rats – delivered the human S100A1 gene to the heart through the coronary arteries by injection of a genetically-modified common cold virus as a carrier. After about a week, they found the hearts began to work normally. In addition, the animals’ heart muscle showed improved efficiency in using its energy supply, which was decreased in heart failure. According to Dr. Koch, the improvements were seen in both the whole animal as well as in individual heart cells.
“This is one of the first studies to do intracoronary gene delivery in a post-infarcted failing heart,” he says. “This proves it could actually be a therapy since most of the previous studies of this type are aimed at prevention – giving a gene and showing that certain heart problems are prevented. In those cases, heart problems are not actually reversed. This is a remarkable rescue and reversal of cardiac dysfunction, with obvious clinical implications for future heart failure therapy.”
Koch hopes to eventually do human trials of this therapy.
Next, he and his colleagues hope to learn more about the mechanisms behind S100A1’s actions, and eventually, develop gene therapy protocols in humans. S100A1 is also found in the cell’s energy-producing mitochondria, he notes. He thinks the protein may be a link between energy production and calcium signaling in the heart cell – a crucial part of the process that makes the heart beat.
Between stem cell therapies and gene therapies I find it hard to believe that heart disease will be a major killer 20 years from now.
Why does the immune system become less effective as we age? One reason is that the thymus gland ages. We need to be able to rejuvenate our thymus glands. But another reason is that the T cells in the immune system get old and mess up in a variety of ways. Certain CD8 T cells divide too much and crowd out other needed T cells.
PORTLAND, Ore. – Scientists at the Vaccine and Gene Therapy Institute at Oregon Health & Science University have made a discovery that helps explain why our immune system worsens with age. The work was led by Janko Nikolich-Zugich, M.D., Ph.D., a senior scientist at the VGTI. The scientists hope this new information can be used to better protect the elderly from infectious diseases by finding ways to slow or stop the degradation of the immune system. The research results are printed in the current edition of the Journal of Experimental Medicine.
"One of the major components of the immune system are T cells, a form of white blood cell. These cells are programmed to look for certain kinds of disease-causing pathogens, then destroy them and the cells infected by them," said Nikolich-Zugich who also serves as a professor of molecular microbiology and immunology in the OHSU School of Medicine, and is a senior scientist at the OHSU Oregon National Primate Research Center. "Throughout our lives, we have a very diverse population of T cells in our bodies. However, late in life this T cell population becomes less diverse, potentially resulting in a higher level of susceptibility to disease. We think we have found one of the key reasons behind this age-related susceptibility."
Specifically, in old age, the number of CD8 T cells diminishes. CD8 T cells have two functions: to recognize and destroy abnormal or infected cells, and to suppress the activity of other white blood cells to protect normal tissue. The scientists believe that late in life a different kind of CD8 T cell is increasingly produced by the body. These cells, called T cell clonal expansions (TCE), are less effective in fighting disease. They also have the ability to accumulate quickly as they have a prolonged lifespan and can avoid normal elimination in the organism.
In the end, these TCE cells can grow to become more than 80 percent of the total CD8 population. The accumulation of this one type of cell takes away valuable space from other cells, resulting in an immune system that is less diverse and thus less capable in effectively locating and eliminating pathogens.
To conduct the research, scientists at the VGTI studied mice, which have immune system function very similar to humans. The scientists found the aging mice to have greater TCE levels than normal mice, a less diverse population of CD8 T cells and reduced ability to fight disease. In addition, the scientists were able to show that increasing TCE cells in a normal, healthy mouse reduces that animal's ability to fight disease.
"While this work is still in the early stages, we think it might be of great value," explained Nikolich-Zugich. "If we can find ways to limit the production of TCE in the aging, we might be able to keep their immune systems strong and better able to fight disease. To provide a real-life example: A flu vaccine shortage like the one we are witnessing might be less concerning if elderly Americans were made less susceptible."
During development the immune system's cells reshuffle an area of DNA in many ways to create T cells with many different antigen-binding sites that bind to different kinds of antigens (antigens being a large assortment of things including pollen, bacteria, viruses, and yet other potential targets). For each potential invading pathogen we need a small number of immune cells that will recognize it and, in response, divide rapidly to scale up to meet the challenge of binding to and destroying that invader.
When these TCE cells, which recognize only a very small number of kinds of antigens, divide too much they basically squeeze out the other types of T cells. When a pathogen comes along that does not match what the TCE cells can handle, the TCE cells basically refuse to die off to make room for the other types of T cells that can handle it. This prevents a proper immune response from happening.
In a way the TCE cells pose a problem similar to cancer. They divide too much and squeeze out cells that do necessary jobs. In another way the TCE cells are similar to the T cells that cause auto-immune diseases. They propagate too much to create antibodies that are not needed (in the case of TCEs) or even harmful (in the case of T cells causing auto-immune disorders). It would be interesting to know whether some auto-immune disorders are caused by TCE cells and whether people who suffer from auto-immune disorders have less diverse T cell populations.
In a way this report is good news for people suffering from auto-immune disorders because it focuses attention on the need to be able to very selectively eliminate large subsets of immune system cells. A therapy developed to eliminate TCE cells might also be adaptable to use against T cells that are causing auto-immune disorders. Effectively the pool of people who need to have their immune systems selectively pruned back has just gotten a lot larger. This increases the odds of more resources being deployed to develop such therapies.
But what sort of approach will yield a useful way to selectively prune back unwanted T cells? Maybe monoclonal antibodies designed to attack each clone population. That requires being able to characterize each clone population and to find a way to make antibodies against it. How hard is all that? I have no idea. Anyone know?
Here is a stem cell therapy, just now entering practical clinical use, that is great because it illustrates the future of medicine: it fixes the underlying problem. It makes the problem stop happening. A person's own cells, extracted from muscle, can strengthen bladder muscles and cure incontinence.
CHICAGO - Austrian researchers are successfully treating incontinent women with the patient's own muscle-derived stem cells. The findings of the first clinical study of its kind were presented today at the annual meeting of the Radiological Society of North America (RSNA).
"Urinary incontinence is a major problem for women, and for an increasing number of men," said Ferdinand Frauscher, M.D., associate professor of radiology at the Medical University of Innsbruck and the head of uroradiology at University Hospital. "We believe we have developed a long-lasting and effective treatment that is especially promising because it is generated from the patient's own body."
The stem cells are removed from a patient's arm, cultured in a lab for six weeks, and then injected into the wall of the urethra and into the sphincter muscle. The result is increased muscle mass and contractility of the sphincter and a thicker urethra. Many patients have no urinary leakage within 24 hours after the 15- to 20-minute outpatient procedure.
Stress incontinence affects nearly 15 million people — primarily women — around the world. It occurs when the urethra narrows or becomes otherwise abnormal, or when the sphincter muscles that help open and close the urethra become weak or diminished, causing urine leakage when an individual exercises, coughs, sneezes, laughs or lifts heavy objects.
Twenty females, ages 36 to 84, who were experiencing minor to severe stress incontinence participated in the research. Muscle-derived stem cells were removed from each patient's arm and cultured, or grown, using a patented technique that yielded 50 million new muscle cells (myoblasts) and 50 million connective tissue cells (fibroblasts) after six weeks. When implanted into the patient under general or local anesthesia, the new stem cells began to replicate the existing cells nearby. One year after the procedure, 18 of the study's 20 patients remain continent.
"These are very intelligent cells," Dr. Frauscher said. "Not only do they stay where they are injected, but also they quickly form new muscle tissue and when the muscle mass reaches the appropriate size, the cell growth ceases automatically."
Since the stem cells must be in contact with urethra and sphincter tissue for the procedure to work, a major factor in the success of this treatment was the development of transurethral three-dimensional ultrasound. "With real-time ultrasound, we are able to see exactly where the new cells must be placed," Dr. Frauscher said.
This treatment is a far greater economic value: it is of similar cost to existing treatments but works better because it actually fixes the cause of the problem.
Dr. Frauscher said the cost of the stem cell procedure was comparable to two popular treatments for incontinence: the long-term purchase and use of absorbents, such as adult diapers, and collagen injections, which show improvement during the first six months but often result in symptoms returning after a year. Dr. Frauscher also said the stem cell treatment appears to be more successful with women at this time. For men, incontinence is often caused by prostate surgery, which may result in scar tissue formation, where the stem cells do not grow very well. In men without scar tissue stem cell therapy seems to work as well as in women, Dr. Frauscher said.
Think about the economic value of stem cell treatments. Our problem in the United States isn't that we spend $1.6 trillion (as of 2002 – surely higher now) per year on medical care. Our problem is that so much of that care does not really fix the underlying causes of various medical conditions. Imagine medical treatments capable of fixing everything that breaks, just as auto mechanics can fix cars. Imagine you could therefore live in perfect health. Then I for one would not complain if that took 15+% of the GDP.
Curiously, the one woman who gained no benefit from the procedure was 84 years old. One plausible explanation for the failure in her case is that her muscle stem cells are too old to form vigorous muscle cells.
The team is currently treating eight to 10 women per week and long waiting lists are building up.
Frauscher, head of uroradiology at University Hospital in Innsbruck, Austria, said his team has plans to start using this technique at other centers. "Next year we will start in three centers in Austria, two in Germany, one in Switzerland and one in the Netherlands," he said. "We are also planning to perform this in the USA."
The needed cells were extracted from muscle tissue and grown by a company called InnovaCell, which says it uses patented processes to accomplish this.
What I find great about this report is that it shows that stem cell therapies that really fix what is wrong are moving into regular clinical practice. Stem cell therapies are not some distant prospect. They are happening now and every year that goes by from here on out we will have more stem cell therapies that successfully treat more diseases and disorders.
An enzyme which is normally present in sperm and involved in fertilization was isolated and applied to eggs with no sperm in sight.
Swann's team tricked the eggs into dividing by injecting phospholipase C-zeta (PLC-zeta), an enzyme produced by sperm that Swann discovered two years ago with Cardiff colleague Tony Lai.
The tricked eggs divide for four or five days until they reach 50 to 100 cells – the blastocyst stage. These blastocysts should in theory yield stem cells, but because they are parthenogenetic – produced from the egg only – they cannot be viewed as a potential human life, says Karl Swann of the University of Wales College of Medicine in Cardiff, UK.
It sounds like Swann's group is repeating with human eggs a process he already demonstrated with mouse eggs a couple of years ago.
This brings up a question I raised in the context of possibly being able to some day isolate pluripotent stem cells from the blood of pregnant women: Will religiously motivated opponents of human embryonic stem cell research (hESC) find this latest technique to be an objectionable way to get cells to use to study embryo development or to induce cells to become organs? After all, there is no fertilization by a sperm (and, as we all know, Every Sperm is Sacred) to trigger the hypothesized moment of ensoulment. So are these cells ethically acceptable for stem cell research?
Biological scientists are going to continue to come up with new ways of manipulating cells that make them hard to place into traditional categories based on their origins. Either the religious folks are going to keep defining human life as beginning at conception, or they are going to have to define the beginning point of life by some still-to-be-elucidated epigenetic state of an embryonic cell. If they go the latter route they will have to wait until science advances far enough to describe the epigenetic state (which will likely be a range of states) that can come into existence right after fertilization, and then apply that definition to all other cells that have one of those states regardless of how they got there.
Mark P. Mills and Peter W. Huber have an article in the City Journal on whether terrorists could manage to knock out electric power to New York City for a long period of time and on how to make the electric grid more reliable.
It takes almost 11 gigawatts of electricity to keep New York City lit in the late afternoon on a hot summer day—a huge amount of power. All the air conditioners, lights, elevators, and quietly humming computers inside use a whole lot more energy than the cars and trucks out on the streets. But because the fuels and infrastructure that deliver the electric power are so distant and well hidden, it's hard to focus public attention on how vital they are to the city's survival. And how vulnerable.
Few of us have even the vaguest idea just how much a gigawatt of power might be. So let's talk Pontiacs instead: 110,000 of them, parked door to door in Central Park. At exactly the same moment, 110,000 drivers start the 110,000 engines, shift into neutral, push pedal to metal, and send 110,000 engines screaming up to the tachometer's red line. Collectively, these engines are now generating a total of about 11 gigawatts of shaft power.
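A quick sanity check of the arithmetic behind the Pontiac analogy, using only the figures quoted above (11 gigawatts spread across 110,000 engines):

```python
# Back-of-the-envelope check of the Pontiac analogy.
city_demand_w = 11e9     # 11 gigawatts of late-afternoon NYC demand
n_cars = 110_000         # Pontiacs parked door to door in Central Park

per_car_w = city_demand_w / n_cars    # watts each engine must supply
per_car_hp = per_car_w / 745.7        # 1 mechanical horsepower ≈ 745.7 W

print(f"Each car must deliver {per_car_w / 1000:.0f} kW ≈ {per_car_hp:.0f} hp")
```

Each engine has to run flat out at about 100 kW, roughly 134 horsepower, so the analogy checks out for engines of that era redlined in neutral.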
The writers do not bring this up but the power of those cars suggest a future solution to the power back-up needs of New York City. See my posts Will Electric Hybrid Cars Be Used As Peak Electric Power Sources? and Cars May Become Greater Electricity Generators Than Big Electric Plants.
Mills and Huber say that with more sophisticated technology the massive power blackout that struck the American Northeast and Canada on August 14, 2003 could have been prevented.
Had they had real-time access to SCADA networks in Ohio, utilities across the Northeast would have seen the August 14 problem coming many minutes, if not hours, before it hit and could have activated protective switches before the giant wave swept east to overpower them. But in the deregulatory scramble of the 1990s, regulators had pushed the physical interconnection of power lines out ahead of the interconnection of SCADA networks. The software systems needed for automated monitoring and control across systems had not yet been deployed. By contrast, on-site power networks in factories and data centers nationwide are monitored far more closely and make much more sophisticated use of predictive failure algorithms.
Nor had the grid's key switches kept pace. To this day, almost all the grid's logic is provided by electromechanical switches—massive, spring-loaded devices that take fractions of seconds (an eternity by electrical standards) to actuate. But ultra-high-power silicon switches could control grid power flows much faster, more precisely, and more reliably. Already, these truck-size cabinets, containing arrays of solid-state switches that can handle up to 35 megawatts, safeguard power supplies at military bases, airport control hubs, and data and telecom centers. At levels up to 100 megawatts, enormous custom-built arrays of solid-state switches could both interconnect and isolate high-power transmission lines, but so far, they're operating at only about 50 grid-level interconnection points worldwide.
The current structure of regulation of the electric power grid provides disincentives for building a more reliable grid. If you want to follow the debate on government regulation of the electric power industry and energy policy more generally be sure to check out Lynne Kiesling's Knowledge Problem blog.
One quibble I have with their essay has to do with their argument that capacity for locally generated power increases the reliability of the public grid.
At the same time, distributed, small-scale, and often private efforts to secure power supplies at individual nodes directly strengthen the reliability of the public grid. Large-area power outages like the one on 8/14 often result from cascading failures. Aggressive load shedding is the only way to cut off chain reactions of this kind. A broadly distributed base of secure capacity on private premises adds resilience to the public grid, simply by making a significant part of its normal load less dependent on it. Islands of especially robust and reliable power serve as centers for relieving stresses on the network and for restoring power more broadly. Thus, in the aggregate, private initiatives to secure private power can increase the stability and reliability of the public grid as well.
Local generation capacity reduces the dependence of those locations on the public grid. But if the public grid is overloaded and in danger of a cascade of failures those local generation plants only help if they kick on before the cascade of failures and in response to a building strain in the grid. If local generating capacity is designed to turn on in response to a loss of power then their switching on will come too late to prevent a large grid cascading failure. Back-up generators reduce the disruption to society as a whole by allowing hospitals, phone services, and other services to continue to function. But generators that kick on in response to grid failure do not prevent grid failure.
If those local plants operate continuously as a permanent substitute for getting power from the public grid they have no effect on the reliability of the public grid. Such sites do not figure into calculations of how much generation or distribution capacity the public grid needs. So the public grid will be smaller due to the lack of demand from self-sufficient sites and the public grid will be just as liable to reach some critical point and fall over. If terrorists knock out transmission lines or switching stations or natural gas pipelines that deliver gas to electric plants then local generation capacity will not save the grid from failure. If the local generators are run on natural gas then they will even stop working in response to attacks on natural gas pipeline systems just as the public natural gas powered electric generator plants will stop.
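The timing argument above – that only load shed before lines start tripping can stop a cascade – can be illustrated with a deliberately crude toy model. The numbers and the uniform-redistribution rule are invented for illustration; this is nothing like a real grid simulation:

```python
# Toy cascade sketch (invented numbers): N identical lines share a fixed
# load. When a line trips, its share is spread over the survivors; any
# survivor pushed past capacity trips too, and the cascade grows.
def cascade(n_lines, load, capacity, shed_fraction=0.0):
    """Return how many lines are still up after one line fails.

    shed_fraction models aggressive load shedding applied at the moment
    of the first failure, before the cascade propagates.
    """
    load *= (1 - shed_fraction)   # early shedding lightens the grid
    up = n_lines - 1              # the initial single-line failure
    while up > 0 and load / up > capacity:
        up -= 1                   # an overloaded line trips in turn
    return up

# 10 lines, 95 units of load, 10-unit capacity each.
print(cascade(10, 95, 10))                     # no shedding: total collapse
print(cascade(10, 95, 10, shed_fraction=0.2))  # shed 20% early: cascade stops
```

In this sketch a backup generator that waits for the grid to go dark corresponds to shedding that arrives after the loop has already run to zero – it changes nothing about whether the cascade completes, which is exactly the point.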
Ultimately vulnerability to public grid failure may end because the economies of scale from building large electric power plants may vanish and the need for a grid to distribute power from those large plants may come to an end. For how that might come to pass see my post Thin Film Fuel Cells May Obsolesce Large Electric Plants. However, if fuel cells are powered by natural gas then such a turn of events will not end the vulnerability of society to attacks on large distribution networks; we will still be vulnerable to attacks on natural gas pipeline systems. Liquid fuel systems are much less vulnerable to attack because local storage of liquid fuels is cheap and easy.
An economy based on cheap batteries and cheap solar photovoltaics would be much less vulnerable to attack in some parts of the country. But the problem with solar is that it is not concentrated enough to allow enough local generation of power in a highly dense area such as New York City. So in a solar/battery economy grids would still be needed to bring power to major metropolitan areas, especially in regions closer to the poles.
However, solar power used to drive artificial hydrocarbon synthesis (carbon-hydrogen fixing by either direct photosynthesis in an artificial structure or electrically driven chemical reactions – which is what photosynthesis really is anyway) to create liquid hydrocarbon compounds would be fairly invulnerable to any man-made disruption. Though a very large volcanic eruption would seriously disrupt any solar-based power system (and most life on the planet). Nuclear power would continue to deliver power after a huge volcanic disruption.
Pittsburgh, December 1, 2004 – Specific variants in genes that encode proteins regulating inflammation may hold a key to explaining a host of disease processes known to cause increased risk of illness and death among African Americans, according to a study from the University of Pittsburgh’s Graduate School of Public Health (GSPH). The study, “Differential Distribution of Allelic Variants in Cytokine Genes Among African Americans and White Americans,” appears in the Dec. 1 issue of the American Journal of Epidemiology.
“We found that African Americans were significantly more likely to carry genetic variants known to stimulate the inflammatory response,” said Roberta B. Ness, M.D., M.P.H., professor and chair of the department of epidemiology at GSPH and the study’s primary author. “At the same time, genotypes known to dampen the release of anti-inflammatory proteins were more common among African Americans. This is kind of a double whammy.”
The genes for some major immune regulatory proteins were looked at. Note that advances in assay techniques are allowing scientists like these folks to examine many more genes at once than was done in past studies.
Specifically, scientists compared genetic data on 179 African-American and 396 white women who sought prenatal care and delivered uncomplicated, single, first births at Magee-Womens Hospital of the University of Pittsburgh Medical Center between 1997 and 2001. Blood samples were analyzed for a multitude of functionally relevant allelic variants in cytokine-regulating genes. Among these were several genes regulating the immune system proteins interleukin-1, interleukin-1 alpha, interleukin-1 beta, interleukin-6, interleukin-10, interleukin-18 and tumor necrosis factor-alpha.
“In the past, people looked at one or two variants,” said Dr. Ness. “We looked at a whole host, and saw trends that perhaps point to some evolutionary-mediated change in the human genome that has had an impact on inflammation.”
There has been a marked trend in recent years in the discovery of a major role for inflammation as a driver for the development of a large range of diseases including atherosclerosis and cancer. The anti-inflammatory COX2 inhibitor drug celecoxib (brand name Celebrex) is currently being used in a number of cancer treatment trials. Chronic inflammation accelerates the aging process. A genetic difference between groups in their propensity for inflammation is inevitably going to cause group average differences in disease incidence.
Odds ratios for African Americans versus Whites in genotypes up-regulating proinflammatory interleukin (IL) 1 (IL1A-4845G/G, IL1A-889T/T, IL1B-3957C/C, and IL1B-511A/A) ranged from 2.1 to 4.9. The proinflammatory cytokine interleukin-6 IL6-174 G/G genotype was 36.5 times (95% confidence interval (CI): 8.8, 151.9) more common among African Americans. Genotypes known to down-regulate the antiinflammatory interleukin-10 (IL10-819 T/T and IL10-1082 A/A) were elevated 3.5-fold (95% CI: 1.8, 6.6) and 2.8-fold (95% CI: 1.6, 4.9) in African Americans. Cytokine genotypes found to be more common in African-American women were consistently those that up-regulate inflammation.
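For readers unfamiliar with the statistics, here is how an odds ratio and its 95% confidence interval are computed from a 2x2 table of genotype counts, using the standard log-odds approximation. The counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = group 1 with genotype, b = group 1 without,
    c = group 2 with genotype, d = group 2 without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, chosen only to show the mechanics.
or_, lo, hi = odds_ratio_ci(40, 139, 30, 366)
print(f"OR = {or_:.1f} (95% CI: {lo:.1f}, {hi:.1f})")
```

Note how the very wide interval on the IL6-174 G/G figure quoted above (8.8 to 151.9) reflects exactly this mechanism: when one cell of the table is small, the standard error of the log odds ratio balloons.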
Note that interleukin-10 (IL-10) is among the genes found to occur at different frequencies in blacks and whites. Another study found that IL-10 promoter polymorphisms appear to influence the rate of advance of HIV infection. So this result also has implications for the spread of HIV and likely for vulnerability to other pathogens.
The differing frequencies of genetic variations in genes that regulate the inflammation response were selected for by the radically different ecological niches humans found as they spread out around the world. So it is not surprising that the frequencies of these variations differ among human groups which have long settled different parts of the world. What is more interesting than the differences themselves (at least to me) is that neither the frequency of genotypes in whites nor the frequency of genotypes in blacks is likely to be ideal for the environments we live in today. Our inflammation responses are in all likelihood maladaptive for industrial society. My guess is that humans in modern society are experiencing more inflammation on average than is beneficial for them.
We need better, smarter ways to control the inflammation response and easier ways to detect its triggering. Lots of people are walking around with chronic inflammation who do not even know it. Think of it as analogous to walking around with undiagnosed high blood pressure. Some people have chronic inflammation due to an undiagnosed infection. Others have it due to a nutritional deficiency (e.g., lack of the folic acid, B-6, and B-12 needed to break down homocysteine). Still others may have it due to a low-grade auto-immune response that they are unaware of. We need to be able to detect and more consciously control our inflammatory responses. But what sort of treatment will be optimal will depend at least in part on which particular set of genetic variations we possess in the genes that regulate the inflammatory response.
A company called Ceramatec of Salt Lake City, Utah and the US Department of Energy's Idaho National Engineering and Environmental Laboratory have announced a new technique that boosts the efficiency of converting energy into hydrogen.
SALT LAKE CITY -- Researchers at the U.S. Department of Energy’s Idaho National Engineering and Environmental Laboratory and Ceramatec, Inc. of Salt Lake City are reporting a significant development in their efforts to help the nation advance toward a clean hydrogen economy.
Laboratory teams have announced they’ve achieved a major advancement in the production of hydrogen from water using high-temperature electrolysis. Instead of conventional electrolysis, which uses only electric current to separate hydrogen from water, high-temperature electrolysis enhances the efficiency of the process by adding substantial external heat – such as high-temperature steam from an advanced nuclear reactor system. Such a high-temperature system has the potential to achieve overall conversion efficiencies in the 45 percent to 50 percent range, compared to approximately 30 percent for conventional electrolysis. Added benefits include the avoidance of both greenhouse gas emissions and fossil fuel consumption.
“We’ve shown that hydrogen can be produced at temperatures and pressures suitable for a Generation IV reactor,” said lead INEEL researcher Steve Herring. “The simple and modular approach we’ve taken with our research partners produces either hydrogen or electricity, and most notable of all – achieves the highest-known production rate of hydrogen by high-temperature electrolysis.”
This development is viewed as a crucial first step toward large-scale production of hydrogen from water, rather than fossil fuels.
The major private-sector collaborator has been Ceramatec, Inc. located at 2425 S. 900 West, Salt Lake City. “We’re pleased that the technology created over the nearly two decades dedicated to high-temperature fuel cell research at Ceramatec is directly applicable to hydrogen production by steam electrolysis,” said Ashok Joshi, Ph.D., Ceramatec chief executive officer.
“In fact, both fuel cell and hydrogen generation functionality can be embodied in a single device capable of seamless transition between the two modes. These years of investment, both public and private, in high temperature fuel cell research have enabled the Ceramatec-INEEL team to move quickly and achieve this important milestone toward establishing hydrogen as a part of our national energy strategy.”
The new method involves running electricity through water that has a very high temperature. As the water molecule breaks up, a ceramic sieve separates the oxygen from the hydrogen.
But this approach requires the design and construction of new nuclear reactors that will use helium gas as the cooling medium and the helium would be heated to much higher temperatures than water is heated in existing reactor designs.
The idea is to build a reactor that would heat the cooling medium in the nuclear core, in this case helium gas, to about 1,000 degrees Celsius, or more than 1,800 degrees Fahrenheit. The existing generation of reactors--used exclusively for electric generation--use water for cooling and heat it to only about 300 degrees Celsius.
This latest advance does not solve any of the hydrogen transportation or storage problems. A whole new generation of nuclear reactors would need to be designed (with, no doubt, some unique and difficult design problems to solve) and built to operate with high temperature helium gas in their cores. Design and construction of those reactors would take several years (my guess is probably 10 or 12 and possibly longer). Given the costs and lead time involved and the existing unsolved problems in hydrogen transportation and storage my guess is that this technology is not going to see widespread use until the 2020s at the earliest.
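To put the quoted efficiency figures in concrete terms, here is a rough sketch of what the jump from roughly 30 percent to the 45-50 percent range means per kilogram of hydrogen. It uses the commonly cited higher heating value of hydrogen, about 39.4 kWh/kg, and a deliberately simplified efficiency definition (real high-temperature electrolysis accounting splits electric and thermal inputs):

```python
# Energy input needed per kg of hydrogen at a given overall efficiency.
H2_HHV_KWH_PER_KG = 39.4   # higher heating value of hydrogen, a standard figure

def energy_in_per_kg(efficiency):
    """Total energy input (kWh) to produce 1 kg of hydrogen."""
    return H2_HHV_KWH_PER_KG / efficiency

conventional = energy_in_per_kg(0.30)    # conventional electrolysis
high_temp = energy_in_per_kg(0.475)      # midpoint of the quoted 45-50% range

print(f"conventional: {conventional:.0f} kWh per kg H2")
print(f"high-temp:    {high_temp:.0f} kWh per kg H2")
print(f"savings:      {100 * (1 - high_temp / conventional):.0f}%")
```

On these simplified assumptions the high-temperature approach cuts the energy input per kilogram of hydrogen by over a third, which is why it needs a reactor hot enough to supply much of that input as cheap process heat rather than electricity.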
The hydrogen economy is still a distant prospect. Advances in nanotechnology will probably eventually solve the hydrogen storage problem. But other advances in nanotechnology will also eventually solve the car battery problem to allow the construction of cheap electric-powered cars that can travel long distances. Once the car battery problem is solved it takes far less of a change in energy infrastructure to migrate to electric power than it would to migrate to a hydrogen economy. This is just a guess but in the next 20 years I expect better batteries to do more than better hydrogen technologies to change the energy infrastructure of the world.
For more on the potential of electric power for cars over on the Ergosphere blog Engineer-Poet (yet another pseudonymous blogger whose real identity is a mystery) provides some great tables on where our energy comes from today and an argument for plug-in hybrid cars. Check out the table that shows the amount of energy that comes from different sources as measured in quadrillion BTUs ("Quads"). Note that the coal and natural gas quads combined exceed the petroleum quads. But the coal and natural gas burned for electricity lose a lot of energy in conversion. So delivered to the wheel of a car petroleum does more work per BTU. Would a switch to coal to power electric cars therefore require a much greater total consumption of energy? E-P also casts a skeptical look at someone else's skeptical look at hybrids.
CHICAGO – When people lie, they use different parts of their brains than when they tell the truth, and these brain changes can be measured by functional magnetic resonance imaging (fMRI), according to a study presented today at the annual meeting of the Radiological Society of North America. The results suggest that fMRI may one day prove a more accurate lie detector than the polygraph.
"There may be unique areas in the brain involved in deception that can be measured with fMRI," said lead author Scott H. Faro, M.D. "We were able to create consistent and robust brain activation related to a real-life deception process." Dr. Faro is professor and vice-chairman of radiology and director of the Functional Brain Imaging Center and Clinical MRI at Temple University School of Medicine in Philadelphia.
The researchers created a relevant situation for 11 normal volunteers. Six of the volunteers were asked to shoot a toy gun with blank bullets and then to lie about their participation. The non-shooters were asked to tell the truth about the situation. The researchers examined the individuals with fMRI, while simultaneously administering a polygraph exam. The polygraph measured three physiologic responses: respiration, blood pressure and galvanic skin conductance, or the skin's ability to conduct electricity, which increases when an individual perspires.
The volunteers were asked questions that pertained to the situation, along with unrelated control questions. In all cases, the polygraph and fMRI accurately distinguished truthful responses from deceptive ones. fMRI showed activation in several areas of the brain during the deception process. These areas were located in the frontal (medial inferior and pre-central), temporal (hippocampus and middle temporal), and limbic (anterior and posterior cingulate) lobes. During a truthful response, the fMRI showed activation in the frontal lobe (inferior and medial), temporal lobe (inferior) and cingulate gyrus.
Overall, there were regional differences in activation between deceptive and truthful conditions. Furthermore, there were more areas of the brain activated during the deception process compared to the truth-telling condition.
Dr. Faro's study is the first to use polygraph correlation and a modified version of positive control questioning techniques in conjunction with fMRI. It is also the first to involve a real-life stimulus. "I believe this is a vital approach to understand this very complex type of cognitive behavior," Dr. Faro said. "The real-life stimulus is critical if this technique is to be developed into a practical test of deception."
Because physiologic responses can vary among individuals and, in some cases, can be regulated, the polygraph is not considered a wholly reliable means of lie detection. According to Dr. Faro, it is too early to tell if fMRI can be "fooled" in the same manner.
However, these results are promising in that they suggest a consistency in brain patterns that might be beyond conscious control.
"We have just begun to understand the potential of fMRI in studying deceptive behavior," Dr. Faro said. "We plan to investigate the potential of fMRI both as a stand-alone test and as a supplement to the polygraph with the goal of creating the most accurate test for deception."
Dr. Faro's co-authors on this paper were Feroze Mohamed, Ph.D., Nathan Gordon, M.S., Steve Platek, Ph.D, Mike Williams, Ph.D., and Harris Ahmad, M.D.
Will fMRI stand alone as a test for deception? Dr. Faro admits he's not yet sure: "The polygraph looks at only peripheral stimulus as the end result of a long chain of primary central areas of activation of the brain. We're now getting to the origin of the activation."
Dr. Faro called the results "promising" and said he hopes to gain the interest of major organizations such as the Department of Homeland Security, the National Security Agency or the CIA to help fund further research and larger group studies using the same methods, but he says the technology is expensive.
"It's probably going to be used on the academic side to understand psycho-social behavior, and on the criminal side, it's going to be used for major criminals," said Dr. Faro. "We're looking at areas of tremendous concern with terrorism, where the expense is minimal compared to the potential disaster. Looking at industrial or business-related crimes, certainly Martha Stewart could afford this test if she was truly interested."
Imagine people negotiating a huge business deal demonstrating their sincerity by agreeing to a brain scan while being asked questions about whether they intend to go through with a deal in good faith. Questions about each of the commitments in the contract could be asked. Will that give an advantage to superficial people who are sincere but prone to changing their minds? How much do intentions matter?
Update: One thought: If one dreams up a lie well ahead of time (like days or weeks) will its recall look more like a truth on a brain scan? If one can fantasize the lie and make it into something like a real memory it might require less mental effort to recall and/or construct than a lie made up on the spot. Therefore in a brain scan it might look more like a regular memory.