Taking only the 47 low-bias trials, involving 180,938 people, they found that supplements as a whole increased the death rate by 5 per cent. When the supplements were taken separately, beta carotene increased death rates by 7 per cent, vitamin A by 16 per cent, and vitamin E by 4 per cent. Vitamin C gave contradictory results, but when given singly or in combination with other vitamins in good-quality trials, it increased the death rate by 6 per cent.
Selenium was the only supplement to emerge with any credit. It appears to cut death rates by 10 per cent when given on its own or with other supplements in high-quality trials, but the result is not statistically significant.
The researchers wrote: "Our findings contradict the findings of observational studies claiming that antioxidants improve health.
"Considering that 10% to 20% of the adult population in Europe and North America may consume the supplements, the public health consequences may be substantial."
They said there were several different explanations for this increase in risk - and suggested that knocking out 'free radicals' might actually interfere with a natural defence mechanism within the body.
One of the uses of free radicals in the body is in the immune system, where they kill invading pathogens. Taking big doses of antioxidant vitamins might render aged immune systems less able to fight off invading pathogens. So the higher mortality rate of vitamin takers might be due to dampened immune systems.
Another use is for intracellular and intercellular signalling. Denham Harman, MD, who first proposed the free radical theory of aging back in the 1950s, once described in an interview (sorry, no link, this is from faded memory) how he felt sluggish if he took very high doses of antioxidant vitamins. His explanation for this effect was that the vitamins were quenching free radicals that were needed for signalling and by doing so they were suppressing his metabolism.
What is in your diet still matters. But the mystery remains as to just which compounds in which foods provide the most benefit. What about vitamin supplements? My guess: the most beneficial nutrients from a supplementation standpoint are the non-antioxidant vitamins. Most people probably do not get enough vitamin D, for example. Others do not get enough calcium for their bones or iron for red blood cells. But for antioxidants we probably should look more toward non-vitamin compounds found in some foods.
But what we most need are biotechnologies that will let us repair the damage caused by free radicals. Gene therapies, cell therapies, replacement organs, and nanobots will all eventually let us repair the aging damage caused by free radicals. We should treat the development of these technologies as an urgent priority. We are getting older every year. Aging is a defeatable malady. We should defeat it.
ORLANDO, Fla., Feb. 28 -- Drinking a little alcohol every day, especially wine, may be associated with an increase in life expectancy. That’s the conclusion of Dutch researchers who reported the findings of their study today at the American Heart Association’s 47th Annual Conference on Cardiovascular Disease Epidemiology and Prevention.
The researchers found that a light intake of alcohol (on average less than one glass per day) was associated with a lower rate of cardiovascular death and death from all causes. When compared to spirits and beer, consumption of small amounts of wine, about a half a glass a day, was associated with the lowest levels of all-cause and cardiovascular deaths.
"Our study showed that long-term, light alcohol intake among middle-aged men was associated not only with lower cardiovascular and all-cause death risk, but also with longer life expectancy at age 50," said Martinette T. Streppel, lead author of the study and a Ph.D. student in the Division of Human Nutrition at Wageningen University and National Institute for Public Health and the Environment (RIVM) in Bilthoven, The Netherlands. "Furthermore, long-term light wine consumption is associated with a further protective effect when compared to that of light-to-moderate alcohol intake of other types."
Drink, be merry, live longer.
The researchers found that long-term, light alcohol intake of less than or equal to 20 grams per day (1 glass of alcoholic beverage contains 10 grams of alcohol, 1 ounce = ~30 mL of alcoholic beverage) compared to no alcohol intake was associated with a 36 percent lower relative risk of all-cause death and a 34 percent lower relative risk of cardiovascular death. The average long-term daily intake of the men throughout the 40-year study was six grams based on any alcohol intake of more than zero and up to 20 grams. The long-term average intake of six grams of alcohol is equal to one four-ounce beer, one two-ounce glass of wine or one one-ounce glass of spirits, daily.
When the researchers looked independently at wine consumption, the associated risk reduction was greater. Participants who drank on average half a glass, or 1.5 ounces, of wine per day, over a long period, had a 40 percent lower rate of all-cause death and a 48 percent lower incidence of cardiovascular death, compared to the non-wine drinkers.
Researchers said life expectancy was 3.8 years higher in those men who drank wine compared to those who did not drink alcoholic beverages. Life expectancy of wine users was more than two years longer than users of other alcoholic beverages. Men with a long-term alcohol intake less than or equal to 20 grams per day had a 1.6-year-higher life expectancy, compared to those who consumed no alcohol.
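The unit conversions in the excerpt above are easy to get wrong, so here is a minimal sketch of the arithmetic, assuming the article's figure of 10 grams of alcohol per standard glass:

```python
# Rough sketch of the alcohol arithmetic above, using the article's
# stated conversion of 10 grams of alcohol per standard glass.
GRAMS_PER_GLASS = 10.0

def grams_per_day(glasses_per_day):
    """Convert glasses of alcoholic beverage per day to grams of alcohol."""
    return glasses_per_day * GRAMS_PER_GLASS

# The study's "light" intake category tops out at 20 g/day: two glasses.
print(grams_per_day(2.0))   # 20.0

# The cohort's long-term average of 6 g/day works out to 0.6 glasses.
print(grams_per_day(0.6))   # 6.0
```

So the men in this cohort were, on average, drinking a bit over half a glass per day, well inside the "light" category.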
The big question: Why? If wine is more beneficial than other alcoholic beverages then at least part of the benefit comes from something other than alcohol. If so then we can get some of the same benefits by eating foods that contain those compounds. Most obviously, grape juice and grapes. But assorted berries and chocolate contain compounds that are also found in red and white wines. Though resveratrol is found in few other foods besides red wine.
A Wall Street Journal article surveying research on the neurobiology of love reports on the work of Dr. Helen Fisher. Love triggers the dopamine system which is also involved in drug addiction.
Dr. Fisher has studied love by looking at people's brains using magnetic resonance imaging machines. A recent study also looked at 15 subjects who were deeply in love but were nursing broken hearts. While in the scanner, they viewed "neutral" pictures of someone they knew but for whom they didn't have intense romantic feelings. Then they were shown a picture of their beloved.
Those suffering the aftermath of failed relationships have more than just the dopamine system active.
Compared with the neutral photos, a lover's picture triggers the dopamine system in the brain -- the same system associated with pleasure and addiction. But the brain images of those scorned in love also give us clues as to why the breakdown of a relationship can trigger serious health problems. The subjects dealing with failed relationships showed activity in the dopamine system -- suggesting they maintained intense feelings for their loved one. But they also showed activity in brain regions associated with risk taking, controlling anger and obsessive compulsive problems. Notably, the scans showed activity in one part of the brain linked with physical pain.
The article reports on an Italian study that found that love causes the neurotransmitter serotonin to drop to the level found in those with obsessive compulsive disorders. Might obsessive compulsive disorders (OCD) be a side effect of the brain's tendency to fall in love? Do people who fall more deeply in love face a greater risk of OCD?
Jilted lovers who kill. Heartbroken people who do foolish and crazy things. It is no wonder that love causes these behaviors given what it does to emotions.
"It's not a good combination," notes Dr. Fisher. "You're feeling intense romantic love, you're willing to take big risks, you're in physical pain, obsessively thinking about a person and you're struggling to control your rage. You're not operating with your full range of cognitive abilities. It's possible that part of the rational mind shuts down."
I see the human mind as having a lot of obsolete and problematic vestiges. Take, for example, the fight-or-flight response where adrenaline flows and an angry and fearful person wants to either run away or attack. The response is maladaptive in the vast bulk of the situations where it happens. The stress it causes ages us more rapidly. Plus, it causes us to do things that get people fired or thrown in jail or to blow a business deal or romantic relationship. Wouldn't we be better off if we could suppress the neuronal and hormonal chain of events in fight-or-flight?
Our growing ability to figure out how our minds work portends a very different future. The more we understand the better we can intervene. Want to suppress the anger and pain of a romantic break-up? Would doing so make you feel less human? Or would you see the ability to do so as a boon?
People already take anti-depressants and anti-anxiety medicine. The selective serotonin reuptake inhibitors (SSRIs) might even suppress some of the intensity of the feelings of being in love. So Prozac and Zoloft might help mend a broken heart.
But think about future technologies that will provide more powerful and finer grained control of human emotions. How will people use them? Will people choose to make themselves more rational? Will future humans seem emotion-less to present day humans? Or will humans choose to suppress emotional pain and feelings of obsession and addiction while still giving themselves a fairly wide range of other emotions?
Will most humans make themselves less easy to anger? Will some humans see their own anger as always so justified that they'll oppose attempts to suppress it?
The emotionless mind is not optimal for achievement. A mind totally devoid of emotion would lack motivation. If you do not fear poverty or death or a dark alley in a bad part of a city how are you going to stay alive? If you do not have desire for higher status then you won't strive as hard to advance in your career or to start a company.
How to succeed gloriously in the future age of emotion-controlling neurotechnologies? Tune your emotions for maximum productivity. You'll want ambition but at a level that is not too distracting. You'll want to limit the amount of time you spend feeling anger or resentment or depression since too high a dose of any of those emotions becomes debilitating. You'll want to stay highly rational in dealing with others. Avoid excessive amounts of anger, fear, resentment or, for that matter, complacency. But each emotion has some adaptive value even today. Do not turn them down altogether.
Dr Peter Mazzone and colleagues at the Cleveland Clinic have developed a small device that can detect cancer in breath. Cheap miniature cancer detectors will allow more frequent testing and earlier detection.
A breath test can successfully pick up lung cancer with "moderate accuracy" even in the early stages, reveals research published ahead of print in Thorax.
It could revolutionise the way cancer is detected and potentially save lives, say the authors.
The test comprises a chemical colour sensor, which detects tiny changes in the unique chemical signature of the breath of people with lung cancer.
Metabolic changes in lung cancer cells cause changes in the production and processing of volatile organic compounds, which are then breathed out.
This sensor detected 3 out of 4 cases in people known to have lung cancer.
The concept of a "gas fingerprint" for lung cancer is not new, but the kit is.
The sensor, which is slightly bigger than a quarter dollar or a two pound coin, is inexpensive and easy to use.
The small size argues for an eventual low manufacturing cost. But judging by the picture in the report, it looks like the sensor gets used once. In the longer run microfluidic devices and other silicon-based miniature devices will allow continuous monitoring with electronic connections to a personal health computer. Just lying in bed, your nightstand will contain sensors that'll detect a large assortment of diseases while you sleep.
Diagnosis by doctors will become the exception rather than the rule as miniature sensors embedded in bathrooms, bedrooms, cars, workplaces, and in our bodies detect and diagnose diseases automatically. Early diagnosis will enable earlier treatments and better outcomes. Also, the automated nature of diagnosis will cut the cost of diagnosis by reducing the need for human labor.
Will the net result of early diagnosis cut or increase the percentage of the time people spend knowing they are sick? It depends on how much early diagnosis enables effective treatments and cures. If early diagnosis just lets you know further in advance that you have a fatal disease then people will spend more time pondering their coming death. But for cancer I'm hoping early diagnosis will increase cure rates as more cancers get caught and removed before metastasis.
As long time readers know, a recurring theme on FuturePundit is that the use of computer industry technologies to perform biological manipulations will make biotechnology increasingly advance at the rate of computer technology. We all know that's a really fast rate of advancement. Thomas Boland has further refined and improved his technique for laying down cells using a commercial inkjet printer.
Research from Clemson University shows that producing cardiac tissue with off-the-shelf inkjet technology can be improved significantly with precise cell placement. Tom Boland, associate professor in Clemson’s bioengineering department, along with Catalin Baicu of the Medical University of South Carolina, present their findings today (2-18) at the American Association for the Advancement of Science (AAAS) Conference in San Francisco.
Since Boland’s discovery in 2004, “printing” tissue using 3-D printers has focused on printing materials for hard tissue applications, such as in the jawbone. The study presented at AAAS focused on precise placement of cells, which is essential to achieving function in soft tissue, such as the heart. In this study, live, beating heart cells were achieved more efficiently.
How about printing out a new organ?
The latest advance: lay down the scaffolding and cells at the same time using different nozzles. Color printers have 3 nozzles for 3 colors and so this advance builds upon that capability.
“The breakthrough with this technology is that cells now can be precision-placed virtually instantaneously with the materials that make up a scaffold to hold the cells in place,” Boland said. Precision placement of the cells is achieved by filling an empty inkjet cartridge with a hydrogel solution (a material that has properties similar to tissue) and another inkjet cartridge with cells. The printing is accomplished much in the way that color photographs are made, activating alternatively the hydrogel and cell nozzles.
Computer technology is mass-produced and cheap. Some day biotechnology will be cheap as well. Production of internal organs and other body parts will become very cheap. Microfluidic chips and robotic devices will build replacement parts and gene therapies.
Also see my previous post Modified Printers Used For Tissue Engineering. Also, inkjet printers are not the only commonplace cheap computer accessory getting used in novel ways to work with biological materials. CD players have uses in biology as well. See my posts CD Player Turned Into Bioassay Molecule Detection Instrument and CD Will Simultaneously Test Concentrations Of Thousands Of Proteins.
Sometimes it helps to have a “cheat sheet” when you are working on a problem as difficult as deciphering the relationships among hundreds of thousands of genes. At least that's the idea behind a powerful new technique developed by Howard Hughes Medical Institute (HHMI) researchers to analyze how genes function together inside cells.
The new approach is called epistatic miniarray profiles (E-MAP). The scientists who developed it — HHMI investigator Jonathan S. Weissman, HHMI postdoctoral fellow Sean Collins, and colleague Nevan Krogan, who are all at the University of California, San Francisco — have used E-MAP to unravel a key process that prevents DNA damage during cellular replication.
In the first use of this technique researchers tested for 200,000 different gene interactions.
Using the new technique, which enabled them to rapidly analyze more than 200,000 gene interactions, the researchers have made a discovery that helps explain how cells mark which sections of DNA have been replicated during cell division. If the marking process goes awry, DNA becomes damaged as it is copied.
Hundreds of yeast colonies can be grown in the same agar plate and their speed of growth can be measured and analyzed automatically with software. Since yeast share many genes with humans these studies will turn up interactions that provide insight into human biology as well.
The key to E-MAPs is the ability to eliminate single genes or gene pairs and then analyze how each change impacts the growth of yeast colonies. Each yeast colony grows in a tiny spot on an agar plate, and each plate holds around 750 colonies. Software makes it possible to determine the growth rate of each colony and then compare the effect on growth of eliminating one gene at a time with the effect when two genes are simultaneously disabled.
The scientists looked only at the genes involved in maintaining and replicating chromosomes.
The end result is a database that details the functional relationship of each gene to every other gene studied, revealing cases where the product of one gene depends on a second gene in order to carry out its cellular functions. In this experiment, Weissman's team looked at 743 yeast genes involved in basic chromosome biology. “We wanted to look at everything that had to do with chromosome biology, including DNA replication, DNA repair, transcription to RNA, and so on,” said Weissman. “These are very basic cellular processes that are conserved from yeast to man.”
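The scale of the experiment follows from simple combinatorics: with 743 genes, the number of distinct gene pairs that can be deleted together is 743 choose 2, consistent with the "more than 200,000 gene interactions" figure above. A quick sketch:

```python
from math import comb

# With n genes, a pairwise double-deletion design can test n*(n-1)/2
# distinct gene pairs.
def pair_count(n):
    return comb(n, 2)

# The 743 chromosome-biology genes in the study yield well over the
# "more than 200,000 gene interactions" quoted above.
print(pair_count(743))  # 275653
```

That works out to 275,653 possible pairs, so the reported 200,000-plus interactions amount to testing most of the pairwise space.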
But this same technique could be applied to other subsets of genes to study other aspects of cellular metabolism. This is the way biology is going: Rather than studying one or two things at once thousands of genes or interactions get measured at a time. Automated equipment and methods for working with large numbers of very small samples allows massive parallelism and orders of magnitude more data collected per experiment.
CHARLOTTESVILLE, Va., Feb. 19, 2007 - A research team led by Cato T. Laurencin, M.D., Ph.D., at the University of Virginia Health System has created a synthetic matrix on which the ACL (anterior cruciate ligament) can be regenerated effectively for treatment of ACL tears.
This is an important discovery, because the ACL, the stabilizing ligament that connects the thighbone (femur) to the shinbone (tibia), usually does not heal after it is torn during sports or other injuries. The ACL unravels like an unbraided rope when torn, making healing difficult. More than 200,000 people in the United States suffer this rupture each year.
"This is the first tissue-engineered matrix for ACL to demonstrate such substantial neo-ligament formation, in terms of both vascularity and collagen formation," said Dr. Laurencin, Chairman of the UVa Department of Orthopaedic Surgery and leader of the team. "We tested one synthetic matrix with actual ACL cells from our animal model and one without these cells. While both systems encouraged the ingrowth of neo-ligament tissue, matrices with seeded cells performed particularly well in this study."
Dr. Laurencin concluded that the ACL replacement with ACL cells had a robust functional tissue outcome in the rabbits that received this matrix.
Tissue engineering doesn't get as much attention as stem cell research. But we need tissue engineering advances as much as we need stem cell research advances.
The team grew ligament tissue after first weaving together strands of biodegradable polyester using a machine originally designed for textile production. This material, called polylactide, naturally dissolves in the body over time.
Every industry that develops technology for manipulating small pieces of matter potentially could produce technology also useful for biological manipulations. We see this with gene chips and microfluidic devices developed as spin-offs of the computer semiconductor industry. But we also see scientists using ink jet printers to deposit cells for tissue engineering.
Laurencin's team seeded the woven polylactide structure with cells taken from rabbits' anterior cruciate ligaments and cultured them in a dish for two days. Finally, they surgically replaced whole anterior cruciate ligaments in another group of rabbits with the polylactide scaffold material, attaching it to the joint in the same way as a normal ligament.
Twenty-four hours later, the rabbits could already bear their own weight on their knees, and showed fairly normal mobility.
We need the ability to build and grow replacement parts for our bodies. With these parts most chronic, painful, and debilitating diseases will become curable. Eventually full body rejuvenation will become possible.
Move aside, much-reviled SUVs. Time to blame environmental destruction on the cows and the people who eat them.
Livestock are responsible for 18 percent of greenhouse-gas emissions as measured in carbon dioxide equivalent, reports the FAO. This includes 9 percent of all CO2 emissions, 37 percent of methane, and 65 percent of nitrous oxide. Altogether, that's more than the emissions caused by transportation.
The latter two gases are particularly troubling – even though they represent far smaller concentrations in the atmosphere than CO2, which remains the main global warming culprit. But methane has 23 times the global warming potential (GWP) of CO2 and nitrous oxide has 296 times the warming potential of carbon dioxide.
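The global warming potentials quoted above are what let the FAO fold three different gases into a single carbon dioxide equivalent figure. A minimal sketch of that conversion, using the GWP multipliers from the excerpt:

```python
# Converting methane and nitrous oxide emissions to CO2 equivalents
# using the global warming potentials quoted above.
GWP = {"co2": 1, "ch4": 23, "n2o": 296}

def co2_equivalent(tonnes, gas):
    """Express an emission (in tonnes of the given gas) as tonnes of CO2e."""
    return tonnes * GWP[gas]

# A tonne of methane warms as much as 23 tonnes of CO2 ...
print(co2_equivalent(1.0, "ch4"))  # 23.0
# ... and a tonne of nitrous oxide as much as 296 tonnes of CO2.
print(co2_equivalent(1.0, "n2o"))  # 296.0
```

This is why livestock loom so large in the totals: even modest tonnages of methane and nitrous oxide translate into big CO2-equivalent numbers.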
Some of the methane could get captured from livestock that are in buildings. Also, the types of compounds found in grasses and other feeds affect how much methane gets generated. Feed genetically engineered for easier digestion would lower methane emissions.
Rising living standards in some developing countries are pushing up meat consumption.
Between 1970 and 2002, annual per capita meat consumption in developing countries rose from 11 kilograms (24 lbs.) to 29 kilograms (64 lbs.), according to the FAO. (In developed countries, the comparable figures were 65 kilos and 80 kilos.) As population increased, total meat consumption in the developing world grew nearly five-fold over that period.
Think about the land involved too. The more people demand meat the more grain will get planted. Brazil has 40% of the world's remaining rainforests. How much of those rainforests will get cut down to provide grazing areas and livestock grain crop areas to feed the growing ranks of hundreds of millions of affluent Asians?
Continued world economic development is going to push global meat consumption much higher.
Beyond that, annual global meat production is projected to more than double from 229 million tons at the beginning of the decade to 465 million tons in 2050.
My guess is this is an underestimate. Robots, nanotechnology, and other advances will spur much more global economic growth. Can these technologies also reduce the ecological footprint of livestock production?
The impact of China still seems underappreciated in projections I see about the future. China has over 4 times the population of the United States and its rapid economic growth is going to make it into a larger source of demand for most goods than the United States. I get the sense that we've become so accustomed to seeing the US as the biggest energy user, biggest user of raw materials, biggest consumer of meat, and top in so many other categories that China's emergence just doesn't seem real yet to most people. Get your mind around the idea that US consumption will rise but China's consumption will rise far more.
The rise of China to the top position is already starting to happen in many categories. For example, China's CO2 emissions may surpass those of the US as early as 2009. This poses two problems from an environmental perspective. First off, Chinese consumption and emissions come on top of US consumption and emissions. Second, the desire to reduce pollution increases as living standards rise. But since China has over 4 times as many people as the US the Chinese will feel less desire to reduce pollution and reduce ecological impact when they have the same total GDP. Why? At the same total GDP they'll have less than a fourth of per capita GDP and, since they've accumulated less stuff, even lower living standards than the per capita GDP difference suggests.
My concern is that a much larger fraction of the human race is going to become huge consumers of resources and huge generators of pollutants. We need to make huge technological strides to cut down the impacts of this development. I'm skeptical that we'll succeed. Affluent people will want bigger houses on larger tracts of manicured lawns. Plus, they'll want more livestock and wood. Habitat destruction in the tropics looks set to continue.
The dietary trial involved 30 healthy men and 30 healthy women (including 30 smokers) eating an 85g bag (a cereal bowl full) of fresh watercress every day for eight weeks. The beneficial changes were greatest among the smokers. This may reflect the greater toxic burden or oxidative stress amongst the smokers, as smokers were also found to have significantly lower antioxidant levels at the start of the study compared to the non-smokers.
Here are the major results from eating a daily bowl of watercress:
The reduction in DNA damage argues for a reduced risk of cancer. DNA mutations probably are the biggest cause of cancer.
Results: Watercress supplementation (active compared with control phase) was associated with reductions in basal DNA damage (by 17%; P = 0.03), in basal plus oxidative purine DNA damage (by 23.9%; P = 0.002), and in basal DNA damage in response to ex vivo hydrogen peroxide challenge (by 9.4%; P = 0.07). Beneficial changes seen after watercress intervention were greater and more significant in smokers than in nonsmokers. Plasma lutein and β-carotene increased significantly by 100% and 33% (P < 0.001), respectively, after watercress supplementation.
What I'd like to know: If this experiment was repeated for a wide range of vegetables how would they rank in potency? How do cabbage, celery, tomatoes, broccoli, radishes, kale, and other vegetables compare?
Six hundred and seventy six healthy men born between 1900 and 1920 from Finland, Italy and the Netherlands participated in a 10-year prospective cohort study.
Results: Men who consumed coffee had a 10-year cognitive decline of 1.2 points (4%). Non-consumers had an additional decline of 1.4 points (P<0.001). An inverse and J-shaped association was observed between the number of cups of coffee consumed and cognitive decline, with the least cognitive decline for three cups of coffee per day (0.6 points). This decline was 4.3 times smaller than the decline of non-consumers (P<0.001). Conclusions: Findings suggest that consuming coffee reduces cognitive decline in elderly men.
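As a quick sanity check on the abstract's numbers (this is just arithmetic on the figures quoted, not anything from the paper itself):

```python
# Checking the arithmetic in the abstract above.
consumer_decline = 1.2            # average 10-year decline of coffee drinkers (points)
nonconsumer_decline = 1.2 + 1.4   # non-consumers declined an additional 1.4 points
three_cup_decline = 0.6           # smallest decline: three cups per day

ratio = nonconsumer_decline / three_cup_decline
print(round(ratio, 1))  # 4.3, matching the reported "4.3 times smaller"
```

So the comparison in the abstract is between the 2.6-point total decline of non-consumers and the 0.6-point decline of the three-cup-a-day drinkers.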
This makes sense. Recall my recent post High Caffeine Diets Reduce Heart Risk In Over 65s. Also recall my other related recent post Reduced Blood Flow And Inflammation Precede Alzheimers. Put those two together. If something reduces the risk of heart disease it probably improves circulation overall. Well, circulatory problems might well be the root cause of Alzheimer's Disease. If coffee slows the aging of the vascular system then it will reduce both heart disease and Alzheimer's.
Coffee isn't the only candidate for delivering that benefit. Tea and dark chocolate (even more so cocoa powder) have many of the same compounds.
Mars, the makers of Dove chocolates, has just put out a press release describing recent research on the health effects of cocoa. Flavanols in chocolate boost circulation in the brain.
A special cocoa made to retain naturally occurring compounds called flavanols may have the potential to help maintain healthy brain function and chart the course for future research that could lead to new solutions for preventing cognitive decline and dementia, according to a panel of scientists who presented new data at the annual meeting of the American Association for the Advancement of Science (AAAS).
The enhanced brain blood flow from cocoa mentioned below might reduce the odds of developing Alzheimer's disease and other forms of dementia associated with old age.
During the session entitled "The Neurobiology of Chocolate: A Mind- Altering Experience?," a panel of scientists presented evidence from several recent studies that demonstrated the enhanced brain blood flow after study participants consumed a specially formulated flavanol-rich cocoa beverage that was supplied by Mars, Incorporated. One study, conducted by Ian A. Macdonald, PhD, from the University of Nottingham Medical School in the United Kingdom, found that the consumption of this cocoa resulted in regional changes in blood flow in study participants, suggesting that cocoa flavanols may have therapeutic potential for the treatment of vascular impairments within the brain itself.
"Our study showed that acute consumption of this particular flavanol-rich cocoa beverage was associated with increased blood flow to grey matter for 2 to 3 hours," Macdonald said. "This raises the possibility that certain food components like cocoa flavanols may be beneficial in increasing brain blood flow and enhancing brain function among older adults or for others in situations where they may be cognitively impaired, such as fatigue or sleep deprivation."
Mars has their own method of processing cocoa called CocoaPro that retains more of the bioflavonoids found in cocoa. While choosing one brand of chocolate over another might help it is more important to eat cocoa in a form that is least diluted. For example, milk chocolate is most diluted. Dark chocolate is less diluted. Semisweet chocolate is even less diluted. The less sugar and the more cocoa the stronger the dose.
As part of the University of Bristol Children Of The 90s project, dietary information and child cognitive performance were checked for children in thousands of families (the news reports speak of 9000 families or 11,875 pregnant women - maybe the higher number includes multiple pregnancies from some of the women?). Children whose mothers ate fish more than 3 times a week did better in tests of cognitive function.
Mothers who ate more seafood than the US guidelines (340 grams, or three portions a week) had children who were more advanced in development tests measuring fine motor, communication and social skills as toddlers, had more positive social behaviours and were less likely to have low verbal IQ scores at the age of eight. Those children whose mothers had eaten no fish were 28 per cent more likely to have poor communication skills at 18 months, 35 per cent more likely to have poor fine motor coordination at age three and a half, 44 per cent more likely to have poor social behaviour at age seven and 48 per cent more likely to have a relatively low verbal IQ at age eight, when compared with children of women who ate more than the US guidelines advised.
But did they test parental IQ? Or did they control for socio-economic status of the parents? (which would be a rough proxy for genetic differences)
The new findings suggest that, for developing brains, the risks of limiting seafood consumption outweigh the benefits of such a limit, the NIH's Joseph R. Hibbeln, MD, tells WebMD.
"Regrettably, these data indicate that the [FDA-EPA] advisory apparently causes the harm that it was intended to prevent, especially with regard to verbal development," Hibbeln says.
The FDA-EPA advisory is aimed at reducing mercury exposure. But you can avoid the mercury while still getting lots of omega 3 fatty acids by either eating low mercury fish or by taking fish oil capsules.
The study supports the contrary advice, given by the Food Standards Agency in the UK, which backs fish as a healthy food. The FSA simply advises mothers to avoid shark, swordfish and marlin, and restrict their intake of tuna.
The new research into children’s behaviour and intelligence suggests that women who follow the US “advisory” issued in 2004 to limit consumption, or cut fish out of their diet altogether, may miss nutrients that the developing brain needs — and so harm their children.
At 32 weeks into their pregnancy, the women were asked to fill in a seafood consumption questionnaire. They were subsequently sent questionnaires four times during their pregnancy, and then up to eight years after the birth of their child. Researchers examined issues including the children's social and communication skills, their hand-eye coordination, and their IQ levels. As with any study based on self-reporting methods, however, the results cannot be considered entirely definitive.
What I want to know: Did the mothers who ate less fish have lower IQs than the mothers who ate more fish? In other words, did these researchers measure an effect of nutritional differences or of genetic differences?
We still do not know with certainty that omega 3 fatty acids help make babies smarter. But since there's a chance they might it seems prudent for women to eat very low mercury fish.
The Electric Power Research Institute claims in a new report that the United States can't reduce carbon dioxide (CO2) emissions from electric power plants below 1990 levels any sooner than 20 years from now, and even then only under its most optimistic scenario.
Electric power companies, which emit about one-third of America’s global warming gases, could reduce their emissions to below the levels of 1990, but that would take about 20 years, no matter how much the utilities spend, according to a new industry study.
No, if money were no object then the entire fleet of coal and natural gas burning electric generation plants could be replaced by nuclear power plants. But it is a question of how much money we are willing to spend.
They think their lowest emissions scenario is optimistic.
The report, prepared by the Electric Power Research Institute, a nonprofit consortium, is portrayed as highly optimistic by its authors, who will present the findings on Thursday at an energy conference in Houston.
The study assumes only a two-thirds increase in nuclear power.
The industry study calls for 64 gigawatts of additional nuclear power by 2030, an increase of about two-thirds from the current level. For the first time in three decades, several companies have expressed interest recently in ordering new reactors, but they will probably take nearly 10 years to build and experts expect no more than six or eight in the first round.
The study’s figure implies a net increase of about 50 new reactors by 2030; the Energy Department is counting on about 10.
But imagine instead that we no longer built new coal or natural gas burning electric plants and all new electric plants used energy sources that generate no carbon dioxide. Coal burning technology isn't ready for full carbon sequestration. So go with nuclear and wind instead.
Most drastically, we could halt all carbon dioxide emissions from electric generation (cutting out a third of US CO2 emissions) by switching entirely to non-fossil fuels for electric power generation. For example, in the United States we could switch to nuclear where we now use coal and natural gas. In 2005 nuclear power accounted for 19.3% of total electric power generated. The United States had 104 nuclear reactors operating in 2005 with a total capacity of 97 gigawatts (almost 1 gigawatt per plant). So as a rough first approximation, if we built 400 nuclear power plants, 4 times as many as we already have, we could shut down all the fossil fuel burning plants. Though that would not provide enough electric power during the peak afternoon demand periods.
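That "400 plants" figure is easy to check with a quick back-of-envelope script. A sketch, where the 71% fossil share of US generation is my assumption (the post only gives nuclear's 19.3% share) and differences in capacity factor are ignored:

```python
# Rough check of the "4 times current nuclear" scaling above.
nuclear_share = 0.193   # nuclear's share of 2005 US electric generation
fossil_share = 0.71     # coal + natural gas + oil share (my rough estimate)
current_plants = 104    # US reactors operating in 2005
current_gw = 97         # total US nuclear capacity in 2005, gigawatts

scale = fossil_share / nuclear_share   # multiple of today's nuclear fleet
print(f"fleet multiple needed: {scale:.1f}x")
print(f"new reactors: ~{scale * current_plants:.0f} ({scale * current_gw:.0f} GW)")
```

The multiple comes out to about 3.7, consistent with the rough factor of 4, before accounting for peak demand or demand growth.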
So here's my question for knowledgeable readers: What percentage of electric power goes to base load demand and what percentage to above-baseline usage? Would we have to build 6, 7, or even 8 times as many nuclear plants as we have now in order to eliminate all use of fossil fuels to generate electricity? The multiple is certainly less than 10, and lower still if we institute variable pricing for electricity to flatten out demand. Also, hydro could supply part of the peak demand capacity. But the multiple is higher once we account for growth in demand, which now runs at 1.5% per year.
The average nuclear power plant now operating is smaller than the average plant that would get built in a new nuclear power plant building program. But if we had to build 8 times as much nuclear capacity (about 800 gigawatts) as we now have, and new plants cost $1.5 billion per gigawatt of capacity, then we are looking at $1.2 trillion to build a fully nuclear electric power plant fleet. That's less than 10% of the US economy's product for one year. Spread out over 20 years it'd be about one half of one percent of US GDP per year. So how can eliminating a third of all US carbon dioxide emissions be beyond the possible and affordable?
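Here is the guesstimate worked out explicitly; the 800 gigawatt target and $1.5 billion per gigawatt are the assumed figures from above, and the rough $13 trillion for mid-2000s US GDP is my assumption, not a firm number:

```python
# Back-of-envelope cost of an all-nuclear US electric fleet.
gw_needed = 800          # assumed capacity target, about 8x the 2005 nuclear fleet
cost_per_gw = 1.5e9      # assumed construction cost, dollars per gigawatt
us_gdp = 13e12           # rough mid-2000s US GDP in dollars (my assumption)

total_cost = gw_needed * cost_per_gw
print(f"total build cost: ${total_cost / 1e12:.1f} trillion")
print(f"as a share of one year's GDP: {total_cost / us_gdp:.1%}")
print(f"per year if spread over 20 years: {total_cost / 20 / us_gdp:.2%}")
```

That yields $1.2 trillion total: under 10% of a single year's GDP, or about half a percent of GDP per year over two decades.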
Mind you, I'm guesstimating. But I'm probably within a factor of 2 or 3 of the real cost. So this stays within the realm of the possible even if my estimate is low. Anyone know pertinent facts that would make a refined estimate more accurate?
We'd have to pay more for electricity if use of fossil fuels for electric generation were gradually prohibited. Nuclear power currently costs more than polluting coal plants. Plus, basically throwing away old coal and natural gas electric power plants has costs (how big those costs would be depends on how rapid the phase-out is). But we'd get cleaner air, less mercury in fish, and other health benefits. Also, a massive nuclear power plant building program would drive down the cost of nukes.
Update: If we go to an all-nuclear (or mostly nuclear with photovoltaics and wind and geothermal too) electric generation infrastructure then we'd reduce CO2 emissions by more than a third. Why? Within 20 years battery-powered cars are going to become feasible for most uses. Nuclear power and sufficiently advanced batteries combined could probably cut CO2 emissions in half.
The approximate cost of stopping the generation of CO2 for electricity is the difference in cost between coal electric and nuclear electric (more if existing coal and natural gas burning plants are shut down before they wear out). That's probably 2 cents per kwh at most. Consider that electric prices in the United States already cover a much larger range, with prices in cents per kwh of, for example, 16.25 in Connecticut, 14.55 in Maine, 8.27 in Indiana, 6.33 in West Virginia (cheap dirty coal), 7.12 in Kentucky (again cheap dirty coal), 7.8 in Wyoming, 7.46 in Oregon (hydro), 14.23 in California, and 23.57 in Hawaii.
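To put that premium in the context of the state prices just listed, here is a small sketch; the 2 cents per kwh is my upper-bound guess for the coal-to-nuclear cost gap, as stated above:

```python
# How a 2 cent/kwh premium compares to the existing state-to-state price spread.
prices = {  # average retail price, cents per kwh, from the figures above
    "Connecticut": 16.25, "Maine": 14.55, "Indiana": 8.27,
    "West Virginia": 6.33, "Kentucky": 7.12, "Wyoming": 7.80,
    "Oregon": 7.46, "California": 14.23, "Hawaii": 23.57,
}
premium = 2.0  # assumed extra cost of nuclear over coal, cents per kwh

for state, cents in sorted(prices.items(), key=lambda kv: kv[1]):
    print(f"{state:14s} {cents:6.2f} -> {cents + premium:6.2f} (+{premium / cents:.0%})")
```

Even in the cheapest coal states the premium is smaller than the spread that already exists between states: Hawaii pays more than 17 cents per kwh above West Virginia.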
I do not see how an additional 2 cents per kwh is going to slow economic growth by much. Also, the real cost difference will likely become smaller if nuclear power plant construction gets ramped up to a rapid rate. Newer reactor designs will eventually lower costs as well.
Conclusion: We could greatly reduce CO2 emissions for a fairly low economic cost.
Here are some basic conclusions I've come to about the CO2 problem so far:
I do not foresee future calamity from CO2 emissions. We have too many affordable options for dealing with global warming. But to be prudent and lower the costs of dealing with the problem we should accelerate energy research and think seriously about a big shift toward nuclear, geothermal, and other non-fossil fuels electric power sources.
If anyone has more accurate data for some of the guesstimating I did above I'd like to hear from you. Is there some reason why I'm underestimating the costs of a switch from coal and natural gas to nuclear? The biggest reason I can see is the cost of generating peak electric power. But my sense there is that dynamic pricing will cause a big flattening of the demand curve as capital and home appliances get designed to shift more demand to when electricity is cheaper.
Why no use of other non-fossil fuel energy sources in this analysis? To demonstrate the practicality of moving away from fossil fuels I wanted to use a power source that already has costs much closer to the cost of coal. Wind and solar introduce even bigger peak power supply problems than nuclear does. And they cost more; solar is way too expensive. Both will fall in price. But I wanted to demonstrate that we could phase out coal and natural gas for electric power without waiting for technological advances.
UC Irvine physics professor and science fiction writer Gregory Benford says climate engineering to prevent global warming would be cheap to do and could be done by private parties.
Benford has a proposal that possesses the advantages of being both one of the simplest planet-cooling technologies so far suggested and being initially testable in a local context. He suggests suspension of tiny, harmless particles (sized at one-third of a micron) at about 80,000 feet up in the stratosphere. These particles could be composed of diatomaceous earth. "That's silicon dioxide, which is chemically inert, cheap as earth, and readily crushable to the size we want," Benford says. This could initially be tested, he says, over the Arctic, where warming is already considerable and where few human beings live. Arctic atmospheric circulation patterns would mostly confine the deployed particles around the North Pole. An initial experiment could occur north of 70 degrees latitude, over the Arctic Sea and outside national boundaries. "The fact that such an experiment is reversible is just as important as the fact that it's regional," says Benford.
Benford says treating the Arctic would cost only $100 million per year.
"Anybody who thinks governments are suddenly going to leap into action is dreaming." Benford says that one of the advantages of his scheme is that it could be implemented unilaterally by private parties. "Applying these technologies in the Arctic zone or even over the whole planet would be so cheap that many private parties could do it on their own. That's really a dangerous idea because it suggests the primary actor in this drama will not be the nation-state anymore. You could do this for a hundred million bucks a year. You could do the whole planet for a couple of billion. That's amazingly cheap."
This proposal illustrates a larger pattern: Nation-states are becoming less important for major undertakings because scientists and engineers can find cheap ways to accomplish changes. For interventions whose bases of operations are easy to spot and stop this trend does not disempower nation-states. The governments will retain veto power. But for interventions that are harder to trace back to their perpetrators the loss of accountability could become very problematic for the human race.
Suppose the world heats up a few degrees Celsius and scientific knowledge about climate advances to the point where scientists can state with certainty that human burning of fossil fuels is the major cause of this change. Then suppose some private group with enough money (or even a single rich guy) wants to put silicon dioxide over the Arctic or Antarctic in order to prevent gradual melting and rising sea levels. Would you support or oppose such a move?
Well, Danish climate scientist Henrik Svensmark argues that cosmic rays, not carbon dioxide build-up, are the biggest cause of global warming.
Figure 5 takes the climate record back 300 years, using rates of beryllium-10 production in the atmosphere as long-accepted proxies for cosmic-ray intensities. The high level at AD 1700 corresponds with the Maunder Minimum (1645-1715) when sunspots were extremely scarce and the solar magnetic field was exceptionally weak. This coincided with the coldest phase of what historians of climate call the Little Ice Age (Eddy 1976). Also plain is the Dalton Minimum of the early 19th century, another cold phase. The wobbles and the overall trend seen in figure 5, between cold 1700 and warm 2000, are just a high-resolution view of a climate-related switch between high and low cosmic-ray counts, of a kind that has occurred repeatedly in the past.
Iciness in the North Atlantic, as registered by grit dropped on the ocean floor from drifting and melting ice, is a good example of the climate data now available. Gerard Bond of Columbia University and his colleagues showed that, over the past 12 000 years, there were many icy intervals like the Little Ice Age - eight to ten, depending on how you count the wiggles in the density of ice-rafted debris. These alternated with warm phases, of which the most recent were the Medieval Warm Period (roughly AD 900-1300) and the Modern Warm Period (since 1900). A comparison with variations in carbon-14 and beryllium-10 production showed excellent matches between high cosmic rays and cold climates, and low cosmic rays and the warm intervals (Bond et al. 2001).
Suppose scientists eventually confirm that increased cosmic rays (whose flux rises when the Sun's magnetic activity weakens) increase cloud cover and cause cooling, and that fewer cosmic rays are causing the world's current warming trend. Would you be any more or less inclined to support climate engineering to reverse natural warming caused by changes in the Sun's output?
In other words, if nature causes climate changes (whether cooling or warming) are we more or less justified in intervening in the climate than if we cause climate changes?
Suppose climate researchers discover 30 years hence that due to natural cycles the world is going to go through a long term cooling that will last centuries. Would you argue for generation of more greenhouse gases to counteract the cooling? Or would you argue that we shouldn't intervene in natural processes on such a large scale for our own benefit?
Update: I'm asking two underlying questions here: First, does whether a climate change is natural or human-caused alter how justified we are in intervening? Second, how much of the support for cutting CO2 emissions comes from those emissions being human-caused rather than from their predicted effects?
I do not know whether the world will warm by much in the 21st century. I do not know whether we are experiencing more climate change due to human intervention or due to natural phenomena. I'm not trying to argue the global warming skeptic or the global warming believer position. I'm trying to find out how much of the support for a reduction in CO2 emissions is due to the known (clearly human) causes of those emissions or the theorized effects of those emissions.
Update II: For those who do not read me regularly, here are several things I believe about the future of energy technology and climate.
My guess is that if global warming becomes a big problem we will use cheap ways to cool down at least parts of the planet. The good news is that if we reach that point the cooling down will be cheap to do. So the nightmare scenarios for warming are unlikely to ever happen.
A newly designed porous membrane, so thin it's invisible edge-on, may revolutionize the way doctors and scientists manipulate objects as small as a molecule.
The 50-atom thick filter can withstand surprisingly high pressures and may be a key to better separation of blood proteins for dialysis patients, speeding ion exchange in fuel cells, creating a new environment for growing neurological stem cells, and purifying air and water in hospitals and clean-rooms at the nanoscopic level.
At more than 4,000 times thinner than a human hair, the new barely-there membrane is thousands of times thinner than similar filters in use today.
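A quick sanity check on that "4,000 times thinner" figure is possible from the 15-nanometer thickness given below; the 60 to 100 micrometer range for human hair diameter is a typical textbook range, my assumption rather than a number from the article:

```python
# Compare the 15 nm membrane to a human hair (assumed 60-100 micrometers thick).
membrane_nm = 15
for hair_um in (60, 80, 100):
    ratio = hair_um * 1000 / membrane_nm   # convert micrometers to nanometers
    print(f"a hair of {hair_um} um is {ratio:.0f}x thicker than the membrane")
```

At the thin end of the hair range the ratio is exactly 4,000, so "more than 4,000 times thinner" checks out.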
This silicon is from the crystals routinely grown for computer semiconductor chip manufacturing. So here's yet another example of how the computer semiconductor industry is producing materials moldable into biologically useful devices.
The membrane is a 15-nanometer-thick slice of the same silicon that's used every day in computer-chip manufacturing. In the lab of Philippe Fauchet, professor of electrical and computer engineering at the University, Striemer discovered the membrane as he was looking for a way to better understand how silicon crystallizes when heated.
He used such a thin piece of silicon—only about 50 atoms thick—because it would allow him to use an electron microscope to see the crystal structure in his samples, formed with different heat treatments.
Back in the 1950s, 1960s, and well into the 1970s all computers were seen as large devices that filled up large rooms. But beneath the surface a technological revolution of doublings in power and halvings in costs kept repeating again and again. Suddenly the computer chips became cheap enough to put into desktop personal computers and computing became useful for the masses. Well, the same is going to happen with microfluidic devices and DNA gate arrays.
After years of technological changes visible only inside research labs, the advances in making miniature biochips will reach a critical mass where suddenly they will spread out into the mass market. Personal DNA testing in the privacy of your own home will give you your DNA sequence uploaded into your home computer. Also, implantable biochips will let you watch your blood chemistry in real time and microfluidic devices will make it possible for you to synthesize your own drugs and other treatments.
What I see coming: downloadable free software that'll program your home microfluidic biochips to make unapproved and restricted drugs and biochemical components. Just as we can download software that'll enhance what our computers can do, we will be able to download an ever growing set of programs with instructions for orchestrating microfluidic biochips to make more and more kinds of biochemical products.
As regular readers know, I keep arguing that the biological sciences and biotechnology are going to advance at a rate similar to the rate of advance in the computer industry. Why? Computer technologies adapted to labs such as microfluidic devices and DNA gate arrays will displace old style flasks, beakers, human-viewed microscopes, and the like. Here's another example of this trend. Some scientists at U Wisc-Madison have used computer chip fabrication technologies to produce a nanoscale device that can separate out individual strands of DNA in preparation for sequencing them.
Now, however, scientists have developed a quick, inexpensive and efficient method to extract single DNA molecules and position them in nanoscale troughs or "slits," where they can be easily analyzed and sequenced.
The positioning in troughs is a needed precursor step before reading the DNA letters in each strand. So these scientists have moved a big (or incredibly small) step closer toward very small and therefore very cheap DNA sequencing devices.
The technique, which according to its developers is simple and scalable, could lead to faster and vastly more efficient sequencing technology in the lab, and may one day help underpin the ability of clinicians to obtain customized DNA profiles of patients.
The new work is reported this week (Feb. 8, 2007) in the Proceedings of the National Academy of Sciences (PNAS) by a team of scientists and engineers from the University of Wisconsin-Madison.
"DNA is messy," says David C. Schwartz, a UW-Madison genomics researcher and chemist and the senior author of the PNAS paper. "And in order to read the molecule, you have to present the molecule."
Since computer technology will drive biological technology forward at a rate similar to what we see in the computer industry the future rate of development of new knowledge and eventually new treatments will far exceed what we've seen in the past.
The computer industry is providing the technologies that are accelerating the rate of biotechnological advancement. Semiconductor fabrication technology provided these researchers the tools they needed to fabricate a device that can separate out single strands of DNA.
To attack the problem, Schwartz and his colleagues turned to nanotechnology, the branch of engineering that deals with the design and manufacture of electrical and mechanical devices at the scale of atoms and molecules. Using techniques typically reserved for the manufacture of computer chips, the Wisconsin team fabricated a mold for making a rubber template with slits narrow enough to confine single strands of elongated DNA.
The ability to sequence individual DNA strands will cost less than sequencing of larger amounts of material. Mass production of chips that can sequence DNA from a single cell will make personal DNA profiles commonplace. Also, the ability to sequence a single cell's DNA will find use in criminology, cancer research, and in choice of custom cancer treatments.
Elizabeth Gould at Princeton University found that sleep deprivation in rats inhibits the replication of neural stem cells and therefore prevents creation of new neurons.
Prolonged sleep deprivation is stressful and has been associated with adverse consequences for health and cognitive performance. Here, we show that sleep deprivation inhibits adult neurogenesis at a time when circulating levels of corticosterone are elevated. Moreover, clamping levels of this hormone prevents the sleep deprivation-induced reduction of cell proliferation. The recovery of normal levels of adult neurogenesis after chronic sleep deprivation occurs over a 2-wk period and involves a temporary increase in new neuron formation. This compensatory increase is dissociated from glucocorticoid levels as well as from the restoration of normal sleep patterns. Collectively, these findings suggest that, although sleep deprivation inhibits adult neurogenesis by acting as a stressor, its compensatory aftereffects involve glucocorticoid-independent factors.
In a recent study by psychology professor Elizabeth Gould, rats who were sleep-deprived for 72 hours exhibited increased levels of the stress hormone glucocorticoid. These high stress levels in turn reduced neurogenesis — the birth of new neurons — in the rats' hippocampuses, a part of the brain critical for learning and memory.
Harvard Medical School researcher Seung-Schik Yoo asked a group of people to stay up all night and then showed them images. People who stayed up all night did not remember the images as well as those who were well rested when they saw the images.
They correctly identified 74% of the previously viewed images, on average. By comparison, another group who had a proper night’s rest before viewing the 150 images at the start of the experiment correctly identified 86% of these pictures in the pop quiz.
Adequate sleep is needed for proper brain functioning. Of course you already knew that. But maybe the scientific evidence will serve as a useful reminder that you ought to act on that knowledge.
Writing in the current issue of the American Journal of Clinical Nutrition, Greenberg and his co-workers from the State University of New York and the City University of New York report the results of their epidemiological study of 6,594 men and women aged between 32 and 86, using data from the 1971–1973 National Health and Nutrition Examination Survey (NHANES I) and follow-up until 1992. Intake of caffeinated beverages, including coffee, tea, and caffeinated cola and chocolate, was calculated from food frequency questionnaires and classified according to average daily intake: less than half a serving, between half and two servings, two to four servings, or four or more servings.
Was the protective effect from caffeine or from other compounds in the caffeinated foods? I wonder if the researchers tried excluding caffeinated colas, or tried weighting the foods by their bioflavonoid content, to see whether the effect tracked better with the amount of caffeine or with the amount of other compounds.
Note that this effect was found only for those over age 65. Does this beneficial effect require a lifetime of caffeine consumption? Or could one wait till one reaches one's 60s before becoming a regular consumer of tea, coffee, and chocolate? (I err on the safe side and eat the chocolate many years before reaching the elderly stage)
For this age group, the researchers report that increasing intake of caffeinated beverages was associated with decreasing risk of mortality from these conditions. Indeed, drinking four or more servings per day reduced the risk of heart disease mortality by 53 per cent.
Over the next 15 years, men who consumed cocoa regularly had significantly lower blood pressure than those who did not.
Over the course of the study, 314 men died, 152 due to cardiovascular diseases.
Men in the group with the highest cocoa consumption were half as likely as the others to die from cardiovascular disease.
Their risk remained lower even when other factors, such as weight, smoking habits, physical activity levels, calorie intake and alcohol consumption were taken into account.
The men who consumed more cocoa were also less likely to die of any cause.
The Dutch researchers suspect that polyphenol compounds in cocoa are responsible for the protective effects. But the effective dose needed is quite high. You are best off eating dark rather than milk chocolate and better yet cocoa powder. I put cocoa powder on apple sauce for this reason.
Thanks to Lou Pagnucco for the heads up.
A meta-analysis of 5 studies found that higher blood vitamin D is associated with a 50% lower risk of colorectal cancer.
A larger daily dose of vitamin D could reduce the incidence of colorectal cancer with minimal risk, according to a new review that pools results from five studies.
The analysis found that maintaining a specific target blood level of vitamin D was associated with a 50 percent lower risk of colorectal cancer than that seen in people with consistently lower blood levels.
Previous studies had shown that lower blood levels of vitamin D did not protect against colorectal cancer, according to lead author Edward Gorham, Ph.D., a research epidemiologist with the Naval Health Research Center in San Diego. However, a meta-analysis pools the data from several studies, thus increasing the strength of the results.
I've been telling my regular readers about the benefits of vitamin D for years. A few of you are even acting on this information. Okay, what's with the rest of you? What are your excuses?
The researchers found that a blood serum vitamin D level of 33 nanograms per milliliter or higher was associated with a 50 percent lower risk of colorectal cancer than that seen with blood levels of 12 nanograms per milliliter or lower.
You would need to get between 1000 and 2000 IU of vitamin D per day to get this benefit. You can get there with a supplement or with daily sunbathing.
The amount of dietary vitamin D needed to reach the serum levels that appear to be protective against colorectal cancer — 1,000 to 2,000 international units a day — would not pose any risk, according to Gorham: “The Institute of Medicine has set a ‘No Adverse Effect Level’ of 2,000 IU per day for vitamin D intake, so this recommendation would be safe for most people.”
There is no official recommended dietary allowance for vitamin D, but an adequate dietary intake per day for most adults is currently considered to be 200 to 400 IU.
If you spend a lot of time out in the sun during the summer you might only need the supplement during the colder months with the shorter days.
Small amounts of sun exposure would also help people boost their vitamin D levels. Fifteen to 20 minutes per day without sunscreen is enough for the body to synthesize 10,000 IU of vitamin D with minimal risk of sunburn or skin cancer, Gorham said.
Vitamin D will probably lower your risk of Multiple Sclerosis and your incidence of colds and flu too. I believe it is the vitamin that we'd get the most benefit from if we got more of it. Not saying there aren't people out there with plenty of D but not enough iron or zinc or folic acid or C. But for most people more D would deliver the biggest benefit.
The breast cancer study, published online in the current issue of the Journal of Steroid Biochemistry and Molecular Biology, pooled dose-response data from two earlier studies - the Harvard Nurses Health Study and the St. George's Hospital Study - and found that individuals with the highest blood levels of 25-hydroxyvitamin D, or 25(OH)D, had the lowest risk of breast cancer.
The researchers divided the 1,760 records of individuals in the two studies into five equal groups, from the lowest blood levels of 25(OH)D (less than 13 nanograms per milliliter, or 13 ng/ml) to the highest (approximately 52 ng/ml). The data also included whether or not the individual had developed cancer.
"The data were very clear, showing that individuals in the group with the lowest blood levels had the highest rates of breast cancer, and the breast cancer rates dropped as the blood levels of 25-hydroxyvitamin D increased," said study co-author Cedric Garland, Dr.P.H. "The serum level associated with a 50 percent reduction in risk could be maintained by taking 2,000 international units of vitamin D3 daily plus, when the weather permits, spending 10 to 15 minutes a day in the sun."
Most people inherit a version of a gene that optimizes their brain's thinking circuitry, yet also appears to increase risk for schizophrenia, a severe mental illness marked by impaired thinking, scientists at the National Institutes of Health's (NIH) National Institute of Mental Health (NIMH) have discovered. The seeming paradox emerged from the first study to explore the effects of variation in the human gene for a brain master switch, DARPP-32.
The researchers identified a common version of the gene and showed how it impacts the way two key brain regions exchange information, affecting a range of functions from general intelligence to attention.
If higher intelligence were a longer-running trait in the human species it is unlikely that we'd have IQ-boosting genetic variations that come with such serious downsides. Bad side effects from genes that provide some benefit are usually a sign that the genetic adaptation in question is a recent response to a recent selective pressure.
Three fourths of subjects studied had at least one copy of the version that results in more efficient filtering of information processed by the brain's executive hub, the prefrontal cortex. However, the same version was also more prevalent among people who developed schizophrenia, a severe mental illness marked by delusions, hallucinations and impaired emotion that affects one percent of the population.
"We have found that DARPP-32 shapes and controls a circuit coursing between the human striatum and prefrontal cortex that affects key brain functions implicated in schizophrenia, such as motivation, working memory and reward related learning," explained Andreas Meyer-Lindenberg, M.D.
"Our results raise the question of whether a gene variant favored by evolution, that would normally confer advantage, may translate into a disadvantage if the prefrontal cortex is impaired, as in schizophrenia," added Daniel Weinberger, M.D. "Normally, enhanced cortex connectivity with the striatum would provide increased flexibility, working memory capacity and executive control. But if other genes and environmental events conspire to render the cortex incapable of handling such information, it could backfire -- resulting in the neural equivalent of a superhighway to a dead-end."
I expect when offspring genetic engineering becomes widespread people will have to face many tough questions about how to weigh the benefits and risks of large numbers of genetic variations they could give their offspring. Some humans are not cognitively well designed to model complex trade-offs that involve probabilities. So I expect a lot of bad decision-making by prospective parents.
Dr. John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences and colleagues have recently shown that using brain scans they can predict with fairly high accuracy which of two choices test subjects will choose when deciding to add or subtract two numbers.
To address the question of whether intention might be reflected in prefrontal cortical activity, the researchers in the new work used functional magnetic resonance imaging (fMRI) to assess brain activity while subjects concentrated on their choice of intended mental action, but prior to execution of the action. Specifically, subjects were free to choose between adding or subtracting two numbers and were asked to hold in mind their intention until numbers were presented on a screen, along with a choice of outcomes (one of which was correct for the addition choice, one correct for the subtraction choice). Subjects then selected the correct answer according to their planned task, revealing their intended action.
The researchers found that during the delay between the subjects' choice of task and execution of the task, it was possible to decode from activities in two regions of the prefrontal cortex which of the two actions (addition or subtraction) individuals had chosen to pursue. Different patterns of activity were seen during actual execution of the task, showing that regionally distinct neural substrates were involved in task preparation and execution. Decoding of intentions was most robust when activity patterns in the medial prefrontal cortex were taken into account, consistent with the idea that this region of the brain participates in an individual's reflection on his or her own mental state.
Are you ever bothered that this sort of research takes all the mystery out of life? Do you start seeing humans as less lofty, and noble intentions as no better than the most criminal and vicious ones?
Our secret intentions remain concealed until we put them into action, or so we believe. Now researchers have been able to decode these secret intentions from patterns of brain activity. They let subjects freely and covertly choose between two possible tasks: to either add or subtract two numbers. The subjects were then asked to hold their intention in mind for a while until the relevant numbers were presented on a screen. The researchers were able to recognize the subjects' intentions with 70% accuracy based on their brain activity alone, even before the participants had seen the numbers and started to perform the calculation.
Imagine one could develop an algorithm to analyse brain scans that can detect the intention to lie. Such a capability would make a great lie detector. Another use? To operate robotic prostheses.
Intentions exist in a network of neurons.
The study also reveals fundamental principles about the way the brain stores intentions. "The experiments show that intentions are not encoded in single neurons but in a whole spatial pattern of brain activity", says Haynes. They furthermore reveal that different regions of the prefrontal cortex perform different operations. Regions towards the front of the brain store the intention until it is executed, whereas regions further back take over when subjects become active and start doing the calculation. "Intentions for future actions that are encoded in one part of the brain need to be copied to a different region to be executed", says Haynes.
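Haynes's point that intentions live in a whole spatial pattern of activity rather than in single neurons is the key idea behind this kind of decoding: train a classifier on many-voxel activity patterns, then predict the intention on held-out trials. The paper does not publish its analysis code, so here is a minimal illustrative sketch on synthetic "voxel" data using a simple nearest-centroid classifier; every number and the classifier choice are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: each trial is a vector of voxel
# activations. The two intentions (add vs. subtract) produce slightly
# different mean activity patterns, buried in trial-to-trial noise.
n_voxels, n_train, n_test = 50, 40, 20
pattern_add = rng.normal(0.0, 1.0, n_voxels)
pattern_sub = pattern_add + rng.normal(0.0, 0.6, n_voxels)  # assumed signal size

def make_trials(pattern, n):
    """Generate n noisy trials around a class's mean pattern."""
    return pattern + rng.normal(0.0, 1.0, (n, n_voxels))

train_add = make_trials(pattern_add, n_train)
train_sub = make_trials(pattern_sub, n_train)

# Nearest-centroid decoder: classify a trial by whichever class's mean
# training pattern it correlates with more strongly.
centroid_add = train_add.mean(axis=0)
centroid_sub = train_sub.mean(axis=0)

def decode(trial):
    r_add = np.corrcoef(trial, centroid_add)[0, 1]
    r_sub = np.corrcoef(trial, centroid_sub)[0, 1]
    return "add" if r_add > r_sub else "subtract"

test_trials = [(t, "add") for t in make_trials(pattern_add, n_test)] + \
              [(t, "subtract") for t in make_trials(pattern_sub, n_test)]
correct = sum(decode(t) == label for t, label in test_trials)
accuracy = correct / len(test_trials)
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
```

With stronger noise or weaker pattern differences the accuracy drops toward chance (50%), which is roughly why real fMRI decoding lands at figures like the 70% reported here rather than 100%.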
Whenever I think of brain scans done by governments I think of Mick Jagger singing "These days it's all secrecy, no privacy".
Two biologists at Penn State have discovered a master regulator that controls metabolic responses to a deficiency of essential amino acids in the diet. They also discovered that this regulatory substance, an enzyme named GCN2 eIF2alpha kinase, has an unexpectedly profound impact on fat metabolism. "Some results of our experiments suggest interventions that might help treat obesity, prevent Type II diabetes and heart attacks, or ameliorate protein malnutrition," said Douglas Cavener, professor and head of the Department of Biology, who led the research along with Feifan Guo, a research assistant professor. Their research will appear in the 7 February 2007 issue of the scientific journal Cell Metabolism.
Can one get some of the weight loss benefit on a low leucine diet that has enough leucine for basic needs?
A leucine-free diet is the fast path to big-time weight loss?
Organisms adapt metabolically to episodes of malnutrition and starvation by shutting down the synthesis of new proteins and fats and by using stores of these nutrients from muscle, fat, and the liver in order to continue vital functions. Cavener and Guo found that the removal of a single amino acid, leucine, from the diet is sufficient to provoke a starvation response that affects fat metabolism. "These findings are important for treating two major problems in the world," Cavener says. "The starvation response we discovered can repress fat synthesis and induce the body to consume virtually all of its stored fat within a few weeks of leucine deprivation. Because this response causes a striking loss of fatty tissue, we may be able to formulate a powerful new treatment for obesity."
Do any readily available protein sources have no leucine in them? Would one need to stop eating all protein in order to avoid leucine? If you wanted to avoid the amino acid tryptophan, gelatin would serve as a cheap, easily available tryptophan-free food. But gelatin is 3.5% leucine. So that's not a solution.
Note that leucine is an essential amino acid. Someone who wants to go on a leucine-free diet could not continue on it indefinitely. Also, ingestion of extra leucine with protein stimulates muscle protein synthesis. So leucine has advantages to bodybuilders. Avoidance of leucine involves trade-offs.
Thanks to Lou Pagnucco for the heads-up.
The researchers found that so-called AMP-activated protein kinase (AMPK) slows down in the skeletal muscle of 2-year-old rats relative to 3-month-old rats. A chief regulator of whole-body energy balance, AMPK in skeletal muscle stimulates the oxidation of fatty acids and the production, or biogenesis, of power-producing mitochondria that burn fat and fuel cells, according to the researchers.
The new findings might help to explain "what happens as we age," said Gerald I. Shulman, a Howard Hughes Medical Institute investigator at Yale University School of Medicine.
Why does AMPK decline as we age? Is the decline an adaptation because the muscle's mitochondria become too damaged?
In response to exercise and other stimuli older rats produced far less AMPK.
In the current study, the researchers set out to determine whether the declining mitochondrial function and increased intracellular fat content seen with aging could be traced back to deficiencies of AMPK. They compared AMPK activity in young and old rats following three "perturbations" that normally stimulate the enzyme and, in turn, mitochondria production. The treatments included acute exposure to an AMPK-stimulating chemical, chronic exposure through feeding of another chemical that induces AMPK by mimicking an energy shortage, and exercise.
In every case, older rats showed a decline in AMPK activity compared to younger animals. Young rats infused with a stimulatory chemical showed an increase in muscular AMPK activity not seen in old rats, they found. Similarly, the muscle of exercise-trained young rats showed more than a doubling in AMPK activity. In older rats, that AMPK hike with exercise was "severely blunted." The muscles of young rats fed the AMPK-stimulating chemical also showed an increase in AMPK and a 38% increase in mitochondrial density, they reported. In contrast, older animals' AMPK activity and mitochondrial numbers held steady.
As we get older, exercise becomes less effective. Plus, a low level of AMPK might put us at greater risk of insulin-resistant type 2 diabetes.
Would a drug or gene therapy that stimulates AMPK production increase muscle strength and decrease fat? Would it come at some cost? Does the body slow down AMPK production because the muscle cells become too aged to do as much? Or would higher AMPK increase cancer risk?
A study of twins and their offspring provides another chunk of evidence that the effect of environment has been overrated. The parents fight because it is in their genes to do so, and their kids behave poorly because they inherit those same genes.
Children's conduct problems--skipping school, sneaking out of the house, lying to parents, shoplifting, or bullying other children--are a major source of concern for parents and teachers. As a potential cause of these problems, parents' marital conflict has received a lot of research attention. Now a new study finds that parents' fighting may not be to blame but rather that parents who argue a lot may pass on genes for disruptive behavior to their children.
The findings are published in the January/February 2007 issue of the journal Child Development.
A group of researchers from the University of Virginia and several other universities looked at this question, studying 1,045 twins and their 2,051 children. Some of the parents were identical twins and shared all of their genes and some were fraternal and shared only half of their genes. The study found that parents' fighting is not likely a cause of children's conduct problems. On the other hand, parents' genes influenced how often they argued with their spouses and these same genes, when passed to their children, caused more conduct problems.
"This study suggests that marital conflict is not a major culprit, but genes are," said K. Paige Harden, the lead researcher and professor of psychology at the University of Virginia. "Our findings have potential implications for treating conduct problems: Focusing on a child's parents, as is common in family therapy, may not be as effective as focusing on the child."
So if your kids are bad you and your spouse are still to blame. But you are to blame for your genes, not for your behavior.
What I want to know: When offspring genetic engineering becomes possible will people who tend to have low triggers for violence decide to edit out the genetic sequences that cause this when choosing genes for their offspring? Or will they give their kids even stronger doses of the genes that make them carry on yelling and screaming and fighting?
The Bush Administration's proposed fiscal year 2008 budget for the National Institutes of Health will once again lag behind the rate of inflation, causing an inflation-adjusted cut in US federal biomedical research funding.
Meanwhile, funding for the National Institutes of Health, which oversees medical research, would rise nearly 2 percent to about $28.7 billion.
Biomedical research funding will deliver more benefit per dollar spent than any other form of government spending. Eventually this research is going to lead to the reversal of aging and the end of death from aging.
In inflation adjusted dollars US federal government funding of all research peaked in 2003 and has been declining since then.
Bush's budget is 4.2% bigger than last year's. He raises defense spending by more than 10%. Spending on veterans and foreign aid soars by double digits. There's more money for programs ranging from Pell college grants to national parks.
Defense and homeland security will get $658 billion, or about 22 times as much as the NIH budget for biomedical research. Just the increase in defense and war spending will amount to about twice the total NIH budget. I do not feel better defended and better served by that defense and war spending increase. I figure it will result in a much lower average life expectancy than we'd get if the money were spent on medical research. Worse, the war spending creates big future costs, such as maimed soldiers who'll earn less and need government programs to help care for them.
The total war budget of $163 billion, sought in the 2007 fiscal year, is projected to be $141 billion in 2008 and just $50 billion in 2009, far enough in the future that the estimate is little more than a place holder.
Think about that. Pull the troops out of Iraq and free up money to increase biomedical research by a factor of 5. The war does nothing to make us safer and probably has a net negative effect on our security. We could instead spend money on research that'll cure all the diseases we are going to get as we age. Eventually the money will produce biotechnologies that allow us to rejuvenate our bodies.
Funding for biomedical research, which has been flat for several years, may now begin to grow. The House proposal would give NIH a 2% increase this year, adding $620 million to the current budget of $28.6 billion. The austerity since 2003 has taken a toll, NIH officials say, as inflation significantly eroded NIH's buying power and reduced the number of new and competing grant awards from 10,300 to fewer than 9100.
The percentage of grant proposals funded has been dropping. Scientific and technological knowledge, once discovered, delivers us benefits year after year into the future. The sooner we get the knowledge the more total benefit we'll get from that knowledge. We spend less than 1 percent of the US federal budget on biomedical research. That's a mistake. We could derive greater benefit from a much bigger effort.
Researchers at the University of Pennsylvania School of Medicine have shown that impaired function and loss of synapses in the hippocampus of a mouse form of Alzheimer’s disease (AD) is related to the activation of immune cells called microglia, which cause inflammation. These events precede the formation of tangles – twisted fibers of tau protein that build up inside nerve cells – a hallmark of advanced AD. The researchers report their findings in the February 1 issue of Neuron.
The microglia might cause the tau protein to get all bent out of shape. Then the tau proteins can't get transported to stabilize microtubules. That causes the loss of the transport mechanism and the nerves collapse since needed stuff isn't getting delivered.
So why do the microglia get activated in the first place? Even before the tau protein gets bent out of shape it accumulates. But why does the tau protein accumulate? This report does not answer that question but one potential answer is that aged nerve cells cease to make enough energy to run their internal transport and internal trash destruction mechanisms.
“Abolishing the inflammation caused by the accumulation of the tau protein might be a new therapy for treating neurodegenerative disorders,” says senior author Virginia Lee, PhD, Director of the Center for Neurodegenerative Disease Research. “This work points the way to a new class of drugs for these diseases.”
In addition, the immunosuppressant FK506 diminishes neuron loss and extends the life span of the transgenic Alzheimer's mice. Normally only 20 percent of these mice survive to one year. With FK506, 60 percent of the mice were still alive at one year.
But methods to suppress the immune response, while potentially useful for therapeutic purposes, probably won't get at the original cause of Alzheimer's. Decreased blood flow might be the real cause of Alzheimer's Disease.
The latest findings from the University of Rochester Medical Center mesh not only with Dr. Alzheimer's initial observations but also with new findings from today's best imaging technologies. While the first visible symptom of Alzheimer's may be a person forgetting names or faces, the very first physical change is actually a decline in the amount of blood that flows in the brain. Doctors have found that not only is blood flow within the brain reduced, but that the body's capacity to allocate blood to different areas of the brain on demand is blunted in people with the disease.
"A reduction in blood flow precedes the decline in cognitive function in Alzheimer's patients," said Berislav Zlokovic, M.D., Ph.D., professor in the Department of Neurological Surgery and a neurovascular expert whose research is causing scientists to consider the role of reduced blood flow in Alzheimer's disease.
"People used to say, well, the brain is atrophying because of the disease, so not as much blood as usual is needed. But perhaps it's the opposite, that the brain is dying because of the reduced blood flow," he added.
Perhaps this phenomenon is at work on a lesser scale with many people whose minds decay to a lesser extreme without getting diagnosed with Alzheimer's. If so then a treatment to prevent this would likely reduce the rate of cognitive decline even in people who never are going to get Alzheimer's.
Look at how they were able to make this discovery. It is only because gene array chips allow the measurement of the activity of thousands of genes that these scientists were able to get clues that the problem was in the vascular system.
The first step in the study came when Zlokovic's team compared the activity of genes in the brain from several people with Alzheimer's who had died, to that of several people without the disease who had died. It's a type of study widely done now by scientists looking at a host of diseases, using vast gene arrays that can tell how active thousands of genes are in a part of the body.
Scientists have been chasing after the cause of Alzheimer's Disease for decades. But now they have the technological tools to figure out the puzzle and they are coming up with answers that eluded them until now. Think about what that portends for the future of biomedical research into the causes of diseases. The gene chips, microfluidic chips (think "lab on a chip"), and other tools are going to keep getting better at a rapid rate.
The scientists were able to narrow their search down to two key genes that regulate contraction of muscle cells found in arteries.
As Zlokovic perused the list of genes whose activity differed depending on whether the person had Alzheimer's or not, he recognized that several play a role in constricting the arteries. He asked colleague Joseph Miano, Ph.D., a cardiovascular researcher and expert on the smooth muscle that makes up part of the arteries, to take a look.
Miano recognized the group as genes that are all controlled by one of two master regulators of gene activity in smooth muscle cells. Those proteins, myocardin and SRF (serum response factor), are well known for the control they exert on blood vessel walls. Working together, the two are the chief players that regulate how much the smooth muscle cells inside the arteries contract. The more the cells contract, the narrower the artery becomes, and the less blood that flows.
They discovered that SRF and myocardin are more active in Alzheimer's brains, that greater activity of SRF and myocardin causes blood vessels to contract, and that silencing SRF allowed blood to flow more freely. So we might be able to prevent Alzheimer's with a drug that turns down SRF or myocardin.
Now we need a way to detect at an early stage that SRF and myocardin are overactive. You do not want to lose a big chunk of your brain before getting diagnosed with Alzheimer's. We also need to know why these genes become overactive in the brains of some old people. With that knowledge scientists could develop ways to prevent the whole chain of events from ever getting started.
Thanks to Lou Pagnucco for the tip on the second article.
Fluctuations in sex hormone levels during women's menstrual cycles affect the responsiveness of their brains' reward circuitry, an imaging study at the National Institute of Mental Health (NIMH), a component of the National Institutes of Health (NIH), has revealed. While women were winning rewards, their circuitry was more active if they were in a menstrual phase preceding ovulation and dominated by estrogen, compared to a phase when estrogen and progesterone are present.
My guess is that it is more rewarding to be around women who are in their pre-ovulatory phase.
What is the purpose of this effect of hormones on reward centers? Is it to make women get greater enjoyment from sex when they are more likely to get pregnant?
Reward system circuitry includes: the prefrontal cortex, seat of thinking and planning; the amygdala, a fear center; the hippocampus, a learning and memory hub; and the striatum, which relays signals from these areas to the cortex. Reward circuit neurons harbor receptors for estrogen and progesterone. However, how these hormones influence reward circuit activity in humans has remained unclear.
To pinpoint hormone effects on the reward circuit, Berman and colleagues scanned the brain activity of 13 women and 13 men while they performed a task involving simulated slot machines. The women were scanned before and after ovulation.
The fMRI pictures showed that when the women were anticipating a reward, they activated the amygdala and a cortex area behind the eyes that regulates emotion and reward-related planning behavior more during the pre-ovulation phase (four to eight days after their period began) than in the post-ovulatory phase.
When they hit the jackpot and actually won a reward, women in the pre-ovulatory phase activated the striatum and circuit areas linked to pleasure and reward more than when in the post-ovulatory phase.
Both reward anticipation and reward reception were enhanced by estrogen.
The researchers also confirmed that the reward-related brain activity was directly linked to levels of sex hormones. Activity in the amygdala and hippocampus was in lockstep with estrogen levels regardless of cycle phase; activity in these areas was also triggered by progesterone levels while women were anticipating rewards during the post-ovulatory phase. Activity patterns that emerged when rewards were delivered during the post-ovulatory phase suggested that estrogen's effect on the reward circuit might be altered by the presence of progesterone during that period.
So then do women enjoy life less after they've ovulated? Also, do women on birth control pills get more or less pleasure from rewards? Same question for post-menopausal women, who have less estrogen in their bodies: do they get less of a thrill from rewards?
What is going to happen with this information in the long run? Imagine drugs that cause or block the effects of estrogen on reward centers and pleasure-related neurons. Will women choose to have their minds always in the pre-ovulatory state and feel more reward from wins and gains? Or will they choose to block the effects of higher estrogen on their brains?
WEST LAFAYETTE, Ind. - A group of scientists have created a portable refinery that efficiently converts food, paper and plastic trash into electricity. The machine, designed for the U.S. military, would allow soldiers in the field to convert waste into power and could have widespread civilian applications in the future.
"This is a very promising technology," said Michael Ladisch, the professor of agricultural and biological engineering at Purdue University who leads the project. "In a very short time it should be ready for use in the military, and I think it could be used outside the military shortly thereafter."
The "tactical biorefinery" processes several kinds of waste at once, which it converts into fuel via two parallel processes. The system then burns the different fuels in a diesel engine to power a generator. Ladisch said the machine's ability to burn multiple fuels at once, along with its mobility, make it unique.
Roughly the size of a small moving van, the biorefinery could alleviate the expense and potential danger associated with transporting waste and fuel. Also, by eliminating garbage remnants - known in the military as a unit's "signature" - it could protect the unit's security by destroying clues that such refuse could provide to enemies.
It has a favorable ratio of energy output to energy input. But that does not tell us what fraction of the energy in the waste material gets converted into electric energy.
Researchers tested the first tactical biorefinery prototype in November and found that it produced approximately 90 percent more energy than it consumed, said Jerry Warner, founder of Defense Life Sciences LLC, a private company working with Purdue researchers on the project. He said the results were better than expected.
The U.S. Army subsequently commissioned the biorefinery upon completion of a functional prototype, and the machine is being considered for future Army development.
It reduces waste volume by a ratio of 30 to 1. But the article provides no indication of production costs. It starts up running on diesel fuel until its processing apparatus starts producing burnable fuel. At that point the fuel it produces powers continued processing to make more fuel. But most of the fuel produced is usable for other purposes.
If the refinery can be made cheaply enough it could provide supplementary power for a number of uses.
The refinery also could provide supplementary power for factories, restaurants or stores, Ladisch said.
"At any place with a fair amount of food and scrap waste the biorefinery could help reduce electricity costs, and you might even be able to produce some surplus energy to put back on the electrical grid," he said.
Much of the fuel the system combusts is carbon-neutral, said Nathan Mosier, a Purdue professor of agricultural and biological engineering involved in the project.
So what would this unit cost to mass produce and operate? If the waste was free from trash collection then what would the cost be per kilowatt-hour? A much larger unit would probably have lower labor costs per kwh. Also, the portability could be sacrificed for lower operating costs.
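The cost questions above reduce to simple arithmetic once a few numbers are pinned down. Here is a back-of-envelope sketch; since the article provides no cost data, every figure in it (purchase price, output, uptime, upkeep, lifetime) is a made-up assumption chosen only to show how the calculation works.

```python
# Illustrative back-of-envelope economics for a waste-to-electricity unit.
# Every number below is an assumption for the sake of the sketch, not a
# figure from the article.
capital_cost = 850_000.0   # purchase price of the unit, dollars (assumed)
lifetime_years = 10        # service life (assumed)
power_kw = 60.0            # average electrical output, kW (assumed)
uptime = 0.80              # fraction of the year the unit runs (assumed)
annual_upkeep = 40_000.0   # labor + maintenance per year, dollars (assumed)

hours_per_year = 365 * 24
kwh_per_year = power_kw * uptime * hours_per_year
annual_capital = capital_cost / lifetime_years  # straight-line amortization
cost_per_kwh = (annual_capital + annual_upkeep) / kwh_per_year
print(f"{kwh_per_year:,.0f} kWh/yr at ${cost_per_kwh:.3f}/kWh")
```

Under these assumed numbers the unit would be far above grid prices, and the labor term dominates, which is why a much larger stationary unit with lower labor cost per kWh would probably pencil out better, as suggested above.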
Experts at the Centers for Disease Control and Prevention have shown that a molecular change in the 1918 pandemic influenza virus stops its transmission between ferrets in close proximity, shedding light on the properties that allowed the 1918 pandemic virus to spread so quickly and potentially providing important clues that could help scientists assess emerging influenza viruses, such as H5N1.
The study, which is published in the Feb. 5 issue of Science, showed that a modest change of two amino acids in the main protein found on the surface of the 1918 virus did not change the virus's ability to cause disease, but stopped respiratory droplet transmission of the virus between ferrets placed in close proximity. The experiments were conducted with ferrets because their reaction to influenza viruses closely mimics how the disease affects humans.
The 1918 influenza strain killed tens of millions of people. The interest in studying the 1918 strain is driven in part by fear that the avian flu H5N1 might mutate to cause a similarly big killer human pandemic.
But do not panic. This result does not mean that H5N1 bird flu is only 2 mutations away from causing a massive human pandemic. Bird flu probably has additional mutations that make it more suited to spread in birds than in humans.
To spread and cause illness, the influenza virus must first bind to host cells found in humans and animals. The Science study suggests that the hemagglutinin (HA), a type of protein found on the surface of influenza viruses, plays an important role in the 1918 virus's ability to transmit from one host to another efficiently. This research suggests that, for an influenza virus to spread efficiently, the virus's HA must prefer attaching to cells that are found predominately in the human upper airway instead of cells found predominately in the gastrointestinal tracts of birds. Other changes may be necessary as well. Current H5N1 viruses prefer attaching to avian cells, suggesting the virus would need to make genetic changes before it could pass easily between humans.
What I want to know: Will increased knowledge of what makes influenza strains more lethal get used more to reduce the spread of influenza than it will get used by crazies to make lethal strains? Initially I expect this knowledge to be more useful on the side of good. But in the longer run biotechnologies will make the creation of custom virus strains easy for amateurs.
Advances in microfluidics will enable the development of beneficial treatments including full body rejuvenation therapies. But as microfluidic chips become cheaper and the software for controlling these devices becomes more powerful and easier to use, individuals in personal labs in their own bedrooms or cellars will be able to develop customized lethal pathogens.
Reasons for optimism? First off, most people do not want to die. The internet enables large numbers of people to contribute solutions to problems. If malicious biological script kiddies start tossing killer pathogens out into the world's population, the number of people who will organize to develop defenses will far exceed the number generating killer pathogens. Also, those fighting for the defense will probably be much smarter than those who are malicious. I expect people who feel they have low status to tend toward malicious acts.
But will sheer numbers of smart brains be sufficient to defeat malicious makers of dangerous pathogens? Or will defense against designer pathogens run into difficult problems similar to the relative difficulties of stopping versus delivering nuclear bombs? My fear is that the defensive side will be the more difficult.
When it comes to natural pathogens I expect human brains controlling computer simulations and automated equipment to defeat them almost totally. Biological evolution won't be able to keep up with computers and microfluidic devices. So we'll reach a point where most of the remaining big killer pathogens cease to rack up big death tolls.
The New York Times reports that Dutch and other European environmental organizations are shocked to find their support for biomass energy is wrecking rainforests and producing lots of carbon dioxide pollution.
AMSTERDAM, Jan. 25 — Just a few years ago, politicians and environmental groups in the Netherlands were thrilled by the early and rapid adoption of “sustainable energy,” achieved in part by coaxing electrical plants to use biofuel — in particular, palm oil from Southeast Asia.
Next time you hear confident policy recommendations from environmental groups, just remember that some of them are still stupid enough to see biomass energy as a boon to the environment. The mind boggles. Politicians who see biomass as a way to simultaneously appeal to greenies and farmers are only too happy to provide tax subsidies for habitat destruction.
To be fair, not all environmentalists are lame on biomass. Lester Brown keeps warning that biomass has big downsides and he argues that biomass will raise the price of food for poor people. Making energy demand, food demand, and wildlife all compete for the same land means 2 out of 3 lose. The article reports on other environmental organizations that are skeptical about biomass energy.
Bye bye rain forests.
Rising demand for palm oil in Europe brought about the clearing of huge tracts of Southeast Asian rainforest and the overuse of chemical fertilizer there.
Worse still, the scientists said, space for the expanding palm plantations was often created by draining and burning peatland, which sent huge amounts of carbon emissions into the atmosphere.
Indonesia is pumping massive amounts of carbon dioxide into the atmosphere in the name of sustainable energy.
Considering these emissions, Indonesia had quickly become the world’s third-leading producer of carbon emissions that scientists believe are responsible for global warming, ranked after the United States and China, according to a study released in December by researchers from Wetlands International and Delft Hydraulics, both in the Netherlands.
“It was shocking and totally smashed all the good reasons we initially went into palm oil,” said Alex Kaat, a spokesman for Wetlands, a conservation group.
They saw good reasons for tearing down rainforests. Good reasons. If only it hadn't resulted in lots of CO2 release, it would otherwise have been a good idea from the get-go? Hello?
The amount of energy gotten per acre from biomass is much less, by over an order of magnitude, than what the same acre would yield covered with solar panels. Plants have a low efficiency for converting sunlight to chemical energy. Therefore biomass uses a much bigger surface footprint than photovoltaics. Surface footprint size ought to be an important consideration when choosing energy sources. Minimize land use - unless you happen to dislike rain forests and other wildlife areas.
Scientists and engineers can further improve the conversion efficiency of photovoltaics. So when photovoltaics become cheap enough their land needs will be even lower than those of biomass. Plus, in areas with real winters, photovoltaics can still capture many photons and get electrons flowing while the fields grow no crops. Even better, photovoltaics can be placed in deserts and other areas with little biomass per acre. Therefore photovoltaics need not cover areas that naturally have lots of plant matter.
Geothermal and nuclear power both have even lower land area footprints per amount of energy generated than photovoltaics. But some areas used by photovoltaics can be existing human-used surfaces such as the outer surfaces of houses and commercial buildings. Plus, areas with strong winds and lots of sunlight can have both wind towers and photovoltaics.
The appeal of biomass energy is that it is something that can be ramped up quickly. As long as it remains a fairly minor source of energy it won't cause too much damage. But encouraging biomass energy production in rainforest areas is nutty. Growing populations, industrialization, rising demand for wood for housing, rising demand for areas to build housing, and rising demand for food are already causing lots of habitat destruction. Why make it worse?
Holland is using palm oil to burn for electricity. How dumb. Electricity is the easiest energy to produce without emitting carbon dioxide. Build nuclear power plants. Drill for geothermal. Put up windmills (though they have their limits). Or require full carbon sequestration of coal burned for electric power. To the extent that biomass does get used for energy the best use is as liquid fuels for transportation. Vehicles are the hardest things to make carbon neutral. If carbon dioxide emissions reduction is the goal then reserve liquid biomass for transportation and use nuclear, geothermal, wind, and eventually photovoltaics for electricity. Solid biomass (e.g. wood chips) should be burned only in limited amounts for building heat.
Rather than look for quick fixes to reduce fossil fuels in the short term we'd be better off shifting all the money being wasted on biomass energy subsidies toward energy research into photovoltaics, batteries, nuclear, geothermal, and other non-fossil-fuel energy technologies. That doesn't provide instant gratification. But it'll solve our energy problems in the medium to long term by providing us with energy sources that are both cleaner and cheaper than fossil fuels.