Low to moderate alcohol consumption among women is associated with a statistically significant increase in cancer risk and may account for nearly 13 percent of the cancers of the breast, liver, rectum, and upper aero-digestive tract combined, according to a report in the February 24 online issue of the Journal of the National Cancer Institute.
With the exception of breast cancer, little has been known about the impact of low to moderate alcohol consumption on cancer risk in women.
To determine the impact of alcohol on overall and site-specific cancer risk, Naomi Allen, D.Phil., of the University of Oxford, U.K., and colleagues examined the association of alcohol consumption and cancer incidence in the Million Women Study, which included 1,280,296 middle-aged women in the United Kingdom. Participants were recruited to the study between 1996 and 2001. Researchers identified cancer cases through the National Health Service Central Registries.
Women in the study who drank alcohol consumed, on average, one drink per day, which is typical in most high-income countries such as the U.K. and the U.S. Very few drank three or more drinks per day. With an average follow-up time of more than 7 years, 68,775 women were diagnosed with cancer.
Alcohol boosts risks for an assortment of cancers.
Each additional alcoholic drink regularly consumed per day was associated with 11 additional breast cancers per 1000 women up to age 75; one additional cancer of the oral cavity and pharynx; one additional cancer of the rectum; and an increase of 0.7 each for esophageal, laryngeal, and liver cancers. For these cancers combined, there was an excess of about 15 cancers per 1000 women per drink per day. (The background incidence for these cancers was estimated to be 118 per 1000 women in developed countries.)
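The combined excess figure falls straight out of the per-site numbers quoted above, and it also reproduces the "nearly 13 percent" relative increase from the opening paragraph. A quick check (all figures are the ones reported above):

```python
# Excess cancers per 1000 women up to age 75, per additional daily drink,
# by cancer site, as quoted from the Million Women Study summary above.
excess_per_1000 = {
    "breast": 11.0,
    "oral cavity and pharynx": 1.0,
    "rectum": 1.0,
    "esophagus": 0.7,
    "larynx": 0.7,
    "liver": 0.7,
}

total_excess = sum(excess_per_1000.values())
background_incidence = 118  # per 1000 women in developed countries

print(f"Combined excess: about {round(total_excess)} per 1000 women per daily drink")
print(f"Relative increase over background: {total_excess / background_incidence:.0%}")
```

That gives about 15 excess cancers per 1000 women, which against the background rate of 118 per 1000 is the roughly 13 percent relative increase cited at the top.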
Women with higher intake of calcium appear to have a lower risk of cancer overall, and both men and women with high calcium intakes have lower risks of colorectal cancer and other cancers of the digestive system, according to a report in the February 23 issue of Archives of Internal Medicine, one of the JAMA/Archives journals.
Calcium is known to benefit bone health, according to background information in the article. Because of this, the Institute of Medicine recommends 1,200 milligrams of calcium for adults age 50 and older, and the 2005 dietary guidelines for Americans recommend 3 cups per day of low-fat or fat-free dairy products. Studies of dairy products, calcium intake and cancer have revealed different results for different cancer sites.
Those in the top fifth of calcium consumption have a much lower risk of cancer.
"In both men and women, dairy food and calcium intakes were inversely associated with cancers of the digestive system," the authors write. The one-fifth of men who consumed the most calcium through food and supplements (about 1,530 milligrams per day) had a 16 percent lower risk of these types of cancer than the one-fifth who consumed the least (526 milligrams per day). For women, those in the top one-fifth of calcium consumption (1,881 milligrams per day) had a 23 percent lower risk than those in the bottom one-fifth (494 milligrams per day). The decreased risk was particularly pronounced for colorectal cancer. Calcium and dairy food intake was not associated with prostate cancer, breast cancer or cancer in any other anatomical system besides the digestive system.
News you can use. A Yankee Doodle Dandy has a good memory. Who knew?
Doodling while listening can actually help with remembering details, contrary to the common perception that it means the mind is wandering. According to a study published today in the journal Applied Cognitive Psychology, subjects given a doodling task while listening to a dull phone message had 29% better recall than their non-doodling counterparts.
Forty members of the research panel of the Medical Research Council's Cognition and Brain Sciences Unit in Cambridge were asked to listen to a two-and-a-half-minute tape giving several names of people and places, and were told to write down only the names of people going to a party. Twenty of the participants were asked to shade in shapes on a piece of paper at the same time, without paying attention to neatness. Participants were not asked to doodle naturally so that they would not become self-conscious. None of the participants were told it was a memory test.
After the tape had finished, all participants in the study were asked to recall the eight names of the party-goers which they were asked to write down, as well as eight additional place names which were included as incidental information. The doodlers recalled on average 7.5 names of people and places compared to only 5.8 by the non-doodlers.
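The 29% figure quoted in the study summary falls straight out of these recall averages; a quick check:

```python
# Average items recalled (names of people and places), from the figures above.
doodlers_recall = 7.5
non_doodlers_recall = 5.8

improvement = (doodlers_recall / non_doodlers_recall - 1) * 100
print(f"Doodlers recalled about {improvement:.0f}% more items")
```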
Traditionally, stimulating nerves or brain tissue involves cumbersome wiring and a sharp metal electrode. But a team of researchers at Case Western Reserve University is going "wireless."
And it's a unique collaboration between chemists and neuroscientists that led to the discovery of a remarkable new way to use light to activate brain circuits with nanoparticles.
Ben Strowbridge, an associate professor in the neurosciences department in the Case Western Reserve School of Medicine and Clemens Burda, an associate professor in chemistry, say it's rare in science that people from very different fields get together and do something that is both useful and that no one had thought of before. But that is exactly what they've done.
But hey, it uses photovoltaic nanoparticles. At least we'd become environmentally sustainable robots.
By using semiconductor nanoparticles as tiny solar cells, the scientists can excite neurons in single cells or groups of cells with infrared light. This eliminates the need for the complex wiring by embedding the light-activated nanoparticles directly into the tissue. This method allows for a more controlled reaction and closely replicates the sophisticated focal patterns created by natural stimuli.
The electrodes used in previous nerve stimulations don't accurately recreate spatial patterns created by the stimuli and also have potential damaging side effects.
Nanoparticles embedded in tissue would be hard to detect. So a secret agent could get turned into an enemy by some complex layout of embedded nanoparticles.
Their goal is to use it to get around nerve damage. Imagine wireless communication to an embedded device that then shines infrared light on neurons to activate them. One could control nerves in extremities cut off by spinal damage. Or transmit sensory data from extremities to the brain.
"The long-term goal of this work is to develop a light-activated brain-machine interface that restores function following nerve or brain impairments," Strowbridge says. "The first attempts to interface computers with brain circuitry are being done now with complex metal electrode stimulation arrays that are not well suited to recreating normal brain activity patterns and also can cause significant damage."
Powerful neuro-tools for medicine become powerful neuro-tools for other purposes as well.
I can also imagine reasons why a person would want to hand over control of part of their nervous system to an external force. Someone on a diet could program their house computer to prevent them from opening the refrigerator or food cabinets between meals. Any time you tried, the house computer could flash you with a pattern that rendered your arm immobile. Or, hey, exercise without having to think about it. Get a computer to exercise your body while you watch a movie.
While the bulk of concentrating solar now in use is for solar thermal steam electric power generation, that is not its only use. Highly concentrated light shined on photovoltaic (PV) materials greatly lowers the amount of PV material needed. If the concentrator costs less than PV of the same area, then concentrator plus PV can be the cheaper way to go. A Canadian company, Morgan Solar, claims to have a better way to concentrate sunlight for PV solar.
A couple of years ago, Nicolas's brother John Paul Morgan came up with the idea of a solid-state solar concentrator system: a flat, thin acrylic optic that traps light and guides it toward its center. Embedded in the center of Morgan Solar's concentrator is a secondary, round optic made of glass. With a flat bottom and convex, mirrored top, the optic receives the incoming barrage of light at a concentration of about 50 suns and amplifies it to nearly 1,000 suns before bending the light through a 90-degree angle.
Unlike other concentrators, the light doesn't leave the optic before striking a solar cell. Instead, a high-efficiency cell about the size of an infant's thumbnail is bonded directly to the center bottom of the glass optic, where it absorbs the downward-bent light. There's no air gap, and there's no chance of fragile components being knocked out of alignment.
They think they can compete with thin film solar on costs by 2011.
Some business and engineering decisions must still be made, but he expects that the company will be able to build its system for less than $1 per watt by 2011--"and with some vertical integration, considerably less." This would lead to a product close to 30 percent efficient at costs competitive with thin film.
First Solar is the market leader in low-cost thin-film photovoltaics. In Q1 2008 First Solar claimed a manufacturing cost of $1.14 per watt. They aren't sitting still. By Q3 2008 they were claiming $1.08 per watt. Their cost will be even lower in two years' time. Morgan Solar needs to come in under $1 per watt to compete.
Concentrating solar for PV is best used with higher priced PV that has higher conversion efficiencies. The higher cost for materials with higher conversion efficiency does not matter because the amount of PV used is very small. Concentrating solar's ability to compete might end up hinging on how much conversion efficiencies improve. A doubling of conversion efficiency would probably cut Morgan's cost in half. More than a doubling in PV conversion efficiency might be possible.
TEMPE, Ariz.--(BUSINESS WIRE)--Feb. 24, 2009-- First Solar, Inc. (Nasdaq: FSLR) today announced it reduced its manufacturing cost for solar modules in the fourth quarter to 98 cents per watt, breaking the $1 per watt price barrier.
“This achievement marks a milestone in the solar industry’s evolution toward providing truly sustainable energy solutions,” said Mike Ahearn, First Solar chief executive officer. “First Solar is proud to be leading the way toward clean, affordable solar electricity as a viable alternative to fossil fuels.”
First Solar has cut its cost by 16 cents, about 14%, in less than a year. Impressive. Complaints that solar PV costs go down too slowly are starting to sound outdated.
The largest solar thermal plant in the world is now operational in Spain. (thanks "Fat Man" for the heads-up)
The salts—a mixture of sodium and potassium nitrate, otherwise used as fertilizers—allow enough of the sun's heat to be stored that the power plant can pump out electricity for nearly eight hours after the sun starts to set. "It's enough for 7.5 hours to produce energy with full capacity of 50 megawatts," says Sven Moormann, a spokesman for Solar Millennium, AG, the German solar company that developed the Andasol plant. "The hours of production are nearly double [those of a solar-thermal] power plant without storage and we have the possibility to plan our electricity production."
7.5 hours is long enough to provide peak electric power during the peak late afternoon and evening hours of a very hot summer day. The article says this is the first heat storage facility for a concentrated solar facility on this scale. So we need to wait and see how well it works in practice. The $380 million cost for a 50 MW facility sounds pretty pricey to me, especially since this isn't 50 MW of 24x7 power.
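To put that price tag in per-watt terms (my own back-of-envelope arithmetic, not a figure from the article):

```python
# Rough capital cost per watt of nameplate capacity for Andasol 1,
# using the figures quoted above.
plant_cost_usd = 380e6   # reported construction cost
nameplate_watts = 50e6   # 50 MW

cost_per_watt = plant_cost_usd / nameplate_watts
print(f"Capital cost: ${cost_per_watt:.2f} per watt of capacity")
```

That works out to $7.60 per watt, though this covers a whole generating plant with storage, so it is not directly comparable to a bare PV module cost like First Solar's.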
Maybe someone else can make better sense of this SciAm article than I can. They quote only a doubling in electricity cost for electricity from this method as compared to coal. How can solar thermal (i.e. concentrated solar) cost twice as much as coal regardless of whether molten salt storage is used?
All told, that means thermal energy storage at Andasol 1 or power plants like it costs roughly $50 per kilowatt-hour to install, according to NREL's Glatzmaier. But it doesn't add much to the cost of the resulting electricity because it allows the turbines to be generating for longer periods and those costs can be spread out over more hours of electricity production. Electricity from a solar-thermal power plant costs roughly 13 cents a kilowatt-hour, according to Glatzmaier, both with and without molten salt storage systems.
Coal electric with no conventional pollutant emissions would cost more. Add in the cost of carbon capture and then nuclear power becomes cheaper than coal. Concentrating solar isn't going to compete with coal all that much. Nuclear, geothermal, and (with limits) wind are the real competitors to coal because coal is a base load source of electric power. Solar isn't for base load.
That price is still nearly twice as much as electricity from a coal-fired power plant—the current cheapest generation option if environmental costs are not taken into account. But Arizona's APS and others can then use solar energy to meet the maximum electricity demand later in the day. "Our peak demand [for electricity] is later in the evening, once solar production is trailing off," Lockwood says. That's "the reason we went that direction and are so interested in storage technology."
This is the big reason why concentrating solar might have a big future regardless of what happens with photovoltaic prices. The heat generated by concentrating the light is a lot easier to store than electricity. Concentrating solar with salt storage can stretch across the peak demand hours - at least in the US where summer is peak demand time. In Britain peak electric demand is in winter when not a lot of sun shines. So solar power does not work well in Britain.
The 50 MWe AndaSol plant is located in the community of Aldeire in the Marquesado valley in the Province of Granada, Southern Spain. Thanks to the high altitude (1,100 m) and the desert climate, the Marquesado Valley offers exceptionally high annual direct insolation of 2,200 kWh/m² per year. The 549,360 m² parabolic trough solar field is made up of 1,008 EUROTrough collectors, arranged in 168 parallel loops. It will occupy approximately 200 hectares of land. It will deliver live steam at 100 bar/371°C to the reheat steam turbine with a gross cycle efficiency of 38%. With an annual direct normal radiation of 2,200 kWh/m² per year, the AndaSol plant will generate almost 182 million kWh per year of clean solar electricity in 100% solar operation. The plant will be built, owned and operated by the specific project company, Partner 1. Partner 1 will, as a renewable independent power producer, sell the generated solar electricity to the utility under the standard renewable power purchase terms regulated by the Spanish Royal Decree 2818/98. Using solar beam radiation as its primary energy, the solar plant will avoid approximately 172,000 tons of CO2 annually in Southern Spain otherwise emitted by coal and heavy fuel oil power plants in the region.
In that market the 182 million kWh might sell for, say, 10 cents per kWh. It would probably sell for less in the US, where the average retail cost of electricity is about 11 cents per kWh. But let us assume a higher price in Europe. Okay, that would still only amount to $18.2 million per year. That seems like a small return on a few-hundred-million-dollar investment plus operating and maintenance costs. But if this electricity is sold during peak hours maybe it sells for more than 10 cents per kWh? Does a political deal assure a higher price? If so, how much higher?
Let us consider the avoided CO2 emissions. If the 172,000 tons of avoided emissions were taxed at $30 per ton (which is one figure I've heard proposed for how much carbon emissions should be taxed) then the amount of avoided carbon taxes would be only $5.2 million per year. That doesn't improve profitability very much.
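Both back-of-envelope figures above come from simple multiplication. The 10 cents/kWh price and $30/ton tax are my assumptions as stated; the generation and emissions figures are from the press release:

```python
annual_generation_kwh = 182e6   # from the press release
assumed_price_per_kwh = 0.10    # assumed European market price, USD
avoided_co2_tons = 172_000      # from the press release
assumed_tax_per_ton = 30        # one proposed carbon tax level, USD

revenue = annual_generation_kwh * assumed_price_per_kwh
avoided_taxes = avoided_co2_tons * assumed_tax_per_ton

print(f"Electricity revenue: ${revenue / 1e6:.1f} million per year")
print(f"Avoided carbon taxes: ${avoided_taxes / 1e6:.2f} million per year")
```

Even the two figures combined come to under $25 million per year against a few hundred million dollars of capital cost.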
I am curious to know more about the real costs for concentrated solar plus molten salt storage. Anyone have better sources of information on this topic?
A Seattle test of hybrids modified to be rechargeable and theoretically able to run 30 miles on electric power has produced disappointing results so far. Fourteen specially customized plug-in hybrid Toyota Priuses did not do much better than standard Priuses in fuel efficiency. (thanks "Fat Man")
Try 51 miles per gallon, city and highway combined. Not counting the cost of the electricity.
It's what 14 plug-in Priuses averaged after driving a total of 17,636 miles. The pilot project is one of the few in the nation to subject plug-in hybrid cars to regular motor-pool duty, as opposed to being driven by hypermilers or alt-energy enthusiasts.
Vehicles engineered for production quality will probably do better than these customized cars.
The article also points to Google's own fleet of hybrids and plug-in hybrids. On that web page Google provides data on how these vehicles compare in fuel efficiency. Their Ford Escape hybrids are averaging 28.6 mpg while their pluggable versions of the Escape hybrid get 37.7 mpg, a 32% improvement. Not earth-shattering. Their conventional Prius hybrids get 42.8 mpg while their pluggable Priuses get 54.9 mpg, a 28.3% improvement. Again, not exactly the end of the oil era. Google breaks out the numbers by car. The best has done 60.5 mpg. But if you look at single-day results you can find cars hitting 107 mpg.
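The improvement percentages follow directly from Google's fleet averages; a quick check (mpg figures as quoted above):

```python
def mpg_improvement(hybrid_mpg, plugin_mpg):
    """Percent improvement in fuel economy from the plug-in conversion."""
    return (plugin_mpg / hybrid_mpg - 1) * 100

escape_gain = mpg_improvement(28.6, 37.7)  # Ford Escape hybrid vs. plug-in
prius_gain = mpg_improvement(42.8, 54.9)   # Prius hybrid vs. plug-in

print(f"Escape: {escape_gain:.1f}% improvement")
print(f"Prius: {prius_gain:.1f}% improvement")
```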
Why these disappointing results? A fleet car can get driven a lot in a day and run down its batteries. To maximize the benefit of a pluggable hybrid one really needs to drive almost the battery's full range each day but no more. Someone whose daily commute is a little less than the battery's range is the best candidate to get maximal benefit. People who drive too little pay for higher battery costs that take a long time to pay back. People who drive too much run much of the time on gasoline.
I also wonder how motivated fleet car users will be to plug in every time they stop somewhere they can plug in. Then there's the need to stop at places where plugging in is even possible. There's no silver bullet for replacing oil.
Natural selection in the human race has not stopped. Blogger Audacious Epigone uncovers an interesting pattern in General Social Survey data. Those with firm belief in God and those with a firm belief that God doesn't exist make more babies. My take: the genes for doubt and skepticism are getting selected against.
The great back-and-forth between Jason Malloy and Bruce G. Charlton led me to wonder whether the trend of increased fertility with increasing theistic confidence is conscious, or whether it is a subconscious and indirect consequence of values and behaviors not explicitly related to a person's stated ideal family size. BGC suggested that secular women do not just have fewer children than the religious do, but that this stems from a desire to have fewer children to begin with.
From GSS data, I looked at the reported ideal family size* and the actual number of children had, by theistic confidence, among those who had essentially completed their total fertility (age 40-100):
Theistic confidence    Desired  Actual
Don't believe          2.26     2.23
No way to find out     2.25     1.95
Some higher power      2.18     1.98
Believe sometimes      2.37     2.34
Believe with doubts    2.34     2.31
Know God exists        2.58     2.64
The more theistic, the greater the number of ideal children for a completed family to contain. It tracks almost identically with the actual number of children given birth to. That's not too surprising, since people are probably biased towards defining their actual family size as the ideal family size.
Granted, those who believe in God surpass the atheists in fertility. But the biggest doubters have the lowest fertility levels. Either the feeling of certainty boosts fertility or some factor causes both certainty and higher fertility.
As long-time readers know, one of my interests in the future is which way human evolution will go. The DNA sequencing evidence already points in the direction that human evolution has accelerated by orders of magnitude in the last 10,000 years and that we aren't the same humans as those who walked the Earth even a few thousand years ago. An excellent recent book, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, explores these findings in greater detail.
But what of the future? My fear is that the human race will splinter into subspecies that have cognitive dissimilarities that lead to wars of enormous lethality on a scale beyond any wars to date. Will genetic engineering for higher IQ give us the insights to doubt our own feelings of certainty and do a better job of seeing the viewpoints of others? Or will some faction of future transhumans use their greater intellectual abilities to ruthlessly pursue the triumph of their genetically engineered extremely strongly felt moral preferences?
We do not just lose the ability to make hair pigment as we get older. Oh no, it is worse than that. Old hair cells pump out hydrogen peroxide (a toxic compound!) which turns our hair white. (thanks Lou Pagnucco for the heads-up)
Wash away your gray? Maybe. A team of European scientists have finally solved a mystery that has perplexed humans throughout the ages: why we turn gray. Despite the notion that gray hair is a sign of wisdom, these researchers show in a research report published online in The FASEB Journal (http://www.fasebj.org) that wisdom has nothing to do with it. Going gray is caused by a massive build up of hydrogen peroxide due to wear and tear of our hair follicles. The peroxide winds up blocking the normal synthesis of melanin, our hair's natural pigment.
This reinforces a belief I've long held: some of our cosmetic changes as we age aren't just cosmetic. The causes of age-related changes in appearance exact a larger toll on the body. Hydrogen peroxide is toxic. We don't just go gray. Our heads get bathed in a constant release of peroxide. When you see gray hairs in the mirror, think "poison".
"Not only blondes change their hair color with hydrogen peroxide," said Gerald Weissmann, MD, Editor-in-Chief of The FASEB Journal. "All of our hair cells make a tiny bit of hydrogen peroxide, but as we get older, this little bit becomes a lot. We bleach our hair pigment from within, and our hair turns gray and then white. This research, however, is an important first step to get at the root of the problem, so to speak."
My guess is that skin aging similarly causes skin cells to release compounds that harm the rest of the body. If we could rejuvenate our skin cells we'd probably feel better as a result.
Our hair follicle cells do not make enough of the catalase enzyme which breaks down hydrogen peroxide. Would a gene therapy help or do we need cell therapy that replaces the aged cells?
The researchers made this discovery by examining cell cultures of human hair follicles. They found that the build up of hydrogen peroxide was caused by a reduction of an enzyme that breaks up hydrogen peroxide into water and oxygen (catalase). They also discovered that hair follicles could not repair the damage caused by the hydrogen peroxide because of low levels of enzymes that normally serve this function (MSR A and B). Further complicating matters, the high levels of hydrogen peroxide and low levels of MSR A and B, disrupt the formation of an enzyme (tyrosinase) that leads to the production of melanin in hair follicles. Melanin is the pigment responsible for hair color, skin color, and eye color. The researchers speculate that a similar breakdown in the skin could be the root cause of vitiligo.
"As any blue-haired lady will attest, sometimes hair dyes don't quite work as anticipated," Weissmann added. "This study is a prime example of how basic research in biology can benefit us in ways never imagined."
So your cells get too old and they start spewing toxins. That makes them age even more rapidly and things go from bad to worse in a vicious cycle. What's the answer? We need therapies to rejuvenate our bodies. We start out with youthful healthiness. But then things start falling apart. Everything after that is a slow slide into disease.
Black or green, either way you cut your risk of stroking out of life. Don't want to have half your face drooping when the nerves that instruct it die? Don't want to become a drooler or become confined to a wheelchair? Really you should think about these gruesome outcomes and change your behavior accordingly.
Drinking at least three cups of green or black tea a day can significantly reduce the risk of stroke, a new UCLA study has found. And the more you drink, the better your odds of staving off a stroke.
The study results, published in the online edition of Stroke: Journal of the American Heart Association, were presented Feb. 19 at the American Heart Association's annual International Stroke Conference in San Diego, Calif.
The UCLA researchers conducted an evidence-based review of all human observational studies on stroke and tea consumption found in the PubMed and Web of Science archives. They found nine studies describing 4,378 strokes among nearly 195,000 individuals, according to lead author Lenore Arab, a professor of medicine in the division of general internal medicine and health services research at the David Geffen School of Medicine at UCLA.
"What we saw was that there was a consistency of effect of appreciable magnitude," said Arab, who is also a professor of biological chemistry. "By drinking three cups of tea a day, the risk of a stroke was reduced by 21 percent. It didn't matter if it was green or black tea."
And extrapolating from the data, the effect appears to be linear, Arab said. For instance, if one drinks three cups a day, the risk falls by 21 percent; follow that with another three cups and the risk drops another 21 percent.
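If the effect compounds multiplicatively, as the "another 21 percent" phrasing suggests, the reductions don't simply add. A sketch of that reading (my interpretation of the quote, not the study's stated model):

```python
def relative_risk(cups_per_day, reduction_per_three_cups=0.21):
    """Relative stroke risk, assuming each block of three daily cups
    multiplies risk by (1 - 0.21). My reading of the quote above."""
    blocks = cups_per_day // 3
    return (1 - reduction_per_three_cups) ** blocks

print(f"3 cups/day: {1 - relative_risk(3):.0%} lower risk")
print(f"6 cups/day: {1 - relative_risk(6):.0%} lower risk")
```

On this reading, six cups a day would mean roughly a 38 percent overall reduction rather than 42 percent, since the second 21 percent applies to the already-lowered risk.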
This effect was found in tea made from the plant Camellia sinensis, not from herbal teas.
What I want to know: If one eats a lot of berries and other fruits does one get equivalent compounds (e.g. flavonoids) from them that provide the same benefits?
But after considering factors such as cigarette and alcohol consumption, van Dam and his colleagues found that healthy women who consumed two to three cups of caffeinated coffee a day had, on average, a 19 percent lower risk for any kind of stroke than did women who drank less than one cup a month. Drinking four or more cups a day lowered risk by 20 percent.
Women who drank five to seven cups of coffee a week were 12 percent less likely to have a stroke than were those who downed just one cup a month, the study found.
When the results were stratified by smoking status, women who had never smoked or who had quit and drank four cups of coffee or more had a 43% reduced risk of stroke (RR 0.57, 95% CI 0.39 to 0.84).
The study involved 20,040 men and women aged 40-79 years old who were taking part in the European Prospective Investigation into Cancer Study (EPIC). Between 1993 and 1997, participants completed a detailed health and lifestyle questionnaire and underwent a thorough health examination by trained nurses.
Participants scored one point for each of four healthy behaviours: current non-smoking, not being physically inactive, moderate alcohol intake (1-14 units per week) and blood vitamin C levels of 50 µmol/l or more, indicating fruit and vegetable intake of at least five servings a day.
An individual could therefore have a total health behaviour score ranging from zero to four, with a higher score indicating more protective behaviour.
Participants were then followed for an average of 11 and a half years. Strokes were recorded using death certificates and hospital discharge data.
There were a total of 599 incident strokes during the follow-up period. After adjusting for other factors that may have affected the results, the risk of stroke was 2.3 times greater in those with a score of zero compared to those with a score of four.
A significantly higher percentage of women scored four compared to men.
The risk of stroke increased in linear fashion with every point decrease in health behaviour score. So, for example, those with a score of two were one and a half (1.58) times more likely to have a stroke than those with a score of four, while those with a score of just one were just over twice (2.18) as likely to have a stroke.
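The reported risk ratios are roughly consistent with a constant multiplicative step per point lost. A quick check of the implied per-point multiplier (my own arithmetic on the figures above):

```python
# Relative stroke risk versus a perfect health behaviour score of four,
# using the ratios reported above.
observed_rr = {2: 1.58, 1: 2.18, 0: 2.3}
best_score = 4

for score, rr in observed_rr.items():
    points_lost = best_score - score
    per_point = rr ** (1 / points_lost)
    print(f"score {score}: RR {rr} implies about {per_point:.2f}x per point lost")
```

Each lost point multiplies risk by roughly 1.2 to 1.3 across all three comparisons.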
Heather Mac Donald points out that even though the state of California is trying to close a $42 billion budget deficit, the people of California are so adamantly opposed to a gasoline tax increase that the state legislature opted to increase the sales tax rather than enact a carbon tax.
So did a proposed 12-cents-a-gallon surcharge on gas make it into the crippling $12.8 billion in tax hikes which the California legislature finally passed yesterday? Of course not. Voters would raise bloody hell. Better, apparently, to kill all businesses slowly with a sales tax hike than to interfere with Californians’ right to cheap gasoline. Liberal politicians’ pious devotion to the science of global warming never translates into action, unless the costs of action can be safely transferred onto non-voters. And environmental groups are just as cowardly. I sure didn’t notice the Sierra Club or the NRDC protesting when presidential candidate Hillary Clinton called for a suspension of the federal gas tax last year.
This is not an amazing result. Gasoline taxes are so unpopular that their levels haven't even kept up with inflation for funding road maintenance. I realize some of you support a carbon tax because you are worried about global warming. But in spite of the fact that California enacted a law in 2006 to cut carbon dioxide emissions 25% by 2020, the people of California are not willing to pay even a small price to achieve this goal. This has important ramifications for the global warming policy debate.
How unpopular are higher gasoline taxes in the US for roads and bridges? In August 2008 a poll found nearly two thirds of Americans opposed higher gasoline taxes to fix bridges. In July 2007 an overwhelming majority of Americans opposed a 50 cent gasoline tax.
Eighty-six percent (86%) of Americans oppose a proposal to increase gasoline taxes by 50 cents a gallon. A Rasmussen Reports national telephone survey found that just 8% favor such a tax hike.
Consider the contrast with cigarette taxes. Since most people do not smoke and they see cigarettes as harmful it is easy for many governments to impose high taxes on cigarettes and other tobacco products.
Eighty-five percent of the 1,018 adults polled opposed an increase in the federal gasoline tax, suggesting that politicians have good reason to steer away from so unpopular a measure. But 55 percent said they would support an increase in the tax, which has been 18.4 cents a gallon since 1993, if it did in fact reduce dependence on foreign oil. Fifty-nine percent were in favor if the result was less gasoline consumption and less global warming. The margin of sampling error is plus or minus three percentage points.
But so far this qualified support hasn't translated into a tax rise. Governor Deval Patrick of Massachusetts is proposing a 19 cent gasoline tax increase to help close the Massachusetts budget gap. Note that even if he succeeds the amount is so small that it will have minimal impact on gasoline demand. Several other states are considering gasoline tax increases to cut budget deficits. But again, even if these taxes are enacted the increases are small and fall far short of the high gasoline taxes in Europe that have pushed so many Europeans into very small cars.
Politicians who want to reduce carbon dioxide emissions will continue to find back door ways to do this where the higher costs are hidden from most voters. California and other states and countries have enacted requirements on utilities to get more electricity from renewables. This causes electricity prices to rise. But there's no identifiable tax on utility bills for this purpose. So few voters think to complain.
Tax credits and other subsidies for solar, wind, other renewables, and nuclear represent another way around popular opposition to fossil fuel energy taxes: tax something else and then use the cash to subsidize non-fossil-fuel energy sources. I happen to like this approach as a way to reduce the growth of coal usage for generating electricity. Due to conventional pollutants alone, I wish we either burned much less coal or imposed stricter regulations on coal-burning plants to cut mercury, particulate, and other pollutant emissions. But the coal industry has done an excellent job of obstructing regulations to reduce pollutants.
Probably the most dramatic way that government policy attempts to work around opposition to fossil fuel taxes is with car fuel efficiency regulation. The US federal government and state governments have enacted regulatory requirements for more fuel-efficient cars. A more economically efficient way to reduce fuel usage is a tax. But this more economically efficient method isn't used because the public is too opposed. Hence the regulations to force car makers to build more fuel-efficient cars.
I expect Peak Oil will eventually drive US gasoline prices up to levels close to Europe's current levels. I hope the rise in gasoline prices due to Peak Oil won't be so sudden and severe that our economy is crippled as a result. One thing a higher gasoline tax would do now is provide incentives to get ready for Peak Oil before it hits in full force. But the popular opposition to high gasoline taxes effectively precludes the optimal amount of preparation needed for Peak Oil.
Update: Think you can follow the global warming debate from political op-ed columns? Check out Carl Zimmer's run-down of a newspaper's mistakes in fact checking reports about ice cap areas.
The proposal also takes advantage of the analytical expertise of a company in the Purdue Research Park, Bioanalytical Systems Inc., and Purdue's PRIME Lab, a one-of-a-kind rare isotope laboratory. The PRIME Lab's accelerator mass spectrometer will allow researchers to monitor bone loss in 50 days that otherwise would take two to four years, Weaver said.
"For our osteoporosis study, for example, we'll be able to use just nine people and test them for seven different products in two years," Weaver said. "Without the PRIME lab, it would take two years to test just one product."
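Weaver's quoted figures imply a large throughput gain, which is easy to check (a back-of-envelope sketch assuming the two-year periods are directly comparable):

```python
# Throughput implied by Weaver's numbers
with_prime = 7 / 2   # products tested per year with the PRIME Lab
without    = 1 / 2   # products tested per year without it
print(with_prime / without)        # -> 7.0, a sevenfold throughput gain

# Per-measurement speedup: 50 days versus two to four years
print(2 * 365 / 50, 4 * 365 / 50)  # -> 14.6 29.2, roughly 15x to 29x faster
```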
A general trend that continues into the future: faster and cheaper ways to do scientific and medical research. But I still want a time machine that'll let me jump ahead 30 years to get rejuvenation therapies immediately.
H.G. Wells missed this. But the biggest benefit of a time machine would be full body rejuvenation. Just jump far enough ahead that you come out when stem cell therapies and other strategies for engineered negligible senescence have become mature, safe, and cheap. Of course, you might land in a police state or a Borg consciousness.
Indeed, a new generation of smartphones like the G1, with Android software developed by Google, and a range of Japanese phones now “augment” reality by painting a map over a phone-screen image of the user’s surroundings produced by the phone’s camera.
Why experience reality without enhancements? I see phone-based reality augmentation as a transitory step. What we really need are glasses with a heads-up display that overlays information about whatever is in front of us or whatever we want to know about.
Using a hand to manipulate the phone is a waste of a valuable extremity. We need to be able to think what we want the phone to do. The "Phone Company" in the Cold War classic paranoid movie The President's Analyst tried to convince James Coburn's character to recommend that the US President have phones embedded in everyone's brains. Then just think a phone number and the phone would dial it. Well, we need something like that.
Phones that know where you are and tell your friends and associates where to find you take away privacy but provide greater connectivity. Phones can also tell you things about your immediate surroundings.
Increasingly, phones will allow users to look at an image of what is around them. You could be surrounded by skyscrapers but have an immediate reference map showing your destination and features of the landscape, along with your progress in real time. Part of what drives the emergence of map-based services is the vast marketing potential of analyzing consumers’ travel patterns. For example, it is now possible for marketers to identify users who are shopping for cars because they have traveled to multiple car dealerships.
Imagine databases of criminals. Your "phone" could tell you when it sees a criminal near you and show you the list of convictions for that criminal. Or how about a phone that watches what you buy in a grocery store and warns you when you pick up something that violates your diet?
Functional magnetic resonance imaging (fMRI) looks more and more like a window into the mind. In a study published online today in Nature, researchers at Vanderbilt University report that from fMRI data alone, they could distinguish which of two images subjects were holding in their memory--even several seconds after the images were removed. The study also pinpointed, for the first time, where in the brain visual working memory is maintained.
Don't become too attached to your privacy. If you do, then you'll surely miss it when it is gone.
Berkeley, CA — A new study on the installed costs of solar photovoltaic (PV) power systems in the U.S. shows that the average cost of these systems declined significantly from 1998 to 2007, but remained relatively flat during the last two years of this period.
Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) who conducted the study say that the overall decline in the installed cost of solar PV systems is mostly the result of decreases in nonmodule costs, such as the cost of labor, marketing, overhead, inverters, and the balance of systems.
The flat costs in 2006 and 2007 might be because of large government subsidies that drove up module costs.
The decline averaged 3.5% per year. I expect the 2009 decline to be much larger due to the contracting economy.
The study examined 37,000 grid-connected PV systems installed between 1998 and 2007 in 12 states. It found that average installed costs, in terms of real 2007 dollars per installed watt, declined from $10.50 per watt in 1998 to $7.60 per watt in 2007, equivalent to an average annual reduction of 30 cents per watt or 3.5 percent per year in real dollars.
The researchers found that the reduction in nonmodule costs was responsible for most of the overall decline in costs. According to the report, this trend, along with a reduction in the number of higher-cost “outlier” installations, suggests that state and local PV-deployment policies have achieved some success in fostering competition within the industry and in spurring improvements in the cost structure and efficiency of the delivery infrastructure for solar power.
The full 42-page report is downloadable as a PDF. The report says that total installed costs are lower in Japan and Germany. Germany has bigger government incentives for installations, which probably created economies of scale in installation. Japan is cheapest, at only about three quarters of the US cost. Both module and non-module costs flattened in the 2005-2007 period.
These cost reductions, however, have not occurred steadily over time. From 1998-2005, average costs declined at a relatively rapid pace, with average annual reductions of $0.4/W, or 4.8% per year in real dollars. From 2005 through 2007, however, installed costs remained essentially flat. During this period, U.S. and global PV markets expanded significantly, creating shortages in the supply of silicon for PV module production and putting upward pressure on PV module prices. As documented in the next section, however, silicon shortages are not the sole cause for the cessation of price declines during 2005-2007, as average non-module costs also remained relatively flat over this period.
Non-module costs are almost half of total costs. So when we read about declines in PV module costs keep in mind even if module costs went to zero the total cost for residential solar PV would remain pretty high.
As shown, capacity-weighted average costs declined from $10.5/W in 1998 to $7.6/W in 2007, equivalent to an average annual reduction of $0.3/W, or 3.5%/yr in real dollars. Using this method, the decline in total average PV installed costs since 1998 appears to be primarily attributable to a drop in non-module costs, which fell from approximately $5.7/W in 1998 to $3.6/W in 2007, a reduction of $2.1/W (or 73% of the $2.9/W drop in total installed costs over this period). In comparison, module index prices dropped by only $0.8/W from 1998-2007, and increased somewhat from 2003-2007. As with the trend in total installed costs, however, average non-module costs remained relatively stable from 2005-2007.
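The report's headline figures are internally consistent, as a quick arithmetic check shows:

```python
# Check the report's headline numbers (real 2007 dollars per watt)
total_1998, total_2007 = 10.5, 7.6
nonmod_1998, nonmod_2007 = 5.7, 3.6

# Compound annual rate of decline over the 9-year span
cagr = (total_2007 / total_1998) ** (1 / 9) - 1
print(round(100 * cagr, 1))   # -> -3.5, i.e. about 3.5% per year

# Share of the total drop attributable to non-module costs
share = (nonmod_1998 - nonmod_2007) / (total_1998 - total_2007)
print(round(100 * share))     # -> 72, close to the report's ~73% (rounding)
```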
The overall 3.5% yearly decline rate means progress has been slow. Will it continue to be slow, or will we reach a critical mass where price declines become much more rapid?
A new paper in Nature argues that in intact African forests the total biomass is increasing. Note the important qualifier. This refers to biomass in those forests which still exist.
Tropical forests hold more living biomass than any other terrestrial ecosystem. A new report in the journal Nature by Lewis et al. shows that not only do trees in intact African tropical forests hold a lot of carbon, they hold more carbon now than they did 40 years ago--a hopeful sign that tropical forests could help to mitigate global warming. In a companion article, Helene Muller-Landau, staff scientist at the Smithsonian Tropical Research Institute, says that understanding the causes of this African forest carbon sink and projecting its future is anything but straightforward.
The paper argues that per acre or hectare of mature forest the amount of carbon held is rising.
Growing trees absorb carbon. Dead, decomposing trees release carbon. Researchers expect growth and death to approximately balance each other out in mature, undisturbed forests, and thus for total tree carbon stocks, the carbon held by the trees, to remain approximately constant. Yet Lewis and colleagues discovered that on average each hectare (100 x 100 meters, or about 2.47 acres) of apparently mature, undisturbed African forest was increasing in tree carbon stocks by an amount equal to the weight of a small car each year. Previous studies have shown that Amazonian forests also take up carbon, although at somewhat lower rates.
One possible cause: the rise in atmospheric carbon dioxide (CO2) could basically fertilize the plant life so that it grows more rapidly and densely.
"If you assume that these forests should be in equilibrium, then the best way to explain why trees are growing bigger is anthropogenic global change – the extra carbon dioxide in the atmosphere could essentially be acting as fertilizer," says Muller-Landau. "But it's also possible that tropical forests are still growing back following past clearing or fire or other disturbance. Given increasing evidence that tropical forests have a long history of human occupation, recovery from past disturbance is almost certainly part of the reason these forests are taking up carbon today."
The boost due to higher CO2 won't continue indefinitely. Other nutrients become rate-limiting. Also, more biomass means more food for herbivores, so their numbers grow and they eat more of the greenery.
Globally, tropical trees in undisturbed forest are absorbing nearly a fifth of the CO2 released by burning fossil fuels.
The researchers show that remaining tropical forests remove a massive 4.8 billion tonnes of CO2 emissions from the atmosphere each year. This includes a previously unknown carbon sink in Africa, mopping up 1.2 billion tonnes of CO2 each year.
Published today in Nature, the 40 year study of African tropical forests–one third of the world's total tropical forest–shows that for at least the last few decades each hectare of intact African forest has trapped an extra 0.6 tonnes of carbon per year.
The scientists then analysed the new African data together with South American and Asian findings to assess the total sink in tropical forests. Analysis of these 250,000 tree records reveals that, on average, remaining undisturbed forests are trapping carbon, showing that they are a globally significant carbon sink.
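The mix of carbon and CO2 units above can be reconciled with the molar-mass ratio 44/12 (CO2 to carbon). The implied forest area at the end is my own rough inference from the article's numbers, not a figure from the study:

```python
# Converting between tonnes of carbon and tonnes of CO2 (molar masses 12 and 44)
C_TO_CO2 = 44 / 12

african_sink_co2 = 1.2e9                 # tonnes CO2/yr absorbed (from the article)
african_sink_c = african_sink_co2 / C_TO_CO2
print(round(african_sink_c / 1e9, 2))    # -> 0.33 billion tonnes of carbon per year

# At 0.6 tonnes of carbon per hectare per year, the implied intact-forest area:
implied_ha = african_sink_c / 0.6
print(round(implied_ha / 1e6))           # -> 545 million hectares
```

That works out to roughly 5.5 million square kilometers of intact African tropical forest, which is at least the right order of magnitude.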
I expect growing and industrializing populations to eventually tap a lot of this increased biomass. Cellulosic technologies for making ethanol will eventually decline in cost to the point that wood as a source of car fuel becomes economically competitive. Then all that increased biomass will become valuable as an energy source. Also, the demand for land for cattle and other livestock will grow with rising living standards, which allow people to eat more meat. Picture a billion Chinese people eating as much meat as Americans.
These new “secondary” forests are emerging in Latin America, Asia and other tropical regions at such a fast pace that the trend has set off a serious debate about whether saving primeval rain forest — an iconic environmental cause — may be less urgent than once thought. By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics on land that was once farmed, logged or ravaged by natural disaster.
At least in some countries the amount of land returning to forest may be greater than the amount of forest getting cut down.
In Panama by the 1990s, the last decade for which data are available, the rain forest was being destroyed at a rate of 1.3 percent each year. The area of secondary forest was increasing by more than 4 percent yearly, Dr. Wright estimates.
With the heat and rainfall in tropical Panama, new growth is remarkably fast. Within 15 years, abandoned land can contain trees more than 100 feet high. Within 20, a thick rain-forest canopy forms again. Here in the lush, misty hills, it is easy to see rain-forest destruction as part of a centuries-old cycle of human civilization and wilderness, in which each in turn is cleared and replaced by the other. The Mayans first cleared lands here that are now dense forest. The area around Gamboa, cleared when the Panama Canal was built, now looks to the untrained eye like the wildest of jungles.
The new-growth forests cannot support as many species as the old-growth forests. So this increase in biomass is not a solution to the habitat and species loss problem.
Dr. Wright, looking at a new forest, sees possibility. He says new research suggests that 40 to 90 percent of rain-forest species can survive in new forest.
Dr. Laurance focuses on what will be missing, ticking off species like jaguars, tapirs and a variety of birds and invertebrates.
Six studies published in the past year by a Cornell researcher add to growing evidence that an apple a day -- as well as daily helpings of other fruits and vegetables -- can help keep the breast-cancer doctor away.
You might need to eat 6 apples a day to get the full benefit.
In one of his recent papers, published in the Journal of Agricultural and Food Chemistry (57:1), Rui Hai Liu, Cornell associate professor of food science and a member of Cornell's Institute for Comparative and Environmental Toxicology, reports that fresh apple extracts significantly inhibited the size of mammary tumors in rats -- and the more extracts they were given, the greater the inhibition.
"We not only observed that the treated animals had fewer tumors, but the tumors were smaller, less malignant and grew more slowly compared with the tumors in the untreated rats," said Liu, pointing out that the study confirmed the findings of his preliminary study in rats published in 2007.
In his latest study, for example, he found that a type of adenocarcinoma -- a highly malignant tumor and the main cause of death of breast-cancer patients, as well as of animals with mammary cancer -- was evident in 81 percent of tumors in the control animals. However, it developed in only 57 percent, 50 percent and 23 percent of the rats fed low, middle and high doses of apple extracts (the equivalent of one, three and six apples a day in humans), respectively, during the 24-week study.
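For a rough sense of the dose response, these are the relative reductions implied by the reported adenocarcinoma incidences (my arithmetic on the article's numbers, not figures from the paper itself):

```python
# Adenocarcinoma incidence in the 24-week rat study
control = 0.81                         # 81% of tumors in control animals
doses = {"low (1 apple/day equiv.)": 0.57,
         "mid (3 apples/day)": 0.50,
         "high (6 apples/day)": 0.23}

for label, rate in doses.items():
    reduction = 1 - rate / control     # relative reduction vs. controls
    print(f"{label}: {100 * reduction:.0f}% relative reduction")
# high dose comes out to about a 72% relative reduction
```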
Fruits and vegetables are the ticket. Eat more of them and less of everything else.
Some Berkeley researchers believe that starting with cheaper raw materials is the road to much cheaper photovoltaics.
Berkeley -- Unconventional solar cell materials that are as abundant but much less costly than silicon and other semiconductors in use today could substantially reduce the cost of solar photovoltaics, according to a new study from the Energy and Resources Group and the Department of Chemistry at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory (LBNL).
These materials, some of which are highly abundant, could expand the potential for solar cells to become a globally significant source of low-carbon energy, the study authors said.
The analysis, which appeared online Feb. 13 in Environmental Science & Technology, examines the two most pressing challenges to large-scale deployment of solar photovoltaics as the world moves toward a carbon neutral future: cost per kilowatt hour and total resource abundance. The UC Berkeley study evaluated 23 promising semiconducting materials and discovered that 12 are abundant enough to meet or exceed annual worldwide energy demand. Of those 12, nine have a significant raw material cost reduction over traditional crystalline silicon, the most widely used photovoltaic material in mass production today.
Silicon crystals and some of the elements in photovoltaic thin films (e.g. the indium and gallium in CIGS, copper indium gallium selenide) are expensive and in limited supply. So how can they scale? By contrast, iron is far more plentiful and cheaper.
The team identified a large material extraction cost (cents/watt) gap between leading thin film materials and a number of unconventional solar cell candidates, including iron pyrite, copper sulfide and copper oxide. They showed that iron pyrite is several orders of magnitude better than any alternative on important metrics of both cost and abundance. In the report, the team referenced some recent advances in nanoscale science to argue that the modest efficiency losses of unconventional solar cell materials would be offset by the potential for scaling up while saving significantly on materials costs.
Will materials manipulated on the nanoscale be craftable into higher efficiency photovoltaics?
New animal research in the February 18 issue of The Journal of Neuroscience may indicate how certain diseases make people feel so tired and listless. Although the brain is usually isolated from the immune system, the study suggests that certain behavioral changes suffered by those with chronic inflammatory diseases are caused by the infiltration of immune cells into the brain. The findings suggest possible new treatment avenues to improve patients' quality of life.
Chronic inflammatory diseases like rheumatoid arthritis, inflammatory bowel disease, psoriasis, and liver disease cause "sickness behaviors," including fatigue, malaise, and loss of social interest. However, it has been unclear how inflammation in other organs in the body can impact the brain and behavior.
The researchers found that in mice with inflamed livers, white blood cells called monocytes infiltrated the brain. These findings support previous research demonstrating the presence of immune cells in the brain following organ inflammation, challenging the long-held belief that the blood-brain barrier prevents immune cells from accessing the brain.
The researchers identified chemicals that encouraged immune system monocytes to enter the brain.
"Using an experimental model of liver inflammation, our group has demonstrated for the first time the existence of a novel communication pathway between the inflamed liver and the brain," said the study's senior author Mark Swain, MD, Professor of Medicine at the University of Calgary.
Swain and his colleagues found that liver inflammation triggered brain cells called microglia to produce CCL2, a chemical that attracts monocytes. When the researchers blocked CCL2 signaling, monocytes did not enter the brain despite ongoing inflammation in the liver.
Liver inflammation also stimulated cells in the blood to make an immune chemical (TNFα). When the researchers blocked the signaling of this immune chemical, microglia produced less CCL2, and monocytes stayed out of the brain.
This is usable information because there are lots of ways to decrease the level of inflammation in your body. You can eat tart cherries, pistachios, grapes, vegetables, and omega 3 fatty acids from fish to cut your body's level of inflammation. Exercise helps too. Feeling fatigued? You might need a better diet and more exercise.
Farmers across the tropics might raze forests to plant biofuel crops, according to new research by Holly Gibbs, a postdoctoral researcher at Stanford's Woods Institute for the Environment.
"If we run our cars on biofuels produced in the tropics, chances will be good that we are effectively burning rainforests in our gas tanks," she warned.
Policies favoring biofuel crop production may inadvertently contribute to, not slow, the process of climate change, Gibbs said. Such an environmental disaster could be "just around the corner without more thoughtful energy policies that consider potential ripple effects on tropical forests," she added.
Gibbs' predictions are based on her new study, in which she analyzed detailed satellite images collected between 1980 and 2000. The study is the first to do such a detailed characterization of the pathways of agricultural expansion throughout the entire tropical region. Gibbs hopes that this new knowledge will contribute to making prudent decisions about future biofuel policies and subsidies.
Of course, expanding populations are a bad idea too.
Tearing down rain forests to plant biofuel crops causes huge carbon dioxide emissions that far exceed any reduced CO2 emissions caused by using biomass energy in place of oil.
"If biofuels are grown in place of forests, we're actually going to end up emitting a huge amount of carbon. When trees are cut down to make room for new farmland, they are usually burned, sending their stored carbon to the atmosphere as carbon dioxide. That creates what's called a carbon debt," Gibbs said. "This is because the carbon lost from deforestation is much greater than the carbon saved from using the current-generation biofuels."
Indeed, tropical forests are the world's most efficient storehouses for carbon, harboring more than 340 billion tons, according to Gibbs' research. This is equivalent to more than 40 years worth of global carbon dioxide emissions from burning fossil fuels.
Gibbs' previous findings asserted that the carbon debt incurred from cutting down a tropical forest could take several centuries or even millennia to repay through carbon savings produced from the resultant biofuels.
The scientists found that addressing the land-based carbon is essential for stabilizing greenhouse gases at low levels. Overall, land contains 2,000 billion tons of carbon, compared to the 750 billion tons in the atmosphere. In addition, forests hold more carbon than grazing does. Converting land from forest to food or bioenergy crops releases carbon into the atmosphere. Conversely, turning agricultural land back into forests tucks carbon away on land, reducing it in the atmosphere.
Now, I think that tearing down all the forests to enable the human population to continue to grow is a bad idea for other reasons. I don't see any benefit for existing people from the addition of another billion people and I see a lot of costs.
Some University of Minnesota researchers argue that the health and environmental costs of cellulosic ethanol are much lower than the costs of gasoline and corn ethanol.
Total environmental and health costs of gasoline are about 71 cents per gallon, while an equivalent amount of corn-ethanol fuel costs from 72 cents to about $1.45, depending on the technology used to produce it. An equivalent amount of cellulosic ethanol, however, costs from 19 cents to 32 cents, depending on the technology and type of cellulosic materials used.
But if existing rain forests are harvested for the cellulose, then the net effect would be to increase the amount of carbon dioxide in the atmosphere. Cellulosic technology can play a useful role in processing lawn clippings and other biomass wastes. But a big scale-up to produce cellulose from dedicated crops is a bad idea.
Eating just one more serving of green leafy vegetables or three more servings of fruit a day reduces the risk of developing Type II diabetes, according to results of data analysis performed by researchers in the Tulane School of Public Health and Tropical Medicine and the Harvard School of Public Health. The research team also found that one serving of fruit juice a day increased the risk of Type II diabetes in women.
Age and obesity both increase the risk of insulin-resistant diabetes. That type of diabetes, just like the other type in which the immune system attacks insulin-producing cells, accelerates the aging of the whole body. You really want to avoid this. Fortunately you can make dietary choices that'll cut your risk of insulin-resistant diabetes. Familiar good foods are good for avoiding diabetes as well.
Tulane epidemiologist Dr. Lydia Bazzano says, “Based on the results of our study, people who have risk factors for diabetes may find it helpful to fill up on leafy greens like lettuces, kale and spinach and whole fruits, like apples, bananas, oranges and watermelon rather than drink fruit juices, which deliver a big sugar load in a liquid form that gets absorbed rapidly.”
Eat vegetables. Eat fruits. Then eat more vegetables and some more fruits.
Bazzano, an assistant professor of epidemiology, cautioned that since this is one of the first studies to separate fruit juice consumption from fruits as a whole, the association between juice and diabetes must be confirmed by additional research.
She and her team analyzed 18 years worth of diet and health data from 71,346 nurses who participated in the Nurses’ Health Study from 1984 to 2002. The women were all between 38 and 63 years old and diabetes-free when the study began. Approximately 7 percent of the participants developed diabetes over the course of the study.
WASHINGTON — For years, Senator Arlen Specter of Pennsylvania has been the National Institutes of Health’s most ardent champion on Capitol Hill. Having survived two bouts with cancer, open-heart surgery and even a faulty diagnosis of Lou Gehrig’s disease, he has long insisted that research that results in medical cures is the best service that government can provide.
But even lobbyists are stunned by the coup Mr. Specter pulled off this week. In return for providing one of only three Republican votes in the Senate for the Obama administration’s $787 billion economic stimulus package, he was able to secure a 34 percent increase in the health agency’s budget — to $39 billion from $29 billion.
Specter made a good deal. The medical research spending will speed the rate of advance in our understanding of the causes of diseases. The fiscal stimulus bill was going to pass anyway. He made sure it accomplished at least one constructive result.
We are all getting older every day. The faster medical research moves forward the sooner we will reach the day when we can start turning back the clock on our bodies and gradually make ourselves young again.
I suspect such a large single-year increase in NIH funding will have the effect of getting riskier and less mainstream grant proposals approved. This might let some less conventional approaches get investigated and perhaps open up some fruitful lines of research.
First impressions are highly influential, despite the well-worn admonition not to judge a book by its cover. Within a tenth of a second of seeing an unfamiliar face we have already made a judgement about its owner's character - caring, trustworthy, aggressive, extrovert, competent and so on (Psychological Science, vol 17, p 592). Once that snap judgement has formed, it is surprisingly hard to budge. What's more, different people come to strikingly similar conclusions about a particular face - as shown in our own experiment (see "The New Scientist face experiment").
People also act on these snap judgements. Politicians with competent-looking faces have a greater chance of being elected, and CEOs who look dominant are more likely to run a profitable company. Baby-faced men and those with compassionate-looking faces tend to be over-represented in the caring professions. Soldiers deemed to look dominant tend to rise faster through the ranks, while their baby-faced comrades tend to be weeded out early. When baby-faced men appear in court they are more likely than their mature-faced peers to be exonerated from a crime. However, they are also more likely to be found guilty of negligence.
While we have clear tendencies to expect different facial appearances to be associated with different kinds of personalities, the article reports that scientists are still uncertain how accurate our judgments about faces and personalities actually are.
When offspring genetic engineering becomes possible, will people choose to make their kids' faces look trustworthy and dominant?
Picture a drug based on RNA as a mini computer program aimed at running in our cells rather than in a silicon computer. Such a drug in theory could carry out much more complex behaviors than conventional simpler chemical compounds. Stanford researchers are working on RNA-based drugs that would only turn on in cancer cells.
Current treatments for diseases like cancer typically destroy nasty malignant cells, while also hammering the healthy ones. Using new advances in synthetic biology, researchers are designing molecules intelligent enough to recognize diseased cells, leaving the healthy cells alone.
"We basically design molecules that actually go into the cell and do an analysis of the cellular state before delivering the therapeutic punch," said Christina Smolke, assistant professor of bioengineering who joined Stanford University in January.
This is the sort of approach we need to wipe out cancer. The current chemo drugs are nowhere near specific enough in the cells they target. The whole body ends up getting damaged. Also, the rates of failure for chemo are very high for many types of cancer.
The trick is to activate only in the presence of biomarker materials that are characteristic of cancer cells. That's a tough job because human cancer cells are human cells. Coming up with suitable biomarkers and ways to make RNA react to them is not easy.
"When you look at a diseased cell (e.g. a cancer cell) and compare it to a normal cell, you can identify biomarkers—changes in the abundance of proteins or other biomolecule levels—in the diseased cell," Smolke said. Her research team has designed molecules that trigger cell death only in the presence of such markers. "A lot of the trick with developing effective therapeutics is the ability to target and localize the therapeutic effect, while minimizing nonspecific side effects," she said.
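The "analyze the cellular state before delivering the therapeutic punch" logic can be pictured as a boolean check over biomarker levels. This is a purely illustrative toy model, not Smolke's actual RNA design; the marker names and thresholds are hypothetical:

```python
# Toy model of condition-triggered therapy: act only when the measured
# biomarker profile matches a "diseased" signature. Marker names and
# thresholds are hypothetical illustrations, not values from the study.
DISEASE_SIGNATURE = {"marker_A": 5.0, "marker_B": 3.0}  # minimum abundances

def should_trigger(cell_state):
    """Return True only if every marker meets its disease threshold."""
    return all(cell_state.get(marker, 0.0) >= threshold
               for marker, threshold in DISEASE_SIGNATURE.items())

print(should_trigger({"marker_A": 6.2, "marker_B": 4.1}))  # True  (diseased-like)
print(should_trigger({"marker_A": 6.2, "marker_B": 0.5}))  # False (healthy-like)
```

The real engineering challenge, as the article notes, is building RNA molecules whose folding and activity physically implement this kind of conditional logic inside a cell.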
Smolke will present the latest applications of her lab's work at the American Association for the Advancement of Science (AAAS) meeting in Chicago on Friday, Feb. 13.
These designer molecules are created through RNA-based technologies that Smolke's lab developed at the California Institute of Technology. A recent example of these systems, developed with postdoctoral researcher Maung Nyan Win (who joined Smolke in her move to Stanford), was described in a paper published in the Oct. 17, 2008, issue of Science.
"We do our design on the computer and pick out sequences that are predicted to behave the way we like," Smolke said. When researchers generate these sequences inside the operating system of a cell, they reprogram the cell and change its function. "Building these molecules out of RNA gives us a very programmable and therefore powerful design substrate," she said.
The ability to selectively kill all cancer cells in the body would not only put an end to cancer as a killer but also open the door to many more therapies for other diseases. Hormone replacement therapies that increase the risk of cancer would no longer pose that problem. So we could jack up our aging metabolisms with hormones and pay less of a price for doing so.
Turning toxic drugs on only in cancer cells is a sort of hacking approach to drug activation; it is an obvious idea, and other groups are working on it. See my 2004 post DNA Nanomachine Computers Against Cancer.
According to a recent study published online in The FASEB Journal (http://www.fasebj.org), diets rich in omega-3 fatty acids protect the liver from damage caused by obesity and the insulin resistance it provokes. This research should give doctors and nutritionists valuable information when recommending and formulating weight-loss diets and help explain why some obese patients are more likely to suffer some complications associated with obesity. Omega-3 fatty acids can be found in canola oil and fish.
A reduction in the risk of insulin-resistant diabetes is a big win. Insulin resistance causes many things (higher heart attack risk, faster brain aging, and much more) to go wrong as we age.
"Our study shows for the first time that lipids called protectins and resolvins derived from omega-3 fatty acids can actually reduce the instance of liver complications, such as hepatic steatosis and insulin resistance, in obese people," stated Joan Claria, a professor from the University of Barcelona and one of the researchers involved in the work.
You need your protectins and resolvins.
The scientists found that two types of lipids in omega-3 fatty acids—protectins and resolvins—were the cause of the protective effect. To reach this conclusion, they studied four groups of mice with an altered gene making them obese and diabetic. One group was given an omega-3-enriched diet and the second group was given a control diet. The third group was given docosahexaenoic acid, and the fourth received only the lipid resolvin. After five weeks, blood serum and liver samples from the test mice were examined. The mice given the omega-3-rich diet exhibited less hepatic inflammation and improved insulin tolerance. This was due to the formation of protectins and resolvins from omega-3 fatty acids.
If you demand understanding of a molecular mechanism for why a nutrient will protect you before you will act, now you know the mechanism exists. Get omega-3 fatty acids into your diet.
In the study, published online by the Journal of Clinical Endocrinology and Metabolism, Teff and her collaborators studied 17 obese men and women. Each was admitted two times to the Clinical and Translational Research Center at the University of Pennsylvania. On each admission, the subjects were given identical meals and blood was collected from an intravenous catheter over a 24-hour period. The only difference was the sweetener used in the beverages that accompanied the meals; beverages were sweetened with glucose during one admission and with fructose during the other.
Blood triglyceride levels were higher when subjects drank fructose-sweetened beverages with their meals compared to when they drank glucose-sweetened beverages. The total amount of triglycerides over a 24-hour period was almost 200 percent higher when the subjects drank fructose-sweetened beverages.
Although fructose increased triglyceride levels in all of the subjects, this effect was especially pronounced in insulin-resistant subjects, who already had increased triglyceride levels. Insulin resistance is a pre-diabetic condition often associated with obesity.
What I would like to know: Can eating fruits cause the same effect?
A drug under development boosts lean muscle mass in older adults. It is pretty gutsy for a drug company to do active development on what is basically a rejuvenation drug - albeit one of very limited scope.
An investigational drug that stimulates the body to produce more growth hormone improves lean muscle mass and physical function in older adults, potentially helping to combat frailty, according to researchers at Duke University Medical Center, VA Puget Sound Health Care System, the University of Washington School of Medicine, and 10 other study centers.
The Phase II study is the first to show improvements in physical performance among at-risk seniors taking capromorelin, an oral compound developed by Pfizer, which can help the body release more growth hormone. Older adults have greatly reduced production of growth hormone, which regulates metabolism and aids in the building of muscle mass even after adolescent growth has been completed.
But can it improve function without causing side effects that decrease life expectancy? Ideally the drug will increase functioning and increase life expectancy. But that's hard to do. Our bodies wear out as we age. Attempts to stimulate them to function better can backfire. Look at the problems with replacement hormones for postmenopausal women. Hormone replacement isn't guaranteed to yield a net benefit.
Cures for cancer will eventually change that calculus: hormone therapies that boost cancer risk but deliver benefits in other areas will probably become net wins once cancer cures are available.
In a comprehensive report on the subject this morning, Collins Stewart solar analyst Dan Ries notes that spot market poly prices have fallen from a peak of about $450/kg in mid-2008 to the $130-$150/kg range more recently. That’s a pretty dramatic move - but the decline is far from over.
Ries contends that spot prices by mid-2009 will plunge to the $40-$60/kg level, due to a severe oversupply.
One analyst expects PV prices to drop so far that PV will start to compete with other methods of generating electricity.
The silver lining here is that in the long run, much lower prices for polysilicon are the most direct way to bring solar electricity production costs down far enough to compete with conventional utility-scale power generation. With poly in the $40-$60/kg range, he says, module prices would drop to the $1.70-$2/watt range, and utility-scale projects could produce power for 11 cents per kilowatt-hour. At that rate, he says, solar would be “reasonably competitive” with combined cycle natural gas facilities and wind turbines.
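As a rough sanity check on those numbers, here is a minimal levelized-cost sketch. The module price comes from the analyst's projection above; the balance-of-system cost, capacity factor, and lifetime are my own assumptions, and financing, degradation, and maintenance are ignored, so this is a back-of-envelope figure rather than the analyst's model:

```python
# Back-of-envelope levelized cost of energy for utility-scale PV.
# Module price is from the article; everything else is assumed.

def simple_lcoe(module_usd_per_watt, bos_usd_per_watt=1.8,
                capacity_factor=0.20, lifetime_years=25):
    """Return an approximate energy cost in USD per kWh (no financing)."""
    total_cost = module_usd_per_watt + bos_usd_per_watt  # $ per installed W
    # Lifetime output of one watt of capacity, in kWh:
    lifetime_kwh = capacity_factor * 24 * 365 * lifetime_years / 1000
    return total_cost / lifetime_kwh

# Module at the midpoint of the projected $1.70-$2/W range:
lcoe = simple_lcoe(1.85)
print(f"~{lcoe * 100:.0f} cents/kWh")  # lands near the quoted 11 cents once
                                       # financing costs are layered on top
```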
The world may be running out of oil. But it is not running out of energy. We can shift to solar, wind, and nuclear. We just need great batteries for the cars.
Analysts at HSBC forecast average selling prices for solar systems will drop by about a fifth in 2009 given oversupply and a tighter credit environment, but prices for cells and modules have so far fallen much faster than those for silicon and wafers.
If you are thinking about putting PV on your house now is not the time to do it. Wait a year and save big.
In the U.S. solar industry, the ripple effects of the crisis extend all the way to the panels that homeowners put on their roofs. The price of solar panels has fallen by 25 percent in six months, according to Rhone Resch, president of the Solar Energy Industries Association, who said he expected a further drop of 10 percent by midsummer.
For homeowners, however, the savings will not be as substantial, partly because panels account for only about 60 percent of total installation costs.
The largest series of solar installations in history, more than 1,300 megawatts, is planned for the desert outside Los Angeles, according to a new deal between the utility Southern California Edison and solar power plant maker BrightSource.
The technology isn't the familiar photovoltaics — the direct conversion of sunlight into electricity — but solar thermal power, which concentrates the sun's rays to create steam in a boiler and spin a turbine.
The big solar and wind facility builds in California are driven by state government requirements on electric utility companies to get more electric power from renewables. This announcement tells me that the utilities see solar concentrators as still cheaper than PV.
Varda Shalev, M.D., and colleagues at Maccabi Healthcare Services and Sackler Faculty of Medicine, Tel Aviv, Israel, analyzed data from 229,918 adults (average age 57.6) enrolled in a health maintenance organization who began taking statins between 1998 and 2006. This included 136,052 individuals without heart disease (primary prevention group), who were followed for an average of four years, and 93,866 already diagnosed with heart disease (secondary prevention group), with an average five years of follow-up. Researchers checked pharmacy records to calculate the proportion of days that each individual took statins.
During the study, 4,259 patients in the primary prevention group and 8,906 in the secondary prevention group died. In both groups, continuity of taking statins—defined as taking statins for at least 90 percent of the follow-up period—conferred at least a 45 percent reduction in the risk of death compared with patients who took statins less than 10 percent of the time. The risk reduction was stronger among patients with high levels of LDL cholesterol at the beginning of the study and among patients whose initial treatment was with high-efficacy statins.
"In conclusion, this study showed that the continuation of statin treatment provided an ongoing reduction in all-cause mortality [death] for up to 9.5 years among patients with and without a history of coronary heart disease," they continue. "The observed benefits from statins were greater than expected from randomized clinical trials, emphasizing the importance of promoting statin therapy and increasing its continuation over time for both primary and secondary prevention."
If you have high cholesterol do something to lower it. Take statins if you can't be bothered to radically change your diet. Or take statins and radically change your diet. Or at least change your diet. On the other hand, if you have a death wish I don't have any arguments to offer for why to take statins. But maybe if you changed your diet for the better you might feel better and less inclined to die.
Eat the Mediterranean Diet enhanced with the Ape or Portfolio Diet. The Portfolio Diet rivals statins in cholesterol lowering ability. But popping a pill requires less effort.
Update: Keep in mind that for a minority of statin users the side effects are severe and even damaging (e.g. myopathy - muscle damage). Try diet first. If you are willing to change your diet you can cut your cholesterol and also cut your risks of many diseases besides cardiovascular diseases. For example, the Mediterranean Diet will cut your risks of cancer. Also, as I point out in the comments, genetic tests that will identify those who will suffer side effects are probably on the way. The research on genetic variants responsible for statin side effects looks promising.
MADISON — The ability to empathize with others is partially determined by genes, according to new research on mice from the University of Wisconsin-Madison and Oregon Health and Science University (OHSU).
In the study, a highly social strain of mice learned to associate a sound played in a specific cage with something negative simply by hearing a mouse in that cage respond with squeaks of distress. A genetically different mouse strain with fewer social tendencies did not learn any connection between the cues and the other mouse's distress, showing that the ability to identify and act on another's emotions may have a genetic basis. The new research will be published Wednesday, Feb. 11, in the Public Library of Science ONE journal at http://dx.plos.org/10.1371/journal.pone.0004387.
Like humans, mice can automatically sense and respond to others' positive and negative emotions, such as excitement, fear or anger. Understanding empathy in mice may lead to important discoveries about the social interaction deficits seen in many human psychosocial disorders, including autism, schizophrenia, depression and addiction, the researchers say. For example, nonverbal social cues are frequently used to identify early signs of autism in very young children.
"The core of empathy is being able to have an emotional experience and share that experience with another," says UW-Madison graduate student Jules Panksepp, who led the work along with undergraduate QiLiang Chen. "We are basically trying to deconstruct empathy into smaller functional units that make it more accessible to biological research."
Here comes a question that is predictable for long time readers (at least those with the right genetic complement): Will people choose to make their genetically engineered offspring more or less empathetic than the average human is now?
People with the short serotonin transporter gene, 5-HTTLPR (two copies of the short allele), relative to those with the long version of that polymorphism (at least one copy of the long allele), invested 28 percent less in a risky investment. Similarly, people who carry the 7-repeat allele of the DRD4 gene in the dopamine family, relative to those carrying other versions of that gene, invested about 25 percent more in a risky investment.
"Our research pinpoints, for the first time, the roles that specific variants of the serotonin transporter gene and the dopamine receptor gene play in predicting whether people are more or less likely to take financial risks," said Camelia M. Kuhnen, assistant professor of finance, Kellogg School of Management at Northwestern. "It shows that individual variability in our genetic makeup affects economic behavior."
"Genetic Determinants of Financial Risk Taking" will be published online Wednesday, Feb. 11, by the open-access journal PLoS ONE. The study's co-investigators are Kuhnen and Joan Y. Chiao, assistant professor of psychology at Northwestern.
Prior research linking the two genetic variants of 5-HTTLPR and DRD4 to, respectively, negative emotion and addiction behaviors suggested to the Northwestern researchers that those particular brain mechanisms could play a role in financial risk-taking. But until the Northwestern study, the identification of specific genes underlying financial-risk preferences remained elusive.
When people gain the ability to choose genes for their offspring will they opt for a shorter or longer version of this gene? How will future humans differ from present humans genetically?
We should use more nuclear, wind, solar, and geothermal power now so we can save fossil fuels to use later to delay the next ice age.
Professor Shaffer made long projections over the next 500,000 years with the DCESS Earth System Model to calculate the evolution of atmospheric CO2 for different fossil fuel emission strategies. He also used results of a coupled climate-ice sheet model for the dependency on atmospheric CO2 of critical summer solar radiation at high northern latitudes for an ice age onset.
The results show global warming of almost 5 degrees Celsius above present for a "business as usual" scenario whereby all 5000 billion tons of fossil fuel carbon in accessible reserves are burned within the next few centuries. In this scenario the onset of the next ice age was postponed to about 170,000 years from now.
However, for a management scenario whereby fossil fuel use was reduced globally by 20% in 2020 and 60% in 2050 (compared to 1990 levels), maximum global warming was less than one degree Celsius above present. Similar reductions in fossil fuel use have been proposed by various countries like Germany and Great Britain.
In this scenario, combustion pulses of large remaining fossil fuel reserves were then tailored to raise atmospheric CO2 content high and long enough to parry forcing of ice age onsets by summer radiation minima as long as possible. In this way our present equable interglacial climate was extended for about 500,000 years, three times as long as in the "business as usual" case.
Sounds like a good idea to me. Though we could always use nuclear fusion reactions to drive synthetic production of methane for a much more powerful greenhouse gas.
In any case, we've already kicked the next ice age 55,000 years into the future. So we've got that going for us. Which is nice.
"It appears to be well established that the strong ice ages the Earth has experienced over the past million years were ushered in by declining levels of atmospheric CO2. Our present atmospheric CO2 level of about 385 parts per million is already higher than before the transition to these ice ages," Professor Shaffer notes, adding that "The Earth's orbit is nearly circular at present, meaning that the present minimum in summer radiation at high northern latitudes is not very deep. We have already increased atmospheric CO2 enough to keep us out of the next ice age for at least the next 55,000 years for this orbital setup."
Those born in the late summer and early autumn are around half a centimetre taller and have wider bones than their peers born in winter and spring, an 18 year project found.
Expectant mothers lucky enough to be blooming in the hot months should get enough sun to boost their vitamin D levels just by walking around outside or even sunbathing.
But winter parents should consider taking vitamin supplements, researchers at Bristol University recommended.
The largest genetic effect by far comes from the region on chromosome six containing the gene variant known as DRB1*1501 and from adjacent DNA sequences. Whilst one in 1,000 people in the UK are likely to develop MS, this number rises to around one in 300 amongst those carrying a single copy of the variant and one in 100 of those carrying two copies.
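Those incidence figures translate into simple relative risks. A quick sketch, using only the article's numbers (the arithmetic is mine):

```python
# Relative MS risk implied by the article's incidence figures
# for the DRB1*1501 variant.

baseline = 1 / 1000     # general UK population
one_copy = 1 / 300      # carriers of one copy of the variant
two_copies = 1 / 100    # carriers of two copies

print(f"one copy:   ~{one_copy / baseline:.1f}x baseline risk")   # ~3.3x
print(f"two copies: ~{two_copies / baseline:.0f}x baseline risk") # ~10x
```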
Now, in a study funded by the UK's MS Society, the MS Society of Canada, the Wellcome Trust and the Medical Research Council, researchers at the University of Oxford and the University of British Columbia have established a direct relationship between DRB1*1501 and vitamin D.
The researchers found that proteins activated by vitamin D in the body bind to a particular DNA sequence lying next to the DRB1*1501 variant, in effect switching the gene on.
"In people with the DRB1 variant associated with MS, it seems that vitamin D may play a critical role," says co-author Dr Julian Knight. "If too little of the vitamin is available, the gene may not function properly."
"We have known for a long time that genes and environment determine MS risk," says Professor George Ebers, University of Oxford. "Here we show that the main environmental risk candidate – vitamin D – and the main gene region are directly linked and interact."
So mom and baby should get lots of vitamin D to grow big and strong and avoid MS.
DNA sequencing costs are falling so far so fast that in 10 years DNA sequencing of babies will be commonplace at birth. Cuckolds will learn of their plight while standing outside hospital delivery rooms.
Every baby born a decade from now will have its genetic code mapped at birth, the head of the world's leading genome sequencing company has predicted.
A complete DNA read-out for every newborn will be technically feasible and affordable in less than five years, promising a revolution in healthcare, says Jay Flatley, the chief executive of Illumina.
Only social and legal issues are likely to delay the era of “genome sequences”, or genetic profiles, for all. By 2019 it will have become routine to map infants' genes when they are born, Dr Flatley told The Times.
Of course, this won't be commonplace in the poorer countries. But in industrialized countries a complete DNA sequence at birth will come to be seen as prudent for many reasons. Most obvious: Before the father's name gets placed on the birth certificate the hospital will verify just who is dad.
Genetic diseases that cause damage when the wrong foods are consumed will be known about from the start. Also, knowledge of genetic factors that contribute to autism might eventually become useful to help initiate treatment that'll alter the direction of brain development to make the disorder less severe.
Why else get sequenced at birth? To transfer the data to exclusive competitive kindergartens and grade schools which will of course evaluate applications for admission of Jill and Johnnie at least partially based on their genetic potential.
Some of the plants were exposed to atmospheric CO2 levels of 550 parts per million (ppm), the level predicted for the year 2050 if current trends continue. These were compared to plants grown at ambient CO2 levels (380 ppm).
The results were striking. At least 90 different genes coding for the majority of enzymes in the cascade of chemical reactions that govern respiration were switched on (expressed) at higher levels in the soybeans grown at high CO2 levels. This explained how the plants were able to use the increased supply of sugars from stimulated photosynthesis under high CO2 conditions to produce energy, Leakey said. The rate of respiration increased 37 percent at the elevated CO2 levels.
The enhanced respiration is likely to support greater transport of sugars from leaves to other growing parts of the plant, including the seeds, Leakey said.
"The expression of over 600 genes was altered by elevated CO2 in total, which will help us to understand how the response is regulated and also hopefully produce crops that will perform better in the future," he said.
To fully exploit the agricultural benefits of high CO2 will likely require genetic engineering to tailor plant genes to operate optimally in a high CO2 environment.
But will the rains still come when the CO2 rises? Or will warming cause drying in soy crop areas? That is hard to know at this point.
Still not ready to shift to the Mediterranean diet of low dairy products, low red meat, and low saturated fats? Not ready to eat more fish, fruit, vegetables, nuts, beans, and whole grains? You can cut your risk of cognitive impairment and Alzheimer's disease by eating the Mediterranean diet.
Nikolaos Scarmeas, M.D., and colleagues at Columbia University Medical Center, New York, calculated a score for adherence to the Mediterranean diet among 1,393 individuals with no cognitive problems and 482 patients with mild cognitive impairment. Participants were originally examined, interviewed, screened for cognitive impairments and asked to complete a food frequency questionnaire between 1992 and 1999.
Over an average of 4.5 years of follow-up, 275 of the 1,393 who did not have mild cognitive impairment developed the condition. Compared with the one-third who had the lowest scores for Mediterranean diet adherence, the one-third with the highest scores for Mediterranean diet adherence had a 28 percent lower risk of developing mild cognitive impairment and the one-third in the middle group for Mediterranean diet adherence had a 17 percent lower risk.
Among the 482 with mild cognitive impairment at the beginning of the study, 106 developed Alzheimer's disease over an average 4.3 years of follow-up. Adhering to the Mediterranean diet also was associated with a lower risk for this transition. The one-third of participants with the highest scores for Mediterranean diet adherence had 48 percent less risk and those in the middle one-third of Mediterranean diet adherence had 45 percent less risk than the one-third with the lowest scores.
The Mediterranean diet may improve cholesterol levels, blood sugar levels and blood vessel health overall, or reduce inflammation, all of which have been associated with mild cognitive impairment. Individual food components of the diet also may have an influence on cognitive risk. "For example, potentially beneficial effects for mild cognitive impairment or mild cognitive impairment conversion to Alzheimer's disease have been reported for alcohol, fish, polyunsaturated fatty acids (also for age-related cognitive decline) and lower levels of saturated fatty acids," they write.
My advice: wade gradually into the diet. The key isn't eating less of bad foods as much as getting better foods onto your plate. Focus on cooking more with beans, eating raw vegetables as snacks, and other moves to put better foods into your mouth. The better foods will displace the lousier foods if you reach for the better foods first. Here is the original paper.
The 2007 PLoS Pathogens study, by researchers at Mt. Sinai School of Medicine in New York, looked at the effects of temperature and relative humidity on transmission of influenza using influenza-infected guinea pigs in climate-controlled chambers. The researchers used 20 different combinations of temperature and relative humidity in an effort to identify a trigger point for changes in transmission of the virus between infected guinea pigs and adjacent control animals.
In general, the study found that there were more infections when it was colder and drier. However, Shaman and Kohn demonstrated that relative humidity could only explain about 12 percent of the variability of influenza virus transmission from these data. In addition, numerous other experiments, dating back to the 1940s, have shown that low relative humidity favors increased influenza virus survival.
However, in their PNAS analysis, Shaman and Kohn demonstrated that relative humidity only explains about 36 percent of influenza virus survival. The Oregon researchers then retested the various data using absolute humidity and found a dramatic rise in accounting for both transmission (50 percent, up from 12 percent) and survival (90 percent, up from 36 percent).
The cold air holds less water. Absolute humidity drops. That increases flu virus survival.
Keep this news in mind the next time we get a big killer flu pandemic. But individuals can't do much to up the absolute humidity in our environment. We can do something in our homes with humidifiers. But most of our exposure to infectious people comes outside the home. Few of us have any control over the temperature or humidity where we work or shop or go for services such as in dental and medical offices.
Residential energy credits of 30 percent of the cost of certain improvements are available to homeowners, Ms. Weltman noted, with caps of $500 to $2,000, depending on the improvement. They include qualified expenditures on equipment for solar electric power, solar water heating, fuel cells, wind energy and geothermal heat pumps, and for their installation. More traditional energy-saving improvements, like increasing insulation to use less heating oil or natural gas, do not qualify.
Home energy efficiency differs from car energy efficiency in fundamental ways. Most notably, one gives up comfort, convenience, and utility when shifting to a smaller and more fuel efficient vehicle. By contrast, many home efficiency improvements increase comfort. Sealed gaps cut down on drafts. Multi-paned windows with argon gas fill reduce the coldness one can feel in the winter when standing next to a window.
So if home energy efficiency improvements have such big upsides (along with payback times short enough to justify) why do so many homes lag in their energy efficiency? Why don't more government policies push for energy efficiency to the same extent that government policies push for car fuel efficiency? My guess: car manufacturing is a lot more centralized and standardized as compared to home construction. It is a lot easier for governments to focus on a couple dozen car makers than thousands of home builders and remodelers.
A similar problem is posed for individual home owners. One can't precisely estimate how much a given home energy upgrade will cut heating and cooling bills. The uncertainty in the size of the benefit probably reduces home energy efficiency investments below the optimum. Even if one is buying a new appliance that is precisely rated on energy efficiency, one doesn't necessarily know the energy efficiency of the older appliance being replaced or how much one uses that particular appliance.
Policy changes about cars can also accomplish faster changes. Houses have much longer lifecycles than cars. Within about 6 or 7 years after the first car built to a new fuel efficiency specification rolls off the assembly line, changes in policies for new cars ripple into over half the vehicle miles driven.
Housing construction regulations are also much more local, for both historical and practical reasons. The ideal energy efficiency design elements for a house on a Maine hillside are different from those for a house in Florida, Brazil, Alaska, London, or Darwin, Australia. The ideal design on that Maine hillside also differs depending on whether the house is to be built on a southward- or northward-facing side of the hill. The sun beating down in Arizona makes photovoltaic economics far more favorable there than in Edinburgh, Scotland. Cloud cover and length of daylight vary by season and latitude. Attempts to mandate higher housing efficiency through regulation might meet with success in a small flat country with a consistent climate throughout. But in a larger nation, building efficiency regulations are problematic at the national level.
Still, as Will Stewart points out, big energy and cost savings are to be had from better design of homes and commercial buildings.
For example, DoE’s NREL monitored one passive solar house built 16 years ago, finding primary energy cost savings of 56% compared to similar houses built to the Model Energy Code; small tweaks in the design could have realized a total of 70% energy savings. It may surprise some, but this house cost no more to build than other homes in the neighborhood. With regard to commercial and government buildings, an initial upfront investment of up to $100,000 to incorporate green building features into a $5 million project would result in a savings of at least $1 million over the life of the building, assumed conservatively to be 20 years.
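That commercial-building example amounts to a simple payback calculation. A quick sketch using only the figures quoted above (ignoring discounting, which would shrink the result somewhat):

```python
# Simple payback on the green-building example: up to $100,000 upfront
# on a $5 million project, returning at least $1 million over an
# assumed 20-year building life. All figures are from the article.

upfront = 100_000
savings_total = 1_000_000
life_years = 20

annual_savings = savings_total / life_years   # $50,000 per year
payback_years = upfront / annual_savings      # 2 years to break even
roi_multiple = savings_total / upfront        # savings are 10x the outlay

print(payback_years, roi_multiple)
```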
Still, one can take the regulatory attempts too far. Britain's net-zero requirements for energy efficiency of new buildings look like a case of pursuing diminishing marginal returns into negative returns (though I'd be happy if someone can prove me wrong on this point).
Stepping up a level, the Passive House architectural movement (originating in Germany) has been realizing designs that save 75-90% of a building’s primary energy use. Architecture 2030, an independent research organization, understands the strides that can be made in building energy efficiency. In 2006, it initiated the 2030 Challenge, which calls for a 50% reduction in new buildings' fossil fuel energy use by 2010, and net-zero energy use by 2030. The UK is much more aggressive, with a net-zero requirement for all of its new buildings by 2016.
Also see Will Stewart's passive solar series on The Oil Drum.
Kissing cuts down stress hormone levels in men and women. Think about that after a stressful day or week.
Hill wanted to find out just what happens to evoke such a powerful emotional response from simply rubbing lips. Her research looked at the impact of kissing on levels of two hormones, oxytocin and cortisol, in 15 male-female couples before and after holding hands and before and after kissing.
Oxytocin is known to be involved in social bonding so the researchers predicted that its levels would rise, while cortisol, a stress hormone, would fall. The results showed cortisol levels fell in both sexes, although oxytocin levels rose in men but fell in women.
But at least in the original non-romantic environment of a university health center, only the men experienced a bond-forming boost of the hormone oxytocin. Does this mean that men get hooked by kisses but women don't? Maybe not. These scientists are about to announce whether women too get an oxytocin boost from kissing in more romantic settings.
The scientists have since replicated the tests in more intimate settings, to see if the less-than-alluring environment of the university health centres where the original research was carried out hampered women's hormonal surge.
The final results will be presented at the annual conference of the American Association for the Advancement of Science in Chicago this week.
So guys, you might need to be careful where you kiss a woman if you don't want her getting the upper hand. Before you know it you'll be wrapped around her finger and she won't feel the same.
Oxytocin, a hormone involved in child-birth and breast-feeding, helps people recognize familiar faces, according to new research in the January 7 issue of The Journal of Neuroscience. Study participants who had one dose of an oxytocin nasal spray showed improved recognition memory for faces, but not for inanimate objects.
"This is the first paper showing that a single dose of oxytocin specifically improves recognition memory for social, but not for nonsocial, stimuli," said Ernst Fehr, PhD, an economist at the University of Zurich who has studied oxytocin's effect on trust and is unaffiliated with the new study. "The results suggest an immediate, selective effect of the hormone: strengthening neuronal systems of social memory," Fehr said.
Michael Kanellos takes a look at the high cost of liquid biodiesel fuels from algae and the prospects for lowering their costs.
Algae biofuel startup Solix, for instance, can produce biofuel from algae right now, but it costs about $32.81 a gallon, said Bryan Wilson, a co-founder of the company and a professor at Colorado State University.
But by using waste heat (e.g., from electric power plants), Solix claims it can get costs down to $5.50 per gallon. The article reports that this cost is equivalent to sustained $150 per barrel oil.
By exploiting waste heat at adjacent utilities (one of our favorite forms of energy around here), the price can probably be brought down to $5.50 a gallon (see Will Waste Heat Be Bigger Than Solar?). By selling the proteins and other byproducts from the algae for pet food, the price can be brought to $3.50 a gallon in the near term.
Beyond that Solix claims to have ways to get the cost down to below the equivalent of $80 per barrel oil. But suppose they can just get it down to $150 per barrel. That would be great news for the post-Peak Oil era since it'd put a long term ceiling on the price of liquid fuel that would allow a functioning industrial civilization not much different than what we currently have.
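A rough way to sanity-check those per-gallon versus per-barrel equivalences: a US oil barrel is 42 gallons, so crude at $150/barrel is about $3.57/gallon before refining and distribution. The refining/distribution margin below is an assumed round number for illustration, not a figure from the article:

```python
GALLONS_PER_BARREL = 42  # US oil barrel

def crude_equivalent_per_gallon(oil_price_per_barrel, refining_margin=1.90):
    """Approximate pump-equivalent fuel price per gallon for a given
    crude price. The $1.90/gal refining/distribution margin is an
    assumed illustrative figure, not from the article."""
    return oil_price_per_barrel / GALLONS_PER_BARREL + refining_margin

# At $150/barrel, crude alone is ~$3.57/gal; with the assumed margin
# the fuel-equivalent price lands near the $5.50/gal Solix target.
print(round(crude_equivalent_per_gallon(150), 2))
```

Under these assumptions a $5.50/gallon algae fuel does indeed compete only when oil sustains roughly $150/barrel, which is why getting costs down toward the $80/barrel equivalent matters so much.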
During our current deepening recession these prices all sound high. But the big drop in oil prices has been caused by a sharp decline in demand. Economic recovery (whenever that comes) will drive prices back up again. So if Solix can survive long enough to get its costs down it could have bright prospects.
The February issue of Biodiesel Magazine reports on a number of recent funding successes for biodiesel players including $10.5 million for Solix. Kinda surprising given the recession and low oil prices. Investors must see higher oil prices and brighter prospects in the longer term.
Fort Collins, Colo.-based Solix Biofuels, a technology partner of Colorado State University, raised $10.5 million in a Series A round of outside financing and reached an agreement with its investors for an additional $5 million to support the construction of a pilot-scale algae oil production plant in Durango, Colo. The oil produced at the facility will be used by biodiesel producers and the chemical industry, according to Chief Executive Officer Doug Henston. I2BF Venture Capital and Bohemian Investments led the Series A round. Southern Ute Alternative Energy LLC, Valero Energy Corp. and Infield Capital also participated.
As long time readers know, I see crop-based biomass energy as a bad idea. But algae holds out the prospect of a much smaller land footprint, less impact on the environment, and less susceptibility to weather and other environmental factors.
Obviously, analysis of medical records for millions of people cannot substitute for randomized trials of new treatments that are not yet in use. However, some UPenn researchers claim to have demonstrated that, at least in some cases, different treatments can be compared by examining computerized medical records.
PHILADELPHIA – For years controversy has surrounded whether electronic medical records (EMR) would lead to increased patient safety, cut medical errors, and reduce healthcare costs. Now, researchers at the University of Pennsylvania School of Medicine have discovered a way to get another bonus from the implementation of electronic medical records: testing the efficacy of treatments for disease.
Often illnesses have multiple potential treatments. In many cases the best treatment is not known. But randomized medical trials are expensive and pose ethical problems as well. If only we had more electronic data about courses of treatment and outcomes, in theory it might be possible to tease out of the data which sorts of treatment work best for different categories of patients.
An obvious bias with medical records is that doctors likely used characteristics of patients in determining which treatment to give them. But as the sum total of all research-accessible medical records covers an increasing length of time even that problem is surmountable by comparing records from time periods before and after treatments came into widespread use. Also, countries, regions, and individual doctors differ in terms of preferred treatments. These preferences on the part of individual doctors serve to partially randomize patient treatment choices.
What is surprising to me is that this press release claims that their use of electronic medical record (EMR) databases in this manner is a first of its kind. Is that true?
In the first study of its kind, Richard Tannen, M.D., Professor of Medicine at the University of Pennsylvania School of Medicine, led a team of researchers to find out if patient data, as captured by EMR databases, could be used to obtain vital information as effectively as randomized clinical trials, when evaluating drug therapies. The study appeared online last week in the British Medical Journal.
“Our findings show that if you do studies using EMR databases and you conduct analyses using new biostatistical methods we developed, we get results that are valid,” Tannen says. “That’s the real message of our paper — this can work.”
I expect medical records to become much more useful due to several factors: 1) more years of data tracked per person; 2) more medical tests per person; 3) eventually, detailed DNA sequencing for each person; and 4) eventually, home sensors that will continually collect information about each person from sinks, toilets, cameras, and even bedstand sensors that monitor breathing and other biosigns. Adverse drug reactions will be compared against DNA sequencing results. Bedstand monitors will catch early signs of sleep apnea and insomnia.
While I view most fiscal stimulus program elements as wasteful, the plan to boost electronic medical records collection will pay off with higher quality care and an acceleration in the rate of advance of biomedical research.
In January 2009, President Barack Obama unveiled plans to implement electronic medical records nationwide within five years, arguing that such a plan was crucial in the fight against rising health care costs. Of the nearly $900 billion in Obama’s planned stimulus package currently before the United States Senate, $20 billion is proposed for electronic health records.
Once we get home medical sensors and medical sensors embedded in our bodies, the business model for medical care will change drastically. Rather than going to a doctor to describe your symptoms so that the doc can order tests, continual streams of sensor readings will flow into expert systems running on web servers. If you start feeling ill you'll be able to call up a web page where you can describe your symptoms. The system might recommend a sensor pill to swallow or that you spit into a home microfluidic sensor device to collect more information. Then you'll get referred by the expert system software to a doctor for treatment.
In 1980 Sweden passed a referendum to gradually phase out nuclear power. Germany eventually followed suit. But now it looks like Sweden is going to flip back toward an embrace of new nukes.
On Thursday, the country once again took a step into the future -- by abandoning the ban on new nuclear power plants. Stockholm said the move was necessary to avoid energy sources that produce vast quantities of greenhouse gases. While Sweden has been a leader in developing alternative energy sources, it still has not done enough to completely replace nuclear power, which supplies half the country's energy.
The new proposal, presented by the country's center-right coalition, calls for the construction of new reactors as the old ones are taken out of service. Parliament will vote on the bill on March 17. The package also calls for the expansion of wind power and for a 40 percent cut to greenhouse gas emissions by 2020 relative to 1990 levels.
The Der Spiegel article reports that while the majority of Germans still favor a phase-out of nuclear power, support for nukes is growing rapidly. While 36% opposed the phase-out in December 2007, just 8 months later opposition to the phase-out (i.e., support for nukes) had grown to 44%.
An earlier March 2008 Der Spiegel article reports that lots of coal electric plants are on the drawing board in Germany.
The Vattenfall project in Berlin is only one example of a larger trend. Utility companies want to set up a total of 26 new coal-fired power plants in Germany during the coming years.
In the long term, the power plants will replace older, dirtier plants. But that doesn't alter the fact that the plans are a direct contradiction of the climate goals formulated by Merkel. While emissions are practically zero in the case of nuclear energy, and while a natural gas-fired plant produces just 428 grams of CO2 emissions per kilowatt hour, a black coal power plant churns a solid 949 grams of CO2 into the atmosphere. The figure for lignite or brown coal -- 1,153 grams -- is even worse.
Some argue for coal with carbon capture as an alternative to nuclear power. But a BBC reporter at a carbon capture demonstration plant in Germany says carbon capture might boost coal electric costs by 50%. This is in line with other reports I've read that claim coal with carbon capture costs more than nuclear power.
Currently Germany gets 27% of its electric power from nuclear. This puts it behind only 3 other nations as measured by the percentage of electric power coming from nukes. France is at 77% from nukes, Ukraine is at 48%, and Japan is at 28%. The US is in 5th place at 19% with Russia in 6th place at 16%. Globally 15% of all electric power comes from nukes.
Update: Nuclear power's biggest competitor is coal. Coal is likely to remain in first place for a long time since the countries that are experiencing the greatest demand growth for electricity (e.g. China, India) also oppose international restrictions on their fossil fuels usage.
Coal remains the main fuel for power generation around the world, with a share of over 40%, followed by gas (20%), hydro (16%), nuclear (15%) and then oil (5%). Coal-fired power generation has grown strongly in the past decade, driven by strong growth in non-OECD countries. In China, coal-fired power generation capacity tripled during the past decade. Consequently, electricity output also expanded very rapidly, creating enormous pressures on the global thermal coal market.
Only lower costs for nuclear, wind, and solar can make a big dent in rising Asian coal demand.
Update: In another recent article on nuclear power, Der Spiegel argues Germany has a choice between keeping nuclear power around and building more coal electric plants.
Despite a decade of massive investment and generous programs established to promote wind, solar and biomass power generation, green energy sources make up just 14 percent of the country's energy supply. Even if that were to double in the near future, the lion's share of Germany's energy consumption would have to come from elsewhere. Without nuclear power, "elsewhere" in Germany necessarily means coal-fired power plants. But in a world with a rapidly warming climate caused by massive emissions of CO2 into the atmosphere by, among other sources, coal-fired power plants, such a scenario is decidedly unappetizing.
Nuclear power provides a real test of the seriousness of those who want to cut carbon emissions. They face 3 choices: 1) build coal electric plants; 2) drastically raise electricity prices while slashing consumption; or 3) build more nuclear power plants.
Mate preference studies show big changes in preferences since the 1930s. Men now weigh women's looks and income more heavily, while women care less about whether a guy is nice.
This Valentine's Day, researchers at the University of Iowa have some new answers to the perennial question of what men and women want in a partner.
Men are increasingly interested in an educated woman who is a good financial prospect and less interested in chastity. Women are increasingly interested in a man who wants a family and less picky about whether he's always Mr. Nice Guy.
That's according to a study by University of Iowa sociologists Christine Whelan and Christie Boxer. They analyzed results of a 2008 survey of more than 1,100 undergraduates at the UI, the University of Washington, the University of Virginia and Penn State University, comparing the results to past mate-preference studies.
Since the 1930s, researchers have been asking college students to rank a list of 18 characteristics they'd prefer in a mate from "irrelevant" (0) to "essential" (3), allowing for a comparison of mate preferences dating back three generations. And my, how times have changed: Today's young adults rank love and attraction as most important; a few generations ago it didn't even make the top three.
While the researchers do not capture this in their survey, men are most interested in young and fertile women.
Note that love is not an attribute of the woman who a guy loves. Rather, love is a feeling within the guy's brain. By contrast, brains, beauty, money, and income are all attributes of the woman who the guy loves. These attributes play a big role in making that love feeling happen in the first place.
In the 1930s male respondents were seeking a dependable, kind lady who had skills in the kitchen. Chastity was more important than intelligence.
Now, guys look for love, brains and beauty -- and a sizable salary certainly sweetens the deal. Men ranked "good financial prospect" No. 12 in 2008, a significant climb from No. 17 in 1939 and No. 18 in 1967.
"These results are consistent with the rise in educational and career opportunities for women, and men's increasing desire to share the financial burdens with a future spouse," Whelan said.
Chastity -- which men ranked at No. 10 in 1939 -- fell to dead last in 2008.
Chastity was only No. 10 in 1939. I would have expected higher.
The emotional stability that women want is linked to the ambition that they also desire. Gotta be stable to follow through and achieve one's ambitions. Pleasing disposition? Nice guys finish last.
"When we administered the survey, several female students snickered at the idea that we even included the chastity item," Whelan said. "This is consistent with the widespread hook-up culture on college campuses."
For women of the 1930s, emotional stability, dependable character and ambition ranked as the top three characteristics they wanted in a man. Attraction and love didn't come in until No. 5. Today, women, like men, put love at the top of the list, with dependability and emotional stability rounding out the top three characteristics in Mr. Right.
Women rate desire for home and children much higher in importance than men do. In 2008, women rated desire for home and children fourth; men ranked it ninth.
Women ranked "pleasing disposition" as significantly less important in 2008 than they have ever before. Pleasing disposition -- presumably interpreted to mean being a nice guy -- fell from a steady ranking of No. 4 throughout the second half of the 20th Century to a significantly lower rank of No. 7 in 2008.
Strip away tradition. Strip away religious beliefs. What happens? Men and women are looking at each other in ways that seem even more influenced by their evolutionary heritage. The mating market looks like it is becoming more competitive.
Update: See the comments where Jason Malloy thoughtfully takes issue with my analysis. However, Razib's analysis is closer to my own.
If we take these data at face value I think that in some ways evolutionary psychology is becoming more, not less, salient in terms of our life choices. In many "traditional" societies mate choice is highly constrained by the preferences & interests of individuals who are not the principals. Though this is certainly operative in many hunter-gatherer societies (e.g., the bizarre incest taboos among some Australian Aboriginals), I suspect that freedom of choice is more constricted among sedentary agricultural populations because it is in this group that institutionally derived norms loom the largest. As humans subsisted on the Malthusian margins in such relatively complex societies there was little "wiggle" room for lifestyle experimentation. Interestingly, many Blank Slate theorists who advocate lifestyle experimentation presume that an ideological revolution was necessary for an exploration of the behavior space, but perhaps deviation was always latent, and cultural norms strongly constrained it.
But Razib makes his point in a much more learned fashion.
Cornell University professors Valerie Reyna and Charles Brainerd find that people do a poorer job of remembering negative experiences.
"You may not remember the specifics of what happened to you, but boy, do you remember it was negative," said Brainerd. "And that allows you to fill in the blanks with 'memories' of negative events that didn't really happen."
Brainerd is the lead author of a study published in Psychological Science (Vol. 19, No. 9); co-authors include his wife, Valerie Reyna, also a Cornell professor of human development, and colleagues from Brazilian universities.
The researchers conducted experiments in which about 120 participants -- half in Brazil and half in the United States -- were asked to read lists of words that had either positive, negative or neutral connotations. They were then asked to identify which words had been listed. When remembering negative words -- such as mad, sad, rage, temper, ire and wrath -- they were much more likely to be inaccurate and "falsely remember" such unlisted words as anger. When identifying positive words, their memories were much more accurate.
The findings challenge traditional ideas about how emotion affects memory, Brainerd says. "Historically the belief has been that negative events are really pretty easy to remember, that negative emotion creates very distinctive memories. What we found was exactly the opposite. Negative information really tends to distort your memory."
This might mean that witnesses to crimes are memory-impaired. Well, if you witness a crime, try not to feel unhappy about it, and write up your memories while they are still fresh. Do not trust your memories.
A new University of British Columbia study reconciles a debate that has long raged among marketers and psychologists: What colour most improves brain performance and receptivity to advertising, red or blue?
It turns out they both can, it just depends on the nature of the task or message. The study, which could have major implications for advertising and interior design, finds that red is the most effective at enhancing our attention to detail, while blue is best at boosting our ability to think creatively.
"Previous research linked blue and red to enhanced cognitive performance, but disagreed on which provides the greatest boost," says Juliet Zhu of UBC's Sauder School of Business, author of the study which will appear in the Feb. 5 issue of Science Express. "It really depends on the nature of the task."
Between 2007 and 2008, the researchers tracked more than 600 participants' performance on six cognitive tasks that required either detail-orientation or creativity. Most experiments were conducted on computers, with a screen that was red, blue or white.
Red boosted performance on detail-oriented tasks such as memory retrieval and proofreading by as much as 31 per cent compared to blue. Conversely, for creative tasks such as brainstorming, blue environmental cues prompted participants to produce twice as many creative outputs as when under the red colour condition.
Okay, but what is mellow yellow good for? Also, what sort of work will you do best in a purple haze?
NEW YORK (Feb. 5, 2009) – New research from Columbia University Medical Center continues to shed light on the benefits of making fish a staple of any diet.
Fish are generally rich in omega-3 fatty acids, which have shown benefit in many health areas such as helping to prevent mental illness and delaying some of the disabilities associated with aging. Eating tuna, sardines, salmon and other so-called cold water fish appears to protect people against clogged arteries. Omega-3 fatty acids can also lower triglycerides, a type of fat often found in the bloodstream.
Now, a CUMC research team led by Richard J. Deckelbaum, M.D., Director of the Columbia Institute of Human Nutrition, has found that a diet rich in fish oils can prevent the accumulation of fat in the aorta, the main artery leaving the heart. The beneficial actions of fish oil that block cholesterol buildup in arteries are even found at high fat intakes.
The study was conducted in three separate populations of mice: one that was fed a balanced diet, one that was fed a diet resembling a "Western" diet high in saturated fat, and a third that was fed a high fish fat diet rich in omega-3 fatty acids.
Researchers in Dr. Deckelbaum's laboratory, including Chuchun Liz Chang, a Ph.D. student in nutritional and metabolic biology, found that the fatty acids contained in fish oil markedly inhibit the entry of "bad," or LDL, cholesterol into arteries and, as a result, much less cholesterol collects in these vessels.
They found that this is related to the ability of those fatty acids to markedly decrease lipoprotein lipase, a molecule that traps LDL in the arterial wall. This will likely prove to be important as a new mechanism which helps explain benefits of omega-3 fatty acids on heart health.
Hey, if lab mice live longer before getting heart disease they'll be able to put in more work searching for cures for cancer.
Madison, WI, February 2, 2009 - As the world seeks new ways to prevent and treat chronic diseases such as diabetes, heart disease and cancer, more research continues to be conducted on the benefits of certain foods in reducing people’s risk of contracting these ailments. Legumes in particular are often cited as being high in antioxidants, which have the property of being able to fight off free radical cells within the body, reducing the risk of cancer and other chronic diseases. A recent study further investigated these connections, as researchers focused on the benefits of one type of legume, dry beans, in reducing the risk of mammary cancer.
To address whether dry bean consumption is associated with a reduction in mammary cancer, scientists at Colorado State University studied the anticancer activity of six market classes of bean, including small red, great northern, navy, black, dark red, and white kidney bean, in the diet of laboratory animals. They also evaluated whether the level of antioxidants or seed coat pigments in the bean were related to mammary cancer. The study was funded by a grant from the Beans for Health Alliance and the Colorado Agricultural Experiment Station, with assistance from Archer Daniels Midland Co. and Bush Brothers Inc. Results from the study were published in the January-February 2009 issue of the journal Crop Science.
More darkly colored beans score higher in various measures of antioxidant activity.
Cooked dry bean powder from the six market classes and a control group without beans in the diet were fed to laboratory rats in a standard preclinical model for breast cancer. The dry bean powders were also evaluated for antioxidant capacity and phenolic and flavonoid content, all factors thought to be associated with anticancer activity. Chemical analysis of the beans revealed that total phenolic and flavonoid content varied widely among market classes and the differences were strongly associated with seed coat color: colored beans had ten times or greater phenolic and flavonoid content compared to white beans. Antioxidant capacity of the beans also varied widely among dry bean market classes and was highly related to seed coat color, with colored beans having approximately two to three times greater antioxidant capacity than white beans.
But the lighter beans were just as effective at cutting cancer incidence as the darker beans that contain more antioxidants. Go figure.
Dry bean consumption from every market class reduced cancer incidence (number of animals with at least one tumor) and tumor number per animal compared to the control group. Cancer incidence was reduced from 95% in the control group to 67% in animals fed beans. The average number of malignant tumors was also reduced from 3.2 in the control group to 1.4 tumors per animal in the group fed beans. No associations were observed between phenolic content, flavonoid content, or antioxidant capacity and cancer among the bean market classes. These results clearly suggest that the anticancer activity in dry bean is not associated with seed color or antioxidant capacity.
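Restated as relative changes, the figures quoted above work out as follows (a quick arithmetic check, using only the numbers reported in the study):

```python
# Relative reductions implied by the bean study figures quoted above.
control_incidence, bean_incidence = 0.95, 0.67   # fraction of animals with tumors
control_tumors, bean_tumors = 3.2, 1.4           # mean malignant tumors per animal

incidence_drop = (control_incidence - bean_incidence) / control_incidence
tumor_drop = (control_tumors - bean_tumors) / control_tumors

print(f"Relative drop in cancer incidence: {incidence_drop:.0%}")  # ~29%
print(f"Relative drop in tumors per animal: {tumor_drop:.0%}")     # ~56%
```

So beans cut the fraction of animals getting cancer by about a third and cut tumor burden per animal by more than half.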
So what about beans delivered the anti-cancer benefit? Does a fiber in the beans cut cancer risks somehow?
For added anti-cancer protection take vitamin D with your beans.
Calcitriol, the active form of vitamin D, has been found to induce a tumor-suppressing protein that can inhibit the growth of breast cancer cells, according to a study by researcher Sylvia Christakos, Ph.D., of the UMDNJ-New Jersey Medical School.
Christakos, a professor of biochemistry, has published extensively on the multiple roles of vitamin D, including inhibition of the growth of malignant cells found in breast cancer. Her current findings on the vitamin D induced protein that inhibits breast cancer growth are published in a recent issue of The Journal of Biological Chemistry.
Previous research had determined that increased serum levels of vitamin D are associated with an improved prognosis in patients with breast cancer. Prior to the current study, little was known about the factors that determine the effect of calcitriol on inhibiting breast cancer growth, she said.
The vitamin D might make you stronger too. Though maybe people who get outside more both get more exercise and get more sun exposure, which raises blood vitamin D levels.
A new study published in the American Journal of Clinical Nutrition explores how soyfood consumption may lower the risk of colorectal cancer, or cancer of the colon or rectum, in postmenopausal women. According to the National Cancer Institute, an estimated 71,560 American women were diagnosed with the fourth most common cancer in 2008.
Vanderbilt University School of Medicine researchers found that women who consumed at least 10 grams of soy protein daily were one-third less likely to develop colorectal cancer in comparison to women who consumed little soy. This is the amount of soy protein available in approximately one serving of tofu (1/2 cup), roasted soy nuts (1/4 cup), edamame (1/2 cup) or soy breakfast patties (2 patties).
The study observed soy intake in 68,412 women between the ages of 40 and 70, all free of cancer and diabetes prior to the initial screening. Researchers identified 321 colorectal cancer cases after participants were monitored for an average of 6.4 years. After adjusting for confounding factors, total soyfood intake was inversely associated with colorectal cancer risk among postmenopausal women.
Note that this study does not address whether soy will cut colon cancer risks in men or younger women.
I wonder if the scientists adjusted for meat consumption. People who eat soy burgers are probably less likely to eat hamburgers.
Researchers found that the ratio of sodium-to-potassium in subjects' urine was a much stronger predictor of cardiovascular disease than sodium or potassium alone.
"There isn't as much focus on potassium, but potassium seems to be effective in lowering blood pressure and the combination of a higher intake of potassium and lower consumption of sodium seems to be more effective than either on its own in reducing the risk of cardiovascular disease," said Dr. Paul Whelton, senior author of the study in the January 2009 issue of the Archives of Internal Medicine. Whelton is an epidemiologist and president and CEO of Loyola University Health System.
Researchers determined average sodium and potassium intake during two phases of a study known as the Trials of Hypertension Prevention. They collected 24-hour urine samples intermittently during an 18-month period in one trial and during a 36-month period in a second trial. The 2,974 study participants, initially aged 30 to 54 and with blood pressure readings just under levels considered high, were followed for 10 to 15 years to see if they would develop cardiovascular disease. Whelton was national chair of the Trials of Hypertension Prevention.
Those with the highest sodium levels in their urine were 20 percent more likely to suffer strokes, heart attacks or other forms of cardiovascular disease compared with their counterparts with the lowest sodium levels. However, this link was not strong enough to be considered statistically significant.
By contrast, participants with the highest sodium-to-potassium ratio in urine were 50 percent more likely to experience cardiovascular disease than those with the lowest sodium-to-potassium ratios. This link was statistically significant.
You should consume half as much sodium as potassium.
Whelton was a member of a recent Institute of Medicine panel that set dietary recommendations for salt and potassium. The panel said healthy 19-to-50 year-old adults should consume no more than 2,300 milligrams of sodium per day -- equivalent to one teaspoon of table salt. More than 95 percent of American men and 75 percent of American women in this age range exceed this amount.
To lower blood pressure and blunt the effects of salt, adults should consume 4.7 grams of potassium per day unless they have a clinical condition or medication need that is a contraindication to increased potassium intake. Most American adults aged 31-to-50 consume only about half as much as recommended in the Institute of Medicine report.
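Putting the two Institute of Medicine figures quoted above together shows why "half as much sodium as potassium" is roughly the right rule of thumb (computed by mass; the panel's numbers are as reported above):

```python
# Sodium:potassium ratio implied by the IOM figures quoted above.
sodium_mg = 2_300      # max recommended sodium per day (mg)
potassium_mg = 4_700   # recommended potassium per day (mg)

mass_ratio = sodium_mg / potassium_mg
print(f"Na:K mass ratio: {mass_ratio:.2f}")  # ~0.49, i.e. about half
```

Note that most Americans are off in both directions at once: sodium intake well above 2,300 mg and potassium intake around half the recommended 4.7 grams, which pushes the ratio far above 0.5.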
How to double your potassium intake? It turns out that high-potassium foods are things you ought to eat for other reasons too. Beans, tomatoes, prunes, bananas, acorn squash, artichoke, spinach, sunflower seeds, almonds, winter squash, soybeans, cantaloupe, honeydew melon, and lentils are all high in potassium.
Here is more evidence for the theory that Alzheimer's is due to a special form of insulin-insensitive diabetes of the brain. Insulin blocks toxic proteins from damaging nerve cells.
EVANSTON, Ill. --- A Northwestern University-led research team reports that insulin, by shielding memory-forming synapses from harm, may slow or prevent the damage and memory loss caused by toxic proteins in Alzheimer's disease.
The findings, which provide additional new evidence that Alzheimer's could be due to a novel third form of diabetes, will be published online the week of Feb. 2 by the Proceedings of the National Academy of Sciences (PNAS).
In a study of neurons taken from the hippocampus, one of the brain's crucial memory centers, the scientists treated cells with insulin and the insulin-sensitizing drug rosiglitazone, which has been used to treat type 2 diabetes. (Isolated hippocampal cells are used by scientists to study memory chemistry; the cells are susceptible to damage caused by ADDLs, toxic proteins that build up in persons with Alzheimer's disease.)
The researchers discovered that damage to neurons exposed to ADDLs was blocked by insulin, which kept ADDLs from attaching to the cells. They also found that protection by low levels of insulin was enhanced by rosiglitazone.
Diabetics have a significantly greater risk of dementia, both Alzheimer's disease — the most common form of dementia — and other dementia, reveals important new data from an ongoing study of twins. The risk of dementia is especially strong if the onset of diabetes occurs in middle age, according to the study.
"Our results . . . highlighted the need to maintain a healthy lifestyle during adulthood in order to reduce the risk of dementia late in life," explained Dr. Margaret Gatz, who directs the Study of Dementia in Swedish Twins.
In a study published in the January 2009 issue of Diabetes, Gatz and researchers from Sweden show that getting diabetes before the age of 65 corresponds to a 125 percent increased risk for Alzheimer's disease. Nearly 21 million people in the United States have diabetes, according to the American Diabetes Association, which publishes the journal.
Eating less to remember more might become a new prescription for some elderly people, German researchers say.
They found that memory and thinking skills improved among healthy, overweight subjects who cut their calorie intake by 30 percent over a three-month period.
If further research supports this conclusion, "from a public health point of view, you could actually do something for the prevention of cognitive decline from aging," said lead researcher Dr. Agnes Floel, assistant professor of neurology at the University of Munster.
A couple of articles in the New York Times draw attention to business models in medicine that slow the rate of improvement in medical service delivery.
Two main causes of the system’s ills are century-old business models, for the general hospital and the physician’s practice, both of which are based on treating illness, not promoting wellness. Hospitals and doctors are paid by insurers and the government for the health care equivalent of piecework: hospitals profit from full beds and doctors profit from repeat visits. There is no financial incentive to keep patients healthy.
“The business models were all created decades ago, and acute disease drove those costs at the time,” says Steve Wunker, a senior partner at the consulting firm Innosight. “Most businesses in this industry are looking at their business model as entirely immutable. They’re looking for innovative offerings that fit this frozen model.”
Why have old business models lasted so long in medicine? It seems hard to price wellness maintenance as compared to pricing procedures and consultations. How to incentivize individual doctors to keep patients healthy? It is a lot easier to say it is a worthy goal than to describe a system for doing it that would work financially. Anyone have suggestions along these lines?
I would like to see far more automation of diagnosis. This requires more widespread use of electronic medical records so that the data which medical expert systems need will exist in electronic form. It also requires an economic model for medical care that provides incentives for automation. Medical expert systems can make better diagnostic decisions because the huge and growing quantity of medical test results and the large number of diseases and treatments really test the limits of the human mind to process all that information. Medical expert systems can free up smart doctors to do more original creative work such as medical research and product development.
Most doctors in private practice still do not use electronic medical records systems, making them outliers in a world where a very large fraction of all high information work is done using electronic information systems. Digital medical records make the discovery of better medical practices possible.
The Marshfield Clinic, a large doctors’ group in Wisconsin, shows that computerized records can indeed improve the quality and efficiency of medicine. Yet the Marshfield experience suggests that the digital record becomes truly useful only when patient information is mined to find patterns and answer questions: What treatments work best for particular categories of patients? What practices or procedures yield the best outcome?
This group of doctors has used its medical software system to help cut total costs by allowing it to manage diabetic care more efficiently.
From mid-2004 through the third quarter of this year, the percentage of the clinic’s diabetic patients with blood cholesterol at or below the recommended level rose to 61 percent, from 40 percent earlier. The percentage with satisfactory blood pressure increased to 52 percent, up from 32 percent.
Over the same span, hospital admissions among Marshfield’s diabetic population fell — to 311 per 1,000 patients a year, from 360. Because a hospital stay for a diabetes patient ranges from $8,000 to $22,500, according to national statistics, Marshfield’s results translate into an annual cost saving of $7.3 million to $20.5 million.
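The article does not state the size of Marshfield's diabetic population, but the reported figures let us reconstruct it: a population of roughly 18,600 patients (an inferred figure, not from the source) reproduces the quoted $7.3 million to $20.5 million range:

```python
# Rough reconstruction of the Marshfield savings arithmetic quoted above.
# The diabetic-population size is NOT given in the article; ~18,600 is
# inferred here because it reproduces the reported savings range.

admissions_before = 360 / 1000   # admissions per diabetic patient per year
admissions_after = 311 / 1000
diabetic_patients = 18_600       # inferred, not stated in the source

avoided = (admissions_before - admissions_after) * diabetic_patients
cost_low, cost_high = 8_000, 22_500  # national cost range per diabetic hospital stay

print(f"Avoided admissions per year: {avoided:.0f}")
print(f"Annual savings: ${avoided * cost_low / 1e6:.1f}M to ${avoided * cost_high / 1e6:.1f}M")
```

With those assumptions the arithmetic works out to about 911 avoided admissions per year, or $7.3 million to $20.5 million in savings, matching the figures in the excerpt.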
But for the average private practice a reduction in the hospitalization rate of patients isn't going to boost revenues. The money saved probably all flows to insurers. The system lacks incentives for most medical providers to go after these forms of savings and care improvement.
A medical marketplace which rewards use of expert systems, electronic patient records, and reduction of total costs by use of more effective and productive methods of delivering treatments is what we need. How do we get there?
A study published this week in the Archives of Internal Medicine suggests that broad adoption of IT systems may provide significant health benefits for patients. Researchers at the Johns Hopkins University School of Medicine, in Baltimore, rated clinical information technologies at 41 hospitals in Texas and compared those results with discharge information for more than 160,000 patients. Technologies recorded included electronic note taking, treatment records, test results, drug orders, and decision-support systems that offer information concerning treatment options and drug interactions. The researchers found that hospitals that rated highly on automated note taking had a 15 percent decrease in the odds that a patient would die while hospitalized. Hospitals with highly rated decision-support systems also had 20 percent lower complication rates. Researchers found that electronic systems reduced costs by about $100 to $500 per admission.
We need technologies that will allow our bodies to be repaired as thoroughly as we repair our cars. Some UCSD researchers find that titanium nanotubes can cause stem cells to become osteoblasts that speed bone repair.
San Diego, CA, January 29, 2009 -- Engineers at the University of California at San Diego have come up with a way to help accelerate bone growth through the use of nanotubes and stem cells. This new finding could lead to quicker and better recovery, for example, for patients who undergo orthopedic surgery.
Nanotube implants might some day become a routine part of orthopedic surgery.
“If you break your knee or leg from skiing, for example, an orthopedic surgeon will implant a titanium rod, and you will be on crutches for about three months,” said Sungho Jin, co-author of the PNAS paper and a materials science professor at the Jacobs School of Engineering. “But what we anticipate through our research is that if the surgeon uses titanium oxide nanotubes with stem cells, the bone healing could be accelerated and a patient may be able to walk in one month instead of being on crutches for three months.
“Our in-vitro and in-vivo data indicate that such advantages can occur by using the titanium oxide nanotube treated implants, which can reduce the loosening of bones, one of the major orthopedic problems that necessitate re-surgery operations for hip and other implants for patients,” Jin added. “Such a major re-surgery, especially for older people, is a health risk and significant inconvenience, and is also undesirable from the cost point of view.”
By controlling nanotube diameter the researchers can instruct stem cells to turn into bone-forming osteoblast cells.
This is the first study of its kind using stem cells attached to titanium oxide nanotube implants. Jin and his research team – which includes Jacobs School bioengineering professors Shu Chien and Adam Engler, as well as post doctoral researcher Seunghan Oh and other graduate students and researchers – report that the precise change in nanotube diameter can be controlled to induce selective differentiation of stem cells into osteoblast (bone-forming) cells.
The biggest challenge with stem cells is instructing them to become the right kind of cell at the right place in the body. A material implanted where the repair is needed has the advantage of being very local in its effects. That can work for highly targeted repairs where a particular piece of tissue needs fixing.
We also need ways to instruct stem cells to go to particular types of tissue that might be scattered all around the body. For example, bone marrow stem cells age along with the rest of the body. Well, we have about 206 bones per person (I say "about" because there is some variation - for example, some people have an extra rib). That's a lot of places to instruct stem cells to go to and replace aged stem cells. We will need additional techniques for more widespread stem cell delivery.