Here's yet another blog post where I present yet another example of why the people who say that aging is dignified are totally wrong.
New research shows cerebral microbleeds, which are lesions in the brain, are more common in people over 60 than previously thought. The study is published in the April 1, 2008, issue of Neurology®, the medical journal of the American Academy of Neurology.
“We found a three-to-four-fold higher overall prevalence of cerebral microbleeds compared to other studies,” according to study author Monique M.B. Breteler, MD, PhD, with the Erasmus MC University Medical Center in Rotterdam, the Netherlands. “These findings are of major importance since cerebral microbleeds likely reflect cerebrovascular pathology and may be associated with an increased risk of cerebrovascular problems.”
Cerebral microbleeds are lesions that can be seen on brain scans, such as an MRI brain scan. The lesions are deposits of iron from red blood cells that have presumably leaked out of small brain vessels.
For the study, 1,062 healthy men and women who were an average age of 70 underwent an MRI to scan for the presence of cerebral microbleeds. Of the participants, 250 were found to have cerebral microbleeds.
The study found overall prevalence of cerebral microbleeds was high and increased with age from 18 percent in people age 60 to 69 to 38 percent in people over age 80. People with the e4 allele of the APOE gene, which is known to increase the risk of Alzheimer’s disease and of cerebral amyloid angiopathy, had significantly more microbleeds than people without this genetic variant.
We should find ways to repair and halt the damage of aging. Rejuvenating stem cell therapies could repair our blood vessels and prevent cerebral microbleeds. We could avoid the resulting brain damage and keep more of who we are intact.
Update: Read all about Strategies for Engineered Negligible Senescence to get an idea of how we can hope to some day stop and reverse aging of the brain and of other parts of our bodies.
The more they know the less most people care. Anyone want to offer an explanation for this response?
COLLEGE STATION – The more you know the less you care – at least that seems to be the case with global warming. A telephone survey of 1,093 Americans by two Texas A&M University political scientists and a former colleague indicates that trend, as explained in their recent article in the peer-reviewed journal Risk Analysis.
“More informed respondents both feel less personally responsible for global warming, and also show less concern for global warming,” states the article, titled “Personal Efficacy, the Information Environment, and Attitudes toward Global Warming and Climate Change in the USA.”
The study showed high levels of confidence in scientists among Americans led to a decreased sense of responsibility for global warming.
The diminished concern and sense of responsibility flies in the face of awareness campaigns about climate change, such as in the movies An Inconvenient Truth and Ice Age: The Meltdown and in the mainstream media’s escalating emphasis on the trend.
The research was conducted by Paul M. Kellstedt, a political science associate professor at Texas A&M; Arnold Vedlitz, Bob Bullock Chair in Government and Public Policy at Texas A&M’s George Bush School of Government and Public Service; and Sammy Zahran, formerly of Texas A&M and now an assistant professor of sociology at Colorado State University.
Does just looking at Al Gore cause people to fall asleep? Or am I an outlier on this?
Maybe if people think scientists are all on top of it then they assume scientists will figure out solutions. Someone's working the issue. Not to worry?
Maybe the perceived immensity of the problem breeds a feeling of hopelessness?
Maybe fear of the known is less than fear of the unknown? A well characterized problem strikes people as something they know how to work around? (don't buy that ocean front mansion in Fort Lauderdale - as if you could afford it anyhow)
I already think we should stop building coal-fired electric power plants for another reason: cleaner air down at ground level. We should switch to nuclear, solar, and wind. With excellent batteries we could shift most transportation to electric power and breathe cleaner air. I believe the amount of extractable oil and natural gas left is so small that only coal can cause climate problems. (and also see this PDF of a presentation by David Rutledge of CalTech)
If silicon chips stop speeding up 5 years from now we'll experience lower economic growth. Faster computer chips are one of the drivers of higher productivity and economic output. Well, silicon might finally be approaching the end of the line for speed-ups. This has important (mainly negative) implications for economic growth.
The silicon chip, which has supplied several decades’ worth of remarkable increases in computing power and speed, looks unlikely to be capable of sustaining this pace for more than another decade – in fact, in a plenary talk at the conference, Suman Datta of Pennsylvania State University, USA, gives the conventional silicon chip no longer than four years left to run.
As silicon computer circuitry gets ever smaller in the quest to pack more components into smaller areas on a chip, eventually the miniaturized electronic devices are undermined by fundamental physical limits. They start to become leaky, making them incapable of holding onto digital information. So if the steady increases in computing capability that we have come to take for granted are to continue, some new technology will have to take over from silicon.
We could still extend the speed-up period by several years with more parallel architectures. That's already happening to some extent with multi-core CPU chips. But software that can effectively utilize many CPU cores in parallel has been slow in coming. You can see this with Mozilla Firefox for example. I have a dual core CPU. If I open up 50 to 100 web pages with Firefox at once (and I do this often) then Firefox never takes more than 50% of available CPU usage. Why? It can't spread the work across multiple threads (at least not in the Firefox v2.x revs - can v3?) and so Firefox maxes out a single thread of execution, fully utilizing just one of my 2 CPU cores. The 50% of total CPU usage in Windows Task Manager means 50% of 2 cores in my case. So 100% of 1 core.
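The Task Manager arithmetic above generalizes: Windows reports CPU usage as a percentage of all cores combined, so a single-threaded app pegging one core can never show more than 100 divided by the core count. A minimal sketch of that arithmetic:

```python
def single_thread_ceiling(n_cores):
    """Max overall CPU % a single-threaded app can show on n_cores,
    since Task Manager averages usage across all cores."""
    return 100.0 / n_cores

print(single_thread_ceiling(2))  # dual core: 50.0, as observed with Firefox
print(single_thread_ceiling(4))  # quad core: an even less impressive 25.0
```

Note the ceiling gets worse as core counts climb: the same single-threaded app that looks half-busy on a dual core looks three-quarters idle on a quad core.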
This limit on the part of Firefox is disappointing. If a very popular development project with a large number of contributors and millions of users is lagging, then how long will it take for less important apps to get more parallelized?
Some areas of computing could still accelerate even after silicon chips stop getting faster. Subsets of computer algorithms could migrate into gate logic rather than being expressed in software that runs as a sequence of instructions in memory. In other words, abandon the Von Neumann architecture. That's not easy to do in the general case. But lots of algorithms (such as in graphics chips) already get implemented in logic gates.
As a way to get past the silicon speed limits carbon nanotubes might replace silicon in computer fabrication.
At the conference, researchers at Leeds University in the UK will report an important step towards one prospective replacement. Carbon nanotubes, discovered in 1991, are tubes of pure carbon just a few nanometres wide – about the width of a typical protein molecule, and tens of thousands of times thinner than a human hair. Because they conduct electricity, they have been proposed as ready-made molecular-scale wires for making electronic circuitry.
It seems unlikely carbon nanotubes will be ready to replace silicon in 5 years. So I suspect we are going to enter a gap period where computing capacity doesn't grow as rapidly as it has in the last 50 years.
Research results from University of Maryland physicists show that graphene, a new material that combines aspects of semiconductors and metals, could be a leading candidate to replace silicon in applications ranging from high-speed computer chips to biochemical sensors.
The research, funded by the National Science Foundation (NSF) and published online in the journal Nature Nanotechnology, reveals that graphene conducts electricity at room temperature with less intrinsic resistance than any other known material.
"Graphene is one of the materials being considered as a potential replacement of silicon for future computing," said NSF Program Manager Charles Ying. "The recent results obtained by the University of Maryland scientists provide directions to achieve high-electron speed in graphene near room temperature, which is critically important for practical applications."
Graphene is a sheet of carbon that is only 1 atom thick. That's as thin as thin gets.
Carbon comes in many different forms, from the graphite found in pencils to the world's most expensive diamonds. In 1980, we knew of only three basic forms of carbon, namely diamond, graphite, and amorphous carbon. Then, fullerenes and carbon nanotubes were discovered and all of a sudden that was where nanotechnology researchers wanted to be. Recently, though, there has been quite a buzz about graphene. Discovered only in 2004, graphene is a flat one-atom thick sheet of carbon.
We might hit a computer Peak Silicon at the same time we hit Peak Oil. But while the 2010s are looking problematic I'm more bullish on the 2020s due to advances in biotechnology that should really start to cause radical changes by then. Also, by the 2020s advances in photovoltaics, batteries, and other energy technologies should start to bring in replacement energy sources faster than fossil fuels production declines.
An MIT researcher has found a way to significantly improve the efficiency of an important type of silicon solar cells while keeping costs about the same. The technology is being commercialized by a startup in Lexington, MA, called 1366 Technologies, which today announced its first round of funding. Venture capitalists invested $12.4 million in the company.
1366 Technologies claims that it improves the efficiency--a measure of the electricity generated from a given amount of light--of multicrystalline silicon solar cells by 27 percent compared with conventional ones.
The company expects other improvements to combine to get it to its $1/watt goal by 2012.
MIT professor and 1366 founder and CTO Ely Sachs noted that 1366 Technologies will be combining innovations in silicon cell architecture with manufacturing process improvements to bring multi-crystalline silicon solar cells to cost parity with coal-based electricity.
Sachs added, "The science is understood, the raw materials are abundant and the products work. All that is left to do is innovate in manufacturing and scale up volume production, and that's just what we intend to do." The company has just taken space in Lexington to build its pilot solar cell manufacturing facility.
1366 Technologies' roadmap includes a new cell architecture that uses innovative, low-cost fabrication methods to increase the efficiency of multi-crystalline solar cells. This architecture, developed at MIT, improves surface texture and metallization to enhance silicon solar cell efficiency by 25% (from 15% to 19%) while lowering costs.
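Note the "25 percent" here and the "27 percent" quoted earlier both appear to describe the same absolute jump from 15% to 19% cell efficiency; the difference looks like rounding. A quick check of the relative-gain arithmetic:

```python
def relative_gain(old_pct, new_pct):
    """Relative improvement, in percent, implied by an absolute
    efficiency jump from old_pct to new_pct."""
    return (new_pct - old_pct) / old_pct * 100

# The quoted 15% -> 19% jump in multicrystalline cell efficiency:
print(round(relative_gain(15, 19), 1))  # 26.7 -- rounds to the "27 percent" figure
```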
And what will happen if 1366’s claims pan out, and silicon-based solar cells really drop below $1 per watt within the next few years? If the costs for those cells, solar PV, drop more rapidly than expected, thin-film solar based on other materials could face more challenges than expected. However, companies that make thin-film cells like First Solar (NASDAQ: FSLR) and Nanosolar (coverage here) are working on their own process improvements, and it’s difficult to tell when breakthroughs will come.
As of this writing First Solar (FSLR) has a market capitalization of almost $18 billion. So the markets think First Solar could be the winner. So will 1366 score an upset? Photovoltaics makers can raise the capital needed if they can just come up with plausible technologies for lowering photovoltaics costs.
A beer belly rots your brain even though the beer might not be at fault.
ST. PAUL, Minn. – People with larger stomachs in their 40s are more likely to have dementia when they reach their 70s, according to a study published in the March 26, 2008, online issue of Neurology®, the medical journal of the American Academy of Neurology.
The study involved 6,583 people age 40 to 45 in northern California who had their abdominal fat measured. An average of 36 years later, 16 percent of the participants had been diagnosed with dementia. The study found that those with the highest amount of abdominal fat were nearly three times more likely to develop dementia than those with the lowest amount of abdominal fat.
So then would belly liposuction reduce your risk of Alzheimer's Disease?
A lot of people are walking around (or sitting) with hazardous bellies.
“Considering that 50 percent of adults in this country have an unhealthy amount of abdominal fat, this is a disturbing finding,” said study author Rachel A. Whitmer, PhD, a Research Scientist of the Kaiser Permanente Division of Research in Oakland, CA, and member of the American Academy of Neurology. “Research needs to be done to determine what the mechanisms are that link abdominal obesity and dementia.”
Having a large abdomen increased the risk of dementia regardless of whether the participants were of normal weight overall, overweight, or obese, and regardless of existing health conditions, including diabetes, stroke and cardiovascular disease.
Those who were overweight and had a large belly were 2.3 times more likely to develop dementia than people with a normal weight and belly size. People who were both obese and had a large belly were 3.6 times more likely to develop dementia than those of normal weight and belly size. Those who were overweight or obese but did not have a large abdomen had an 80-percent increased risk of dementia.
This study didn't prove the direction of causation. But the results are highly suggestive.
Yet Seshadri also notes that factors other than fat could be responsible for the results. Whitmer's team controlled for some of these, such as education and rates of other illnesses. But other issues were not taken into account. Overweight people are less likely to exercise, for instance. Physical activity is known to decrease obesity risk, as well as being psychologically beneficial.
Whitmer acknowledges this shortcoming, but points out that the dementia rates were greater among those who were not overweight during middle-age, but did have high levels of belly fat. These people are likely to have exercised since their weight was normal, she says, but they still went on to develop cognitive problems.
Pay attention to comments from obesity researcher Rudolph Leibel of Columbia University in this previous post. Note that fat cells are now known to secrete at least a couple of dozen hormones and other signaling compounds. Some of those compounds cross the blood-brain barrier. Fat is not a passive pile of blubber. Belly fat in particular secretes more stuff than other areas of fat. The fat on your belly is sending out messages that are messing up your brain and body.
NEW YORK, March 23, 2008—Research led by investigators at Memorial Sloan-Kettering Cancer Center (MSKCC) has shown that therapeutic cloning, also known as somatic-cell nuclear transfer (SCNT), can be used to treat Parkinson’s disease in mice. The study’s results are published in the March 23 online edition of the journal Nature Medicine.
For the first time, researchers showed that therapeutic cloning or SCNT has been successfully used to treat disease in the same subjects from whom the initial cells were derived. While this current work is in animals, it could have future implications as this method may be an effective way to reduce transplant rejection and enhance recovery in other diseases and in other organ systems.
In therapeutic cloning or SCNT, the nucleus of a somatic cell from a donor subject is inserted into an egg from which the nucleus has been removed. This cell then develops into a blastocyst from which embryonic stem cells can be harvested and differentiated for therapeutic purposes. As the genetic information in the resulting stem cells comes from the donor subject, therapeutic cloning or SCNT would yield subject-specific cells that are spared by the immune system after transplantation.
The new study shows that therapeutic cloning can treat Parkinson’s disease in a mouse model. The scientists used skin cells from the tail of the animal to generate customized or autologous dopamine neurons—the missing neurons in Parkinson’s disease. The mice that received neurons derived from individually matched stem cell lines exhibited neurological improvement. But when these neurons were grafted into mice that did not genetically match the transplanted cells, the cells did not survive well and the mice did not recover.
This work builds on earlier work by the same team in which they did not make stem cells matched to each target animal. With this latest work they avoided the immune rejection problem by starting with a genome from the target animal and then creating embryonic stem cells with that animal's genome.
Some religious people find cloning to create embryonic stem cells morally unacceptable because an embryo gets destroyed in the creation of the stem cells in most cases. But for a moment leave aside the ethical objections for use of this procedure with humans. The fact remains that given immune compatible cells properly prepared to become dopamine neurons it is possible to do brain repair and a limited form of brain rejuvenation.
Some see rejuvenation therapies as distant prospects. But I do not see why stem cell therapies lie only in the distant science fiction future. A therapy that works for mice today is going to work for humans within a timespan short enough to matter for many of us alive today.
My guess is that some county in Maine or Minnesota is probably hardest hit by energy prices overall due to the need for very costly heating oil. But in terms of percentage of income going to gasoline, poor Camden, Alabama is the hardest hit county in America.
But the county is poor - household income of $26,000 is nearly half the national average - and people have to travel a long way to work.
The combination of low wages and long travel times means the people of Camden, for the second year in a row, spent a higher portion of their income on gas than anyone else in the country, according to a new study from the Oil Price Information Service, a research firm that tracks data for AAA.
In Camden, drivers put 13% of every paycheck right into the gas tank. In wealthy towns around New York City, people spend less than 2% of their income on gas.
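The quoted percentages translate into rough dollar figures (illustrative back-of-the-envelope arithmetic using the numbers above, nothing more precise):

```python
# Figures from the article: ~$26,000 household income, 13% to gasoline.
camden_income = 26_000
camden_gas_share = 0.13

# Wealthy NYC suburbs: under 2% of income, and incomes are far higher.
suburb_income = 150_000     # assumed for illustration; not from the article
suburb_gas_share = 0.02

print(round(camden_income * camden_gas_share))  # ~3380 dollars a year on gas
print(round(suburb_income * suburb_gas_share))  # ~3000 dollars a year on gas
```

The striking thing is that the absolute dollar amounts are similar; it is the share of income that differs by nearly an order of magnitude.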
This affords me an opportunity to make a point that really needs making: as we come up on and pass the peak in world oil production, government policy ought to aim more at getting the poor to move to dwellings and towns that will reduce their need for energy rather than spending tax money to pay part of the energy bills of poor people. Less energy available will mean the absolute need to use less energy. Government subsidies delay needed adjustments in lifestyles.
In the United States there's a federal program, Low Income Home Energy Assistance Program, to provide poor people with money to buy heating oil. State and local governments as well as local charitable groups provide additional aid. In Massachusetts the average beneficiary of this aid gets $1000 this year.
The fuel-assistance programs the agencies run combine federal and state funds and will provide about $140 million in assistance this heating season to between 130,000 and 140,000 clients statewide. The federal government released $40 million in emergency aid last week to 11 states, including Massachusetts, which is receiving an additional $5 million.
Heating fuel aid is the wrong response to this problem. The coming decline in world oil production can't be prevented by providing people cash to buy heating oil or propane. We need to shift gears and aim to reduce energy usage. For example, people who can't afford to heat a house should live in a multi-unit apartment with shared walls and heavy insulation. People who can't afford to travel from their rural residency to work should move into more densely populated areas. The message people need to hear is that they need to adapt and change how they live.
Transportation isn't the toughest energy adaptation problem. We can greatly improve the energy efficiency of moving us around, albeit at some cost in comfort. Building efficiency strikes me as a harder problem to solve, both because buildings last far longer than cars and because buildings cost much more. Upgrading the housing stock is a tall order. But that's all the more reason not to use government programs to let people stay where they are, living in unsustainable ways.
The surge in value has made oil executives and shareholders extremely happy, but at what price for Americans? A congressional forum last fall in Boston produced riveting testimony from a mother, an Iraq War veteran, whose husband still serves in the Persian Gulf. Her second child was born sickly and frail, requiring extensive hospitalization and intensive aftercare. But one of the prescriptions -- a warm home -- proved unaffordable for the young mother, who had to move in with her mother to keep her children warm and healthy.
But people who move in together reduce the number of dwellings that need to be heated and in doing so they reduce energy usage. Well, reduced energy usage is necessary in the face of limited oil reserves and swelling Asian demand.
Politicians could constructively engage to deal with the hardships caused by rising oil prices by encouraging construction of multi-unit dwellings, mixed zoning that puts homes and workplaces closer together, new building designs for greater energy efficiency, upgrades of existing buildings for energy efficiency, a shift from oil heaters to ground sink heat pumps, and other measures that will reduce the need for energy. Poor old rural folks in cold areas like Maine could be helped to move into senior citizen apartments in town within walking distance of stores and medical offices.
These results deal with averages of course. But personality types identifiable in preschool children have lasting effects.
Participants consisted of 230 children who were studied every year from their first or second year in preschool until age 12. After age 12, the sample was reassessed twice, at ages 17 and 23. Researchers led by Jaap Denissen of Humboldt-University Berlin assessed degrees of shyness and aggressiveness through parental scales and teacher reports.
Denissen tested the hypotheses on the predictive validity of three major preschool personality types. Resilient personality is characterized by above average emotional stability, IQ, and academic achievement. Overcontrol is characterized by low scores on extraversion, emotional stability, and self-esteem. Undercontrol is characterized by low scores on emotional stability and agreeableness and high scores on aggressive behavior.
The 19-year longitudinal study illustrated that childhood personality types were meaningfully associated with the timing of the transitions. Resilient males were found to leave their parents’ house approximately one year earlier than overcontrolled or undercontrolled children. Overcontrolled boys took more than a year longer than others in finding a romantic partner. Resilient boys and girls were faster in getting a part-time job than their overcontrolled and undercontrolled peers.
Okay, when offspring genetic engineering becomes possible will prospective parents opt to give their kids genetic variations that make them resilient personalities or maybe undercontrolled or overcontrolled? I'm expecting parents to boost the IQ of their kids. But will they go too far in giving the kids extraversion or perhaps make them too emotionally controlled?
Chris Skrebowski, a researcher for the Energy Institute in Britain, told delegates that the oil supply will peak in 2011 or 2012 at around 93 million barrels a day, that oil supply in international trade will peak earlier than the oil production peak, and he forecast: "There will be supply shortfalls in winter before peak."
According to Skrebowski, there were eight key pieces of evidence indicating that the world was looming ever-closer to peak oil. These included the falling rate of discoveries of new oil-fields; sustained high oil prices; the age of the largest fields; the lack of real growth potential in oil-producing countries; the current lack of incremental flows; the sustained depletion of oil reserves; nongeologic threats to future oil-supplies; and the struggle to hold production by many of the major oil producers.
He explained that peak oil was predicted to become a reality in 2011 on the basis that the world's major oil fields were being depleted at a rate of 4.5% a year.
The production decline rates of existing fields are an important part of the equation for when Peak Oil happens. Another important factor is the rate at which new oil megaprojects come on line. Megaproject delays - which are not uncommon - could make the peak come sooner. The debate about when oil peaks is partly a debate about how many projects will stay on schedule.
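A toy model makes the tug-of-war between decline rates and megaprojects concrete: if existing fields deplete at roughly 4.5% a year, new projects must add about that fraction of total capacity every year just to hold output flat. All the specific numbers below are illustrative assumptions, not figures from Skrebowski's analysis:

```python
def project_supply(base, depletion_rate, additions_per_year, years):
    """Yearly supply path (mb/d) when existing production depletes at
    depletion_rate while megaprojects add additions_per_year of new flow."""
    supply = base
    path = []
    for _ in range(years):
        supply = supply * (1 - depletion_rate) + additions_per_year
        path.append(round(supply, 1))
    return path

# Holding a 93 mb/d peak flat against 4.5% depletion takes ~4.2 mb/d
# of new capacity every single year:
print(project_supply(93, 0.045, 4.2, 3))

# If megaproject delays cut additions to 3 mb/d, output declines each year:
print(project_supply(93, 0.045, 3.0, 3))
```

This is why megaproject delays matter so much: even a modest shortfall in new capacity flips a plateau into a decline.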
But how is it that crude can still trade above $100 a barrel, three times what it sold for at the start of the decade, despite a very wobbly economy?
If you want to understand that, it helps to listen in to ExxonMobil's (XOM) presentation to analysts in New York City in early March. Halfway through the three-hour meeting, Exxon management flashed a chart that showed the company's worldwide oil production staying flat through 2012.
So $100 per barrel oil isn't enough for ExxonMobil to find ways to boost production. Only national oil companies might be able to substantially increase production. The publicly traded international oil companies can't find enough oil to produce.
What about supply? The recent Deutsche Bank report notes the perverse fact that since 2003 higher oil prices have caused lower growth in oil production, a phenomenon that is related to the hoarding issue that I have long discussed. Based on the work of Skrebowski and the opinions of Maxwell and other experts on future oil supplies, I suspect the world may scrape out the capacity to meet normal demand growth of developing countries for perhaps two more years, bringing oil use by the end of 2009 to perhaps 90 mb/d. But after that, oil supply growth will stop and then start declining. It could decline slowly by, say, 1 mb/d per year, or it could decline more rapidly by perhaps 3 mb/d per year. If it declines by 1 mb/d per year between 2010 and 2015, it will be back to 85 mb/d in 2015. Implied demand, we saw, will be about 97 mb/d after the savings in U.S. car usage.
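The scenario arithmetic in the paragraph above can be written down directly, taking the 90 mb/d figure as the 2010 level and declining linearly from there:

```python
def supply_in(year, peak=90.0, peak_year=2010, decline=1.0):
    """Post-peak supply in mb/d under a linear decline of
    `decline` mb/d per year, using the scenario figures above."""
    return peak - decline * (year - peak_year)

print(supply_in(2015))               # 85.0 mb/d in the slow-decline case
print(supply_in(2015, decline=3.0))  # 75.0 mb/d in the fast-decline case
print(97 - supply_in(2015))          # 12.0 mb/d gap vs ~97 mb/d implied demand
```

Even the gentle 1 mb/d per year case leaves a 12 mb/d gap against implied demand by 2015; the gap is what forces prices up and consumption down.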
If you are sitting on a lot of oil why sell oil in a given year once oil production is in global decline? Each next year the price will be much higher. Leaving oil in the ground is a form of investment. This is why I think the environmentalists who oppose drilling in the Alaska National Wildlife Refuge (ANWR) have done us a big favor. The oil will get pumped out eventually assuming ANWR really has a lot of oil (and it might not). Meanwhile it sits in the ground waiting to help us in the coming harsh post-peak era.
Energy analyst Charles T. Maxwell thinks gasoline prices in the US will need to more than triple to force Americans into a radical restructuring of how they live.
Maxwell said it will take $12 to $15 a gallon to get Americans to let go of what he called the “precious freedom of mobility.” As much as Maxwell laments the loss, he sees no other way for the U.S. to impose enough conservation to deal with the growing imbalance between oil demand and supply that he sees developing around 2010 and getting worse in 2012 or 2013, as the world hits a “peak” in conventional oil production.
The Energy Watch Group is even more pessimistic since the EWG claims world oil production peaked in 2006 and we are looking at steep production declines going forward.
According to the scenario calculations, oil production will decline by about 50% until 2030. This is equivalent to an average annual decline rate of 3%, well in line with the US experience where oil production from the lower 48 states declined by 2-3% per year.
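A quick sanity check on the EWG numbers, taking their claimed 2006 peak as the starting point: does a 3% annual decline really cut production roughly in half by 2030?

```python
# Compound 3% annual decline from the claimed 2006 peak through 2029:
production = 1.0  # production as a fraction of the peak level
for year in range(2006, 2030):
    production *= 0.97

print(round(production, 2))  # 0.48 -- roughly half of peak, as EWG states
```

So the two EWG figures (3% per year, 50% by 2030) are internally consistent.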
Still other analysts see peak oil between 2010 and 2012.
A decline in production in 2008 would be far more disruptive than a decline in 2012. Investments in reaction to the current high oil prices will gradually yield substitute energy sources in the coming years. We are better off if the world stays on an oil production plateau so that gradually rising prices send increasingly louder signals that we need to develop alternatives, implement conservation measures, and restructure our lives to need less energy.
We need nuclear, solar, and wind power and great batteries for transportation.
Heart rejuvenation is fundamentally a DNA programming problem. With enough knowledge about how to run the DNA software we can make stem cells become replacement cells in cardiac muscle and in the rest of the body.
SAN FRANCISCO, CA –March 5, 2008--Researchers at the Gladstone Institute of Cardiovascular Disease (GICD) and the University of California, San Francisco have identified for the first time how tiny genetic factors called microRNAs may influence the differentiation of pluripotent embryonic stem (ES) cells into cardiac muscle. As reported in the journal Cell Stem Cell, scientists in the lab of GICD Director, Deepak Srivastava, MD, demonstrated that two microRNAs, miR-1 and miR-133, which have been associated with muscle development, not only encourage heart muscle formation, but also actively suppress genes that could turn the ES cells into undesired cells like neurons or bone.
“Understanding how pluripotent stem cells can be used in therapy requires that we understand the myriad processes and factors that influence cell fate,” said Dr. Srivastava. “This work shows that microRNAs can function both in directing how ES cells change into specific cells—as well as preventing these cells from developing into unwanted cell types.”
These microRNAs trigger gene activity that turns the embryonic stem cells into cardiac muscle. With more knowledge about the activity of hundreds (or perhaps thousands) of microRNAs we will be able to make large numbers of tissue types from stem cells. It is a matter of discovering a large number of possible ways to instruct cells to do our bidding.
How long will it take to figure out which microRNA can tell which cell type to become which other cell type? I'm thinking that microfluidics will speed up this process by automating the testing of large numbers of microRNAs with large numbers of cell types. The rate of advance in stem cell manipulation will accelerate every year as microfluidic devices and other tools for lab automation allow the solution space to be searched orders of magnitude more rapidly.
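Some back-of-the-envelope arithmetic shows why automation matters here. All of the counts below are illustrative assumptions for the sake of the estimate, not figures from the study:

```python
# Size of the single-microRNA screening problem (assumed round numbers):
n_micrornas = 700    # order of magnitude of known human microRNAs
n_cell_types = 200   # distinct cell types to test them against
pairs = n_micrornas * n_cell_types
print(pairs)         # 140000 individual experiments

# Throughput comparison (both rates are assumptions for illustration):
manual_per_day = 96          # e.g. one 96-well plate a day by hand
microfluidic_per_day = 10_000

print(round(pairs / manual_per_day))   # ~1458 days by hand -- about 4 years
print(pairs / microfluidic_per_day)    # 14.0 days on automated chips
```

And that is only single microRNAs; combinations of two or more blow up the search space combinatorially, which makes automated search less a convenience than a necessity.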
Boston, MA—Researchers from Boston University School of Medicine (BUSM) have estimated that one in six women are at risk for developing Alzheimer’s disease (AD) in their lifetime, while the risk for men is one in ten. These findings were released today by the Alzheimer’s Association in their publication 2008 Alzheimer’s Disease: Facts and Figures.
The higher incidence of Alzheimer's among women comes as a result of women living longer. Basically, the longer you live the higher the odds you'll get Alzheimer's (and other degenerative diseases of the brain) and die from it.
Stroke and dementia are the most widely feared age-related neurological diseases, and are also the only neurological disorders listed in the ten leading causes of disease burden.
The researchers followed 2,794 participants of the Framingham Heart Study for 29 years who were without dementia. They found 400 cases of dementia of all types and 292 cases of AD. They estimated the lifetime risk of any dementia at more than one in five for women, and one in seven for men.
Our brains are aging. This is bad. When we get into our 70s and 80s our brains will really start to malfunction. Did I mention this is bad? Can you still remember two sentences later that I mentioned this is bad? Isn't it really handy that you can remember a sentence long enough to see logical connections between sentences? Don't you want to continue to be able to do this? This is an argument for a very aggressive program to develop brain rejuvenation therapies using stem cells and gene therapies. Yes, we should develop rejuvenation therapies as an urgent priority. In the meantime the fish oils DHA and EPA appear to slow down brain aging and reduce the risk of Alzheimer's. So let's slow down our brain aging while we try to develop the means to reverse it.
• As many as 5.2 million people in the United States are living with Alzheimer’s.
• 10 million baby boomers will develop Alzheimer's in their lifetime.
• Every 71 seconds, someone develops Alzheimer’s.
• Alzheimer's is the seventh-leading cause of death.
• The direct and indirect costs of Alzheimer's and other dementias to Medicare, Medicaid and businesses amount to more than $148 billion each year.
Those costs do not include the cost of lost earnings, since we can't think as productively as our minds become impaired and less able to function.
A new study of 856 people age 71 years and older found that 22 percent had some cognitive impairment that did not reach the threshold for dementia (Article, p. 427). Each year, about 8 percent of individuals with cognitive impairment but not dementia at baseline died and about 12 percent progressed to dementia. Using the 22 percent figure, researchers calculate that in 2002 in the United States, 5.4 million people aged 71 and older had cognitive impairment without dementia. Previous estimates of cognitive impairment without dementia ranged from 5 percent to 29 percent.
Bear in mind (at least while you can) that some old folks have Alzheimer's, others have dementia, still others have cognitive impairment that falls short of getting classified as dementia, and still others have impairment due to stroke or Parkinson's Disease.
But on the bright side, the prevalence of cognitive impairment among the elderly appears to be declining.
ANN ARBOR, Mich. — Although it’s too soon to sound the death knell for the “senior moment,” it appears that memory loss and thinking problems are becoming less common among older Americans.
A new nationally representative study shows a downward trend in the rate of “cognitive impairment” — the umbrella term for everything from significant memory loss to dementia and Alzheimer’s disease — among people aged 70 and older.
The prevalence of cognitive impairment in this age group went down by 3.5 percentage points between 1993 and 2002 — from 12.2 percent to 8.7 percent, representing a difference of hundreds of thousands of people.
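A quick sanity check on those numbers: a 3.5 percentage point drop off a 12.2 percent base works out to nearly a 29 percent relative decline, which is a bigger improvement than the raw point change suggests. A minimal sketch, using only the two prevalence figures quoted above:

```python
# Relative size of the decline in cognitive impairment reported above.
# Both prevalence figures come from the study; the relative drop is derived.
prev_1993 = 0.122  # 12.2 percent of Americans aged 70+
prev_2002 = 0.087  # 8.7 percent

absolute_drop = prev_1993 - prev_2002      # percentage-point change
relative_drop = absolute_drop / prev_1993  # proportional change

print(f"Absolute drop: {absolute_drop * 100:.1f} percentage points")
print(f"Relative drop: {relative_drop * 100:.0f} percent")
```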
Lower rates of smoking probably contributed to this improvement. Cholesterol lowering drugs and better diets might have helped too. Ditto for lower levels of air pollution.
Philadelphia, PA, March 18, 2008 – Individuals who experience military combat obviously endure extreme stress, and this exposure leaves many diagnosed with the psychiatric condition of post-traumatic stress disorder, or PTSD. PTSD is associated with several abnormalities in brain structure and function. However, as researcher Roger Pitman explains, “Although it is tempting to conclude that these abnormalities were caused by the traumatic event, it is also possible that they were pre-existing risk factors that increased the risk of developing PTSD upon the traumatic event’s occurrence.” Drs. Kasai and Yamasue along with their colleagues sought to examine this association in a new study published in the March 15th issue of Biological Psychiatry.
The authors measured the gray matter density of the brains of combat-exposed Vietnam veterans, some with and some without PTSD, and their combat-unexposed identical twins using a technology called magnetic resonance imaging (MRI). The detailed images provided by the MRI scans then allowed the investigators to compare specific brain regions of the siblings. They found that the gray matter density of the pregenual anterior cingulate cortex, an area of the brain involved in emotional functioning, was reduced in veterans with PTSD, but not in their twins who had not experienced combat. According to Dr. Pitman, “this finding supports the conclusion that the psychological stress resulting from the traumatic stressor may damage this brain region, with deleterious emotional consequences.”
A traumatic event is much more likely to result in posttraumatic stress disorder (PTSD) in adults who experienced trauma in childhood – but certain gene variations raise the risk considerably if the childhood trauma involved physical or sexual abuse, scientists have found. The research was conducted with funding from the National Institute of Mental Health, which is part of the National Institutes of Health, and others.
Inherited variations in multiple genes, which have yet to be identified, are estimated to account for 30 to 40 percent of the risk of developing PTSD. The gene identified in this study is one likely candidate, although others are almost certain to emerge.
To conduct their study, the researchers surveyed 900 primarily African-American people 18 to 81 years old, from poor, urban neighborhoods. As is common in impoverished environments, many of the people in this study had experienced severe traumatic experiences in childhood and had later experienced other kinds of trauma as adults. The researchers also examined the genetic make-up of 765 of the participants.
They found that having a history of child abuse – which was the case for almost 30 percent of the people in this study – led to more than twice the number of PTSD symptoms in adults who had later undergone other traumas, compared to traumatized adults who weren’t abused in childhood. But the history of child abuse wasn’t enough, by itself, to lead to the increase in symptoms; the increase appeared to depend on whether or not certain variations in the stress-related gene also were present.
At some point in the future I picture potential soldiers undergoing genetic screening. Then if you are unlucky enough to have a genetic profile that makes you more immune to PTSD you'll get assigned to more dangerous front line combat units.
But then brain gene therapy to make soldiers more immune to stress will take even abused kids with bad genetic profiles and turn them into relaxed combat leaders who can handle extended periods of combat with little long term brain damage.
In a series of experiments, Ritesh Saini (George Mason University) and Ashwani Monga (University of Texas, San Antonio) demonstrate that a qualitatively different form of decision making gains prominence when consumers work with time instead of money. Specifically, consumers thinking about expenditure of time are more likely to rely on heuristics: intuitive, quick judgments based more on prior experience than on analysis of the information presented.
For example, one experiment had participants consider the purchase of a used car. They were told that a search on a used-car website had yielded 80 cars meeting their criteria but that viewing each accident record would take either $1 or 5 minutes of time. They were then asked how many records they would like to view, with a catch: the researchers used classic experimental “anchoring” techniques to manipulate the answers.
Participants were asked whether they would view “up to 2” or “up to 40” records, before indicating the specific number of records they would view. The use of an anchor, for those thinking in terms of time expenditure, turned out to have a significant impact.
When the anchor value was high in the time condition, consumers chose to view an average of 23.7 accident reports, versus 9.1 when the anchor value was low. The number of records consumers in the money condition chose to view was statistically the same, irrespective of whether the anchor value was high or low.
“People face difficulties in accounting for time because they do not routinely transact in time as they do in money,” explain the researchers. “Although people in some professions (e.g., lawyers) do keenly monitor their time expenditures, most other people are not trained to do so.”
Does this line of reasoning sound correct to you?
Look at people in office settings. I see them waste each other's time on a daily basis. It seems easier to get people to waste time than money.
How easily one can be influenced to spend money depends on whether one is a tightwad or a spendthrift. Surprisingly, the tightwad's spending habits are easier to influence.
Whether one is a spendthrift or a tightwad also predicts a wide range of spending behavior, the researchers found. Spendthrifts are no more likely than tightwads to use credit cards, but spendthrifts who use credit cards are three times more likely to carry debt than tightwads who use credit cards.
Annual income differs little between tightwads and spendthrifts, suggesting that the observed differences in debt are largely driven by differences in spending habits.
Interestingly, the researchers also found that tightwads are also most sensitive to marketing ploys designed to reduce the pain of paying. In one experiment, participants were asked whether they would be willing to pay $5 to have DVDs shipped overnight. The cost was either framed as a “$5 fee” or a “small $5 fee.” Spendthrifts were completely insensitive to the manipulation, but tightwads were 20 percent more likely to pay the fee when it was less painfully presented as “small.”
When offspring genetic engineering becomes possible will more people genetically engineer their kids to be tightwads or spendthrifts? Will they make their kids less likely to waste time?
Author James O’Keefe, M.D., a cardiologist from the Mid America Heart Institute in Kansas City, Mo., cites the results of several large trials that demonstrated the positive benefits associated with omega-3 fatty acids, either from oily fish or fish oil capsules.
“The most compelling evidence for the cardiovascular benefit provided by omega-3 fatty acids comes from three large controlled trials of 32,000 participants randomized to receive omega-3 fatty acid supplements containing DHA and EPA or to act as controls,” explains Dr. O’Keefe. “These trials showed reductions in cardiovascular events of 19 percent to 45 percent. Overall, these findings suggest that intake of omega-3 fatty acids, whether from dietary sources or fish oil supplements, should be increased, especially in those with or at risk for coronary artery disease.”
How much fish oil should people attempt to incorporate into their diets? According to Dr. O’Keefe, people with known coronary artery disease should consume about 1 gram per day, while people without disease should consume at least 500 milligrams (mg) per day.
“Patients with high triglyceride levels can benefit from treatment with 3 to 4 grams daily of DHA and EPA,” says Dr. O’Keefe. “Research shows that this dosage lowers triglyceride levels by 20 to 50 percent.”
About two meals of oily fish can provide 400 to 500 mg of DHA and EPA, so patients who need to consume higher levels of these fatty acids may choose to use fish oil supplements to reach these targets.
How much EPA and DHA you are going to get from eating fish depends heavily on which fish you eat. They differ on the total amount of fat per serving and also in terms of what percentage of the fat is DHA or EPA.
See this chart of EPA and DHA per 3 oz serving of various types of fish. Their amounts of EPA and DHA vary by more than 2 orders of magnitude. If you choose to eat 3 ounces per day of the higher EPA and DHA fish you can easily get 1 gram of EPA and DHA per day. Though doing that day after day might get tedious and time consuming. The pills have their appeal.
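To get a feel for the arithmetic, here's a rough Python sketch of how servings translate into daily EPA plus DHA intake. The milligram-per-serving values are illustrative placeholders spanning the wide range the chart shows, not numbers taken from the chart itself:

```python
# Rough helper translating fish servings into daily EPA+DHA intake.
# The mg-per-3-oz figures below are illustrative placeholders spanning
# the wide range mentioned above, not values from the chart itself.
epa_dha_mg_per_3oz = {
    "oily fish, high end (illustrative)": 1800,
    "mid-range fish (illustrative)": 250,
    "lean fish, low end (illustrative)": 130,
}

target_mg = 1000  # ~1 gram/day, the coronary artery disease figure above

for fish, mg in epa_dha_mg_per_3oz.items():
    servings = target_mg / mg
    print(f"{fish}: {servings:.1f} three-ounce servings per day to reach 1 g")
```

With the high-end fish a single 3 ounce serving more than covers the 1 gram target; with the low-end fish you would need seven or eight servings a day, which is why the pills have their appeal.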
A pair of articles from MIT's Technology Review report on prospects for lower solar photovoltaics manufacturing costs. First, Solaria is developing cheaper ways to make silicon-crystal-based photovoltaics using thinner cells and lower-cost fabrication techniques.
Solaria, a startup based in Fremont, CA, intends to cut the cost of solar panels by decreasing the amount of expensive material required. It has recently started shipping its first panels to select customers. This spring the company will begin production of solar panels at a factory built to produce 25 megawatts of solar panels per year.
Current high costs for the type of silicon used in photovoltaics have significantly driven up the price of conventional solar panels. Solaria's cells generate about 90% of a conventional solar panel's power, while using half as much silicon, says Kevin Gibson, Solaria's CTO.
The eventual expected cost reduction is only 10 to 30 percent.
Gibson says Solaria's first products will be economical enough to compete with panels produced by much larger companies, and that successive product generations will cost between 10 and 30 percent less than their competitors.
We need a much larger drop in photovoltaics cost. But 30% would be very substantial.
An approach using titanium oxide nanocrystals and organic dyes has the potential for much larger price reductions.
Cheap and easy-to-make dye-sensitized solar cells are still in the early stages of commercial production. Meanwhile, their inventor, Michael Gratzel, is working on more advanced versions of them. In a paper published in the online edition of Angewandte Chemie, Gratzel, a chemistry professor at the École Polytechnique Fédérale de Lausanne in Switzerland, presents a version of dye-sensitized cells that could be more robust and even cheaper to make than current versions.
Dyes made out of organic material could be very cheap.
New dyes are also being investigated. In commercial cells, the dyes are made of the precious metal ruthenium. But researchers have recently started to consider organic molecules as an alternative. "Organic dyes will become important because they can be cheaply made," Gratzel says. In the long run, they might also be more abundant than ruthenium.
Costs of new nuclear and coal power plant construction have skyrocketed. So the price point that solar has to get down to in order to compete has risen. Competitive photovoltaics probably require at least a two thirds price cut to below $1/Watt capacity. When will that happen? Your guess is as good as mine.
Outsourcing takes so many forms. Foreigners rent wombs in India in order to save money.
Commercial surrogacy, which is banned in some states and some European countries, was legalized in India in 2002. The cost comes to about $25,000, roughly a third of the typical price in the United States. That includes the medical procedures; payment to the surrogate mother, which is often, but not always, done through the clinic; plus air tickets and hotels for two trips to India (one for the fertilization and a second to collect the baby).
I'm sure you all can see the next logical step: parenting surrogacy. Hire the surrogate mother to keep taking care of the kid even after birth. Get to claim the kid is yours without having to interrupt your drive to success by actually taking the time to raise it. You could fly into India (or have the baby flown to your home country) once a year to get a series of pictures taken with the kid. That way the pictures at your office desk or in your wallet stay up to date with your age and your co-workers do not have to suspect you rarely see the kid. You can even fake authentic child raising problems. Occasionally (but not as often as in real life child raising) when the kid gets the flu in India you could even stay home from work for a couple of days and pretend to take care of Johnnie or Jill.
A deluxe parenting surrogacy service would include a web cam accessible only by you and some camera monitoring personnel in India. When an important moment happens (e.g. your baby's first step) a camera monitoring worker could notify you and email you the video clip showing those first steps. The baby would be kept in a US-looking living room which could be made to look like your own. With the Indian surrogates care in staying away from the camera you could even show your baby's first steps to people in the office.
The problem with parenting surrogacy, of course, is the invitations where you are supposed to bring Junior. This is where surrogacy in Mexico might be able to compete with surrogacy in India. If the little tyke is only a short airplane hop away from where you live then the baby can be brought in just for baby birthday parties and the like.
Medical tourism surrogacy is rapidly growing.
Rudy Rupak, co-founder and president of PlanetHospital, a medical tourism agency with headquarters in California, said he expected to send at least 100 couples to India this year for surrogacy, up from 25 in 2007, the first year he offered the service.
Lower prices in India make surrogacy affordable by middle class Americans.
Under guidelines issued by the Indian Council of Medical Research, surrogate mothers sign away their rights to any children. A surrogate’s name is not even on the birth certificate.
This eases the process of taking the baby out of the country. But for many, like Lisa Switzer, 40, a medical technician from San Antonio whose twins are being carried by a surrogate mother from the Rotunda clinic, the overwhelming attraction is the price. “Doctors, lawyers, accountants, they can afford it, but the rest of us — the teachers, the nurses, the secretaries — we can’t,” she said. “Unless we go to India.”
Outsourcing isn't just for corporations. Outsourcing is for mothers too.
FOSTER CITY, Calif. -- Applied Biosystems (NYSE:ABI), an Applera Corporation business, today announced a significant development in the quest to lower the cost of DNA sequencing. Scientists from the company have sequenced a human genome using its next-generation genetic analysis platform. The sequence data generated by this project reveal numerous previously unknown and potentially medically significant genetic variations. It also provides a high-resolution, whole-genome view of the structural variants in a human genome, making it one of the most in-depth analyses of any human genome sequence. Applied Biosystems is making this information available to the worldwide scientific community through a public database hosted by the National Center for Biotechnology Information (NCBI).
Does anyone reading this know (or have a way to find out) how many days or weeks this sequencing took to do?
Applied Biosystems was able to analyze the human genome sequence for a cost of less than $60,000, which is the commercial price for all required reagents needed to complete the project. This is a fraction of the cost of any previously released human genome data, including the approximately $300 million spent on the Human Genome Project. The cost of the Applied Biosystems sequencing project is less than the $100,000 milestone set forth by the industry for the new generation of DNA sequencing technologies, which are beginning to gain wider adoption by the scientific community.
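The scale of the cost drop is easy to check from the two reference points given in the press release:

```python
# How big a cost drop does $60,000 per genome represent? A comparison
# against the two reference points given in the press release.
human_genome_project_cost = 300_000_000  # approximate, per the release
industry_milestone = 100_000
abi_cost = 60_000

fold_vs_hgp = human_genome_project_cost / abi_cost
margin_under_milestone = industry_milestone - abi_cost

print(f"{fold_vs_hgp:,.0f}-fold cheaper than the Human Genome Project")
print(f"${margin_under_milestone:,} under the $100,000 industry milestone")
```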
The earliest automated DNA sequencing machine, developed at Caltech (using a mass spectrometer design developed for a Mars mission), required a full-time lab technician to purify the highest-quality existing reagents to the even higher purity that the sequencing machine needed.
These scientists sequenced the same genome multiple times, which is needed in order to get good accuracy.
Under the direction of Kevin McKernan, Applied Biosystems' senior director of scientific operations, the scientists resequenced a human DNA sample that was included in the International HapMap Project. The team used the company's SOLiD System to generate 36 gigabases of sequence data in 7 runs of the system, achieving throughput up to 9 gigabases per run, which is the highest throughput reported by any of the providers of DNA sequencing technology.
The 36 gigabases includes DNA sequence data generated from covering the contents of the human genome more than 12 times, which helped the scientists to determine the precise order of DNA bases and to confidently identify the millions of single-base variations (SNPs) present in a human genome. The team also analyzed the areas of the human genome that contain the structural variation between individuals. These regions of structural variation were revealed by greater than 100-fold physical coverage, which shows positions of larger segments of the genome that may vary relative to the human reference genome.
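The coverage arithmetic is easy to verify against the usual approximation of a roughly 3 gigabase haploid human genome (the genome size here is my assumption, not a figure from the release):

```python
# Sanity-checking the coverage numbers. The ~3 gigabase haploid human
# genome size is the usual approximation, not a figure from the release.
total_sequence_gb = 36   # per the release
runs = 7                 # per the release
genome_size_gb = 3.0     # assumed

fold_coverage = total_sequence_gb / genome_size_gb
avg_gb_per_run = total_sequence_gb / runs

print(f"Sequence coverage: ~{fold_coverage:.0f}x (release says 'more than 12 times')")
print(f"Average throughput: {avg_gb_per_run:.1f} Gb per run (peak reported: 9 Gb)")
```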
"We believe this project validates the promise of next-generation sequencing technologies, which is to lower the cost and increase the speed and accuracy of analyzing human genomic information," said McKernan. "With each technological milestone, we are moving closer to realizing the promise of personalized medicine."
Before we get to personalized medicine we are going to discover what a huge number of genetic variations do to make us different in mind and body. Our perceptions of what we are as humans will be fundamentally altered. Most notably people will come out on the other side of this wave of discoveries with an altered and reduced view of the power of free will.
How many weeks has it been since I last nagged you about how most of you don't get enough vitamin D? Vitamin D, which helps the immune system function better, seems to cut the incidence of the autoimmune disorder type 1 diabetes.
Vitamin D supplements in early childhood may ward off the development of type 1 diabetes in later life, reveals a research review published ahead of print in the Archives of Disease in Childhood.
Type 1 diabetes is an autoimmune disorder, in which insulin producing beta cells in the pancreas are destroyed by the body’s own immune system, starting in early infancy. The disease is most common among people of European descent, with around 2 million Europeans and North Americans affected.
Its incidence is rising at roughly 3% a year, and it is estimated that new cases will have risen 40% between 2000 and 2010.
A trawl of published evidence on vitamin D supplementation in children produced five suitable studies, the pooled data from which were re-analysed.
The results showed that children given additional vitamin D were around 30% less likely to develop type 1 diabetes compared with those not given the supplement.
Vitamin D also might cut your risk of the autoimmune disease rheumatoid arthritis (and see here too). Risk of Multiple Sclerosis also appears inversely associated with blood vitamin D levels. Avoid autoimmune disorders. Get enough vitamin D.
Promoting the green design, construction, renovation and operation of buildings could cut North American greenhouse gas emissions that are fuelling climate change more deeply, quickly and cheaply than any other available measure, according to a new report issued by the trinational Commission for Environmental Cooperation (CEC).
I've long thought that cars get a disproportionate amount of attention over the energy they use. We should focus harder on building efficiency over car efficiency for a few reasons. First off, buildings last longer and cost more. Decisions made about building construction stay with us for a longer period of time than decisions about which car to drive. As the effects of Peak Oil hit with full force we can shift to motorcycles, bicycles converted to electric power, and very small cheap cars. But houses and office buildings can last for 100 years and longer.
A second reason to focus more on buildings is that most measures for making a building more efficient (e.g. better insulation and sealing, multi-pane windows facing southward, ground sink heat pumps) do not make buildings less comfortable. In fact, they can make buildings more comfortable. By contrast, most people prefer bigger cars for greater comfort and safety. They won't give up the big cars until gasoline goes up even higher.
Very few of the new buildings get built with the most efficient designs possible.
North America’s buildings cause the annual release of more than 2,200 megatons of CO2 into the atmosphere, about 35 percent of the continent’s total. The report says rapid market uptake of currently available and emerging advanced energy-saving technologies could result in over 1,700 fewer megatons of CO2 emissions in 2030, compared to projected emissions that year following a business-as-usual approach. A cut of that size would nearly equal the CO2 emitted by the entire US transportation sector in 2000.
It is common now for more advanced green buildings to routinely reduce energy usage by 30, 40, or even 50 percent over conventional buildings, with the most efficient buildings now performing more than 70 percent better than conventional properties, according to the report.
Despite proven environmental, economic and health benefits, however, green building today accounts for only a small fraction of new home and commercial building construction—just two percent of the new non-residential building market, less than half of one percent of the residential market in the United States and Canada, and less than that in Mexico.
I am expecting energy price rises to drive a push toward more efficient building construction. If you are thinking about building a house or commercial building think about future energy prices when you choose your design.
Oak Ridge National Laboratory researchers claim that if plug-in hybrids don't get recharged until after 10 PM then they will require little or no additional electric power plant capacity.
In an analysis of the potential impacts of plug-in hybrid electric vehicles projected for 2020 and 2030 in 13 regions of the United States, ORNL researchers explored their potential effect on electricity demand, supply, infrastructure, prices and associated emission levels. Electricity requirements for hybrids used a projection of 25 percent market penetration of hybrid vehicles by 2020 including a mixture of sedans and sport utility vehicles. Several scenarios were run for each region for the years 2020 and 2030 and the times of 5 p.m. or 10:00 p.m., in addition to other variables.
The report found that the need for added generation would be most critical by 2030, when hybrids have been on the market for some time and become a larger percentage of the automobiles Americans drive. In the worst-case scenario—if all hybrid owners charged their vehicles at 5 p.m., at six kilowatts of power—up to 160 large power plants would be needed nationwide to supply the extra electricity, and the demand would reduce the reserve power margins for a particular region's system.
The best-case scenario occurs when vehicles are plugged in after 10 p.m., when the electric load on the system is at a minimum and the wholesale price for energy is least expensive. Depending on the power demand per household, charging vehicles after 10 p.m. would require, at lower demand levels, no additional power generation or, in higher-demand projections, just eight additional power plants nationwide.
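To put the 160-plant worst case in perspective, invert the arithmetic: at 6 kilowatts per vehicle, how many simultaneously charging hybrids does one plant support? The roughly 1 gigawatt "large plant" size is my own assumption, not a figure from the study:

```python
# Inverting the ORNL worst case: how many simultaneously charging vehicles
# does one large power plant support at 6 kW each? The ~1 GW "large plant"
# size is my assumption, not a figure from the study.
charge_kw = 6                  # per-vehicle charging rate, per the study
plant_capacity_kw = 1_000_000  # assumed 1 GW large plant

vehicles_per_plant = plant_capacity_kw / charge_kw
fleet_for_160_plants = vehicles_per_plant * 160

print(f"~{vehicles_per_plant:,.0f} vehicles per 1 GW plant")
print(f"160 plants cover ~{fleet_for_160_plants / 1e6:.0f} million vehicles charging at once")
```

Under that assumed plant size, 160 plants corresponds to roughly 27 million vehicles all charging at 5 p.m., which gives a sense of the fleet size the worst-case scenario implies.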
Since I suspect the world has already reached Peak Oil I expect the shift to electrically-powered vehicles will happen sooner than this study assumes. Also, total electric demand will grow more rapidly as dwindling oil supplies cause a big shift toward electrically powered equipment of all kinds.
The great difference in power plant usage between the afternoon and late night is partly a result of a lack of dynamic pricing. If electric rates for homes varied by the time of day based on relative levels of demand then people and companies would shift more of their electric demand toward the late night even before significant numbers of hybrid vehicles hit the market. Such a shift in demand would cause higher utilization of power plants at night and therefore less excess power generation capacity available to charge electric cars.
Fortunately thermal solar and photovoltaic solar will drop in prices and will become cost competitive sources of day time power. Electric cars will then preferentially get recharged in the morning sun before the peak business demand for electric power in the afternoon.
Eat your broccoli! That's the advice from UCLA researchers who have found that a chemical in broccoli and other cruciferous vegetables may hold a key to restoring the body's immunity, which declines as we age.
Published in this week's online edition of the Journal of Allergy and Clinical Immunology, the study findings show that sulforaphane, a chemical in broccoli, switches on a set of antioxidant genes and enzymes in specific immune cells, which then combat the injurious effects of molecules known as free radicals that can damage cells and lead to disease.
Immune system aging sets you up for getting killed by pneumonia or flu or a bacterial infection picked up while at a hospital. In fact immune system aging probably makes us more vulnerable to cancer and people with especially capable immune systems are probably at much lower risk of getting cancer. So keeping your immune system younger yields a big benefit.
The UCLA team not only found that the direct administration of sulforaphane in broccoli reversed the decline in cellular immune function in old mice, but they witnessed similar results when they took individual immune cells from old mice, treated those cells with the chemical outside the body and then placed the treated cells back into a recipient animal.
In particular, the scientists discovered that dendritic cells, which introduce infectious agents and foreign substances to the immune system, were particularly effective in restoring immune function in aged animals when treated with sulforaphane.
"We found that treating older mice with sulforaphane increased the immune response to the level of younger mice," said Hyon-Jeen Kim, first author and research scientist at the Geffen School.
To investigate how the chemical in broccoli increased the immune system's response, the UCLA group confirmed that sulforaphane interacts with a protein called Nrf2, which serves as a master regulator of the body's overall antioxidant response and is capable of switching on hundreds of antioxidant and rejuvenating genes and enzymes.
Nel said that the chemistry leading to activation of this gene-regulation pathway could be a platform for drug discovery and vaccine development to boost the decline of immune function in elderly people.
Sulforaphane concentration in broccoli sprout (1153 mg/100 g dry weight) was about 10 times higher than that of mature broccoli (44-171 mg/100 g dry weight).
Extracts of 3-day-old broccoli sprouts (containing either glucoraphanin or sulforaphane as the principal enzyme inducer) were highly effective in reducing the incidence, multiplicity, and rate of development of mammary tumors in dimethylbenz(a)anthracene-treated rats. Notably, sprouts of many broccoli cultivars contain negligible quantities of indole glucosinolates, which predominate in the mature vegetable and may give rise to degradation products (e.g., indole-3-carbinol) that can enhance tumorigenesis. Hence, small quantities of crucifer sprouts may protect against the risk of cancer as effectively as much larger quantities of mature vegetables of the same variety.
Some Johns Hopkins researchers have even founded a company that sells teas fortified with sulforaphane.
The growth in China's carbon dioxide (CO2) emissions is far outpacing previous estimates, making the goal of stabilizing atmospheric greenhouse gases much more difficult, according to a new analysis by economists at the University of California, Berkeley, and UC San Diego.
Previous estimates, including those used by the Intergovernmental Panel on Climate Change, say the region that includes China will see a 2.5 to 5 percent annual increase in CO2 emissions, the largest contributor to atmospheric greenhouse gases, between 2004 and 2010. The new UC analysis puts that annual growth rate for China at at least 11 percent for the same time period.
A constant percentage increase per year compounds into ever-larger absolute increases per year. If China maintains an 11% annual CO2 increase through the 2010s then by 2020 it will likely emit more CO2 than all the rest of the world put together. Will they do that?
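The compounding arithmetic is worth making explicit. A minimal sketch: the 11%/yr rate comes from the UC analysis, but the 2004 baseline of 1.3 gigatons of carbon is a hypothetical round number for illustration only.

```python
# Sketch: compound growth in annual emissions at a constant percentage rate.
# The 11%/yr figure is from the UC analysis; the 1.3 Gt 2004 baseline is a
# hypothetical illustrative number, not a figure from the study.

def grow(base, rate, years):
    """Emissions level after `years` of compound growth at `rate`."""
    return base * (1 + rate) ** years

base_2004 = 1.3  # hypothetical baseline, gigatons of carbon per year
for year in (2010, 2015, 2020):
    level = grow(base_2004, 0.11, year - 2004)
    print(f"{year}: {level:.2f} Gt ({level / base_2004:.1f}x the 2004 level)")
```

At 11% per year the level roughly triples every decade, which is why a "constant percentage" translates into ever-larger absolute annual increments.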
The study is scheduled for print publication in the May issue of the Journal of Environmental Economics and Management, but is now online.
Keep in mind that many countries that signed the Kyoto Protocol are falling far short of meeting their pledges anyway.
The researchers' most conservative forecast predicts that by 2010, there will be an increase of 600 million metric tons of carbon emissions in China over the country's levels in 2000. This growth from China alone would dramatically overshadow the 116 million metric tons of carbon emissions reductions pledged by all the developed countries in the Kyoto Protocol. (The protocol was never ratified in the United States, which was the largest single emitter of carbon dioxide until 2006, when China took over that distinction, according to numerous reports.)
Put another way, the projected annual increase in China alone over the next several years is greater than the current emissions produced by either Great Britain or Germany.
Picture China's economy 2 times bigger. Picture it 3 times bigger. Huge demands for raw materials. Huge consumption of fossil fuels. Lots of pollution generated even from the solar photovoltaics industry.
Suppose rising CO2 emissions will cause global warming and that global warming will cause big negative impacts that outweigh the benefits. Well, we are going to have to use climate engineering techniques to stop and reverse the warming. Barring big breakthroughs to lower the costs of solar and nuclear power I do not see a substantial decrease in CO2 emissions until Peak Coal hits.
Most of this increase is coming from burning coal to generate electricity. If only they were building nuclear rather than coal electric power plants, the emissions (not just of CO2 but also particulates, mercury, etc.) would be far lower.
China's installed nuclear power-generating capacity is expected to reach 60 gigawatts by 2020, a senior Chinese energy official said -- much higher than an earlier government estimate of 40 gigawatts. A gigawatt is the equivalent of one billion watts. The new estimate is equal to about two-thirds of Britain's total electricity-generating capacity today, although still equivalent to less than a tenth of China's current total.
Faced with an energy crunch resulting from its fast economic growth, China has decided to develop more nuclear power. By 2020, the nation will have an installed nuclear power capacity of 40 million kw, accounting for 4 percent of its total installed generating capacity.
They still see nuclear power as too costly as compared to coal. Without cheaper ways to generate cleaner power the world is going to become a dirtier place.
Some forms of a gene that controls the body's response to stress hormones appear to protect adults who were abused in childhood from depression, psychiatrists have found.
People who had been abused as children and who carried the most protective forms of the gene, called corticotropin-releasing hormone receptor one (CRHR1), had markedly lower measures of depression, compared with people with less protective forms, the researchers found in a recent study.
The findings could guide doctors in finding new ways to treat depression in people who were abused as children, says senior author Kerry Ressler, MD, PhD, assistant professor of psychiatry and behavioral sciences at Emory University School of Medicine.
This is not the first report of genetic variations of brain genes that affect how well developing children handle abuse and adversity. Previous research found that children who carry the low MAOA activity allele (MAOA-L) and who are abused demonstrate more aggressive and violent behavior as adults.
Some kids have genes that let them shrug off all sorts of abuse and basically keep trucking. Other kids aren't so lucky. Those latter kids become problems for the rest of us too. Violence-prone adults pose a danger to whomever they come into contact with.
Early identification of kids with genetic vulnerabilities might some day get used to guide more aggressive state intervention into bad families. You can imagine social workers arguing to take a kid out of an abusive home more quickly if the child has genes that make him or her vulnerable to permanent and problematic behavioral and personality alterations.
Once offspring genetic engineering becomes possible we can't assume parents should avoid giving offspring these genetic variations that make kids more vulnerable to abuse. There might be benefits to these alleles in more benign environments. Though I see a more compelling argument for discouraging the passing along of these alleles if either prospective parent has a genetic profile and brain scans that suggest he or she is likely to abuse kids.
CAMBRIDGE, Mass. — Capitalizing on a cell’s ability to roll along a surface, MIT researchers have developed a simple, inexpensive system to sort different kinds of cells — a process that could result in low-cost tools to test for diseases such as cancer, even in remote locations.
A cheap, small, and easy-to-operate device for detecting cancers would allow more frequent, cheaper, and earlier-stage cancer detection. One can imagine such devices available in supermarkets or drug stores. A small blood sample could tell you pretty quickly whether to seek out a doctor. The resulting earlier-stage diagnoses would substantially raise cure rates.
Notice this result was published in Nano Letters. Advances in biotechnology are increasingly coming from working with very small scale materials and devices. Smaller devices can be orders of magnitude cheaper, faster, more reliable, and more sensitive.
Rohit Karnik, an MIT assistant professor of mechanical engineering and lead author of a paper on the new finding appearing this week in the journal Nano Letters, said the cell-sorting method was minimally invasive and highly innovative.
“It’s a new discovery,” Karnik said. “Nobody has ever done anything like this before.”
The method relies on the way cells sometimes interact with a surface (such as the wall of a blood vessel) by rolling along it. In the new device, a surface is coated with lines of a material that interacts with the cells, making it seem sticky to specific types of cells. The sticky lines are oriented diagonally to the flow of cell-containing fluid passing over the surface, so as certain kinds of cells respond to the coating they are nudged to one side, allowing them to be separated out.
The device will take 2 years to become usable as a lab research tool and 5 years before use in clinical tests.
Now that the basic principle has been harnessed in the lab, Karnik estimates it may take up to two years to develop into a standard device that could be used for laboratory research purposes. Because of the need for extensive testing, development of a device for clinical use could take about five years, he estimates.
The amount of oil available for import (in contrast to the larger amounts produced or exported) by OECD countries (basically the most developed countries) looks set to decline. We need substitutes. The obstacles in the way of many of those substitutes keep growing. Fear of carbon taxes has helped drive cancellation of many new proposed coal electric plants.
Utilities canceled or put on hold at least 45 coal plants in development last year, according to a new analysis by the US Department of Energy's National Energy Technology Laboratory in Pittsburgh. These moves – a sharp reversal from a year ago, when the industry had more than 150 such plants in development – signal the waning of a major US expansion into coal.
Part of the reluctance to build new coal electric plants stems from rising construction costs. Nuclear power faces the same problem. High prices for construction materials lower the profit potential of proposed plants. I wonder whether the high costs are transitory. If not, then we are going to pay more for electricity as demand rises.
Natural-gas and renewable power projects have leapt ahead of coal in the development pipeline, according to Global Energy Decisions, a Boulder, Colo., energy information supplier. Gas and renewables each show more than 70,000 megawatts under development compared with about 66,000 megawatts in the coal-power pipeline.
This year could diminish coal's future prospects even more. Wall Street investment banks last month said they will now evaluate the cost of carbon emissions before approving power plants, raising the bar much higher for new coal projects, analysts say.
The turn from coal to natural gas will raise electric prices. On the bright side, the higher prices will make renewables and nuclear power more competitive.
I do not expect the growing opposition to coal electric plants will necessarily cause the United States to reach a peak in coal usage in the next 5 or so years. As world oil production declines, another surge in demand for coal will come from a desperate move to convert coal into liquid fuel. So limits on the use of coal for electricity just leave more coal available to power cars with the product of coal-to-liquid plants.
State governments already are leading the movement to curb greenhouse gases, with 26 now requiring that a percentage of electricity come from renewable sources, such as wind and solar. Those include five of the top ten coal-producing states — Pennsylvania, Montana, Texas, Colorado and Illinois.
Nearly all of those 26 states also have signed on to three separate, regional cap-and-trade systems that will eventually require cuts in carbon dioxide emissions from power plants and other industrial sources. Under those systems, coal-fired power plants would be given or have to buy credits for the carbon dioxide they produce and pay for additional credits if they do not meet reduction targets.
This opposition to coal will increase the demand for nuclear power. But construction cost increases hit nuclear harder than coal because nuclear power plants are more capital intensive. Power plant construction costs have risen very dramatically since 2000.
The costs that drive the rates that power customers pay have been going up dramatically, according to the new Power Capital Costs Index (PCCI) developed by IHS Inc. (NYSE: IHS) and Cambridge Energy Research Associates (CERA) and introduced today at the CERAWeek 2008 conference in Houston. The index shows the cost of new power plant construction in North America increased 27 percent in 12 months and 19 percent in the most recent six months, reaching a level 130 percent higher than in 2000.
The new PCCI -- which tracks the costs of building coal, gas, wind and nuclear power plants indexed to year 2000 -- registered 231 index points in the third quarter period ending in October, indicating a power plant that cost $1 billion in 2000 would, on average, cost $2.31 billion today.
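The index reading can be converted into an implied average annual escalation rate. A small sketch: the 231 reading (with 2000 = 100) is from the CERA release, but the roughly seven-year span used for the rate calculation is my assumption about the period covered.

```python
# Sketch: turning the PCCI index reading into a cost multiple and an implied
# average annual escalation rate. Index values are from the CERA release;
# the 7-year span (2000 to the Q3 2007 reading) is an assumption.

index_2000, index_latest = 100, 231
multiple = index_latest / index_2000        # a $1B plant in 2000 -> $2.31B now
years = 7                                   # assumed span for the estimate
annual_rate = multiple ** (1 / years) - 1   # geometric average growth rate
print(f"cost multiple: {multiple:.2f}x")
print(f"implied average escalation: {annual_rate:.1%} per year")
```

Under that assumption, power plant construction costs have been escalating at roughly 13% per year on average, far above general inflation over the same period.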
“These costs are beginning to act as a drag on the power industry’s ability to expand to meet growing North American demand, and leading to delays and postponements in the building of new power plants,” said Candida Scott, lead researcher for the Capital Costs Analysis Forum for Power, a new project of CERA. “As the cost of construction rises, firms may become reluctant to invest in new plants, or delay and postpone these projects, in turn constraining the growth of capacity.”
“Although the PCCI has been on an upward trend since 2000, a surge that began in 2005 has pushed costs up 76 percent in the past three years,” according to Scott. “The latest increases have been driven by continued high activity levels globally, especially for nuclear plants, with continued tightness in the equipment and engineering markets, as well as historically high levels for raw materials.” Excluding nuclear plants, costs have risen 79 percent since 2000, she noted.
I hope big strides are made in lowering the costs of solar and wind power. Otherwise look for big price increases in electric bills in the coming years. Also, high construction costs for nuclear and coal electric plants reduce the amount of substitution possible for dwindling oil. Less energy substitution means lower living standards.
Donovan and Leslie Lock, assistant adjunct professor of biological chemistry and developmental and cell biology at UCI, previously identified proteins called growth factors that help keep cells alive. Growth factors are like switches that tell cells how to behave, for example to stay alive, divide or remain a stem cell. Without a signal to stay alive, the cells die.
The UCI scientists – Donovan, Lock and Kristi Hohenstein, a stem cell scientist in Donovan’s lab – used those growth factors in the current study to keep cells alive, then they used a technique called nucleofection to insert DNA into the cells. Nucleofection uses electrical pulses to punch tiny holes in the outer layer of a cell through which DNA can enter the cell.
With this technique, scientists can introduce into cells DNA that makes proteins that glow green under a special light. The green color allows them to track cell movement once the cells are transplanted into an animal model, making it easier for researchers to identify the cells during safety studies of potential stem cell therapies.
Scientists today primarily use chemicals to get DNA into cells, but that method inadvertently can kill the cells and is inefficient at transferring genetic information. For every one genetically altered cell generated using the chemical method, the new growth factor/nucleofection method produces between 10 and 100 successfully modified cells, UCI scientists estimate.
Gene therapy has been a great disappointment. Back in the mid 1990s gene therapy research seemed more promising. This gene therapy method is for cells that can be removed from the body. So it is useful for preparing stem cells (and probably non-embryonic stem cells) to accept DNA. But it is not a general solution for gene therapy.
This report is especially interesting because of the orders-of-magnitude improvement. To get from where we are to where we need to be with gene therapy and stem cell therapy we need many advances that bring orders-of-magnitude improvements in our ability to manipulate cells and genes.
A new solar thermal electric power installation in Boulder City, Nevada uses arrays of mirrors to concentrate sunlight to drive electric power generation. The cost of electricity for this plant is estimated at 15-20 cents per kilowatt-hour (kwh).
Many states, including California, are imposing mandates for renewable energy. All of that is reviving interest in solar thermal plants.
The power they produce is still relatively expensive. Industry experts say the plant here produces power at a cost per kilowatt- hour of 15 to 20 cents. With a little more experience and some economies of scale, that could fall to about 10 cents, according to a recent report by Emerging Energy Research, a consulting firm in Cambridge, Mass. Newly built coal-fired plants are expected to produce power at about 7 cents per kilowatt-hour or more if carbon is taxed.
That is at least double what cheaper sources of electricity cost in the United States. Can the costs really go down substantially with a bigger market?
While solar thermal still costs more than wind power, predictable daylight hours and the ability to store heat allow solar thermal to provide a more reliable power source.
According to the U.S. Department of Energy, wind power costs about 8 cents per kilowatt-hour, while solar thermal power costs 13 to 17 cents. But power from wind farms fluctuates with every gust and lull; solar thermal plants, on the other hand, capture solar energy as heat, which is much easier to store than electricity. Utilities can dispatch this stored solar energy when they need it--whether or not the sun happens to be shining.
Solar thermal doesn't have to provide electric power 24 hours per day to be useful. If its cost dropped by half then solar thermal would greatly reduce the use of coal and natural gas and allow limited fossil fuels to last longer and pollute less.
Acciona's plant, which began operation last year, produces 64 megawatts of electricity for the utility company Nevada Power, enough to light up 14,000 homes. The company's Spanish competitor Abengoa just announced a plan to build a 280-megawatt solar thermal plant outside Phoenix, which would be the largest such project in the world.
All you need is a lot of sun, a lot of space and a lot of mirrors — and NS1 has all of the above. 182,000 parabolic mirrors are spread over 400 acres of flat desert, creating a glistening sea of glass visible from miles away.
That's 35 homes worth of electric power per acre of land. Mind you, this is an area of the United States that gets above average amounts of sunlight. But this result suggests that use of solar thermal to power all homes would not use an inordinate amount of land - at least not in countries with lower population densities.
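The land-use figure follows directly from the numbers in the article (14,000 homes, 400 acres, 64 megawatts); this sketch just redoes the arithmetic and adds the implied capacity per home.

```python
# Back-of-envelope check on the Nevada Solar One land-use figures
# quoted in the article: 64 MW serving 14,000 homes on 400 acres.

homes, acres, megawatts = 14_000, 400, 64
homes_per_acre = homes / acres
kw_per_home = megawatts * 1000 / homes      # installed capacity per home
print(f"{homes_per_acre:.0f} homes per acre of mirrors")
print(f"{kw_per_home:.1f} kW of capacity per home served")
```

Note that this is nameplate capacity spread over homes served, not continuous delivered power, so it flatters solar thermal somewhat relative to an always-on plant of the same rating.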
Solar thermal looks cheaper than solar photovoltaics and the heat from solar thermal can be stored to stretch into evening hours. But solar photovoltaics might have better prospects for cost reductions and it lends itself more easily to decentralized use and smaller installations on homes and other buildings.
Researchers at MIT, UCSD, and the Karolinska Institutet in Stockholm, Sweden conducted two sets of studies on twins and found that part of human trust and trustworthiness seems due to genetic influences (PDF at PNAS site). (Thanks to one of the researchers, MIT's David Cesarini, for the heads up.)
To investigate whether humans are endowed with genetic variation that could help account for individual differences in trust game behavior, two separate teams of researchers independently conceived and executed a very similar experiment on twins [see supporting information (SI) for experimental procedures]. These teams became aware of each other for the first time after all data had been collected. One team recruited 658 subjects from the population-based Swedish Twin Registry, and the other team recruited 706 subjects from the 2006 and 2007 Twins Days Festivals in Twinsburg, OH. Both teams administered the trust game to (identical) monozygotic (MZ) and (nonidentical) dizygotic (DZ) same-sex twin pairs. The game was played with real monetary payoffs and between anonymous partners.
They looked at both how much trust people had in strangers and also how much people lived up to the expectations of those who trusted them.
The results of our mixed-effects Bayesian ACE analysis suggest that variation in how subjects play the trust game is partially accounted for by genetic differences (Tables 2 and 3 and Fig. 2). In the ACE model of trust, the heritability estimate is 20% (C.I. 3–38%) in the Swedish experiment and 10% (C.I. 4–21%) in the U.S. experiment. The ACE model of trust also demonstrates that environmental variation plays a role. In particular, unshared environmental variation is a much more significant source of phenotypic variation than genetic variation (e2 = 68% vs. c2 = 12% in Sweden and e2 = 82% vs. c2 = 8% in the U.S.; P < 0.0001 in both samples). In the ACE model of trustworthiness, heritability (h2) generates 18% (C.I. 8–30%) of the variance in the Swedish experiment and 17% (C.I. 5–32%) in the U.S. experiment. Once again, environmental differences play a role (e2 = 66% vs. c2 = 17% in Sweden and e2 = 71% vs. c2 = 12% in the U.S.; P < 0.0001 in both samples).
The researchers found more trust in their Swedish participants (which is what I'd expect from such a high-trust society). They also found a higher heritability of trust in their Swedish participants. See figure 2.
Heritability (h2) of trust is estimated to be (a) 20% in Sweden and (b) 10% in the U.S. Heritability of trustworthiness is estimated to be (a) 18% in Sweden and (b) 17% in the U.S.
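The ACE model behind these numbers splits phenotypic variance into additive genetic (h2), shared environment (c2), and unshared environment (e2) components that must sum to 100%. A quick sketch using the trust point estimates quoted above from the paper:

```python
# ACE variance decomposition for the trust measure, using the point
# estimates quoted from the study: h2 (additive genetic) + c2 (shared
# environment) + e2 (unshared environment) should account for all variance.

samples = {
    "Sweden": {"h2": 20, "c2": 12, "e2": 68},
    "US":     {"h2": 10, "c2": 8,  "e2": 82},
}
for country, v in samples.items():
    assert v["h2"] + v["c2"] + v["e2"] == 100  # components sum to 100%
    print(f"{country}: genetics {v['h2']}%, shared environment {v['c2']}%, "
          f"unshared environment {v['e2']}%")
```

Laying it out this way makes the paper's headline point visible: in both samples the unshared environment term dwarfs the genetic term, even though the genetic term is statistically distinguishable from zero.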
That difference in trust between US and Swedish participants might be due to genetic differences.
They suspect their results understate the extent of heritability for a couple of reasons, including the very plausible idea that people are more likely to mate with people who have similar levels of trust. So DZ twins and MZ twins are not as different in the genetic sequences that influence trust as they would be if mating were more random.
Moreover, we believe that the reported estimates indicate a lower bound on heritability and shared environment for two reasons. First, the estimate of the variance explained by the unshared environmental differences includes all idiosyncratic error, including measurement error. If our subjects had each played several rounds of the trust game with different individuals, our measures of trust and trustworthiness may have been more precise, which would have yielded higher estimates of heritability and common environmental influences. Second, one assumption of the ACE model is that there is no assortative mating with respect to the trait of interest. If preferences for cooperation are indeed heritable, and if people who cooperate tend to mate with other cooperative individuals, then this will increase the similarity in cooperative behavior in their children. This inflates the correlation of the genotypes of DZ siblings, making it harder to detect differences in MZ and DZ twins. As a result, the more assortativity, the more it biases downward the estimate of heritability.
The problem is that it is easier to compare twins than to compare any two random individuals to tease out genetic influences. We don't know how much two random individuals differ in their genetic sequences. Using twins studies researchers can detect the presence of genetic influences on behavior. That's important because that detection is the first step toward finding the actual genetic variations that cause people to behave differently from each other.
In the last couple of years the rate of discovery of the meaning of genetic differences has greatly sped up. The rapid decline in DNA testing costs and the development of more extensive maps of genetic differences (e.g. the International Haplotype Map project) have made this task much easier. Within at most 10 years, studies such as this one will routinely include genetic testing information on all test subjects.
Cheap DNA sequencing will probably increase the optimal size of trust studies and other behavioral studies of twins. Much larger groups of participants are needed to better control for all the genetic variations in order to identify which genetic variations contribute to behavioral differences.
Studies of this sort demonstrate that natural selection played a major role in shaping human behavior. Economic behavior is not simply the result of rational calculations by humans. The extent of willingness to engage in exchanges and enter into business deals is influenced by our evolutionary past. That notion doesn't sit well with people who imagine they have total free will and control over their decisions.
The results of this study understate the extent to which genetic sequences control our economic behavior. The study tried to measure differences in behavior. Many genetic sequences identical in all study participants didn't cause differences and yet did influence behavior of all the study participants.
Cesarini and Swedish researchers have previously published work in this area. See my previous post Large Genetic Component To How People Play Economic Game.
What I most want to know: once all the genetic alleles which influence trust and trustworthiness are identified and offspring genetic engineering becomes possible will people choose genes that make their offspring more or less trusting and more or less trustworthy?
People don't expect cheap drugs to help them much. No wonder the price of drugs has risen. People want more effective results.
DURHAM, N.C. -- A 10-cent pill doesn't kill pain as well as a $2.50 pill, even when they are identical placebos, according to a provocative study by Dan Ariely, a behavioral economist at Duke University.
"Physicians want to think it's the medicine and not their enthusiasm about a particular drug that makes a drug more therapeutically effective, but now we really have to worry about the nuances of interaction between patients and physicians," said Ariely, whose findings appear as a letter in the March 5 edition of the Journal of the American Medical Association.
Ariely and a team of collaborators at the Massachusetts Institute of Technology used a standard protocol for administering light electric shock to participants’ wrists to measure their subjective rating of pain. The 82 study subjects were tested before getting the placebo and after. Half the participants were given a brochure describing the pill as a newly-approved pain-killer which cost $2.50 per dose and half were given a brochure describing it as marked down to 10 cents, without saying why.
In the full-price group, 85 percent of subjects experienced a reduction in pain after taking the placebo. In the low-price group, 61 percent said the pain was less.
The conclusion here is obvious: Medical professionals need to go to greater lengths to deceive patients into believing that ineffective treatments really will work. At least for chronic pain this might help. I'm at least half serious.
University at Buffalo researchers now have shown in a randomized trial that by using a device that automatically restricted video-viewing time, parents reduced their children's video time by an average of 17.5 hours a week and lowered their body-mass index (BMI) significantly by the end of the 2-year study.
In contrast, children in the control group, whose video time was monitored, but not restricted, reduced their viewing time by only 5 hours per week.
By the end of the study, children with no time limits reduced their TV and computer use by an average of 5.2 hours per week, compared with an average reduction of 17.5 hours per week among children whose time was restricted. BMI as adjusted for age and sex and calorie intake also were lower among the group with restrictions on viewing than among the control group. No difference between the two groups was observed in the amount of physical activity.
University of Minnesota School of Public Health Project Eating Among Teens (EAT) researchers have found further evidence to support the importance of encouraging youth to eat breakfast regularly. Researchers examined the association between breakfast frequency and five-year body weight change in more than 2,200 adolescents, and the results indicate that daily breakfast eaters consumed a healthier diet and were more physically active than breakfast skippers during adolescence. Five years later, the daily breakfast eaters also tended to gain less weight and have lower body mass index levels – an indicator of obesity risk – compared with those who had skipped breakfast as adolescents.
Mark Pereira, Ph.D., corresponding author on the study, points out that this study extends the literature on the topic of breakfast habits and obesity risk because of the size and duration of the study. “The dose-response findings between breakfast frequency and obesity risk, even after taking into account physical activity and other dietary factors, suggests that eating breakfast may have important effects on overall diet and obesity risk, but experimental studies are needed to confirm these observations,” he added.
On the other hand, the teenagers who ate breakfast less frequently were the ones who were most likely to smoke, drink alcohol, and use dieting and other ways to control their weight.
If you feel inspired to start eating a grain-based breakfast then make sure you eat a whole grain.
Over the 12-week study period, all participants received the same dietary advice on weight loss, and encouragement to participate in moderate physical activity. Researchers also asked participants to consume five daily servings of fruits and vegetables, three servings of low-fat dairy products, and two servings of lean meat, fish or poultry.
The study's findings are published in the January 2008 issue of the American Journal of Clinical Nutrition.
Results from the study showed that waist circumference and body weight decreased significantly in both groups – between 8-11 pounds on average – but weight loss in the abdominal region was significantly greater in the whole grain group.
According to Katcher, the whole grain group experienced a 38 percent decrease in C-reactive protein levels in their blood. A high level of this inflammatory marker is thought to place patients at a higher risk for diabetes, hypertension and cardiovascular disease.
"Typically you would expect weight loss to be associated with a decrease in C-reactive protein, but the refined grain group showed no decrease in this marker of inflammation even though they lost weight," said Kris-Etherton.
The Penn State researcher suggests that the finding is because the consumption of refined grains has been linked to increased levels of the protein. So even though people in the refined grain group lost weight, the fact that they ate so many refined grains probably negated the beneficial effect of weight loss on C-reactive protein levels.
Older men with lower free testosterone levels in their blood appear to have higher prevalence of depression, according to a report in the March issue of Archives of General Psychiatry, one of the JAMA/Archives journals.
Depression affects between 2 percent and 5 percent of the population at any given time, according to background information in the article. Women are more likely to be depressed than men until age 65, when sex differences almost disappear. Several studies have suggested that sex hormones might be responsible for this phenomenon.
Osvaldo P. Almeida, M.D., Ph.D., F.R.A.N.Z.C.P., of the University of Western Australia, Perth, and colleagues studied 3,987 men age 71 to 89 years. Between 2001 and 2004, the men completed a questionnaire reporting information about demographics and health history. They underwent testing for depression and cognitive (thinking, learning and memory) difficulties, and information about physical health conditions was obtained from a short survey and an Australian health database. The researchers collected blood samples from the participants and recorded levels of total testosterone and free testosterone, which is not bound to proteins.
A total of 203 of the participants (5.1 percent) met criteria for depression; these men had significantly lower total and free testosterone levels than men who were not depressed. After controlling for other factors—such as education level, body mass index and cognitive scores—men in the lowest quintile (20 percent) of free testosterone concentration had three times the odds of having depression compared to men in the highest quintile.
I really want testosterone replacement to yield a net benefit in physical and mental health because I really want ways to slow and delay the various deleterious effects of aging. We need prospective studies of its effects to know for sure.
I'm far from convinced. But here's yet another round in the ideal diet debate.
Low-fat diets are more effective in preserving and promoting a healthy cardiovascular system than low-carbohydrate, Atkins-like diets, according to a new study by researchers at the Medical College of Wisconsin in Milwaukee.
The study, published in the February edition of the scientific journal Hypertension, was led by David D. Gutterman, M.D., Northwestern Mutual Professor of Cardiology, professor of medicine and physiology, and senior associate dean of research at the Medical College. Shane Phillips, M.D., a former Cardiology faculty member at the Medical College, and now assistant professor in the department of physical therapy at the University of Illinois - Chicago, was the lead author.
Mind you, this is not the first study to address the question of fats versus carbohydrates for heart health and it likely won't be the last. The issue is not settled.
These scientists are quite sure that higher fat content does you more harm than higher carbohydrate content. My reaction: it depends.
“Low-carbohydrate diets are significantly higher in total grams of fat, protein, dietary cholesterol and saturated fats than are low-fat diets. While a low-carbohydrate diet may result in weight loss and improvement in blood pressure, similar to a low-fat diet, the higher fat content is ultimately more detrimental to heart health than is the low-fat diet suggested by the American Heart Association,” points out Dr. Phillips.
“The higher fat content of a low-carbohydrate diet may put dieters at an increased risk of atherosclerosis (hardening of the arteries) because low-carbohydrate diets often reduce protection of the endothelium, the thin layer of cells that line the blood vessels of the circulatory system. The reduced production from the endothelium of nitric oxide, a specific chemical, puts the vessel at higher risk of abnormal thickening, greater clotting potential, and cholesterol deposition, all part of the atherosclerosis process,” says Dr. Gutterman.
What does the carbs versus fats debate depend on? Which fats. Which carbohydrates. Also, what particular genes do you have?
Take carbohydrates, for example. They do not all break down into simple sugars in the intestines at the same speed. The ones that break down slowly (low glycemic index) enter the bloodstream more slowly than the ones that break down quickly (high glycemic index). A rapid rise in blood sugar has harmful effects. Plus, the simple sugars fructose and glucose are not metabolized the same way; glucose transport is regulated by insulin, for example. So not all carbohydrates are equal.
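One standard way to combine a food's glycemic index with how much carbohydrate a serving actually delivers is glycemic load: GL = (GI x grams of carbohydrate per serving) / 100. Here's a small sketch; the GI values and serving sizes below are approximate published figures used only for illustration.

```python
def glycemic_load(gi, carb_grams):
    """Glycemic load = (glycemic index * grams of carbohydrate per serving) / 100."""
    return gi * carb_grams / 100

# Approximate GI (glucose = 100 scale) and example carb grams per serving.
foods = {
    "white bread (2 slices, ~28 g carbs)": (75, 28),
    "lentils (1 cup, ~40 g carbs)": (32, 40),
    "table sugar (1 tbsp, ~12 g carbs)": (65, 12),
}
for name, (gi, carbs) in foods.items():
    print(f"{name}: GL = {glycemic_load(gi, carbs):.1f}")
```

The point of the formula: lentils carry more carbohydrate per serving than white bread yet produce a lower glycemic load, because their carbs enter the bloodstream slowly.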
What I really want to see: A comparison where one group eats unrefined very low glycemic index carbs while another group eats refined high glycemic index carbs. Then three other groups should eat polyunsaturated, monounsaturated, and saturated fats. But even that split isn't fine enough. Still, such a comparison would be a good start.
These researchers see fats as worse for the cardiovascular system because the low carb diet participants showed less artery dilation than the low fat diet participants.
Over a six-week period, the researchers found reduced flow-mediated dilation in the arm artery in participants who were on the low-carbohydrate diet. Reduced flow-mediated dilation, as measured in this study, is an early indicator of cardiovascular disease. On the other hand, flow-mediated dilation improved significantly in participants on the low-fat diet, suggesting a healthier artery that is less prone to developing atherosclerosis.
“We observed a reduction in brachial artery flow-mediated dilation after six weeks of weight loss on a low-carbohydrate, Atkins’-style diet,” Dr. Gutterman says.
A diet high in nuts would be high in fats. But it would be higher in unsaturated fats and in arginine, which promotes nitric oxide production and capillary dilation. So I don't think the question of ideal diets is as easy as these researchers make it sound.
For the sake of argument, imagine that those who ate the low-carb diet kept their weight down for longer periods of time. That benefit might outweigh other costs and benefits. We need to know more about the longer-term effects of these diets. Does the low-carb diet do a better job of keeping off weight?
The researchers point out that the low-carb diet had less folic acid. But that's easily remedied by taking a pill or by eating green leafy vegetables (which are high in fiber too and have other good stuff in them).
Low-carbohydrate diets were also found to have significantly less daily folic acid than low-fat diets. Folic acid is thought to be helpful in reducing the likelihood of heart disease. This protective effect results from the antioxidant property of folic acid and its ability to lower levels of homocysteine, a naturally occurring amino acid that can be dangerous at elevated levels.
Low glycemic index beans also are good sources of folic acid. Check out this glycemic index database and try to shift your diet toward lower glycemic index foods. Those foods tend to be healthier for other reasons as well.
Picture a fully robotic laboratory where huge racks of microfluidic chips continuously conduct simultaneous experiments on millions of cells. That's where I see cellular biology headed. Humans will just program experiments and analyze experimental results. A Johns Hopkins team has developed a device that I think is a step on the road toward totally automated labs. A chip can control the environment of a single nerve cell and feed it controlled amounts of chemicals to see how the cell responds to growth signals.
Johns Hopkins researchers from the Whiting School of Engineering and the School of Medicine have devised a micro-scale tool - a lab on a chip - designed to mimic the chemical complexities of the brain. The system should help scientists better understand how nerve cells in the brain work together to form the nervous system.
A report on the work appears as the cover story in the February 2008 issue of the British journal Lab on a Chip.
“The chip we’ve developed will make experiments on nerve cells more simple to conduct and to control,” says Andre Levchenko, Ph.D., associate professor of biomedical engineering at the Johns Hopkins Whiting School of Engineering and faculty affiliate of the Institute for NanoBioTechnology.
Nerve cells decide which direction to grow by sensing both the chemical cues flowing through their environment as well as those attached to the surfaces that surround them. The chip, which is made of a plastic-like substance and covered with a glass lid, features a system of channels and wells that allow researchers to control the flow of specific chemical cocktails around single nerve cells.
“It is difficult to establish ideal experimental conditions to study how neurons react to growth signals because so much is happening at once that sorting out nerve cell connections is hard, but the chip, designed by experts in both brain chemistry and engineering, offers a sophisticated way to sort things out,” says Guo-li Ming, M.D., Ph.D., associate professor of neurology at the Johns Hopkins School of Medicine and Institute for Cell Engineering.
In experiments with their chip, the researchers put single nerve cells, or neurons, onto the chip, then introduced specific growth signals (in the form of chemicals). They found that the growing neurons turned and grew toward higher concentrations of certain chemical cues attached to the chip’s surfaces, as well as to signaling molecules free-flowing in solution.
Chips such as these lend themselves to cheap mass manufacture. The chips will improve in successive generations just as computer chips do. They will become better integrated with each other and robot devices will install them and tend to them. Eventually we'll see fully automated lights-out labs analogous to the lights-out manufacturing plants which companies seek to achieve in order to minimize manufacturing labor costs.