ST. PAUL, Minn -- A gene variation that helps people live into their 90s and beyond also protects their memories and ability to think and learn new information, according to a study published in the December 26, 2006, issue of Neurology, the scientific journal of the American Academy of Neurology.
The gene variant alters the cholesterol particles in the blood, making them bigger than normal. Researchers believe that smaller particles can more easily lodge themselves in blood vessel linings, leading to the fatty buildup that can cause heart attacks and strokes.
The study examined 158 people of Ashkenazi, or Eastern European, Jewish descent, who were 95 years old or older. Those who had the gene variant were twice as likely to have good brain function compared to those who did not have the gene variant. The researchers also validated these findings in a group of 124 Ashkenazi Jews who were between age 75 and 85 and found similar results.
"It's possible that this gene variant also protects against the development of Alzheimer's disease," said study author Nir Barzilai, MD, the director of the Institute for Aging Research at Albert Einstein College of Medicine in Bronx, NY.
Work is underway to develop a drug that emulates the effect of this life-extending version of the CETP gene. But I'd much rather get a gene therapy that'd enhance my liver cells to express the genetic variant for CETP that slows aging.
I've long thought the liver a key target for slowing whole body aging because it regulates blood lipid, lipoprotein, and cholesterol levels. This CETP gene variant (called CETP VV) is likely just one of many genetic variations waiting to be found that are expressed in the liver and can raise life expectancy. Another genetic variation with life-extending capabilities, Apolipoprotein A-I Milano high density lipoprotein (ApoA-I Milano HDL for short), clears out artery plaque and would also be a very beneficial gene with which to enhance one's liver.
Since livers age and become cancerous, what we need are genetically engineered youthful replacement livers. The maximum-benefit way to extend one's life through liver genetic engineering would be to grow a youthful replacement liver that has beneficial genes added. We cannot do this yet. But in 10 or 20 years we should be able to take some existing liver cells, select out the cells with the least accumulated DNA aging damage, do gene therapy on those cells in culture, and then grow the cells up into a replacement liver. Next, swap out your aged liver for the genetically enhanced younger liver. Your blood lipids will get changed by the new liver to slow the aging of your brain and body.
Led by Dr. Nir Barzilai, director of the Institute for Aging Research at Einstein, the researchers examined 158 people of Ashkenazi (Eastern European) Jewish descent who were 95 or older. Compared with elderly subjects lacking the gene variant, those who possessed it were twice as likely to have good brain function based on a standard test of cognitive function.
Later the researchers validated their findings independently in a younger group of 124 Ashkenazi Jews between the ages of 75 and 85 who were enrolled in the Einstein Aging Study led by Dr. Richard Lipton. Within this group, those who did not develop dementia at follow up were five times more likely to have the favorable genotype than those who developed dementia.
Dr. Barzilai and his colleagues had previously shown that this gene variant helps people live exceptionally long lives and apparently can be passed from one generation to the next. Known as CETP VV, the gene variant alters the cholesteryl ester transfer protein (CETP). This protein affects the size of “good” HDL and “bad” LDL cholesterol, which are packaged into lipoprotein particles. Centenarians were three times likelier to possess CETP VV compared with a control group representative of the general population and also had significantly larger HDL and LDL lipoproteins than people in the control group.
The genetic variation causes people to produce less of a protein called cholesterol ester transfer protein (CETP). Barzilai says that CETP has two functions: it helps move cholesterol from the arteries to the liver, and it helps control the size of cholesterol particles circulating in the blood. People with the protective gene variant have higher levels of "good" HDL cholesterol and also produce bigger cholesterol particles, which scientists believe may not stick to blood-vessel walls as easily as small particles do.
CETP is on one of the 3 pathways that transfer cholesterol from HDL particles in the blood into the liver. So CETP is involved in regulating the amount of cholesterol in the blood.
Plasma high density lipoprotein (HDL) levels show an inverse relationship to atherogenesis, in part reflecting the role of HDL in mediating reverse cholesterol transport. The transfer of HDL cholesterol to the liver involves 3 catabolic pathways: the indirect, cholesteryl ester transfer protein (CETP)–mediated pathway, the selective uptake (scavenger receptor BI) pathway, and a particulate HDL uptake pathway. The functions of the lipid transfer proteins (CETP and phospholipid transfer protein) in HDL metabolism have been elucidated by genetic approaches in humans and mice. Human CETP deficiency is associated with increased HDL levels but appears to increase coronary artery disease risk.
Each genetic tweak causes many effects. CETP modification might or might not be the most effective way to improve blood lipids and slow brain and body aging. We might eventually find that CETP VV has side effects that are undesirable and that ApoA-I Milano will accomplish the same beneficial effects without some undesirable side effects. Or we might find CETP VV is better than ApoA-I Milano, or that some other genetic variants not yet discovered are better than either of them. Or maybe ApoA-I Milano and CETP VV work together synergistically to slow aging even more.
In three lines of transgenic mice the tissues expressing the human CETP mRNA were similar to those in humans (liver, spleen, small intestine, kidney, and adipose tissue); in two lines expression was more restricted. There was a marked (4-10-fold) induction of liver CETP mRNA in response to a high fat, high cholesterol diet. The increase in hepatic CETP mRNA was accompanied by a fivefold increase in transcription rate of the CETP transgene, and a 2.5-fold increase in plasma CETP mass and activity.
Brain rejuvenation is going to be the hardest rejuvenation task to accomplish. Anything that slows down brain aging is doubly beneficial. First off, your brain will function at a higher level longer during your working career and into retirement. That makes for better success at work and a happier life all around. Also, since the brain is going to be the hardest organ to rejuvenate we need to preserve it longer while we wait for effective brain rejuvenation biotechnologies.
More generally, while we desperately need therapies that do repair and replacement of aged parts we should not ignore the benefits and potential of slowing the aging process. Liver genetic engineering as an approach to slow the aging process is appealing because it looks much easier to do than full body gene therapy. If liver genetic engineering could buy us one or two decades of additional life that might be just the time we need to live long enough to still be alive when full body rejuvenation becomes possible.
Writing for the New York Times, Matthew Wald examines the economics of wind power.
He said that in one of the states the company serves, Colorado, planners calculate that if wind machines reach 20 percent of total generating capacity, the cost of standby generators will reach $8 a megawatt-hour of wind. That is on top of a generating cost of $50 or $60 a megawatt-hour, after including a federal tax credit of $18 a megawatt-hour.
Note that a tax credit on one party is a tax on another party. So that wind tax credit is not free and causes market distortions. Though other energy sources have their own external costs that cause market distortions.
By contrast, electricity from a new coal plant currently costs in the range of $33 to $41 a megawatt-hour, according to experts. That price, however, would rise if the carbon dioxide produced in burning coal were taxed, a distinct possibility over the life of a new coal plant. (A megawatt-hour is the amount of power that a large hospital or a Super Wal-Mart would use in an hour.)
A few things to note here. Take the $18 per megawatt-hour US government tax credit away from wind and it costs from $68 to $78 per megawatt-hour, plus another $8 per megawatt-hour for standby capacity from other electric power sources. That puts it at double or more the cost of coal electric.
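The arithmetic above can be sketched directly from the article's figures (the $18/MWh credit, the $8/MWh standby cost, and the coal range are all as reported; everything else is just addition):

```python
# Wind vs. coal cost comparison using the figures quoted above (all $/MWh).
WIND_COST_WITH_CREDIT = (50, 60)  # reported generating cost after the tax credit
TAX_CREDIT = 18                   # federal production tax credit per MWh
STANDBY = 8                       # standby generator cost at 20% wind penetration
COAL_COST = (33, 41)              # reported range for a new coal plant

def wind_full_cost(reported_low, reported_high):
    """Unsubsidized wind cost including standby capacity, $/MWh."""
    low = reported_low + TAX_CREDIT + STANDBY
    high = reported_high + TAX_CREDIT + STANDBY
    return low, high

low, high = wind_full_cost(*WIND_COST_WITH_CREDIT)
print(low, high)                    # 76 86
print(round(low / COAL_COST[1], 2))   # ~1.85x the most expensive coal
print(round(high / COAL_COST[0], 2))  # ~2.61x the cheapest coal
```

So even against the most expensive coal, unsubsidized wind plus standby comes out nearly double, which is the "double or more" claim above.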
But the economics above understate the problem with wind. Suppose we shift to dynamic pricing of electricity (which we should btw) so that the price of electricity varies as a function of demand and supply. Electricity would cost more at 2 PM on a hot summer day than at 2 AM on a cool fall day. Well, wind tends to blow when electric demand is lowest!
In many places, wind tends to blow best on winter nights, when demand is low. When it is available, power from wind always displaces the most expensive power plant in use at that moment. If wind blew in summer, it would displace expensive natural gas. But in periods of low demand, it is displacing cheap coal.
The wind power industry wants a far more sophisticated electric power distribution grid so that wind electric can get carried from wherever the wind is blowing to wherever it is not blowing. Some industry analysts are skeptical about the feasibility of such an undertaking and whether it would even work since we could have weak wind days over a very large area. I wonder what it would cost.
Curiously, wider usage of wind power would favor coal over nuclear. Why? Coal has a larger variable cost than nuclear because coal as fuel is a larger fraction of total coal electric cost than uranium or plutonium is as a cost for nuclear electricity. In a nutshell, nuclear plants have the highest capital cost but the lowest fuel cost. Next comes coal and then finally natural gas. Natural gas electric plants cost the least to build but have the highest fuel cost. So they are used for peak power. Wind is so unreliable that natural gas plants probably would cost too much as back-ups to wind and therefore coal would be the best back-up for wind.
Nuclear, by contrast, works best as baseload power. Nuclear plants cost so much to build and save so little in operational cost when idled that once a nuclear plant gets built it makes sense to run it continuously, 24x7.
Photovoltaics (if only they didn't cost so much) have far more favorable supply characteristics as compared to wind. They produce the most electricity during summer days when demand is highest. Though they are far from perfect. First off, in the northern hemisphere (and a similar problem occurs in the southern hemisphere, just 6 months out of phase) the hottest days are in July and August and yet the longest day of the year (when the most sun shines to generate the most electricity) is in late June. Also, electric power demand does not peak at high noon. As the day heats up people turn on more air conditioners into the afternoon as the sun is past its peak and into the evening when people go home and turn on air conditioners, TVs, computers, and assorted home appliances. Solar's output peak does not match the market's demand peak for electricity.
Wind (and solar and nuclear) economics would improve if a carbon tax were levied on coal and natural gas burned to generate electricity. But coal would still retain a large cost advantage even with a hefty carbon tax.
The economics of wind would change radically if the carbon dioxide emitted by coal were assigned a cash value, but in the United States it has none. Coal plants produce about a ton of carbon dioxide each megawatt hour, on average, so a price of $10 a ton would have a major impact on utility economics.
I've read estimates of the cost of full carbon dioxide sequestration of about 2 cents per kWh, or $20 per megawatt-hour. That'd still leave coal cheaper than wind. Though full carbon sequestration would probably make nuclear cheaper than coal (see Phil Sargent's links in the comments).
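The unit conversions behind these figures are simple, assuming the roughly one ton of CO2 per megawatt-hour of coal cited above:

```python
# Carbon cost arithmetic from the figures above.
# Assumed: ~1 ton of CO2 emitted per MWh of coal electricity, as quoted.
TONS_CO2_PER_MWH_COAL = 1.0

def sequestration_cost_per_mwh(cents_per_kwh):
    """Convert a cents-per-kWh sequestration estimate to $/MWh (1 MWh = 1000 kWh)."""
    return cents_per_kwh * 1000 / 100

def carbon_tax_per_mwh(dollars_per_ton):
    """Carbon tax cost per MWh of coal electricity at the assumed emission rate."""
    return dollars_per_ton * TONS_CO2_PER_MWH_COAL

print(sequestration_cost_per_mwh(2.0))  # 20.0 $/MWh for 2 cents/kWh sequestration
print(carbon_tax_per_mwh(10))           # 10.0 $/MWh at a $10/ton carbon price
```

Adding $20/MWh to coal's $33 to $41 range still leaves it at $53 to $61, below unsubsidized wind plus standby.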
When comparing wind and coal, the wind tax credit is economically similar to forcing coal-burning utilities to do full carbon sequestration, in the sense that the credit narrows the gap between wind and coal by about the same amount as the cost of sequestration. However, the wind tax credit does not cause a big shift in demand away from coal because wind costs too much. An elimination of the wind tax credit combined with a requirement for full carbon sequestration would cause a partial shift away from coal toward nuclear and would eliminate the economic argument in favor of wind.
The wind tax credit currently causes a small reduction in demand for nuclear power. How? To the extent that wind farms get installed the effect is to increase the demand for back-up power sources which are cheaper when not used all the time. The back-up power is needed for when the wind does not blow. Since coal plants cost less than nuclear plants they are cheaper as back-up power for wind.
Note that the relative cost of nuclear, coal, wind, natural gas, and other electric power sources varies within the United States and even more globally. For example, in the Middle East natural gas is far cheaper than in the United States and coal is far more expensive. Similarly, the amount and reliability of wind varies. In some regions (e.g. the southeastern part of the United States) winds are pretty weak. Whereas in other regions (e.g. the Aleutian Islands of Alaska) winds are very strong.
Note as well that the relative costs of electric power sources will change with technological advances. Photovoltaics strike me as having the greatest potential for big cost declines. But, being the most expensive, photovoltaics also most need big cost declines. Nuclear, wind, and cleaner coal costs will decline as well. But how much and how soon?
Two more wild cards: dynamic pricing and better electric energy storage technologies. Big declines in battery costs would greatly help wind and photovoltaics. Electronic switches could charge batteries when electricity is cheapest.
One of the makers of these new gene chips, San Diego-based Illumina, is now looking ahead to the next phase of medical genetics. The company has recently acquired new diagnostic and sequencing technologies, which it plans to use to better identify medically relevant genes. Ultimately, the goal is to diagnose risk of specific diseases and identify the best treatment options for certain patients.
The Illumina chip contains 650,000 short sequences of DNA that can identify SNPs (single nucleotide polymorphisms), carefully selected from a map of human genetic variation known as the HapMap (see "A New Map for Health"). Each SNP represents a spot of the genome that frequently varies among individuals and acts as a signpost for that genomic region. Scientists use the chip to search for genetic variants that are more common in a group of people with the disease of interest.
This chip illustrates why the application of computer industry semiconductor process technologies to biology is going to so greatly lower the cost of doing biological research and biomedical testing and treatment. The computer industry has developed technology to produce chips in bulk at low and declining cost.
Initially these chips will be used for research. Their lower costs will speed up the search for the meaning of genetic variations. Same sized research budgets will produce more genetic testing results each year as gene chip prices fall. Already the ability to look at 650,000 genetic variations in a single person with a single chip is going to cause a huge increase in the rate of genetic testing.
As these chips help scientists discover the significance of an increasing number of genetic variations the result will be discovery of variations whose existence becomes useful for each individual to know. For example, prospective parents wanting a particular eye or hair color or facial shape will be able to use gene chips to do pre-implantation genetic diagnosis (PGD or PIGD) on embryos fertilized in a lab (in vitro fertilization or IVF).
As soon as SNPs (single nucleotide polymorphisms or single letter differences in genetic code) are discovered for facial features, hair texture, hair and eye color, height, musculature, intelligence, and other attributes the use of gene chips to test for these attributes in embryos will explode. We could be 5 years away from the start of extensive genetic testing of embryos.
Stanford researchers have integrated an array of tiny magnetic sensors into a silicon chip containing circuitry that reads the sensor data. The magnetic biochip could offer an alternative to existing bioanalysis tools, which are costly and bulky.
"The magnetic chip and its reader can be made portable, into a system the size of a shoebox," says Shan Wang, professor of materials science and electrical engineering at Stanford University, in Palo Alto, CA. Its small size, he says, could make it useful at airports for detecting toxins, such as anthrax, and at crime scenes for DNA analysis.
Reductions in the size of genetic testing equipment also reduce the ability of governments to regulate the use of genetic testing. Want to ban genetic testing of employees and prospective employees? Kinda hard to do if a device the size of a shoe box can let you test dandruff flakes or hair droppings from a job interviewee. Easily find out whether the guy or gal has genetic variations associated with greater honesty or a greater proclivity to steal. Throw in the identification of some genetic variations that affect level of work motivation and lots of smaller employers especially will do secret genetic testing of job prospects.
If some governments try to ban genetic testing of embryos expect to see other countries keep embryo genetic testing unregulated. Then watch how a lot more babies get conceived on "vacations" to Caribbean islands or other countries that see big profits in medical tourism. Then for that fraction of the human race which embraces gene testing of embryos the rate of evolution will skyrocket. Anyone who doesn't jump on this will find their offspring left behind in the job market.
WASHINGTON, Dec. 24 — The nuclear power industry has asked the government to specify how new nuclear plants should minimize damage from airplane attacks, weeks after the Nuclear Regulatory Commission decided not to institute requirements on building new plants that are tougher than the rules that prevailed decades ago when the old ones were built.
Airplanes have gotten bigger. The new Airbus A380 has 50% more floor space than a 747-400 and can have a take-off weight of over 600 short tons (2,000 lb per ton). That is approximately four times the takeoff weight of a 707 (which varies considerably depending on the dash model).
The nuclear industry wants the government to spell out any new requirements for nuclear power plants before the industry tries to build new plants.
Mr. Peterson said the industry wanted the regulations to be issued soon, because companies had expressed interest in building 30 new reactors. The actual number built is likely to be much smaller, experts say, but there is a widespread expectation of new orders, probably in 2007.
That small number of reactors means the continued ascent of coal. The problem is that coal is cheaper in many locations as long as carbon sequestration is not required (see the comments of Phil Sargent at the bottom of the comments there). Tougher emissions regulations work in favor of nuclear power. Tougher safety regulations raise the cost of nuclear power. The competition between nuclear and coal is therefore driven by regulatory environments. Nuclear needs big technologically driven cost improvements so it can win a much larger portion of the market.
What would happen with a next gen nuclear reactor if an A380 crashed into it? How hard would it be to aim such a large jet to strike a nuclear reactor? How much iron and concrete or other materials would be needed to protect a reactor from a direct strike?
There's a smarter way to deal with the problem of airplane hijacking: program the auto-pilots to prevent airplanes from getting near a nuclear reactor. If an airplane started heading toward a nuclear reactor at a low enough altitude the auto-pilot could activate and change its course to route it around the reactor. The system could be designed to cut in only below some threshold altitude so that airplanes passing over at normal cruising altitudes would not suffer any inconvenience.
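The core check such a system would make can be sketched in a few lines. This is purely illustrative: the exclusion radius and altitude threshold are made-up numbers, and a real avionics implementation would involve certified flight-control logic far beyond this.

```python
# Hypothetical sketch of the proposed low-altitude exclusion-zone check.
# REACTOR_EXCLUSION_RADIUS_KM and ALTITUDE_THRESHOLD_FT are illustrative
# values, not from any real system or regulation.
REACTOR_EXCLUSION_RADIUS_KM = 15.0
ALTITUDE_THRESHOLD_FT = 10000

def should_divert(distance_to_reactor_km, altitude_ft, closing_on_reactor):
    """Return True if the autopilot should take over and route around the site.

    Aircraft at or above normal cruising altitudes are left alone; only a
    low-flying plane closing on the reactor triggers the override.
    """
    if altitude_ft >= ALTITUDE_THRESHOLD_FT:
        return False
    return closing_on_reactor and distance_to_reactor_km < REACTOR_EXCLUSION_RADIUS_KM

print(should_divert(10.0, 2000, True))   # True: low and closing on the reactor
print(should_divert(10.0, 35000, True))  # False: normal cruising altitude
```

The altitude gate is what keeps ordinary overflights unaffected, matching the design goal described above.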
Another option: build reactor vessels underground.
Yet another option: Develop an auto-pilot system that can be remotely activated to take over an airplane if the airplane is hijacked. The auto-pilot could land the plane on a runway and then shut down the engines. That seems like the best option because it would save lives of passengers. It would also protect skyscrapers and natural gas unloading terminals that are tempting targets for suicidal jihadists.
A PLoS Medicine article reviews the sources of cancer research funding in the European Union and the United States and finds Europe greatly lagging in per capita spending on cancer research from funding sources which are not for-profit businesses. The US federal government's National Cancer Institute alone (not the only source of cancer research funding at the federal level) spends more than two and a half times the total spent by all non-commercial sources of cancer research funding in Europe.
In our survey, we identified 139 noncommercial funding organisations that collectively spent €1.43 billion on cancer research for the year spanning 2002–2003. Absolute spending in 2002/2003 on cancer research varied widely across the EU, ranging from €388 million in the United Kingdom to €0 in Malta, with three countries spending greater than €100 million, nine greater than €10 million, and ten less than €1 million. Of all the countries in the survey, only Bulgaria failed to report their spending, and only Malta spent nothing on cancer research in 2002/2003 (Figure 1).
In euros, the €3.60 billion spent by the US National Cancer Institute is more than two and a half times the €1.43 billion spent by all European noncommercial sources.
The EU spends a greater proportion of its cancer research funding on cancer biology than does the US (41% compared with 25%). The US spends a greater proportion of its cancer research funding on research into prevention and treatment than does the EU (prevention, 9% in the US compared with 4% in the EU; treatment, 25% in the US compared with 20% in the EU) (Figure 2). Data published by the US National Cancer Institute has been fully validated, whereas the EU uses self-reported, top-level CSO categories for 62% (n = 74) of the organisations from which financial data was obtained. The size of the two pie charts in Figure 2 is representative of the sizes of the annual budgets: in 2002/2003, the US National Cancer Institute spent €3.60 billion, compared with the EU spending of €1.43 billion.
But wait, the gap is even bigger.
The average per capita spent across the entire EU (including European Commission and Trans-European Organisation spending) was €2.56 (US$3.30), while the per capita spent in the US was €17.63 (US$22.76)—seven times greater. This gap is reduced to 5-fold if the US spending is compared with the spending of the 15 EU countries only (Figure 5). Average cancer research spending as a percentage of GDP across the EU was 0.0152%, and the median was 0.0056%. As a percentage of GDP, the US spent four times as much as the average across the entire European survey; this difference remained the same when the US percentage was compared with the percentage spending by the 15 EU member states.
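The headline ratio in that excerpt can be checked directly from the survey's per capita figures:

```python
# Per capita noncommercial cancer research spending in euros,
# from the survey figures quoted above.
EU_PER_CAPITA = 2.56
US_PER_CAPITA = 17.63

ratio = US_PER_CAPITA / EU_PER_CAPITA
print(round(ratio, 1))  # 6.9: roughly the "seven times greater" cited
```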
I would be happy to see Europe try to seriously compete with the United States in biomedical science funding. I would be happy to see Europe act more pro-life and anti-death and less lame and pathetic. We would all benefit if the European countries tried as hard as America to conquer cancer and a large variety of other old age killers. Do I even need to mention that the general advances in biomedical science and technology that come from research on diseases of old age will inevitably produce biotechnologies we need for rejuvenation therapies?
In an entry in his "Scream this from the rooftops" series, Alex Tabarrok points to a research paper with evidence that European drug price controls are also causing Europeans to produce far fewer commercially funded medical advances.
EU countries closely regulate pharmaceutical prices whereas the U.S. does not. This paper shows how price constraints affect the profitability, stock returns, and R&D spending of EU and U.S. firms. Compared to EU firms, U.S. firms are more profitable, earn higher stock returns, and spend more on research and development (R&D). Some differences have increased over time. In 1986, EU pharmaceutical R&D exceeded U.S. R&D by about 24 percent, but by 2004, EU R&D trailed U.S. R&D by about 15 percent. During these 19 years, U.S. R&D spending grew at a real annual compound rate of 8.8 percent, while EU R&D spending grew at a real 5.4 percent rate. Results show that EU consumers enjoyed much lower pharmaceutical price inflation, however, at a cost of 46 fewer new medicines introduced by EU firms and 1680 fewer EU research jobs.
Europeans, like most of the rest of the world, are freeloading off of US medical research funded by our federal government, states, private foundations, and private sector companies. We would all benefit if they stepped up to the plate and spent on medical research as much as Americans do.
Boston, MA - In recent years, health professionals and the general public alike have been acutely aware of the potential ravages that could result from a flu pandemic. Although many people might still recall the pandemics of 1968 and 1957, it is the infamous 1918-1920 pandemic--and the possibility of a recurrence on that scale--that causes the most trepidation.
Strangely, researchers still don't know exactly how many people died from this particular strain of the flu virus in that pandemic, and they know even less about how mortality rates varied in different parts of the world. In fact, most historic information is based on eyewitness accounts and not on statistical analysis. Now, a team of researchers from the Harvard School of Public Health (HSPH) and the University of Queensland in Australia has re-analyzed data from 27 countries around the world to estimate both the global mortality patterns of the 1918 pandemic and, based on 2004 population data, how a similar pandemic would affect the world today.
These findings, to be published in the December 23, 2006 issue of The Lancet, show that mortality rates for the 1918-1920 pandemic were disproportionately high in communities where per capita income was lowest. If the same pandemic were to occur today, approximately 96 percent of deaths would occur in developing countries.
In the event of a pandemic, your best option, if you can manage it, is total physical isolation. If you can telecommute to work and rarely go shopping or anywhere else with people your odds of getting the flu will be extremely low.
The difference in risk from the lowest to highest death toll regions varied by a multiple of 39 in the 1918 outbreak.
For many decades, published epidemiological literature assumed that mortality rates from the 1918-20 pandemic were distributed fairly equally. A simple population count from that period would lead to the conclusion that about 20 percent of all fatalities occurred in the developed world. "But when you look at the data," said Murray, "that number shrinks to about three or four percent."
The disparities between the developed and developing worlds during this period are striking. For example, in Denmark 0.2 percent of the population succumbed to the flu. In the United States, that figure is 0.3 percent (based on data from 24 states). In the Philippines, the mortality rate was 2.8 percent, in the Bombay region of India, 6.2 percent, and in central India, 7.8 percent, which was the highest rate of the countries and regions analyzed. According to this data then, from Denmark to central India, death rates from the 1918-1920 flu pandemic varied more than 39-fold.
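The 39-fold figure falls straight out of the mortality rates quoted in that excerpt:

```python
# 1918-1920 pandemic mortality rates (percent of population that died),
# for the regions quoted above.
rates = {
    "Denmark": 0.2,
    "United States (24 states)": 0.3,
    "Philippines": 2.8,
    "Bombay region of India": 6.2,
    "Central India": 7.8,
}

spread = max(rates.values()) / min(rates.values())
print(round(spread))  # 39: the more-than-39-fold variation from Denmark to central India
```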
What caused the disparity in death rates? One possibility: nutrition. Poorer people died at higher rates because their immune systems were weakened by malnutrition. Also, in more developed countries perhaps people could better afford to go about their daily activities without coming into close contact with others. Anyone who can afford total isolation can avoid catching a pandemic flu virus.
About 62 million people would die if an influenza virus of similar lethality spread in a pandemic today. Will the H5N1 avian flu virus adapt to human spread and cause this scenario to come true? Or will H5N1 even exceed the 1918 strain in lethality? Your guess is as good as mine.
The researchers then took the relationship observed in 1918 between per capita income and mortality and extrapolated it to 2004 population data. After adjusting for global income and population changes, as well as changes in age structures within different populations, the research team estimated that if a similarly virulent strain of flu virus were to strike today, about 62 million people worldwide would die.
The economic disruption would be enormous as people became too sick to work, needed care, and became too afraid to go to work. We need to think ahead about measures that can simultaneously decrease the risk of infection and reduce economic disruption. In the event of a pandemic I have argued for a response I call "workplace cocooning": live-in workplace quarantines where people live and work in the same place and never leave their workplace for months until vaccines make human contact safe again. Some would live and work at home and telecommute. Others would get together in workgroups and live and work either in a regular workplace or in a big farm house. Larger groups could live and work together in an otherwise empty hotel (travel would collapse) converted to serve as both offices and living space.
Sailors on ships could stay on ships for the duration and never get off the ships in ports while their ships are unloaded and loaded. Truck drivers could stay in their cabs while the trailers get loaded and unloaded at warehouses by workers who would live in the warehouses and never leave during the pandemic.
The idea is to divide up people into workgroups and avoid contact between workgroups while allowing contact to do necessary collaborative work within workgroups. This model would not work well for every occupation or individual. But enough people live by themselves or in small families that consolidation of groups into combined group homes and workplaces would allow a large fraction of the population to continue working with little risk of infection.
PITTSBURGH, Dec. 20 – The accumulation of genetic damage in our cells is a major contributor to how we age, according to a study being published today in the journal Nature by an international group of researchers. The study found that mice completely lacking a critical gene for repairing damaged DNA grow old rapidly and have physical, genetic and hormonal profiles very similar to mice that grow old naturally. Furthermore, the premature aging symptoms of the mice led to the discovery of a new type of human progeria, a rare inherited disease in which affected individuals age rapidly and die prematurely.
"These progeroid mice, even though they do not live very long, have remarkably similar characteristics to normal old mice, from their physical symptoms, to their metabolic and hormonal changes and pathology, right down to the level of similar changes in gene expression," said corresponding author Jan Hoeijmakers, Ph.D., head of the department of genetics at the Erasmus Medical Center in Rotterdam, Netherlands. "This provides strong evidence that failure to repair DNA damage promotes aging— a finding that was not entirely unexpected since DNA damage was already known to cause cancer. However, it shows how important it is to repair damage that is constantly inflicted upon our genes, even through the simple act of breathing."
The study found that a key similarity between the progeria-like, or progeroid, mice and naturally old mice is the suppression of genes that control metabolic pathways promoting growth, including those controlled by growth hormone. How growth hormone pathways are suppressed is not known, but this response appears to have evolved to protect against stress caused by DNA damage or the wear-and-tear of normal living. The authors speculate that this stress response allows each of us to live as long and as healthy a life as possible despite the accumulation of genetic damage as we age.
Can we design humans to live longer? Or will we have to constantly repair accumulating damage? How to make the DNA in our cells less prone to accumulation of damage? Will the development of massive computer simulations for computer aided biological engineering allow us to find much better designs for enzymes that protect and repair DNA?
The scientists were set off on the road to make this discovery by their investigations into the causes of a boy's genetic disease.
A German physician had contacted the center about a 15-year-old Afghan boy who was highly sensitive to the sun and had other debilitating symptoms, including weight loss, muscle wasting, hearing loss, visual impairment, anemia, hypertension and kidney failure.
The boy turned out to have a defect in the DNA repair mechanism called nucleotide excision repair (NER). The scientists were able to trace the mutation to a particular gene:
When the investigators obtained cells from the boy and tested them for NER activity, they found almost none. Further analysis of the boy’s DNA revealed a mutation in a gene known as XPF, which codes for part of a key enzyme required for the removal of DNA damage. The XPF portion of the enzyme harbors the DNA-cutting activity, whereas a second portion, known as ERCC1, is essential for the enzyme to bind to the damaged DNA. Mutations in either XPF or ERCC1 lead to reduced activity of this key DNA repair enzyme.
"We were completely surprised by the finding that the patient had a mutation in XPF, because mutations in this gene typically cause xeroderma pigmentosum, which is a disease characterized primarily by skin and other cancers rather than accelerated aging," said Dr. Hoeijmakers. "This patient, therefore, has a unique disease, which we named XPF-ERCC1, or XFE-progeroid syndrome."
DNA manipulation technologies have become powerful enough to allow creation of lab mice with any mutation of interest. So these scientists did what many scientists do when faced with the need to better understand a human genetic variation: they created lab mice carrying the same genetic defect.
To understand why this XPF mutation caused accelerated aging, the investigators compared the expression pattern of all of the genes (approximately 30,000) in the liver of 15-day-old mice that had been generated in the laboratory to harbor a defect in their XPF-ERCC1 enzyme and that had symptoms of rapidly accelerated aging to the genes expressed by normal mice of the same age. This comparison revealed a profound suppression of genes in several important metabolic pathways in the progeroid mice. Most notably, the progeroid mice had a profoundly suppressed somatotroph (growth hormone) axis—a key pathway involved in the promotion of growth and development—compared to normal mice.
Damaged aging bodies probably produce less growth hormone as a way to reduce cancer risk. Aging cells with damaged genomes are at risk of becoming cancerous. Growth hormone exposure would make the cells divide. While a cell is dividing it is at increased risk of further DNA damage. Accumulation of DNA damage eventually causes some cells to mutate in their mechanisms for controlling cell growth. Then they start dividing continuously and you get cancer. Better to turn down the growth hormone and reduce the rate of DNA mutation accumulation than get cancer.
The investigators also found low levels of growth hormones in the progeroid mice and ruled out the possibility that this suppression was due to problems with their hypothalamus or pituitary glands, which regulate growth hormone secretion. Furthermore, they demonstrated that if normal adult mice were exposed to a drug that causes DNA damage, such as a cancer chemotherapy agent, the growth hormone axis was similarly suppressed. In other words, DNA damage somehow triggered hormonal changes that halted growth, while also boosting maintenance and repair.
It turns out these mutant mice show the same pattern of gene expression that normal old mice show. So the more rapid accumulation of DNA damage causes mice to age in the same way normal mice do, just at a faster rate.
Because growth hormone levels go down as we get older, contributing to loss of muscle mass and bone density, the investigators systematically compared the gene expression pattern of their progeroid mice to normal old mice to look for other similarities. What they found was a striking similarity pattern between the progeroid and normal-aged mice in several key pathways.
Indeed, for genes that influence the growth hormone pathway, there was a greater than 95 percent correlation in changes in gene expression between the DNA repair-deficient mice and old mice. And, remarkably, there was a near 90 percent correlation between all other pathways affected in the progeroid mice and the older mice.
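The pathway-level comparison boils down to correlating gene expression changes between the two groups of mice. Here is a minimal sketch of that kind of calculation, using made-up log2 fold-change values (the gene list and numbers are illustrative, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical log2 fold-changes (relative to young controls) for a few
# growth-hormone-axis genes in progeroid mice and naturally old mice.
progeroid = [-2.1, -1.8, -0.9, -1.5, -2.4]
old_mice = [-2.0, -1.7, -1.0, -1.4, -2.3]

print(round(pearson(progeroid, old_mice), 3))
```

With hypothetical values this similar the coefficient comes out near 1, which is the shape of the greater-than-95-percent correlation the authors report for growth hormone pathway genes.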
These results strongly suggest that most of normal aging is driven by accumulation of damage in DNA.
"Because there were such high correlations between these pathways in progeroid and normal older mice, we are quite confident that DNA damage plays a significant role in promoting the aging process. The bottom line is that avoiding or reducing DNA damage caused by sources such as sunlight and cigarette smoke, as well as by our own metabolism, also could delay aging," explained Dr. Niedernhofer.
We need gene therapies that will repair DNA damage. But if DNA damage involves most of a genome then gene therapy might not be practical. Current gene therapy techniques involve adding just a gene or two. Putting in much larger amounts of DNA is a much harder task. How to get completely new copies of entire genomes into hundreds of billions of cells throughout the body?
For neurons in the brain we have got to find ways to do extensive repair or replacement of chromosomes. We can't replace all our neurons without losing our identities. This result provides additional evidence that brain rejuvenation is our toughest rejuvenation challenge.
Update: In the last 5 years the two reports that have done the most to make full body rejuvenation look harder to me are this report above and another report that showed the blood of young mice improves the regenerative ability of the muscles of old mice. In both cases the upshot is that the scale of the changes needed to do rejuvenation came out looking bigger.
In the case of the young mouse blood, old mice, and muscle regeneration, the result indicates that perhaps many cells all over the body secrete compounds into the blood that dampen stem cell activity. Even if the compounds that cause this effect come from a few places and are easily blockable, the fact that old bodies make stem cell suppressor compounds suggests that old bodies really need to make them in order to reduce the risk of cancer.
This latest report similarly points toward more extensive changes being needed to do rejuvenation. It is worse news than the mouse blood report because developing the ability to deliver full-genome gene therapy to hundreds of billions of cells strikes me as an incredibly difficult problem to solve. To do extensive genome repair we will probably need some pretty sophisticated nanotechnology, and I hope the nanotech optimists are right about how fast nanotech will advance.
Fish oil supplements given to pregnant mums boost the hand-eye coordination of their babies as toddlers, reveals a small study published ahead of print in the Archives of Disease in Childhood (Fetal and Neonatal Edition).
The researchers base their findings on 98 pregnant women, who were either given 4g of fish oil supplements or 4g of olive oil supplements daily from 20 weeks of pregnancy until the birth of their babies.
Only non-smokers and those who did not routinely eat more than two weekly portions of fish were included in the study. Eighty three mothers completed the study.
Once the children had reached two and a half years of age, they were assessed using validated tests to measure growth and development.
These included tests of language, behaviour, practical reasoning and hand-eye coordination. In all, 72 children were assessed (33 in the fish oil group and 39 in the olive oil group).
There were no significant overall differences in language skills and growth between the two groups of children.
But those whose mothers had taken fish oil supplements scored more highly on measures of receptive language (comprehension), average phrase length, and vocabulary.
And children whose mothers had taken fish oil supplements scored significantly higher in hand-eye coordination than those whose mothers had taken the olive oil supplements.
The effect might be even stronger if the mothers on the fish oil supplements breastfeed, though some baby formula does contain the omega 3 fatty acid DHA.
You can read the full paper as a PDF document:
Our finding of enhanced eye and hand coordination with fish oil supplementation is plausible and consistent with previously reported benefits on visual function after postnatal n-3 PUFA supplementation in both preterm14 24 and term15 25 infants. Although the underlying mechanism is not understood, DHA is known to facilitate rapid phototransduction in the retinal membrane,26 and deficiencies are associated with reduced retinal function in infant primates.2 Furthermore, effects on visual evoked potential could indicate that DHA may also have an effect on the development of the visual cortex.27 Finally, improved stereoacuity in infants has been associated with LC PUFA formula supplementation28 and fish intake of lactating mothers.29
To our knowledge, only one other study has assessed the effects of supplementation with high-dose fish oil in pregnancy on cognitive development of the offspring. A randomised clinical trial by Helland et al9 involved 590 pregnant women who received fish oil at half the dose we used in this study, from 18 weeks’ gestation until 3 months post partum. No differences in development were observed in the 269 infants tested at 6 and 9 months; however, fish oil supplementation was associated with increased mental processing in children at age 4 years. Additionally, mental processing scores were significantly correlated with maternal intake of DHA in pregnancy after adjusting for potential confounding factors10; this is consistent with observed correlations of DHA (and EPA) intake with eye and hand coordination in this study.
Other studies have found positive relationships between n-3 PUFAs at birth (principally DHA) and aspects of visual and neurological development, in either observational studies30–32 or intervention studies using much lower levels of supplementation.11 12 33 Our findings suggest that detection of the potentially beneficial effects of DHA in pregnancy may require larger doses. Further, although it is difficult to directly extrapolate the pregnancy dosage to supplementation of the preterm infant, the doses in our study resulted in similar increases in cord blood levels of DHA to those achieved with the higher doses trialled in preterm infants.34
The researchers acknowledge that their study was too small to prove their conclusions. But they think their conclusions are consistent with other studies of the effects of omega 3 fatty acid DHA on brain development.
Women who want to give their offspring every advantage should consider regular salmon meals or high quality omega 3 fatty acid supplements while pregnant and while lactating and breast feeding. Also, if you use baby formula and if DHA fortification is optional in your legal jurisdiction look for the formula brands that have DHA added.
Los Angeles, CA., Dec.20, 2006-A multi-national research team headed by USC School of Dentistry researcher Songtao Shi, DDS, PhD, has successfully regenerated tooth root and supporting periodontal ligaments to restore tooth function in an animal model. The breakthrough holds significant promise for clinical application in human patients.
The study appears December 20 in the inaugural issue of PLoS ONE.
Utilizing stem cells harvested from the extracted wisdom teeth of 18- to 20-year olds, Shi and colleagues have created sufficient root and ligament structure to support a crown restoration in their animal model. The resulting tooth restoration closely resembled the original tooth in function and strength.
Mesenchymal stem cell-mediated tissue regeneration is a promising approach for regenerative medicine for a wide range of applications. Here we report a new population of stem cells isolated from the root apical papilla of human teeth (SCAP, stem cells from apical papilla). Using a minipig model, we transplanted both human SCAP and periodontal ligament stem cells (PDLSCs) to generate a root/periodontal complex capable of supporting a porcelain crown, resulting in normal tooth function. This work integrates a stem cell-mediated tissue regeneration strategy, engineered materials for structure, and current dental crown technologies. This hybridized tissue engineering approach led to recovery of tooth strength and appearance.
The researchers used swine (i.e. pigs) to grow the teeth in.
To accomplish functional tooth regeneration, we used swine because of the similarities in swine and human orofacial tissue organization. Swine SCAP were loaded into a root-shaped HA/TCP block that contained an inner post channel space to allow the subsequent installation of a porcelain crown (Figure 5A). A lower incisor was extracted and the extraction socket was further cleaned with a surgical bur to remove remaining periodontal tissues (Figure 5A). The HA/TCP block containing SCAP was coated with Gelfoam (Pharmacia Canada Inc., Ontario, Canada) containing PDLSCs and inserted into the socket and sutured for 3 months (Figure 5B–E). CT examination revealed a HA/SCAP-Gelfoam/PDLSC structure growing inside the socket with mineralized root-like tissue formation and periodontal ligament space. The surface of the implanted HA/SCAP-Gelfoam/PDLSC structure was surgically re-opened at three months post-implantation, and a pre-fabricated porcelain crown resembling a minipig incisor was inserted and cemented into the pre-formed post channel inside the HA/TCP block (Figure 5F–H). After suture of the surgical opening, the porcelain crown was retained in situ and subjected to the process of tooth function for four weeks (Figure 5I, J). CT and histologic analysis confirmed that the root/periodontal structure had regenerated (Figure 5K–M). Moreover, newly formed bio-roots demonstrated a significantly improved compressive strength than that of original HA/TCP carriers after six-month implantation (Figure 5N). These findings suggest the feasibility of using a combination of autologous SCAP/PDLSCs in conjunction with artificial dental crowns for functional tooth regeneration.
We need the ability to grow replacement parts. Every step in that direction is something to be cheered. Way to go scientists!
A link between obesity and the microbial communities living in our guts is suggested by new research at Washington University School of Medicine in St. Louis. The findings indicate that our gut microbes are biomarkers, mediators and potential therapeutic targets in the war against the worldwide obesity epidemic.
In two studies published this week in the journal Nature, the scientists report that the relative abundance of two of the most common groups of gut bacteria is altered in both obese humans and mice. By sequencing the genes present in gut microbial communities of obese and lean mice, and by observing the effects of transplanting these communities into germ-free mice, the researchers showed that the obese microbial community has an increased capacity to harvest calories from the diet.
"The amount of calories you consume by eating, and the amount of calories you expend by exercising are key determinants of your tendency to be obese or lean," says lead investigator Jeffrey Gordon, M.D., director of the Center for Genome Sciences and the Dr. Robert J. Glaser Distinguished University Professor. "Our studies imply that differences in our gut microbial ecology may determine how many calories we are able to extract and absorb from our diet and deposit in our fat cells."
That is, not every bowl of cereal may yield the same number of calories for each person. People could extract slightly more or slightly less energy from a serving depending upon their collection of gut microbes. "The differences don't have to be great, but over the course of a year the effects can add up," Gordon says.
Small differences add up.
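To make "small differences add up" concrete, here is a back-of-the-envelope sketch. The 25 kcal/day figure is hypothetical, and the 3500 kcal-per-pound-of-fat figure is the standard rule of thumb, not a number from the study:

```python
# Back-of-the-envelope: a small difference in calorie extraction,
# compounded over a year. Both input figures are assumptions.
extra_kcal_per_day = 25          # hypothetical extra extraction by "obese" microbes
kcal_per_pound_fat = 3500        # rule-of-thumb energy content of a pound of fat

pounds_per_year = extra_kcal_per_day * 365 / kcal_per_pound_fat
print(round(pounds_per_year, 1))  # roughly 2.6 pounds a year
```

Even a barely measurable per-meal difference, left unchecked, compounds into meaningful yearly weight gain.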
Up with the great Bacteroidetes and down with those heinous Firmicutes.
The researchers focused on two major groups of bacteria - the Bacteroidetes and the Firmicutes - that together make up more than 90 percent of microbes found in the intestines of mice and humans. In an earlier study, they compared genetically obese mice and their lean littermates. The obese mice had 50 percent fewer Bacteroidetes and proportionately more Firmicutes. Moreover, the differences were not due to a bloom of one species in the Firmicutes or a diminution of a single or a few species of Bacteroidetes: virtually all members of each group were altered.
In one of this week's Nature articles, Ruth Ley, Ph.D., a microbial ecologist in Gordon's group, reports on her investigation into whether these findings also held true among obese humans. She followed 12 obese patients at a Washington University weight loss clinic over a one-year period. Half the patients were on a low-calorie, low-fat diet and half were on a low-calorie, low carbohydrate diet.
At the outset of the study, the obese patients had the same type of depletion of Bacteroidetes and relative enhancement of Firmicutes as the obese mice. As the patients lost weight, the abundance of the Bacteroidetes increased and the abundance of Firmicutes decreased, irrespective of the diet they were on. Moreover, not one particular species of Bacteroidetes but the entire group increased as patients lost weight.
So then does the obesity cause the bacterial difference? Or does the bacterial difference cause the obesity? Or do these two factors act in a reinforcing cycle?
Part of the research was made possible by DNA sequencing technology. Answers to questions about human biology will come much more rapidly in the future because scientific instrumentation advances continue to accelerate the rate at which scientists can collect data and to allow measurement of things that were previously unmeasurable.
In a companion paper in the same journal, Peter Turnbaugh, a Ph.D. student in Gordon's lab, compared the genes present in the gut microbial communities of the obese and lean mice using the newest generation of massively parallel DNA sequencers.
Transfer of gut bacteria from fat mice to germ-free mice made the recipient mice gain weight more rapidly.
The results of these so-called comparative metagenomic studies revealed that the obese animals' microbial community genome (microbiome) had a greater capacity to digest polysaccharides, or complex carbohydrates. By transferring the gut microbial communities of obese and lean mice to mice that had been raised in a sterile environment (germ-free animals), he confirmed that the obese microbial community prompted a significantly greater gain in fat in the recipients.
There's an obvious opening here for yogurt makers. Can bacterial blends for yogurt be formulated to encourage the flourishing of bacteria that keep the weight off?
Boston, MA – The possibility that vitamin D could help protect people from developing multiple sclerosis (MS) has been posited by researchers in recent decades, but evidence to support that link has been scant. In the first large-scale, prospective study to investigate the relationship between vitamin D levels and MS, researchers at the Harvard School of Public Health (HSPH) have found an association between higher levels of vitamin D in the body and a lower risk of MS. The study appears in the December 20, 2006, issue of the Journal of the American Medical Association.
What I wonder: does vitamin D reduce the risk by helping the immune system knock out pathogens at a very early stage, before they can replicate enough to invoke a more specific immune response? MS might be caused by an immune reaction to a pathogen that has a protein on it similar to a surface protein on human nerves. If the immune system can knock out a pathogen at a very early stage (and vitamin D might help it do this) then the larger immune response could be avoided, and so could production of antibodies that have affinity for nerves.
A big reduction in the incidence of MS could avoid hundreds of thousands of future cases.
MS is a chronic degenerative disease of the central nervous system. It affects some 350,000 people in the U.S. and 2 million worldwide, and occurs most commonly in young adults. Women, who are affected more than men, have a lifetime risk of about 1 in 200 in the U.S. Vitamin D is a hormone manufactured naturally in the body, and its levels can be increased with exposure to sunlight, consumption of foods rich in vitamin D, such as fatty fish, and by taking supplements.
The research was done on a population of 7 million military personnel and former personnel.
The researchers, led by Ascherio, worked in collaboration with colleagues in the U.S. Army and Navy to determine whether vitamin D levels measured in healthy young adults predict their future risk of developing MS. The investigation relied on a study population of more than 7 million individuals, whose serum samples are stored in the Department of Defense Serum Repository. Between 1992 and 2004, 257 U.S. Army and Navy personnel with at least two serum samples stored in the repository were diagnosed with MS. A control group, consisting of participants who did not develop MS, was randomly selected from the study population. Serum samples were analyzed for levels of 25-hydroxyvitamin D, a good indicator of vitamin D availability to tissues, and individuals were divided into five groups of equal size according to their average levels. Because vitamin D levels are strongly influenced by skin color, separate analyses were conducted among whites, blacks, and Hispanics.
The results showed that, among whites, MS risk declined with increasing vitamin D levels--the risk was 62% lower among individuals in the top fifth of vitamin D concentration (corresponding approximately to levels above 100 nmol/L or 40 ng/mL) than among those in the bottom fifth (approximately below 63 nmol/L or 25 ng/mL). The association was strongest among individuals who were younger than 20 when they first entered the study. No significant association was found among blacks and Hispanics, possibly because of a smaller sample size and the lower levels of vitamin D found in those groups. The average age of onset of MS cases was 28.5 years old; there was no significant difference in the results between men and women.
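The "five groups of equal size" analysis is just a quintile split on serum 25-hydroxyvitamin D. Here is a minimal sketch of that split; the cut points quoted above (roughly 63 and 100 nmol/L) are from the study, but the serum levels below are invented:

```python
# Rank subjects by serum 25-hydroxyvitamin D (nmol/L) and cut the
# ranked list into five equal-sized groups. Synthetic data.
levels_nmol_per_l = [45, 110, 63, 98, 72, 120, 55, 88, 101, 67]

ranked = sorted(levels_nmol_per_l)
n = len(ranked)
quintiles = [ranked[i * n // 5:(i + 1) * n // 5] for i in range(5)]

print(quintiles[0])   # bottom fifth (roughly < 63 nmol/L in the study)
print(quintiles[-1])  # top fifth (roughly > 100 nmol/L in the study)
```

The study's 62% risk reduction is the comparison between members of that top group and that bottom group.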
Doug of the Exoteric brought to my attention an abstract of a research paper which suggests influenza might be more common in the winter in part because of vitamin D deficiency. That abstract argues vitamin D prevents respiratory infections by strengthening the initial immune response to pathogens.
Vitamin D deficiency is common in the winter, and activated vitamin D, 1,25(OH)2D, a steroid hormone, has profound effects on human immunity. 1,25(OH)2D acts as an immune system modulator, preventing excessive expression of inflammatory cytokines and increasing the ‘oxidative burst’ potential of macrophages. Perhaps most importantly, it dramatically stimulates the expression of potent anti-microbial peptides, which exist in neutrophils, monocytes, natural killer cells, and in epithelial cells lining the respiratory tract where they play a major role in protecting the lung from infection. Volunteers inoculated with live attenuated influenza virus are more likely to develop fever and serological evidence of an immune response in the winter.
Could avoidance of full blown infections reduce the risk of auto-immune disorder? Can we protect ourselves from infections, MS, and even cancer by taking vitamin D?
The common assumption had been that the brain drain was due to a decreasing supply of neural stem cells in the aging hippocampus, said lead study investigator Bharathi Hattiangady, Ph.D., research associate in neurosurgery. Neural stem cells are immature cells that have the ability to give rise to all types of nerve cells in the brain.
In the current study, however, the researchers found that the stem cells in aging brains are not reduced in number, but instead they divide less frequently, resulting in dramatic reductions in the addition of new neurons in the hippocampus.
To conduct their census, the researchers attached easy-to-spot fluorescent tags to the neuronal stem cells in the hippocampus in young, middle-aged and old rats.
Parenthetically, the hippocampal stem cells are not the only stem cells in brains. But the hippocampal stem cells are important because of the hippocampus's role in formation of new memories.
The rat hippocampus has only 50,000 stem cells and the number does not diminish with age.
They found that in young rats, the hippocampus contained 50,000 stem cells -- and, significantly, this number did not diminish with aging. This finding, the researchers said, suggested that the decreased production of new neurons in the aged brain was not due to a lack of starting material.
The researchers then used another fluorescent molecule to tag all stem cells that were undergoing division in the process of staying "fresh" in case they were recruited to become mature nerve cells.
While the number of hippocampal stem cells does not change, the percentage engaged in cell division (during which new neurons are formed) does diminish with age.
They found that in young rats, approximately 25 percent of the neural stem cells were actively dividing, but only 8 percent of the cells in middle-aged rats and 4 percent in old rats were dividing. This decreased division of stem cells is what causes the decreased neurogenesis, or birth of nerve cells, seen with aging, the scientists said.
The reported difference in the percentage of neural stem cells dividing may even understate the difference in the rate of generation of new neurons. If the old stem cells also divide more slowly, then the gap in the rate of new neuron generation may be even greater than the multiple of 6.25 (25 divided by 4) that the dividing fractions alone suggest.
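The arithmetic in the paragraph above, with a hypothetical per-division slowdown factor tacked on to show how the gap could exceed 6.25:

```python
# The young/old gap in dividing fraction (from the study), plus an
# assumed per-division slowdown for old cells (1.5x is hypothetical).
young_fraction, old_fraction = 0.25, 0.04

fraction_ratio = young_fraction / old_fraction  # 6.25x more cells dividing
slowdown = 1.5                                  # hypothetical: old cells divide 1.5x more slowly
effective_gap = fraction_ratio * slowdown       # combined gap in new-neuron output

print(fraction_ratio, effective_gap)
```

Under that assumed slowdown, young rats would generate new neurons over nine times faster than old rats, not just 6.25 times faster.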
Replacement of aged stem cells by younger stem cells will some day be a core component of rejuvenation therapies. So how many neural stem cells will we need to replace in our hippocampuses? A human brain weighs roughly 1400 grams as compared to about 2 grams for a rat brain. The difference is approximately a factor of 700 (though human and rat brain sizes vary considerably). So if we could create 700 times 50,000, or about 35 million, human neural stem cells and inject them into a human brain's hippocampus we should be able to make our aging brains act younger again.
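The mass-scaling estimate above as a quick calculation (round figures as in the paragraph; real brain masses vary considerably, so this is an order-of-magnitude guess, not a dosing plan):

```python
# Scale the rat hippocampal stem cell count to a human brain by mass.
rat_brain_g, human_brain_g = 2, 1400
rat_hippocampal_stem_cells = 50_000

scale = human_brain_g / rat_brain_g                    # factor of 700
human_estimate = scale * rat_hippocampal_stem_cells    # cells needed

print(int(human_estimate))  # 35000000
```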
Think about that. We know one of the causes of lower brain performance as we age: A very small portion of all brain cells gradually lose their ability to divide. That portion of brain aging is a problem that seems solvable within a couple of decades at most.
A human brain contains about 100 billion neurons. All the neurons age. We need to find ways to rejuvenate all those 100 billion neurons. That's probably the toughest challenge in human rejuvenation because we have to fix all those cells rather than replace them. But the very small fraction of that 100 billion that are the hippocampal neural stem cells play an outsized role, and they are obvious candidates for replacement.
Now, one issue arises: suppose we can find a way to deliver new neural stem cells. How do we get rid of the old stem cells that are already there? Likely the neural stem cells have a mechanism for regulating their total number. So delivery of new, younger stem cells might cause some of the older ones to commit cellular suicide (known as apoptosis). But it might be necessary to do several rounds of replacement stem cell therapy to gradually weed out the older stem cells.
What I want to know: If we had a way to create young hippocampal stem cells could needles deliver the cells safely into the hippocampus without causing brain damage in the process? Or could surgeons guide a flexible tube up arteries to the hippocampus to deliver the stem cells that way? Or how else could the stem cells get delivered?
What else I'd like to know: Do people who have better memories have more neural stem cells in their hippocampuses? Or do their neural stem cells get triggered more easily to divide? The potential exists to use replacement neural stem cell therapy as a way to bring in stem cells that are genetically engineered to form memories more rapidly.
Gene Expression blogger Razib has written some thoughts on evolutionary biologist Armand Leroi's expectations for the future of eugenics (labelled neo-eugenics to try to distance it from the eugenics advocacy of previous eras).
To greatly reduce the rate of mutations in births requires widespread screening and a willingness to abort based on the results. Or genetically screen IVF embryos before implantation. In the longer run we will gain the ability to do gene therapy to repair genetic defects in embryos. But due to the risks involved and likely regulatory resistance I expect that's decades away from routine procedure.
Even if all embryos were screened and women halted every pregnancy found to carry a genetic defect, the percentage of genetic-disease births avoided would still be pretty low, because screening can only catch known harmful mutations.
Notice the emphasis on known. Of course many more unknown purely harmful mutations will be found in the coming years. So the incentive to screen to avoid harmful mutations will rise.
Also, we will come across many more mutations that provide benefits under some conditions (e.g. the sickle cell anemia mutation, which confers malaria resistance) but at painful costs. Expect quite divisive controversies over which genetic variations are harms and which are benefits.
Some argue against all attempts to prevent the birth of babies with genetic diseases. Others argue against specific methods (e.g. abortion) to avoid such births. Still others argue that abortion or pre-implantation genetic diagnosis (PGD - used before implantation of embryos created with in vitro fertilization) are acceptable only to avoid true genetic defects.
The argument for restricting the use of, say, PGD only to avoid genetic defects immediately runs up against the question of what is a genetic defect and what is a genetic disease. The genetic variation for sickle cell anemia was selected for because it conferred resistance to malaria. It was not a defect for those whom it helped to survive malaria infection.
I predict we will find many genetic variations that confer some benefit at some cost. Sometimes the benefit will be irrelevant under modern conditions (e.g. sickle cell anemia for someone living in temperate climes or with benefit of drugs). But that won't always be the case. Real thorny ones will involve trade-offs that come from genetic variations that provide both high costs and high benefits.
For example, imagine a genetic variation that boosts IQ at the cost of greater chances of feeling depressed if one encounters tough times. Or imagine a genetic variation that allows one to live longer but at the cost of making one more lethargic. People will argue to select for or against all manner of genetic variations.
Cost and benefit calculations will depend in part on one's values. But one's expectations of future technological advances will also figure into the question of what is a cost and what is a benefit. Suppose some genetic variation increases a woman's chance of breast cancer in her thirties but also will raise her intelligence. That might well be the case for the BRCA gene variations that contribute to cancer risk. A person might plausibly argue against selecting out such variations on the theory that 30 or 40 years from now cancer will become easily curable.
To screen most effectively requires use of in vitro fertilization (IVF).
A major caution about massive genetic preimplantation screens is that they would be preimplantation. That is, some sort of IVF would be needed. It seems implausible that this would be widespread, but Leroi points out that IVF procedures already make up several percent of the pregnancies in Western nations. The cost of a typical IVF procedure is that of a medium sized car, and crucially, the cost of many diseases over one's lifetime is far greater (IVF would be like "insurance").
As the number of genetic variations one wishes to avoid rises so does the need for IVF and genetic screening on multiple embryos. But the greater the number of genetic variations to avoid or to select for the greater the potential benefit.
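As a toy illustration of why screening against many variants at once demands multiple embryos, here is a sketch of the probability that at least one embryo is free of all the unwanted variants. The model is a deliberate simplification: it assumes each variant shows up independently in each embryo with a fixed probability, whereas real inheritance odds depend on the parents' genotypes.

```python
# Toy model: chance that at least one of n_embryos carries none of the
# k_variants being screened against. p_variant is the assumed chance any
# single embryo inherits a given unwanted variant (0.25 here, as for the
# affected child of two recessive-disease carriers).
def p_at_least_one_clean(n_embryos, k_variants, p_variant=0.25):
    p_clean = (1.0 - p_variant) ** k_variants      # one embryo avoids all k
    return 1.0 - (1.0 - p_clean) ** n_embryos

# With 8 embryos and 2 variants to avoid, success is nearly certain.
# With 20 variants to avoid, 8 embryos will usually not be enough.
```

Under these assumed numbers, screening against 2 variants succeeds over 99% of the time with 8 embryos, but screening against 20 variants succeeds under 5% of the time, which is the sense in which the benefit and the embryo count scale together.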
As we learn the significance of large numbers of genetic variations the primary motivation for gene selection will shift from avoiding genetic diseases to getting desired features. People will embrace IVF to make smarter babies. They'll also embrace it to make better looking children. The prospect of better smarts and looks will cause prospective parents to take up IVF and genetic screening with a passion, and IVF will become the preferred way to create babies.
Armand's piece points out several important issues. First, the new eugenics is already here. Second, the new eugenics will become more powerful as information gathering via genomics becomes more omniscient, and medical interventions in fertility become more omnipotent. Third, there is variance in the extent that different individuals and groups are willing to avail themselves of the opportunities offered by the new eugenics.
Smarter and higher economic class people will embrace eugenic technologies more rapidly and more enthusiastically. The smarties will select for smarter children with attributes that will make them more successful. Therefore I predict a widening of the gaps between the most and least successful segments of societies and of the gaps between societies.
Elites will promote subsidies and propaganda campaigns to encourage the cognitively less able and poorer people to also use eugenic reproductive technologies. But even when the dumber folks opt to use genetic screening they'll make less optimal choices for their offspring.
I would add a fourth point to the three points Razib enumerates: Those who will avail themselves of methods to select offspring genetic endowments will make different average decisions as members of different societies, races, religions, and other groupings. This will tend to cause a divergence of the human race into groups that will become more genetically dissimilar. Genetic variations that cause differences in methods of cognitive processing will have the greatest political impact as groups clash due to genetically caused differences in values and in understandings of the world.
Rowan T. Chlebowski, M.D., Ph.D., of the Los Angeles Biomedical Research Institute at the Harbor-University of California, Los Angeles Medical Center in Torrance, Calif., and his colleagues set out to determine whether a low-fat diet could prolong relapse-free survival in women with early-stage breast cancer.
Between February 1994 and January 2001, 2,437 women who had been treated for early-stage breast cancer were recruited from the Women’s Intervention Nutrition Study (WINS). They were randomly assigned to a dietary intervention group (40%), or a control group (60%).
At the beginning of the study, both groups consumed similar amounts of calories from fat—56 to 57 grams of fat per day (about 30% of total calories). After 1 year, the women in the dietary intervention group were consuming an average of 33 g/day (20.3% of total calories) compared with 51 g/day (29.2% of total calories) in the control group. The difference between the two groups was maintained throughout the trial. Average body weight was similar before the trial started, but 5 years later, the women in the intervention group weighed an average of 6 pounds less than the women in the control group.
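The grams and percentages above also imply total calorie intakes, which is worth checking since the groups differed in more than fat. A quick sketch, using the standard 9 kcal per gram of fat:

```python
# Back out total daily calories from the reported fat grams and the
# reported fraction of calories coming from fat.
KCAL_PER_G_FAT = 9.0

def total_kcal(fat_grams, fat_fraction_of_calories):
    return fat_grams * KCAL_PER_G_FAT / fat_fraction_of_calories

intervention = total_kcal(33.0, 0.203)   # ~1463 kcal/day
control = total_kcal(51.0, 0.292)        # ~1572 kcal/day
```

So the intervention group was apparently eating roughly 100 fewer total calories per day than the control group, which is exactly why attributing the outcome to fat alone is tricky.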
So the women in the intervention group ate less food total and fewer calories total. How much of their reduction in fats consumed came as a reduction in animal fats?
The odds of breast cancer recurrence were low in both groups because the cancers were caught at an early stage.
Ninety-six of 975 women (9.8%) in the intervention group had some form of relapse, compared with 181 of 1462 women (12.4%) in the control group. The researchers calculate that 38 women would need to adopt such a dietary fat reduction plan to prevent one breast cancer recurrence. "Women in the dietary intervention group had a 24% lower risk of relapse than those in the control group," the authors write.
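Checking the raw proportions quoted above is straightforward. Note this is a crude sketch: the paper's 24% figure and the 38-women estimate come from time-to-event (hazard ratio) analysis, so the crude numbers land a bit differently.

```python
# Crude relapse rates, relative risk reduction, and number needed to
# treat, computed directly from the counts in the press release.
intervention_rate = 96 / 975        # ~9.8%
control_rate = 181 / 1462           # ~12.4%

relative_reduction = 1 - intervention_rate / control_rate   # ~20% crude
nnt = 1 / (control_rate - intervention_rate)                # ~39 crude
```

The crude relative reduction comes out near 20% and the crude number needed to treat near 39, both in the same ballpark as the adjusted figures the authors report.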
Their data also suggest that women with hormone receptor–negative breast cancers may have had the most benefit from the dietary fat reduction, but those results weren’t statistically significant and will require further confirmation. The authors plan to address these and other questions in ongoing follow-up studies of the women.
The reduction in calories consumed might have been the real cause of the difference. Or maybe something else about the difference in diets caused the difference in risks.
They caution that the study relied on self-reports of dietary fat intake. Also, the reduction in body weight in the dietary intervention group may have had an effect on breast cancer recurrence, rather than dietary fat intake on its own.
What I'd like to know: Is the risk reduction due to lower total calories consumed? Or perhaps due to a reduction in saturated fats? In other words, do all fats put women at equal risk of recurrence or does a particular saturated fat increase the risk of recurrence? Or did the reduction of fatty foods in the diet increase the consumption of vegetables and fruits that have compounds that reduce breast cancer recurrence?
A diet that reduces the amount of fat in it also changes the amounts of many other things, shifting some down and some up. Therefore even if the women on the lower fat diet had a real reduction in their risk of recurrence of breast cancer that does not begin to tell us why.
If the women on the lower fat diets ate more vegetables then compounds in the vegetables might have reduced the rate of breast cancer recurrence. Compounds in cruciferous vegetables called isothiocyanates (ITCs) have anti-cancer effects.
"The contribution of diet and nutrition to cancer risk, prevention and treatment have been a major focus of research in recent years because certain nutrients in vegetables and dietary agents appear to protect the body against diseases such as cancer," said Shivendra Singh, PhD, lead investigator and professor of pharmacology and urology at the University of Pittsburgh School of Medicine. "From epidemiologic data, we know that increased consumption of vegetables reduces the risk for certain types of cancer, but now we are beginning to understand the mechanisms by which certain edible vegetables like broccoli help our bodies fight cancer and other diseases."
Dr. Singh's study is based on phytochemicals found in several cruciferous vegetables called isothiocyanates (ITCs), which are generated when vegetables are either cut or chewed. His laboratory has found that phenethyl-ITC, or PEITC, is highly effective in suppressing the growth of human prostate cancer cells at concentrations achievable through dietary intake of cruciferous vegetables.
In seeking to further define the mechanisms by which PEITC induces apoptosis, or programmed cell death, mice were grafted with human prostate tumors and orally administered a small amount of PEITC daily. After 31 days of treatment, the average tumor volume in the control group that did not receive PEITC was 1.9 times higher than that of the treatment group. In addition, a pro-apoptotic protein called Bax appeared to play a role in bringing about apoptosis by PEITC.
While digging for the information to write this post I felt compelled to make and eat a big bowl of cole slaw made with non-fat mayo and some canola oil. An analogue of the cruciferous vegetable compound sulforaphane also has anti-cancer effects.
It is pretty easy to make a scientifically informed argument for eating a lot more fruits and vegetables. In particular, for reduction of cancer risk cruciferous vegetables such as kale, broccoli, cabbage, and arugula have compounds in them that have anti-cancer properties. The relative value of fat avoidance is harder to discern. The various fatty acids vary considerably in their metabolic roles and the generally bad reputation given to fat is too broad. Most people need more omega 3 fatty acids, not less. Also, it is not clear to me that a diet high in monounsaturated fats is a net harm.
In one study, participants linked more masculinized faces with riskier and more competitive behaviors, higher mating effort and lower parenting effort in comparison with less masculine faces.
Men with highly masculine faces were judged more likely to get into physical fights, challenge their bosses, sleep with many women, cheat on their partners and knowingly hit on someone else's girlfriend. Those with more feminine faces were judged to be more likely to be good husbands, be great with children, work hard at their jobs even though they didn't like them, and be emotionally supportive in long-term relationships.
"Men picked the less masculine-looking men to accompany their girlfriends on a weekend trip to another city," Kruger said, "and both men and women would prefer the less masculine versions as dating partners for their daughters."
Together, the studies show that highly masculine faces are associated with riskier and more competitive behavior, higher mating effort and lower parenting effort in comparison with less masculine faces.
"Both men and women generally respond to men with high and low facial masculinity in ways that could be expected to benefit their own reproductive success," Kruger said. "While the more masculine-looking men may be good bets for mating, the more feminine-looking men may be better bets as parenting partners. More feminine features suggest compassion and kindness, indicating that men are able and willing to invest in a long-term relationship and in any potential children."
But will a masculine appearance help or hurt your career? It is hard to separate out the effect of the appearance and the effect of the behavioral tendencies that correlate with that appearance.
Humans are going to become more confused with each other in the future because we will gain greater capabilities to change both our appearances and our personalities. A guy will be able to make himself look more feminine while having his mind molded to make him more masculine in behavior. Might there be an appeal to this strategy since it would allow a person to be underestimated? Would people let their guards down around a more feminine looking guy?
What I want to know: When offspring genetic engineering becomes possible will people make their sons and daughters more feminine or more masculine? Will the sexes become more alike or more different?
A new report from FAO says livestock production contributes to the world's most pressing environmental problems, including global warming, land degradation, air and water pollution, and loss of biodiversity. Using a methodology that considers the entire commodity chain, it estimates that livestock are responsible for 18 percent of greenhouse gas emissions, a bigger share than that of transport. However, the report says, the livestock sector's potential contribution to solving environmental problems is equally large, and major improvements could be achieved at reasonable cost.
Based on the most recent data available, Livestock's long shadow takes into account the livestock sector's direct impacts, plus the environmental effects of related land use changes and production of the feed crops animals consume. It finds that expanding population and incomes worldwide, along with changing food preferences, are stimulating a rapid increase in demand for meat, milk and eggs, while globalization is boosting trade in both inputs and outputs.
Grazing uses a quarter of the land surface of the Earth. Think about what that means as populations increase and humans all over the world use rising affluence to move out into newly created suburbs. Land supplies are inadequate. The human race has gotten too big.
Deforestation, greenhouse gases. The livestock sector is by far the single largest anthropogenic user of land. Grazing occupies 26 percent of the Earth's terrestrial surface, while feed crop production requires about a third of all arable land. Expansion of grazing land for livestock is a key factor in deforestation, especially in Latin America: some 70 percent of previously forested land in the Amazon is used as pasture, and feed crops cover a large part of the remainder. About 70 percent of all grazing land in dry areas is considered degraded, mostly because of overgrazing, compaction and erosion attributable to livestock activity.
To the fans of biomass energy: Hasn't enough of the Amazon already been lost to pasture land? Do we need to make it worse by promoting the destruction of the rain forests in the name of biomass energy environmentalism?
Livestock are responsible for 37% of anthropogenic methane (i.e. methane produced as a result of human activities).
FAO estimated that livestock are responsible for 18 percent of greenhouse gas emissions, a bigger share than that of transport. It accounts for nine percent of anthropogenic carbon dioxide emissions, most of it due to expansion of pastures and arable land for feed crops. It generates even bigger shares of emissions of other gases with greater potential to warm the atmosphere: as much as 37 percent of anthropogenic methane, mostly from enteric fermentation by ruminants, and 65 percent of anthropogenic nitrous oxide, mostly from manure.
Methane is probably the biggest greenhouse gas problem with livestock. As a greenhouse gas methane is about 21 times more potent than carbon dioxide by weight. Rising world affluence translates into rising demand for meat and that means more cows, sheep, and other methane producers.
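To see how that 21x figure plays out, here is a sketch converting livestock methane into CO2-equivalent terms. The methane tonnage is an assumed round number for illustration, not a figure from the FAO report.

```python
# Convert an assumed annual livestock methane output into CO2-equivalents
# using the ~21x per-tonne potency figure cited above.
GWP_METHANE = 21                      # 100-year warming potential vs CO2
livestock_methane_mt = 100.0          # assumed: million tonnes CH4/year

co2_equivalent_mt = livestock_methane_mt * GWP_METHANE   # 2100 Mt CO2e
```

The multiplier is what makes livestock methane loom so large: a relatively modest tonnage of methane punches far above its weight in warming terms, which is also why capturing or preventing it offers outsized leverage.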
But methane from livestock strikes me as (at least in theory) a much more tractable problem than carbon dioxide from fossil fuel burning. The potential exists to capture dairy cow methane when the cows are in buildings. Also, feeds differ greatly in their effects on methane production and cow bacteria balances could be manipulated to lower methane production. Biotechnology could drastically cut back on livestock methane production.
The use of fossil fuels in agriculture is more problematic for the same reason that the use of fossil fuels is so intractable in other human activities. Until other energy sources become cheaper than fossil fuels the rising demand for livestock and fancier food in general is going to cause a rising demand for fossil fuels.

Livestock compete with wild animals for land area. As the human race becomes more affluent the amount of animal biomass that will be wild is going to decline. This'll drive more species to extinction. (So will medical treatments that allow humans to live in high disease areas.)
The sheer quantity of animals being raised for human consumption also poses a threat to the Earth's biodiversity. Livestock account for about 20 percent of the total terrestrial animal biomass, and the land area they now occupy was once habitat for wildlife. In 306 of the 825 terrestrial eco-regions identified by the Worldwide Fund for Nature, livestock are identified as "a current threat", while 23 of Conservation International's 35 "global hotspots for biodiversity" - characterized by serious levels of habitat loss - are affected by livestock production.
The full text of the UN Food and Agricultural Organization report Livestock's long shadow is downloadable as a PDF file.
Elizabeth Economy, director for Asia studies at the Council on Foreign Relations, says China will surpass the United States in carbon dioxide emissions and China is embarked on an internal propaganda campaign to blame the rest of the world.
Last month the International Energy Agency announced that China would probably surpass the United States as the world's largest contributor of the greenhouse gas carbon dioxide by 2009, more than a full decade earlier than anticipated. This forecast could spur China to adopt tough new energy and environmental standards, but it probably won't. China has already embarked on a very different strategy to manage its environmental reputation: launching a political campaign that lays much of the blame for the country's mounting environmental problems squarely on the shoulders of foreigners and, in particular, multinational companies.
While still in its initial stages, the campaign has gained steam over the past month. Senior Chinese officials, the media and even some environmental activists have charged multinational firms and other countries with exporting pollution, lowering their environmental manufacturing standards and willfully ignoring China's environmental regulations. Faced with growing international and popular discontent over the country's environmental crisis, China's leaders are tapping into anti-foreign and nationalist sentiments to deflect attention from their own failures.
First off, China's not going to help. Second, if they are going to surpass the United States in 2009 then where are they going to be in 2019 or 2029?
Consider the sheer cheekiness of this claim:
In late October a top environmental official, Pan Yue, accused the developed countries of "environmental colonialism": of transferring resource-intensive, polluting industries to China and bearing as little environmental responsibility as possible.
The Chinese government is buying massive amounts of American debt in order to keep the Chinese yuan currency undervalued. This boosts Chinese exports and decreases production in other countries of steel and other energy-intensive products. As the US dollar has dropped against other currencies in response to a US trade deficit that East Asian countries created with US debt purchases it has made the Chinese currency even more undervalued against the Euro, the English Pound, and other currencies.
Benny Peiser points out the above article and this one below by Fiona Harvey, environment correspondent for the Financial Times as indications for why the current unilateral regulatory approach in Europe faces an increasingly difficult reception. Some of Europe's reduction in CO2 emissions has just shifted to other countries which levy fewer taxes on energy usage. Fiona Harvey says that countries are afraid to put higher costs on carbon dioxide emissions because they fear loss of international competitiveness.
Japan refused to hurry moves to commit to reductions in emissions beyond 2012, when the current provisions of the Kyoto protocol expire, because of fears that it would hand China a competitive advantage in manufacturing industries. Canada faced a similar dilemma, resisting pressure to push for greater emissions cuts as the US was refusing to take on reduction targets. The US and Australia have already rejected the protocol, which obliges developed countries to cut their emissions by an average of 5 per cent compared with 1990 levels by 2012.
More worrying for proponents of the treaty, however, are rifts on the issue that are beginning to become apparent within Europe. The European Union has long been the most steadfast supporter of the Kyoto protocol, in the face of backsliding from Canada and Japan. The EU was credited with enticing Russia to agree to the protocol two years ago, which was the decisive factor in ensuring the long-delayed agreement finally came into effect. The EU’s greenhouse gas emissions trading scheme is the only mandatory scheme in the world to impose constraints on business emissions of carbon dioxide and to allow companies to trade their emissions allowances with one another in order to reduce carbon output at the lowest possible price.
Governments aren't just worried about reduced competitiveness. Their publics do not want to pay more for energy and for products and services made from energy.
The Kyoto Accord to cut greenhouse gas emissions wasn't honored by some of its signatories. Now the percentage of emissions by non-Kyoto countries is skyrocketing. An international agreement isn't going to cut total carbon dioxide (CO2) emissions or even slow CO2 emissions growth by much.
Worried about the potential for global warming? There's only one guaranteed way to stop CO2 emissions growth: development of energy technologies that are cleaner and cheaper than fossil fuels.
ST. LOUIS, Dec. 06, 2006 -- Boeing [NYSE: BA] today announced that Spectrolab, Inc., a wholly-owned subsidiary, has achieved a new world record in terrestrial concentrator solar cell efficiency. Using concentrated sunlight, Spectrolab demonstrated the ability of a photovoltaic cell to convert 40.7 percent of the sun's energy into electricity. The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colo., verified the milestone.
Note the use of the term "concentrator". Sounds like they are focusing light down from larger to smaller areas. So does the photovoltaic cell achieve 40% efficiency even with more intense concentrated light? Sounds like it.
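For a sense of scale, here is a sketch of a single cell's electrical output under concentration. The 500-suns concentration ratio and the 1 cm² cell size are my assumptions for illustration, not figures from Boeing's announcement; only the 40.7% efficiency is from the press release.

```python
# Power from an assumed 1 cm^2 cell at an assumed 500x concentration.
STANDARD_INSOLATION = 1000.0    # W/m^2, standard test condition
efficiency = 0.407              # reported conversion efficiency
concentration = 500             # suns; assumption for illustration
cell_area_m2 = 1.0 / 10000      # 1 square centimeter

incident_w = STANDARD_INSOLATION * concentration * cell_area_m2   # 50 W
electric_w = incident_w * efficiency                              # ~20.4 W
```

This is the economic appeal of concentrators: a thumbnail-sized piece of expensive multijunction semiconductor handles the light gathered by a much larger area of cheap mirrors or lenses.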
"This solar cell performance is the highest efficiency level any photovoltaic device has ever achieved," said Dr. David Lillington, president of Spectrolab. "The terrestrial cell we have developed uses the same technology base as our space-based cells. So, once qualified, they can be manufactured in very high volumes with minimal impact to production flow."
High efficiency multijunction cells have a significant advantage over conventional silicon cells in concentrator systems because fewer solar cells are required to achieve the same power output. This technology will continue to dramatically reduce the cost of generating electricity from solar energy as well as the cost of materials used in high-power space satellites and terrestrial applications.
They think they can increase the conversion efficiency even higher.
"These results are particularly encouraging since they were achieved using a new class of metamorphic semiconductor materials, allowing much greater freedom in multijunction cell design for optimal conversion of the solar spectrum," said Dr. Richard R. King, principal investigator of the high efficiency solar cell research and development effort. "The excellent performance of these materials hints at still higher efficiency in future solar cells."
So how far will this drive down the cost of photovoltaic electricity?
Projections made on the future use of various sources of energy are guesses. Go out enough years and unpredictable technological breakthroughs make all future projections wrong. Maybe battery breakthroughs will make electric cars practical for most uses. Maybe photovoltaic breakthroughs will halt the growth of coal for electric power. Or maybe nuclear power will replace coal as the whole world becomes too concerned by the growth in carbon dioxide emissions. Then again, maybe methods to capture all pollutants and sequester carbon dioxide from burning coal will get so cheap that coal will become the cheapest way to get clean energy.
The problem with technologies that make fossil fuels cleaner is that they almost always cost more than not using such technologies. We are more assured of a cleaner environment if innately cleaner energy technologies become cheaper.
RICHLAND, Wash. – A new study for the Department of Energy finds that if all the cars and light trucks in the nation switched from oil to electrons, idle capacity in the existing electric power system could generate most of the electricity consumed by plug-in hybrid electric vehicles.
Researchers at DOE's Pacific Northwest National Laboratory also evaluated the impact of plug-in hybrid electric vehicles, or PHEVs, on foreign oil imports, the environment, electric utilities and the consumer.
"This is the first review of what the impacts would be of very high market penetrations of PHEVs," said Eric Lightner, of DOE's Office of Electric Delivery and Energy Reliability. "It's important to have this baseline knowledge as consumers are looking for more efficient vehicles, automakers are evaluating the market for PHEVs and battery manufacturers are working to improve battery life and performance."
The average commuting trip in the United States is 33 miles per day.
Current batteries for these cars can easily store the energy for driving the national average commute - about 33 miles round trip a day, so the study presumes that drivers would charge up overnight when demand for electricity is much lower.
Daily recharging would get old real fast. Every time you come home the need to plug the car into an electric socket would become an annoying chore. Plus, some people do not live in places where this is practical. Say you live in an apartment building and park on the street or in a big lot. You may have no practical way to plug in your car. Even if you can plug in your car, is that always practical? What about running an electric cable out to the car when it is raining? That works okay if you keep the car in a garage. But most people park their cars outside - including many who have garages.
The areas which get their power from hydroelectric will need to build more coal or nuclear plants. Natural gas? North American production can't keep up with demand. More electric demand means more coal with smaller amounts of other types.
Researchers found, in the Midwest and East, there is sufficient off-peak generation, transmission and distribution capacity to provide for all of today's vehicles if they ran on batteries. However, in the West, and specifically the Pacific Northwest, there is limited extra electricity because of the large amount of hydroelectric generation that is already heavily utilized. Since more rain and snow can't be ordered, it's difficult to increase electricity production from the hydroelectric plants.
They didn't include nuclear plants because those operate around the clock supplying base electric demand.
"We were very conservative in looking at the idle capacity of power generation assets," said PNNL scientist Michael Kintner-Meyer. "The estimates didn't include hydro, renewables or nuclear plants. It also didn't include plants designed to meet peak demand because they don't operate continuously. We still found that across the country 84 percent of the additional electricity demand created by PHEVs could be met by idle generation capacity."
I suspect the power plants that are shut down at night have higher electric generation costs. So a shift toward using those power plants at night might raise electric costs for all purposes on average.
The coal-fired plants would emit more. But the reduction in gasoline burning might lead to a net reduction in carbon dioxide. However, a big decrease in US demand for oil would lower world prices and therefore lead to a greater demand for oil for other purposes. So I'm not as optimistic when looking at this path from a global level.
The study also looked at the impact on the environment of an all-out move to PHEVs. The added electricity would come from a combination of coal-fired and natural gas-fired plants. Even with today's power plants emitting greenhouse gases, the overall levels would be reduced because the entire process of moving a car one mile is more efficient using electricity than producing gasoline and burning it in a car's engine.
More coal burning means more sulfur emissions. It also means more mercury, particulates, and other pollutants.
Total sulfur dioxide emissions would increase in the near term due to sulfur content in coal. However, urban air quality would actually improve since the pollutants are emitted from power plants that are generally located outside cities. In the long run, according to the report, the steady demand for electricity is likely to result in investments in much cleaner power plants, even if coal remains the dominant fuel for our electricity production.
Newer electric plants could be built to tougher emissions requirements if the political will exists to make that happen. More stringent requirements on emissions from coal fired plants will push more new construction toward nuclear power plants. Tougher emissions regulations would raise the cost per kilowatt-hour of electricity.
"With cars charging overnight, the utilities would get a new market for their product. PHEVs would increase residential consumption of electricity by about 30 - 40 percent. The increased generation could lead to replacing aging coal-fired plants sooner with newer, more environmentally friendly versions," said Kintner-Meyer.
"The potential for lowering greenhouse gases further is quite substantial because it is far less expensive to capture emissions at the smokestack than the tailpipe. Vehicles are one of the most intractable problems facing policymakers seeking to reduce greenhouse gas emissions," said Pratt.
Big power plants can have big emissions control equipment and highly skilled technical staff to manage the equipment. The capture and management of sulfur, mercury, particulates, carbon dioxide, and other pollutants is far easier than with cars running on gasoline.
If utilities were to change their rate structures to charge more during periods of high demand and less during periods of low demand (aka dynamic pricing) then pluggable hybrids would pay off more quickly and people would move toward them more quickly.
Finally, the study looked at the economic impact on consumers. Since PHEVs are expected to cost about $6,000 to $10,000 more than existing vehicles - mostly due to the cost of batteries - researchers evaluated how long it might take owners to break even on fuel costs. Depending on the price of gas and the cost of electricity, estimates range from five to eight years - about the current lifespan of a battery. Pratt notes that utilities could offer a lower price per kilowatt hour on off-peak power, making PHEVs even more attractive to consumers.
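The payback arithmetic behind estimates like that is easy to sketch. All the numbers below are my illustrative assumptions (2006-era prices), not the study's actual inputs:

```python
# Years for fuel savings to recoup an assumed PHEV price premium.
premium = 6000.0            # assumed extra purchase cost, dollars
miles_per_year = 12000      # assumed annual driving
mpg = 30.0                  # assumed gasoline-mode fuel economy
gas_price = 3.00            # assumed $/gallon
kwh_per_mile = 0.30         # assumed electric-mode consumption
offpeak_price = 0.08        # assumed $/kWh overnight

gas_cost = miles_per_year / mpg * gas_price                     # $1200/yr
electric_cost = miles_per_year * kwh_per_mile * offpeak_price   # $288/yr
years_to_break_even = premium / (gas_cost - electric_cost)      # ~6.6 yr
```

Under these assumptions the premium pays back in about six and a half years, squarely in the study's five-to-eight-year range; cheaper off-peak rates or pricier gasoline shorten it.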
The pluggable hybrids could be connected to electric sockets with smart electronic switches that waited till electric prices dropped below some settable minimum before starting to charge.
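The switch logic I am describing is simple to sketch. The price feed and charger interface here are hypothetical, not any existing utility API:

```python
# Close the charging switch only when the spot price falls below the
# owner's chosen ceiling and the battery still has room.
def should_charge(price_per_kwh, ceiling_per_kwh, battery_full):
    return (not battery_full) and price_per_kwh <= ceiling_per_kwh

# Example: owner sets a 6 cent/kWh ceiling.
# Charge when power is 5 cents, hold off when it is 9 cents.
```

The hard part is not the logic but the infrastructure: the switch needs a real-time price signal from the utility, which is exactly what dynamic pricing would provide.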
Dynamic pricing combined with pluggable hybrids that can easily respond to pricing changes will do something else too: They will create more growth potential for energy sources that are not reliable. Wind and solar photovoltaics will both become more useful if a large portion of the demand for electricity was highly responsive to pricing changes. Pluggable hybrids will provide such a use for electric power.
To make pluggable hybrids most effective we need better batteries. Venture capital start-ups and established companies are chasing that goal. I'm confident the battery advances will come. The growing demand for hybrid vehicles has provided the financial incentive to invest in better battery technology.
We also need regulatory reform in the electric power market to make dynamic pricing a reality. Here I'm less optimistic. Government regulators and electric utilities don't have much incentive to push through a shift to dynamic pricing and I do not expect the public to be excited about it.
Update: A large increase in the demand for overnight electricity would tend to cause a phase-out of electric generator plants that provide peak power (notably natural gas burners) in favor of baseload electric power generators (mostly coal and nuclear). Why? Because the baseload suppliers are cheaper per kilowatt-hour, but only if they can run constantly. Coal and especially nuclear plants cost more to build but use cheaper fuel. They need to operate constantly to pay off their higher capital costs.
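The capital-versus-fuel trade-off can be made concrete with a toy levelized-cost calculation. All the dollar figures below are illustrative assumptions I picked to show the shape of the trade-off, not real plant data.

```python
# Why baseload plants need high utilization: a toy per-kWh cost comparison.
# Capital cost is a fixed annual charge per kW of capacity; spreading it
# over more operating hours (a higher capacity factor) lowers the per-kWh cost.

HOURS_PER_YEAR = 8760

def cost_per_kwh(capital_per_kw_year, fuel_per_kwh, capacity_factor):
    hours_run = HOURS_PER_YEAR * capacity_factor
    return capital_per_kw_year / hours_run + fuel_per_kwh

# Assumed: nuclear = high capital ($400/kW-yr), cheap fuel ($0.005/kWh);
# gas peaker = low capital ($80/kW-yr), expensive fuel ($0.06/kWh).
for cf in (0.30, 0.90):
    nuclear = cost_per_kwh(400, 0.005, cf)
    gas = cost_per_kwh(80, 0.06, cf)
    print(f"capacity factor {cf:.0%}: nuclear ${nuclear:.3f}/kWh, gas ${gas:.3f}/kWh")
```

Run it and the crossover appears: at a 30% capacity factor the gas plant is cheaper per kWh, while at 90% the nuclear plant wins, which is why flattening demand into the overnight hours favors baseload.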
The exact mix of coal versus nuclear is going to depend on the regulatory environment and on technological advances. Tougher emissions regulations will favor nuclear. Technological advances might lower the costs of one more than the other. My guess is that nuclear has a greater potential for cost declines from technological advances. But when will those technological advances come?
Telomerase is an enzyme that rebuilds the tips of chromosomes. The tips, or ends, of chromosomes get shorter each time a cell divides. As humans age our chromosome tips get shorter, and once they get too short our cells cannot divide well. Chronic stress, not coincidentally, shortens telomeres and also accelerates the aging of our bodies. So why don't our bodies just make enough telomerase enzyme to rebuild our chromosome tips? Evolutionary biologists theorize that selective pressures reduced telomerase expression to reduce cancer risk.
The link between telomerase and aging makes telomerase research of interest to biologists who study aging and cancer. Vera Gorbunova decided to compare telomerase expression in 15 rodent species from around the globe and found telomerase expression is inversely correlated with body mass.
A key enzyme that cuts short our cellular lifespan in an effort to thwart cancer has now been linked to body mass.
Until now, scientists believed that our relatively long lifespans controlled the expression of telomerase—an enzyme that can lengthen the lives of cells, but can also increase the rate of cancer.
Vera Gorbunova, assistant professor of biology at the University of Rochester, conducted a first-of-its-kind study to discover why some animals express telomerase while others, like humans, don't. The findings are reported in today's issue of Aging Cell.
"Mice express telomerase in all their cells, which helps them heal dramatically fast," says Gorbunova. "Skin lesions heal much faster in mice, and after surgery a mouse's recovery time is far shorter than a human's. It would be nice to have that healing power, but the flip side of it is runaway cell reproduction—cancer."
The activation of telomerase could help rejuvenate bodies. But telomerase activation would probably come with a higher risk of cancer. We need cures for cancer not only to avoid death from cancer but also to make it easier to use stem cells and other cell types to create replacement parts. Without those replacement parts we'll die from worn out organs, worn out capillaries, and other part failures even if we can avoid cancer.
Are rodent species as close to each other on the evolutionary tree as this scientist assumes? How far back did the first branching of existing rodent species occur? Anyone know?
For over a year, Gorbunova collected deceased rodents from around the world and had them shipped to her lab in chilled containers. She analyzed their tissues to determine if the telomerase was fully active in them, as it was in mice, or suppressed, as it is in humans. Rodents are close to each other on the evolutionary tree and so if there were a pattern to the telomerase expression, she should be able to spot it there.
To her surprise, she found no correlation between telomerase and longevity. The great monkey wrench in that theory was the common gray squirrel, which lives an amazing two decades, yet also expresses telomerase in great quantity. Evolution clearly didn't see long life in a squirrel to be an increased risk for cancer.
I am guessing she was expecting to find an inverse correlation between telomerase expression and longevity. Shorter lived species ought to be able to allow greater repair by turning up telomerase, since they will die from other things before cancer gets them.
But that line of thinking does not make sense anyway, since we know short lived mice die from cancer. Their cells deteriorate, and they lose control of them more quickly. Do their cells mutate more rapidly than human cells? I dimly recall that mice or rats have DNA polymerase enzymes that make errors at higher rates than human DNA polymerases.
Bigger bodies mean more cells which mean more risk of a cell mutating into cancers. So it is not surprising to me that bigger body rodents have less telomerase.
Body mass, however, showed a clear correlation across the 15 species. The capybara, nearly the size of a grown human, was not expressing telomerase, suggesting evolution was willing to forgo the benefits in order to rein in cancer.
The results cannot be directly related to humans, but Gorbunova set up the study to produce very strong across-the-board indicators. It's clear that evolution has found that the length of time an organism is alive has little effect on how likely some of its cells might mutate into cancer. Instead, simply having more cells in your body does raise the specter of cancer—and does so enough that the benefits of telomerase expression, such as fast healing, weren't worth the cancer risk.
One reason why the results do not directly relate to humans: We may have evolved better mechanisms for controlling cancer. Or maybe the rodents evolved better mechanisms to control cancer. But I would also expect rodents to differ between species in the quality of their mechanisms for doing cell replication and in their immune mechanisms for stopping cancer.
Larger animals have even larger numbers of cells and therefore, all else equal, even greater chances of developing cancer. Every additional cell is an additional risk for cancer. So the bigger an organism gets the greater the need to develop additional methods to control cancer.
Gorbunova points out that these findings raise another, perhaps far more important question: What, then, does this mean for animals that are far larger than humans? If a 160-pound human must give up telomerase to thwart cancer, then what does a 250,000-pound whale have to do to keep its risk of cancer at bay?
"It may be that whales have a cancer suppressant that we've never considered," says Gorbunova. "I'd like to find out what kind of telomerase expression they have, and find out what else they use to combat cancer."
We might eventually find genetic mechanisms for cancer prevention in other species that we could adapt to humans. Genes transferred from other species into human stems cells could serve to make youthful replacement organs less prone to become cancerous.
Dec. 7, 2006 -- A great deal of research connects nutrition with cancer risk. Overweight people are at higher risk of developing post-menopausal breast cancer, endometrial cancer, colon cancer, kidney cancer and a certain type of esophageal cancer. Now preliminary findings from researchers at Washington University School of Medicine in St. Louis suggest that eating less protein may help protect against certain cancers that are not directly associated with obesity.
The research, published in the December issue of the American Journal of Clinical Nutrition, shows that lean people on a long-term, low-protein, low-calorie diet or participating in regular endurance exercise training have lower levels of plasma growth factors and certain hormones linked to cancer risk.
These researchers think that because people on low calorie, low protein diets have lower blood levels of insulin-like growth factor 1 (IGF-1), protein must be a culprit in raising IGF-1.
"However, people on a low-protein, low-calorie diet had considerably lower levels of a particular plasma growth factor called IGF-1 than equally lean endurance runners," says the study's first author Luigi Fontana, M.D., Ph.D., assistant professor of medicine at Washington University and an investigator at the Istituto Superiore di Sanità in Rome, Italy. "That suggests to us that a diet lower in protein may have a greater protective effect against cancer than endurance exercise, independently of body fat mass."
But has Fontana looked hard at the body of research on calorie restriction? Are distance runners really a good gold standard to compare to? We already know that calorie restriction will boost longevity of lab mice as compared to mice who eat more and get more exercise.
Note that Fontana's group with the lowest IGF-1 levels also ate a raw food vegetarian diet. Okay, that's lower in glycemic index, plus comes with lots of beneficial compounds in fruits and vegetables. Seems to me he changed too many variables at once between groups.
The study involved three groups of people. The first ate a low-protein, low-calorie, raw food vegetarian diet and was made up of 21 lean men and women. Another group consisted of 21 lean subjects who did regular endurance running, averaging about 48 miles per week. The runners ate a standard Western diet, consuming more calories and protein than group one. The third group included 21 sedentary people who also consumed a standard Western diet, higher in sugars, processed refined grains and animal products. The subjects were matched for age, sex and other demographic factors, and no one smoked or had diabetes, cardiovascular disease, cancer, lung disease or other chronic illness.
Protein intake was, not surprisingly, lowest in the low-protein group. They averaged a daily intake of 0.73 grams of protein per kilogram of body weight. Endurance runners ate 1.6 grams and sedentary people on the Western diet, 1.23 grams. The recommended daily allowance for protein intake is 0.8 grams. That's about three ounces of protein per day for a 220-pound man.
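The "three ounces for a 220-pound man" figure in the paragraph above is easy to verify; the sketch below just redoes the unit conversions.

```python
# Checking the RDA protein arithmetic: 0.8 g per kg of body weight.

LB_TO_KG = 0.4536   # pounds to kilograms
G_PER_OZ = 28.35    # grams per (avoirdupois) ounce

def rda_protein_grams(weight_lb, g_per_kg=0.8):
    """Recommended daily protein in grams for a given body weight."""
    return weight_lb * LB_TO_KG * g_per_kg

grams = rda_protein_grams(220)
print(f"{grams:.0f} g/day, or about {grams / G_PER_OZ:.1f} oz")
```

A 220-pound man is about 100 kg, so the RDA works out to roughly 80 grams, a bit under three ounces, which matches the article's figure.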
"It's interesting to us that both the runners and especially the sedentary people consumed about 50 percent more protein than recommended," says Fontana. "We know that if we consume 50 percent more calories than recommended, we will become obese. But there is not a lot of research on whether chronic over-consumption of protein also has harmful effects."
The conclusions these researchers draw about protein and cancer risk are based on a known association between insulin-like growth factor 1 (IGF-1) blood plasma levels and the risk of breast cancer, prostate cancer, and colon cancer.
Fontana and colleagues found significantly lower blood levels of plasma insulin-like growth factor 1 (IGF-1) in the low-protein diet group than in either the equally lean runners or the sedentary people eating a standard Western diet. Past research has linked pre-menopausal breast cancer, prostate cancer and certain types of colon cancer to high levels of IGF-1, a powerful growth factor that promotes cell proliferation. Data from animal studies also suggest that lower IGF-1 levels are associated with maximal lifespan.
What I'd like to see: A larger assortment of diets compared for effects upon blood IGF-1 levels. I do not believe they've proven their main claim against protein:
"Our findings show that in normal weight people IGF-1 levels are related to protein intake, independent of body weight and fat mass," Fontana says. "I believe our findings suggest that protein intake may be very important in regulating cancer risk."
See below for why I doubt the strength of their claim. They might be right. I'd really like to know whether they are right. But the study strikes me as having a major shortcoming.
Fontana calls the study a hypothesis-generating paper that suggests connections between dietary protein and epidemiological studies that show associations between IGF-1 levels and the risk of cancer. But he says more research is needed to clarify what that connection is.
I see a big obvious shortcoming of this study in this paragraph. Do you see it too?
The researchers also found that the group of endurance runners in the study consumed the highest number of calories, averaging more than 2,600 per day. Those on a standard Western diet consumed just over 2,300 calories daily, while those in the low-calorie, low-protein group ate just under 2,000 calories a day. Members of the latter group also tended to weigh less than sedentary people but slightly more than the endurance runners. The average body mass index (BMI) in the low-protein, low-calorie group was 21.3. BMI averaged 21.1 among the runners and 26.5 among those who were sedentary. BMI is a measurement of weight divided by height squared. People with a BMI greater than 25 are considered overweight.
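For readers who want the BMI definition quoted above in concrete terms, here is the formula as a tiny function; the 77 kg / 1.70 m example person is mine, not from the study.

```python
# BMI as defined in the quoted paragraph: weight (kg) divided by height (m) squared.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# A hypothetical 77 kg (about 170 lb), 1.70 m person lands just over the
# overweight cutoff of 25 -- comparable to the sedentary group's average of 26.5.
value = bmi(77, 1.70)
print(f"BMI = {value:.1f}, overweight: {value > 25}")
```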
Problem: The people on the standard diet ate more calories than those on the low protein diet. So how much of the lower blood IGF-1 is due to lower calories rather than lower protein? We already know that calorie restriction causes all sorts of blood markers to shift in directions favorable to good health. Cholesterol and triglycerides go down. Markers for insulin sensitivity improve. So I would expect better IGF-1 just from the lower calorie intake.
Note that their lower calorie, lower protein study participants had lower body mass indexes. Was the lower IGF-1 just due to that? Have any studies been done comparing IGF-1 levels as a function of BMI?
Are any readers aware of studies of people on the high protein Atkins diet that looked at blood IGF-1 levels?
What is needed: Comparison of IGF-1 levels of people on different ratios of fats, carbohydrates, and protein on a normal calorie diet. Then, repeat the same experiment on people who are on calorie restriction diets. I certainly expect the people on lower calorie diets to have lower IGF-1. But will there be differences in IGF-1 based on the relative contributions of protein, fat, and carbos as calorie sources?
While I'm asking for experiments: I'd like to see comparisons of IGF-1 for diets where the carbos come from different sources such as low glycemic index versus high glycemic index foods and high fructose versus high glucose foods.
I'd also like to see the effect of BMI on IGF-1. Will high BMI people have high IGF-1 even if, say, they go on a low protein diet? I'd expect they would. Can any sort of diet that does not take off weight lower IGF-1? Does high dose resveratrol lower IGF-1?
Can anyone point out studies in the research literature that control for factors that Fontana's team apparently didn't separately control for?
Plasma concentrations of insulin, free sex hormones, leptin, and C-reactive protein were lower and sex hormone–binding globulin was higher in the low-protein, low-calorie diet and runner groups than in the sedentary Western diet group (all P < 0.05). Plasma insulin-like growth factor I (IGF-I) and the concentration ratio of IGF-I to IGF binding protein 3 were lower in the low-protein, low-calorie diet group (139 ± 37 ng/mL and 0.033 ± 0.01, respectively) than in the runner (177 ± 37 ng/mL and 0.044 ± 0.01, respectively) and sedentary Western (201 ± 42 ng/mL and 0.046 ± 0.01, respectively) diet groups (P < 0.005).
But, again, how much of the result was due to A) lower protein, B) lower calories, or C) lower BMI? The latter two will both lower IGF-1 and markers for inflammation such as C-reactive protein. See the comments for pointers to other research that suggests, yes, protein restriction can lower unfavorable indicators in blood such as reactive oxygen species (ROS). So maybe a lower protein diet will help.
The work by MIT chemical-engineering professor Gregory Stephanopoulos and his colleagues focuses on the second part of this process: fermenting sugars to make ethanol. The yeast strain they made can tolerate ethanol concentrations as high as 18 percent--almost double the concentration that regular yeast can handle without quickly dying. In addition, the new strain makes about 20 percent more ethanol by processing more of the glucose, and it speeds up fermentation by 70 percent.
The research was done on a lab strain of yeast and would still need to be repeated on an industrial strain to be useful in a production environment. This capability, added to an industrial yeast strain, offers a couple of advantages. First, it reduces the capital cost of sugar fermentation to produce ethanol because the same sized fermenting tank can produce more ethanol in the same amount of time. Second, the energy cost of separating ethanol from water at the end of the fermentation is lowered because the final solution has more ethanol and less water.
These researchers also want to genetically engineer the yeast to break down cellulose into simple sugars. Then yeast could perform the two biggest steps in making ethanol from biomass.
I think these results also raise the more distant prospect of highly automated home biomass ethanol fermenters. Take your bush, tree, and lawn cuttings, dump them into a home fermenter with genetically engineered organisms, and out comes ethanol for your car. Nanotech materials serving as catalysts might even some day replace the yeast.
I can also imagine an ethanol production system with nanotech membranes to produce ethanol that automatically shoves each ethanol molecule into a separate pure ethanol partition on the other side of the membrane from the sugars.
Ethanol is less than ideal as a liquid fuel because it has much less energy per gallon than gasoline. A bioengineered microorganism that produced non-oxygenated hydrocarbons from sugars would be even more attractive.
Diverse mixtures of native prairie plant species have emerged as a leader in the quest to identify the best source of biomass for producing sustainable, bio-based fuel to replace petroleum.
A new study led by David Tilman, an ecologist at the University of Minnesota, shows that mixtures of native perennial grasses and other flowering plants provide more usable energy per acre than corn grain ethanol or soybean biodiesel and are far better for the environment. The research was supported by the National Science Foundation (NSF) and the University of Minnesota Initiative for Renewable Energy and the Environment.
"Biofuels made from high-diversity mixtures of prairie plants can reduce global warming by removing carbon dioxide from the atmosphere. Even when grown on infertile soils, they can provide a substantial portion of global energy needs, and leave fertile land for food production," Tilman said.
The findings are published in the Dec. 8, 2006, issue of the journal Science.
Mixtures of plant species work better on land that is less than ideal.
The study is based on 10 years of research at Minnesota's Cedar Creek Natural History Area, one of 26 NSF long-term ecological research (LTER) sites. It shows that degraded agricultural land planted with diverse mixtures of prairie grasses and other flowering plants produces 238 percent more bioenergy on average than the same land planted with various single prairie plant species, including switchgrass.
One of the problems I have with the corn biomass ethanol approach is that the land not currently used to grow corn is far worse for that purpose than the land already used to grow corn. These researchers, by looking at what works best on poorer quality land, are looking for ways to make biomass energy production scale.
"This study highlights very clearly the additional benefits of taking a less-intensive management approach and maintaining higher biodiversity in the process," said Henry Gholz, NSF LTER program director. "It establishes a new baseline for evaluating the use of land for biofuel production."
Tilman and his colleagues estimate that fuel made from this prairie biomass would yield 51 percent more energy per acre than ethanol from corn grown on fertile land. Prairie plants require little energy to grow and all parts of the plant above ground are usable.
Fuels made from prairie biomass are "carbon negative," which means that producing and using them actually reduces the amount of carbon dioxide (a greenhouse gas) in the atmosphere. Prairie plants store more carbon in their roots and soil than is released by the fossil fuels needed to grow and convert them into biofuels. Using prairie biomass to make fuel would lead to the long-term removal and storage of from 1.2 to 1.8 U.S. tons of carbon dioxide per acre per year. This net removal of atmospheric carbon dioxide could continue for about 100 years, the researchers estimate.
In contrast, corn ethanol and soybean biodiesel are "carbon positive," meaning they add carbon dioxide to the atmosphere, although less than fossil fuels.
These researchers do not see switchgrass as the great biomass hope that others portray it to be.
Switchgrass, which is being developed as a perennial bioenergy crop, was one of 16 species in the study. When grown by itself in poor soil, it did not perform better than other single species and gave less than a third of the bioenergy of high-diversity plots.
"Switchgrass is very productive when it's grown like corn in fertile soil with lots of fertilizer, pesticide and energy inputs, but this approach doesn't yield as much energy gain as mixed species in poor soil nor does it have the same environmental benefits," said paper co-author Jason Hill, also of the University of Minnesota.
So far monocultures have been the only way biomass energy has been produced. Therefore these researchers are arguing for quite a departure from current practice.
To date, all biofuels, including cutting-edge nonfood energy crops such as switchgrass, elephant grass, hybrid poplar and hybrid willow, are produced as monocultures grown primarily in fertile soils.
But the amount of energy they expect to get from using mixed prairie grasses on less than ideal land is still far from enough to replace all uses of oil. Worse yet, the world demand for energy is going to keep going up.
The researchers estimate that growing mixed prairie grasses on all of the world's degraded land could produce enough bioenergy to replace 13 percent of global petroleum consumption and 19 percent of global electricity consumption.
My guess is they are assuming future cellulosic technologies to extract the energy out of the grasses. So a shift toward prairie grass for energy isn't practical yet.
My main objection to biomass remains that land pushed into production to produce energy is land not available to serve as habitat for a wide assortment of species. Want to see more animals go extinct? Promote biomass. The land footprint of nuclear power is far smaller and even photovoltaics would use a much smaller footprint to produce the same amount of energy as biomass.
Update: See the extensive debate at The Oil Drum on this research. Some of the posters throw doubt on the use of marginal lands with the argument that the biomass yield per acre will be so low that this will cause high harvesting costs per amount of energy gained.
Lower yield per acre also translates into far more acres used to produce energy. This cuts more heavily into habitats and threatens species. We need to move to nuclear, photovoltaics, and even wind power. Damage to habitats from biomass energy will cancel out any benefits from reduced CO2 emissions.
An article in the New York Times reports on prospective parents who select for offspring who share their genetically caused disabilities.
Wanting to have children who follow in one’s footsteps is an understandable desire. But a coming article in the journal Fertility and Sterility offers a fascinating glimpse into how far some parents may go to ensure that their children stay in their world — by intentionally choosing malfunctioning genes that produce disabilities like deafness or dwarfism.
The parents use in vitro fertilization (IVF, or test tube babies) combined with pre-implantation genetic diagnosis (PIGD or PGD) to choose embryos for implantation that carry a genetic defect that will cause their children to have the same disability the parents have.
Yet Susannah A. Baruch and colleagues at the Genetics and Public Policy Center at Johns Hopkins University recently surveyed 190 American P.G.D. clinics, and found that 3 percent reported having intentionally used P.G.D. “to select an embryo for the presence of a disability.”
Mind you, they aren't saying that 3% of all PGD uses are to select for disabilities. They are saying 3% of clinics have done this sort of selection at least once. But the article also reports that other clinics have turned down these sorts of requests from potential customers. That raises the possibility that prospective parents will respond by seeking out the clinics that are willing to select for disabilities.
The article points out that while PGD improves the reliability of attempts to select for defects, deaf lesbian women have used sperm from deaf sperm donors to achieve this sort of objective. Have any women intentionally chosen donor eggs from genetically defective egg donors as well? My guess is this has already happened and will happen again.
One of my great worries for the future is over the question of what qualities will people choose for their children when they gain the ability to choose many more genetic characteristics of their offspring. Deaf or dwarf people who choose to have deaf or dwarf offspring are especially troubling for what they say about the potential for humans to make rather clannish decisions to promote creation of separate groups.
I'm especially worried about choices that will determine aspects of personality and instincts. People may decide to give their sons aggressive instincts that, as a side effect, make them more likely to be violent. Or they may cut back on the instinct to carry out altruistic punishment or the capacity to feel empathy so that their kids focus more on their own success.
Genetically caused qualities of human minds make a free society possible. An overlapping but not identical set of genetically caused qualities of the mind make a modern technological society possible. Will those qualities become more or less common when people gain the ability to select genetic variations for their offspring?
Kevin Bullis of MIT's Technology Review reports on views of battery researchers on the feasibility of powering cars with batteries.
Stanley Whittingham, inventor of the first commercial lithium-ion battery and professor of chemistry, materials science, and engineering at the State University of New York, at Binghamton, says current research should make electric vehicles practical--with the following caveat: they'll probably be used for trips of less than 100 miles. Those who want 300-to-400-mile ranges typical of gasoline-powered vehicles will need to turn to plug-in hybrids: vehicles much like today's gas-electric hybrids, but with a much larger battery pack that makes it possible to go longer on electric power, thereby saving gas. These batteries could be partly charged by an onboard gas engine, but also by electricity from a wall socket.
Whittingham says that while he expects battery capacity to double, it's not going to get much better than that.
But electrochemist Peter Bruce of the University of Saint Andrews in Scotland thinks his experimental battery, in which lithium combines with oxygen to form lithium peroxide, could more than quadruple current battery capacity.
Based on his experiments, Bruce says that such batteries could store as much as 600 to 700 milliamp hours per gram (more than four times that of batteries today) while maintaining the ability to be charged and discharged for many cycles.
Even a 100 mile range would make electric cars practical for many. But to maximize the convenience of electric cars it helps to live in a place where it is easy to run a power cord to a car. Someone who parks in their own garage could plug in their car pretty easily. But someone who parks on the street and walks to an apartment will find home charging hard to do. Those who can't easily charge at home will need faster charging and higher energy storage capacity batteries to make pure electric cars practical for them.
MIT battery researcher Donald Sadoway (whose battery research I've previously reported on) told Technology Review in an interview in October 2005 that hydrogen fuel cells are not going to compete with batteries for vehicle power.
DS: I don't believe in fuel cells for portable power. I think it's a dumb idea. The good news is: they burn hydrogen with oxygen to produce electricity, and only water vapor is the byproduct. The bad news is: you have to deal with molecular hydrogen gas, and that's what's stymieing the research and in my opinion is always going to stymie the research.
That's why I don't work on fuel cells. Where's the infrastructure? Where are we going to get hydrogen from? Hydrogen is a molecule, it's H2. To break it apart, to get H+, you've got to go from H2 to H, and that covalent bond is very strong. To break that bond you have to catalyze the reaction, and guess what the catalyst is? It's noble metals -- platinum and palladium. Have you seen the price of platinum? Lithium [for lithium ion batteries] is expensive. But it's not like platinum. Lithium right now is probably $40 a pound. Platinum is $500 an ounce. If I could give the fuel-cell guys platinum for $40 a pound, they would be carrying me around on their shoulders until the day I die.
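Sadoway quotes lithium by the pound and platinum by the ounce; putting them on a common basis makes his point starker. The sketch below uses ordinary (avoirdupois) ounces to keep the rough comparison simple, even though precious metals actually trade in troy ounces.

```python
# Sadoway's price comparison on a common per-pound basis.

OZ_PER_LB = 16               # avoirdupois ounces per pound

lithium_per_lb = 40          # dollars, Sadoway's figure
platinum_per_oz = 500        # dollars, Sadoway's figure
platinum_per_lb = platinum_per_oz * OZ_PER_LB

print(f"platinum ${platinum_per_lb}/lb vs lithium ${lithium_per_lb}/lb "
      f"-- roughly {platinum_per_lb // lithium_per_lb}x more expensive")
```

At those prices platinum comes out around two hundred times the per-pound cost of lithium, which is the gap behind his "$40 a pound" quip.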
Sadoway thinks electric cars with longer ranges are within the realm of the possible.
Batteries suitable for electric cars would make a huge difference in our energy future. Why? Simple: Batteries would allow all energy sources that can generate electricity to power vehicles. Nuclear, solar, wind, geothermal, coal will all become energy sources for transportation when batteries improve enough to make electric cars competitive.
MIT's Technology Review reports on an experimental thermal rectifier made from nanotubes that preferentially allows heat to flow more easily in one direction.
Scientists have been precisely controlling electric current for decades, building diodes and transistors that shuttle electrons around and make computers and cell phones work. But similarly controlling the flow of heat in solids stayed in the realm of theoretical physics--until now.
Alex Zettl and his colleagues at the University of California, Berkeley (UC Berkeley), have shown that it is possible to make a thermal rectifier, a device that directs the flow of heat, with nanotubes. If made practical, the rectifier, which the researchers described in last week's Science, could be used to manage the overheating of microelectronic devices and to help create energy-efficient buildings, and it could even lead to new ways of computing with heat.
The difference in the ease of heat flow they produced was not enormous. But it was a good start. Think of it as a material that has a higher insulation rating in one direction than another.
Imagine two sides of a wall. Sometimes the outside is hotter. Sometimes the inside is hotter. It is not hard to imagine scenarios where you would want heat to flow out when it is too hot but to never flow in. Or perhaps you'd want to control the direction of heat flow depending on the season. The ability to easily flip around a section of thermal rectifier wall material would come in very handy.
The sale of eggs is illegal in this country, but in America, the industry is worth an estimated $4.5bn (£2.4bn). Donors with the right physical, personal and intellectual attributes can attract fees of up to $35,000 for their eggs, with some in the industry claiming that as much as $50,000 has changed hands. Prices are rising, too: in New York, average eggs are fetching $8,000. About 15 years ago, the comparable figure was closer to $1,000.
The people who are paying only $8,000 for eggs are making bad investment decisions. Top quality is worth paying for when it comes to the genetic inheritance of your children.
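The price trajectory in the article, roughly $1,000 about 15 years ago versus an $8,000 New York average now, implies a striking compound growth rate:

```python
# Implied compound annual growth rate of New York egg prices,
# using the two figures cited in the article above.
def annual_growth_rate(start_price, end_price, years):
    """Compound annual growth rate as a fraction."""
    return (end_price / start_price) ** (1 / years) - 1

rate = annual_growth_rate(1_000, 8_000, 15)
print(f"{rate:.1%}")  # prints "14.9%" -- far above general inflation
```

An asset class appreciating nearly 15% a year suggests demand is growing much faster than the supply of willing donors.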
British women, banned from selling their eggs in Britain, are increasingly offering their eggs for sale in laissez faire America.
Now British women - including 25-year-old Alexandra Saunders of High Wycombe, who this week advertised her eggs on the internet to pay off a £15,000 credit card debt - are following suit.
Though a woman who would run up nearly $30,000 in credit card debt strikes me as perhaps lacking in genes that contribute to prudence and the ability to engage in careful financial planning.
The article quotes an American egg brokerage web site which claims it has experienced a 25% increase in applications from British women who want to sell their eggs. It seems likely British buyers are also travelling to the United States to get eggs and to have them fertilized and implanted while here. That's a lot more expensive than it needs to be. If the British government got over its socialist view that eggs shouldn't be sold, it would save British buyers and sellers a lot of time, money, and aggravation.
There's an underground egg trade in Britain where the market participants try to find ways around the regulatory limitations.
Controversially, one of the UK's leading fertility experts, Dr Mohammed Taranissi, has argued that payment for eggs was already a reality in the UK. Dr Taranissi, director of Britain's most successful fertility clinic, the Assisted Reproduction and Gynaecology Centre in London, said that, via sizeable "expenses" for donors and free IVF treatment for those involved in egg-sharing programmes, payment was being made in different ways by clinics.
Given the risks and impositions of egg donations it seems entirely unfair for a government to tell women they do not have a right to charge what the market will bear for egg donation. The risks involved in use of fertility drugs to cause extra egg production might even include chromosomal damage to eggs in the ovaries. Governments should not limit how much women can charge for running these risks.
Those Danes are doing their manly duty to bring new babies into this world. Limits on sperm donation in many European countries have driven Denmark to the top of the European donor sperm trade.
In the same way that some nations have oil fields or bread mountains, Denmark boasts an ever-growing sperm lake. The vault at Cryos HQ holds around 75,000 straws. It is far too much sperm for a nation where only 65,000 children are born each year, so Denmark is a net exporter. The efforts of the men of Arhus, Odense and Copenhagen have helped to engender an estimated 12,000 children around the world, and each year “the Danish stuff” brings forth some 1,400 more.
An embarrassment of riches in Denmark has corresponded to a scarcity of donor sperm almost everywhere else. In Britain, as in Norway and Sweden, new regulations ending anonymity for sperm donors have decimated the ranks of men once willing to donate, while in April the arrival of the EU Tissue Directive is likely to make sperm banking a harder business to manage on a small scale. Cryos could yet emerge with something of a monopoly on the European market.
The London Bridge centre once supplied donor sperm to most UK fertility clinics. “We now just about meet our own needs,” says Professor Gedis Grudzinskas, medical director. Previously, up to 15 UK clinics relied on semen from Cryos, but such imports are now restricted. “We send our most urgent cases to clinics in Denmark,” says Grudzinskas.
6% of Danish babies are born with the help of assisted reproduction technology (ART). The United States lags Denmark in the percentage of women using ART. I suspect that is because Denmark has an older population, so a larger percentage of Danish women trying to conceive are in their 30s and 40s. Here's how fast ART usage has increased in the United States (the CDC lags in reporting national results by a few years):
Those 48,756 live births represent over 1% of total babies born in the United States in 2003.
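That "over 1%" claim checks out. The total US birth count for 2003 below is an approximate CDC figure I'm supplying, not a number from the articles quoted here:

```python
# Sanity check on the share of 2003 US births attributable to ART.
art_births = 48_756
total_us_births_2003 = 4_090_000  # approximate CDC total, an outside figure

share = art_births / total_us_births_2003
print(f"{share:.2%}")  # prints "1.19%"
```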
Getting back to the second article: since sperm is much easier and less risky to produce, the sperm market is very small in monetary terms. Given the decline in the dollar you can almost multiply by 2 to convert these figures to dollars.
The global market for sperm exports has been estimated at between £25 million and £50 million a year. The US market is worth between £5 million and £10 million and the European market is of similar size.
These are small amounts in dollars too. The article also reports on British women travelling to Spain to buy eggs. The sale of eggs isn't legal in Spain, but the cost of the effort can be compensated. This sounds like America, where technically eggs can't be sold but women can charge what the market will bear for the time they spend donating the eggs.
If you are in the market for sperm you definitely should go for top quality donors in terms of intellect, health, physical appearance, accomplishments, and desired personality characteristics. Even the best do not cost much. Scrimping on sperm donor costs is very foolish. Go for 140+ IQ donors.
European women are travelling to Denmark, Ireland, Belgium and Finland to buy sperm (often from the supplier Cryos of Denmark) because sperm donation is more difficult in other European countries. In many European countries sperm donation is difficult because donors are not allowed to be anonymous. Guys don't want junior knocking on the door 15 years later when they are raising their own families.
Skeletal progenitor cells differentiate into cartilage cells when one master gene actually suppresses the action of another, said Baylor College of Medicine researchers in a report that appears online in the journal Proceedings of the National Academy of Sciences.
Skeletons are made of bone and cartilage cells that are differentiated from the same multipotent stem cell, said Dr. Brendan Lee, associate professor of molecular and human genetics at BCM, director of the Skeletal Dysplasia Clinic at Texas Children’s Hospital and a Howard Hughes Medical Institute investigator. This same stem cell gives rise to bone, cartilage, fat and fibroblasts.
“The big question is what are the master genes that make a stem cell go one way versus another,” said Lee.
Both SOX9 and RUNX2 are master transcription factors involved in the process of differentiating bone and cartilage.
SOX9 and RUNX2 are obvious candidates for drug development. A drug that could block SOX9 would probably cause skeletal progenitor cells to become bone cells. That'd be handy for bone repair and bone restoration for people suffering from osteoporosis. A drug that could turn on SOX9 could produce cartilage to replace aged or damaged cartilage.
The master protein SOX9 directs skeletal progenitor cells to become cartilage, and another master protein, RUNX2, directs such cells to become bone. However, he said, the primordial skeletal cell has both RUNX2 and SOX9.
“We then asked a simple question: Could these master transcription factors (that direct the expression of other genes) directly affect one another’s function?” he said. After studies in the laboratory, with mice and with humans, the answer was yes.
“SOX9 appears to be the dominant player,” said Lee. “When it is present in a progenitor cell, it turns off RUNX2 and allows the cell to become cartilage.”
That does not answer the question of how such cells become bone.
“Clearly, something must turn off SOX9,” said Lee. “That’s the next question we have to answer.”
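The regulatory logic Lee describes can be boiled down to a few lines. This is of course a drastic simplification of my own; real differentiation involves many more factors than these two:

```python
# Simplified model of the SOX9/RUNX2 relationship described above:
# the progenitor cell carries both factors, and SOX9, when active,
# dominates by suppressing RUNX2.
def cell_fate(sox9_active, runx2_active):
    if sox9_active:
        return "cartilage"   # SOX9 turns off RUNX2
    if runx2_active:
        return "bone"        # RUNX2 acts only once SOX9 is silenced
    return "progenitor"      # neither master factor active

# The primordial skeletal cell has both factors; SOX9 wins:
print(cell_fate(sox9_active=True, runx2_active=True))   # cartilage
# The open question: something unknown must silence SOX9 first.
print(cell_fate(sox9_active=False, runx2_active=True))  # bone
```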
These two genes are part of a much larger set of genes that control cell differentiation (i.e., the process by which cells turn into all the specialized cell types in the body). Advances in biotechnology are accelerating the rate at which scientists working in labs can figure out how all these genes work. The more they learn the better able they will be to intervene and turn cells into any types needed for repair and rejuvenation.
Here's the paper: Dominance of SOX9 function over RUNX2 during skeletogenesis.
CHICAGO -- Researchers have discovered that even a small amount of MDMA, better known as ecstasy, can be harmful to the brain, according to the first study to look at the neurotoxic effects of low doses of the recreational drug in new ecstasy users. The findings were presented today at the annual meeting of the Radiological Society of North America (RSNA).
“We found a decrease in blood circulation in some areas of the brain in young adults who just started to use ecstasy,” said Maartje de Win, M.D., radiology resident at the Academic Medical Center at the University of Amsterdam in the Netherlands. “In addition, we found a relative decrease in verbal memory performance in ecstasy users compared to non-users.”
Note that Dr. de Win is in the Netherlands and therefore probably not under the control of what some paranoids see as a US government plot to produce lots of false propagandistic drug research that supposedly has corrupted all drug research in America. But who knows. Maybe the CIA does international work for the US National Institute on Drug Abuse.
Ecstasy is an illegal drug that acts as a stimulant and psychedelic. A 2004 survey by the National Institute on Drug Abuse (NIDA) found that 450,000 people in the United States age 12 and over had used ecstasy in the past 30 days. In 2005, NIDA estimated that 5.4 percent of all American 12th graders had taken the drug at least once.
Ecstasy targets neurons in the brain that use the chemical serotonin to communicate. Serotonin plays an important role in regulating a number of mental processes including mood and memory.
Research has shown that long-term or heavy ecstasy use can damage these neurons and cause depression, anxiety, confusion, difficulty sleeping and decrease in memory. However, no previous studies have looked at the effects of low doses of the drug on first-time users.
So we know that MDMA causes damage in longer term users. But these researchers wanted to find out how quickly the damage shows up. So they recruited malleable young minds who were just about to turn on, tune in, and drop out (anyone else remember that drug documentary with Timothy Leary?).
Dr. de Win and colleagues examined 188 volunteers with no history of ecstasy use but at high-risk for first-time ecstasy use in the near future. The examinations included neuroimaging techniques to measure the integrity of cells and blood flow in different areas of the brain and various psychological tests. After 18 months, 59 first-time ecstasy users who had taken six tablets on average and 56 non-users were re-examined with the same techniques and tests.
The ecstasy users experienced decreased blood flow in some brain regions and decreased verbal memory performance.
The study found that low doses of ecstasy did not severely damage the serotonergic neurons or affect mood. However, there were indications of subtle changes in cell architecture and decreased blood flow in some brain regions, suggesting prolonged effects from the drug, including some cell damage. In addition, the results showed a decrease in verbal memory performance among low-dose ecstasy users compared to non-users.
Unless you happen to have the body and face of a Mischa Barton or a Paris Hilton your brain is probably your most valuable asset. Damaging it for transitory kicks does not seem like a wise strategy. Even Mischa and Paris will reduce their earnings potential if they damage their brains. Please Mischa, be careful. You too Jessica Alba.
At a forthcoming meeting of the International Society for Natural Cycle Assisted Reproduction (ISNAR), Professor Bob Edwards (who initiated the first in vitro fertilization (IVF) pregnancy, which produced the famous baby Louise Brown in 1978) and other fertility experts are expected to call for a reduction in the use of hormones that stimulate ovaries to produce eggs.
A conference of fertility experts this month will call on the IVF industry to rethink its approach. They say hormones used to "kickstart" the ovaries could cause chromosomal damage to more than half of eggs, rendering them useless. The treatments may also affect the womb lining, preventing embryos from implanting.
These fertility experts think that for some women the net effect of using the ovulation stimulation drugs might be a net harm for their prospects of getting pregnant.
While fertility drugs like Clomid (which causes a false signal of low estrogen to cause gonadotropin hormone release) are used for many IVF procedures they are not always necessary. Professor Edwards used eggs naturally produced by the menstrual cycle to start the pregnancy that produced Louise Brown. Also, it is possible to use lower doses of fertility drugs and some of these experts think fertility doctors should lower their doses.
The case against the fertility drugs has not been proven. But these fertility researchers and practitioners think the case is strong enough to argue for changes in procedures used by fertility clinics.
Another fertility pioneer, Robert Winston, the peer, said: "The trend is to get as many eggs as possible, but that may be counterproductive. From the research we've done, the main risk is that doing this produces chromosomal damage in at least half, if not 70 per cent, of eggs. New studies are needed to prove the drugs are causing the damage, but it is my strong suspicion that this is the case."
What is needed is another way to produce eggs for women with aged ovaries. That is coming. Within 10 to 20 years advances in technologies for stem cell manipulation will produce eggs suitable for fertilization with sperm. It will become possible to take adult cells, expose them to a series of chemicals and/or gene therapies to turn them into embryonic cells, and then stimulate them to divide into eggs.
Cutting back on fertility drug usage might not reduce success rates. In fact, a couple of recent findings both point the way to much higher rates of success for IVF attempts. See my posts Biopsy Doubles Success Rate For IVF Babies and Embryo Tests More Than Double IVF Pregnancy Rate.