Brace yourselves for a study whose results are, to say the least, counterintuitive. A group of Finnish researchers found in an epidemiological study of twins that overweight people who intended to lose weight, and who then actually did lose weight, were the most likely to die over the study period. (The higher the hazard ratio, the greater the risk of dying.)
Weight loss in the obese improves risk factors for cardiovascular diseases and diabetes. However, several studies have shown inconsistent long-term effects of weight loss on mortality. We investigated the influence on mortality of intention to lose weight and subsequent weight changes among overweight individuals without known co-morbidities.
Methods and Findings
In 1975, a cohort of individuals reported height, weight, and current attempts (defined as “intention”) to lose weight, and in 1981, they reported current weight. Mortality of the 2,957 participants with body mass index ≥ 25 kg/m2 in 1975 and without pre-existing or current diseases was followed from 1982 through 1999, and 268 participants died. The association of intention to lose weight in 1975 and actual weight change until 1981 with mortality was analysed while controlling for behavioural and psychosocial risk factors and hypertension as possible confounders. Compared with the group not intending to lose and able to maintain stable weight, the hazard ratios (with 95% confidence intervals) in the group intending to lose weight were 0.84 (0.49–1.48) for those with stable weight, 1.86 (1.22–2.87) for those losing weight, and 0.93 (0.55–1.56) for those gaining weight. In the group not intending to lose weight, hazard ratios were 1.17 (0.82–1.66) for those who did lose weight, and 1.57 (1.08–2.30) for those gaining weight.
Deliberate weight loss in overweight individuals without known co-morbidities may be hazardous in the long term. The health effects of weight loss are complex, possibly composed of oppositely acting processes, and need more research.
That is the abstract of the study. Since it was published in PLoS Medicine you can click through and read the whole study. The body of the study has an extensive discussion of previous relevant research, more details of this study, and analyses of what it all means.
But don't rush to conclusions. Epidemiologist Meir Stampfer of the Harvard School of Public Health says epidemiological studies of the effect of weight on mortality need to correct for a number of problems.
Epidemiologic studies of the relation between overweight and mortality typically must address three principal concerns. First, in many populations, cigarette smokers tend to be leaner than nonsmokers. Because cigarette smoking is such a strong risk factor for mortality, failure to adjust for this adequately can lead to confounding, with the erroneous conclusion that leanness carries increased risk of death. Statistical adjustment for smoking is often insufficient to account for this difficulty. Smoking can lead to medical conditions, sometimes sub-clinical, that are associated with decreased body weight, such as chronic obstructive pulmonary disease. The presence of symptoms or diagnosed conditions may induce smokers to quit. Moreover, the intensity of smoking is related to both risk of death and body mass index. For these reasons, the best way to assess the impact of overweight on risk of mortality is simply to exclude current and past smokers. Kaprio and colleagues' study differentiated only current smoker or nonsmoker. Thus, never-smokers were included in the same category as past smokers, regardless of how much the past smokers had smoked or their reasons for quitting.

A second difficulty in some epidemiologic studies is the inclusion of intermediary factors as co-variates. Weight loss improves hypertension and diabetes, so including these as co-variates would tend to attenuate the apparent benefit of weight loss. In the present study, the authors appropriately excluded people with diabetes, and adjustment for hypertension appeared to have little impact.

The third and most difficult issue in studies of overweight and mortality is reverse causation, the impact of disease on body weight. This can occur either through the biological impact of a condition (diagnosed or preclinical) or as an inducement to attempt to lose weight as a means to improve health. The authors' keen recognition of this problem provides a significant strength to the present study.
The authors appropriately excluded individuals with a wide range of conditions to identify an apparently healthy cohort. This critical step is often ignored. Sometimes, investigators also exclude deaths that occur in the first few years of follow-up, to reduce the impact of reverse causation. Such lagged analyses can be helpful, but some chronic conditions of long duration, such as depression, chronic lung disease, and heart failure (conditions that often may not reach the level of clinical diagnosis) can lead to lower body weight and higher mortality risk. Hence, that strategy (not employed by Kaprio and colleagues) may not fully avoid the problem.
Stampfer points out that the people who lost weight may not have done so because of their stated intention. They may have intended to lose weight, but when they finally did lose weight it might have been due to the diseases that eventually killed them. In addition, the study had too few deaths for reliable subgroup estimates, and the degree of obesity in the cohort was fairly small.
As a further step toward reducing the potential bias of the impact of disease on body weight, the authors identified individuals with intent to lose weight. Unintended weight loss is well known as an ominous clinical sign, usually signifying a serious illness. Just over a third of the overweight subjects in the Kaprio cohort had expressed intent to lose weight at baseline in 1975. It is interesting to note that despite this intention, as a group, the median weight change in the ensuing six years was a gain of 0.33 kg/m2, almost identical to the weight change in the group that had not expressed intent to lose weight (gain of 0.31 kg/m2). Thus, this study cannot fairly be characterized as an assessment of intentional weight loss and its subsequent effect on mortality. Because the changes in weight are so similar in these two groups, it is implausible to attribute the weight loss in the intent-to-lose group to that intention. These findings render the results particularly difficult to interpret.

Other important limitations include the very small number of endpoints—only 268 total deaths in the cohort. When further subdivided according to intention to lose weight, and categories of weight change, the numbers are far from sufficient for reliable estimates. For example, the main conclusions are based on the subgroup of those with intent to lose weight who actually lost weight; this group had only 42 deaths, of which ten were violent. Another related difficulty is that this cohort, though overweight, is not drastically obese. The median body mass index (BMI) was 26.7 kg/m2, and fewer than 10% of people had a BMI greater than 30. With such a narrow range of BMI, coupled with the very small changes in weight during the six years of initial follow-up, even a very large study would not have sufficient power to detect plausible relative risks associated with weight change.
At the extreme it seems very unlikely that people who are morbidly obese (e.g. the people who are so large they can no longer even stand up) would suffer a net harm from losing weight. Also, people who have risk factors that show up more among the obese, such as hypertension and type II diabetes, would probably benefit more from weight loss than equally overweight people who do not have those risk factors.
If you are overweight, then losing weight is good for your health, surely? Unfortunately, the evidence on which an answer to this seemingly simple question might be based is at best equivocal, and at worst very controversial. Previous work has shown that weight loss in obese people improves risk factors associated with cardiovascular diseases and diabetes, but studies are conflicting on the long-term effects of weight loss on mortality. A study in this month's PLoS Medicine by Jaakko Kaprio and colleagues on a Finnish dataset adds more evidence to this debate, but experts are divided on what can be concluded from it.
The major difficulty in getting clear results on this question is that it is virtually impossible to do a controlled trial to answer it. Hence, the evidence accumulated has come mostly from epidemiological studies, but it is notoriously difficult to remove all confounding factors from these studies. Kaprio and colleagues' study is another epidemiological study, but we should not simply dismiss the data as unreliable just because of the problems inherent to such a study design. Instead, we should consider their study in the light of all the other evidence available.
Suppose the Finnish study's results hold up in a larger study. How could this be? One possibility is that loss of non-fat cell mass while on a diet may cause harm.
The study leaves us with the question of how intentional weight loss could lead to excess mortality. The authors suggest that this could be due to the unavoidable loss of lean body mass, which according to several other studies may increase mortality, and which may outweigh the beneficial effects of losing fat mass in healthy individuals. The authors therefore conclude that “the long-term effects of weight loss are complex, and they may be composed of oppositely operating effects with net results reflecting the balance between these effects.”
The research paper (which is the first link above) discusses this idea at greater length.
Suppose loss of lean body mass during dieting inflicts real harm. Large amounts of high-calorie-burning aerobic exercise might be the solution: if one could burn off calories with exercise, then perhaps the lean muscle mass loss could be avoided. Though internal organ mass might possibly still be lost while dieting.
Another possible solution might be slower diets. But in the longer term the ideal solution would be drugs for regulating appetite that avoid obesity in the first place. For those already obese we probably need drugs that signal only fat cells to burn fat while leaving other tissue types unchanged.
Once effective weight loss drugs make it to market, well-controlled prospective studies on the effect of weight loss will become possible. Without weight loss drugs few people currently succeed in keeping off the weight that they lose. I've seen references (sorry, no URL) to studies which purport to show that yo-yo dieting, where the weight goes on and off repeatedly, is more harmful than no dieting at all. This makes intuitive sense when you consider the discussion above about the loss of lean body mass during diets. Someone who repeatedly diets puts their body through many episodes of lean body mass loss. Also, someone who is binging to gain weight will put their body through higher calorie consumption periods and probably therefore periods of even higher lipid levels from consuming mass quantities. My guess is that while people from Remulak can consume mass quantities safely, humans can not.
My advice: Concentrate less on weight loss and more on improving the quality of the food you eat. Also, change how you go about your daily activities to inject more exercise into what you do. Try to walk rather than drive where that is practical. Take stairs rather than elevators. Get a totally manual mower rather than a gas engine mower. If your dog wants you to take him or her for a run then do so. Dogs are great personal fitness trainers if you will only obey them.
A group of researchers from Harvard, University of Auckland in NZ, University of Queensland in Australia, and something called the International Obesity Task Force in London have found that obesity and cholesterol levels peak in middle income countries and decline with higher incomes.
Cardiovascular diseases and their nutritional risk factors—including overweight and obesity, elevated blood pressure, and cholesterol—are among the leading causes of global mortality and morbidity, and have been predicted to rise with economic development.
Methods and Findings
We examined age-standardized mean population levels of body mass index (BMI), systolic blood pressure, and total cholesterol in relation to national income, food share of household expenditure, and urbanization in a cross-country analysis. Data were from a total of over 100 countries and were obtained from systematic reviews of published literature, and from national and international health agencies.
BMI and cholesterol increased rapidly in relation to national income, then flattened, and eventually declined. BMI increased most rapidly until an income of about I$5,000 (international dollars) and peaked at about I$12,500 for females and I$17,000 for males. Cholesterol's point of inflection and peak were at higher income levels than those of BMI (about I$8,000 and I$18,000, respectively). There was an inverse relationship between BMI/cholesterol and the food share of household expenditure, and a positive relationship with proportion of population in urban areas. Mean population blood pressure was not correlated or only weakly correlated with the economic factors considered, or with cholesterol and BMI.
When considered together with evidence on shifts in income–risk relationships within developed countries, the results indicate that cardiovascular disease risks are expected to systematically shift to low-income and middle-income countries and, together with the persistent burden of infectious diseases, further increase global health inequalities. Preventing obesity should be a priority from early stages of economic development, accompanied by population-level and personal interventions for blood pressure and cholesterol.
I have to quibble with their comment about the "persistent burden of infectious diseases". I doubt that middle income countries labor under as much infectious disease burden as lower income countries. As incomes rise and obesity and cholesterol become problems, disease burdens probably fall for a number of reasons including more widespread vaccination, better housing which reduces exposure to disease and to weather, reduced malnutrition of the types that suppress immune response, and cleaner water.
In the longer run I expect the infectious disease burden to drop even in many lower income countries as vaccines get developed which are cheaper and easier to deliver (e.g. genetically engineered into foods), advances in nanotechnology make water purification cheap, and other advances made in the industrialized countries get delivered cheaply to the basket-case countries of the world.
I think they are on firmer ground in claiming that there are inflection points where the health consequences of rising affluence on cardiovascular disease reach a peak and then reverse. People in the most developed countries are more likely to get screened for cardiovascular risk factors and to take statin drugs to lower cholesterol and drugs to lower blood pressure. They also have more income to spend on fruits and vegetables and to purchase other foods that lower risks (e.g. fish).
An accompanying essay by Thomas E. Novotny of UCSF entitled Why We Need to Rethink the Diseases of Affluence makes an important point about environmental engineering.
As populations assume more of an urban lifestyle, they should not be limited in their choices for healthy foods, suffer from lack of safe water, or lose opportunities for physical activity. These problems can be reduced through good urban planning, better food policies, improved environmental engineering, and better attention to healthy lifestyle practices in our growing cities. Screening for hypertension, hypercholesterolemia, and nicotine addiction need to become a part of good clinical practices in low- and middle-income countries. Of course, screening for these risks should then also be accompanied by better availability of low-priced secondary prevention therapies such as generic versions of anti-hypertensives, statins, and nicotine replacement therapies.
The amount of exercise that people get in more urbanized and suburban environments could be increased if areas were set aside for public parks, streets were designed to be more pedestrian and bicycle friendly, and zoning put stores and offices within walking distances of homes.
Also, note Novotny's reference to nicotine replacement therapies. This touches on a larger subject: It is my impression that the United States has a much stronger anti-smoking regulatory environment than less developed countries, and even than the bulk of the most industrialized countries. Some of the other Western industrialized countries are trending toward reducing smoke exposure in workplaces and discouraging smoking. That lowers cardiovascular risk relative to lower and middle income countries, which generally have fewer limits on cigarette smoking, sales, and advertising and fewer public health warnings against smoking.
Food fortification is also probably making a difference. For example, the addition of folic acid to grain-based foods in the United States is lowering average blood homocysteine and that is reducing cardiovascular disease. The motivation behind the folic acid fortification was to lower spina bifida birth defects. The cardiovascular disease reduction is a bonus. But fortification is cheap and could and should be implemented more widely in middle and lower income countries. Africa would benefit especially.
In the medium run I expect the cost of cardiovascular disease reduction to fall to levels that cause cardiovascular risk reduction practices to spread to medium and lower income groups and countries. Genetic engineering of foods will improve diets. For example, genetic engineering will increase omega 3 fatty acid levels in grains and livestock while simultaneously reducing the level of fats that most increase cardiovascular risks. Another possibility is the genetic engineering of food crops to raise levels of a compound called beta sitosterol which is now used in special cholesterol lowering margarines. Also, cheaper, safer, and easier ways to control appetite and reduce nicotine cravings will be found.
In the long run I expect to see genetic engineering of liver cells to make them control blood lipid and cholesterol levels to assure optimal blood for cardiovascular health. Also, stem cell therapies and other gene therapies will allow repair of all atherosclerotic plaque damage and heart muscle damage.
STANFORD, Calif. – Older people are more prone to infections and have a higher risk of developing leukemia, and now researchers at Stanford University School of Medicine have one hint as to why that may be. The group found that in mice, the bone marrow stem cells responsible for churning out new blood cells slow down in their ability to produce immune cells, leaving older mice with fewer defenses against infection.
This result is not the least bit surprising. The decrease in the ability to divide probably happens in every stem cell type in the body as animals and humans age.
These new findings, published in the June 20 online issue of Proceedings of the National Academy of Sciences, add to mounting evidence that many pitfalls of aging result from either older stem cells or stem cells responding to their older environment.
“Aging results in a diminished capacity of the body to maintain tissue and organ function. Since we know the cells mediating this maintenance are stem cells, it doesn’t take a great leap of faith to think that stem cells are at the heart of that failure,” said Derrick Rossi, PhD, postdoctoral scholar and co-first author on the paper with postdoctoral scholar David Bryder, PhD.
In addition to producing fewer immune cells, the older blood-forming stem cells were actively using genes known to be involved in leukemia, a group of cancers that affect blood cells. This could be one reason why older people are more prone to developing certain forms of leukemia.
The role of stem cells in cancer formation is suspected in a number of cancer types. Stem cells divide a lot, and cell division is an error-prone process. Compared to cell types that divide less often, stem cells are at greater risk of accumulating errors in genetic and epigenetic information. Those errors increase the risk of cancer as we age.
Senior author Irv Weissman, MD, director of the Stanford Institute for Cancer and Stem Cell Biology and Medicine, said one surprise came when the group transplanted older stem cells into younger mice. Those cells continued to behave like old stem cells, producing fewer immune cells and turning on cancer-causing genes. From previous work in mouse muscle cells, he said he expected the blood-forming stem cells to resume a more youthful life once transplanted into younger mice.
This work could eventually lead to new ways of improving immune function in older people or of preventing leukemia. As one example, Weissman said that by understanding the difference between older and younger stem cells it may be possible to prompt old cells to act young again, reviving their ability to produce immune cells.
What we really need is the ability to replace aged stem cells with younger and less defective stem cells. If we could replace old stem cells we'd gain many benefits including better repair capabilities, reduced risk of cancer, and better function in systems such as the immune system that require new cells to be made to respond to constantly changing threats and conditions.
Whether rejuvenation of existing aged stem cells will turn out to be practical depends on what is causing them to act aged in the first place. If a fairly small number of types of mutations or changes in epigenetic information are causing stem cells to age then perhaps gene therapies could eventually be developed to go in and fix the mutations. But if the accumulated problematic damage involves a wider range of locations in the genome then gene therapies aimed at doing repair to individual cells would need to repair too many places and the size of the genetic programming delivered by the gene therapies might end up too large to fit in any gene therapy delivery package.
My guess: For most aged stem cell types delivery of youthful replacement stem cells will likely win out over gene therapies for stem cell rejuvenation.
WASHINGTON, June 24, 2005 – Trust for America’s Health (TFAH) today released state-by-state projections that found over half a million Americans could die and over 2.3 million could be hospitalized if a moderately severe strain of a pandemic flu virus hits the U.S. Additionally, based on the model estimates, 66.9 million Americans are at risk of contracting the disease. The study also found that the U.S. currently only has stockpiled 2.3 million courses and has placed orders for an additional three million courses of antiviral pharmaceuticals (produced as Tamiflu by Roche Pharmaceuticals), which would likely be available in 2006. This would be enough to cover 5.3 million Americans, leaving over 60 million who could be infected and would not be able to receive medication before an effective vaccine to combat the flu strain is identified and produced.
TFAH’s numerical projections are included in a new report, “A Killer Flu? ‘Inevitable’ Epidemic Could Kill Millions.”
“This is not a drill. This is not a planning exercise. This is for real,” said Shelley A. Hearne, DrPH, Executive Director of TFAH. “Americans are being placed needlessly at risk. The U.S. must take fast and furious action to prepare for a possible pandemic outbreak here at home.”
“The Government Reform Committee has held several hearings over the last few years to let people know that the flu is not something to take lightly,” said U.S. Congressman Tom Davis (R-VA), Chairman of the House Government Reform Committee. “TFAH's report clearly demonstrates that the emergence of a pandemic flu could exact a tremendous toll on U.S. health and economic stability. In order to identify problem areas and prioritize planning and response efforts, the Committee will hold a hearing next week on the threats posed by a potential flu pandemic.”
Dr. Hearne will be testifying Thursday, June 30th, before the House Government Reform Committee on U.S. preparedness for pandemic and annual flu. Some of the TFAH report’s other findings include:
- While estimates find that over two million Americans may need to be hospitalized during a pandemic outbreak, the U.S. currently only has approximately 965,256 staffed hospital beds.
- The U.S. has not adequately planned for the disruption a flu pandemic could cause to the economy, daily life, food and supply distributions, or homeland security.
- The U.S. lags in pandemic preparations compared to Great Britain and Canada based on an examination of leadership, vaccine development, vaccine and antiviral planning, health care system surge capacity planning, coordination between public and private sectors, and emergency communications planning.
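The bed-capacity gap in the first bullet above is simple arithmetic. A quick sketch (note this treats the report's cumulative hospitalization estimate as if it were concurrent demand, which overstates the instantaneous shortfall, since hospitalizations would be spread over the course of a pandemic):

```python
# Rough check of the hospital-bed shortfall implied by the TFAH figures.
projected_hospitalizations = 2_300_000  # TFAH projection, moderately severe strain
staffed_beds = 965_256                  # current U.S. staffed hospital beds

ratio = projected_hospitalizations / staffed_beds
print(f"Projected hospitalizations are about {ratio:.1f}x the staffed bed count")
```

Even spread over months, a surge of that size against fewer than a million staffed beds makes the report's concern about surge capacity planning easy to see.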
TFAH provides a series of detailed recommendations to help ensure the U.S. is better prepared regardless of whether a pandemic occurs as soon as this year or in several years. With a crisis looming, the U.S. plan for the pandemic should be finalized and the President should designate an official with authority to coordinate the U.S. response across federal agencies. Other top level recommendations include taking:
- Immediate steps of outbreak tracking, stockpiling medical supplies, and developing emergency communications plans;
- Intermediate steps of stockpiling additional antivirals and developing surge capacity plans for hospitals and health care providers; and
- Longer range steps to increase vaccine production and the development of new technologies for vaccines.
I figure, humans being humans, we'll have a big pandemic and millions will die, and only afterward will all the recommendations about building more rapid vaccine production technologies, stockpiling medical supplies, and better methods of reducing human-to-human transmission be implemented. The warnings coming from infectious disease experts and others are being discounted as the standard exaggerated doom-and-gloom fare of coming disasters.
The full report projects a high economic cost from a pandemic. (PDF format)
The estimated economic impact of a pandemic flu outbreak in the U.S. today, based on projections from the relatively mild 1968 flu epidemic, would be $71.3 to $166.5 billion due to death and lost productivity, excluding other “disruptions to commerce and society.”6
Note the real possibility that the avian flu could have a higher lethality rate than the 1918 flu. So the economic costs, number of hospitalizations, and total deaths could be much greater than the estimates provided above.
Some countries could be politically destabilized by the effects of a flu pandemic. Though I'm going to go out on a limb here and guess that in Africa with so much disease already a flu pandemic might seem like nothing out of the ordinary.
The U.S. would be impacted by the global implications as soon as a pandemic outbreak occurred in any part of the world due to the interdependence of economies. Sectors, such as hospitals and the health care system, which rely on supplies manufactured in other parts of the world, including Asia, would feel immediate repercussions and supply shortages. Travel restrictions, possible limitations on public gatherings and events, and other measures taken to limit the spread of disease would also have rapid and far reaching repercussions. Since a pandemic could likely result in political and economic destabilization, particularly in developing countries, it poses serious national security concerns for the U.S.
Those who think the threat of the avian flu is overblown need to learn about the larger historical context: Influenza pandemics have occurred regularly in human history and statistically speaking we are overdue for the next one.
Based on historical trends and projections, virologists and epidemiologists predict a new flu pandemic will emerge three to four times each century.8 Health officials around the world are troubled by the severity of the “avian flu” circulating in Asia, which scientists refer to as the H5N1 flu strain. They fear this avian flu could become the next pandemic for humans. The regional director of the WHO for the Western Pacific region stated in February 2005 that the “world is now in the gravest possible danger of a pandemic.”9
We currently run the risk that the avian flu will not only be the next pandemic but that it will be much more lethal than the average pandemic.
The economic disruptions of a pandemic would reach the United States rapidly due to the interdependent nature of economies.
The U.S. has not assessed or planned for the disruption a flu pandemic could cause both to the economy and society as a whole. This includes daily life considerations, such as potential school and workplace closures, potential travel and mass transit restrictions, and the potential need to close stores resulting in complications in the delivery of food and basic supplies to people. Daily life and economic problems would likely emerge in the U.S. even before the pandemic flu hit the country due to the global interdependence of the world economy.
Put aside for the moment the medical issues (e.g. virus manufacture, acute patient care, drug production, medical supplies shortages, and so on). Think about the problem at the level of human organization to reduce pathogen transmission in ways that minimize economic disruption. We have the potential to develop and find ways to carry out economic functions with less human-to-human exposure. The development of procedures and products, the training of work forces, and the purchase of key pieces of capital equipment could reduce the amount of human contact involved in most types of economic activity. This would simultaneously reduce the rate and extent of spread of the pandemic virus and reduce the size of the economic disruption caused by the virus.
The rate of infection of the population might be between 25% and 50%. But with better economic organization and practices human-to-human contacts and transmission could be greatly decreased.
For Americans who become infected, the odds of getting anti-viral medication will be less than 1 in 12. For people in most other countries the odds will be much lower. If you have stockpiled your own personal Tamiflu supply, your odds of getting anti-viral treatment are excellent - unless you tell too many people about your stockpile and someone steals it.
As of May 2005, the U.S. has stockpiled 2.3 million courses of the antiviral medication Tamiflu, which could be used as a treatment in the event of an outbreak, and intends to order approximately three million more with funds recently appropriated by Congress to total 5.3 million. The WHO is currently estimating that an avian flu epidemic could impact 25 percent of countries’ populations.
In the U.S., this means it could affect nearly 67 million individuals, based on FluAid projections and population numbers. With the current level of the U.S. Tamiflu order, over 61.5 million Americans who could be infected would not receive antiviral medication. If the U.S. orders additional courses of Tamiflu, they would not be available until 2007, unless production capacity significantly changes.
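The "less than 1 in 12" odds and the "over 61.5 million" uncovered figure follow directly from the stockpile and infection numbers quoted above. A quick check:

```python
# Checking the antiviral coverage arithmetic from the TFAH figures above.
courses = 2_300_000 + 3_000_000   # stockpiled plus on order: about 5.3 million courses
projected_infections = 66_900_000  # 25% attack rate applied to the U.S. population

coverage = courses / projected_infections
uncovered = projected_infections - courses
print(f"Odds of an infected person getting a course: about 1 in {1 / coverage:.0f}")
print(f"Infected Americans without antiviral coverage: {uncovered / 1e6:.1f} million")
```

The computed gap (61.6 million) matches the report's "over 61.5 million" figure.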
The Brits have ordered enough Tamiflu anti-viral drug to cover a quarter of their population. If the US government decided to do the same, it would have to wait until 2007 to receive the needed number of doses.
Several other countries have already ordered enough Tamiflu to protect between 20-25 percent of their populations in case of an outbreak. The U.S. is already behind in the queue to place an order for the medication, for which there is a single manufacturer worldwide -- Roche Pharmaceutical, which is located in Switzerland. In testimony before the U.S. House of Representatives Health Subcommittee of the Energy and Commerce Committee, the medical director for Tamiflu of the Roche company explained that historically they have not produced the levels of Tamiflu required for global stockpiling. To help accommodate the growing concerns and orders, they have increased production of the antiviral nearly eight-fold since 2003.42
On March 1, 2005, the British government announced that it was taking steps to procure 14.6 million courses of Tamiflu. This procurement would cover 25 percent of the British population, the rate WHO has recommended.
Given the current and projected production capacity, if the U.S. did place a large order for Tamiflu, Roche has testified before Congress that it could be the end of 2007 before they could deliver enough to the national stockpile for 25 percent of the population. Thus, antiviral treatment will only be an effective part of the U.S. response if a pandemic does not occur for several years and, of course, if the pandemic strain is responsive to antiviral medications.
This will create a real problem for the British: people from other countries will try to sneak into Britain in order to be better protected in case they get sick.
The $58 million the US Congress has appropriated for avian flu preparedness is chump change.
The recently enacted emergency supplemental appropriations legislation made available $58 million for the purchase of influenza countermeasures for the Strategic National Stockpile, including, but not limited to, antiviral medications and vaccines. These funds are most welcome, but TFAH believes that Congress should provide additional funds during the FY 2006 appropriations cycle to continue to build the nation’s antiviral stockpiles from the current level of two percent of the U.S. population to cover a higher percentage of the population.
Does the United States have enough medical supplies to handle a large surge in patients?
Does the National Strategic Stockpile Include ALL Necessary Medical Supplies That Will Be Necessary to Respond to a Pandemic? In addition to stockpiling antivirals and vaccines, when they are available, the U.S. must also stockpile critical medical supplies such as masks, gloves, gowns, bed linens, and all other equipment needed to assure that hospitals and other health care providers are properly protected when the usual supply chain is disrupted either abroad or in the U.S.
Let me answer that question: Of course not! This is all the more reason to avoid getting sick in the first place. If you could go live in a cabin in the mountains for a couple of years and see no one other than those who initially travel there with you then you could avoid getting sick and therefore avoid dying.
We also need an enormous amount of face masks and other paraphernalia that the general population will use to avoid transmission of the pandemic influenza strain in public places.
You might need to wait as long as 18 months from the time the pandemic begins before you can be vaccinated against the virus.
Is There a Rapid Response Plan to Develop, Test, and Produce a Vaccine? It will take an estimated six to nine months after a pandemic emerges to develop a vaccine. Questions of how to rapidly review and test the vaccine once it is created remain, including concerns about speeding the approval process by the Food and Drug Administration (FDA), liability protection for vaccine manufacturers, and what type of preservative will be used in the vaccine. In addition, industry representatives have suggested that current production capacity is insufficient to meet the demand for a pandemic influenza vaccine, and that it could take 12-18 months to meet appropriate production levels.
We need to get beyond the old fashioned fertilized chicken egg technology for growing influenza viruses for vaccines. Newer and faster technologies for making vaccines would go far to reduce the size of the disruption and the number of deaths from a flu pandemic.
We also need better ways to reorganize just about every job and economic function in society so that fewer people have to come in contact with each other while they are working, going to school, going shopping, or receiving services.
If a half million or more Americans were at risk from some type of terrorist attack billions of dollars would be thrown at the problem. We should do the same with the avian flu threat. Avian flu is far more likely to kill you in the next 5 years than anything terrorists might accomplish. Our preparations for it should be commensurate with the scale of the threat it poses.
Copenhagen, Denmark: Women who have a special genetic profile can conceive spontaneously after the age of 45 years, a scientist said at the 21st annual conference of the European Society of Human Reproduction and Embryology today (Tuesday 21 June 2005). Dr. Neri Laufer, from the Hadassah University Hospital, Jerusalem, Israel, told the conference that his team's work to identify a specific gene expression profile linked to later fertility would help understanding of the ageing process, as well as enabling the development of better treatments for infertility in older patients.
Dr. Laufer and colleagues studied a large group of 250 women over 45 who conceived spontaneously. Women are generally not fertile after this age due to ageing of the ovaries, so the scientists thought that there might be some special factor that was allowing these women to conceive. "Mostly they had had a large number of children and also a low miscarriage rate", he said "and these two factors indicated to us that they had a natural ability to escape the ageing process of the ovaries. We decided to see if we could find any differences in gene expression between 8 such women and another 6 women of the same age group who had finished their families at the age of 30."
Using gene chip technology, the scientists found that blood samples from the 8 women had a unique pattern of gene expression that did not exist in the control group. The two main groups of genes expressed in these women were involved in apoptosis (cell death) and in DNA repair mechanisms. "These women appear to differ from the normal population due to a unique genetic predisposition that protects them from the DNA damage and cellular ageing that helps age the ovary", said Dr. Laufer. "What we do not yet know is whether this reproductive success is linked with potential longevity." The women were all Ashkenazi Jews but Dr. Laufer's team does not believe that the gene profile is unique to this group. "We already have preliminary results demonstrating similar results from another group", he said. The team intends to study women from different ethnic, and hence genetic, groups and study their genetic fingerprints against those of the first group.
One wonders whether the apoptosis genes were regulated to make cell death more or less likely in the late conceiving women. Did their ovary cells manage to avoid death and therefore hang around longer to reproduce? Or did their bad ovary cells more reliably die and thereby eliminate their harmful influence on neighboring cells? Some biologists theorize that senescent cells and other old damaged cells release free radicals and other chemicals into their environment that damage neighboring cells. So a greater ability to commit cell suicide might provide a net benefit in the reproductive tract.
These women who have babies later in life, and who also have repair enzymes upregulated, probably have longer average life expectancies. Study of the genetic differences between them and women whose repair enzymes are less highly expressed could lead to the discovery of genetic variations which upregulate repair enzymes, and of proteins and RNA fragments which are involved in repair upregulation.
While women who have babies at late ages might have longer life expectancies there is a more obvious group to investigate for life extending genetic variations: really old people. A comparison of gene expression profiles between 90+ year olds and these late reproducing women might yield some insights into which gene expression profiles are most valuable for longevity. Also, really old women could be asked at what age they gave birth to their last child.
As DNA sequencing technologies continue to drop in price the rate of search for genetic variations which influence longevity will accelerate. This will lead to identification of proteins that regulate repair and other processes that influence longevity. Any proteins that are involved in DNA repair enzyme regulation would be obvious targets for pharmaceutical development. However, drugs that increase longevity are still a distant prospect for a few reasons.
First off, the development of a drug that upregulates repair would be a tough challenge. Ideally the drug would work in all the cells of the body. A long half-life would be desirable so that it could build up in all cells. Such a drug would probably have to be taken for decades in order to provide a large benefit for longevity. So it would need to be very non-toxic with a low incidence of side effects.
Second, proving that a supposedly life extending drug will be efficacious is an extremely difficult task. Clinical trials on 30, 40, or 50 year olds that last a few years can't demonstrate what the effect of a drug will be if taken for decades. If a clinical trial were held for decades it would cost too much, the patent on the drug would expire while the trial was ongoing, and, well, we'd all be too old (or dead) by the time the benefit was proven. So, er, who cares?
Third, the US Food and Drug Administration and similar regulatory agencies in other countries are set up to approve drugs that treat or prevent specific diseases. These agencies do not treat aging as a disease. A life extending drug would need to provide some measurable benefit on the incidence of diseases in shorter time frames. While cholesterol lowering drugs and blood pressure lowering drugs extend life they do so by changing metabolisms for purposes that very directly reduce the risk of well characterized diseases. What quickly measurable benefit would up-regulation of DNA repair enzymes bring? Can anyone think of anything? I can't off the top of my head.
While I'd happily take drugs which slow the rate of aging I'm expecting to see treatments that repair and reverse aging before drugs that slow the general rate of aging. The push to develop, for example, stem cell therapies is so great that I expect many useful stem cell therapies to come to market in the next 10 years and probably two or three times more in the following 5 years and again in the following 5 years. But I do not expect any general aging deceleration drug to hit the market in the next 10 years and I'm not certain about the prospects for such drugs in the following 10 years.
A study of 135 Boston area babies has found that mercury from fish lowers baby IQ but low mercury fish consumption raises baby IQ dramatically. (same article here and here and here)
The women in the study ate fish on average once a week during the second trimester of their pregnancy. The highest intelligence scores were among the babies whose mothers had consumed more than two helpings of fish per week but whose mercury levels remained under 1.2 parts per million, according to the report published online last month in the journal Environmental Health Perspectives.
For each additional weekly serving of fish, the babies' intelligence scores increased by 4 points, or an average of almost 7%. But for every increase of 1 part per million of mercury, the babies' intelligence scores dropped by 7.5 points, or 12.5%. A woman could raise her mercury level by 1 ppm if she ate an average-sized serving of swordfish once a week, said Dr. Emily Oken of Harvard Medical School, the study's lead researcher.
"The range of fish intake in our study was from zero to 5.5 servings per week, so these were not women who were eating fish daily or multiple times a day," said Oken, who specializes in pregnancy and nutrition.
The beneficial effects of the fish consumption are almost certainly coming from the omega 3 fatty acids in the fish. A reduction in mercury exposure combined with an increased consumption of omega 3 fatty acids could produce a large increase in average intelligence in future generations. The resulting increase in the smart fraction of the population would lead to a large increase in economic output and living standards.
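Taken at face value, the study's two coefficients describe a simple linear tradeoff that anyone can apply to their own diet. A sketch (the +4 and -7.5 coefficients are from the quoted report; the two example diets are hypothetical):

```python
def net_iq_effect(servings_per_week, mercury_ppm):
    """Linear model using the quoted coefficients:
    +4 IQ points per weekly fish serving, -7.5 points per ppm hair mercury."""
    return 4.0 * servings_per_week - 7.5 * mercury_ppm

# Two hypothetical diets:
print(net_iq_effect(2.0, 0.5))  # low-mercury fish twice a week: +4.25
print(net_iq_effect(1.0, 1.2))  # weekly swordfish pushing mercury up: -5.0
```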
The full paper is not on the web at the time I'm typing this but the paper "Maternal Fish Consumption, Hair Mercury, and Infant Cognition in a US Cohort" will be free to view at this link when it gets put on the web.
While poking around trying to find the previous paper I came across another recent research paper on the Environmental Health Perspectives web site about the economic costs of mercury due to lowered IQs.
Methyl mercury is a developmental neurotoxicant. Exposure results principally from consumption by pregnant women of seafood contaminated by mercury from anthropogenic (70%) and natural (30%) sources. Throughout the 1990s, the U.S. Environmental Protection Agency (EPA) made steady progress in reducing mercury emissions from anthropogenic sources, especially from power plants, which account for 41% of anthropogenic emissions. However, the U.S. EPA recently proposed to slow this progress, citing high costs of pollution abatement. To put into perspective the costs of controlling emissions from American power plants, we have estimated the economic costs of methyl mercury toxicity attributable to mercury from these plants. We used an environmentally attributable fraction model and limited our analysis to the neurodevelopmental impacts--specifically loss of intelligence. Using national blood mercury prevalence data from the Centers for Disease Control and Prevention, we found that between 316,588 and 637,233 children each year have cord blood mercury levels > 5.8 µg/L, a level associated with loss of IQ. The resulting loss of intelligence causes diminished economic productivity that persists over the entire lifetime of these children. This lost productivity is the major cost of methyl mercury toxicity, and it amounts to $8.7 billion annually (range, $2.2-43.8 billion; all costs are in 2000 US$). Of this total, $1.3 billion (range, $0.1-6.5 billion) each year is attributable to mercury emissions from American power plants. This significant toll threatens the economic health and security of the United States and should be considered in the debate on mercury pollution controls.
If the new Boston babies study is correct then the economic costs of mercury pollution might be even higher than this latter paper assumes. Therefore the lax and slow approach of the Bush Administration (and, to be fair, the Clinton Administration and other Administrations before it) toward reduction of mercury emissions is even more short-sighted and stupid than I already thought it was. FuturePundit gets angry thinking about the coal burning electric plants emitting mercury and the less noticed (and possibly worse - see below) chlorine plants that do the same.
Marla Cone of the LA Times, who wrote the first article I linked to above, also wrote a previous article on mercury emissions from coal burning electric plants and chlorine plants and how chlorine plants might be worse than coal plants for mercury emissions. (same article here and here)
In 2000, 11 chlorine plants reported releasing 14 tons of mercury into the air through smokestacks and unmonitored leaks called "fugitive" emissions. But according to the EPA, another 65 tons of mercury were used there and unaccounted for.
EPA officials, in a 2003 report, said "that the fate of all the mercury consumed" at the chlorine plants "remains somewhat of an enigma."
If even half of that "lost" mercury were released into the air, the plants would have polluted the air with nearly the same volume as the 49 tons released by the nation's 497 mercury-releasing power plants that year, said Oceana's pollution campaign director, Jackie Savitz.
By 2002, two of the 11 plants had closed, and the reported mercury emissions dropped almost in half, to a total of 7.6 tons. The plants, however, had 28 tons of mercury that were unaccounted for, which amounted to about 1% of their total mercury used and stored, according to a 2003 EPA report.
An enigma? Are they serious?
In a lawsuit filed today by NRDC (Natural Resources Defense Council) and Sierra Club, represented by Earthjustice, the groups charge that the rule, issued in December, does not address "lost" mercury pollution from the plants and eliminates prior pollution control requirements. In a parallel legal document, the NRDC today petitioned EPA to reconsider its December rule, and set standards that will guarantee reductions in toxic mercury emissions.
Just nine mercury cell chlorine plants are still in operation in the United States. This handful of plants purchases dozens of tons of mercury each year, to replace mercury that evaporates from the giant vats they use to make chlorine. Each plant has more than 50 of these mercury vats (called "cells" in the industry) measuring approximately 50 feet long by more than five feet wide, and each cell holds some 8,000 pounds of mercury. In 2000 these plants purchased 65 tons of replacement mercury; in 2002, 130 tons.
"The amount of mercury that these plants are 'losing' dwarfs the estimated 43 tons of mercury emitted by coal-fired power plants, and it's all disappearing from nine outdated factories," said Earthjustice attorney Jim Pew, who is representing the groups in their lawsuit.
The EPA publicly acknowledges that it has not accounted for the tons of mercury that each plant must replace every year. The agency concluded in its December rule that "the fate of all the mercury consumed at mercury cell chlor-alkali plants remains somewhat of an enigma."
"It's outrageous that the EPA has no apparent interest in discovering what happens to 65 tons of mercury, which these plants likely emit into the air, and plans to do nothing about it," said Jon Devine, an NRDC attorney. "The agency apparently has forgotten what its name stands for."
Do not eat Shark, Swordfish, King Mackerel, or Tilefish because they contain high levels of mercury.
The US Environmental Protection Agency (EPA) and US Food and Drug Administration (FDA) have a table of mercury levels in fish in parts per million (PPM) which you all ought to go take a look at. Look for the ones which are really low in mercury and eat them. Parenthetically, another study provides evidence that mercury might be even higher in some fish than the previous table shows. This study sampled fish purchased in New Jersey (which was not all from New Jersey) and found higher mercury levels than the older FDA/EPA table shows.
To compare actual mercury measures against data reported by the FDA, the team purchased and assayed samples of six additional types of fish (Chilean sea bass, porgy, red snapper, croaker, cod, and whiting) and two types of shellfish (shrimp and scallops) from central New Jersey markets. These species were chosen because of their wide availability in the state.
Mean levels of mercury were higher in the sea bass, croaker, whiting, and shrimp available in New Jersey--as well as the tuna sampled in the first tier of the study--than predicted by the FDA's data; the actual mean for one fish, croaker, was nearly three times the FDA estimate. The authors say these discrepancies show that the FDA should update its database (the data provided were collected mainly from 1990 to 1992). They also suggest that the agency consider providing regional breakdowns of aggregate mercury levels so state agencies can evaluate possible risks for their citizens.
What I'd like to see: a table that takes the amount of omega 3 fatty acids in fish and the amount of mercury and then ranks fish according to the ratio of omega 3 fatty acids to mercury. In other words, which fish deliver the most omega 3 fatty acids per unit of mercury? Not all fish have the same amount of omega 3 fatty acids. Salmon is one of the better fish for omega 3's. It also happens to be very low in mercury. So salmon is my preferred fish.
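That ranking is trivial to compute once the two numbers per species are in hand. A sketch with made-up illustrative values; real mercury numbers should come from the FDA/EPA table, and omega 3 content varies by species and preparation:

```python
# Hypothetical (illustrative) values per species:
# (omega 3 grams per serving, mercury in ppm).
fish = {
    "salmon":    (1.8, 0.01),
    "swordfish": (0.8, 0.97),
    "cod":       (0.2, 0.10),
    "sardines":  (1.5, 0.02),
}

# Rank by omega 3 to mercury ratio, best first.
ranked = sorted(fish, key=lambda f: fish[f][0] / fish[f][1], reverse=True)
print(ranked)  # salmon and sardines come out on top
```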
(BETHESDA, MD)—Older people who ate fish once or twice a week had a 20 percent lower risk of developing congestive heart failure during 12 years of follow-up, according to a new study in the June 21, 2005, issue of the Journal of the American College of Cardiology.
This is the first study to look at fish intake and the development of heart failure.
“Prior studies have shown fish intake to be associated with lower risk of fatal heart attacks. The results of the present study suggest that intake of fatty fish — high in omega-3 fatty acids — may reduce the risk of developing heart failure as well,” Dr. Mozaffarian added.
From 1989 to 1990, the researchers gave diet questionnaires to 4,738 adults in four cities who were 65 or older and free of congestive heart failure. During 12 years of follow-up, 955 participants developed congestive heart failure. After adjusting the results for other risk factors, those who had reported that they ate tuna or other fish once or twice a week were 20 percent less likely to develop congestive heart failure than those who said they ate such fish less than once a month. Eating fish three or four times a week was linked to a 31 percent lower risk of developing congestive heart failure over the next 12 years. However, fried fish consumption was linked to a higher risk of congestive heart failure.
Update: Since the world's fisheries are becoming depleted and many fish have problems with either mercury or organic toxins or both, what we really need is genetic engineering of food crops such as soy, corn, and wheat to make them synthesize large amounts of the omega 3 fatty acids docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). We need large cheap terrestrial sources of the omega 3 fatty acids. The alpha linolenic acid (ALA) in flax seed is less than ideal and we'd be better off with a food crop that directly produced DHA and EPA.
New Haven, Conn., June 20, 2005—The basic economic theory that people work harder to avoid losing money than they do to make money is shared by monkeys, suggesting this trait has a long evolutionary history, according to a Yale University study under review by the Journal of Political Economy.
This phenomenon, known as “loss aversion,” refers to the tendency for people to strongly prefer avoiding losses to acquiring gains. “A large body of studies suggest that losses are more than twice as psychologically powerful as gains,” said author M. Keith Chen, assistant professor at Yale School of Management.
In this study conducted with Venkat Lakshminaryanan, a research assistant in the Department of Psychology, and Laurie Santos, assistant psychology professor and director of the Capuchin Cognition Laboratory at Yale, tufted capuchin monkeys were given small disks to trade for rewards—apples, grapes or gelatin cubes. The researchers said capuchins are well-suited subjects for study since they are relatively large-brained primates, skilled problem solvers, and a close evolutionary neighbor to humans.
In their studies monkeys were given a budget of disks and asked to decide how much to spend on apples, and how much to spend on the gelatin cubes, even as the prices of these goods and the size of their budgets fluctuated. Capuchins performed much like humans do. Capuchins, like humans, react rationally to these fluctuations.
In a second experiment, capuchins were asked to choose between spending a token on one visible piece of food that half the time gave a return of two pieces, or two pieces of visible food, that half the time gave a return of only one piece. Economic theory predicts that consumers should not care which of these outcomes they receive since they are essentially both 50-50 shots at one or two pieces of food. The capuchins, however, vastly preferred the first gamble, which is essentially a half chance at a bonus, to the second gamble, which is essentially a half chance at a loss.
“The goal of this work,” said Santos, “is to learn whether other animals share some of our basic economic decision processes or whether human economic behavior is unique to our own species.”
“The economic view,” Chen added, “says people are aware, rational and in control of their major decisions. Social psychology cuts in the opposite direction, maintaining that people are often unaware of the forces that dictate their behavior. We wanted to understand the interactions of these two things. What we’ve shown is that capuchin monkeys look remarkably like us; making rational decisions in many of the same settings that humans get right, but also make many of the same mistakes we make.”
Their work provides an evolutionary spin on the current debate about why Americans do not save enough for retirement or put enough of their savings into the stock market. “Although the stock market offers a better rate of return than investing in safer financial products, such as bonds, people tend to experience stock market fluctuations through the biased lens of loss aversion, a lens that appears to be shared with other primates,” Santos said. Chen added, “We are fighting tendencies that may be biologically hard-wired.”
We are more like monkeys than most of us are willing to admit. Resistance to the theory of evolution is just one manifestation of that reluctance.
To learn about work done on humans with loss aversion read about the classic experiment on risk aversion done by Kahneman, Knetsch, and Thaler with coffee mugs. Basically, people value objects that they own more than objects that they could buy. Also, see the Wikipedia Loss Aversion entry. Or see an example that perhaps more closely mirrors the example with the monkeys where different styles of framing choices causes different preferred outcomes.
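The capuchin two-gamble result falls right out of the standard prospect-theory value function. A minimal sketch using the commonly cited Kahneman-Tversky parameter estimates (these parameters come from human studies, not from the capuchin paper):

```python
# Standard prospect-theory parameters: losses weighted about 2.25x
# gains, with mild diminishing sensitivity (alpha < 1).
LAMBDA, ALPHA = 2.25, 0.88

def value(x):
    """Subjective value of a change x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

# Gamble A: one piece shown (the reference), half the time a bonus of +1.
gamble_a = 0.5 * value(0) + 0.5 * value(1)
# Gamble B: two pieces shown (the reference), half the time a loss of -1.
gamble_b = 0.5 * value(-1) + 0.5 * value(0)

print(gamble_a, gamble_b)  # 0.5 vs -1.125: A feels better despite equal EV
```

Both gambles have the same expected payoff; only the reference point differs, yet the loss-framed gamble scores far worse, matching what both the monkeys and humans do.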
This whole area of economics calls for attempts to develop methods to compensate for systematic biases in human cognitive processes. For example, to overcome resistance to regulatory changes caused by status quo bias and loss aversion, Lynne Kiesling of Knowledge Problem argues that experiments demonstrating the effects of policy changes might help reduce resistance to changes in the regulatory status quo.
Young drug abusers are up to three times more likely to suffer brain damage than those who don't use drugs, according to research published online by Neuropathology and Applied Neurobiology.
The brains of 34 intravenous drug abusers, who had mainly used heroin and methadone, were examined after death and compared with 16 young people who had not used drugs.
This revealed that the drug abusers sustained a level of brain damage normally only seen in much older people and similar to the early stages of Alzheimer's disease.
"Our study shows evidence of an increased risk of brain damage associated with heroin and methadone use, which may be highest in the young, when individuals are most likely to acquire the habit," says co-author Jeanne Bell, Professor of Neuropathology at the University of Edinburgh.
Damaged nerve cells were identified in the key areas of the brain involved in learning, memory and emotional well-being.
"We found that the brains of these young drug abusers showed significantly higher levels of two key proteins associated with brain damage" adds Professor Bell.
"In a previous study we found out that drug abuse causes low grade inflammation in the brain. Taken together, the two studies suggest that intravenous opiate abuse may be linked to premature ageing of the brain."
The 34 documented drug users had a history of opiate abuse – mainly heroin and methadone – but were HIV negative and had no history of head injury. The 16 control cases had no history of drug abuse or neurological impairment.
The average age in these two groups was only 26 years, and the drug abuser group included individuals as young as 17.
Toxic proteins were found in the brain cells of drug abusers.
Tau protein, which in its soluble form is essential for communication and transport within brain cells, had become insoluble in some cells, causing nerve cell damage and death in selected areas of the brain.
Other nerve cells showed an accumulation of the amyloid precursor protein, which suggests that protein transport had been disrupted and the nerve cell functions affected.
"This study shows that drug abuse can lead to a build up of proteins which cause severe nerve cell damage and death in essential parts of the brain. This is very worrying as there are strong indications that drug use in the UK, in particular opiates like heroin and methadone, has continued to rise in recent years" says Professor Bell.
If you damage your brain with drugs now you will have to wait for decades before stem cell therapies can fix all the damage. Whoever you become after that future damage repair will be someone different from who you were before you damaged your brain in the first place.
Also see my post "Partial Recovery From Methamphetamine-Induced Brain Damage" and be sure to read the comments by some of the ex-meth users who describe the symptoms of their own brain damage.
John J. Fialka of the Wall Street Journal reports on Tennessee Republican Senator Lamar Alexander's battle against wind electric generator tower subsidies.
Compared with other emerging renewable-energy sources in the U.S., wind power is a giant, growing about 25% each year because, with its subsidies, it is increasingly cost-competitive with natural-gas-fired power in some states. Sen. Alexander says he wants to remove wind power's subsidies before it gets bigger. "We are ruining the outdoors for no good reason," he said during an interview. "These aren't your grandmother's windmills."
That is so: A modern wind generator stands on a 300-foot tower with flashing red lights that can be seen for more than 20 miles. Its blades are 95 feet long and when the wind is blowing it can generate enough electricity to power 500 homes. Since wind comes and goes, it normally operates at about 35% of capacity.
The Democrats are backing a proposal to require electric utilities to buy 10% of their electric power from renewable sources by 2020 (and does that include nuclear?). Alexander opposes that and he also opposes a $3.7 billion tax credit the bulk of which is expected to go to wind farm construction.
The proposal Sen. Alexander failed to stop last week establishes a "national renewable portfolio standard." It would require large utilities to generate 10% of their electricity from renewable resources by 2020, a requirement financed by a small increase in electricity rates. Energy companies that don't generate renewable power would have to buy credits from those that do, which would be an incentive to use wind, geothermal, solar and other sources.
Mr. Alexander says that would spell an environmental "disaster" for the Southeast, where strong wind exists mainly on mountaintops. In a recent speech he envisioned hundreds of turbines "with their flashing red lights atop the blue ridges of Virginia, above the Shenandoah Valley, along the foothills of the great Smoky Mountains...and down the Tennessee River Gorge." The sound of these machines, the senator said, is like "a brick wrapped in a towel tumbling in a clothes drier on a perpetual basis."
I like scenic vistas. I don't understand why environmental groups are willing to support wind power. Would they rather ruin scenic vistas than build nuclear power plants? I guess so. They even want to use taxpayers' money and higher electric prices to subsidize the ruin of scenery. How about you? Do you mind seeing wind towers 20 miles off on mountain tops or coast lines? I can see putting them 30 miles offshore, beyond the view of most people.
Senator Alexander is certainly correct that the US Southeast has little wind power potential and that most of the potential in the Southeast is in the mountains. See this map of wind in the United States. (or see a newer and higher resolution wind map of the United States) Most of the Mississippi valley is pretty poor for wind as well. The Northeast has wind at the coast and on mountains. Do you want your coastal and mountain scenic views ruined by wind towers?
On that previous map note the "Superb" wind ratings for Alaska's Aleutian islands. Could wind towers on an Aleutian island provide such cheap power to economically justify siting an aluminum smelter or other highly energy intensive industry on one of those islands?
To get a feel for how much wind varies with time check out this map of wind intensities over the United States per hour for the last 6 hours. At the moment I viewed the map the US was experiencing pretty low levels of wind almost everywhere in the lower 48 states.
I'd rather accelerate research into nuclear power and photovoltaic materials. Nanotech photovoltaics of the future will be used to create photovoltaic roof tiles and siding that blend into housing exteriors without any esthetic loss. The billions spent per year on wind tower construction subsidies would be better spent on photochemistry and nanotech research. The wind subsidies are literally orders of magnitude larger than the amount spent on photovoltaics research today. This seems like bad policy getting worse.
A New York Times article on solar energy installed by homeowners demonstrates that the home photovoltaics market is the product of government subsidies.
In moving toward the energy mainstream, solar expenses have dropped to around $8 a watt, from roughly $100 three decades ago; the cost is even less if a system is installed as part of a new home's construction.
In either case, that puts the price of a system that can reduce electric bills significantly - like a three-kilowatt system - in the $20,000 range. That's still a lot of money, but buyers may be able to get a lot of it back immediately, through government incentives. And with energy prices rising, the payback period for the rest is getting steadily shorter.
With real costs like those described below I have to wonder how long it takes for the energy used to manufacture the photovoltaic panels to be balanced out by energy collected once they are installed. Some of the manufacturing cost has got to be due to energy consumption.
On Long Island, Mr. Sunde's systems are working smoothly, and he expects them to keep doing so over their guaranteed 25-year life. A staunch environmentalist who had dreamed of owning solar panels since he was a boy, he now has more power than he needs.
He couldn't have done it without the incentives. With rebates and tax refunds, he chopped nearly 75 percent off the $115,000 bill, bringing the cost down to $30,000. With about 7.5 kilowatts for each house, he wound up paying about $2 a watt.
He did so well because Long Island kicked off New York's incentive programs with rebates of up to $6 a watt. Now it's in line with the rest of the state, offering $4, while the newer New Jersey program is the most generous in the New York metropolitan area, with incentives of $5.50 a watt.
Over all, he calculates the payback period at a bit over 15 years.
Government subsidies paid almost three quarters of the cost of his system and yet the payback period was still over 15 years. Absent those subsidies, the installation of solar photovoltaic systems on houses in much of the United States would be very rare. Payback periods would be longer than the lifetimes of most homeowners (though eventually SENS technologies will change that).
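The subsidy arithmetic can be made concrete with a back-of-envelope sketch using the figures quoted above (the roughly $115,000 gross bill, $30,000 net cost, and the reported 15-year payback). The implied annual savings number is my own inference from those figures, not something stated in the article:

```python
# Rough payback sketch using the numbers from the NYT article.
# The annual-savings figure is inferred (hypothetical), backed out from
# the article's ~15-year payback on a $30,000 net cost.

gross_cost = 115_000   # total system cost before incentives (USD)
net_cost = 30_000      # cost after ~75% rebates and tax refunds (USD)

payback_years_subsidized = 15                          # reported in the article
annual_savings = net_cost / payback_years_subsidized   # ~ $2,000/year implied

# Without any subsidy, the same savings would have to cover the full bill:
payback_years_unsubsidized = gross_cost / annual_savings

print(round(annual_savings))              # 2000
print(round(payback_years_unsubsidized))  # 58
```

By this rough estimate the unsubsidized payback would run close to six decades, which is why unsubsidized home photovoltaics make little economic sense at current prices.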
But even with large subsidies few homeowners see photovoltaics as worthwhile. If most homeowners tried to use these subsidies, governments would have to abandon the subsidy programs due to the high cost.
The article reports on a woman who installed a pool heater which she expects to pay for itself in 2 years and a guy who installed solar heating for his home who expects a payback in 8 years. In each case no tax subsidies contributed to the heating installations. Well, this raises an obvious question: Why are governments spending large amounts of money subsidizing photovoltaic systems when smaller amounts of money spent on solar thermal heating systems pay back more quickly without subsidy? If the government wants to get the most out of its energy subsidy dollars it ought to subsidize heating rather than electric generation. Or if the government doesn't want to spend the money it could increase the spread of solar heating by putting requirements for it in building codes.
If governments (including state governments) want to encourage the development of cheaper photovoltaics then my advice is that they should shift money from subsidies toward funding more photochemistry research to discover processes for making photovoltaics that are inherently cheaper. Subsidies for the purchase of photovoltaics made by processes which are inherently expensive just aren't going to get us to cheap photovoltaics.
Using human embryonic stem cells and a mixture of cell types, a group of researchers has found a way to grow replacement muscle tissue that has blood vessels.
CAMBRIDGE, Mass.--For years, a major obstacle has dashed the hopes of creating "replacement parts" for the human body: the lack of an internal, nourishing blood system in engineered tissues. Without it, thicker tissues can't thrive, which has confined tissue engineering's practical application to thin skin, which can recruit blood vessels from underlying tissue.
Now, researchers in Institute Professor Robert Langer's lab at MIT have used a novel cocktail of cells to coax muscle tissue to develop its own vascular network, a process called pre-vascularization. When implanted in living mice and rats, these tissues integrated more robustly with the body's own tissues than similar implants without blood vessels.
This approach should work with other tissue types.
"What's even more exciting than being able to make skeletal muscles for reconstructive surgery or to repair congenitally defective muscles, for instance, is that this a generic approach that can be applied towards making other complex tissues. It could allow us to do really wonderful things," says collaborator Daniel Kohane, an affiliate at MIT and assistant professor of pediatrics at Harvard Medical School.
The researchers published their work in Nature Biotechnology, available online in advance on June 19, 2005. An accompanying News and Views commentary says this "landmark paper" provides "a compelling demonstration of the benefits of pre-vascularization for engineering larger pieces of tissue."
"When I came to work with Bob Langer for my postdoc, it was my dream to vascularize a tissue," recalls first author Shulamit Levenberg, who is now on the faculty of the biomedical engineering department at Technion in Haifa, Israel where she completed these studies. She chose to tackle muscles, since they depend on blood vessels interspersed with muscle fibers and also serve as a model for highly vascularized organs such as the liver, heart, and lung.
The researchers used three cell types: myoblasts, endothelial cells, and fibroblasts. Some of the endothelial cells formed the needed blood vessels.
Levenberg theorized she needed to combine three cell types: myoblasts that form muscle fibers; endothelial cells that independently self-organize into vessel tubes; and fibroblasts that are the precursors for the smooth muscle cells that stabilize the vessel amidst the tissue's gooey extracellular matrix. "No one had tried a 3-D tri-culture scaffold before. It's hard enough to work with one cell type, let alone three!" explains senior author Langer, who is a pioneer in tissue engineering.
The VEGF mentioned here is a Vascular Endothelial Growth Factor, a hormone that causes blood vessels to form. The process of blood vessel formation is called angiogenesis. Angiogenesis has come to be well understood as a result of Harvard cancer researcher Judah Folkman's decades of pursuit of anti-angiogenesis compounds as an approach to stopping cancer tumor growth. The field of tissue engineering therefore benefits from insights developed by cancer researchers.
In vitro experiments validated Levenberg's hypothesis: "The endothelial cells formed vessels, recruited the fibroblasts, and differentiated them into smooth muscle cells," she says. "The differentiated fibroblasts expressed the angiogenic growth factor, VEGF, which further stimulated vessel growth." The constructs measured 5mm by 5mm by 1mm.
Note the use of human embryonic stem cells to make endothelial cells. My guess is that work was done in Israel.
For implantation in living animals, the lab used immunodeficient mice and rats that would not reject the human-derived endothelial cells. At the beginning of the project, Levenberg had isolated endothelial cells from human embryonic stem cells – a first. Human derivation is key for clinical use to avoid an immune rejection.
The animal studies progressed in stages. First, the researchers implanted a muscle construct under the skin, then inserted one within a leg muscle, and finally replaced a piece of a rat's abdominal muscle with a construct, simulating a situation applicable to trauma victims, for instance. Later tissue staining showed that the implants' vessels grew into the host tissue and the host's vessels grew into the constructs.
But what good are blood vessels if they don't deliver the goods – blood? Using two non-invasive live imaging techniques (labeled lectin injected into the tail vein and a luminescent luciferase-based system), the researchers could watch the host's blood flow into the engineered vessels. About 41% of the constructed vessels became perfused with the hosts' blood, meaning they functioned in the living body. "That's pretty good for a first try," Levenberg asserts.
Importantly, twice as many of the cells survived in the tri-culture implants compared to implants without endothelial cells. "The myoblasts also became even longer tubes when implanted, and they began to align themselves with the host's muscle fibers," Levenberg recounts.
"This tri-culture system shows a whole new way of creating a vascular network in the tissue," summarizes Langer. "We've also demonstrated another powerful use of human embryonic stem cells."
In addition to Kohane, Levenberg and Langer collaborated with Patricia D'Amore and Diane Darland at The Schepens Eye Research Institute, Evan Garfein at Brigham and Women's Hospital, Robert Martin of MIT's Division of Comparative Medicine, Richard Mulligan of Children's Hospital and Harvard Medical School, Clemens van Blitterswijk at Twente University in the Netherlands, and present and former MIT graduate students Mara Macdonald and Jeroen Rouwkema.
The discovery of an approach which causes the growth of blood vessels in bioengineered organs lifts a major obstacle in the way of tissue engineering. If tissue engineers can cause cells to build vascular networks then the construction of larger three dimensional pieces of tissue becomes possible.
One of the 7 Strategies for Engineered Negligible Senescence (SENS) is the introduction of replacement cells. I think the category should be worded a bit more broadly since we have parts that are not even cells (e.g. heart valves). Also, while some of those replacement parts will be delivered as cell types injected into the body for many organ failure problems we will need to grow replacement organs. That requires the development of an additional set of capabilities for doing tissue engineering to create three dimensional structures. This latest result is a very helpful step in that direction.
New York University School of Medicine researchers have developed a brain scan-based computer program that quickly and accurately measures metabolic activity in a key region of the brain affected in the early stages of Alzheimer's disease. Applying the program, they demonstrated that reductions in brain metabolism in healthy individuals were associated with the later development of the memory robbing disease, according to a new study.
"This is the first demonstration that reduced metabolic activity in the hippocampus may be used to help predict future Alzheimer's disease," says Lisa Mosconi, Ph.D., a research scientist in the Department of Psychiatry, who developed the computer program and led the new study. "Although our findings need to be replicated in other studies," she says, "our technique offers the possibility that we will be able to screen for Alzheimer's in individuals who aren't cognitively impaired."
How would you like to get a brain scan and then be told that in 5 or 10 years you will start to lose your memory and eventually forget everything you know?
Dr. Mosconi and colleagues have recently published the technical details of the program, called "HipMask," in the June 2005 issue of the journal Neurology. She will present the new findings on June 20 at the Alzheimer's Association International Conference on Prevention of Dementia held in Washington.
The computer program is an image analysis technique that allows researchers to standardize and computer automate the sampling of PET brain scans. The NYU researchers hope the technique will enable doctors to measure the metabolic rate of the hippocampus and detect below-normal metabolic activity.
The technique grew out of years of research by Mony de Leon, Ed.D., Professor of Psychiatry and Director of the Center for Brain Health. His group was the first to demonstrate with CT and later with MRI scans that the hippocampus, a sea-horse shaped area of the brain associated with memory and learning, diminishes in size as Alzheimer's disease progresses from mild cognitive impairment to full-blown dementia.
Yet until now there has been no reliable way to accurately and quickly measure the hippocampal area of the brain on a PET scan. The hippocampus is small and its size and shape are affected greatly in individuals with Alzheimer's, making it difficult to sample this region. HipMask is a sampling technique that uses MRI to anatomically probe the PET scan.
MRI relies on electromagnetic energy to excite water molecules in the brain to create an anatomical map of the brain. The MRI was used in the study to determine the total volume of the hippocampus and then to define that portion (namely the HipMask) that was shared by all persons regardless of their disease status. PET employs radioactively labeled glucose to show the brain at work and the HipMask was applied to these scans to derive estimates of the hippocampal glucose metabolism.
The researchers followed 53 healthy, normal subjects between the ages of 54 and 80 for at least 9 years and in some cases for as long as 24 years. All subjects received two FDG-PET scans -- one at baseline and a follow-up after 3 years. Thirty individuals had a second follow-up scan after another seven years. Altogether there were 136 PET scans.
The researchers applied the HipMask to all 136 scans. The results showed that hippocampal glucose metabolism, as determined by the HipMask, was significantly reduced 15% to 40% on the first scan, compared to controls, of those 25 individuals who would later experience cognitive decline related to either mild cognitive impairment or to Alzheimer's. The researchers found that the baseline hippocampal glucose metabolism was the only brain or clinical measure that predicted the future cognitive decline.
"Right now, we can show with great accuracy who will develop Alzheimer's nine years in advance of symptoms, and our projections suggest we might be able to take that out as far as 15 years," says Dr. de Leon, whose longitudinal study is funded by the National Institutes of Health (NIH).
"Our basic results will need to be replicated in other studies and expanded to include PET data from diverse patient groups," adds Dr. de Leon. "But we're confident this is a strong beginning, demonstrating accurate detection of early Alzheimer's disease. Now we have a better tool to examine disease progression, and we anticipate this might open some doors to prevention treatment strategies."
Most people are not going to get brain PET scans. The greater value of this finding is in research on methods to prevent or delay the onset of Alzheimer's Disease.
Don't wait for that PET scan result to come back with bad news. Reduce your risk of Alzheimer's.
People who drink fruit or vegetable juice at least three times a week seem four times less likely to develop Alzheimer's than nonjuice drinkers, according to a study of 1,800 elderly Japanese Americans. The theory is that juice contains high levels of polyphenols, compounds that may play a brain-protective role.
Less education, gum disease early in life, or a stroke were more important than genes in determining who got dementia, concluded a study of 100 dementia patients with healthy identical twins. Education stimulates neuronal growth; gum disease is a marker of brain-harming inflammation.
Middle-aged sons and daughters of people with Alzheimer's disease may be able to reduce their risk of getting the disorder through lifestyle measures such as exercise, avoiding gum disease, moderate alcohol consumption, and drinking fruit and vegetable juice, according to new research.
Keep your teeth clean. Drink some V-8 or pure fruit juice. Get regular exercise. They'll all protect your brain.
A new study of dementia in identical twins suggests that exposure to inflammation early in life quadruples one's risk of developing Alzheimer's disease.
Margaret Gatz, lead author and professor of psychology in the USC College of Letters, Arts and Sciences, is slated to present her findings at the first Alzheimer's Association International Conference on Prevention of Dementia, to be held June 18-21 in Washington, D.C.
If confirmed, the link would add inflammatory burden to the short list of preventable risk factors for Alzheimer's.
Previous studies by Gatz and others have shown that Alzheimer's is strongly genetic: If one twin has the disease, his or her identical twin has a 60 percent chance of developing it.
Stroke and a short period of formal education both increase the odds of dementia, but not of Alzheimer's specifically, the new study found.
Dementia is an umbrella term for many conditions, including Alzheimer's.
"People can plan a life span that will alter dementia risk," Gatz said. "And these aren't risk factors that are unique to dementia. Many of these are also risk factors for other disorders. This is good news."
Gatz's team, which included researchers from the Karolinska Institute in Stockholm, Sweden, sifted the 20,000 participants in the Swedish Twin Registry for the 109 "discordant" pairs where only one twin had been diagnosed with dementia.
Information about participants' education, activities and health history came from surveys they completed in the 1960s, when the registry was created, and from hospital discharge records.
The surveys included questions about loose or missing teeth. Gatz and colleagues used the answers to build a crude indicator of periodontal disease.
"We're talking about gum disease, but it was measured by teeth lost or loose," Gatz said. "It's not perfect. Given it's not perfect, it's even more striking that it's such a solid risk factor."
The conclusion is not that good oral health can prevent Alzheimer's, but that an inflammatory burden early in life, as represented by chronic gum disease, may have severe consequences later.
I think that previous sentence is poorly worded. Surely good oral health will reduce the risk of Alzheimer's. Poor oral health is not the only source of inflammatory burden. But it is one big source.
Gatz was inspired to focus on inflammation by the work of USC gerontologists Caleb Finch and Eileen Crimmins, who published a paper in the journal Science linking today's record life spans to lower rates of childhood infectious diseases, such as gum disease, flu, rheumatic fever, tuberculosis and other illnesses.
Such diseases are often preventable, raising hope for prevention of Alzheimer's.
"If what we're indexing with periodontal disease is some kind of inflammatory burden, then it is probably speaking to general health conditions," Gatz said. "There was in our twins quite a lot of periodontal disease, and at that time in Sweden there was a lot of poverty."
The study, titled "Potentially Modifiable Risk Factors From Dementia: Evidence From Identical Twins," also found that mental activities at age 40, such as reading or attending cultural events, did not seem to lower the risk of developing Alzheimer's.
So crossword puzzles do not help. Then I guess writing a blog isn't going to help either. Heavy sigh.
Previous reports have suggested a link between education and lowered risk of Alzheimer's. But when education is controlled for by using twins with different levels of education, the education effect becomes very small. One possible explanation might be that level of education is a proxy for level of intelligence. Higher intelligence people might be more likely than lower intelligence people to avoid harmful behaviors (like eating junk food or neglecting dental hygiene). Or perhaps having smarter brains allows a person to deteriorate for longer from a higher level of initial cognitive function before the effects of Alzheimer's become apparent.
Participants who had more education than their twins were at slightly lower risk of developing dementia, but the influence of education on Alzheimer's risk was statistically negligible. "Once one controls for genes, the level of education is not a huge risk factor," said Gatz, who questioned popular attitudes linking Alzheimer's or dementia to mental inactivity.
Drinking soda leads to tooth decay. So if you don't want your kids to get Alzheimer's Disease in their old age then do not let them drink soda.
Foreign Affairs has a series of articles coming out in its July/August 2005 edition on the threat posed by the avian influenza H5N1 strain. Michael Osterholm makes the argument that governments have not sufficiently prepared for the possibility of a bird flu pandemic in human populations.
What should the industrialized world be doing to prepare for the next pandemic? The simple answer: far more. So far, the World Health Organization and several countries have finalized or drafted useful but overly general plans. The U.S. Department of Health and Human Services has increased research on influenza-vaccine production and availability. These efforts are commendable, but what is needed is a detailed operational blueprint for how to get a population through one to three years of a pandemic. Such a plan must involve all the key components of society. In the private sector, the plan must coordinate the responses of the medical community, medical suppliers, food providers, and the transportation system. In the government sector, the plan should take into account officials from public health, law enforcement, and emergency management at the international, federal, state, and local levels.
The full articles are not yet available online. But you can read longer excerpts if you click through.
Q: If an outbreak does occur, what is the state of preparedness planning between nations, across regions, among departments and ministries of individual governments and throughout the non profit sectors?
Planning is abysmally inadequate, given the likely severity of a pandemic caused by a human-to-human transfer of a virus as virulent as the current H5N1 strain.
Q: Without adequate preparations, what would be the likely toll of such a pandemic globally and in the United States?
The answer depends on the virulence level of the pandemic virus. The 1918 strain, which killed 50 to 100 million people, only killed about two to three percent of the people it infected. The H5N1 strain now in circulation kills 100 percent of the birds it infects and has killed more than 50 percent of the people known to be infected so far. If it manages to mutate into a human-to-human form, and retains even half its current virulence, the death toll would be in the hundreds of millions. The WHO issued a report a few months ago putting the estimate at eight million and has since retracted that estimate, preferring far higher reckonings.
Q: How serious might be the economic, social and political impacts?
One Oxford University computer model, assuming a virus with low virulence, put global losses at two to three trillion dollars. The Oxford team concluded that it is impossible to guess the catastrophic economic toll of a high virulence strain.
Trillions of dollars add up to a serious amount of money. Major killer pandemics have occurred within the lifespans of some people still living. The most famous pandemic in modern times occurred in 1918, with 20 to 50 million killed from a much smaller world population. The avian flu cases recorded in humans to date have killed at least half of those infected. A world pandemic of such a lethal strain would make the 1918 outbreak seem mild in comparison.
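Osterholm's "hundreds of millions" figure follows from simple arithmetic. The sketch below uses his assumption of half the current ~50% case fatality rate; the world population figure and the 30% attack rate are my own assumptions, not numbers from the interview:

```python
# Back-of-envelope pandemic death toll, following Osterholm's logic.
# Assumptions (mine, hypothetical): ~6.4 billion people in 2005 and a
# 30% attack rate, roughly in the range cited for 1918-style pandemics.

world_population = 6.4e9
attack_rate = 0.30      # fraction of the population infected (assumed)
case_fatality = 0.25    # half of H5N1's observed ~50% fatality rate

deaths = world_population * attack_rate * case_fatality
print(f"{deaths / 1e6:.0f} million deaths")  # 480 million
```

Even cutting the assumed attack rate or fatality rate in half again still leaves a toll above 100 million, which is why the earlier WHO estimate of eight million looked so low.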
In a separate article, Laurie Garrett underscores the unpredictability of avian flu's future path.
According to the March 2005 National Academy of Science's Institute of Medicine flu report, the "current ongoing epidemic of H5N1 avian influenza in Asia is unprecedented in its scale, in its spread, and in the economic losses it has caused."
In short, doom may loom. But note the "may." If the relentlessly evolving virus becomes capable of human-to-human transmission, develops a power of contagion typical of human influenzas, and maintains its extraordinary virulence, humanity could well face a pandemic unlike any ever witnessed. Or nothing at all could happen. Scientists cannot predict with certainty what this H5N1 influenza will do. Evolution does not function on a knowable timetable, and influenza is one of the sloppiest, most mutation-prone pathogens in nature's storehouse.
We do not know the probability that avian flu will cross over into the human population and develop the ability to cause a massive pandemic. We do not know what the fatality rate would be in such a pandemic. Political leaders have reacted to this uncertainty by doing very little. The public has done the same.
He saved his most flatly worded warning, however, for a news conference organized by the Council on Foreign Relations, which publishes the respected journal. In an interview from Washington following the briefing, he repeated his blunt message of how dire things would be if a pandemic starts in the short term.
"We're pretty much screwed right now if it happens tonight," said Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota.
Osterholm said the "just-in-time" delivery model by which modern corporations operate means food distribution networks don't have warehouses brimming with months worth of inventory.
Most grocery store chains have only several days worth of their most popular commodities in warehouses, he explained, with perhaps 30 days worth of stock for less popular items.
Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, said the threat of a deadly pandemic is growing.
"This is not going to go away," Fauci said during a forum on the issue Thursday at the Council on Foreign Relations. "Get rid of the 'if.' This is going to occur."
Some fairly cheap things could be stockpiled in advance. High quality face masks seem the most obvious. Given that governments look set to fail to make adequate preparations you might want to buy your own stockpile of high quality particulate-filtering face masks. On Google Froogle check out 3M N95 models, which can go for less than $1 USD per mask when buying boxes of 20 or more. Note that those masks last for only several hours (I forget the exact amount of time). A possibly more cost effective choice is the 3M N100 models, which cost more but last for 150 hours. The 3M P100 models also last 150 hours.
One drug might help. Seriously consider getting a Tamiflu prescription and storing the package unopened.
How and when should I take TAMIFLU?
TAMIFLU should be taken twice daily (once in the morning and once in the evening) for five days. TAMIFLU can be taken with or without food. As with many medicines, if taken with a light snack, milk, or a meal, the potential for stomach upset may be reduced. You should complete the entire treatment of ten capsules, even if you are feeling better. Never share TAMIFLU with anyone, even if they have the same symptoms. It is important that you begin your treatment with TAMIFLU as soon as possible from the first appearance of your flu symptoms.
Tamiflu comes in 10- and 30-tablet amounts. The 10-tablet amount is enough for 1 person. If you are not living alone or have other people you'd want to help in the event of an outbreak, then keep in mind that the 30-tablet size costs much less per tablet. In the United States you are looking at perhaps $70 to $80 USD for the 10-tablet amount for a single person and perhaps $190 for 30 tablets (and don't skimp by buying from the more questionable online vendors - go brand name). Canadians living under price-controlled socialism get to pay less (and get to live off of the labors of market-price-paying Americans). But once the big crunch comes don't expect to be able to pay even current American prices, if you can find Tamiflu at all.
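The per-tablet savings on the larger package are easy to check; the $75 figure below is my own midpoint of the quoted $70 to $80 range:

```python
# Per-tablet cost comparison for the two Tamiflu package sizes,
# using the rough US prices quoted above (the $75 is an assumed midpoint).

price_10_pack = 75    # midpoint of the $70-$80 range for 10 tablets (USD)
price_30_pack = 190   # quoted price for 30 tablets (USD)

print(price_10_pack / 10)            # 7.5  USD per tablet
print(round(price_30_pack / 30, 2))  # 6.33 USD per tablet
```

So the 30-tablet size runs roughly 15% cheaper per tablet at these prices, on top of covering three people instead of one.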
If you think that come a crisis Tamiflu production will rapidly scale up, think again. While researching a previous post on avian flu I found an expert who claims it would take 18 months to scale up Tamiflu production. Higher demand from governments to stockpile Tamiflu would help make more available. But don't count on governments. If you can afford to, then protect yourself and your loved ones.
If you can afford to stockpile food and have the room to store it then buy a lot of dried and canned food. Keep in mind that if you think you are going to eat the food eventually then the money won't be wasted.
If money is no object here is my deluxe version of what to do:
Isn't survivalism fun? Though if your country hideaway doesn't have DSL don't expect me to visit.
If you aren't rich but have some money to spend then at least consider some cheaper measures you can take now such as purchase of high quality face masks and Tamiflu. If you live in an apartment with central air you might want to get HEPA air filters for your apartment. If you live in a city but can afford a cheap small piece of country land you could park a cheap travel trailer on it. Don't expect to be able to buy most of these things when the pandemic starts.
If your work lends itself to use of a home office then set one up in advance. Even if your employer does not currently allow working from home make sure you have the basic supplies of a computer, broadband internet connection, and other materials. When a pandemic starts employers will become much more flexible about working conditions than they are now. Any person who can avoid entering the office during a deadly flu outbreak is one less person who might infect the boss.
My guess is that in Western industrialized countries the first year of an avian flu outbreak will bring millions of deaths. But by the second year the mechanisms for how to operate societies while minimizing human-to-human contact will be well worked out. Also, by the second season an increasing number of people will have been vaccinated or will have survived infection. So more people will be able to safely move around and carry out the tasks that involve more human-to-human contact.
The economic and human losses from a highly lethal flu pandemic could be greatly reduced with aggressive advanced preparation by governments and industry. But until the public at large becomes concerned I don't expect governments to do much about it. If biological scientists and medical doctors think this threat deserves greater preparation efforts then a lot more scientists and doctors ought to write op/eds and letters to the editor. Too few people are sounding the alarm. Until that changes you are on your own.
Back during the early Cold War era many people took steps such as building fallout shelters to protect themselves against nuclear war. Today we face a danger whose probability of coming true might well be higher than the chance of nuclear war during the Cold War. Yet few people are taking any steps to protect themselves from a highly lethal pandemic because pandemics do not evoke dramatic images. Infections do not create huge visible blasts. Our buildings and other physical structures are left unharmed by a virus. So governments and individuals do far too little to prevent or defend against a massive killer outbreak that could kill many more people than all the wars of the 20th century.
The other odd thing about the lack of preparation for the avian flu is the larger effort being made to protect against bioterrorism. At this point in time in the year 2005 I think we are at far greater risk from a new natural killer influenza strain produced by Darwinian natural selection than we are from a man-made strain produced in a laboratory. Yet in the West we have become so accustomed to controlling and defeating the will of nature that we fear more what other humans can do than what nature can throw at us. I see this as a premature shift in emphasis. Perhaps 20 or 30 years from now we will face greater threats from bioterrorism than from natural pathogens. But I think right now we are still living in the twilight of the era when natural selection is more likely than terrorists to produce a new deadly pathogen strain.
Also see the special avian flu articles from Nature.
If the next pandemic were to arise five years from now, there would have been breathing space to stimulate our drug and vaccine industries to limit the damage it would cause. But that requires urgent action now. As matters stand, a vaccine against a pandemic flu would not be ready until at least six months after a pandemic starts. Too late: by then the worst of the pandemic would already have happened.
A vaccine that can be produced more quickly demands a research effort akin to that for a strategic military weapon, not business as usual. We also need to be able to produce enough of such a vaccine to cope with the surge in demand during a pandemic. At present, the entire world production capacity can produce only enough doses for 450 million people. To stimulate an increase in capacity, we need health policies that boost demand for existing flu vaccines in ordinary years. The same goes for antiviral drugs.
But the worst-case scenario is that a pandemic starts within two years. We would have no vaccine and few drugs, and we would be dependent on governments and the WHO to try to extinguish the first outbreaks at source. That's why the first priority must be to prevent a pandemic emerging in the first place, by extinguishing the disease in animals.
Time for action
Unfortunately, the current situation does not bode well for the abilities of governments and international agencies to cope with this challenge. We should be monitoring in almost real time the genetic changes in the avian and human viruses that could herald the emergence of a pandemic strain, for example. But there is no international funding to help affected countries build decent and sustained surveillance programmes. And while outside researchers want data from affected countries, they aren't engaging enough in the meaningful collaboration needed to build trust and open sharing. The international community is not offering incentives, such as drugs for the Asian countries that would be in the front line of a pandemic. Combine this with the fact that countries are reluctant to share the few data they have because their analysis could affect their trade and economies, and the current mess in surveillance is hardly surprising.
If I were King I'd allocate $20 billion per year in the United States for preparations against avian flu. But I'm not King and neither is anyone else. So instead I'll just give you my warning: We ought to do orders of magnitude more than we are currently doing to get ready for the next highly lethal influenza outbreak, whenever it might come.
Researchers at the UCSD School of Medicine, working with scientists at Elan Pharmaceuticals, have reported promising results in mice of a vaccine approach to treating Parkinson’s and similar diseases. These results appear in the June edition of the journal Neuron.
Dr. Eliezer Masliah, Professor of Neurosciences and Pathology at UCSD, and colleagues at UCSD and Elan Pharmaceuticals in San Francisco, vaccinated mice using a combination of an adjuvant and the protein that abnormally accumulates in the brains of Parkinson’s patients, called human alpha-synuclein. This approach generated anti-alpha-synuclein antibodies in mice specially bred by Masliah’s team to simulate Parkinson’s disease, reducing the build-up of abnormal alpha-synuclein. The accumulation of abnormal alpha-synuclein is associated with degeneration of nerve cells and interference with normal inter-cellular communication, leading to Parkinson’s disease and dementia.
The work marks the first time a vaccine for this family of diseases has been found effective in animal studies. Scientists at Elan Pharmaceuticals have been working for the past few years on a vaccine for Alzheimer’s Disease.
The researchers focused on a spectrum of neurological disorders called Lewy body disease, which includes Parkinson’s and Alzheimer’s. These disorders are marked by the presence of Lewy bodies -- abnormal clumps of alpha-synuclein -- in the brain. Normally, alpha-synuclein proteins support communications between brain cells, or neurons. However, when abnormal proteins clump together in the neurons, a build-up of synuclein can cut off neuron activity, blocking normal signaling between brain cells and ultimately choking the cells to death.
“We found that the antibodies produced by the vaccinated mice recognized and reduced only the abnormal form of alpha-synuclein, since the protein’s normal form is in a cellular compartment where antibodies can’t reach it,” said Masliah. “Abnormal alpha-synuclein finds its way to the cell membrane, where antibodies can recognize it.”
A few years ago a vaccine trial against the beta amyloid plaques which accumulate in Alzheimer's Disease reduced plaque build-up in many of the study participants. But in a few percent of the patients the vaccine caused an immune response which led to inflammation of the brain. This led to a halt of that vaccine development effort due to an expectation that regulators would not approve such a treatment. Well, if I were diagnosed with Alzheimer's I'd be willing to run a 3% or 4% risk of brain inflammation if my alternative was the gradual destruction of my brain. But the FDA is impervious to that sort of reasoning.
A similar fear of brain inflammation with a Parkinson's vaccine means these researchers are looking for some other way to deliver what is essentially an immunotherapy. While not mentioned here, the obvious alternative is monoclonal antibodies. The advantage of monoclonal antibodies is that they get broken down by the body. So unlike a vaccine they might not cause a permanent change in the immune system. On the first signs of an inflammation response the treatment could be stopped. Also, a vaccine probably causes the immune system to make a number of different antibody types, and only one of those types might contribute to the inflammation. A monoclonal antibody approach would allow a much narrower and more controllable set of antibodies to be made and delivered.
Masliah stressed that the team’s experimental active immunization, while effective in mice, may not be as useful in humans. “We would not want to actively immunize humans in this way by triggering antibody development, because one could create harmful inflammation,” he cautioned. “However, it might be feasible to inject antibodies directly, as if the patient were creating his or her own.”
The team, the first to identify the presence of these proteins in the human brain, originally thought the protein played an important role in the development of Alzheimer’s disease. Then, an explosion of research linked Lewy bodies and their constituent proteins to both Alzheimer’s and Parkinson’s. The team spent four years clarifying alpha-synuclein’s role in Parkinson’s, developing a mouse model that contained the faulty and normal genes for alpha-synuclein, and conducting the experiments that led to their current findings.
With evidence that this approach could be effective in treating Lewy Body disease, the UCSD researchers are now working with Elan Pharmaceuticals to develop alternative ways to produce alpha-synuclein antibodies, with the goal of making a vaccine that is safe and effective in humans. While this research could take many years and holds no promise of prevention or cure, the researchers are hopeful that the mouse studies are a step in the right direction.
“This shows the first demonstration of a vaccine for this family of disease,” Masliah said.
Attempts to develop vaccines and monoclonal antibodies against Parkinson's Disease and Alzheimer's Disease fit within the typology of 7 Strategies for Engineered Negligible Senescence (SENS), more specifically the strategy to remove accumulated extracellular junk. It is quite possible that immunotherapies against Alzheimer's and Parkinson's will turn out to provide benefits to people who are not diagnosed with either Parkinson's or Alzheimer's. We might all accumulate some amount of either amyloid plaques or misshapen alpha-synuclein. If that is the case then immunotherapies to remove these build-ups might someday become routine as people grow older.
LA JOLLA – Brains are marvels of diversity: no two look the same -- not even those of otherwise identical twins. Scientists at the Salk Institute for Biological Studies may have found one explanation for the puzzling variety in brain organization and function: mobile elements, pieces of DNA that can jump from one place in the genome to another, randomly changing the genetic information in single brain cells. If enough of these jumps occur, they could allow individual brains to develop in distinctly different ways.
This result might explain why humans differ in their intellectual abilities and behavioral tendencies in ways that are not accounted for by genetic inheritance or environment. Humans may end up being even more controlled by their genes than twin studies would suggest because some of the genetic patterns that control them are generated during fetal development.
"This mobility adds an element of variety and flexibility to neurons in a real Darwinian sense of randomness and selection," says Fred H. Gage, Professor and co-head of the Laboratory of Genetics at the Salk Institute and the lead author of the study published in this week's Nature. This process of creating diversity with the help of mobile elements and then selecting for the fittest is restricted to the brain and leaves other organs unaffected. "You wouldn't want that added element of individuality in your heart," he adds.
Precursor cells in the embryonic brain, which mature into neurons, look and act more or less the same. Yet, these precursors ultimately give rise to a panoply of nerve cells that are enormously diverse in form and function and together form the brain. Identifying the mechanisms that lead to this diversification has been a longstanding challenge. "People have speculated that there might be a mechanism to create diversity in brain like there is in the immune system, and the immune system's diversity is perhaps the closest analogy we have," says Gage.
The researchers were aware that the immune system rather systematically reshuffles the genes coding for antibodies, producing a large variety of immune cells that make many different antibodies, each capable of recognizing different antigens the body might encounter.
In their study, the researchers closely tracked a single human mobile genetic element, a so-called LINE-1 or L1 element, in cultured neuronal precursor cells from rats. Then they introduced it into mice. Every time the engineered L1 element jumped, the affected cell started glowing green: the element was tagged with a green fluorescent protein reporter gene arranged so that it becomes expressible only after the element has copied itself into a new spot in the genome. "We were very excited when we saw green cells all over the brain in our mice," says research fellow and co-author M. Carolina N. Marchetto, "because then we knew it happened in vivo and couldn't be dismissed as a tissue culture artifact."
Transposable L1 elements, or "jumping genes" as they are often called, make up 17 percent of our genomic DNA but very little is known about them. Almost all of them are marooned at a permanent spot by mutations rendering them dysfunctional, but in humans a hundred or so are free to move via a "copy and paste" mechanism. Long dismissed as useless gibberish or "junk" DNA, the transposable L1 elements were thought to be intracellular parasites or leftovers from our distant evolutionary past.
It has been known for a long time that L1 elements are active in testis and ovaries, which explains how they potentially play a role in evolution by passing on new insertions to future generations. "But nobody has ever demonstrated mobility convincingly in cells other than germ line cells," says Gage.
Apart from their activity in testis and ovaries, jumping L1 elements appear to be unique to nerve cells, moving not only in the adult brain but also during the early stages of the development of nerve cells. The Salk team found insertions only in neuronal precursor cells that had already made their initial commitment to becoming a neuron. Other cell types found in the brain, such as oligodendrocytes and astrocytes, were unaffected.
At least in the germ line, copies of L1s appear to plug themselves more or less randomly into the genome of their host cell. "But in neuronal progenitor cells, these mobile elements seem to look for genes expressed in neurons. We think that's because when the cells start to differentiate the cells start to open up genes and expose their DNA to insertions," explains co-author Alysson R. Muotri. "What we have shown for the first time is that a single insertion can mess up gene expression and influence the function of individual cells," he adds.
However, it is too early to tell how often endogenous L1 elements move in human neurons and how tightly this process is regulated or what happens when this process goes awry, cautions Gage. "We only looked at one L1 element with a marker gene and can only say that motility is likely significantly more for endogenous L1 elements," he adds.
Maybe some mental illnesses are caused by L1 elements inserting in places where they mess up the functioning of some brain neurons.
If I'm right in my suspicion that this result shows how we could be even more genetically determined than twin studies suggest, then we are genetically determined in ways that introduce randomness at an early stage of brain development. This leaves even less room for social environment to influence development. Eventually biotechnological means will be found to reduce the degree of randomness in the L1 insertions so that outcomes of the development of offspring will become more predictable. See my post Children Of The Future May Be More Genetically Determined for further elaboration of that argument.
The idea of jumping genes in our brains triggers a memory of Mark Twain's The Notorious Jumping Frog of Calaveras County. Seems faintly related because the genes jumping around in our brains seem whimsical. Oh, and for some reason unknown to me the story is also known as The Celebrated Jumping Frog of Calaveras County. So which title was the original?
In experiments with mice, researchers have found that nicotine triggers the same neural pathways that give opiates such as heroin their addictively rewarding properties--including associating an environment with the drug's reward. However, unlike opiates, nicotine does not directly activate the brain's opiate receptors, but activates the natural opioid reward pathway in the brain.
The researchers, led by Julie Blendy of the Transdisciplinary Tobacco Use Research Center (TTURC) at the University of Pennsylvania, said their findings suggest more effective ways that opiate blockers could be used to help smokers quit.
In their experiments reported in the June 16, 2005, issue of Neuron, the researchers administered nicotine to mice and analyzed the levels of a protein called CREB--known to control genes involved in the reward pathway of opiates and other abused drugs. They found that not only was CREB activated in the reward regions of the nicotine-treated animals' brains, but also that the drug naloxone, which blocks the opiate receptors, blocked CREB activation. Also, mutant mouse strains lacking the opioid receptor did not show an increase in CREB activity when they received nicotine.
The researchers also studied the relationship among nicotine, the environment, and this reward pathway. They conditioned mice to associate a specific test chamber with receiving nicotine, finding that the mice would prefer to stay in that chamber when given a choice. The researchers found that just placing the conditioned mice in the chamber activated CREB. They also found that naloxone blocked this conditioned increase in CREB, and that mutant mice lacking CREB or pretreated with naloxone did not show any reward response to nicotine.
However, naloxone did not block the chamber choice of mice conditioned with cocaine, found the researchers, indicating that cocaine activates the brain reward pathway in a different way from nicotine and opiates.
"The present results demonstrate that nicotine-associated environmental stimuli can activate the same molecular signal transduction molecules as the drug itself," wrote Blendy and her colleagues. They wrote that the activation of CREB "is evident not only after acute and repeated nicotine administration, but also following exposure to an environment in which the animal has previously received nicotine." The researchers noted that clinical studies of opioid receptor blockers to relieve cigarette cravings "so far have produced mixed results, ranging from ineffectiveness at smoking cessation to mild reduction in the desire to smoke." The researchers wrote that their findings "suggest that the timing and context of opioid receptor antagonist administration are critical for determining the effectiveness of blocking nicotine reward . . . . Given the results reported here, clinical studies designed to evaluate administration of opioid antagonists just prior to cues associated with smoking could lead to a more promising treatment regimen."
The brain reward system is effectively hijacked by recreational drugs. Normally the brain reward circuitry activates to encourage adaptive behavior such as getting food and doing other life-promoting work. Addictive drugs that activate reward circuitry are dangerous because they subvert the purposes of the reward system and reduce or eliminate the motivations for adaptive behaviors.
Note that animals can be conditioned to associate being in a particular room with getting a reward. When medical treatments that control reward responses are developed they'll be used to cure drug addictions. But the understanding of brain reward circuitry resulting from addictive drug studies will also point the way toward the development of treatments which can be used to manipulate human behavior in sophisticated ways. Will the greater use of such treatments be by governments and other entities to manipulate the behavior of individuals? Or will individuals use such treatments to manipulate their own behavior?
Self-manipulation might sound counter-intuitive at first. But we all end up doing things we think we shouldn't do while at the same time not doing other things that we think we really ought to do. So will the frontal lobes of our brains choose to administer treatments to ourselves that realign the motivations of other parts of the brain? Picture the forebrain exclaiming "Bow to me now limbic system. You are no longer in control. I am finally your complete master."
Once the neurotechnologies exist to control rewards I expect a lot of people to modify their reward systems to favor the pursuit of longer term goals. As I've previously argued I also expect people to selectively turn off love as a motivation when they find that feeling stands in the way of reaching their goals.
Researchers from the University of Wisconsin at Madison have shown that it is possible to convert biomass materials like corn into fuel that could be used in diesel engines in a way that automatically separates the fuel from water. "This is a new process to produce liquid fuels from biomass," said James Dumesic, a professor of chemical and biological engineering at the University of Wisconsin.
The main advantage of this process is the reduction of energy needed to convert the biomass materials into a usable form.
The alkane fuel contains 90 percent of the energy of the glucose and hydrogen that the reaction begins with, said Dumesic. "Thus burning the alkane fuel would give you 90 percent of the energy compared to burning the glucose and the hydrogen."
The advantage of the researchers' process is that when alkanes are produced they spontaneously separate from water, said Dumesic. "In contrast to our process... ethanol must be separated from water by an energy-intensive distillation step," he said. "For our process, no energy is required to separate the alkane products from water."
This boosts the overall energy efficiency of the fuel. The ratio of energy derived from ethanol to the energy required to produce it is 1.1 to 1. The researchers' process has an estimated ratio of 2.2 to 1, according to Dumesic.
Note that the energy efficiency of current methods to create ethanol from corn is disputed. The 1.1 units of energy out per 1 unit of energy in to grow, harvest, and convert to ethanol for existing processes might be overly optimistic and so Dumesic's improvement might not yield 2.2 to 1 energy production efficiency. Tad Patzek at UC Berkeley and David Pimentel at Cornell University claim corn ethanol is not a net producer of energy while Mike Graboski at Colorado School of Mines claims it is. Who is right? I don't know. But I am unenthused by an energy source that will increase the demand for agricultural lands and water for farming to produce non-food products since plants are very inefficient at converting light to useful energy as compared to photovoltaic cells. Photovoltaic cells can produce the same amount of energy using much less land. See that last link for my arguments.
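To make the disputed numbers concrete, here is a back-of-the-envelope sketch of what the two quoted energy-return ratios imply. The ratios (1.1 to 1 for corn ethanol, 2.2 to 1 for Dumesic's process) come from the article above; the one unit of input energy is just an arbitrary basis for illustration:

```python
# Back-of-the-envelope comparison of the two quoted energy-return ratios
# (units of energy out per unit of energy in).

def net_energy(ratio_out_per_in, energy_in=1.0):
    """Surplus energy left after repaying the energy invested."""
    return ratio_out_per_in * energy_in - energy_in

ethanol_surplus = net_energy(1.1)  # corn ethanol as claimed: ~0.1 units net
alkane_surplus = net_energy(2.2)   # Dumesic's alkane process: ~1.2 units net

# Doubling the gross ratio multiplies the net surplus roughly twelvefold.
print(ethanol_surplus, alkane_surplus, alkane_surplus / ethanol_surplus)
```

The gross ratio understates the difference: what matters for displacing fossil fuels is the surplus after repaying the energy invested, and on these numbers the alkane process returns about twelve times the surplus of corn ethanol. Of course, if Patzek and Pimentel are right that corn ethanol's true ratio is at or below 1 to 1, its surplus is zero or negative.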
The process is not yet ready for practical use. But Dumesic thinks he can improve the process. Environmentalists ought to worry that Dumesic and other researchers will succeed in making biomass commercially viable. If that happens before wind, solar, and nuclear become more competitive, suddenly much more land the world over will be shifted into agricultural usage at the expense of the wilderness and of the creatures which live in the wilderness. Environmentalists ought to shift their focus away from opposing greenhouse gas emissions and instead focus on efforts to develop better substitutes for fossil fuels. Environmentalists really ought to lobby much harder for photovoltaics research. I'd also ask them to lobby for fourth generation nuclear power plants but I suspect for most of them that is still a bridge too far.
My own preferences for fossil fuels substitutes are nuclear and solar. After those two I'd prefer wind over biomass. If it must be wind then I'd prefer offshore wind far enough from the coastline that it is not visible from land.
Update: To clarify one point: I am not opposed to all biomass energy technologies. For example, biomass technologies that can extract energy out of sewage or trash could reduce the cost of waste disposal, reduce the amount of pollution, reduce the growth rate of landfills, and provide energy. Development of such technologies strikes me as a big win. But what I'm at the very least unexcited about are technologies that will increase the demand for tillable land.
A fair degree of overlap exists between technologies that extract energy out of municipal waste and technologies that extract energy out of crops. So advances in waste energy extraction are also advances in crop energy extraction. Though technologies that extract energy out of trash and other wastes likely will become cost effective well before those same technologies achieve profitability in agriculture. Why? Because in agriculture the fields have to be tilled, fertilized, planted, watered, harvested, and then transported to processing centers. Each of those steps costs money and energy too. Whereas trash and sewage already are collected and concentrated at trash dumps and sewage plants.
Also check out over on Green Car Congress Co-Production of Ethanol and Electricity from Waste about BRI's method of converting waste to ethanol and electricity and also the post New Revenue Stream for Corn-Ethanol Producers: Biodiesel. Plus, check out The Ergosphere for E-P's June biomass roundup. E-P does some calculations on conversion efficiency of the BRI ethanol/electricity conversion process and compares it to Changing World Technology's thermal depolymerization process.
I've had emails from people suggesting I post on CWT's technology. I was skeptical because I saw the collection of biomatter of sufficient quality as too expensive to make a large dent in total energy needs. Once the fairly small number of turkey, chicken, and like processing plants got the CWT technology installed other sources of raw biomass materials would be much more expensive. Well, it turns out that even in a Carthage Missouri ConAgra Butterball turkey plant the CWT technology is not ready for prime time and produces energy that costs twice as much as it is sold for.
It turns out that process of cooking turkey guts, feathers, feces and other waste gives off a horrible stench.
“It's rotten,” said Beth Longstaff, a resident who was shopping at Wal-Mart recently. “You can't get away from it. It's like something out of a horror movie.”
The turkey oil is much more expensive to produce than projected — the cost of a barrel is double what it sells for.
Appel told The Kansas City Star recently that he doubts the process can be financially successful in the United States for several years. His company, Changing World Technology, has put on hold plans to build more plants in Colorado, Alabama and Nevada.
Instead, he is considering a deal to build a plant in Ireland, where costs would be considerably less, and where a recent news article predicted a plant should be operating by next year. Appel also is negotiating with officials in Italy and Germany.
But he has to solve the smell problem too.
Some scientists at a few California research centers have received funding to develop nanotech therapies against atherosclerotic plaques in arteries. Note that this is an announcement of the beginning of their research efforts. But the announcement is notable because these scientists are attempting to develop nanodevices to hook onto and modify arterial plaque.
The Burnham Institute has been selected as a "Program of Excellence in Nanotechnology" ("PEN") by the National Heart, Lung, and Blood Institute ("NHLBI") of the National Institutes of Health ("NIH"). A partnership of 25 scientists from The Burnham Institute, University of California Santa Barbara, and The Scripps Research Institute will use the $13 million award to design nanotechnologies to detect, monitor, treat, and eliminate "vulnerable" plaque, the probable cause of death in sudden cardiac arrest.
Led by Jeffrey Smith, Ph.D., of the Burnham Institute and the principal investigator of the program, the scientific team is comprised of biochemists, vascular biologists, chemical engineers and physicists. "This is a novel approach to bring experts from all these fields together," said Dr. Smith. "And it's very exciting. These groups do not normally work together. But in this instance, I think it's going to produce some real scientific progress."
Recent studies have shown that plaque exists in two modes: non-vulnerable and vulnerable. Blood passing through an artery exerts a shearing force and can cause vulnerable plaque to rupture, which often leads to occlusion and myocardial infarction. This is a significant health issue: of the nearly one million people who die each year from cardiac disease, 60 percent perish without showing any symptoms. As many as 60 - 80 percent of sudden cardiac deaths can be attributed to the physical rupture of vulnerable plaque.
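Combining the figures in the paragraph above gives a rough sense of scale. The one-million figure and both percentages are from the press release; stringing them together this way (treating the 60 percent who die without symptoms as the sudden-death pool) is my own back-of-the-envelope reading, not a number from the study:

```python
# Rough estimate of annual US deaths attributable to vulnerable plaque
# rupture, combining the figures quoted above.
cardiac_deaths = 1_000_000   # "nearly one million" deaths per year from cardiac disease
sudden_fraction = 0.60       # 60 percent perish without showing any symptoms
rupture_low, rupture_high = 0.60, 0.80  # share of sudden deaths from plaque rupture

sudden_deaths = cardiac_deaths * sudden_fraction
low_estimate = sudden_deaths * rupture_low    # about 360,000
high_estimate = sudden_deaths * rupture_high  # about 480,000
print(f"{low_estimate:,.0f} to {high_estimate:,.0f} deaths per year")
```

On those assumptions, physical rupture of vulnerable plaque would account for somewhere in the hundreds of thousands of deaths per year in the United States alone, which is why a way to stabilize or remove it would matter so much.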
"We intend to exploit this new understanding of atherosclerotic plaque," said Dr. Smith. "By focusing on devising nano-devices, which can be described as machines at the molecular level, we will specifically target vulnerable plaque. That cannot be accomplished today. My colleagues and I hope that our work will lead to real diagnostic and therapeutic strategies for those suffering from this form of cardiac disease."
The project team will work on three innovative solutions to combat vulnerable plaque: 1) building delivery vehicles that can be used to transport drugs and nanodevices to sites of vulnerable plaque; 2) designing a series of self-assembling polymers that can be used as molecular nano-stents to physically stabilize vulnerable plaque; and 3) creating nano-machines comprised of human proteins linked to synthetic nano-devices for the purpose of sensing and responding to vulnerable plaque.
I like the idea of "self-assembling polymers" for "molecular nano-stents" to stabilize plaque. The idea is to anchor it down so it can't break free and cause a heart attack or stroke.
I feel so out of it since I didn't already know what "BioNEMS" means: bio-nanoelectromechanical systems, which the team will develop.
The multi-organizational team will build "delivery vehicles" that can be used to transport drugs, imaging agents and nano-devices directly to locations where there is vulnerable plaque; design molecular nano-stents to physically stabilize vulnerable plaque and replace its fibrous cap with an anti-adhesive, anti-inflammatory surface; devise molecular switches that can sense and respond to the pathophysiology of atheroma (fatty deposits on arterial walls); and develop bio-nanoelectromechanical systems (called BioNEMS) that can sense and respond to vulnerable plaque, ultimately providing diagnostic and therapeutic capability.
This is another example of development of a treatment that falls within the typology of 7 Strategies for Engineered Negligible Senescence (SENS) to halt aging and rejuvenate bodies by the removal of accumulated extracellular junk. They are not saying they are attempting to remove the plaque. But once they can target nanomachines to hook on to plaque they might find it just as easy to break it down and remove it as to stabilize it.
Widespread acceptance of SENS for rejuvenation is not necessary for the development of many SENS treatments. My guess is that for at least the next decade most treatments which will accomplish objectives which support SENS will be justified under the old paradigm of development of treatments against specific diseases. Efforts such as this one develop tools that will be useful for rejuvenation. So we are making progress toward the goal of engineered negligible senescence or perpetual youth.
The human body has its own defense against brain aging: the innate immune system, which helps to clean the brain of amyloid-beta waste products. However, UCLA researchers discovered that some patients with Alzheimer's disease have an immune defect making it difficult to clean away these wastes. This may lead to over-saturation of the brain with amyloid beta, which form amyloid plaques, the definitive hallmark of Alzheimer's disease.
Published in the June 10 issue of the Journal of Alzheimer's Disease, the findings could lead to a new approach in diagnosing and treating Alzheimer's disease by helping to diagnose and correct this immune defect. This is the first time that researchers have discovered that the innate -- or more primitive -- part of the immune system may play a role in the development of Alzheimer's disease.
Using blood samples, investigators found that in healthy people, cells belonging to the innate immune system, called macrophages, cleared amyloid-beta in a test-tube assay developed at UCLA. However, the macrophages of some Alzheimer's patients could not adequately perform this cleaning job.
"Macrophages are the janitors of the innate immune system, gobbling up waste products in the brain and throughout the body," said Dr. Milan Fiala, first author and UCLA researcher.
Fiala notes that there may be a problem either with the macrophages not effectively binding to amyloid beta or a problem in the absorption or uptake, which is called "phagocytosis." He adds that this immune defect may also be present in other diseases where a build-up of waste and plaques occur, such as in cardiovascular disease and Gaucher's disease.
"If further study shows that this defective macrophage function is present in most Alzheimer's disease patients, new hormonal or immune-boosting approaches may offer new approaches to treating the disease," adds Fiala.
Researchers add that this new approach differs from the amyloid-beta immunization method, which utilizes another part of the immune system called the adaptive immune system. According to Fiala, the immunization approach has resulted in amyloid-beta clearance in the lab in an animal model, but in a human clinical trial led to brain inflammation in a subset of patients.
In future studies, investigators plan to regulate the innate immune system with natural substances such as hormones and natural products such as curcumin (from curry powder). Currently in their lab, Fiala and Dr. George Bernard, a professor in the UCLA Department of Oral Biology and Medicine, are testing the effectiveness of a naturally occurring hormone called insulin-like growth factor I, in conjunction with a research team from the MP Biomedicals LLC Company.
This is a valuable piece of work. Perhaps immune system aging causes the innate immune system to fail to clear beta amyloid plaques. If so, restoring its proper function might turn out to be difficult because immune system rejuvenation might be necessary. Or perhaps a genetic difference causes the lower ability to remove the plaques. If so, perhaps a gene therapy could give the macrophages a capability that they lack.
Among the 7 Strategies for Engineered Negligible Senescence (SENS) for halting aging and rejuvenating bodies is the removal of accumulated extracellular junk. The amyloid plaque accumulations which likely cause Alzheimer's disease are a form of extracellular junk, and treatments to remove those plaques will likely be among the earliest rejuvenation treatments used in clinical practice. Because the amyloid plaques are associated with a major neurological disorder, the development of means to remove them gets much more attention than some of the other SENS approaches. For example, little effort is going into the development of treatments to remove intracellular junk from lysosomes or to develop gene therapies to repair mitochondrial mutations.
A very interesting New York Times article by Pamela Belluck reports on the widespread reluctance of couples to donate their excess embryos created by in vitro fertilization to other couples to start pregnancies.
"When couples are coming into in vitro, and they are asked what they would want to do with leftover embryos, they might say, 'Oh yeah, donating to another couple - if we could help prevent another couple from going through the agony and the pain of what we've been through, we would be willing to do that,' " said Dr. Susan C. Klock, associate professor of obstetrics, gynecology and psychiatry at Northwestern University's medical school.
But 3 to 10 years later, 9 out of 11 couples who had said they would donate to another couple were no longer willing to do so, Dr. Klock said.
Others have seen similar results. "Of the dozen or 15 cases I've handled where people have considered donating embryos to another couple, over half of those cases never went forward," said Susan L. Crockin, a Boston lawyer.
One couple, Ms. Crockin said, who had had two children through in vitro fertilization, wanted to donate their extra embryos to friends in their neighborhood. "They came in to me to do what they thought at the outset was a simple legal task of 'make it happen,' " Ms. Crockin said. "Instead, after really exploring what this might mean to their existing children, what it might mean for the resulting child, how would they deal with the children they were raising and this child who was going to be raised down the street, they couldn't reach a comfort level. The wife called me in tears: 'We want to do this, we want to be generous, I feel selfish, but I can't do this.' "
The article reports that couples who used either donated eggs or donated sperm to create their embryos are more willing to donate their excess embryos. But then the egg donors express ambivalence or opposition to the idea of their eggs being used to provide babies to people with whom they did not develop a relationship ahead of time. So people obviously feel a bond to offspring or potential offspring made from their own DNA.
President George W. Bush has been funnelling money to embryo adoption programs such as the Nightlight Christian Adoptions Snowflakes program due to his belief that all embryos are really humans.
The Snowflakes were on hand to show that, in Bush's words, "there is no such thing as a spare embryo." The alternative is "adoption."
The reluctance of couples to donate their embryos to other couples strongly argues against Bush's position. Those couples in the NY Times article who can't bring themselves to see other people raise their genetic children are the tip of a growing iceberg. So far the Snowflakes program has arranged for 81 embryo adoptions. But at least 400,000 embryos sit frozen in IVF clinics in the United States.
A 2003 study by the RAND Law and Health Initiative estimated that there are about 400,000 frozen embryos in IVF clinics across the nation, 11,000 of which have been set aside for research purposes.
The number is probably much larger than that and likely to grow larger still. A British web site reports a very rapid growth in the number of frozen embryos in Britain.
Since 1990 about 250,000 embryos have been frozen following IVF treatment in Britain. In March 1999 there were 51,346 embryos stored. This had jumped to 97,719 in March 2001 and 116,252 by March 2003, more than doubling in four years. Around eight embryos are created in each IVF treatment cycle but only a maximum of two can be implanted, meaning that there are always spare embryos to be frozen, donated, experimented upon or destroyed. Couples are allowed to keep them for up to ten years for an annual storage fee of approximately £250.
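The growth figures quoted above can be checked with a little arithmetic. This is just an illustrative calculation using the counts from the article; the implied annual growth rate is my own extrapolation, not a figure from the source.

```python
# Stored-embryo counts in Britain, as quoted from the article (March of each year)
stored = {1999: 51_346, 2001: 97_719, 2003: 116_252}

# Total growth over the four-year span — "more than doubling in four years"
growth_factor = stored[2003] / stored[1999]
print(f"Growth 1999-2003: {growth_factor:.2f}x")  # about 2.26x

# Implied compound annual growth rate over those four years
cagr = growth_factor ** (1 / 4) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 23% per year
```

At that pace the stored total would double again in a little over three years, which is consistent with the article's expectation of continued rapid growth.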
Note that since some couples are likely deciding to stop storage of frozen embryos once they've managed to carry pregnancies to term, the increase in the number of stored embryos is happening in spite of embryos being destroyed every year.
Many human embryonic stem cell (hESC) researchers would like to use the left-over embryos created during in vitro fertilization (IVF) attempts to extract cells to create hESC lines to use in research and for the development of medical treatments. Their argument is that these embryos are eventually going to be destroyed anyway. So why not use them for research?
If the people who think embryos are humans want to prevent embryo destruction then their only possible way to achieve that goal is to try to win support for a ban on the creation of embryos through IVF, though I think the odds are strongly against the enactment of such a ban. The existing federal ban on the use of US federal government research money to create human embryonic stem cell lines from embryos is not preventing embryo destruction.
Future advances in reproductive science and technology will lead to cures for many causes of infertility. That alone would lead one to expect a reduction in the use of IVF in the future. However, advances in DNA sequencing and DNA testing technology combined with much greater understanding of the meaning of all the human DNA sequence variations will increase the incentive for using IVF over regular sex for initiation of pregnancies. IVF combined with pre-implantation genetic diagnosis (PIGD) will be attractive to tens or hundreds of millions of couples as a way to select which genes to pass along to progeny. Therefore in the coming decades I expect IVF's popularity to grow and therefore the number of extra frozen embryos to grow as well.
Alexander Todorov, on the faculty of the Psychology Department at Princeton University, has found that when people are shown quick exposures to pictures of politicians they can rate them on perceived competence and that rating mirrors how those politicians did in elections for the US House of Representatives and US Senate.
Psychologist Alexander Todorov of Princeton University had volunteers look at black-and-white photographs of House and Senate winners and losers from elections in 2000 and 2002, and the competing candidates prior to the 2004 contests. The faces had to be unknown to the participants; images of Sens. Hillary Rodham Clinton, D-N.Y., John McCain, R-Ariz., and John Kerry, D-Mass., for example, were immediately eliminated.
``It was just on facial appearance, it could not be influenced by any other information,'' Todorov said in an interview.
The study found that the candidate perceived as more competent was the winner in 72 percent of the Senate races and 67 percent of the House races.
Democracy is flawed because humans are shallow and superficial. Maybe blind voters make better decisions. Anyone up for restricting the voting franchise to the blind only? Ugly talented candidates would fare much better. Think about it.
“Inferences of competence not only predicted the winner but also were linearly related to the margin of victory,” the scientists wrote.
This will obviously lead to political parties using focus groups to screen potential candidates for perceived competence. So expect politicians of the future to look more competent on average as a result of recruitment efforts to attract more competent-looking candidates.
In the second paper, Leslie Zebrowitz, of Brandeis University in Massachusetts, said that the results appeared to reflect the relative “baby-facedness” of the candidates.
Previous research has shown that people of any age who appear baby-faced, with a round face, large eyes, a small nose, a high forehead and a small chin, tend to be rated as less competent — though often as more trustworthy as well. “Although the study doesn’t tell us exactly what competence is — there are many kinds, including physical strength, social dominance and intellectual shrewdness — baby-faced people are perceived to be lacking in all these qualities,” Dr Zebrowitz said.
People do routinely "judge a book by its cover" in spite of countless admonitions not to. (Have those admonitions helped at all?)
Despite the age-old admonition not to "judge a book by its cover," we routinely make important judgments about human traits based on instant, superficial impressions of peoples' faces. Such "blink of an eye" decision-making predicted the outcome of about 70 percent of recent U.S. Senate races, according to a new study in Science this week.
According to the study, candidates who looked "competent" prevailed in congressional elections more than two-thirds of the time. In a review of the study, Dr. Leslie Zebrowitz, a psychologist at Brandeis University, and Joann M. Montepare explain that the outcomes of the political races were likely due to differences in the opponents' "babyfacedness."
"Although the study doesn't tell us exactly what competence is – there are many kinds, including physical strength, social dominance and intellectual shrewdness – babyfaced people are perceived to be lacking in all these qualities," said Zebrowitz, a pioneering research scientist in the field of facial impressions and author of "Reading Faces: Window to the Soul?"
What facial qualities make someone look more babyfaced and less competent? Zebrowitz says that both babies and babyfaced adults, regardless of sex or ethnicity, share such features as a round face, large eyes, small nose, high forehead and small chin. Competency, on the other hand, is associated with facial maturity.
The babyfaced men might actually be the better choices in spite of the electorate's aversion to babyfaces in leaders.
In fact, studies by Zebrowitz and others have shown that babyfaced men are actually more intelligent, better educated, more assertive and apt to win more military medals than their mature-looking counterparts.
Research in the area of facial impressions has implications for political marketing, social decision-making and even the democratic process, Zebrowitz believes. "The data we have suggest that we're not necessarily electing better leaders – people who are actually more competent, though we are electing people who look the part."
Expect to see ambitious young business executives and politicians seeking out plastic surgeons to ask for modifications to make them look more competent.
Standing still when a threat is detected is a defensive, protective reaction. This ancestral and automatic behavior allows the prey to stay unnoticed by a potential predator. A new study published in Psychophysiology finds that humans, like many other complex animals, freeze when encountering a threat. The mere picture of an injured or mutilated human induces this reaction. When viewing these unpleasant images, the study’s participants froze as their heart rate decelerated and amount of their body sway reduced. The authors found that this abrupt reaction, so critical for the survival of some animals, has stayed with humans.
Forty-eight male volunteers stood barefoot on a stabilometric platform, which measures balance and body sway, and viewed twenty-four pictures from three categories: pleasant (sports), neutral (objects), and unpleasant (injured or mutilated humans). Posturographic and electrocardiographic recordings were collected. The authors found a significant reduction in body sway along with increased muscle stiffness following the unpleasant/mutilation block of pictures compared to the neutral pictures. The number of heartbeats per minute was also lower after viewing the mutilation pictures than after looking at the others. “This pattern resembles the ‘freezing’ and ‘fear bradycardia’ seen in many species when confronted with threatening stimuli, mediated by neural circuits that promote defensive survival,” author Eliane Volchan explains.
This suggests the threat of predators - whether human or animal - was a significant selective force even in later human evolution. Else the behavior likely would have been lost by now.
Even though Japan is already among the most frugal countries in the world, the government recently introduced a national campaign, urging the Japanese to replace their older appliances and buy hybrid vehicles, all part of a patriotic effort to save energy and fight global warming. And big companies are jumping on the bandwagon, counting on the moves to increase sales of their latest models.
On the Matsushita appliance showroom floor these days, the numbers scream not the low, low yen prices, but the low, low kilowatt-hours.
A vacuum-insulated refrigerator, which comes with a buzzer if the door stays open more than 30 seconds, boasts that it will use 160 kilowatt-hours a year, one-eighth of that needed by standard models a decade ago. An air-conditioner with a robotic dust filter cleaner proclaims it uses 884 kilowatt-hours, less than half of what decade-old ones consumed.
Japan produces far more economic output per unit of energy than other industrialized countries.
Japan's dependence on energy imports has prodded the nation into tremendous achievements in improved efficiency. France and Germany, where government crusades against global warming have become increasingly loud, expend almost 50 percent more energy to produce the equivalent of $1 in economic activity. Britain's energy use, on the same measure, is nearly double; the United States nearly triple; and China almost eight times as much.
From 1973 to today, Japan's industrial sector nearly tripled its output, but kept its energy consumption roughly flat. To produce the same industrial output as Japan, China consumes 11.5 times the energy.
But Japan's residential and transportation sectors have seen energy use roughly double over the same period. Rising affluence has allowed people to buy bigger cars, drive more miles, build bigger houses, and use more heating and air conditioning. Rising efficiency in appliances and cars has failed to prevent this trend.
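The country comparisons above amount to a table of energy intensities — energy consumed per dollar of economic output, relative to Japan. A minimal sketch, using only the multipliers quoted from the article (the absolute baseline is arbitrary):

```python
# Energy intensity relative to Japan (energy per $1 of economic output),
# using the multipliers from the article's comparison. Japan is the baseline.
japan = 1.0

relative_intensity = {
    "Japan": japan,
    "France/Germany": japan * 1.5,   # ~50 percent more energy per $1
    "Britain": japan * 2.0,          # nearly double
    "United States": japan * 3.0,    # nearly triple
    "China": japan * 8.0,            # almost eight times as much
}

for country, intensity in relative_intensity.items():
    print(f"{country}: {intensity:.1f}x Japan's energy per dollar of output")
```

Note the separate industrial-sector figure quoted later (China at 11.5x Japan) is higher than the economy-wide 8x, which is consistent: heavy industry is where China's inefficiency is most pronounced.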
Part of the differences in energy use between countries is a reflection of differences in residential home sizes. Large homes consume more energy for heating and cooling than small homes. Another part of the difference stems from the average distance between homes and work. Another part stems from average vehicle fuel efficiency. I think the odds of getting people in the United States to drive smaller cars and live in smaller homes closer to their jobs are pretty slim.
How much of Japan's increased energy efficiency could be copied by the United States without major changes in American lifestyles? That answer depends on many subquestions. For example, do the Japanese insulate their residences better than Americans do? If so, by how much? And how far is the average American house or apartment building from the ideal, most cost-effective level of insulation?
Thanks to Ergosphere's E-P for the NY Times article tip.
The political opportunity exists for selling energy conservation policies to the American people. An overwhelming majority of the American people want to find ways to reduce foreign oil use.
New Haven, Conn. - A new Yale University research survey of 1,000 adults nationwide reveals that while Americans are deeply divided on many issues, they overwhelmingly believe that the United States is too dependent on imported oil.
The survey shows a vast majority of the public also wants to see government action to develop new "clean" energy sources, including solar and wind power as well as hydrogen cars.
92% of Americans say that they are worried about dependence on foreign oil
93% of Americans want government to develop new energy technologies and require auto industry to make cars and trucks that get better gas mileage
The results underscore Americans' deep concerns about the country's current energy policies, particularly the nation's dependence on imported oil. Fully 92 percent say this dependence is a serious problem, while 68 percent say it is a "very serious" problem.
But keep in mind that most people do not want to be inconvenienced by policies designed to reduce oil dependence.
Energy policy reminds me of immigration policy as a subject area where the elites are lagging behind and ignoring the desires of the masses. I think most people understand that the world's increasing dependence on a country which won't allow women to drive and whose majority admires Osama Bin Laden is a bad thing. That country shows few signs of reforming.
ATLANTA - Why are some people shy while others are outgoing? A study in the current issue of Science demonstrates for the first time that social behavior may be shaped by differences in the length of seemingly non-functional DNA, sometimes referred to as junk DNA. The finding by researchers at the Yerkes National Primate Research Center of Emory University and the Atlanta-based Center for Behavioral Neuroscience (CBN) has implications for understanding human social behavior and disorders, such as autism.
In the study, Yerkes and former CBN graduate student Elizabeth A.D. Hammock, PhD, and Yerkes and CBN researcher Larry J. Young, PhD, also of the Department of Psychiatry and Behavioral Sciences at Emory University's School of Medicine, examined whether the junk DNA, more formally known as microsatellite DNA, associated with the vasopressin receptor gene affects social behavior in male prairie voles, a rodent species. Previous studies, including Dr. Young's gene-manipulation study reported in Nature's June 17, 2004, issue, have shown the vasopressin receptor gene regulates social behaviors in many species.
The researchers bred two groups of prairie voles with short and long versions of the junk DNA. By comparing the behavior of male offspring after they matured, they discovered microsatellite length affects gene expression patterns in the brain. In the prairie voles, males with long microsatellites had higher levels of vasopressin receptors in brain areas involved in social behavior and parental care, particularly the olfactory bulb and lateral septum. These males spent more time investigating social odors and approached strangers more quickly. They also were more likely to form bonds with mates, and they spent more time nurturing their offspring.
I picture women who want their men to stay faithful some day surreptitiously injecting gene therapy into neck arteries of sleeping boyfriends or husbands to reprogram their microsatellite DNA to longer lengths around the vasopressin gene. And here's the twist: If the guy discovers he has been reprogrammed by his woman he'll be so attached to her that he won't want to leave her because of it.
"This is the first study to demonstrate a link between microsatellite length, gene expression patterns in the brain and social behavior across several species," said Young. "Because a significant portion of the human genome consists of junk DNA and due to the way microsatellite DNA expands and contracts over time, microsatellites may represent a previously unknown factor in social diversity."
Hammock and Young's finding extends beyond social diversity in rodents to that in apes and humans. Chimpanzees and bonobos, humans' closest relatives, have the vasopressin receptor gene, yet only the bonobo, which has been called the most empathetic ape, has a microsatellite similar to that of humans. According to Yerkes researcher Frans de Waal, PhD, "That this specific microsatellite is missing from the chimpanzee's DNA may mean the last common ancestor of humans and apes was socially more like the bonobo and less like the relatively aggressive and dominance-oriented chimpanzee."
The researchers' finding also has set a clear course for the next step. They want to build upon previous studies that identified a microsatellite sequence in the human vasopressin receptor that varies in length. "The variability in the microsatellite could account for some of the diversity in human social personality traits," explains Hammock. "For example, it may help explain why some people are naturally gregarious while others are shy." In particular, Young wants his research team to expound upon studies that have identified a link with autism.
Research in prairie voles provided evidence about vasopressin's effects on pair bonding that led to the discovery that pair bonding in humans involves some of the same brain areas seen in the voles. It is not far-fetched to take the discovery of the role of vasopressin receptor microsatellite DNA in prairie vole behavior as a reason to look for similar DNA playing a regulatory role in human behavior.
The researchers first showed in cell cultures that the vole vasopressin receptor microsatellites could modify gene expression. Next, they bred two strains of a monogamous species, the prairie vole – one with a long version of the microsatellites and the other with a short version. Adult male offspring with the long version had more vasopressin receptors in brain areas involved in social behavior and parenting (olfactory bulb and lateral septum). They also checked out female odors and greeted strangers more readily and were more apt to form pair bonds and nurture their young.
"If you think of brain circuits as locked rooms, the vasopressin receptor as a lock on the door, and vasopressin as the key that fits it, only those circuits that have the receptors can be 'opened' or influenced by the hormone," added Hammock. "An animal's response to vasopressin thus depends upon which rooms have the locks and our research shows that the distribution of the receptors is determined by the length of the microsatellites."
Prairie voles with the long version have more receptors in circuits for social recognition, so release of vasopressin during social encounters facilitates social behavior. If such familial traits are adaptive in a given environment, they are passed along to future generations through natural selection.
Variability in vasopressin receptor microsatellite length could help account for differences in normal human personality traits, such as shyness, and perhaps influence disorders of sociability like autism and social anxiety disorders, suggest the researchers.
Will humans choose to biologically engineer their male offspring to be much more social? If future generations of men want to gossip endlessly about human relations this could be a problem for those of us with natural male brains. If due to rejuvenation therapies I live to see society dominated by highly social males I'm going to found a club of old style men who can hang around and talk about cars or airplanes or anything else for that matter. Or better yet, go through long periods of not talking at all.
Adult stem cells are hard to grow. But MIT Whitehead Institute researchers have discovered that turning on a gene that is active in the early embryo causes adult stem cells to grow rapidly.
While research on human embryonic stem cells gets most of the press, scientists are also investigating the potential therapeutic uses of adult stem cells. Although less controversial, this research faces other difficulties. Adult stem cells are extremely difficult to isolate and multiply in the lab.
Now, as reported in the May 6 issue of Cell, researchers led by Rudolf Jaenisch of the Whitehead Institute have discovered a mechanism that might enable scientists to multiply adult stem cells quickly and efficiently.
"These findings provide us with a new way of looking at adult stem cells and for possibly exploiting their therapeutic potential," says Jaenisch, who also is a professor of biology at MIT.
I have repeatedly argued that it is just a matter of time before scientists find ways to turn adult stem cells into cells that can become any other cell type. This latest research from MIT is certainly a step in that direction. Note that these scientists used existing knowledge that the gene Oct4 is known to be active in embryonic stem cells. They turned that same gene on in adult stem cells. So this research is a clear step in the direction of making adult stem cells more like embryonic stem cells.
This research focuses on a gene called Oct4, a molecule that is known to be active in the early embryonic stage of an organism. Oct4's primary function is to keep an embryo in an immature state. It acts as a gatekeeper, preventing the cells in the embryo from differentiating into tissue-specific cells. While Oct4 is operating, all the cells in the embryo remain identical, but when Oct4 shuts off, the cells begin growing into, say, heart or liver tissue.
Konrad Hochedlinger, a postdoctoral researcher in Jaenisch's lab, was experimenting with the Oct4 gene, curious to see what would happen in laboratory mice when the gene was reactivated in adult tissue in which it had long been dormant. Hochedlinger found that when he switched the gene on, the mice immediately formed tumors in the gut and in the skin where the gene was active. When he switched the gene off, the tumors subsided, demonstrating that the process is reversible.
Discovering that simply flipping a single gene on and off had such an immediate effect on a tumor was unexpected, even though Oct4 is known to be active in certain forms of testicular and ovarian cancer. Still, the most provocative finding was that "Oct4 causes tumors by preventing adult stem cells in these tissues from differentiating," says Hochedlinger. In other words, with Oct4 active, the stem cells could replicate themselves indefinitely, but could not produce mature tissue.
One of the main obstacles with adult stem cell research is that adult stem cells are extremely difficult to isolate and multiply in the lab.
This experiment showed that when Oct4 was reactivated, the adult stem cells in those tissues continued to replicate without forming mature tissue. In a mammal's body, this type of cell behavior causes tumors. But under the right laboratory conditions, it could be a powerful tool.
"This may allow you to expand adult stem cells for therapy," Hochedlinger said. "For instance, you could remove a person's skin tissue, put it in a dish, isolate the skin stem cells, then subject it to an environment that activates Oct4. This would cause the cells to multiply yet remain in their stem cell state. And because this process is reversible, after you have a critical mass of these cells, you can then place them back into the person where they would grow into healthy tissue."
"This could be very beneficial for burn victims," Jaenisch said.
The difference between adult and embryonic stem cells is just that they are in different regulatory states. Think of the genome of a cell as having a big set of switches on it with the pattern of which switches are set On and Off being one way in embryonic cells and other ways in other cell types. One reason I haven't been pessimistic about limitations on human embryonic stem research is that I expect scientists working with adult stem cells to find ways to change their regulatory state (i.e. their pattern of On and Off for their genetic switches) into the same states as is found in embryonic stem cells.
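The switch analogy above can be made concrete with a toy model: represent a cell's regulatory state as a set of on/off flags and flip the Oct4 switch. The gene names other than Oct4 are illustrative placeholders, not a real regulatory network.

```python
# Toy model of the "genetic switches" analogy: a cell's regulatory state as
# on/off flags. Only Oct4 is a real gene from the study; the other switch
# names are hypothetical stand-ins for tissue-specific genes.
adult_skin_stem_state = {"Oct4": False, "skin_keratin": True, "liver_albumin": False}

def flip_switch(state, gene, on):
    """Return a copy of the regulatory state with one switch set on or off."""
    new_state = dict(state)
    new_state[gene] = on
    return new_state

# Switching Oct4 on holds the cell in an undifferentiated, self-replicating
# state; switching it back off lets differentiation resume — mirroring the
# reversibility Hochedlinger observed when tumors subsided in mice.
activated = flip_switch(adult_skin_stem_state, "Oct4", True)
reverted = flip_switch(activated, "Oct4", False)
print(activated["Oct4"], reverted["Oct4"])  # True False
```

The real problem, of course, is that the embryonic state differs from an adult stem cell state in a great many switches at once, which is why reprogramming is hard.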
The irony of religious opposition to human embryonic stem cell research is that it logically leads scientists to look harder for ways to make adult stem cells act more like embryonic stem cells. The inevitable outcome of this search will be the development of techniques that convert adult stem cells into cells that can be turned into all other cell types - just as embryonic stem cells can. The ability of embryonic stem cells to turn into all other cell types is called pluripotency. Once non-embryonic stem cells can be made pluripotent then the religious opponents of human embryonic stem cell research are going to have to decide whether they believe all pluripotent stem cells are mini-humans or not.
Even if work with all human pluripotent stem cells is outlawed, regardless of what cell type is converted into the pluripotent state or how it is converted, that still won't stop scientists from manipulating adult stem cells into all other cell types. Such a ban would be just another regulatory barrier that could be programmed around with genetic engineering. Scientists could respond to that ban by finding some difference between fully pluripotent cells and slightly differentiated cells and converting cells into slightly differentiated (i.e. slightly specialized) states rather than into the fully pluripotent state.
So far the acrimonious debate about human embryonic stem cells has caused a big increase in adult stem cell funding by the US federal government (over $500 million per year in total stem cell research funding) and the passage of an initiative in California to spend $300 million per year on stem cells without a restriction on human embryonic funding. So obviously funding levels have risen greatly. The rate of advance is accelerating. But I'd like to see even greater acrimonious debate so that we can get total funding over $1 billion. Come on, get mad at each other. We need more funding!
When I was in grade school and high school I always wanted to stay up later and sleep later than school schedules allowed. My mind was always better later in the day and in the evening. Well, a new study vindicates my feeling at the time that school was not set up for my circadian cycle. Kids are being forced to wake up and get going about 2 hours too early.
Current high school start times deprive adolescents of sleep and force students to perform academically in the early morning, a time of day when they are at their worst, according to a study in the June issue of the journal Pediatrics. Results from high school senior sleep/wake diaries kept for the study also showed that adolescents lost as much as two hours of sleep per night during the school week, but weekend sleep times during the school year were similar to those in summer.
Advanced placement biology students were recruited for the study. So the kids do not sound like they were slackers.
The study was a collaborative project involving researchers at the Feinberg School of Medicine and the Center for Sleep and Circadian Biology at Northwestern University and faculty, students and parents from Evanston Township High School, Evanston, Ill. The students were advanced placement biology students who helped conduct the study and analyze the collected data.
Martha Hansen, advanced placement biology teacher and current science department chair at Evanston Township High School, headed the project in collaboration with Margarita L. Dubocovich, professor of molecular pharmacology and biological chemistry and of psychiatry and behavioral sciences, Feinberg; and Phyllis C. Zee, M.D., professor of neurology, Feinberg.
The study assessed the impact of sleep loss after the start of school on cognitive performance and mood and examined the relationship of weekday to weekend sleep in adolescents.
The study also showed that
exposure to bright light in the morning did not modify students' sleep-wake cycle or improve daytime performance during weekdays probably because of their strict school schedule. All students performed better in the afternoon than in the morning. Students in early morning classes reported being wearier, less alert and having to expend greater effort. Potential solutions to this problem could be solved by changing school start times and by giving standardized tests later in the day, the authors suggested.
For example, classes at Evanston Township High School start at 8:05 a.m. and run until 3:35 p.m. – one of the longest school days in Illinois. Many high schools in the country have start times of 7:15 or 7:30 a.m. In addition, almost all standardized tests in high school begin at 8 a.m.
Since this is when adolescents show their poorest performance levels, a change is clearly needed and would be relatively easy to negotiate, the researchers suggest.
Technology should be used to allow kids to adjust their learning schedules to their body's circadian rhythms. The use of pre-recorded high quality and high resolution lectures would allow kids to watch lectures on difficult subjects when their minds feel keen enough to handle difficult material. Our current regimented method of marching kids through a series of fixed time length classes strikes me as a hold-over from the factory era. Lecture delivery could be done electronically at any time of the day or night. A kid who has a hankering to just listen to hours of biology on one day and hours of history on another day ought to be able to do that as long as all the needed material is viewed. Or if the kid wants to watch physics lectures only after 9 PM then make it easy to do so.
I can even picture electronic methods to detect whether each kid paid attention to n hours of biology lectures and m hours of calculus lectures. Biometric scanning equipment attached to a device that plays lecture videos could track whether each kid has watched each lecture. Or kids could have to sit for automatically delivered tests to monitor their progress.
Kids could even win greater flexibility in the use of their time by meeting testing goals. A kid who manages to, say, test as being a month ahead of schedule could be allowed to spend more hours listening to music, watching movies, playing video games, or pursuing other activities. We should make education less like a planned regimented socialist economy and give kids ways to earn greater control of their time. I bet many kids would learn more rapidly and also be happier about learning.
Ashkenazi Jews pose two mysteries for biological science. First, why do they have so many genetic diseases that fall into just a few categories of metabolic function, such as the sphingolipid storage diseases Tay-Sachs, Gaucher, Niemann-Pick, and mucolipidosis type IV? The rates of these diseases are so high that their incidence must be the result of either a recent genetic bottleneck, when the Ashkenazi population was very small, or natural selective pressures that favored these genotypes because of advantages they confer on some other phenotype(s). The second mystery is why are Jews so smart? Granted, a lot of Jews want to argue that they are just studious due to their culture. Also, lots of ideologues - particularly on the political Left - stand ready to attack anyone who argues that ethnic and racial groups differ in average intelligence. But the higher average level of Ashkenazi Jewish intelligence is so glaringly obvious that I figure anyone who tries to argue otherwise is either engaged in intellectual con artistry or is ignorant or foolish. So again, why are Jews so smart?
Well, three researchers at the University of Utah, anthropologist Henry Harpending, Gregory Cochran (a Ph.D. physicist turned genetic theorist), and Jason Hardy put forth a hypothesis that seeks to explain both mysteries simultaneously. Nicholas Wade of the New York Times has written one of the two news stories about it to date. The proposed hypothesis holds that Jews developed their genetic diseases as a side effect of strong selective pressures for higher intelligence during the Middle Ages as they were forced to work mainly in occupations that required greater cognitive ability. (same article here)
A team of scientists at the University of Utah has proposed that the unusual pattern of genetic diseases seen among Jews of central or northern European origin, or Ashkenazim, is the result of natural selection for enhanced intellectual ability.
The selective force was the restriction of Ashkenazim in medieval Europe to occupations that required more than usual mental agility, the researchers say in a paper that has been accepted by the Journal of Biosocial Science, published by Cambridge University Press in England.
The Economist has the other article about this research paper. The distribution of the Jewish genetic diseases is clustered too heavily into a few areas of genetic functionality. This concentration of mutations argues for selective pressure as the logical explanation for the rate of occurrence of these mutations in Ashkenazi Jews.
What can, however, be shown from the historical records is that European Jews at the top of their professions in the Middle Ages raised more children to adulthood than those at the bottom. Of course, that was true of successful gentiles as well. But in the Middle Ages, success in Christian society tended to be violently aristocratic (warfare and land), rather than peacefully meritocratic (banking and trade).
Put these two things together—a correlation of intelligence and success, and a correlation of success and fecundity—and you have circumstances that favour the spread of genes that enhance intelligence. The questions are, do such genes exist, and what are they if they do? Dr Cochran thinks they do exist, and that they are exactly the genes that cause the inherited diseases which afflict Ashkenazi society.
That small, reproductively isolated groups of people are susceptible to genetic disease is well known. Constant mating with even distant relatives reduces genetic diversity, and some disease genes will thus, randomly, become more common. But the very randomness of this process means there should be no discernible pattern about which disease genes increase in frequency. In the case of Ashkenazim, Dr Cochran argues, this is not the case. Most of the dozen or so disease genes that are common in them belong to one of two types: they are involved either in the storage in nerve cells of special fats called sphingolipids, which form part of the insulating outer sheaths that allow nerve cells to transmit electrical signals, or in DNA repair. The former genes cause neurological diseases, such as Tay-Sachs, Gaucher's and Niemann-Pick. The latter cause cancer.
That does not look random. And what is even less random is that in several cases the genes for particular diseases come in different varieties, each the result of an independent original mutation. This really does suggest the mutated genes are being preserved by natural selection. But it does not answer the question of how evolution can favour genetic diseases. However, in certain circumstances, evolution can.
Greg has referred to this hypothesis as "overclocking". The analogy is to overclocking computer processors (central processing units or CPUs). Some hobbyists turn up the clocks on their desktop PCs to make them run faster than they were designed to run. This can cause system instability and other problems. In the case of the Ashkenazim in Europe the hypothesis proposes that selective pressures for higher intelligence were so strong that they drove the propagation of mutations that pushed intelligence up so quickly (evolutionarily speaking) that the selective pressure overrode the reduction in reproductive fitness caused by the deleterious side effects in some of those who received those mutations. The problem with overclocking is that "Sometimes you get away with it, sometimes you don't."
But I'll hazard a guess: the change accelerates some brain system tied to cognitive functioning - nearly redlines it, leaves it vulnerable to common insults in a way that can cause spectacular trouble. You might compare to overclocking a chip. Sometimes you get away with it, sometimes you don't.
More generally, if this is what I think it is, all these Ashkenazi neurological diseases are hints of ways in which one could supercharge intelligence. One, by increasing dendrite growth: two, by fooling with myelin: three, something else, whatever is happening in torsion dystonia. In some cases the difference is probably an aspect of development, not something you can turn on and off. In other cases, the effect might exist when the chemical influence is acting and disappear when the influence does. In either case, it seems likely that we could - if we wanted to - develop pharmaceutical agents that had similar effects. The first kind, those affecting development, would be something that might have to be administered early in life, maybe before birth, while the second kind would be 'smart pills' that one could pop as desired or as needed. Of course, we have to hope that we can find ways of improving safety. Would you take a pill that increased your IQ by 10 or 15 points but also had a 10% chance of putting you in a wheelchair?
Looked at from this perspective many Jews have paid and continue to pay a high price from the effects of mutations that "overclock" their brains.
This hypothesis cries out to be tested because if it is proven then, as Greg points out, these mutations point in directions for research aimed at raising human intelligence. Drugs or gene therapies that raise intelligence would have enormous economic value and one can even put a price tag on the value of higher intelligence. However, such calculations understate the economic value of higher intelligence because most of the value of scientific and technological knowledge produced by high IQ people flows to lower IQ people.
The paper is downloadable as a 40 page PDF (on big PDFs I get better results downloading to a file and then opening rather than running Acrobat Reader from within a browser).
This paper elaborates the hypothesis that the unique demography and sociology of Ashkenazim in medieval Europe selected for intelligence. Ashkenazi literacy, economic specialization, and closure to inward gene flow led to a social environment in which there was high fitness payoff to intelligence, specifically verbal and mathematical intelligence but not spatial ability. As with any regime of strong directional selection on a quantitative trait, genetic variants that were otherwise fitness reducing rose in frequency. In particular we propose that the well-known clusters of Ashkenazi genetic diseases, the sphingolipid cluster and the DNA repair cluster in particular, increase intelligence in heterozygotes. Other Ashkenazi disorders are known to increase intelligence. Although these disorders have been attributed to a bottleneck in Ashkenazi history and consequent genetic drift, there is no evidence of any bottleneck. Gene frequencies at a large number of autosomal loci show that if there was a bottleneck then subsequent gene flow from Europeans must have been very large, obliterating the effects of any bottleneck. The clustering of the disorders in only a few pathways and the presence at elevated frequency of more than one deleterious allele at many of them could not have been produced by drift. Instead these are signatures of strong and recent natural selection.
Their argument against a population bottleneck is key to their larger argument. Dismissal of the bottleneck explanation leads inevitably to the conclusion that the frequency of these disease-causing mutations must be the result of selective pressure. If they are the result of selective pressure then the next obvious question is: what was being selected for? Cochran, Harpending, and Hardy claim higher intelligence increased reproductive fitness for Jews in medieval Europe, who were legally barred from most occupations that had less need for intelligence. Simultaneously Jews were allowed to work in more cognitively demanding occupations involving money handling even as the Catholic Church banned Christians from many of those same occupations.
They take their argument all the way down to the molecular level and argue that the sphingolipid mutations in some of the Jewish genetic diseases boost glucosylceramide which in turn boosts neural axon growth.
The sphingolipid storage mutations were probably favored and became common because of natural selection, yet we don’t see them in adjacent populations. We suggest that this is because the social niche favoring intelligence was key, rather than geographic location. It is unlikely that these mutations led to disease resistance in heterozygotes for two reasons. First, there is no real evidence for any disease resistance in heterozygotes (claims of TB resistance are unsupported) and most of the candidate serious diseases (smallpox, TB, bubonic plague, diarrheal diseases) affected the neighboring populations, that is people living literally across the street, as well as the Ashkenazim. Second and most important, the sphingolipid mutations look like IQ boosters. The key datum is the effect of increased levels of the storage compounds. Glucosylceramide, the Gaucher storage compound, promotes axonal growth and branching (Schwartz et al., 1995). In vitro, decreased glucosylceramide results in stunted neurons with short axons while an increase over normal levels (caused by chemically inhibiting glucocerebrosidase) increases axon length and branching. There is a similar effect in Tay-Sachs (Walkley et al., 2000; Walkley, 2003): decreased levels of GM2 ganglioside inhibit dendrite growth, while an increase over normal levels causes a marked increase in dendritogenesis. This increased dendritogenesis also occurs in Niemann-Pick type A cells, and in animal models of Tay- Sachs and Niemann-Pick.
Figure 1, from Schwartz et al. (1995) shows the effect of glucosylceramide, the sphingolipid that accumulates in Gaucher disease. These camera lucida drawings of cultured rat hippocampal neurons show the effect of fumonisin, which inhibits glucosylceramide synthesis, and of conduritol B-epoxide (CBE) which inhibits lysosomal glycocerebrosidase and leads to the accumulation of glucosylceramide, thus mimicking Gaucher disease. Decreased levels of glucosylceramide stunt neural growth, while increased levels caused increased axonal growth and branching.
Dendritogenesis appears to be a necessary step in learning. Associative learning in mice significantly increases hippocampal dendritic spine density (Leuner et al., 2003), while enriched environments are also known to increase dendrite density (Holloway, 1966). It is likely that a tendency to increased dendritogenesis (in Tay-Sachs and Niemann-Pick heterozygotes) or to increased axonal growth and branching (in Gaucher heterozygotes) facilitates learning.
Heterozygotes have half the normal amount of the lysosomal hydrolases and should show modest elevations of the sphingolipid storage compounds. A prediction is that Gaucher, Tay-Sachs, and Niemann-Pick heterozygotes will have higher tested IQ than control groups, probably on the order of 5 points.
We do have strong but indirect evidence that one of these, Gaucher disease, does indeed increase IQ. Professor Ari Zimran, who heads the Gaucher Clinic at the Shaare Zedek Medical Centre in Jerusalem, furnished us a list of occupations of 302 Gaucher patients. Because of the Israeli medical care system, these are essentially all the Gaucher patients in the country. Of the 255 patients who are not retired and not students, 81 are in occupations that ordinarily average IQ’s greater than 120. There are 13 academics, 23 engineers, 14 scientists, and 31 in other high IQ occupations like accountants, physicians, or lawyers. The government of Israel states that 1.35% of Israel's working-age population are engineers or scientists, while in the Gaucher patient sample 37/255 or 15% are engineers or scientists. Since Ashkenazim make up 60% of the workforce in Israel, a conservative base rate for engineers and scientists among Ashkenazim is 2.25% assuming that all engineers and scientists are Ashkenazim. With this rate, we expect 6 in our sample and we observe 37. The probability of 37 or more scientists and engineers in our sample, given a base rate of 2.25%, is approximately 4 × 10^-19. There are 5 physicists in the sample, while there is an equal number, 5, of unskilled workers. In the United States the fraction of people with undergraduate or higher degrees in physics is about one in one thousand. If this fraction applies even approximately to Israel the expected number of physicists in our sample is 0.25 while we observe 5. Gaucher patients are clearly a very high IQ subsample of the general population.
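That tail probability is easy to check with a straightforward binomial calculation. The sketch below is mine, not from the paper, but it uses only the numbers quoted above (255 working patients, a 2.25% base rate, 37 observed):

```python
from math import comb

# P(X >= 37) for X ~ Binomial(n=255, p=0.0225): the chance of seeing 37 or
# more engineers and scientists among 255 Gaucher patients if they were
# drawn from a population with the 2.25% Ashkenazi base rate.
n, p, k = 255, 0.0225, 37
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"{tail:.1e}")  # on the order of 4e-19, matching the paper's figure
```

The expected count under the base rate is n·p ≈ 5.7, so observing 37 is an enormous excess, which is why the probability is so vanishingly small.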
Are there Ashkenazi mutations other than these sphingolipid storage disorders that likely became common because of strong selection for IQ? There are several candidates.
Ever since torsion dystonia among the Ashkenazim was first recognized, observers have commented on the unusual intelligence of patients. Flatau and Sterling (Eldridge, 1976) describe their first patient as showing “an intellectual development far exceeding his age”, and their second patient as showing “extraordinary mental development for his age.” At least ten other reports in the literature have made similar comments. Eldridge (1970, 1976) studied 14 Jewish torsion dystonia patients: he found that their average IQ before the onset of symptoms was 121, compared to an average score of 111 in a control group of 14 unrelated Jewish children matched for age, sex, and school district. Riklan and colleagues found that 15 Jewish patients with no family history of dystonia (typical of DYT1 dystonia) had an average verbal IQ of 117 (Eldridge, 1979; Riklan et al., 1976).
If this hypothesis is correct (and I believe it is) then it is problematic for efforts to raise human intelligence. How many of the intelligence raising genetic variants bring undesirable side effects? Some scientists speculate that assortative mating of high IQ people is contributing to a rising incidence of autism and Asperger's Syndrome. As smart people become more likely to breed with other smart people the odds increase that pairs of autosomal recessives or other problematic combinations of intelligence boosting genes will be inherited by offspring.
Has human intelligence been selected for so rapidly in the last couple of thousand years that a large portion of all intelligence boosting mutations have undesirable side effects? When a selective pressure is strong, early adaptations will have side effects. Henry Harpending explained in the gnxp.com thread on this subject:
Re mechanism: The argument (well known to breeders where there is no argument) goes like this:
In a drastic new environment there is big fitness payoff to IQ. In this new environment there is a payoff to "turning down" BRCA1 to free up early CNS development but at the cost of higher cancer rates later in life. Eventually, especially in a big population, a BRCA1 variant with the optimum activity will show up. Meanwhile carriers of one normal and one broken BRCA1 gene have a big fitness advantage because they have, say, 90% of normal suppression of early CNS development. So the broken BRCA1 allele is favored by selection even though homozygotes for it die. After a long time it would be replaced by the optimum allele but it takes a long time for that optimum allele to show up.
Exactly this argument applies to myostatin in several European breeds of beef cattle: it causes muscle hypertrophy and obstetric difficulties. The muscle hypertrophy is good but the obstetric difficulties require veterinarians and in the wild would have been lethal.
Re the implications of our model for eugenics, yes, big time, eugenics is IMHO a route to disaster. Well understood engineered gene introductions could be fine but eugenics would be almost certain to bring all kinds of nightmares.
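Harpending's breeder argument is standard heterozygote-advantage (overdominance) selection, and it is easy to see numerically. In this toy model of mine (parameters invented for illustration, nothing here is fitted to BRCA1 or any real allele), a variant that is lethal when homozygous still spreads to a stable equilibrium if carriers get a 5% fitness edge:

```python
# One-locus selection model: A = normal allele, a = "broken" allele.
# Fitnesses: AA -> 1, Aa -> 1 + s (carrier advantage), aa -> 0 (lethal).

def next_freq(q, s):
    """Frequency of allele a after one generation of selection."""
    p = 1.0 - q
    w_bar = p * p + 2 * p * q * (1.0 + s)   # aa homozygotes contribute nothing
    return p * q * (1.0 + s) / w_bar

q = 0.001  # the allele starts rare, as a new mutation would
for _ in range(1000):
    q = next_freq(q, 0.05)  # 5% carrier advantage
print(round(q, 3))  # → 0.045, the equilibrium s/(1+2s)
```

Despite killing every homozygote, the allele settles at a few percent frequency, which is roughly the range of carrier frequencies actually reported for Tay-Sachs and similar alleles among the Ashkenazim.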
But keep in mind that the human race already has many genetic variations to choose from that contribute to determining cognitive ability. A massive comparison of DNA sequence information between hundreds of thousands of people combined with IQ testing and collection of a lot of life history and medical history information could demonstrate many of the positive and negative effects of each genetic variation which affects cognitive function. Likely some will be better optimized to provide a cognitive boost without much downside.
Advances in biotechnology will provide ways to avoid some of the harmful side effects of these "overclocking" mutations. One way to accomplish this would be to discover regulatory regions in the genome that could be harnessed to selectively turn on the mutated genes only in the nervous system and turn on normal versions of these genes only in cells outside of the nervous system.
I've got to state the obvious because the obvious is politically incorrect: If smart people have more babies than dumb people the average IQ will rise. If dumb people have more babies than smart people then the average IQ will drop. I'm guessing the latter is currently happening. Bummer dudes.
Proof of this hypothesis would point scientists in the direction of genes to look at for intelligence enhancement. For example, if the mutation for Gaucher's disease causes an IQ boost then drugs that increase the level of glucosylceramide in neurons might accelerate learning by increasing the rate of axon growth to connect neurons to each other.
The hypothesis could be tested fairly rapidly. Recruit some thousands of Ashkenazi Jews to take IQ tests and to have a few dozen genes tested for assorted genetic variations. Compare the IQ test results to the genetic tests and see whether the known genetic variations in sphingolipid storage metabolism, DNA repair, and several other categories account for a large proportion of the variation in Ashkenazi Jewish IQ.
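To get a feel for the sample sizes such a study would need, here is a simulated version of the carrier-versus-non-carrier comparison. Everything below is invented for illustration; it simply assumes the paper's predicted ~5 point carrier advantage and the usual 15 point IQ standard deviation:

```python
import random
import statistics

random.seed(1)  # deterministic toy data

# Hypothetical recruits: 500 carriers and 2,500 non-carriers, with the
# predicted 5-point IQ shift for carriers.
carriers    = [random.gauss(105, 15) for _ in range(500)]
noncarriers = [random.gauss(100, 15) for _ in range(2500)]

diff = statistics.mean(carriers) - statistics.mean(noncarriers)
se = (statistics.pstdev(carriers) ** 2 / len(carriers)
      + statistics.pstdev(noncarriers) ** 2 / len(noncarriers)) ** 0.5
print(round(diff / se, 1))  # z-score; comfortably above ~2, so detectable
```

With a few thousand genotyped test-takers the predicted 5-point effect stands out well above sampling noise, which is why the test really could be run fairly rapidly.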
We also need to find out whether these various potential intelligence boosting mutations have differing effects from each other on other aspects of cognition. Anyone recruited into testing the hypothesis should also have information collected on their mental health, personality, preferences, values, educational history, occupation, income, criminal record, and anything else that might provide clues as to the effects of these mutations on cognitive function. For example, do some IQ-boosting mutations favor a career in law whereas others favor a career in medicine or science or math?
Jewish efforts to avoid passing along genes that have harmful effects might be lowering average Jewish intelligence. Some of the genetic variants (e.g. the genes underlying Tay-Sachs, Gaucher, and Niemann-Pick diseases) are autosomal recessive and therefore cause diseases only when a person has two copies of them. If having single copies of these genetic variations boosts intelligence but Jewish couples engage in practices that reduce the number of copies they pass along in general (e.g. by using pre-implantation genetic diagnosis to choose an embryo that has 0 copies of a mutation) then that will reduce the number of Jewish babies born with single copies and therefore if the hypothesis is correct then the resulting babies will be less bright than the average Ashkenazi Jew.
Should the hypothesis be proven then Jewish breeding practices could be adjusted to maximize the benefit of intelligence boosting genetic variations while avoiding the harmful effects. Ideally each child should get one and only one copy of each genetic variation that is autosomal recessive for diseases. Get the intelligence boosting benefit of a single copy while avoiding the diseases that come from having two copies. To execute this strategy a Jewish person would need to get genetically tested and then look for a mate who has complementary mutations for higher intelligence.
If each member of a couple carries one copy of an autosomal recessive mutation then, on average, 2 out of 4 pregnancies they start will have exactly the 1 desired copy of the recessive mutation. But 1 in 4 pregnancies will have 2 copies and hence would result in genetic disease, and the remaining 1 in 4 would have no copy of the IQ-boosting mutation and hence would not be as smart. If the couple each carry the same 3 different autosomal recessive mutations that each boost IQ then the odds of getting a baby that has exactly 1 copy of each of the 3 mutations is only 1 in 8.
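The fractions in that paragraph are just a Punnett-square enumeration. A quick sketch (mine, purely to check the arithmetic) confirms the 1-in-2 per-locus odds and the 1-in-8 figure for three loci:

```python
from fractions import Fraction
from itertools import product

# Both parents are Aa carriers: each passes 'A' or 'a' with equal chance,
# so the four equally likely offspring genotypes are AA, Aa, aA, aa.
offspring = [a + b for a, b in product("Aa", repeat=2)]
one_copy = Fraction(sum(g.count("a") == 1 for g in offspring), len(offspring))

print(one_copy)       # 1/2 -- exactly one copy at a single locus
print(one_copy ** 3)  # 1/8 -- exactly one copy at all three independent loci
```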
The low odds of getting all the desired mutations with the optimal number of copies of each mutation pose a big problem for aspiring eugenicists, whether Jewish or non-Jewish. One biotechnological approach to solving this problem would use microfluidics devices to separate and identify each chromosome from a cell in order to get exactly the set of chromosomes from each parent chosen for an optimal trade-off of cognitive ability and other qualities. Then somehow insert all those chromosomes back into a cell and kick it into an embryonic state. But we are probably 10 or 20 years away from having such a capability.
Every time a man or woman chooses someone to mate with they are making choices based on the appearances, status, demonstrated intelligence, and other qualities of that person. Women attracted to rock stars, movie stars, and sports stars are driven by genetically caused eugenic desires.
Use of genetic tests to choose a mate is already done to avoid passing on harmful mutations to offspring. This practice will become much more widespread as the significance of more genetic variations becomes known. The negative connotations associated with the term eugenics are already wearing off. As more people can derive benefits from the use of genetic information to guide reproductive decisions eugenic practices will become very widespread. When that happens the term eugenics may be replaced by a different term that effectively means the same thing. But regardless of what it gets called eugenics will become widely accepted and practiced.
Step back and look at Jewish and European history from the context of this hypothesis. A few things come to mind. First off, Middle Ages bans on Christian money lending created an environmental niche in which high IQ was selected for in Jews. This led to a few important historical consequences. It led to the financial and reproductive success of urban Jews and hence resentment against them by both elites and masses in Europe. This resentment of course led to pogroms and Hitler's "Final Solution". There's an old Japanese saying that comes to mind: "The nail that sticks up gets hammered down". Well, smart Jews stood out, and the response of jealousy and resentment against the more successful "other" is a recurring theme in human history.
But here's the twist: Catholic usury restrictions, by creating an environmental niche that selected for higher Ashkenazi IQs, therefore made possible the eventual return of Jews to Israel. An ethnic group of much lower intelligence never would have been able to pull off the creation and defense of a state in that location against such hostile neighbors.
The persecutions of Jews can also be seen in the context of successful minorities around the world. Yale law professor Amy Chua wrote a book about persecution of economically successful minorities entitled World on Fire: How Exporting Free Market Democracy Breeds Ethnic Hatred and Global Instability where she describes attacks in a number of countries (e.g. Indonesia) against Chinese and other groups that are minorities that are economically more successful than the majorities in countries where they live.
Suppose the successful minorities who are persecuted are successful as a result of genetically caused higher intelligence or perhaps due to other genetically controlled cognitive qualities. When this becomes proven scientifically and becomes widely known will the knowledge lead to more or less persecution of cognitively more able and more successful ethnic groups? For example, will Malaysians or Indonesians resent Chinese people even more if genetically caused higher intelligence in Chinese becomes the accepted explanation for greater Chinese economic success in Malaysia and Indonesia? Or will lower class people become more willing to accept their lots in life if group average differences in genetic endowments for cognitive ability are shown to be responsible for the bulk of inter-group differences in incomes, wealth, achievement, and status?
If you are interested in the evolution of human intelligence, the methods by which evolution has changed the human brain to make it smarter, or how changes in human societies can cause changes in natural selective pressure on human evolution then read this paper. If you are interested in the prospects for future intelligence enhancement then, again, read it. If you are interested in the causes of interracial conflict or in how religious and cultural practices can exert selective pressures on human populations then read it. If you want to dispute the hypothesis then read the full paper and examine their evidence before trying to disagree.
The savage persecutions suffered by Jews suggest that high intelligence can generate resentment among the masses. No doubt there will be some who will suggest that the Cochran-Harpending paper should have been suppressed to prevent awareness of the secret of Ashkenazi intelligence from seeping out.
But you have to be a true-blue intellectual to assume that the only way anybody would ever notice anything as obvious as Jewish brainpower is if it gets mentioned in the New York Times. Political correctness doesn't keep facts from being talked about—just from being written about in an intelligent, constructive manner.
Yes, everyone thinks Jews are smarter, even many people who publicly deny they believe this. Persecutions of smart minorities happen already. An honest, accurate discussion of the causes of resentment of smarter and more successful groups would, in my view, make it easier to ameliorate the causes of resentment between ethnic groups. I think people would be less prone to ascribe Jewish successes to conspiracies if Jews were accepted as being smarter for genetic reasons. High IQ genes cause higher intelligence. Higher intelligence increases productivity when learning and working. Hence greater wealth. That's a lot less reason for resentment than the idea that some group is no more productive but engages in conspiracies to take from others.
Steve thinks the Parsis have managed to achieve great success while generating less resentment from other groups.
On the other hand, the happier experience of another ethnic minority that may also have evolved stronger intellectual capacities under similar urban conditions—the prosperous Parsis of Bombay—may offer clues to mitigating envy.
I wonder if the Parsis were able to do this simply because India was broken up into so many castes that the Parsis had a hard time being noticed by the average Indian.
In any case, the Cochran-Harpending paper offers a fairly new but crucial perspective on the old nature and nurture question. The researchers have demonstrated that it's quite possible for nurture to change nature. Culture can drive heredity. Economics and social customs alter gene frequencies.
This is an incredibly important point. Currently genetic variations for higher intelligence are being selected against in industrialised societies. We've probably changed selective pressures in other ways as well, but at this point I can only guess at those changes. Are introverts or extroverts more or less likely to reproduce relative to each other than in the past? I don't know. Are genes for height being selected for? My guess is yes. Genes for obesity might be getting selected for in modern societies. I would have expected heavier people to have a harder time finding mates and hence be less likely to reproduce. But perhaps obese people are willing to settle for less desirable mates due to their own perceived lower attractiveness, and hence they spend less time searching for the ideal mate, start reproducing sooner, and reproduce in greater numbers.
Humanity has not escaped from natural selection. The genes that code for the brain are not immune to the pressures of natural selection. Anyone remember the tune "Elvis is everywhere"? Well, "Darwin is everywhere".
Given that modern genetic technology will soon make it easy to ensure that a child has exactly one copy of such a gene, it seems like this sort of thing is a “low-hanging fruit” for genetic engineering. If such a gene is common, then having one copy is probably an evolutionary advantage - otherwise it would be (mildly) selected against. The main disadvantage is the chance that your kids might end up with two copies of the recessive, so if technology can prevent that, now you just have upside. I wouldn’t be surprised if it soon becomes common to give your offspring many such genes - which would have the side effect of making it impossible for them to reproduce with similarly endowed partners without using genetic engineering.
The problem is that if you have many recessives that boost intelligence in a single copy, and your mate carries them too, then the odds would be very low that your offspring would avoid getting two copies of at least one of those genes. Given that these genes cause diseases when present in two copies, that would make natural reproduction very risky for offspring.
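The arithmetic behind that worry is straightforward. Here is a minimal sketch, assuming independent Mendelian inheritance at each locus and that both parents carry exactly one copy of the same set of recessive variants (the variant count is hypothetical, just for illustration):

```python
# For each locus where both parents are carriers, a child inherits two
# copies with probability 1/4, so the child escapes homozygosity at all
# n loci with probability (3/4)^n.

def p_at_least_one_homozygous(n_variants: int) -> float:
    """Probability a child is homozygous for at least one of n variants
    when both parents carry a single copy of each variant."""
    return 1 - 0.75 ** n_variants

for n in (1, 5, 10, 20):
    print(f"{n:2d} shared variants: "
          f"{p_at_least_one_homozygous(n):.1%} chance of a disease genotype")
```

With even ten shared variants the chance of at least one disease genotype is above 90 percent, which is the sense in which natural reproduction between similarly engineered partners becomes risky.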
The obvious solution to this problem is to adjust these genes with additional regulatory mechanisms to make them no longer cause diseases when two copies are present. But development of such regulatory mechanisms requires additional and difficult genetic engineering work.
March 2009 Update: This work has since become part of the basis of an excellent book by Cochran and Harpending entitled The 10,000 Year Explosion: How Civilization Accelerated Human Evolution.
A team including Michael Kosfeld and Ernst Fehr at the University of Zurich and oxytocin researcher Paul Zak of Claremont has found that oxytocin nasal spray makes people more trusting.
A Swiss-led research team tested their creation on volunteers playing an investment game for real money. When they inhaled the nasal spray, investors were more likely to hand over money to a trustee, knowing that, although they could make a hefty profit, they could also lose everything if the trustee decided not to give any of the money back.
The potion's magic ingredient is oxytocin, a chemical that is produced naturally in the brain. Its production is triggered by a range of stimuli, including sex and breastfeeding, and it is known to be important in the formation of social ties, such as mating pairs and parent-offspring bonds. It is perhaps no surprise that the compound has been nicknamed the 'love hormone'.
Note: Oxytocin is not the painkiller OxyContin.
"If I increase your level of oxytocin, I can induce you to overcome your anxiety in trusting a stranger," said Paul J. Zak, director of Claremont's center for neuroeconomic studies and a co-author of the research paper. "It is a [biochemical] signal we induce unknowingly all the time by looking people in the eye or shaking someone by the hand."
The effect of nasally administered oxytocin peaks in about 50 minutes and wears off about 2 hours after administration.
Neuroscientists, including the Swiss researchers, argue that oxytocin is not so much a trust serum as a kind of brain messenger that primes animals to overcome their natural aversion to others. It allows for what they call "approach behavior," that push to walk up to a stranger and say hello.
This may be an especially important ability in people with autism. Whether oxytocin or other hormones could affect such behavior is unclear, but the oxytocin study suggests it is worth investigating.
If oxytocin turns out to benefit painfully shy people then it is not hard to imagine many shy people electing to use it on themselves before entering social situations.
Paradoxically, Dr Fehr and his colleagues began the experiment because one of them believed that oxytocin signalled trustworthiness, rather than a propensity to trust.
The discovery is the first direct evidence that a hormone called oxytocin, which evolved 100 million years ago to aid mating among fish and breast-feeding among mammals, also promotes trust between human beings, the scientists said.
"Some may worry about the prospect that political operators will generously spray the crowd with oxytocin at rallies of their candidates," said neurologist Antonio R. Damasio of the University of Iowa, who has long studied the neurobiology of human emotions and who wrote a commentary accompanying the study.
At the same time, he added in an interview, politicians and marketers were probably already triggering the natural release of oxytocin in the brains of audiences through their campaigns. "I am more alarmed about the manipulations of marketing than the possibility of oxytocin sprays," he said.
Even if someone could spray a crowd at a political rally with an effective dose would the feeling of trust last beyond the point where the oxytocin wears off? Would an argument made to a person in a highly trusting state have more effect afterward than one made to a person in a more skeptical state of mind? My guess is yes.
How about a compound made to work like oxytocin but with a much longer half life? Such a compound could be used by fanatical sects when indoctrinating new recruits.
The development of oxytocin blockers might be possible. The blockers could be used by people who frequently find themselves being excessively trusting and who therefore often get victimized by agreeing to deals with swindlers and manipulators. Entire populations might want to take oxytocin blockers during political campaigns or when going into environments that expose people to large amounts of advertising.
Oxytocin might be a useful tool in interrogations of terrorists. By engendering trust it might be a useful component in a truth serum. On the other hand, slow release and long half life oxytocin blockers embedded in a person's body could be used by spies and others at risk of capture and prolonged interrogation.
This study is part of a general trend: The moods and motivations of the human mind are becoming more susceptible to biochemical alteration. Oxytocin joins Valium, Prozac, Zoloft, Paxil, and a long list of other drugs which alter moods. Accumulating discoveries about how the mind works lead inevitably to the development of new drugs and, in the future, even gene therapies that alter emotions, motivations, and circuitry that delivers rewards. Arguments for the existence of free will are hard to square with scientific advances which show how to bend the will.
The Christian Science Monitor has an article on the rising interest in third-generation-plus and fourth-generation nuclear power plants. Nuclear power could meet projected future demand for electricity.
In the US alone, utilities will need to build 281 gigawatts of new generating capacity by 2025 as demand rises and older coal- and oil-fired plants are closed, the DOE estimates.
Would you rather have 281 one-gigawatt coal-fired electric generating plants or 281 one-gigawatt nuclear-powered electric generating plants? In the future wind and solar may add to those choices. But for large-scale expansion of base generation capacity the two realistic choices today are nuclear and coal. If you oppose one you have to be willing to support the other.
Nuclear designs most likely in the next wave of nuclear reactor construction will be simplified versions of existing light water reactors refined to have fewer possible points of failure and more passive handling of problems.
The latest designs likely to hit the grid come from US manufacturers Westinghouse and General Electric, as well as foreign companies such as Areva in France.
These designs make extensive use of natural processes, such as convection and gravity, in their emergency cooling systems instead of the mammoth pumps and series of valves found in older reactors, which are prone to failure or operator error, says Per Peterson, who chairs the nuclear engineering department at the University of California at Berkeley. Only a small number of battery-operated valves need to open for the emergency cooling systems to kick in. The combination not only reduces the amount of internal plumbing at the plant, he says, it also reduces the need for diesel generators that keep the cooling system operating in case the plant is shut down for maintenance or an emergency.
Overall, "the new, simplified designs eliminate an enormous amount of equipment inside the reactor building," he says. That reduction leads to plants that are much cheaper to build and maintain, he adds. These designs first emerged about four years ago as "third-generation" designs. They have evolved into what many are calling third-generation-plus designs.
The article also reviews a multinational effort of the United States, Britain, Japan, and several other countries to develop fourth generation nuclear reactor designs with the goal of choosing a couple of new designs by 2012. I think they ought to accelerate their work and develop next generation designs more rapidly.
The Christian Science Monitor also has an article about "capture ready" coal plants built to be more easily upgraded to capture and sequester carbon dioxide emissions.
Even environmentalists are wary. Some see the capture-ready idea as another excuse for power companies to drag their heels on a far more advanced clean-coal technology called integrated gasification combined cycle or IGCC.
A big question is cost. Although making a plant capture ready represents only a small fraction of a power plant's construction budget, the equipment to capture CO2 would almost certainly run into serious money, experts say. Even if a reasonable technology were found, installing it in a capture-ready coal plant would raise construction costs some 50 percent (75 percent for plants not capture ready), Gibbins estimates. And running such a plant would raise the cost of producing electricity at least 40 percent due to heat loss involved in the carbon-capture process, he adds.
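To make the trade-off concrete, here is a small sketch of the Gibbins estimates quoted above, using a normalized base construction cost and a hypothetical "small fraction" premium for building capture ready (the 2 percent figure is my assumption; only the 50 and 75 percent upgrade figures come from the article):

```python
# Normalized construction cost of a conventional coal plant.
BASE = 1.0

capture_ready_premium = 0.02  # hypothetical small up-front premium
upgrade_if_ready = 0.50       # capture install raises costs ~50% (Gibbins)
upgrade_if_not_ready = 0.75   # ~75% if the plant was not capture ready

# Total construction cost if CO2 capture is eventually installed.
cost_ready = BASE + capture_ready_premium * BASE + upgrade_if_ready * BASE
cost_standard = BASE + upgrade_if_not_ready * BASE

print(f"capture-ready plant, then upgraded: {cost_ready:.2f}")
print(f"standard plant, then retrofitted:   {cost_standard:.2f}")
```

Under these numbers the capture-ready route comes out cheaper whenever capture actually gets installed; if capture is never installed, the small premium is simply wasted.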
What is the cost of IGCC versus capture ready plants that are upgraded to do CO2 capture? I'm guessing IGCC would be cheaper. Also, I'm guessing that IGCC will be less polluting for other categories of pollutants such as sulfur oxides and mercury. Anyone know for sure?
Also, how does IGCC electric compare to nuclear electric in cost?
See my previous post "Cost Estimates For New Nuclear Power Plants".
In April, the United States Centers for Disease Control and Prevention released a study challenging the conventional wisdom that eating less promotes longevity. The study found that the very thin run roughly the same risk of early death as the overweight. And now the tide seems to be turning against a common explanation for the long-standing observation that restricting food in lab organisms from yeast to mice prolongs life.
Many studies have indicated that it’s calorie reduction, rather than the specific source of calories, that increases longevity. That this effect occurs in such diverse organisms suggests a common mechanism may be at work, though none has been definitively characterized. And while calorie restriction enhances longevity in mice, it has not always done so in rats. In a new study, William Mair, Matthew Piper, and Linda Partridge show that flies can live longer without reducing calories but by eating proportionally less yeast, supporting the notion that calorie-restriction-induced longevity may not be as universal as once thought.
Dietary restriction in Drosophila involves diluting the nutrients in the fly’s standard lab diet of yeast and sugar to a level known to maximize life span. Since both yeast (which contributes protein and fat) and sugar (carbohydrates) provide the same calories per gram, the authors could adjust nutrient composition without affecting the calorie count, allowing them to separate the effects of calories and nutrients. The standard restricted diet had equivalent amounts of yeast and sugar (65 grams each) and an estimated caloric content of 521, while the yeast-restricted (65 g yeast/150 g sugar) and sugar-restricted (65 g sugar/150 g yeast) diets each had just over 860 calories. The control diet for the flies had equivalent amounts of sugar and yeast (150 grams), amounting to an estimated 1,203 calories.
First, the authors had to make sure the flies didn’t change their eating behavior to make up for a less nutritious diet. (They didn’t.) Reducing both nutrients increased the flies’ life spans, but yeast had a much greater effect: reducing yeast from control to dietary restriction levels increased median life span by over 60%.
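The diet arithmetic described above is easy to check. A minimal sketch, assuming yeast and sugar both contribute roughly 4 kcal per gram (a figure inferred from the quoted numbers, e.g. 130 g → ~521 kcal and 300 g → ~1,203 kcal, not stated in the study text above):

```python
# Assumed energy density shared by yeast and sugar (inferred, see above).
KCAL_PER_GRAM = 4.0

def diet_kcal(yeast_g: float, sugar_g: float) -> float:
    """Estimated calories of a fly diet from its yeast and sugar content."""
    return (yeast_g + sugar_g) * KCAL_PER_GRAM

diets = {
    "restricted (65y/65s)":        (65, 65),    # ~521 kcal reported
    "yeast-restricted (65y/150s)": (65, 150),   # ~860 kcal reported
    "sugar-restricted (150y/65s)": (150, 65),   # ~860 kcal reported
    "control (150y/150s)":         (150, 150),  # ~1,203 kcal reported
}
for name, (yeast, sugar) in diets.items():
    print(f"{name}: ~{diet_kcal(yeast, sugar):.0f} kcal")
```

The point of the design shows up directly: the yeast-restricted and sugar-restricted diets have identical calorie counts but opposite nutrient compositions, which is what lets the authors separate calories from nutrients.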
But why this result?
Why might different factors promote longevity in flies and rats? It could be that the caloric-restriction/longevity paradigm needs more rigorous review—though a vast body of literature does support it. Or it may be that the animals use the same strategy for dealing with food shortages—shifting resources from reproduction to survival, for example—but have evolved different mechanisms for doing so that reflect each species’s life history, diet, and environment. Whatever explains the disparity, this study should give researchers interested in caloric restriction plenty to chew on.
Does this result upset the general rule that calorie restriction extends life expectancy? Perhaps Drosophila evolved in environments where calorie restriction always occurred at the same time as protein or fat restriction. Natural selection might then have favored detection of a shortage of a single nutrient (or perhaps a combination of nutrients) as a proxy for a general calorie shortage. So perhaps the metabolism of Drosophila shifts gears when a shortage of protein or fat (or some other nutrient found in yeast) occurs. Experiments using other food sources are needed to ascertain exactly what nutrient shortage (or nutrient ratio?) shifts Drosophila's metabolism into a state that slows aging.
Few people have the discipline to follow a calorie restriction diet for decades. Even if calorie restriction extends human lifespan (and that is as yet unproven) at best it will slow aging. Perhaps calorie restriction mimetic drugs will eventually provide a way to get the benefits (assuming there are any) of calorie restriction without living with continuous hunger. However, development of treatments to reverse aging would provide more certain and far greater benefits than the best case results for calorie restriction.
See the full research paper Calories Do Not Explain Extension of Life Span by Dietary Restriction in Drosophila.