Researchers are investigating how to counter the danger posed by the IL-4 gene when inserted into mousepox virus.
SAN FRANCISCO — A research team backed by a U.S. federal grant has created a genetically engineered mousepox virus designed to evade vaccines, underscoring biotechnology's deadly potential and stirring debate over whether such research plays into the hands of terrorists.
The team at Saint Louis University, led by Mark Buller, created the superbug to figure out how to defeat it, a key goal of the government's anti-terrorism plan.
The fear is that terrorists could take the IL-4 gene used in this research, insert it into the human smallpox virus, and thereby perhaps create a far more lethal form of smallpox. Vaccines might provide no protection against such a strain: IL-4 may effectively disable the immune system, rendering vaccine development futile. Even a vaccine against IL-4 itself probably wouldn't work, because IL-4 plays a useful role in regulating human immunity.
Now Buller has engineered a mousepox strain that kills 100 per cent of vaccinated mice, even when they were also treated with the antiviral drug cidofovir. A monoclonal antibody that mops up IL-4 did save some, however.
This work replicates and extends work first reported in January 2001, when Australian CSIRO scientists trying to develop a mouse contraceptive accidentally produced an extremely lethal strain of mousepox instead.
Ron Jackson and Ian Ramshaw weren't looking for trouble. Jackson, who works at the Pest Animal Control Cooperative Research Centre in Canberra, and Ramshaw, who is in the same city at the Australian National University, were searching for a way to control the mice that are serious pests in Australia. They wanted to make a contraceptive vaccine by altering the genes of the mousepox virus.
But in January the project gained notoriety after the pair inadvertently created an unusually virulent strain of mousepox. If a similar genetic manipulation were applied to smallpox, the scientists realized, this feared killer could be made even more dangerous. When they published their paper, it was only after much discussion about the wisdom of drawing attention to the findings. "It has to be brought out into the public arena so the situation can be addressed," argues Ramshaw.
There are a few things to worry about in all this. First, Ramshaw's team weren't even looking for a way to make mousepox more lethal, and this will certainly not be the last time researchers accidentally find ways to make pathogens more lethal. Would-be terrorists will find many methods published in the research literature for making a wide variety of pathogens more deadly. Even research on why some strains of influenza are more virulent than others has potential terrorist uses.
This raises the debate over whether all scientific research should be published in public journals. Are there research results so incredibly dangerous that they would make it too easy for nefarious groups to harm others on a massive scale? Will it be harder to defend against such attacks than to use those publicly available reports to develop effective defenses? Many proponents of open societies take it as a matter of faith that more openness and availability of information is always better, but that is far from a proven position.
The ability to detect cancer at ever earlier stages using advances in blood and other testing will combine with the coming ability to grow replacement organs to provide a better method for treating some forms of cancer: organ replacement.
Alexandria, VA – In the first national study to examine survival among liver transplant patients with advanced hepatocellular carcinoma (HCC), researchers found excellent five-year survival results, with steady improvement over the last decade. Hepatocellular carcinoma, also known as hepatoma, or cancer of the liver, is a common cancer worldwide, with more than one million new cases diagnosed each year and a median life expectancy of six to nine months. Most hepatoma patients have cirrhosis, a risk factor for hepatoma, and are inoperable because of tumor size, location or severity of underlying liver disease. Results of this study will be reported online in the Journal of Clinical Oncology. "This study shows that we can achieve excellent survival with liver transplantation among patients with hepatoma, confirming similar results reported by single-center studies," said Paul J. Thuluvath, MD, senior author and Associate Professor in the Department of Medicine at Johns Hopkins University School of Medicine. "These findings are particularly reassuring for patients with tumors that cannot be surgically removed, which comprise more than 80% of HCC patients."
The results for this approach are good and improving:
Researchers found significant and steady improvement in survival over time among liver transplant patients with HCC, particularly in the last five years. Five-year survival improved from 25.3 percent during 1987-1991 to 47 percent during 1992-1996, and 61.1 percent during 1996-2001.
Of course, the big problem is that there are not enough donor organs. The future ability to rapidly grow replacement organs will offer a very attractive option for many forms of organ cancer: replace the defective part. Why not? After all, if you are 50 or 60 or 70 years old, replacing a tired old organ with a lot of miles on it could provide a bit of a boost. Preemptive replacement of many old organs before an organ cancer even begins would partially reverse aging. Put in a new stomach, intestines, liver, pancreas and other parts to get a late-middle-age partial rejuvenation while simultaneously reducing the risk of cancer.
Washington, D.C. – Even the very youngest children in America are growing up immersed in media, spending hours a day watching TV and videos, using computers and playing video games, according to a new study released today by the Henry J. Kaiser Family Foundation. Children six and under spend an average of two hours a day using screen media (1:58), about the same amount of time they spend playing outside (2:01), and well over the amount they spend reading or being read to (39 minutes).
New interactive digital media have become an integral part of children’s lives. Nearly half (48%) of children six and under have used a computer (31% of 0-3 year-olds and 70% of 4-6 year-olds). Just under a third (30%) have played video games (14% of 0-3 year-olds and 50% of 4-6 year-olds). Even the youngest children – those under two – are widely exposed to electronic media. Forty-three percent of those under two watch TV every day, and 26% have a TV in their bedroom (the American Academy of Pediatrics “urges parents to avoid television for children under 2 years old”). In any given day, two-thirds (68%) of children under two will use screen media, for an average of just over two hours (2:05).
“It’s not just teenagers who are wired up and tuned in, it’s babies in diapers as well,” said Vicky Rideout, Vice President and Director of the Kaiser Family Foundation’s Program for the Study of Entertainment Media and Health, the lead author of the study. “So much new media is being targeted at infants and toddlers, it’s critical that we learn more about the impact it’s having on child development.”
Just what is it doing to the brain? Is it having long-lasting effects on cognitive development? The brain continues to grow into adolescence, so it seems reasonable to suppose that environmental stimuli might have lasting effects beyond what is actually learned.
The role of electronic stimuli in child development seems set to grow as wearable electronic gear becomes cheaper, more reliable, and more sophisticated in what it can accomplish. On the bright side, the passive experience of television watching will probably be gradually replaced by more interactive experiences, some of which will be more educational. Other experiences, though, will become purer fantasy.
Impact of TV on reading. According to the study, children who have a TV in their bedroom or who live in “heavy” TV households spend significantly more time watching than other children do, and less time reading or playing outside. Those with a TV in their room spend an average of 22 minutes more a day watching TV and videos than other children do. Those living in “heavy” TV households are more likely to watch every day (77% v. 56%), and to watch for longer when they do watch (an average of 34 minutes more a day). They are also less likely to read every day (59% v. 68%), and spend less time reading when they do read (6 minutes less a day). In fact, they are less likely than other children to be able to read at all (34% of children ages 4-6 from “heavy” TV households can read, compared to 56% of other children that age).
Are these heavy TV watching kids ones who would have been unlikely to learn to read in the pre-TV era? It seems likely that families with generally lower cognitive levels are going to be more likely to have kids who are slow to learn to read. So it is hard to tell how much of the lower reading level is a result of the TV watching and how much the higher TV watching is the result of lower cognitive ability. Surely time spent watching TV is time not spent reading and so there probably is some negative impact from the TV. But it is not clear how much of the difference in reading ability is due to heavier TV watching.
“These findings definitely raise a red flag about the impact of TV on children’s reading,” said Vicky Rideout of the Kaiser Family Foundation. “Clearly this needs to be a top priority for future research.”
Without properly controlled studies it is hard to know what is going on here. Still, TV represents a big change in the cognitive experiences of kids growing up compared to previous ages. It must be doing something to the mind.
Parents’ views on educational value of media. Parents of young children appear to have a largely positive view about TV and computers. They are significantly more likely to say TV “mostly helps” children’s learning (43%) than “mostly hurts” it (27%); the overwhelming majority (72%) say computers “mostly help” children’s learning. About half of parents consider educational TV shows (58%) and videos (49%) “very important” to children’s intellectual development. They are also far more likely to say they have seen their children imitate positive behaviors from TV like sharing or helping (78%) than negative ones like hitting or kicking (36%). However, a majority of parents (59%) say their 4-6 year-old boys imitate aggressive behavior from TV (v. 35% for girls the same age).
It is understandable that parents have a more positive view of computers. But the view they have of TV is worrisome. Also, how much of the time spent with computers is just time playing shoot-em-up games? Probably the vast bulk of it. What does that do to the brain?
In the longer run, artificially intelligent teaching computers that can act as incredibly patient companions ought to provide a clearer net benefit than current computers and television shows. But people might also choose to escape into electronic fantasies. Will they at least have the sense to restrict the time their children spend in fantasy experiences? Some will, but probably not all.
The full report is available as a downloadable PDF.
"This is the first study to show that cells in the artery wall have the potential to develop into a number of other cell types," said Dr. Linda Demer, principal investigator, Guthman Professor of Medicine and Physiology, and vice chair for cardiovascular and vascular medicine at the David Geffen School of Medicine at UCLA.
UCLA researchers also report that the artery wall cells, called calcifying vascular cells (CVC), are the only cells other than actual bone marrow stromal cells that support survival of immature (developing) blood cells. This finding may have future applications in reconstitution of bone marrow after cancer treatment.
UCLA researchers cultured bovine CVC artery wall cells in the lab to see if the cells would turn into bone, fat, cartilage, marrow and muscle cells. They checked for expression of proteins and tissue matrix characteristic of each cell type.
"We wanted to see if CVC cells would become specific cell types that actually produce their own characteristic matrix (mortar-like) substance. For example, if the cell actually produced bone mineral, it would indicate that the cell had taken on a bone identity," said first author Yin Tintut, Division of Cardiology at the David Geffen School of Medicine at UCLA.
Researchers found that CVC cells had the potential to become several cell types, including bone, cartilage, marrow stromal and muscle cells, but not adipogenic, or fat, cells. Demer suggests this indicates that CVC cells may not have the entire range of potential of conventional stem cells. However, this may be especially useful in cases where one would not want a stem cell turning into a fat cell, such as in trying to regenerate cartilage.
The next step involves "assessing CVC cells' potential to follow other lineages and also testing human cells," Demer said.
The ability to convert into muscle cells is notable because of the need for therapies to repair damaged heart muscle. It would be interesting to know whether these scientists tried to convert the cells to any internal organ cell types.
Oct. 27, 2003 – A staggering 98 tons of prehistoric, buried plant material – that's 196,000 pounds – is required to produce each gallon of gasoline we burn in our cars, SUVs, trucks and other vehicles, according to a study conducted at the University of Utah.
"Can you imagine loading 40 acres worth of wheat – stalks, roots and all – into the tank of your car or SUV every 20 miles?" asks ecologist Jeff Dukes, whose study will be published in the November issue of the journal Climatic Change.
But that's how much ancient plant matter had to be buried millions of years ago and converted by pressure, heat and time into oil to produce one gallon of gas, Dukes concluded.
Dukes also calculated that the amount of fossil fuel burned in a single year – 1997 was used in the study – totals 97 million billion pounds of carbon, which is equivalent to more than 400 times "all the plant matter that grows in the world in a year," including vast amounts of microscopic plant life in the oceans.
Keep in mind that the ratio of grown plant matter to the carbon that actually got deposited and eventually turned into fossil fuels is extremely high, as the recovery factors stated below show. If grown plant matter were used directly as an energy source, the pounds of plant matter needed to match a gallon of gasoline would therefore be orders of magnitude less than the amount that had to be grown and deposited to eventually become oil.
"Every day, people are using the fossil fuel equivalent of all the plant matter that grows on land and in the oceans over the course of a whole year," he adds.
In another calculation, Dukes determined that "the amount of plants that went into the fossil fuels we burned since the Industrial Revolution began [in 1751] is equal to all the plants grown on Earth over 13,300 years."
Explaining why he conducted the study, Dukes wrote: "Fossil fuel consumption is widely recognized as unsustainable. However, there has been no attempt to calculate the amount of energy that was required to generate fossil fuels, (one way to quantify the 'unsustainability' of societal energy use)."
The study is titled "Burning Buried Sunshine: Human Consumption of Ancient Solar Energy." In it, Dukes conducted numerous calculations to determine how much plant matter buried millions of years ago was required to produce the oil, natural gas and coal consumed by modern society, which obtains 83 percent of its energy needs from fossil fuels.
Very little of the ancient plant material ever ended up incorporated into oil, gas, and coal deposits.
To determine how much ancient plant matter it took to eventually produce modern fossil fuels, Dukes calculated how much of the carbon in the original vegetation was lost during each stage of the multiple-step processes that create oil, gas and coal.
He looked at the proportion of fossil fuel reserves derived from different ancient environments: coal that formed when ancient plants rotted in peat swamps; oil from tiny floating plants called phytoplankton that were deposited on ancient seafloors, river deltas and lakebeds; and natural gas from those and other prehistoric environments. Then he examined the efficiency at which prehistoric plants were converted by heat, pressure and time into peat or other carbon-rich sediments.
Next, Dukes analyzed the efficiency with which carbon-rich sediments were converted to coal, oil and natural gas. Then he studied the efficiency of extracting such deposits. During each of the above steps, he based his calculations on previously published studies.
The calculations showed that roughly one-eleventh of the carbon in the plants deposited in peat bogs ends up as coal, and that only one-10,750th of the carbon in plants deposited on ancient seafloors, deltas and lakebeds ends up as oil and natural gas.
Dukes then used these "recovery factors" to estimate how much ancient plant matter was needed to produce a given amount of fossil fuel. Dukes considers his calculations good estimates based on available data, but says that because fossil fuels were formed under a wide range of environmental conditions, each estimate is subject to a wide range of uncertainty.
If these calculations by Dukes are accurate then biomass from conventional plants can at best provide for a very small portion of our energy usage.
In contrast to the inefficient conversion of ancient plants to oil, natural gas and coal, modern plant "biomass" can provide energy more efficiently, either by burning it or by converting it into fuels like ethanol. So Dukes analyzed how much modern plant matter it would take to replace society's current consumption of fossil fuels.
He began with a United Nations estimate that the total energy content of all coal, oil and natural gas used worldwide in 1997 equaled 315,271 million billion joules (a unit of energy). He divided that by the typical value of heat produced when wood is burned: 20,000 joules per gram of dry wood. The result is that fossil fuel consumption in 1997 equaled the energy in 15.8 trillion kilograms of wood. Dukes multiplied that by 45 percent – the proportion of carbon in plant material – to calculate that fossil fuel consumption in 1997 equaled the energy in 7.1 trillion kilograms of carbon in plant matter.
Studies have estimated that all land plants today contain 56.4 trillion kilograms of carbon, but only 56 percent of that is above ground and could be harvested. Excluding roots, land plants thus contain 56 percent of 56.4, or 31.6 trillion kilograms of carbon.
Dukes then divided the 1997 fossil fuel use equivalent of 7.1 trillion kilograms of carbon in plant matter by 31.6 trillion kilograms now available in plants. He found we would need to harvest 22 percent of all land plants just to equal the fossil fuel energy used in 1997 – about a 50 percent increase over the amount of plants now removed or paved over each year.
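Dukes' arithmetic chain above can be reproduced directly from the figures quoted in the article; a quick sketch:

```python
# Reproduce the biomass-replacement arithmetic using the article's figures.
FOSSIL_ENERGY_1997_J = 315_271e15   # UN estimate: 315,271 million billion joules
WOOD_ENERGY_J_PER_G = 20_000        # heat released burning a gram of dry wood
CARBON_FRACTION = 0.45              # proportion of carbon in dry plant matter

# Fossil fuel use expressed as kilograms of wood, then kilograms of plant carbon
wood_kg = FOSSIL_ENERGY_1997_J / WOOD_ENERGY_J_PER_G / 1000
carbon_kg = wood_kg * CARBON_FRACTION

# Harvestable (above-ground) carbon stock in land plants
LAND_PLANT_CARBON_KG = 56.4e12
ABOVE_GROUND_FRACTION = 0.56
harvestable_kg = LAND_PLANT_CARBON_KG * ABOVE_GROUND_FRACTION

print(f"wood equivalent:     {wood_kg / 1e12:.1f} trillion kg")   # ≈ 15.8
print(f"carbon equivalent:   {carbon_kg / 1e12:.1f} trillion kg") # ≈ 7.1
print(f"harvestable carbon:  {harvestable_kg / 1e12:.1f} trillion kg")  # ≈ 31.6
print(f"fraction needed:     {carbon_kg / harvestable_kg:.0%}")   # ≈ 22%
```

The numbers come out as the article states: replacing 1997 fossil fuel use would take roughly 22 percent of all above-ground plant carbon on land.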
"Relying totally on biomass for our power – using crop residues and quick-growing forests as fuel sources – would force us to dedicate a huge part of the landscape to growing these fuels," Dukes says. "It would have major environmental consequences. We would have to choose between our rain forests and our vehicles and appliances. Biomass burning can be part of the solution if we use agricultural wastes, but other technologies have to be a major part of the solution as well – things like wind and solar power."
World energy usage is rising as the total world economy grows. So the amount of biomass needed to serve as a replacement would have to grow to an even higher level.
Does anyone know how many joules of energy fall on the surface of planet Earth per day from sunlight?
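A rough back-of-envelope answer, using my own assumed inputs rather than figures from the study (a solar constant of about 1361 W/m² at the top of the atmosphere and a mean Earth radius of about 6,371 km):

```python
import math

SOLAR_CONSTANT_W_M2 = 1361   # assumed top-of-atmosphere irradiance
EARTH_RADIUS_M = 6.371e6     # assumed mean Earth radius
SECONDS_PER_DAY = 86_400

# Sunlight intercepted by Earth's circular cross-section each day
cross_section_m2 = math.pi * EARTH_RADIUS_M ** 2
joules_per_day = SOLAR_CONSTANT_W_M2 * cross_section_m2 * SECONDS_PER_DAY

# Compare with the 1997 fossil fuel energy figure quoted above
fossil_j_per_day = 315_271e15 / 365
ratio = joules_per_day / fossil_j_per_day

print(f"sunlight: {joules_per_day:.2e} J/day")          # ≈ 1.5e22
print(f"vs. 1997 daily fossil fuel use: {ratio:,.0f}x") # ≈ 17,000x
```

By this estimate, the sunlight reaching the top of the atmosphere in a single day carries on the order of ten thousand times the energy humanity got from fossil fuels per day in 1997, before accounting for reflection, absorption, and conversion losses.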
Update: I've gotten suggestions in the past from some folks that I ought to report more on a company called Changing World Tech which designs and builds plants for processing biomass waste into liquid hydrocarbon fuels. I haven't spent more time on it because my intuitive guess is that there just is not that much concentrated biomass waste out there to make that big a difference in our total energy usage. Yes, it is a good thing if some of the waste that is out there can be processed into useful fuel. But biomass waste is probably going to be a bit player no matter how low capital costs get for building a biomass waste processing plant. If someone has some good data to argue to the contrary I'd love to hear it.
So what realistic options are there for fossil fuel alternatives? Nuclear is one. But the biggest problem I see with it is the nuclear proliferation problems it poses. See my Weapons Proliferation Control archive on my ParaPundit blog for posts on the very large problem we face with nuclear proliferation. If a nuclear fuel cycle that was proliferation-proof could be devised I'd be more supportive of nuclear. If anyone knows much about the pros and cons of thorium nuclear power I'd like to hear more about it.
The most attractive option is photovoltaics made from carbon nanotubes or other materials that could eventually be made much more cheaply than existing photovoltaics made from expensive purified silicon. I support the creation of a wide-ranging, Manhattan Project-scale energy research effort to spend tens of billions on basic and applied science in an assortment of areas relating to energy generation, storage, transportation, and usage.
NIH-funded researchers have identified a gene that appears to be a crucial signal for the beginning of puberty in human beings as well as in mice. Without a functioning copy of the gene, both humans and mice appear to be unable to enter puberty normally. The newly identified gene, known as GPR54, also appears necessary for normal reproductive functioning in human beings.
The study, funded in part by the National Institute of Child Health and Human Development (NICHD), appears in the October 23 issue of the New England Journal of Medicine. GPR54 is located on an autosomal chromosome (a chromosome that is not a sex chromosome). The study also was funded by the National Center for Research Resources and the National Institute of General Medical Sciences, both at NIH.
"The discovery of GPR54 is an important step in understanding the elaborate sequence of events needed for normal sexual maturation," said Duane Alexander, M.D., Director of the National Institute of Child Health and Human Development (NICHD). "Findings from this study may lead not only to more effective treatments for individuals who fail to enter puberty normally, but may provide insight into the causes of other reproductive disorders as well."
Puberty begins when a substance known as gonadotropin releasing hormone (GnRH) is secreted from a part of the brain called the hypothalamus. Individuals who fail to reach puberty because of inherited or spontaneous genetic mutations are infertile.
"The discovery of GPR54 as a gatekeeper for puberty across species is very exciting" said the study's first author, Stephanie B. Seminara, of the Reproductive Endocrine Unit, Massachusetts General Hospital, Boston and a member of the NICHD-funded, Harvard-wide Endocrine Sciences Center. "In the future, this work might lead to new therapies for the treatment of a variety of reproductive disorders."
While the researchers involved emphasize the value of the research in terms of the development of new therapies for infertility, there are other less conventional but perhaps more widely useful reasons for being able to control the onset of puberty, chief among them the very practical benefits of delaying it.
As I've argued previously, we need to adjust humanity to be more adaptive to the environmental changes we have created for ourselves as a consequence of technological advances. Delaying puberty is a great example of how humans could be made more adaptive to modern industrial society. Humans were already selected to spend a longer time in childhood learning than most species. But modern technological society demands an even longer period of learning than we are designed for. Puberty comes too soon, before learning is done and before humans are trained well enough to work and support a family. It makes no sense for puberty to start as early as it does.
DALLAS, Oct. 14 – Infusing a patient’s own cells into a heart artery several days after a heart attack speeds the healing process and strengthens the heart’s pumping power, researchers report in today’s rapid access issue of Circulation: Journal of the American Heart Association.
In a small study of heart attack patients, German researchers extracted progenitor cells from patient’s blood or bone marrow and infused them into an artery. Progenitor cells are derived from stem cells, which have the potential to develop into any cell in the body. Progenitor cells are already on their way to becoming a specific type of cell.
“The infusion of progenitor cells was associated with a reduction in the size of muscle damage, a significant improvement in pumping function, and less enlargement of the heart within four months after a heart attack,” said co-author Andreas M. Zeiher, M.D., chairman of the department of internal medicine at the University of Frankfurt in Germany.
This latest report comes after a recent report that bone marrow stem cells merge with brain Purkinje neurons rather than forming new cells. The authors of that report voiced skepticism that bone marrow stem cells will ultimately prove useful for treating conditions in a large number of different tissue types. This latest report from the University of Frankfurt did not enroll enough patients and lacked double-blinded controls, so it will need to be followed up by a larger, better-controlled study. But it is promising.
A plasmatron is a device that can convert gasoline and diesel fuel into hydrogen. Hydrogen can be used in diesel engines to reduce nitrogen oxide (NOx) emissions.
The researchers and colleagues from industry report that the plasmatron, used with an exhaust treatment catalyst on a diesel engine bus, removed up to 90 percent of nitrogen oxides (NOx) from the bus’s emissions. Nitrogen oxides are the primary components of smog.
The plasmatron reformer also cut in half the amount of fuel needed for the removal process. “The absorption catalyst approach under consideration for diesel exhaust NOx removal requires additional fuel to work,” explained Daniel R. Cohn, one of the leaders of the team and head of the Plasma Technology Division at MIT's Plasma Science and Fusion Center (PSFC). “The plasmatron reformer reduced that amount of fuel by a factor of two compared to a system without the plasmatron.”
In gasoline engines, the use of plasmatrons could boost car fuel efficiency by 20 percent.
"If widespread use of plasmatron hydrogen-enhanced gasoline engines could eventually increase the average efficiency of cars and other light-duty vehicles by 20 percent, the amount of gasoline that could be saved would be around 25 billion gallons a year," Cohn said. "That corresponds to around 70 percent of the oil that is currently imported by the United States from the Middle East."
The Bush administration has made development of a hydrogen-powered vehicle a priority, Heywood noted. "That's an important goal, as it could lead to more efficient, cleaner vehicles, but is it the only way to get there? Engines using plasmatron reformer technology could have a comparable impact, but in a much shorter time frame," he said.
"Our objective is to have the plasmatron in production—and in vehicles—by 2010," Smaling said. ArvinMeritor is working with a vehicle concept specialist company to build a proof-of-concept vehicle that incorporates the plasmatron in an internal combustion engine. "We'd like to have a driving vehicle in one and a half years to demonstrate the benefits," Smaling said.
In the meantime, the team continues to improve the base technology. At the DEER meeting, Bromberg, for example, reported cutting the plasmatron's consumption of electric power "by a factor of two to three."
Lots of small refinements to gasoline engine vehicles could cumulatively boost fuel efficiency quite substantially. Compressed air, compressed hydraulic fluid, and other powertrain design ideas show promise as ways to boost fuel efficiency. Also see the related previous post Hydrogen Not Good Short, Medium Term Form Of Fuel.
Professors Daniel Kwok and Larry Kostiuk squeezed a syringe of water through a two-centimetre-wide glass disc pierced by 480,000 holes, or "microchannels". When electrodes were attached at each end, they were delighted to find they had generated just enough power to light a small bulb.
By itself this is not a new energy source. Some source of energy must be used to make the water flow.
Thanks to a phenomenon called the electric double layer, when water flows through these 10-micron-diameter-wide channels, a positive charge is created at one end of the block and a negative charge at the other - just like a conventional battery.
They held a reservoir of water 30 centimetres above the array and allowed it to flow through the disc under hydrostatic pressure, generating a current of 1500 nanoamps in the process.
The research team led by Professor Daniel Kwok and Professor Larry Kostiuk, of the University of Alberta, Edmonton, Canada, claim they have created a new source of clean non-polluting electric power with a variety of possible uses, ranging from powering small electronic devices to contributing to a national power grid.
"The applications in electronics and microelectronic devices are very exciting," says Kostiuk. "This technology could provide a new power source for devices such as mobile phones or calculators which could be charged up by pumping water to high pressure."
Their proposal to use their device as a battery really amounts to using highly pressurized water as the medium for storing energy. But that sounds impractical. The casing that holds the water would have to be strong enough to hold it under fairly high pressure. The water and the case would both add mass. Plus, the case has to have an incredibly strong valve that can open and close to let the highly pressurized water out as necessary. Fuel cells powered by a liquid hydrocarbon fuel seem like they'd have a better energy to mass ratio. Both compressed air and compressed fluids are being explored as energy storage methods in cars. But for something as lightweight as a cellphone compression of water has to compete against fuel cells and, perhaps further into the future, lithium polymer batteries.
And might it one day power everything? "You'd need a really big area, like a coastal region," said Dr Kostiuk. "But then again, I guess, those are available, aren't they?" For a clean, free form of electricity, the answer must surely be yes.
One problem with using ocean water to generate electricity is that the water would have to be filtered before passing into the microchannels. But large scale manufacture of nanodevices might eventually provide the ability to use this approach to make devices that could generate electricity from waves or tide flow.
There is also a 1964 prediction by CMU professor Fletcher Osterle that this method of generating electricity would turn out to be very inefficient.
"Probably the reason no one carried on Osterle's work is that he concludes the efficiency can never be better than .04 percent. We haven't done much better than that so far, but we do think that we can do much better — we have much better technologies today, like [microelectromechanical systems] than they did in the 1960s," said Kostiuk.
Water already flows through turbines at dams to generate electricity. Though today's turbines are big things, it will probably eventually be possible to make microelectromechanical systems (MEMS) turbines. So MEMS microchannels are not the only imaginable way to use MEMS devices to generate electricity from flowing water. MEMS turbines might turn out to be more efficient than MEMS microchannels as a way to harness very small water flows.
"Although physicians generally consider Alzheimer and Parkinson diseases to be distinct disorders, the two exhibit a lot of overlap both clinically and pathophysiologically," said Jeffery Vance, M.D., director of Duke's Morris K. Udall Parkinson's Disease Research Center and associate director of the Duke Center for Human Genetics. "This study emphasizes the similarity between the two diseases by highlighting a single gene that influences their age of onset."
The team reports their findings in the Dec. 15, 2003, issue (available online Oct. 21) of Human Molecular Genetics and will present the work as a keynote paper at the annual meeting of the American Society of Human Genetics, which will be held Nov. 4-8, in Los Angeles. The major funding for the study was provided by the National Institute on Aging, the National Institute of Neurological Disorders and Stroke, the Alzheimer's Association, the Institute de France, and the American Federation for Aging Research.
The team's earlier work identified a broad chromosomal region linked to the age at onset of Alzheimer's and Parkinson's diseases. The new research -- led by Pericak-Vance, Vance, John Gilbert, Ph.D. and Yi-Ju Li, Ph.D., of the Duke Center for Human Genetics and Jonathan Haines, Ph.D., of Vanderbilt University Medical Center -- narrows that region of the genome, which contained many hundreds of genes, to a single gene known as glutathione S-transferase omega-1 or GSTO1.
It is worth noting that glutathione serves as an intracellular antioxidant that gets oxidized in order to reduce free radicals to less harmful forms. It is not at all surprising that a gene playing a role in antioxidant metabolism would turn out to influence age of onset for two different neurodegenerative diseases. A body that expresses a higher level of genes that detoxify free radicals is, all else equal, not going to age as rapidly as one that expresses those genes at a lower level.
An additional analysis involving 1,773 patients with Alzheimer's disease and 635 patients with Parkinson's disease then found that, of the four candidate genes examined, only GSTO1 showed genetic differences associated with age at onset.
"By combining evidence based on gene expression and genetic association, we found a gene that modifies when the diseases start," said Li, the study's first author. "Understanding the role this gene plays in Alzheimer and Parkinson diseases may, in the future, lead to a means to delay the disorders' onset," she added, noting that even a short delay would benefit at-risk patients.
The development of drugs or gene therapies that would up-regulate genes for enzymes that are involved in quenching free radicals might work as an approach for slowing the general rate of aging and delaying the onset of degenerative disorders of the brain and of other parts of the body.
US and Japanese researchers have discovered that a rare serotonin transporter gene mutation causes some cases of obsessive compulsive disorder. (same article here)
Analysis of DNA samples from patients with obsessive compulsive disorder (OCD) and related illnesses suggests that these neuropsychiatric disorders affecting mood and behavior are associated with an uncommon mutant, malfunctioning gene that leads to faulty transporter function and regulation. Norio Ozaki, M.D., Ph.D., and colleagues in the collaborative study explain their findings in the October 23 Molecular Psychiatry.
Researchers funded by the National Institutes of Health have found a mutation in the human serotonin transporter gene, hSERT, in unrelated families with OCD. A second variant in the same gene of some patients with this mutation suggests a genetic “double hit,” resulting in greater biochemical effects and more severe symptoms. Among the 10 leading causes of disability worldwide, OCD is a mental illness characterized by repetitive unwanted thoughts and behaviors that impair daily life.
“In all of molecular medicine, there are few known instances where two variants within one gene have been found to alter the expression and regulation of the gene in a way that appears associated with symptoms of a disorder,” said co-author Dennis Murphy, M.D., National Institute of Mental Health (NIMH) Laboratory of Clinical Science. “This step forward gives us a glimpse of the complications ahead in studying the genetic complexity of neuropsychiatric disorders.”
These mutations do not appear to be the causes of most cases of OCD.
Psychiatric interviews of the patients’ families revealed that 6 of the 7 individuals with the mutation had OCD or OC personality disorder and some also had anorexia nervosa (AN), Asperger’s syndrome (AS), social phobia, tic disorder, and alcohol or other substance abuse/dependence. Researchers found an unusual cluster of OCD, AN, and AS/autism disorders, together with the mutation, in approximately one percent of individuals with OCD.
The scientists analyzed DNA from 170 unrelated individuals, including 30 patients each with OCD, eating disorders, and seasonal affective disorder, plus 80 healthy control subjects. They detected gene variants by scanning the hSERT gene’s coding sequence. A substitution of Val425 for Ile425 in the sequence occurred in two patients with OCD and their families, but not in additional patients or controls. Although rare, with the I425V mutation found in two unrelated families, the researchers propose it is likely to exist in other families with OCD and related disorders.
Having the pair of mutations appears to make symptoms worse.
In addition to the I425V mutation, the two original subjects and their two siblings had a particular form of another hSERT variant, two long alleles of the 5-HTTLPR polymorphism. This variant, associated with increased expression and function of the serotonin transporter, suggests a “double hit,” or two changes within the same gene. The combination of these changes, both of which increase serotonin transport by themselves, may explain the unusual severity and treatment resistance of the illnesses in the subjects and their siblings.
“This is a new model for neuropsychiatric genetics, the concept of two or maybe more within-gene modifications being important in each affected individual. This is also probably the first report of a modification in a transporter gene resulting in a gain rather than a decrease in function,” said NIMH Director Thomas Insel, M.D.
SERT allows neurons, platelets, and other cells to accumulate the chemical neurotransmitter serotonin, which affects emotions and drives. Neurons communicate by using chemical messages like serotonin between cells. The transporter protein, by recycling serotonin, regulates its concentration in a gap, or synapse, and thus its effects on a receiving neuron’s receptor.
Transporters are important sites for agents that treat psychiatric disorders. Drugs that reduce the binding of serotonin to transporters (selective serotonin reuptake inhibitors, or SSRIs) treat mental disorders effectively. About half of patients with OCD are treated with SSRIs, but those with the hSERT gene defect do not seem to respond to them, according to the study.
People who have rarer mutations have a tougher time finding effective treatments for the obvious reason that they make up a smaller market for any potential drug development. However, the ability to test for rarer mutations opens up the possibility of identifying subcategories of sufferers of a particular disorder and trying on them a wider range of existing drugs than would normally be used on patients for whom effective treatments are already known.
Also see the previous post Stanford Researchers Discover Treatment For Obsessive Shopping Disorder.
Genetic insurance is a better way of handling the problems brought on by genetic testing. Genetic insurance would pay out depending on the results of a genetic test. If you turn out to have a gene implying a higher risk of heart disease, for example, then the test would pay you enough to cover your now higher health and life insurance premiums and perhaps also something to cover the possibility that you will have a shorter working life.
The basic problem this proposal is trying to solve is that if people have genetic defects that predispose them toward various kinds of diseases and if medical insurance companies are allowed to ask for and know genetic test results then some will find medical insurance too costly or not available at all. The denizens of the US Senate, in their infinite lack of wisdom, want to prevent medical insurance companies from knowing genetic test results. But such an asymmetry of available information would cause high risk people to sign up for more insurance while low risk people decided to sign up for less. Revenues would go down while costs rose for the insurers. That is not a sustainable state of affairs.
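The adverse-selection spiral described above can be sketched numerically. In this toy model (all numbers invented for illustration) the insurer, unable to see individual risk, charges the pool's average expected cost each round, and anyone whose own expected cost is below the premium walks away:

```python
# Toy adverse-selection spiral: when customers know their own risk but the
# insurer does not, each premium hike drives out the cheapest customers.
# All numbers are invented for illustration.

def spiral(costs, rounds=5):
    """costs: expected annual claim cost per customer, known only to them.
    Each round the insurer charges the pool's average cost; anyone whose
    own expected cost is below the premium drops out."""
    pool = sorted(costs)
    history = []
    for _ in range(rounds):
        if not pool:
            break
        premium = sum(pool) / len(pool)
        history.append((premium, len(pool)))
        pool = [c for c in pool if c >= premium]  # low-risk customers leave
    return history

# 8 low-risk customers ($100/yr expected claims) and 2 high-risk ($2000/yr):
# the premium jumps from the $480 pool average to $2000 once the low-risk
# customers exit, leaving only the high-risk customers insured.
for premium, n in spiral([100] * 8 + [2000] * 2):
    print(f"premium ${premium:7.2f}, customers remaining {n}")
```

This is exactly why forced insurer ignorance of test results is not a sustainable state of affairs: revenues fall while claims costs rise.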
Tabarrok's proposed solution will only work for a short period of time at best. The problem is that genetic testing costs are inevitably going to fall so far that everyone will have their DNA entirely sequenced either at birth or while developing as an embryo. There is not going to be a period of universal ignorance about each person's test results during which genetic insurance could be purchased.
A variety of techniques promise to drive down the costs of DNA sequencing by orders of magnitude including microfluidics (see here and here and here) and nanopore technology (see here). Leroy Hood predicts $1000 per person DNA sequencing within 10 years. Whether DNA sequencing becomes cheap enough for the masses in 10 years or 20, that day is coming.
Also, even genetic test insurance invites cheating with secret tests done in advance of official tests. Unscrupulous people (of which the world has no shortage) could get their DNA tested by, say, a lab in another country and get the results back secretly before deciding what genetic insurance to buy. Plus, the size of the pay-out for such a policy would be hard to choose. How big would it need to be to pay higher medical insurance rates for, say, 20 or 30 years? Kinda depends on whether treatments are developed in 10 or 20 or 30 years that can fix the problems that a particular genetic variant causes. Picture a gene therapy, for instance, that programs the liver to lower blood cholesterol or to make blood cholesterol molecules larger. How long will various types of genetic risk factors remain risk factors? Actuaries are not going to be able to make a decent guess.
The very learned and clever LSE graduate Mick Jagger opined 30 years ago in Fingerprint File "These days its all secrecy, no privacy". Mick has definitely been a man ahead of his time. I constantly see signs that privacy is going to be impossible to maintain for everything from personal DNA sequences (see, as a good starting point to my previous posts on the subject: Easy Method To Extract DNA From Fingerprints) to who one is or where one is (see my entire Surveillance Society archive). More information will be available because all kinds of information will become cheaper, faster, and easier to collect, store, and sort through.
Update: Alex has responded to objections I've raised to his argument. He argues that the danger of people getting their DNA tested before they buy genetic insurance is a minor problem easily solved by a clause in insurance contracts forbidding previous testing. However, I do not think that in the long run a contract clause will be sufficient to deal with the problem. It will eventually become too easy for a single person to find out their own DNA sequence without anyone knowing that they have done it. Once DNA sequencing devices become operable by an unskilled individual anyone will be able to find out their own DNA sequence. Also, anyone who finds out the genetic sequences of their parents will have useful information about their own risks. If one of their parents is homozygous for some harmful dominant mutation they can be assured that they also have that risk factor. But my bigger objection remains that future generations will find out our genetic risks at too early an age for there to be a previous point at which to buy genetic insurance.
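The point about parental genomes leaking information follows directly from Mendelian inheritance. A quick enumeration (a generic sketch, not tied to any particular gene) shows why a homozygous parent transmits a risk allele with certainty, leaving the child nothing to test:

```python
# Probability that a child inherits at least one copy of a risk allele 'R',
# given each parent's genotype. Each parent passes one of their two
# alleles to the child at random, so we enumerate all four combinations.
from itertools import product

def child_carrier_probability(parent1, parent2):
    """Parent genotypes are 2-character strings, e.g. 'RR', 'Rn', 'nn',
    where 'R' is the risk allele and 'n' is the normal allele."""
    outcomes = [a + b for a, b in product(parent1, parent2)]
    carriers = [g for g in outcomes if 'R' in g]
    return len(carriers) / len(outcomes)

# A parent homozygous for a dominant risk mutation ('RR') passes a copy
# to every child; a heterozygous parent passes it only half the time.
print(child_carrier_probability('RR', 'nn'))  # 1.0
print(child_carrier_probability('Rn', 'nn'))  # 0.5
```

So knowing a parent's sequence can reveal a child's risk factor with probability 1, and a genetic insurance contract clause forbidding the child's own testing cannot plug that leak.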
There is one other problem with genetic insurance policies as a solution: the need to convince insurance companies to offer them in the first place. Genetic insurance policies could, in theory, be sold now since genetic testing is still fairly rare. But how to write such a policy in advance of various risk factors being identified? One big problem is that we don't know how much of the difference in total disease incidence will turn out to be a function of genetic variations. Keep in mind that some risk factors are of the sort that if you have the risk factor you have an X percent increased chance of a disease before age Y. But other risk factors are of the sort that you have an X percent chance of a disease before age Y if you also do A and B and you don't do C. An insurance company writing a policy now would have no way of knowing when genetic factor X will become testable, when the dependence of genetic factor X on behavioral or environmental factors A, B, and C will become known, or how difficult it will be for people to avoid doing A and B or to start doing C.
Another problem is that we don't know to what extent people in the future will tend to behave in ways that make their genetic risk factors more or less of a problem. For instance, suppose there are genetic risk factors for type II diabetes (I'd personally be willing to bet serious money that there are several). Well, obesity is a major risk factor for type II diabetes and the incidence of obesity is rising rapidly.
This illustrates an important question: what do you call a genetic risk factor in the first place? If someone has a genetic risk factor for eating too many sweets and that makes them fat and they eventually get type II diabetes as a result then should the insurance policy pay out? If it does pay out does it pay out when the risk factor for eating sweets is identified? Or when the risk factor for type II diabetes is identified? Or when type II diabetes develops? Or should there be different payouts for different combinations of these events?
The proportion of Americans with clinically severe obesity increased from 1 in 200 adults in 1986 to 1 in 50 adults in 2000—growing twice as fast as the proportion of Americans who are simply obese, according to a RAND Corporation study published today.
To be classified as severely obese, a person has to have a body mass index (a ratio of weight to height) of 40 or higher—roughly 100 pounds or more overweight for an average adult man. The typical severely obese man weighs 300 pounds at a height of 5 feet 10 inches tall, while the typical severely obese woman weighs 250 pounds at a height of 5 feet 4 inches.
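The BMI threshold quoted above is easy to verify. Body mass index is weight over height squared (kg/m², or the weight in pounds times 703 divided by the height in inches squared), and both of RAND's "typical" cases clear the severe-obesity cutoff of 40:

```python
# Body mass index from US customary units: BMI = 703 * lb / in^2.
def bmi(weight_lb, height_in):
    return 703.0 * weight_lb / height_in ** 2

# RAND's typical severely obese man: 300 lb at 5'10" (70 inches).
# RAND's typical severely obese woman: 250 lb at 5'4" (64 inches).
man = bmi(300, 5 * 12 + 10)    # ~43.0, above the threshold of 40
woman = bmi(250, 5 * 12 + 4)   # ~42.9, also above 40
print(f"man: {man:.1f}  woman: {woman:.1f}")
```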
One can imagine variations on genetic insurance policies. For instance, one variation could be that you get paid out only if you A) have the risk factor and B) get the disease. There are already disease-specific insurance policies though. So would such a policy add anything of value?
Update II: There is one other bone I want to pick with Alex: how can he claim that most people do not have serious genetic defects?
We should also remember that genetic insurance will be quite cheap because most people do not have serious genetic defects.
To put it another way: what is a defect? Many of the genetic variations that shorten life were selected for and have benefits in spite of their effects upon longevity. Some genetic variations that lengthen life never got selected for strongly enough to become widespread and so not everyone has them. If you lack some life-extending genetic variation does the variation that you have instead constitute a defect? Suppose you lack the genetic variation in Cholesteryl Ester Transfer Protein (CETP) that makes cholesterol molecules bigger and extends life as a result. Well, are you defective? Certainly it would make sense for a life insurance company to treat you differently if it knew your CETP alleles. There will probably turn out to be thousands of genetic variations that influence life expectancy and the incidence of disease. Some of those variations will even turn out to lower our risk of some diseases while raising our risk of others. Some of the costs and benefits will be very complex to describe. So how can a policy be written to account for all of this?
Another excellent example is the question of what is a mental defect. Robert Plomin argues that most genetic variations that cause learning disabilities are not abnormal variations but rather just collections of normally occurring variations that were selected for. The people who have learning disabilities are not, in many cases, suffering from a mutation or other developmental defect that occurred in them. They just happened to get a collection of alleles from their parents that cumulatively lowered their intellectual abilities to the point where people today would consider them learning disabled.
Halfway into a mouse pregnancy, before the testes have even formed, the activity of 51 genes is different in males and females, says Eric Vilain of the University of California, Los Angeles. His team analysed 12,000 brain genes.
Note that other news accounts report 54 genes are involved.
It rebuts 30 years of scientific dogma that the hormones estrogen and testosterone alone were responsible for differences between the male and female brain. So the researchers were surprised when they found 54 genes produced in different amounts in male and female brains prior to hormonal influence. Eighteen of the genes were produced at higher levels in the male brains and 34 were produced at higher levels in the female brains.
This opens up some interesting possibilities. If a large set of genes all express one way in females and in a different way in males then it may become possible to manipulate subsets of the genes involved in making male and female brains to create people who are in some ways female and in other ways male. Also, by pushing the expression of the genes more in one direction or the other it may eventually be possible to make even more masculine and more feminine minds.
I have a basic rule: the more we learn about the genetic basis of human nature the more we will be able to manipulate it. Abuses are inevitable. Also, the culture wars about abortion rights will seem like the little leagues as compared to the future battles about ethics when it becomes possible to change human nature by manipulating the development of the mind. Real physical wars may end up being fought over different visions of what is allowable to do in creating offspring.
This report also brings up the question of whether any of those who want to undergo a sex change operation experienced, during fetal development, gene expression patterns more like those of the opposite sex. The same question can be asked about homosexuals. The difficulty in answering these questions is that getting neurons out of a living brain to test would pose ethical and practical problems, and it would take years of waiting to see how people whose fetal-stage gene expression patterns were, at least in some respects, more like those of the opposite sex eventually turn out. My guess is that homosexuals will turn out to have various mixes of male and female brain gene expression patterns.
"Our findings may help answer an important question -- why do we feel male or female?" Dr. Eric Vilain, a genetics professor at the University of California, Los Angeles School of Medicine, said in a statement. "Sexual identity is rooted in every person's biology before birth and springs from a variation in our individual genome."
This is another nail in the coffin of the tabula rasa view of human nature.
This result may eventually be useful for more accurately identifying and treating those born with sexually ambiguous genitalia.
"If physicians could predict the gender of newborns with ambiguous genitalia at birth, we would make less mistakes in gender assignment," said Vilain.
This report follows and builds on work by a team headed by Vilain, reported a year ago, showing that the brain starts developing sexual orientation before the SRY gene starts genital development. See the post Brain Gets Sex Orientation Before Genitals for more details.
Update: It would be interesting to compare liver gene expression patterns in homosexuals and heterosexuals to see whether, in homosexuals, liver expression of sex-specific genes deviates from the typical male and female patterns.
In the November 1 issue of Genes & Development, Dr. Diane Robins and colleagues report on their discovery of two neighboring genes, Rsl1 and Rsl2, that repress male-specific liver gene expression in female mice. They found that female mice harboring mutations in Rsl genes aberrantly turn on male-specific liver genes, causing the female livers to adopt characteristically male patterns of gene expression.
If liver gene expression patterns were to turn out in some cases to be reliable proxies for brain gene expression patterns for genes that are sex-specific then that might make it easier to test for sex-specific gene expression patterns.
Update II: The UCLA press release contains more details on the Vilain study.
"We didn't expect to find genetic differences between the sexes' brains," Vilain said. "But we discovered that the male and female brains differed in many measurable ways, including anatomy and function."
In one intriguing example, the two hemispheres of the brain appeared more symmetrical in females than in males. According to Vilain, the symmetry may improve communication between both sides of the brain, leading to enhanced verbal expressiveness in females.
"This anatomical difference may explain why women can sometimes articulate their feelings more easily than men," he said.
Overall, the UCLA team's findings counter the theory that only hormones are responsible for organizing the brain.
"Our research implies that genes account for some of the differences between male and female brains," Vilain said. "We believe that one's genes, hormones and environment exert a combined influence on sexual brain development."
The scientists will pursue further studies to distinguish specific roles in the brain's sexual maturation for each of the 54 different genes they identified. What their research reveals may provide insight into how the brain determines gender identity.
Men and women really do think differently.
About nine per cent of people have at least one copy of a variant of the 5HT2a gene that calls for the amino acid tyrosine at one point in the receptor protein. The rest call for histidine. People with the tyrosine variant make receptors that are less readily stimulated by serotonin.
De Quervain's team compared 70 people with the tyrosine form to 279 with the histidine form. The tyrosine group was 21 per cent worse at remembering a list of five words or simple shapes five minutes after seeing them.
There is going to be a veritable torrent of additional such discoveries in the next 10 or so years. Lots of genetic variations that contribute to intelligence will be identified. The identification of these variations will be useful for breeding decisions and offspring genetic engineering. The use of the term "breeding decisions" may seem cold but depending on how quickly offspring genetic engineering becomes available a great many people may make decisions about mating for reproduction based on the genetic profiles of potential mates. If many genetic variations that contribute to cognitive differences are identified many years before offspring genetic engineering becomes possible then people will opt to choose their mates based on what genetic contributions their prospective mates can make to the cognitive ability of their offspring.
Some of the variations that raise and lower intelligence will turn out to do so by changing levels of expression of genes. Attempts will be made to develop drugs that will bind at targets in the regulatory chains that control the expression of those genes in order to make those genes express in less mentally capable people in ways that mirror their expression levels in smarter people. Also, attempts will be made to develop drugs that bind to receptors and change their shapes to be more like the shapes found in smarter people. The bottom line is that many of the genetic variations that contribute to cognitive ability will provide targets for pharmaceutical development. Every such discovery is another potential target for drug development.
It was already known that some anti-depressants work on the 5HT2a gene's receptor product and that those drugs have the side effect of interfering with short term memory. Another recently published report found that there is also a genetic variation in 5HT2a that appears to affect whether the anti-depressant Paroxetine (a.k.a. Paxil) causes intolerable side effects in patients.
One variation of the 5HT2a gene, based on a single nucleotide change in the DNA sequence, is thought to affect the amount of the receptor on nerve cells. When the researchers compared the version of this gene that a patient had to his or her experience taking the drug, the differences due to gene variation were striking. People with the one version of the gene were much more likely to discontinue therapy due to intolerable side effects when compared to the two other versions (46 percent vs. 16 percent).
Unfortunately the press release for the second report doesn't state whether the same nucleotide is involved as in the first case.
Short people may be short-changed when it comes to salary, status and respect, according to a University of Florida study that found tall people earn considerably more money throughout their lives.
"Height matters for career success," said Timothy Judge, a UF management professor whose research is scheduled to be published in the spring issue of the Journal of Applied Psychology. "These findings are troubling in that, with a few exceptions such as professional basketball, no one could argue that height is an essential ability required for job performance nor a bona fide occupational qualification."
Judge and Daniel Cable, a business professor at the University of North Carolina at Chapel-Hill, analyzed the results of four large-scale research studies - three in the United States and one in Great Britain - which followed thousands of participants from childhood to adulthood, examining details of their work and personal lives.
Judge's study, which controlled for gender, weight and age, found that mere inches cost thousands of dollars. Each inch in height amounted to about $789 more a year in pay, the study found. So someone who is 7 inches taller - say 6 feet versus 5 feet 5 inches - would be expected to earn $5,525 more annually, he said.
"If you take this over the course of a 30-year career and compound it, we're talking about literally hundreds of thousands of dollars of earnings advantage that a tall person enjoys," Judge said.
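Judge's "compound it" remark can be made explicit. Assuming (my number, not Judge's) that the extra pay is invested each year at a 5% annual return over a 30-year career, the $789-per-inch premium for a 7-inch height difference does indeed compound into the hundreds of thousands:

```python
# Future value of an annual pay premium invested every year of a career.
# The 5% return and the 30-year span are my own illustrative assumptions;
# the $789/inch figure comes from the study quoted above.
def career_premium(extra_per_year, years=30, annual_return=0.05):
    total = 0.0
    for _ in range(years):
        total = total * (1 + annual_return) + extra_per_year
    return total

# 7 inches * $789/inch = $5,523/yr (the article rounds this to $5,525).
premium = career_premium(789 * 7)   # roughly $367,000
print(f"${premium:,.0f}")
```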
The desire to assure that one's offspring will earn a higher income will serve as a powerful incentive for people to do genetic engineering on their eggs, sperm, and fetuses to ensure that their babies have the best possible chances in life. The prospects for boosting future average earnings potential are even greater from IQ boosts than from height boosts. A mere $5,525 annual salary increase is nothing compared to the differences in salary that would come if one could boost one's offspring's intelligence by, say, 20 IQ points.
Note that there is an important productivity difference between height enhancement and IQ enhancement: height just makes some people better able to get jobs or close sales or otherwise beat other people when competing for the same existing resources, but it probably doesn't increase overall productivity. Higher intelligence, by contrast, boosts one's ability to do mental work. Height differences are probably a net drain on productivity because to the extent that people judge each other by height they judge each other less by differences in real performance. The economy is made less efficient by judgements made on any basis other than real workplace productivity differences. Boosts in cognitive abilities, on the other hand, will lead to dramatic increases in workforce productivity.
Will people genetically engineer their children in the future? Any poll taken today that attempts to measure public attitudes toward offspring genetic engineering probably overestimates eventual future general opposition to the practice. Once prospective parents are offered concrete specific options for providing their offspring with advantages in height, looks, or cognitive abilities the issue of genetic engineering will change from an abstract moral or philosophical question to one in which personal interests are considered and personal benefits and costs are weighed. Given the enormous potential benefits from offspring genetic engineering for health, physical abilities, and mental abilities my guess is that the desire to provide those benefits for one's own offspring will shift a lot of people's opinions toward support for genetic engineering of offspring.
Another factor that is going to play a big role in shifting opinion in favor of offspring genetic engineering is national interest and the competition between nations. The United States faces the very real problem that China has over 4 times as many people as the US and is growing rapidly. The Chinese are fairly bright folks on average and, as Intel chairman Andy Grove has recently argued, it is probable that the United States will lose leadership in software and other industries to China and other countries. What can the US do with a smaller population? Make it smarter. Of course, China will be able to do the same and the Chinese will have no moral qualms about doing so. Therefore the case for making the US population smarter will become even more compelling.
Economic globalization is bringing people all over the world into direct competition with each other. Competition is getting more fierce and people will become generally more willing to embrace new innovations in order to get advantages over their competitors. Fear and greed will both work to promote the widespread embrace of offspring brain genetic engineering.
Update: A National Bureau of Economic Research (NBER) working paper from August 2006 by Anne Case and Christina Paxson finds that tall people make more money because increased height is correlated with higher IQ.
It has long been recognized that taller adults hold jobs of higher status and, on average, earn more than other workers. A large number of hypotheses have been put forward to explain the association between height and earnings. In developed countries, researchers have emphasized factors such as self esteem, social dominance, and discrimination. In this paper, we offer a simpler explanation: On average, taller people earn more because they are smarter. As early as age 3 — before schooling has had a chance to play a role — and throughout childhood, taller children perform significantly better on cognitive tests. The correlation between height in childhood and adulthood is approximately 0.7 for both men and women, so that tall children are much more likely to become tall adults. As adults, taller individuals are more likely to select into higher paying occupations that require more advanced verbal and numerical skills and greater intelligence, for which they earn handsome returns. Using four data sets from the US and the UK, we find that the height premium in adult earnings can be explained by childhood scores on cognitive tests. Furthermore, we show that taller adults select into occupations that have higher cognitive skill requirements and lower physical skill demands.
Therefore the market is not unfairly rewarding tall people just for being tall. Intelligence differences explain average income differences as a function of height.
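The paper's argument can be illustrated with a toy simulation: if height and earnings are both partly driven by a shared cognitive factor, a raw height premium shows up in earnings but shrinks toward zero once cognitive scores are controlled for. All coefficients below are illustrative assumptions, not estimates from the paper.

```python
# Toy model: ability drives both height and earnings; height has no
# direct effect on earnings. The raw height-earnings slope is positive,
# but controlling for ability (via Frisch-Waugh residual regression)
# makes the height premium vanish.
import random

random.seed(0)
n = 10_000
data = []
for _ in range(n):
    a = random.gauss(0, 1)                # latent cognitive ability
    h = 0.3 * a + random.gauss(0, 1)      # height loads partly on ability
    e = 1.0 * a + random.gauss(0, 1)      # earnings driven by ability alone
    data.append((h, a, e))

def slope(xs, ys):
    """OLS slope of ys regressed on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

heights = [d[0] for d in data]
abilities = [d[1] for d in data]
earnings = [d[2] for d in data]

# Raw premium: taller people earn more in this simulated population.
raw_premium = slope(heights, earnings)

# Control for ability: regress ability out of both variables and
# re-run the regression on the residuals.
a_h = slope(abilities, heights)
a_e = slope(abilities, earnings)
h_resid = [h - a_h * a for h, a in zip(heights, abilities)]
e_resid = [e - a_e * a for e, a in zip(earnings, abilities)]
controlled_premium = slope(h_resid, e_resid)  # near zero

print(round(raw_premium, 2), round(controlled_premium, 2))
```

The raw slope comes out clearly positive while the controlled slope hovers near zero, which is the shape of the result Case and Paxson report with real earnings data.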
Increased height due to better nutrition has probably been accompanied by increased intelligence in the last century in much of the world.
There's a lesson here for those looking for egg donors: if you cannot find out the IQ of prospective egg donors, go for the taller ones. That will give you a better chance of higher-IQ babies.
STANFORD, Calif. — Bone marrow cells can fuse with specialized brain cells, possibly bolstering the brain cells or repairing damage, according to research from the Stanford University School of Medicine. This finding helps resolve an ongoing debate: Do adult stem cells transform from bone marrow cells into other cell types, such as brain, muscle or liver cells, or do they fuse with those cells to form a single entity with two nuclei? The research shows that for complex brain cells called Purkinje cells, fusion is the normal pathway.
Helen Blau, PhD, the Donald E. and Delia B. Baxter Professor of Pharmacology, had previously shown that transplanted bone marrow cells can wind their way up to the brain in humans where they take on characteristics of Purkinje cells – large cells in the part of the brain that controls muscular movement and balance. She had also shown that mature cells in a lab dish can fuse with other cell types and take on characteristics of those cells.
In her most recent work, published in the Oct. 16 advance online issue of Nature Cell Biology, Blau showed that the bone marrow cells in mice fuse with existing Purkinje cells and activate genes normally made in Purkinje cell nuclei. The work will also be published in the November issue of the journal.
“I think that fusion might be a really important biological mechanism,” Blau said. She said researchers previously considered fusion to be less medically important than the idea that bone marrow cells may be able to change fates entirely. Blau disagrees with that assessment. “Fusion might be a sophisticated mechanism for rescuing complex damaged cells,” she said.
Blau and senior research scientist James Weimann, PhD, transplanted mice with bone marrow cells that had been genetically altered to produce a fluorescent green protein. Over the course of the next 18 months (75 percent of a mouse’s life span), they looked for signs of fluorescent green cells in the animals’ brains.
Over time, the group found an increasing number of Purkinje cells that glowed green under a microscope. Looking closely at these cells, they found two nuclei – one from the original Purkinje cell and one from the fused bone marrow cell. They also found that the compact nucleus of the bone marrow cell expanded over time to take on the appearance of the more loosely packed Purkinje cell nucleus.
The Stanford researchers want to develop a better understanding of the signalling process that causes a stem cell to merge with a Purkinje cell in order to be able to find ways to encourage stem cells to merge with damaged brain cells in patients suffering from neurological disorders.
This process of stem cell merger with brain cells also opens up the potential to use stem cells to deliver better genetic programming for cognitive enhancement and to deliver genes that can undo the effects of aging. If stem cells can be engineered to carry genetic variations that are identified in the future as enhancing cognitive performance, then at least some of the existing cells in the brain could be upgraded with those variations.
There is also an obvious application for trying to reverse some of the general aging of the brain. Genomic decay that occurs with aging could be dealt with at least in part by sending in stem cells to deliver younger nuclei to aged brain cells. Those younger stem cells could also deliver additional genes that would help to basically refurbish aged brain cells. One potential class of genes that could be delivered via this mechanism would be lysosomal enzymes (lysosomes are intracellular organelles that break down waste material in a cell) from other species that are capable of breaking down accumulated intracellular junk that human lysosomal enzymes are unable to break down.
Update: This research is also important because it suggests that adult stem cells may have less potential to differentiate into various cell types than was previously thought.
The adult bone marrow cells appear to fuse with existing cells of the heart, liver and brain rather than differentiating into entirely new ones, the scientists said. The studies suggest that only embryonic cells have the potential to regenerate diseased tissues.
Researchers who were involved with this research caution that clinical trials using adult stem cells are proceeding on a questionable assumption about how those stem cells work. Arturo Alvarez-Buylla of the University of California, San Francisco says past results may have mistakenly been interpreted as evidence of differentiation when fusion was really at work.
According to Morrison and Alvarez-Buylla, their findings offer caution to researchers who have already begun clinical trials in which they are inserting bone marrow cells into damaged heart tissue, in an attempt to regenerate healthy muscle.
"Our findings raise a red flag about going too fast to clinical trials based on the assumption that transdifferentiation is the mechanism by which stem cells give rise to other cell types," said Alvarez-Buylla. "Our paper suggests that previous claims of transdifferentiation may be explained by cell fusion." The scientists said they cannot rule out that transdifferentiation might be occurring, but that they saw no evidence of it in their experimental system.
But if any ongoing clinical trials using adult stem cell injection find benefits for people with, for instance, heart disease, it is unlikely the patients will complain just because the benefit comes via a different mechanism than the one on which the trials were originally justified. There are lots of people with very sick hearts and months left to live who are probably very willing to gamble.
Eventually methods will be developed that will allow cells to be instructed to shift between various types of cells regardless of the starting cell type. Advances in the ability to assay methylation patterns on DNA will allow the tracking of epigenetic state changes and the more rapid development of techniques for causing and controlling the transition of cells into different cell types.
ALBUQUERQUE, N.M. — Researchers at the Department of Energy’s Sandia National Laboratories have developed a new lightweight material to withstand ultra-high temperatures on hypersonic vehicles, such as the space shuttle.
The ultra-high-temperature ceramics (UHTCs), created in Sandia’s Advanced Materials Laboratory, can withstand up to 2000ºC (about 3,600ºF).
Ron Loehman, a senior scientist in Sandia’s Ceramic Materials, said results from the first seven months of the project have exceeded his expectations.
“We plan to have demonstrated successful performance at the lab scale in another year with scale-up the next year,” Loehman said.
Thermal insulation materials for sharp leading edges on hypersonic vehicles must be stable at very high temperatures (near 2000ºC). The materials must resist evaporation, erosion, and oxidation, and should exhibit low thermal diffusivity to limit heat transfer to support structures.
UHTCs are composed of zirconium diboride (ZrB2) and hafnium diboride (HfB2), and composites of those ceramics with silicon carbide (SiC). These ceramics are extremely hard and have high melting temperatures (3245ºC for ZrB2 and 3380ºC for HfB2). When combined, the material forms protective, oxidation-resistant coatings, and has low vapor pressures at potential use temperatures.
“However, in their present state of development, UHTCs have exhibited poor strength and thermal shock behavior, a deficiency that has been attributed to inability to make them as fully dense ceramics with good microstructures,” Loehman said.
Loehman said the initial evaluation of UHTC specimens provided by the NASA Thermal Protection Branch about a year ago suggests that the poor properties were due to agglomerates, inhomogeneities, and grain boundary impurities, all of which could be traced to errors in ceramic processing.
During the first seven months, the researchers made UHTCs in both the ZrB2 and HfB2 systems that are 100 percent dense or nearly so. They have favorable microstructures, as indicated by preliminary electron microscopic examination. In addition, the researchers have hot pressed UHTCs with a much wider range of SiC contents than ever before. Availability of a range of compositions and microstructures will give system engineers added flexibility in optimizing their designs.
Money spent to fund basic research in materials science does more to advance the cause of space exploration and space travel than money spent operating the Space Shuttle and International Space Station. The development of new materials will eventually help enable the development of much more advanced launch vehicles and spacecraft. Also, nanotech advances promise eventually to enable the construction of a space elevator. Far too much government money is spent doing things in space in the short term that would be better spent pursuing scientific and technological advances that would enable us to do orders of magnitude more in the long term.
On Taiwan, abortions have skewed the island's demographics to the point that only two girls are born for every three boys.
An obvious consequence is that when the little king passes puberty, he discovers that the girl he liked in high school has gone to USC, probably never to return, while those who remain are being snapped up by other men.
To ease the gender gap, Taiwanese men import brides from the mainland. Unfortunately, these women are outnumbered by those smuggled into the country illegally, who, in exchange for $10,000, can legalize their status with marriages of convenience, then head for the brothels to earn real money. These bogus nuptials are difficult to detect since many Taiwanese men hop between marriages until they find a woman who can bear them a son.
The effect this will have is to select against the reproduction of economically less successful and physically less attractive males. Women prefer wealthier and higher status males and the approximately one third of Taiwanese males who are going to be left unable to reproduce will be, on average, lower status and less affluent. Any genetic variations that reduce economic success or physical attractiveness will therefore be selected against. This will probably tend to raise the average IQ of Taiwanese babies and probably will make them more driven and motivated. For more information on the practice of selective abortion of female fetuses in China and India see the post Girl Shortage Causes Wife Buying In India. While Nobelist Sydney Brenner sees natural selection as obsolete (see Sydney Brenner: Biological Evolution Is An Obsolete Technology) it seems obvious that natural selection is still happening to humans. The only thing that has changed is that different genes are being selected for or against than was previously the case before modern medicine and cheap foods became available.
Update: To clarify a point: It can be debated just what is natural versus artificial selection. What is artificial must somehow involve sentience modifying and creating elements of an environment. Should we limit the use of the term artificial selection to refer only to changes in offspring genes caused by conscious engineering of offspring? Or if we change our environments for other reasons and, as a side effect, cause those environments to exert different selective pressures on us that change the frequency of alleles in successive generations should we call that artificial selection too? Heck if I know. I care less about semantic debates than about understanding the actual processes that are at work.
One can (and some people do) even carve out something called sexual selection as distinct from natural selection (that strikes me as more of a subcategory of natural selection - not that I care all that much whether it is treated as a subcategory). But the important point here is that just because we have modified our own environment in substantial ways does not mean that the environment is not still exerting selective pressures on us.
Selection that causes different frequencies of alleles from one generation to the next is still happening. While there are some exceptions of rather limited scope (e.g. preimplantation testing of IVF embryos) we aren't yet using genetic engineering to customize our offspring. Yet for other reasons we have made many changes to our environments which, as a side effect, are causing changes in the frequency of many alleles in successive generations of humans. Scientists who claim that human evolution by natural selection has stopped are trying to imply that there are no selective pressures at work. But it is incredibly obvious that this is an erroneous conclusion.
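The point that mating-success selection still shifts allele frequencies can be made concrete with a minimal model. The fitness numbers below are illustrative assumptions, not estimates for any real allele, and the model is a simplified haploid one with selection acting through males only.

```python
# Toy haploid model: allele "A" raises a male's odds of being among the
# men who reproduce; allele "a" lowers them. Because selection acts only
# through the male side, the effective selection strength is halved
# (females are assumed neutral). Frequencies still shift steadily.

def next_freq(p, w_A=1.0, w_a=0.8):
    """One generation of selection; p is the frequency of allele A.

    w_A, w_a are male-side relative fitnesses (illustrative values).
    """
    w_a_eff = (w_a + 1.0) / 2  # average male selection with a neutral female side
    q = 1 - p
    mean_w = p * w_A + q * w_a_eff
    return p * w_A / mean_w

p = 0.5
for gen in range(20):
    p = next_freq(p)

print(round(p, 3))  # frequency of A rises well above its starting 0.5
```

Even with selection operating only through which males get to reproduce, twenty generations are enough to move the allele from half the population to the large majority of it, which is why "evolution has stopped" claims do not hold up.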
October 15, 2003 -- (BRONX, NY) -- Researchers at the Albert Einstein College of Medicine of Yeshiva University and colleagues have discovered that a gene mutation helps people live exceptionally long lives and apparently can be passed from one generation to the next. The scientists, led by Dr. Nir Barzilai, director of the Institute for Aging Research at Einstein, report their findings in the October 15, 2003 issue of the Journal of the American Medical Association (JAMA).
The mutation alters Cholesteryl Ester Transfer Protein (CETP), an enzyme involved in regulating lipoproteins and their particle size. Compared with a control group representative of the general population, centenarians were three times as likely to have the mutation (24.8 percent of centenarians had it vs. 8.6 percent of controls) and the centenarians' offspring were twice as likely to have it.
CETP affects the size of "good" HDL and "bad" LDL cholesterol, which are packaged into lipoprotein particles. The researchers found that the centenarians had significantly larger HDL and LDL lipoprotein particles than individuals in the control group. The same finding held true for offspring of the centenarians but not for control-group members of comparable ages.
Evidence increasingly indicates that people with small LDL lipoprotein particles are at increased risk for developing cardiovascular disease, the leading cause of death in the United States and the Western world. Dr. Barzilai and his colleagues believe that large LDL particles may be less apt than small LDL particles to penetrate artery walls and promote the development of atherosclerosis, a major contributor to heart disease and stroke. Their study found that HDL and LDL particles were significantly larger in those offspring and control-group members who were free of heart disease, hypertension and the metabolic syndrome (a pre-diabetic condition that increases risk for cardiovascular disease).
The research team studied people of Ashkenazic (Eastern European) Jewish descent because of the group's genetic homogeneity -- it had a small number of "founders" and was socially isolated for hundreds of years. Studying a group of genetically similar people speeds the identification of significant genetic differences and limits the amount of genetic "noise" that can result when examining more heterogeneous groups. (The research team also included scientists from the University of Maryland School of Medicine; Tufts University; Boston University School of Medicine; and Roche Molecular Systems Inc.)
To identify the biological and genetic underpinnings of exceptional longevity, the researchers studied 213 individuals between the ages of 95 and 107, along with 216 of their children. For comparison, they looked at 258 spouses of the offspring and their neighbors.
"These results are significant because they mean that the mutation of the CETP gene is clearly associated with longevity," says Dr. Barzilai. "Furthermore, finding this mutation in both the centenarians and their offspring suggests that the mutation may be inherited."
Keep in mind that about three quarters of the long-lived did not have this cholesterol particle size boosting genetic variation. Likely there are a number of genetic variations in a variety of genes that affect longevity.
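The "three times as likely" figure follows directly from the quoted carrier rates, and it is worth noting that the corresponding odds ratio is a bit higher than the simple ratio of percentages:

```python
# Sanity check of the quoted figures: 24.8% of centenarians carried the
# CETP variant versus 8.6% of controls.
centenarian_rate = 0.248
control_rate = 0.086

# Ratio of carrier rates: ~2.88, i.e. roughly "three times as likely".
prevalence_ratio = centenarian_rate / control_rate

# The odds ratio (the measure an epidemiologist would usually report)
# comes out somewhat higher.
odds_ratio = (centenarian_rate / (1 - centenarian_rate)) / (
    control_rate / (1 - control_rate))

print(round(prevalence_ratio, 2))  # ~2.88
print(round(odds_ratio, 2))
```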
"Large particle size seems to give people an extra 20 years of life, with very little disability to go along with it," said Dr. Nir Barzilai, who directed the study at the Albert Einstein College of Medicine in the Bronx.
Another 20 years would be great. During those 20 additional years more medical advances will happen that will increase your odds of living even longer. How could this be done? CETP is made in the liver and released into the blood. Possibly a drug could be developed to either mirror the effects of CETP in the blood or interact with it to change its shape or to increase its synthesis and release from the liver. But another strong possibility would be the development of a gene therapy for liver cells that provides one's body with the variation of Cholesteryl Ester Transfer Protein that increases cholesterol particle size. One thing you can do now: exercise.
One caveat: A person who has large cholesterol particles from birth is going to age more slowly from the very start. A person first getting a treatment to increase cholesterol particle size at age 50 will already have 50 years of aging at a faster rate due to smaller cholesterol particles. So the benefit will not be as great for anyone who gets some future cholesterol particle boosting treatment later in life.
When people who have been sedentary start performing regular exercise, their L.D.L. particles grow bigger, as shown by Dr. William E. Kraus, a cardiologist at the Duke University Medical Center, and his colleagues a year ago in a study of people 40 to 65.
Though Barzilai doesn't endorse smoking, he noted that the cholesterol mutation exerted such a powerful protective effect that many of his volunteers never developed lung disease despite decades of puffing cigarettes and cigars.
Among his volunteers was a 95-year-old woman who had smoked since she was 8 and who, not coincidentally, has a 100-year-old sister and a brother who's 97.
Expect more reports of this sort where life-extending genetic variations are identified. Each one will become a target of either drug development, gene therapy development, or both.
The Senate has moved unanimously to ensure that breaking down the human genetic code will bring health benefits without exposing people to job and health care discrimination. The Genetic Information Nondiscrimination Act that cleared the Senate Tuesday on a 95-0 vote would bar employers from using people's genetic information or family histories in hiring, firing or assigning workers. Insurance companies could not use genetic records to deny medical coverage or set premiums.
Any effort that seeks to prevent people from taking advantage of useful information will have harmful consequences. If genetic information suggested that some workers were uniquely susceptible to exposure to some class of compounds that is completely innocuous to most people, this proposed law would probably prevent companies from assigning workers based on that information. If particular genetic profiles turn out to make some people more likely to break down under physically dangerous, stressful situations then, again, under this proposed law employers would be banned from using that information.
One might argue that the legislators want to ensure that each person is treated as an individual. But individual genetic profiles will provide a great deal of information about each person, allowing a far more detailed breakdown of just how much the members of any group of workers or job candidates differ from each other. The problem is that using genetic profiles seems to many people too much like predestination: the use of genetic information threatens the belief that free will allows people to mold and develop themselves in ways that transcend their individual genetic inheritance. The ability to use DNA test results to more accurately predict future performance in jobs and school, and the odds of getting illnesses, threatens people's sense of control over their destiny and their belief in limitless personal potential. There is resistance to the use of technological advances to automatically produce more accurate evaluations of the potential of individual humans.
The chairman of the health panel, Senator Judd Gregg, Republican of New Hampshire, described the bill as "civil rights legislation" for "a world where the secrets of human life have been plotted out and sheep have been cloned."
"This is a moral responsibility and a practical necessity," said Senate Majority Leader Bill Frist (R-Tenn.). Senate Minority Leader Thomas A. Daschle (D-S.D.) said, "We can't afford to take one step forward in science but two steps backward in civil rights."
But how would this bill work in practice? Some day personal DNA sequencing will become very cheap and most people in developed countries will be able to afford to have their complete genome sequenced. Once they have that information they will be able to find out their risk profile for a large variety of diseases. This will create a real serious problem for medical insurance companies: those people who are most at risk for getting sick will sign up for more medical coverage and those at lower risk for serious illness will sign up for less coverage on average. This will cause the insurance companies to collect less in revenues while paying more out in benefits. They will respond by raising rates on everyone who seeks insurance. This will cause those at lowest risk to further reduce their coverage. If individuals know their risks and the insurance companies do not then people will be able to game the system. The result will be fewer customers for medical insurance and those who do seek it will be at greater risk of getting sick.
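The adverse-selection spiral described above can be sketched as a toy simulation. The population, risk distribution, and loading factor below are all illustrative assumptions, not actuarial data; the point is only the dynamic: buyers who privately know their risk drop out when the premium exceeds their expected loss, the insurer reprices to the riskier pool that remains, and the process repeats.

```python
# Toy adverse-selection spiral: risk-neutral individuals buy insurance
# only if the premium is at or below their privately-known expected loss;
# the insurer then reprices to the average risk of actual buyers.

def spiral(risks, premium, loading=1.1, rounds=6):
    """Iterate premium-setting against self-selecting buyers.

    risks: each person's privately-known expected annual loss.
    premium: insurer's initial premium.
    Returns (final_premium, buyers_remaining)."""
    for _ in range(rounds):
        buyers = [r for r in risks if r >= premium]  # low risks drop out
        if not buyers:
            return premium, 0  # market has unraveled completely
        # Reprice at the average expected loss of actual buyers, plus loading.
        premium = loading * sum(buyers) / len(buyers)
    return premium, len(buyers)

# Illustrative population: expected losses spread evenly from 100 to 10,000.
population = [100 + 100 * i for i in range(100)]
initial = 1.1 * sum(population) / len(population)  # priced to the average risk

final_premium, remaining = spiral(population, initial)
print(round(final_premium), remaining)  # premium climbs as the pool shrinks
```

In this stylized version with a loading above the fair premium, the market unravels entirely within a few repricing rounds, which is the extreme form of the "fewer customers, sicker pool" outcome described above.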
Researchers at UCLA have demonstrated with Functional MRI (fMRI) scans that the pain of rejection looks similar in a brain scan to the neuronal activation pattern seen with physical pain.
In the first of three rounds, experimenters instructed UCLA undergraduates just to watch the two other players because "technical difficulties" prevented them from participating. In the second round, the students were included in the ball-tossing game, but they were excluded from the last three-quarters of the third round by the other players. While the undergraduates later reported feeling excluded in the third round, fMRI scans revealed elevated activity during both the first and third rounds in the anterior cingulate. Located in the center of the brain, the cingulate has been implicated in generating the adverse experience of physical pain.
"Rationally we can say being excluded doesn't matter, but rejection of any form still appears to register automatically in the brain, and the mechanism appears to be similar to the experience of physical pain," Lieberman said.
When the undergraduates were conscious of being snubbed, cingulate activity directly responded to the amount of distress that they later reported feeling at being excluded.
The researchers also detected elevated levels of activity in another portion of the brain — the right ventral prefrontal cortex — but only during the game's third round. Located behind the forehead and eyes, the prefrontal cortex is associated with thinking about emotions and with self-control.
"The folks who had the most activity in the prefrontal cortex had the least amount of activity in the cingulate, making us think that one area is inhibiting one or the other," Lieberman said.
The psychologists theorize that the pain of being rejected may have evolved because of the importance of social bonds for the survival of most mammals.
"Going back 50,000 years, social distance from a group could lead to death and it still does for most infant mammals," Lieberman said. "We may have evolved a sensitivity to anything that would indicate that we're being excluded. This automatic alarm may be a signal for us to reestablish social bonds before harm befalls us."
"These findings show how deeply rooted our need is for social connection," Eisenberger said. "There's something about exclusion from others that is perceived as being as harmful to our survival as something that can physically hurt us, and our body automatically knows this."
There are interesting legal ramifications to this report. As fMRI and other objective measures of pain become cheaper and more advanced, do not be surprised if such tests are used in legal cases to buttress claims of pain and suffering in order to win legal awards. This will be seen as unfair by those who have higher pain thresholds and who are less sensitive to rejection or to treatment that others might perceive as unfair and painful. Is it fair for people who suffer differing degrees of emotional pain from the same experience to receive different sized legal settlements because they are not equally prone to feeling emotional pain in response to traumatic experiences?
There is another ramification to this report: humans are wired to not want to be rejected by other humans. As the authors state, this is probably a consequence of human evolution. Well, suppose it becomes possible for people to modify their minds to reduce their need for acceptance by others. This would have all sorts of consequences for behavior. A great many human activities are performed (for both good and ill) in order to win acceptance from others. What would be the net effect of a reduced desire to be accepted? My guess is that among many other effects it would tend to reduce altruistic behavior and would reduce the incentive to avoid doing things that are inconsiderate of others.
The ability to edit memories, change one's personality, change very basic desires, and to change what causes pain or pleasure could provide us with many benefits. But it could also create changes in human nature that undermine civilization. When it becomes possible to reduce one's feeling of empathy or to stop oneself from feeling guilty over acts committed against others some malevolent and foolish people will choose to do so. This could be done out of a motive to reduce suffering. Some who feel very rejected and in pain from rejection will decide to eliminate the pain response that occurs when one is rejected. Imagine the consequences if more people became indifferent to the approval of others.
The ability to do brain reprogramming is going to force the issue of what constitutes a rights-possessing being. Ayn Rand's claim that rights are a product of our ability to think rationally is just not an adequate explanation. It is part of the explanation but only a part. What we feel pain or pleasure over in dealing with others plays a large role in causing us to treat others fairly or unfairly. It seems inevitable that our minds will become much more mutable in the future. Once that happens we will have to face the question of how to decide whether each person who opts to have mind modifications done still possesses the minimum set of qualities that are necessary for a human to possess to safely live in a society, respect the rights of others, and carry out responsibilities that are expected of anyone who is a member of that society.
This is far from the only report suggesting there are qualities of the human brain that help support the functioning of humans in societies. See, for example, my previous post on altruistic punishment. Also, see the post Emotions Overrule Logic To Cause Us To Punish.
The U.S. Census Bureau's newest numbers show that married-couple households -- the dominant cohort since the country's founding -- have slipped from nearly 80% in the 1950s to just 50.7% today. That means that the U.S.'s 86 million single adults could soon define the new majority. Already, unmarrieds make up 42% of the workforce, 40% of home buyers, 35% of voters, and one of the most potent -- if pluralistic -- consumer groups on record.
Only half of the married couple households have kids in them.
Married couples with kids, which made up nearly every residence a century ago, now total just 25% -- with the number projected to drop to 20% by 2010, says the Census Bureau. By then, nearly 30% of homes will be inhabited by someone who lives alone.
Delayed marriage, cohabitation, divorce, and other factors such as the death of a spouse are all working to make married couples a minority.
You can bet money that the unmarrieds are going to push for changes in taxes and benefits that currently give the marrieds all sorts of advantages.
In 1940, less than 8 percent of Americans lived alone. Today that proportion has more than tripled, reaching nearly 26 percent. Singles number 86 million, according to the Census Bureau, and virtually half of all households are now headed by unmarried adults.
Signs of this demographic revolution, this kingdom of singledom, appear everywhere, including Capitol Hill.
Last month the Census Bureau reported that 132 members of the House of Representatives have districts in which the majority of households are headed by unmarried adults.
As different aspects of mental health are better understood, more parts of the innovative process will be impacted, from accelerating learning via cogniceuticals to enhancing interpersonal communication with emoticeuticals. As neuroceutical usage spreads across industries it will create a new economic “playing field” wherein individuals who use neuroceuticals will achieve a higher level of productivity than those who don’t.
Many such drugs are already under development. In the future drugs will be available that raise intelligence and that increase the length of time that intense concentration can be maintained. Permanent memory formation will be enhanced. Short term memory will be enhanced as well. Improving the mental capabilities of workers is certain to boost economic productivity and output. Engineers will be able to hold more aspects of designs in their minds and combine design elements in different ways more rapidly. Writers will be able to remember more facts and ideas and formulate sentences more quickly. Managers will be able to think thru problems more thoroughly and do so for longer periods of time. Okay, great. Can't wait. But there are still major questions about how this will all play out. What follows are speculations on the likely distribution of usage of brain-boosting drugs within populations and whether such drugs will make us more or less alike in our mental abilities.
First of all, we start with the fact that people differ in their intelligence. We are not all equally capable to start with. There is general intelligence and then there also are various components of mental functioning (e.g. spatial perception, short term memory, ability to form long term memories) that tend to vary with general intelligence but not perfectly so. For instance, a person can suffer brain damage that prevents the formation of long term memories and still retain the ability to hold short term memories. But the important point here is that people differ in their innate mental abilities.
Given that people start with different innate abilities, what will be the effect of the use of brain-boosting drugs? If everyone used brain-boosting drugs would we become more or less equal in mental abilities? Well, it depends in part on why people differ in ability in the first place and on the mechanisms by which the brain-boosters work. By analogy, suppose two engines were identical except one had cleaner spark plugs than the other and hence ran at higher horsepower. If you added something that cleaned the spark plugs then only the lower horsepower engine would benefit and the effect would be to bring the two engines closer together in measured horsepower. But if you took two engines that were of different size and therefore different horsepower (say 100 and 200 hp) and you gave them better fuel that boosted them both in horsepower by, say, 10%, the effect would probably be to widen the gap in absolute horsepower since they'd go to being 110 and 220 horsepower. A 100 hp difference would be replaced with a 110 hp difference. Also, imagine some enhancement to an engine's ignition system or fuel system that could only be added to engines that have camshafts strong enough to withstand the greater force applied to them. Not all engines can be boosted in horsepower by use of a better fuel system because some other part would break if that happened. Or it could be that only engines with valves of a certain shape could properly utilize an improved fuel mix.
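The arithmetic of the two main boost scenarios above can be checked directly, using the illustrative horsepower figures from the text:

```python
# Two boost scenarios from the engine analogy:
#   1. An equalizing ("spark plug") boost that only helps the weaker
#      engine narrows the absolute gap.
#   2. A proportional ("better fuel") boost that raises both engines by
#      the same percentage widens the absolute gap, even though both gain.

weak, strong = 100, 200          # horsepower before any boost
gap_before = strong - weak       # 100 hp

# Scenario 1: fix an inefficiency the strong engine doesn't have.
weak_fixed = weak + 20
gap_equalizing = strong - weak_fixed          # 80 hp: gap narrows

# Scenario 2: both engines gain 10%.
weak_prop, strong_prop = weak * 1.10, strong * 1.10
gap_proportional = strong_prop - weak_prop    # ~110 hp: gap widens

print(gap_before, gap_equalizing, round(gap_proportional))
```

The same arithmetic carries over to the intelligence case: boosts that repair deficits compress the distribution, while percentage boosts stretch it.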
My guess is that different brain-boosting drugs will be discovered that operate in ways analogous to each of the engine scenarios outlined above. Some brain-boosting drugs will be discovered that basically give less smart brains some improvement that smarter brains already have. Those brain-boosting drugs will narrow the differences in intellectual abilities. But it is also likely that other brain-boosting drugs will be discovered that boost by some percentage across all minds and will therefore have an effect analogous to that of the better fuel that makes the performance gap between engines even wider. Still other brain-boosting drugs may turn out to benefit only the smartest because, for instance, certain genetic variations that boost intelligence might be enhanceable by drugs, and you would have to carry the intelligence-boosting variation a drug operates on in order to benefit from it.
So will brain-boosting drugs overall narrow or widen the spread of intelligence in human populations? My guess is that they will tend overall to widen the spread. Drugs do not seem like effective agents for correcting inefficiencies in enzymes that may lower intelligence. Gene therapies will probably be able to do that eventually. But simple classical chemical compound drugs are probably not going to be able to fix most protein shape variations that cause some people to be not as smart. Also, to the extent that intelligence comes from having a larger number of neurons and a generally larger brain, well, short of making the brain case bigger with surgery it is going to be hard to give a smaller brained person more volume for growing new neurons. Plus, trying to expand a fully differentiated brain is going to be difficult. Though there are much better prospects for feeding drugs to babies to make their brains grow larger. Still, if we restrict our analysis to fully grown adults my guess is that brain-boosting drugs will tend to widen the range of intelligence in a population if the drugs are used equally by all.
Of course, the "if the drugs are used equally by all" assumption is not likely to hold up in practice. Look at the whole world. Certainly when brain-boosting drugs first become available they will not be used by everyone all at once. The drugs will not be approved in all countries at the same time. Some people will be afraid to try them. Some won't be able to afford them. The most affluent in the most industrialized countries are likely to try them at a faster rate. Also, people who already do mentally difficult work will have the biggest short-term incentive to try brain-boosting drugs. Someone who is just digging ditches, is worried about the risk, doesn't see any immediate economic benefit, and sees the cost as substantial will probably decide to wait. But someone who is a trader on Wall Street tracking huge numbers of bond contracts and pulling down a few hundred thousand dollars a year is going to jump on a drug that boosts intelligence or memory before other traders get a leg up by jumping on it first. People in especially competitive and high-paying occupations will have the biggest incentive and means to be first users.
This all leads me to believe that brain-boosting drugs will tend to widen the distribution of the level of mental functioning in any one society and in the world as a whole. This will provoke a variety of political responses. For instance, expect to see demands from the political Left for government subsidies and price controls on drugs that boost intelligence. This isn't necessarily a bad thing. Government subsidies to fund the purchase of brain-boosting drugs will probably turn out to be a net benefit for society as a whole if the drugs help unemployed people to learn skills that turn them into taxpayers.
Businessweek has an article that highlights the development of drugs for memory and general cognitive enhancement.
HUMAN TRIALS. C.L. took nine capsules of CX516 daily for 12 weeks this spring. The impact was immediate. "At the start of the trial, I could remember less than five words out of a list of 20. By the second week, I could get 14 out of 20. There was a very, very appreciable enhancement." He has since finished his part in the study and says it was "heartbreaking" to go off the drug. "I've been thinking of some other way to get it, and I don't give a damn if it's legal or illegal."
If he waits a few years, C.L. should be able to solve his problem legally. At least 60 pharmaceutical and biotech companies around the world are working on novel memory pills. Some 40 are in human trials, and the first of these could be on the market within the next few years.
The aging of the baby boomers and their unwillingness to go quietly into the night provides a big incentive for drug developers to come out with compounds that will help boomers think more clearly. This would be especially helpful for the boomers who did too much bad LSD and other recreational drugs in the 60s and who haven't had a clear thought since then.
The Company’s proprietary family of AMPAKINE compounds affects and enhances the activity of AMPA-type glutamate receptors, complexes of proteins that are involved in communication between cells in the human brain (neurons). Glutamatergic transmission between neurons is by far the brain’s most abundant communication system. When a neuron releases glutamate and it binds to the AMPA receptor, AMPAKINE compounds increase or magnify the effect of glutamate, thereby amplifying normal brain signals.
The AMPAKINE compounds, which can be taken orally, rapidly enter the brain and increase the ability of neurons to communicate with each other. Data from pilot clinical trials have shown that the AMPAKINE CX516, a lead compound, is effective in improving memory in elderly volunteers and patients with schizophrenia.
Some of the drugs mentioned in the Businessweek article operate on neurotransmitter metabolism by enhancing neurotransmitter receptor sensitivity, preventing reuptake, stimulating release, or perhaps by preventing breakdown or stimulating production of neurotransmitters. One or more of these approaches will eventually result in useful drugs that provide real benefits for the middle aged, the elderly and perhaps even younger minds. Large numbers of people (including me!) would certainly jump at the opportunity to take a memory enhancing drug that operated by one of these mechanisms if the side effects didn't seem too onerous or risky. But the better longer term approach to the prevention of age-related decline in memory and processing speed is to develop treatments that reverse brain aging. What is worrying about any approach that does not target an actual aging or disease process is that turning up the functional level of the brain could conceivably increase mental functioning in the short term but, by speeding up brain metabolism, accelerate brain aging.
The article mentions the Alzheimer's Disease experimental vaccine AN-1792 from Elan Pharmaceuticals, which successfully cleared away beta amyloid plaque in many test subjects. In a clinical trial this vaccine prevented mental decline in two thirds of the subjects while causing fatal brain inflammation from immune T cell attack in a small number of them. The result provides evidence for the idea that plaque accumulation really does cause Alzheimer's and strongly suggests that a better vaccine (probably one designed to hit a different antigen section of the plaque that doesn't look like any surface protein on normal brain cells) could successfully target the plaque while avoiding brain inflammation. This result is causing Elan and others to pursue better Alzheimer's vaccines.
This vaccine approach for Alzheimer's is probably the easiest way to try to stop a disease process that comes with age. Send in antibodies to target the plaque and the immune system eats up the plaque and the plaque is no longer there to kill neurons. The problem, of course, is that there are plenty of disease processes and aging processes that are not amenable to treatment with vaccines. The other disease processes that come with aging will be much harder to target to stop and reverse. Accumulated damage in neurons will likely require gene therapy to send in instructions to repair them. But effective gene therapies for brain rejuvenation will take much longer to develop in comparison to vaccines.
Another useful approach will be to send in stem cells to replace aged neural stem cells in the hippocampus. The brain is constantly forming new neurons from stem cells in the brain but the reservoirs of stem cells age from repeated replication and accumulated damage and they need to be replaced in order to restore the ability to form new neurons back to youthful levels.
The classic chemical compound drug approach to disease treatment has real limits to what it can accomplish. To do the most complex manipulations of an organism requires many steps and fancier instructions than a single chemical compound can provide. In a nutshell, what is needed is the ability to send in the equivalent of computer programs. Cell therapies and gene therapies have a lot more potential than classical chemical compound drugs in the long run because cells and genes carry much more information and can therefore carry out more complex tasks.
Chemical compound drug therapies still have a long life ahead of them. One reason is that cell therapy and gene therapy development are both (unfortunately) still in their infancy. Also, the same kinds of advances in DNA sequencing and gene activity assays that are pointing the way toward useful gene therapies are also pointing the way toward targets in the cell for chemical compounds. Since cells already have tens of thousands of genes one way that drug therapies will improve is by the development of compounds that can turn individual genes on and off by binding to regulatory proteins. Fairly complex transformations of cells will eventually be carried out by sending in series of compounds that sequentially turn on and off various genes. One area where this will be used first is in cancer therapy. The way forward for anti-angiogenesis compound usage is probably going to be to use multiple compounds in concert to cut off different pathways that cause the angiogenesis (the growth of new blood vessels) that helps keep growing tumors fed. Also, anti-angiogenesis compounds will be used in combination with other compounds that regulate other steps involved in cell proliferation.
Update: A team led by Canadian researcher Dr. William Molloy of McMaster University and of St. Peter's Centre for Studies in Aging in Hamilton, Ontario has discovered that antibiotics slow the rate of advance of Alzheimer's Disease.
Alzheimer's patients who took the drugs - doxycycline and rifampin - in combination for three months showed significantly slower cognitive decline at six months out from the start of the trial than patients who received a placebo, they will report at the annual meeting of the Infectious Disease Society of America in San Diego on Saturday.
Armed with the knowledge that plaques formed in the brain have been associated with jumbled memories, and the fact that researchers have also found the chlamydia pneumoniae bacterium in the brains of people with Alzheimer's disease, Molloy came up with a novel hypothesis.
The antibiotics did not, however, lower the level of chlamydia by all that much (which itself is bad news). So did they knock out a different bacterium, block some action that chlamydia takes, or work in some manner unrelated to bacteria? Also, would other antibiotics have a more beneficial effect? Expect a lot of research groups to jump to repeat this result. Heck, expect doctors to be deluged by requests for antibiotics prescriptions for Alzheimer's patients. Is the best antibiotic against chlamydia still under patent? What drug company makes it and how much do they make in profit from that antibiotic?
The research lends credence to the notion that common bacterial infections might play a role in determining who is stricken with the debilitating neurological disorder. It also offers hope of a cheap, simple treatment for Alzheimer's, a condition for which there is virtually no effective treatment.
There is a hypothesis that bacteria play a role in the formation of plaques in arteries and this study was an attempt to discover whether the same might be going on with Alzheimer's plaques. This suggests another target for vaccine development: the bacteria that might be involved in Alzheimer's. Also, the development of better antibiotics would be another approach. Given that Alzheimer's plaque formation probably starts for people in their 40s or 50s it also brings up the question of whether we should all go get dosed with a heavy antibiotic regimen before our brains start accumulating damage. It may also be an argument for less sexual promiscuity if chlamydia is mostly getting spread by that route. (anyone know?)
Scientists at the British Antarctic Survey say the Sun is not going to contribute as much to global warming over the rest of the current century as it does now. Energy output from the Sun is expected to decline over the next 100 years.
New research on the sun's contribution to global warming is reported in this month's Astronomy & Geophysics. By looking at solar activity over the last 11,000 years, British Antarctic Survey (BAS) astrophysicist, Mark Clilverd, predicts that the sun's contribution to warming the Earth will reduce slightly over the next 100 years.
This is a different picture to the last century when solar flares, sunspots and geomagnetic storms, increased in number. This rise is simultaneous with emissions of greenhouse gases and an estimated increase in solar heat output, which together have warmed the Earth's temperature by a global average of 0.7 degrees centigrade.
The solar contribution to the increase is variously estimated to be around 4-20% leaving greenhouse gases to make up the remaining 80%. Clilverd and colleagues conclude that solar activity is about to peak and predict less activity in the next 100 years, with the occurrence of space storms likely to decline by two thirds. Their assumption is that the solar heat output will decline slightly accordingly.
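Taking the quoted figures at face value, the absolute size of the solar slice of that 0.7 degree warming is easy to bound with a couple of lines of Python:

```python
observed_warming_c = 0.7  # global average temperature rise, degrees C
solar_share_low, solar_share_high = 0.04, 0.20  # the quoted 4-20% range

solar_low_c = observed_warming_c * solar_share_low
solar_high_c = observed_warming_c * solar_share_high
print(f"solar contribution: {solar_low_c:.3f} to {solar_high_c:.2f} degrees C")
```

So even at the high end the Sun accounts for at most about 0.14 degrees of the observed warming, which puts a rough ceiling on how much a modest solar decline could offset.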
You might think this decline in the Sun's output is good news because it will help offset the heating effect caused by a rise in greenhouse gases that is supposed to take place for the rest of the 21st century and beyond. Well, sorry, the conventional wisdom is all wrong if the folks at the Association for the Study of Peak Oil (ASPO) are to be believed. While they have been making the argument for years that oil production is going to peak much sooner than others estimate, they finally attracted a lot more attention to that argument by pointing out that if they are right then there is not enough oil to burn to cause the more pessimistic global warming scenarios to play out in reality.
But geologists Anders Sivertsson, Kjell Aleklett and Colin Campbell of Uppsala University say there is not enough oil and gas left for even the most conservative of the 40 IPCC scenarios to come to pass (see graphic).
Well hey, if there isn't enough oil to burn to keep the CO2 concentration in the atmosphere continually rising in the 21st century and if the sun is going to cool off as part of a turn in its natural cycles maybe Earth's climate is going to start getting more chilly later on in the century.
The fossil fuel production pessimists say natural gas production will peak later than oil production.
Conventional natural gas reserves also are heading for peak production as endowment is probably about the same as oil. Less gas has been used so far, but the global peak in conventional gas production is already in sight, in perhaps 20 years, forecast Robert W. Bentley of ODAC, "and hence the global peak of all hydrocarbons (oil plus gas) is likely to be in about 10 or so years."
It is worth noting that Russia has as many Btus of natural gas as Saudi Arabia has of oil. Natural gas is harder to transport, particularly over longer distances. But technologies to make it easier to transport are under development. Still, natural gas will peak in production not long after oil does.
Roger Bentley, head of The Oil Depletion Analysis Center in London, insisted that the predictions made in the 1970s were basically correct. About 50 countries, including the United States, have already passed their point of peak oil output, he said.
The world's total reserves of crude, excluding oil found in shale and tar sands, are estimated to exceed 3 trillion barrels, according to the U.S. Geological Survey and other conventional sources of data.
Campbell insisted the true figure for reserves is closer to 2 trillion barrels, due partly to what he described as overstated reserves reported by Saudi Arabia and other OPEC nations.
Investment banker Matt Simmons says US lower 48 state onshore production peaked in 1970. (PDF format)
The U.S. peaked at a daily production of about 9.6 million barrels per day. A decade later, this base had fallen to 6.9 million barrels per day, despite a drilling boom that produced 4 times more oil wells each year.
When Canada’s oil production is added in and viewed as part of a North American picture, one can see that Canada also reached peak production in 1973. Today, Western Canada’s oil output is only half of what it was when it peaked.
Today, the U.S. base has dropped to about 3.4 million barrels per day, down from 1970’s record production of 9.6 million barrels per day. This excludes Alaskan and deepwater oil, as neither had anything to do with the U.S. lower 48 and Gulf of Mexico shelf.
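Those figures imply a steep compound decline. A quick back-of-the-envelope check in Python (assuming "today" means 2003, i.e. 33 years after the 1970 peak):

```python
def compound_annual_rate(start, end, years):
    """Average yearly rate of change implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# US lower 48 plus Gulf of Mexico shelf: 9.6 Mb/d in 1970, ~3.4 Mb/d now.
rate = compound_annual_rate(9.6, 3.4, 33)
print(f"{rate:.1%} per year")  # roughly -3.1% per year, despite heavy drilling
```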
The argument is that once the rate of discovery of new fields starts declining, production will begin to decline some number of years later. Since discovery is declining the world over, it is reasonable to expect a decline in production in the foreseeable future.
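The discovery-leads-production argument is conventionally modeled with a Hubbert (logistic) curve: production traces the same bell shape as discovery, shifted later by a lag of a few decades. Here is a minimal Python sketch; the parameters (2 trillion barrels ultimately recoverable, a discovery peak in 1965, a 40 year lag) are purely illustrative assumptions, not the study's numbers:

```python
import math

def hubbert(t, urr, k, t_peak):
    """Logistic (Hubbert) rate curve: peaks at t_peak with height urr * k / 4."""
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1 + e) ** 2

URR, K, LAG = 2.0e12, 0.06, 40  # barrels, steepness, years (all assumed)

discovery = {y: hubbert(y, URR, K, 1965) for y in range(1900, 2101)}
production = {y: hubbert(y, URR, K, 1965 + LAG) for y in range(1900, 2101)}

# Under these assumptions production peaks 40 years after discovery did.
print(max(discovery, key=discovery.get), max(production, key=production.get))
```

With these made-up parameters peak production works out to about 3e10 barrels a year (roughly 80 million barrels per day), in the right ballpark, which is part of why curves of this family are so widely used for such projections.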
For the details of this claim see the 5 part set of PDF files for The Study of World Oil Resources and the Impact on IPCC Emissions Scenarios, Anders Sivertsson, Uppsala Hydrocarbon Depletion Study Group, Uppsala University, Sweden. From the 4th part of the Results (PDF format), what is most dramatically apparent is the inevitability of increased world dependence on Middle Eastern oil production.
Furthermore, the production of non-regular gas is assumed to increase, mainly because of an increase of coal-bed methane production. The total peak of production of all hydrocarbons appears in 2010; thereafter production is heading downhill. The swing role of the Middle East Gulf countries was mentioned earlier, as were the scenarios determining world production/consumption of regular oil. Figure ## shows what will be required from Middle East Gulf production in order to achieve the five scenarios described in the oil depletion model. Regardless of which scenario is chosen the Middle East Gulf countries must increase their yearly production compared to 2002. By following the base case scenario, as is illustrated in figure ##, the production must increase by about 42% until 2010 to keep world production/consumption flat. Following the very high scenario, the production must be doubled by 2013. Even the low scenario requires an increase in production of 55% until 2018, and although the very low scenario allows the production to decrease for a few years it still has to reach the 2002 level again before 2020.
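The scenario numbers quoted above can be translated into the compound annual growth rates the Middle East Gulf producers would have to sustain (2002 base year, as in the study; the conversion is mine, not the authors'):

```python
def implied_annual_growth(total_increase, years):
    """Compound yearly growth implied by a total increase over a period."""
    return (1 + total_increase) ** (1 / years) - 1

# (scenario, total increase over the 2002 level, target year)
scenarios = [("base case", 0.42, 2010), ("very high", 1.00, 2013), ("low", 0.55, 2018)]
for name, rise, year in scenarios:
    print(f"{name}: {implied_annual_growth(rise, year - 2002):.1%} per year")
```

That works out to roughly 4.5%, 6.5% and 2.8% growth per year respectively — large sustained increases by historical standards.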
Because the global resources are unevenly distributed, the oil production of the future will also be unevenly distributed. The Middle East Gulf has got roughly two thirds of the resources of regular oil and their share of world production will exceed 50% by around 2020 (base case scenario, illustrated in figure ##). Thereafter, their share will steadily increase. This is shown in figure ##, which shows different regions’ share of world oil production. USA obviously had more than a 60% share of the world’s total production towards the end of 1940 when the Middle East Gulf started increasing their share of the market.
The United States will probably respond, at least in part, by shifting more toward burning coal. Certainly, it is possible to develop much cleaner ways to burn coal and research toward that end should be continued and even accelerated. But we really need a massive push to develop a large number of technologies to develop replacements for oil and natural gas. The United States government ought to be spending tens of billions per year on energy research projects across a large number of areas including nuclear, biomass, cleaner coal, photovoltaics, and batteries.
Still not worried? Too abstract a problem for you to spend much time thinking about? Well, imagine the panic that would set in if we were going to run out of beer.
Colin:"Understanding depletion is simple. Think of an Irish pub. The glass starts full and ends empty. There are only so many more drinks to closing time. It’s the same with oil. We have to find the bar before we can drink what’s in it."
The world doesn't have enough oil bars.
In a study of nutrition's effects on development, the scientists showed they could change the coat color of baby mice simply by feeding their mothers four common nutritional supplements before and during pregnancy and lactation. Moreover, these four supplements lowered the offspring's susceptibility to obesity, diabetes and cancer.
Results of the study are published in and featured on the cover of the Aug. 1, 2003, issue of Molecular and Cellular Biology.
"We have long known that maternal nutrition profoundly impacts disease susceptibility in their offspring, but we never understood the cause-and-effect link," said Randy Jirtle, Ph.D., professor of radiation oncology at Duke and senior investigator of the study. "For the first time ever, we have shown precisely how nutritional supplementation to the mother can permanently alter gene expression in her offspring without altering the genes themselves."
In the Duke experiments, pregnant mice that received dietary supplements with vitamin B12, folic acid, choline and betaine (from sugar beets) gave birth to babies predominantly with brown coats. In contrast, pregnant mice that did not receive the nutritional supplements gave birth predominantly to mice with yellow coats. The non-supplemented mothers were not deficient in these nutrients.
That choice of nutrients was not just a wild guess on the part of the researchers. Those nutrients are all known to serve as methyl donors in a number of metabolic pathways. Methylation happens to many other compounds besides DNA, and most of the pathways that use these nutrients to methylate serve other purposes, such as converting hormones into other kinds of hormones (and in some cases methyl groups are chopped off rather than added).
What is interesting to note about folic acid's role is that the US government and other governments have authorized the addition of folic acid to grain foods in order to lower the risk of spina bifida. At the same time, lots of people are trying to get more folic acid in their diets in order to reduce the risk of Alzheimer's, heart disease (on the theory that folic acid will catalyze pathways that will lower blood homocysteine and, by doing so, reduce the rate of atherosclerotic build-up), and other diseases whose risks may be lower if one gets more folic acid in one's diet. So if increased folic acid will cause changes in human development by increasing DNA methylation we are going to eventually see some unexpected changes in health between generations. Whether those changes will be for good or ill or, perhaps, a mixture of both remains to be seen. We really can only guess at this point. If this worries you keep in mind that industrialization has certainly caused large changes in the nutritional composition of our diets already. Folic acid fortification of food is just one of many changes in our diets and environments that are probably changing human development in all sorts of ways which have not yet been identified.
A study of the cellular differences between the groups of baby mice showed that the extra nutrients reduced the expression of a specific gene, called Agouti, to cause the coat color change. Yet the Agouti gene itself remained unchanged.
Just how the babies' coat colors changed without their Agouti gene being altered is the most exciting part of their research, said Jirtle. The mechanism that enabled this permanent color change – called "DNA methylation" -- could potentially affect dozens of other genes that make humans and animals susceptible to cancer, obesity, diabetes, and even autism, he said.
This ability to change the expression of genes without actually changing their primary sequence is very important. Some genetic engineering in the future aimed at changing offspring will likely be done by causing epigenetic changes rather than changes in the primary DNA sequence. What will be interesting to see is whether more precise methods of controlling methylation can be developed. The use of diet to increase the amount of methylating nutrients that a fetus is exposed to is an intervention that has rather broad effects. Some of the resulting methylations may have desired effects but other methylations near other genes might cause effects that are harmful.
"Our study demonstrates how early environmental factors can alter gene expression without mutating the gene itself," said Rob Waterland, Ph.D., a research fellow in the Jirtle laboratory and lead author of the study. "The implications for humans are huge because methylation is a common event in the human genome, and it is clearly a malleable effect that is subject to subtle changes in utero."
During DNA methylation, a quartet of atoms -- called a methyl group – attaches to a gene at a specific point and alters its function. Methylation leaves the gene itself unchanged. Instead, the methyl group conveys a message to silence the gene or reduce its expression inside a given cell. Such an effect is referred to as "epigenetic" because it occurs over and above the gene sequence without altering any of the letters of the four-unit genetic code.
In the treated mice, one or several of the four nutrients caused the Agouti gene to become methylated, thereby reducing its expression – and potentially that of other genes, as well. Moreover, the methylation occurred early during gestation, as evidenced by its widespread manifestation throughout cells in the liver, brain, kidney and tail.
"Our data suggest these changes occur early in embryonic development, before one would even be aware of the pregnancy," said Jirtle. "Any environmental condition that impacts these windows in early development can result in developmental changes that are life-long, some of them beneficial and others detrimental."
If such epigenetic alterations occur in the developing sperm or eggs, they could even be passed on to the next generation, potentially becoming a permanent change in the family line, added Jirtle. In fact, data gathered by Swedish researcher Gunnar Kaati and colleagues indicates just such a multi-generational effect. In that study of nutrition in the late 1800s, boys who reached adolescence (when sperm are reaching maturity) during years of bountiful crop yield produced a lineage of grandchildren with a significantly higher rate of diabetes. No cause-and-effect link was established, but Jirtle suspects epigenetic alterations could underlie this observation.
What is frustrating about a report like this is that it is a reminder that some environmental factors that we can easily manipulate (e.g. the amount of folic acid in the diet at each stage in development) are probably creating a large variety of effects which we simply don't know enough about to know how we should want to manipulate the relevant controllable factors. Some of us might have gotten lucky because our mothers happened to eat more folic acid rich lentils and greens at one stage of development when that would have helped us in some way, and perhaps ate fewer of them at some other stage when the effect of more methylation activity might have been more harmful. But others happened to get dosed with methylating nutrients at other stages of development and therefore got stuck with a worse result in terms of predisposition for obesity, diabetes, cancer, hair texture, or who knows what else.
"We used a model system to test the hypothesis that early nutrition can affect phenotype through methylation changes," said Jirtle. "Our data confirmed the hypothesis and demonstrated that seemingly innocuous nutrients could have unintended effects, either negative or positive, on our genetic expression."
For example, methylation that occurs near or within a tumor suppressor gene can silence its anti-cancer activity, said Jirtle. Similarly, methylation may have silenced genes other than Agouti in the present study – genes that weren't analyzed for potential methylation. And, the scientists do not know which of the four nutrients alone or in combination caused methylation of the Agouti gene.
Herein lies the uncertainty of nutrition's epigenetic effects on cells, said Jirtle. Folic acid is a staple of prenatal vitamins, used to prevent neural tube defects like spina bifida. Yet excess folic acid could methylate a gene and silence its expression in a detrimental manner, as well. The data simply don't exist to show each nutrient's cellular effects.
Moreover, methylating a single gene can have multiple effects. For example, the Agouti gene regulates more than just coat color. Mice that over-express the Agouti protein tend to be obese and susceptible to diabetes because the protein also binds with a receptor in the hypothalamus and interferes with the signal to stop eating. Methylating the Agouti gene in mice, therefore, also reduces their susceptibility to obesity, diabetes and cancer.
Not only can increased methylation of a single gene have multiple effects, but increased methylation caused by nutritional supplementation would likely increase methylation of other genes and thereby have yet more effects. But if that is not complicated enough already, note that in this experiment the nutrients caused mice that would otherwise be obese to be skinny instead. Well, not all mice are genetically predisposed toward obesity. Supplementation with these same nutrients may cause different effects in other strains of mice. For instance, a gene that is not expressed much in this mouse strain might be expressed more in another mouse strain, and folic acid might down-regulate it in the other strain while not causing that down-regulation in this one.
The Duke researchers say much more work of this sort will be forthcoming from many labs.
"One of the things we need to do now is refine this model and see how generalizable it is," Waterland said. He noted that he and Jirtle still don't know which one or combination of the nutritional supplements created the changes in the experimental mice. They also want to know if other genes might be involved, and if the supplements caused any negative effects.
"You're going to see more of this kind of work," Jirtle said, "not just from our group, but from all over."
Variations in nutrient levels are just one cause of changes in methylation patterns that can produce enduring effects. Environmental stimuli can cause changes in methylation patterns that cause changes in personality and behavior.
Methylation that occurs after birth may also shape such behavioral traits as fearfulness and confidence, said Dr. Michael Meaney, a professor of medicine and the director of the program for the study of behavior, genes and environment at McGill University in Montreal.
For reasons that are not well understood, methylation patterns are absent from very specific regions of the rat genome before birth. Twelve hours after rats are born, a new methylation pattern is formed. The mother rat then starts licking her pups. The first week is a critical period, Dr. Meaney said. Pups that are licked show decreased methylation patterns in an area of the brain that helps them handle stress. Faced with challenges later in life, they tend to be more confident and less fearful.
While back in the 1950s Dr. Spock confidently held forth on the best child-raising practices, he really didn't know what he was talking about. A half century later there are still lots of ways in which child-rearing practices might be having unrecognized subtle effects upon human offspring. For instance, there might be some way of handling babies in the first few months of life which, analogous to the licking behavior of rat mothers, will increase or decrease the risk of anxiety or depression because the practice changes methylation patterns in the brain. As yet we just don't know what those ways are.
Fortunately, just as there was a large effort to sequence the human genome and there is now a large effort to map all the genetic variations, there is also a project to map all the places where methylation happens on the human genome and what the effects of methylation are at each of those sites.
The Human Epigenome Project follows the completion of the Human Genome Project and aims to map the way methyl groups are added to DNA across the entire human genome. These "epigenetic" changes are believed to turn genes on and off. Scientists at the Wellcome Trust Sanger Institute in Cambridge, UK, and Epigenomics, a Berlin-based company, are leading the project.
Update: If, as I expect, many nutritional and other factors are discovered that influence human development in enduring ways, the result will be a wide embrace of much more management of maternal diet and environment and of the environment of newborns. The speculative advice of the pop psych baby books of the past and present will be replaced with much more precise and accurate advice derived from a scientifically rigorous model of human development. Genetic profiles of mothers and embryos will be used to develop highly customized instructions for diet and environment in order to increase the odds that some parentally chosen ideal of epigenetic programming for each baby will be achieved. Therefore the development of much greater knowledge of human development may well result in a more regimented and demanding approach to having and caring for babies. While this may make having a baby an even greater ordeal than it already is, it should be expected that pharmaceuticals will eventually be developed that can ensure a particular epigenetic outcome with less work on the part of mothers and fathers to manage the pre- and post-natal environment.
CAMBRIDGE, Mass.—Assembling a machine sounds straightforward, but what if the components of that machine are nanoscopic? Similarly, bringing together the ends of two cables is simple unless those cables have a core diameter many times smaller than a human hair, as is the case with fiber optics.
Although there are devices on the market with similar credentials, they are expensive and have inherent limitations. Using a fundamentally new design, an MIT team has invented the HexFlex Nanomanipulator that's not only inexpensive but performs better in many ways than its competitors.
The HexFlex, developed by MIT inventors led by Assistant Professor of Mechanical Engineering Martin Culpepper, has won a 2003 R&D 100 Award. The awards honor the 100 most technologically significant new products and processes, as determined by the editors of R&D magazine and more than 50 experts.
"A traditional nanomanipulator is the size of a bread box and costs more than a new SUV," Culpepper said. The MIT device is three to four inches tall by six inches in diameter "and could be manufactured for about $3,000."
The principal component of the HexFlex is a flat, six-pronged "star" of aluminum. Three of those prongs are mounted on an aluminum base. The other three are actually tabs that can be moved in six different directions thanks to a system of magnet-coil actuators. The device's name reflects that six-axis capability (hex) and compliant (flex) structure.
This device is noteworthy because it is small and cheap. What the future will bring is nanomanipulators that are more capable as well. This one sounds more like a research tool. But picture one capable of building very complex and useful devices. There is considerable danger in that capability. Want to prevent a country or a terrorist group from getting, say, centrifuges that can enrich uranium? If they can get a device that will build such centrifuges, then it does no good to control the sale of uranium enrichment centrifuges made by known, reputable makers.
The great thing about nanotechnology is that it is going to be small and cheap. The terrible thing about nanotechnology is that it is going to be small and cheap. If nanotechnology were going to require large economies of scale to build a nanoassembler device, it would be easier to try to prevent really dangerous products from making it into the hands of terrorists. But at least some nanotech manufacturing devices are going to be very small and cheap. Thirty or forty years from now, how do we prevent a terrorist from getting hold of a nanotech assembler that can build any desired kind of virus or bacterium? If there is an answer to that question, it is not obvious.
Jordan Peterson of the University of Toronto and colleagues at Harvard University have found that decreased latent inhibition of environmental stimuli appears to correlate with greater creativity among people with high IQ. (same press release available here and here)
The study in the September issue of the Journal of Personality and Social Psychology says the brains of creative people appear to be more open to incoming stimuli from the surrounding environment. Other people's brains might shut out this same information through a process called "latent inhibition" - defined as an animal's unconscious capacity to ignore stimuli that experience has shown are irrelevant to its needs. Through psychological testing, the researchers showed that creative individuals are much more likely to have low levels of latent inhibition.
"This means that creative individuals remain in contact with the extra information constantly streaming in from the environment," says co-author and U of T psychology professor Jordan Peterson. "The normal person classifies an object, and then forgets about it, even though that object is much more complex and interesting than he or she thinks. The creative person, by contrast, is always open to new possibilities."
Previously, scientists have associated failure to screen out stimuli with psychosis. However, Peterson and his co-researchers - lead author and psychology lecturer Shelley Carson of Harvard University's Faculty of Arts and Sciences and Harvard PhD candidate Daniel Higgins - hypothesized that it might also contribute to original thinking, especially when combined with high IQ. They administered tests of latent inhibition to Harvard undergraduates. Those classified as eminent creative achievers - participants under age 21 who reported unusually high scores in a single area of creative achievement - were seven times more likely to have low latent inhibition scores.
The authors hypothesize that latent inhibition may be positive when combined with high intelligence and good working memory - the capacity to think about many things at once - but negative otherwise. Peterson states: "If you are open to new information, new ideas, you better be able to intelligently and carefully edit and choose. If you have 50 ideas, only two or three are likely to be good. You have to be able to discriminate or you'll get swamped."
"Scientists have wondered for a long time why madness and creativity seem linked," says Carson. "It appears likely that low levels of latent inhibition and exceptional flexibility in thought might predispose to mental illness under some conditions and to creative accomplishment under others."
A less able mind has a greater need to be able to filter out and ignore stimuli. A less intelligent person with a low level of latent inhibition for filtering out familiar stimuli may well sink into mental illness as a result. But a smarter mind can handle the effects of taking note of a larger number of stimuli and even find interesting and useful patterns by continually processing a larger quantity of familiar information.
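The hypothesis can be caricatured in a few lines of code. This is purely an illustration of the logic (low latent inhibition admits more stimuli, and whether that helps or overwhelms depends on processing capacity); the function names, numbers, and thresholds are all invented, not taken from the study:

```python
# Toy sketch of the low-LI/high-capacity hypothesis. Purely illustrative:
# low latent inhibition lets more stimuli through the filter, and the
# outcome depends on how many the mind can process at once.

def admitted_stimuli(stimuli, latent_inhibition):
    """High latent inhibition filters out most familiar stimuli."""
    keep = int(len(stimuli) * (1.0 - latent_inhibition))
    return stimuli[:keep]

def outcome(stimuli, latent_inhibition, working_memory_capacity):
    admitted = admitted_stimuli(stimuli, latent_inhibition)
    if len(admitted) <= working_memory_capacity:
        return "useful patterns"   # enough capacity to edit and choose
    return "overwhelmed"           # "drowning in possibility"

stimuli = list(range(20))
# Same low latent inhibition, very different outcomes depending on capacity:
creative = outcome(stimuli, latent_inhibition=0.1, working_memory_capacity=20)
swamped = outcome(stimuli, latent_inhibition=0.1, working_memory_capacity=5)
```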
You can find the original paper here: Decreased Latent Inhibition Is Associated With Increased Creative Achievement in High-Functioning Individuals (PDF format)
The central idea underlying our research program is therefore that individuals characterized by increased plasticity (extraversion and openness) retain higher post-exposure access to the range of complex possibilities laying dormant in so-called "familiar" environments. This heightened access is the subjective concomitant of decreased latent inhibition, which allows the plastic person increased incentive-reward-tagged appreciation for hidden or latent information (Peterson, 1999). Such decreases in LI may have pathological consequences, as in the case of schizophrenia or its associated conditions (perhaps in individuals whose higher-order cognitive processes are also impaired, and who thus become involuntarily "flooded" by an excess of affectively tagged information), or may constitute a precondition for creative thinking (in individuals who have the cognitive resources to "edit" or otherwise constrain (Stokes, 2001) their broader range of meaningful experience).
Note from the text of the full paper that stress causes the release of the hormone corticosterone which lowers latent inhibition. In a nutshell, when an organism runs into problems that cause stress the resulting release of stress hormones causes the mind to shift into a state where it will examine factors in the environment that it normally ignores. This allows the organism to look for solutions to the stress-causing problem that would be ignored in normal and less stressed circumstances.
So perhaps we could hypothesize something like this: under stressful conditions, or in personality configurations characterized by increased novelty-sensitivity, approach behavior, and DA activity, decreased LI is associated with increased permeability and flexibility of functional cognitive and perceptual categories [see Barsalou (1983) for a discussion of such categories]. Imagine a situation where current plans are not producing desired outcomes, a situation where current categories of perception and cognition are in error, from the pragmatic perspective. Something anomalous or novel emerges as a consequence (Peterson, 1999), and drives exploratory behavior. Stress or trait-dependent decreased LI, under such circumstances, could produce increased signal (as well as noise) with regard to the erroneous pattern of behavior and the anomaly that it produced. This might offer the organism, currently enmeshed in the consequences of mistaken presuppositions, the possibility of gathering new information where nothing but categorical certainty once existed. Decreased LI might therefore be regarded as advantageous, in that it allows for the perception of more unlikely, radical and numerous options for reconsideration, but disadvantageous in that the stressed or approach-oriented person risks "drowning in possibility," to use Kierkegaard's phrase.
One can easily see how this response could have been selected for evolutionarily. At the same time, one can also see how chronic stress could lead a person to fall into a state of confusion as a sustained large flood of stimuli could overwhelm the brain by giving it too much to think about and make a person unable to clearly see solutions that will relieve the feeling of stress.
Steven E. Landsburg reports in Slate on a new study that finds male babies keep marriages together better than female babies.
In the United States, the parents of a girl are nearly 5 percent more likely to divorce than the parents of a boy. The more daughters, the bigger the effect: The parents of three girls are almost 10 percent more likely to divorce than the parents of three boys. In Mexico and Colombia the gap is wider; in Kenya it's wider still. In Vietnam, it's huge: Parents of a girl are 25 percent more likely to divorce than parents of a boy.
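Note that these figures are relative increases, not absolute probabilities. A quick sketch of the arithmetic, using a purely hypothetical baseline divorce rate, shows what they imply in absolute terms:

```python
# Illustrative arithmetic only: what "5 percent more likely to divorce"
# means in absolute terms. The baseline rate below is an assumption for
# illustration, not a figure from the study.

def divorce_rate_with_girls(baseline_rate, relative_increase):
    """Apply a relative increase (e.g. 0.05 for 5%) to a baseline rate."""
    return baseline_rate * (1.0 + relative_increase)

baseline = 0.40  # hypothetical divorce probability for parents of boys

us_one_girl = divorce_rate_with_girls(baseline, 0.05)       # 0.42
us_three_girls = divorce_rate_with_girls(baseline, 0.10)    # 0.44
vietnam_one_girl = divorce_rate_with_girls(baseline, 0.25)  # 0.50
```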
The Slate article reports on a paper by Gordon Dahl of the University of Rochester and the National Bureau of Economic Research (NBER) and Enrico Moretti of UCLA and NBER entitled The Demand for Sons: Evidence from Divorce, Fertility, and Shotgun Marriage. (PDF format; note that it is a draft copy of the paper)
Specifically, we show that having girls has significant effects on divorce, marriage, shotgun marriage (i.e., when the sex of the child is known before birth), remarriage, fertility stopping rules, and child support payments.
Using a simple model, we show that, taken individually, each piece of evidence does not necessarily imply the existence of parental gender bias. But taken together, our empirical evidence indicates that US parents strongly favor boys over girls. The bias is quantitatively important, but seems to be slowly decreasing over time. When we compare the US with five developing countries, we find that gender bias in the US is generally smaller, and is only a fraction of the bias in China.
We begin by documenting the effect of offspring gender composition on the probability of divorce. We find that mothers with girls are significantly more likely to be divorced than mothers with boys. The effect is quantitatively substantial, explaining a 4% to 8% rise in divorce rates in the U.S.1 By itself, this effect is not necessarily evidence of parental bias. For example, it is well documented in the child psychology and sociology literature that the presence of the father in the household when kids are growing up is more important for boys than girls.2 It is possible that parents have unbiased gender preferences, but they decide to avoid or delay divorce if they have boys because they realize the harmful effects of raising a son without a father present in the household. We call this the “role model” hypothesis. Alternatively, it is also possible that the monetary, psychological, or time costs of raising girls are different than the costs of raising boys. A higher cost of raising girls could also explain the documented effect of children’s gender on divorce.
We turn to the effect of child gender composition on marriage. We find that, controlling for family size, women with only girls are substantially more likely to have never been married than women with only boys. The chance a woman will be married decreases by two to seven percent for an all-girl family relative to an all-boy family, depending on family size. For divorcees, a similar pattern emerges. Not only are divorced mothers with all-girl offspring less likely to remarry, when they do remarry they are more likely to get a second divorce.
Perhaps the most striking evidence comes from the analysis of shotgun marriages using Vital Statistics data. First we show that, at delivery, gender of the first child is not correlated with marital status for first time mothers. This is reassuring, because for most parents in the sample, gender of the first child is unknown until birth. We then test whether gender of the child affects marital status at delivery when gender is known in advance (with high probability) because the mother has taken an ultrasound test during pregnancy. Among women who have had an ultrasound test, we find that mothers who have a girl are less likely to be married at delivery than mothers who have a boy. We interpret this finding as evidence that fathers who find out their child will be a boy are more likely to marry their partner before delivery.3
The first footnote reports that gender mix in offspring also reduces probability of divorce.
1 While not the focus of this paper, we also find that gender mix reduces the probability of divorce. Other researchers have also documented a demand for variety (Ben-Porath and Welch, 1976, 1980; Rosenzweig and Wolpin, 1980). Angrist and Evans use the demand for a mixed sibling-sex composition to study the effect childbearing on female labor supply (1998).
It might be that having a gender mix in offspring provides each spouse with a child they can better identify with and relate to, with the result that both spouses feel more satisfied with the marriage:
2 There is much evidence that fathers play a bigger role in the development of their sons than their daughters. Fathers spend more time with their sons (Lamb 1976; Morgan, Lye and Condran, 1988). Longitudinal data on child development show that the absence of a father has a more severe and enduring impact on boys than girls. For example, boys are found to suffer more from divorces than girls (Hetherington, Cox and Cox, 1978). In most cases, children are assigned to the mother, irrespective of sex.
Perhaps men subconsciously realize that there is a greater need for them to stick around for their sons than for their daughters? Or perhaps the behavior of being more desirous to stick around for raising sons was evolutionarily selected for because sons need their fathers more than daughters do?
Also, having daughters leads to polygamy:
Our final piece of international evidence utilizes the fact that 12 percent of marriages in Kenya are polygamous. Among all married women, we find that those with girls are more likely to be in a polygamous relationship compared to women with boys. We interpret this as evidence that the desire for boys leads some husbands to marry another woman if the first wife delivers a girl.
All these results suggest that as the technology for controlling offspring sex becomes more widespread the effect will be to increase the ratio of male to female offspring on a worldwide scale. Think about that. Will males become more common than females all over the world in 20 or 30 years? While some radical feminists are arguing that men are obsolete and headed for the dustbin of history people are using technology to have more male than female offspring.
Those who think this will necessarily boost the status of women in a society might be surprised by the effect that the use of amniocentesis and ultrasound to guide sex selective abortions is having in India: Girl Shortage Causes Wife Buying In India. The female shortage in India and China may become a worldwide phenomenon as other methods of baby gender selection such as the service offered by Microsort become more widely available.
Over a typical 20-year life span of a solar cell, a single produced watt should cost as little as $0.20, compared with the current $4.
The article does not provide a prediction of when this price reduction will take place. Also, the article quotes a cost of $0.40 per watt for conventional existing electric power, so the projected future price of photovoltaics would be half the price of existing sources of electricity. However, it is not clear how those costs were calculated. Do they include the costs of, for instance, storing solar power in batteries to have electricity to use when the sun goes down? My guess is that those costs are not included.
Photovoltaics without sufficient energy storage capacity will not reduce the peak electric generation capacity needed by electric utilities but will reduce average demand. So the result will be that a portion of the expected savings will be offset by higher prices charged by electric utilities that can't amortize their physical plant over as many generated kilowatt hours of electricity. What is needed is not just cheaper photovoltaics but also cheaper batteries or other means of electric power storage. Toward that end, advances in nanotechnology may eventually yield materials that combine batteries and solar cells in a single integrated sheet.
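A back-of-envelope calculation makes the point. Only the $0.20 per watt figure comes from the article; the storage cost per watt and the fraction of output that must pass through storage are invented for illustration:

```python
# Back-of-envelope sketch of why cost per generated watt alone can mislead.
# The $0.20/W photovoltaic figure is from the article; the storage cost
# and the stored fraction below are hypothetical.

def effective_cost_per_watt(pv_cost, storage_cost, stored_fraction):
    """Capital cost per watt when stored_fraction of output goes through storage."""
    return pv_cost + storage_cost * stored_fraction

pv_only = effective_cost_per_watt(0.20, 0.0, 0.0)        # headline figure: 0.20
with_storage = effective_cost_per_watt(0.20, 1.00, 0.5)  # 0.20 + 0.50 = 0.70

# With these (invented) storage numbers the effective cost would exceed the
# $0.40/W quoted for conventional generation, despite the cheap panels.
```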
A September 30, 2003 press release from STMicroelectronics about their photovoltaic solar cell research efforts does not provide cost projections and sounds more qualified about what they expect to achieve. The press release outlines two approaches they are pursuing to try to drive down the cost of photovoltaics.
Semiconductor-based solar cells have the highest efficiency (defined as the electrical energy produced for a given input of solar energy) but there is little that can be done to either increase the efficiency or reduce the manufacturing cost. ST is therefore pursuing alternative approaches in which the aim is to produce solar cells that may have lower efficiencies (e.g. 10% instead of 15-20%) but are much cheaper to manufacture.
"Although there is much support around the world for the principle of generating electricity from solar power, existing solar cell technologies are too expensive to be used on an industrial scale. The ability to produce low cost, high efficiency solar cells would dramatically change the picture and revolutionize the field of solar energy generation, allowing it to compete more effectively with fossil fuel sources," says Dr. Salvo Coffa, who heads the ST research group that is developing the new solar cell technology.
The ST team is following two approaches. One of these, invented in 1990 by Professor Michael Graetzel of the Swiss Federal Institute of Technology, uses a similar principle to photosynthesis. In a conventional solar cell, a single material such as silicon performs all three of the essential functions, which are absorbing sunlight (converting photons into electrons and holes), withstanding the electric field needed to separate electrons and holes, and conducting the free carriers (electrons and holes) to the collecting contacts of the cell. To perform these three tasks simultaneously with high efficiency, the semiconductor material must be of very high purity, which is the main reason why silicon-based solar cells are too costly to compete with conventional means of producing electric power.
In contrast, the Graetzel cell, known as the Dye-Sensitized Solar Cell (DSSC), mimics the mechanism that plants use to convert sunlight into energy, where each function is performed by different substances. The DSSC cell uses an organic dye (photosensitizer) to absorb the light and create electron-hole pairs, a nanoporous (high surface area) metal oxide layer to transport the electrons, and a hole-transporting material, which is typically a liquid electrolyte.
"One of the most exciting avenues we are exploring is the replacement of the liquid electrolytes that are mostly used today for the hole-transport function by conductive polymers. This could lead to further reductions in cost per Watt, which is the key to making solar energy commercially viable," says Coffa.
The ST team is also developing low cost solar cells using a full organic approach, in which a mixture of electron-acceptor and electron-donor organic materials is sandwiched between two electrodes. The nanostructure of this blend is crucial for the cell performance because the electron-donor and electron-acceptor materials have to be in an intimate contact at distances below 10 nm. ST plans to use Fullerene (C60) as the electron-acceptor material and an organic copper compound as the electron-donor.
STMicroelectronics isn't the only company pursuing Graetzel cell development. Venture capital funded Konarka Technologies, based in Lowell, Massachusetts, is also pursuing development of the TiO2 Graetzel cell with non-liquid electrolyte.
According to Paul Wormser, president and COO, the cells will be manufactured at room atmospheric pressure and below 150°C on flexible plastic substrates, using a non-liquid electrolyte.
Another venture capital funded company, Nanosolar, is pursuing a nanotechnological approach to produce flexible plastic photovoltaics.
NanoSolar asserts that it has solved some of the thorniest problems inherent in working with organic materials. The company, applying technology licensed from Sandia National Labs, says it has brought an architectural approach to the process, using self-assembling nano-structures that should substantially improve the energy efficiency of its solar cells.
With venture capital funded start-ups and established firms pursuing the development of photovoltaics that have the potential to be much cheaper to produce than silicon semiconductors a large reduction in the cost of photovoltaics seems a likely outcome.
A Henry Ford Hospital study shows use of antibiotics in babies dramatically increases the risk of allergies and asthma by age 7.
For the Henry Ford study, researchers followed 448 children from birth to seven years. The children were evenly divided by gender.
Data was collected prenatally and at the first four birthdays until the children were 6 and 7 years old, when they underwent a clinical evaluation by a board-certified allergist. The data included information about all prescribed oral antibiotics; blood tests that measure the antibody (immunoglobulin E) that causes allergies; and skin reaction tests that show whether a person is hypersensitive to an allergen. Researchers also collected data on all clinical visits and made home visits to collect environmental samples.
Of the 448 children, 49 percent had received antibiotics in the first six months of life. The most common antibiotic category prescribed was penicillin.
Among the findings:
- By age 7, children given at least one antibiotic in the first six months were 1.5 times more likely to develop allergies than those who did not receive antibiotics. They were 2.5 times more likely to develop asthma.
- By age 7, children given at least one antibiotic in the first six months and who lived with fewer than two pets were 1.7 times more likely to develop allergies, and three times more likely to develop asthma.
- By age 7, children given at least one antibiotic in the first six months and whose mother had a history of allergies were nearly twice as likely to develop allergies.
- By age 7, children given at least one antibiotic in the first six months and who were breast-fed for more than four months were three times more likely to develop allergies. However, breast-feeding did not influence the risk between antibiotics and asthma.
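Figures like "1.5 times more likely" are relative risks, i.e. ratios of incidence rates between the exposed and unexposed groups. Here is the arithmetic with hypothetical counts (not the study's actual data):

```python
# A relative risk such as "1.5 times more likely" is the ratio of two
# incidence rates. The counts below are invented for illustration; they
# are not the Henry Ford study's data.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio between an exposed group and an unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# e.g. 66 of 220 antibiotic-exposed children developing allergies versus
# 45 of 225 unexposed children:
rr = relative_risk(66, 220, 45, 225)  # 0.30 / 0.20 = 1.5
```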
Note that between the second and third bullet items above the only difference was the reduction in pet exposure and the result was a higher incidence of allergies and asthma.
The oddest result that I see is the fourth item, where breast-feeding increased the risk of allergies but not of asthma. One potential explanation: breast milk might cause the immune system of babies to develop more rapidly and hence to be more likely to react to allergens. Given that breast milk probably has other benefits, that makes its use a tough call.
Overall, children given antibiotics in their first half-year were 2.6 times more likely to develop allergic asthma, the team told a meeting of the European Respiratory Society on Tuesday. With broad-spectrum antibiotics, which kill a wide range of bacteria, the risk was far higher: children were 8.9 times more likely to suffer from asthma.
Selection of an antibiotic more narrowly tailored to the target bacteria would therefore reduce the risk of allergy and asthma among babies.
The new study also backs the growing belief that antibiotics disrupt the normal development of a child's immune system through a phenomenon known as the "hygiene hypothesis."
This "hygiene hypothesis" has been gathering strength in recent years, and the latest result strengthens the argument considerably. The idea is basically reminiscent of the saying "idle hands are the devil's workshop": remove the normal antigens that the immune system is exposed to and it starts reacting to things it ought not react to. Our ancestors lived in dirt-floor dwellings and had much more exposure to animals, dirt, and nature in general. We live lives that bring us much less exposure to the antigens we evolved to deal with. Exposure to those antigens appears to be necessary to instruct the immune system on what it should identify as a threat.
From a signal processing perspective you can think of the immune system as a sensor system that will incorrectly react to weaker signals in the absence of stronger signals to react to. The immune system never experienced enough evolutionary pressure to not react to weaker signals because there were so many stronger signals around for it to pay attention to.
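That analogy can be made concrete with a toy detector whose reaction threshold adapts to the strongest signals it has seen. This is only an illustration of the signal-processing idea, not an immunological model; all signal values are invented:

```python
# Toy illustration of the signal-processing analogy (not immunology):
# a detector whose threshold tracks the strongest signal in its environment
# starts firing on weak "allergen" signals once the strong "pathogen"
# signals disappear.

def adaptive_threshold(signals, fraction=0.5):
    """Set the reaction threshold as a fraction of the strongest signal seen."""
    return fraction * max(signals)

def reacts_to(signal, environment):
    return signal >= adaptive_threshold(environment)

dirty_environment = [10.0, 8.0, 1.0]  # strong pathogen signals plus a weak one
clean_environment = [1.0, 0.8, 0.6]   # only weak allergen-like signals

allergen = 1.0
# In the dirty environment the threshold is 5.0, so the allergen is ignored;
# in the clean environment it drops to 0.5 and the allergen triggers a reaction.
```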
Surely antibiotics are overused and this latest report is yet another reason to reduce the use of antibiotics in situations where they are not necessary. But some children will inevitably get bacterial infections in their early months of life that require antibiotic treatment. So what to do about this problem? One potential solution is to develop vaccines that will retrain the immune system to not react to allergens. It may some day be standard practice to give babies anti-allergy vaccines.
In the shorter term, having dogs lick babies' faces may be a good thing. Also, it might be possible to come up with formulations of beneficial bacteria to give to babies to replace the normal harmless bacteria that are wiped out by antibiotic therapy.
The argument that cleaner environments cause problems in the immune system is especially interesting because it demonstrates how moving humans out of the environments they evolved in creates problems that can be quite subtle and that can go unrecognized for many years. This is not the only such problem of this sort. To take another but rather more obvious example, the higher incidence of obesity in affluent nations is a problem that has arisen because of the lifestyles that are the result of living in industrial societies. Obesity is even more interesting than the allergy/asthma problem because obesity is a behavioral problem. Obese people are driven by strong urges to eat more than what is good for them in modern circumstances.
There are other behavioral problems that arise from growing up in industrial society as well. To take another example, humans are obviously not evolved to handle recreational drugs. Recreational drugs interfere with mechanisms of pleasure that evolved to guide learning and other activities. Drugs hijack pleasure systems. Many minds become too easily trained to crave the pleasures that drugs give them, and so the pursuit of pleasure becomes far more harmful than it was in our evolutionary past. Humans are going to have to develop methods of adapting themselves to the changes they are creating in their environment.
Update: In another example of the importance of the sensitivity of the immune system to the timing of exposures to various antigens researchers in the United States and Germany have just discovered that the date of first introduction of babies to cereals affects their odds of developing diabetes.
Babies with a family history of diabetes who were introduced to cereals before or after the recommended age of four to six months had a higher risk of developing a precursor to the disease, researchers said Tuesday.
US team lead researcher Dr. Jill Norris says early exposure to cereal proteins may cause an immature immune system to react inappropriately.
She hypothesizes that the youngest babies have extremely vulnerable immune systems. Older babies, denied solids for so long, simply ate more and overwhelmed still developing systems with foreign cereal proteins. The hypothesis has yet to be proved and Norris stressed the scenarios refer to at-risk babies.
Another theory is that children who don't start eating cereals by 6 months of age are more likely to be deficient in vitamins and that this causes their immune systems to malfunction and start making auto-antibodies (i.e., antibodies to one's own proteins).