Down-regulating a single gene in aged mice boosted their mental functioning to be more like younger mice.
All of us experience a progressive decline in learning and memory capacity with ageing. In the course of their investigations of the neurophysiological basis of this decline, Thomas Blank, Ingrid Nijholt, Min-Jeong Kye, Jelena Radulovic, and Joachim Spiess from the Max Planck Institute for Experimental Medicine in Göttingen have obtained new insight into the mechanisms of age-related learning deficits in the mouse model. In experiments with mice, the Max Planck researchers were able to revert the observed age-related learning and memory deficits by down-regulation of calcium-activated potassium channels (SK3) located in the hippocampus, a brain region recognized to be important for learning and memory. The researchers published their results as a Brief Communication in the journal Nature Neuroscience.
This brings up the obvious question: if the human equivalent of the SK3 gene could be down-regulated, would old minds regain some of their lost youthful ability? It may not be that easy, because the amount of calcium-activated potassium channels in aged human brains might be higher in order to compensate for some other change caused by aging.
A different research team has just published a paper showing that just as humans have a measure of general intelligence called 'g' mice have their own measurable 'g' for general intelligence.
Mice have a version of 'g', according to a team led by Louis Matzel of Rutgers University in Piscataway, New Jersey. Animals that come top in one learning test often score better on others, they found: a maze champion might be a sniffing sensation too. "Once in a while you come across one that's absolutely stunning," says Matzel.
Both of these results are important because it is a lot easier to do work on mice than on humans. The latter result is particularly interesting because genes that have variations that affect mouse intelligence may turn out to have equivalents in humans that also have variations that affect intelligence in humans.
International technical standards and civil aviation organisations have confirmed that they are working on deploying passports containing details that enable the "machine-assisted identification" of the passenger, which will be required by travellers visiting the US from October 2004.
Current plans call for the new passport books to include a contactless smart chip based on the ISO 14443 standard, with a minimum of 32 Kbytes of EEPROM storage. The chip will contain a compressed full-face image for use as a biometric. European biometric passports, by contrast, are planned to feature both retinal and fingerprint recognition biometrics on their smart cards.
The technology will not just be used in passports but also in drivers’ licenses. Malaysia is already using biometric smart cards for government services. Unisys is even working on a registered traveler system that would give you a smart card with fingerprint information to use at airports to skip the check-in lines.
Even without a formal approval of a national ID card system it seems inevitable that most people will end up having their biometric data recorded by one or more governments. This brings up an interesting twist: anyone who wants to pass through an airport or other facility that has iris scanners and fingerprint checkers will end up having their biometric data recorded even if they never get a driver's license or other card that requires biometric data recording as part of the application process. Some people travelling around under multiple identities will likely be detected eventually by comparing the biometric data against the different names and nationalities used by the same person at different times.
If biometric datalogs are archived then British airports will become big iris pattern data collection systems.
Iris-recognition machines, which can identify people by reading the distinctive pattern surrounding the pupil of the eye, are to be installed at 10 British airports within a year.
Biometric passports might seem an improvement since they will be harder to counterfeit. But stop and think about it: a biometric passport is like a one-person database of biometric data. Why have every person carry a database of their own biometric data? After all, if a counterfeit passport can be made, then a comparison of a person to the personal biometric database embedded in their passport will yield a match even though the person may be using a false identity. Many biometric identification systems do not rely on a person carrying a card; there is a central database so that each person can be scanned and compared against it. Of course, a corrupt worker could make an inappropriate entry into that database.
One problem that biometric identification does not solve is that unscrupulous staff can issue biometric ids to people who do not qualify for them.
In Ireland, the introduction of national ID cards and biometric passports has provoked controversy, amid fears over data protection and privacy. On this front, the trustworthiness of staff with access to biometric systems and data is considered to be important. A question governments and companies would need to ask themselves in adopting biometric national IDs is "what checks and balances do you have to prevent them (staff) issuing false IDs to people," according to Allan.
One thing that biometric databases will make possible is comparisons to identify duplicate biometric data for people with multiple identities. A comparison of fingerprints and iris patterns of everyone in a massive database should not yield matches between different records. So biometrics will make it harder for a person to create a false identity if they have already been recorded with their real identity.
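A de-duplication check of that sort amounts to a pairwise comparison of enrolled biometric templates. Here is a minimal sketch in Python, assuming Daugman-style iris codes compared by fractional Hamming distance; the names, bit strings, and the 0.32 decision threshold are illustrative assumptions, not details of any deployed system.

```python
from itertools import combinations

def hamming_fraction(code_a, code_b):
    """Fraction of differing bits between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def find_duplicate_identities(records, threshold=0.32):
    """Flag pairs of enrollment records whose iris codes are too similar
    to belong to different people.

    `records` maps an identity name to its iris code (a bit string).
    The 0.32 threshold mirrors the decision criterion commonly cited
    for iris-code matching; here it is just an assumed parameter.
    """
    suspects = []
    for (name_a, code_a), (name_b, code_b) in combinations(records.items(), 2):
        if hamming_fraction(code_a, code_b) < threshold:
            suspects.append((name_a, name_b))
    return suspects

# Two enrollments with nearly identical codes look like one person
# travelling under two names; the third is clearly someone else.
records = {
    "John Smith":  "1100110010101100",
    "Jean Dupont": "1100110010101101",
    "Mary Jones":  "0011001101010011",
}
print(find_duplicate_identities(records))
```

Note the cost: the naive pairwise scan is quadratic in the number of enrollments, which is why national-scale systems need indexing or partitioning strategies rather than a literal all-pairs comparison.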
Even governments will find it harder to create new false identities for people. If a person travels abroad and has their name and biometric data recorded in database logs at foreign airports, and their own government later provides them with a new identity, some other government will be able to compare them against its database of previous visitors and recognize them under their older identity.
By using carbon particles more than an order of magnitude smaller than those typically added to steel, researchers at the National Institute for Materials Science in Tsukuba, Japan have produced a much stronger steel.
By adding just 0.002% carbon to martensitic steel that already contains 9% chromium, Sawada and colleagues were able to increase the time-to-rupture at 923 Kelvin by a factor of 100 over the strongest creep-resistant steel currently available (which contains about 0.08% carbon).
Under constant low-stress loading, and over extended time periods, many materials undergo creep, a permanent deformation that is particularly marked at elevated temperatures. Incorporation of fine particles into metals and alloys, also called dispersion strengthening, is used to impart creep-resistance at high temperatures. A team from Japan's National Institute for Materials Science, Tsukuba, has developed a dispersion strengthening technique that incorporates nanometre-scale carbonitride particles into a martensitic stainless steel (a chromium-containing steel hardened by heat treatment) for improved creep performance.
Students with musical training recalled significantly more words than the untrained students, and they generally learned more words with each subsequent trial of three. After 30-minute delays, the trained boys also retained more words than the control group. There were no such differences for visual memory. What's more, verbal learning performance rose in proportion to the duration of musical training.
The researchers, led by Dr Agnes Chan, said giving music lessons to children "somehow contributes to the reorganization [and] better development of the left temporal lobe in musicians, which in turn facilitates cognitive processing mediated by that specific brain area, that is, verbal memory."
But Nora Newcombe, a psychology professor at Temple University in Philadelphia, says there are two major flaws in the new study. The students were not randomized to the music and non-music groups, they were "self-selected," she points out. And, she adds, "It shows nothing [in a study] when you self-select."
Still, the fact that the same people experienced a change in only one type of memory strongly suggests that a real effect was found. This is likely to lead to even more attempts by parents to get their kids to take music lessons. But would it help an adult to take up music lessons for the first time in adulthood?
Age-Related Stem Cell Loss Prevents Artery Repair And Leads To Atherosclerosis
DURHAM, N.C. – Aging has long been recognized as the worst risk factor for chronic ailments like atherosclerosis, which clogs arteries and leads to heart attacks and stroke. Yet, the mechanism by which aging promotes the clogging of arteries has remained an enigma.
Scientists at Duke University Medical Center have discovered that a major problem with aging is an unexpected failure of the bone marrow to produce progenitor cells that are needed to repair and rejuvenate arteries exposed to such environmental risks as smoking or caloric abuse.
The researchers demonstrated that an age-related loss of particular stem cells that continually repair blood vessel damage is critical to determining the onset and progression of atherosclerosis, which causes arteries to clog and become less elastic. When atherosclerosis affects arteries supplying the heart with oxygen and nutrients, it causes coronary artery disease and puts patients at a much higher risk for a heart attack.
The researchers' novel view of atherosclerosis, based on experiments in mice, constitutes a potential new avenue in the treatment of one of the leading causes of death and illness in the U.S., they said. Just as importantly, they continued, this loss of rejuvenating cells could be implicated in a broad range of age-related disorders, ranging from rheumatoid arthritis to chronic liver disease.
The results of the Duke research were posted early (July 14, 2003) on the website of the journal Circulation, (http://circ.ahajournals.org). The study will appear in the July 29, 2003, issue of the journal.
At issue is the role of stem cells, which are immature cells produced in the bone marrow that have the potential to mature into a variety of different cells. The Duke team examined specific stem cells known as "bone-marrow-derived vascular progenitor cells" (VPCs).
What we need is a way to take cells from our bodies and manipulate them into becoming youthful VPC stem cells. This will likely become a key treatment for reversing the process of aging.
The researchers believe that it might ultimately be possible to forestall or even prevent the development of atherosclerosis by injecting these cells into patients, or to induce the patient's own stem cells to differentiate into progenitor cells capable of arterial repair.
"Our studies indicate that the inability of bone marrow to produce progenitor cells which repair and rejuvenate the lining of the arteries drives the process of atherosclerosis and the formation of plaques in the arteries," said Duke cardiologist Pascal Goldschmidt, M.D., chairman of the Department of Medicine. "For a long time we've known that aging is an important risk factor for coronary artery disease, and we've also known that this disease can be triggered by smoking, bad diet, diabetes, high blood pressure and other factors.
"But if you compare someone who is over 60 with someone who is 20 with the same risk factors, there is obviously something else going on as well," he continued. "The possibility that stem cells may be involved is a completely new piece of the puzzle that had not been anticipated or appreciated before. These findings could be the clue to help us explain why atherosclerosis complications like heart attacks and strokes are almost exclusively diseases of older people."
Doris Taylor, Ph.D., a senior member of the research team, sees these findings leading researchers into new areas of investigation.
"For the first time we are beginning to get an insight into how aging and heart disease fit together -- we've known they go hand-in-hand -- but we haven't understood why," she said. "Understanding that we either run out of progenitor cells or that they don't work as well is a big molecular clue to what might be going on in the whole aging process.
"We are excited that as we unravel the mechanisms of this process, we will be able to look deeper into heart and vascular disease, as well as other disease," she added. "These studies form the basis of future collaborations."
In their experiments, the Duke team used mice specially bred to develop severe atherosclerosis and high cholesterol levels. The researchers injected bone marrow cells from normal mice into these atherosclerosis-prone mice numerous times over a 14-week period. As a control, an equal number of the same kind of atherosclerosis-prone mice went untreated.
After 14 weeks, the mice treated with the bone marrow cells had significantly fewer lesions in the aorta, despite no differences in cholesterol levels. Specifically, the researchers detected a 40-60 percent decrease in the number of lesions in the aorta, the main artery carrying blood from the heart.
Using specific staining techniques on the aortas, the researchers were able to determine that the donor bone marrow cells "homed in" on areas where atherosclerotic lesions are most common, especially where smaller vessel branches take off from larger vessels. These areas tend to experience "turbulence" of blood.
When the researchers examined the vessels under a microscope, it appeared that the bone marrow cells not only migrated to where they were needed most, but that they differentiated into the proper cell types. Some turned into endothelial cells lining the arteries, while others turned into the smooth muscle cells beneath the endothelium that help strengthen the arteries.
To further prove that the donor bone marrow cells were responsible for rejuvenating arteries, the scientists measured in the endothelial cells the lengths of structures known as telomeres at the end of chromosomes. They found that the telomeres in the endothelial cells were longer in the treated mice than the untreated mice. Over time, telomeres are known to shorten as the organism ages.
Note that researchers at Stanford have developed a technique for lengthening telomeres. However, in order to rejuvenate aged cells it is likely that additional modifications besides telomere lengthening will be needed. It would be desirable to have ways to select out cells that have less accumulated mutational damage to DNA so that the risks of cancer development in rejuvenated cells would be lowered. Cells that are restored to a state that allows them to divide more rapidly would be a cancer risk if they contained mutations to regulatory genes that control cell division.
The researchers also injected these atherosclerotic mice with donor cells from older mice as well as from younger, pre-atherosclerotic mice.

"We found that the bone marrow cells from the young mice had a nearly intact ability to prevent atherosclerosis, while the cells from the older mice did not," Goldschmidt explained. "This finding suggests that with aging, cells capable of preventing atherosclerosis that are normally present in the bone marrow became deficient in the older mice that had developed atherosclerosis."
Note that many of the risk factors of heart disease may exert their influence by causing a continual stream of injuries to arteries that essentially cause VPCs in the bone marrow to divide so many times that they get worn out. Every time a cell divides its telomeres get shorter. One effect of shortened telomeres is that they are an obstacle to normal cell division. So a diet and health habits that reduce the demand for VPCs may allow them to function for more years repairing arteries.
Once the repair cells from the marrow become deficient, inflammation develops and leads to an increase in inflammation markers (such as CRP). By providing competent bone marrow cells, the investigators were able to suppress the inflammation and its blood markers.
While the direct use of stem cells as a treatment may be many years off, the researchers said it is likely that strategies currently used to reduce the risks for heart disease – such as lifestyle modifications and/or different medications – preserve the collection of these rejuvenating stem cells for a longer period of time, which delays the onset of atherosclerosis.
For Goldschmidt, a major question is whether researchers can somehow use these cells to restore the integrity of the circulatory system of patients who already have a lifetime of atherosclerosis.
"We need to look at the possibility of re-training stem cells that would otherwise be targeted to a different organ system to help repair the cardiovascular system," he said. "Another interesting question is whether rheumatoid arthritis, as an example of chronic inflammatory disorders, causes stem cell loss, since such arthritis is a risk factor for coronary artery disease. The chronic process of joint disease could consume stem cells that could otherwise be used for the repair of the cardiovascular system. We are just beginning to appreciate the links between stem cells and cardiovascular disease."
The research was supported by the National Heart, Lung, and Blood Institute and the Stanley Sarnoff Endowment for Cardiovascular Science.
Other members of the Duke team include: Frederick Rauscher, M.D., Bryce Davis, Tao Wang, M.D., Ph.D., Priya Ramaswami, Anne Pippen, David Gregg, M.D., Brian Annex, M.D., and Chunming Dong, M.D.
This study demonstrates the importance of developing the ability to replenish stem cell reservoirs as a rejuvenation therapy. Progress on methods for how to take cells from the body and turn them into youthful VPCs is essential for extending life and avoiding heart disease and stroke.
This latest result is not that big of a surprise. See my previous post Aged Blood Stem Cells Indicator For Cardiovascular Disease Risk to see how this latest result is consistent with earlier research.
July 22, 2003 -- Researchers at the University of Toronto and St. Michael's Hospital have shown that a vegetarian diet composed of specific plant foods can lower cholesterol as effectively as a drug treatment. The study, published in the July 23 issue of the Journal of the American Medical Association, compared a diet of known cholesterol-lowering, vegetarian foods to a standard cholesterol-reducing drug called lovastatin. The special diet lowered levels of LDL cholesterol - the "bad" cholesterol known to cause clogging in coronary arteries - in subjects by almost 29 per cent, compared to a 30.9 per cent decrease in the lovastatin subjects. The special diet combined nuts (almonds), soy proteins, viscous fibre (high-fibre) foods such as oats and barley and a special margarine with plant sterols (found in leafy green vegetables and vegetable oils).
Lead author David Jenkins, a professor in U of T's Department of Nutritional Sciences and director of the Clinical Nutrition and Risk Factor Modification Centre at St. Michael's Hospital, believes the reason these foods work so well to reduce cholesterol is that humans may be evolutionarily adapted to what has been called the "ape diet," a diet very high in fibre, nuts, vegetable proteins and plant sterols.
He adds the study could have far-reaching implications for public health. "As we age, we tend to get raised cholesterol, which in turn increases our risk of heart disease. This study shows that people now have a dietary alternative to drugs to control their cholesterol, at least initially." Jenkins notes the diet can also be used to maintain normal cholesterol levels.
In this month-long study, a follow-up to one released December 2002, 46 men and women with raised cholesterol were randomly assigned to one of three vegetarian diet groups. The control group ate meals low in saturated fats (such as those found in animal products like beef and butter). The second group had the same low fat diet, plus a daily 20 mg treatment of lovastatin. The last group had a diet high in four foods known to have cholesterol-lowering properties. This special diet, designed to be easy to prepare and eat, included foods such as oat bran bread and cereal, soy drinks, fruit and soy deli slices. A typical dinner for people on the special diet was tofu bake with eggplant, onions and sweet peppers, pearled barley and vegetables.
The key components of the ape diet are plant sterols, found in plant oils and enriched margarines, viscous fibre, found in oats, barley and aubergine, and soy protein and nuts.
The margarines enriched with plant sterols (which compete with cholesterol for absorption) used in the study may have been the commercial brands Take Control, Benecol, and Benecol Light. To up your plant sterol content using natural foods, one possibility is pecans, with 95 milligrams of plant sterols per 100 grams. However, the concentration of plant sterols in the margarines is about two orders of magnitude greater (1.7 grams of sterols in 14 grams of Take Control), and clinical trials of plant sterols have used about 2 grams per day. Still, the nuts have other heart-healthy benefits.
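The back-of-envelope arithmetic behind that "two orders of magnitude" comparison, using the figures quoted above, works out as follows:

```python
# Plant sterol concentrations, from the figures cited in the post:
#   pecans: 95 mg sterols per 100 g of nuts
#   Take Control margarine: 1.7 g (1700 mg) sterols per 14 g serving
pecan_mg_per_g = 95 / 100          # 0.95 mg sterols per gram of pecans
margarine_mg_per_g = 1700 / 14     # ~121 mg sterols per gram of margarine

ratio = margarine_mg_per_g / pecan_mg_per_g
print(round(ratio))                # ~128x, i.e. about two orders of magnitude

# Grams of pecans needed to match the ~2 g/day sterol dose used in trials:
pecans_needed_g = 2000 / pecan_mg_per_g
print(round(pecans_needed_g))      # over 2 kg of pecans per day
```

In other words, getting a clinically tested sterol dose from pecans alone would mean eating a couple of kilograms of nuts a day, which is why the enriched margarines exist.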
"We went right back in time to, hypothetically, five million years ago, when the diet would largely be leafy vegetables, fruits, nuts and seeds," Dr. Jenkins said.
While the researchers bill this diet as a return to the sort of diet that our ancestors ate for millions of years, that is not exactly the case. First of all, it is unlikely that before the development of agriculture any humans or pre-humans ate soy as a major food source. If there are compounds in soy that have some sort of pharmaceutical effect upon cholesterol levels it is not clear (at least to me) that those compounds were present in diets through some other food sources. Also, oats and barley would not have been major sources of calories. However, they are serving here as sources of soluble fiber, and it does seem likely that whatever humans did eat provided a considerable amount of soluble fiber. So it seems likely there are elements in this diet which are not part of our evolutionary history, while other elements have been added into the diet by using food sources that humans did not eat historically.
There is another important caveat to keep in mind when interpreting these results: various human subpopulations have split off from each other long enough and ate sufficiently different diets from each other to have evolved adaptations to local food sources. We see signs of this, for example, with northern Europeans who make more lactase enzyme for digesting milk. We also see it in the differing abilities of racial and ethnic groups to handle alcohol. It is unlikely that every ethnic and racial group has the same average ideal diet. Eventually declining costs of DNA sequencing and the identification of genetic variations that affect how we metabolise food will lead to the widespread practice of nutrigenomics where dietary recommendations will be personalized for one's specific genetic profile and risk factors.
The results of this study are sufficiently dramatic that an accompanying JAMA editorial recommends trying the diet before cholesterol-lowering drugs.
In an accompanying editorial, James Anderson, a professor of medicine at the University of Kentucky in Lexington, said the findings have dramatic public-health implications. He suggested that physicians prescribe the "ape diet" to patients before even considering drugs.
The daily volume of food was about a third of that of the Garden of Eden diet, Jenkins said, adding the people who followed it didn't complain about how much they had to eat but said they couldn't eat any more.
Those who lost weight were asked to eat more, however.
Most of the reports on this study didn't pick up on one particularly interesting result the researchers observed.
Surprisingly, the diet also lowered the levels of C-reactive protein, considered a risk marker for heart disease.
Equally impressive was a 28 percent drop in C-reactive protein, a substance found in the blood that is a sign of inflammation and possible heart disease. The statin group had a 33 percent drop.
The fact that this diet lowers C Reactive Protein (CRP) levels is an added bonus. CRP is a marker for inflammation that has been found to be correlated with heart disease risk. While the importance of CRP as a marker is still debated among cardiology researchers it seems important because there is growing recognition among medical researchers of the role chronic inflammation plays as a cause of the development of degenerative diseases. Any diet that lowers inflammation markers likely will yield more benefits than just lowering the risk of heart failure.
As for how this diet lowered C Reactive Protein (CRP): there are a lot of possibilities. Vitamin E, omega 3 fatty acids, and vitamin B6 are a few of the factors that are thought to lower CRP. Losing weight helps lower CRP as well. However, statin drugs for lowering cholesterol also lower CRP. So is the cholesterol-lowering effect of the diet causing CRP to drop? Maybe.
The Scientist has a good recent survey of the many ways chronic inflammation appears to contribute to the development of many diseases. (requires free registration - and I really recommend taking the trouble as they are one of the better science news sites)
Centuries ago, this trigger was pulled on a more consistent basis as humans battled a harsher environment; Johnson attributes today's toll of inflammation on the super clean environments of Western society. Also, because humans are living longer than evolutionarily designed, and in larger numbers, says Johnson, the odds are increased for disease. "You have an immune system that's looking for something to do and is basically getting into trouble," he says. "I think the problems are caused by an ongoing, aggravated, chronic response to an immune problem that the innate system imagines is there, but isn't." Also, the various byproducts associated with immune system attack, such as reactive oxygen species that decimate joints, may be causing long-term, deleterious effects.
The argument here is that the immune system no longer has enough real enemies to attack and yet it is all hyped up ready to attack something and responds inappropriately. Of course there is a large assortment of auto-immune diseases, but many scientists are looking at inflammation response (and it is hard to untangle inflammation response from immune response) as another manifestation of this general problem. However, in some cases chronic inflammation may be getting triggered by chronic infections. This can happen with helicobacter pylori in the stomach. Periodontal disease also causes arterial inflammation and increased risk of heart disease.
However, many clinicians were unclear of the cause of elevated CRP levels. A study published earlier this year in the Journal of Periodontology reported that inflammatory effects from periodontal disease, a chronic bacterial infection of the gums, cause oral bacterial byproducts to enter the bloodstream and trigger the liver to make proteins such as CRP that inflame arteries and promote blood clot formation.
Keep your teeth clean and your gums healthy.
The inflammation-lowering angle is a lot more interesting because the benefits of cholesterol lowering and the methods for lowering cholesterol are a lot more widely known in comparison. We still do not understand all the factors that contribute to chronic inflammation or the best strategies to use to reduce it.
Professor Moshe Koppel of Bar-Ilan University in Israel has developed software that can identify male versus female writers of fiction and non-fiction text with 80% accuracy.
Female writers use more pronouns (I, you, she, their, myself), say the program's developers, Moshe Koppel of Bar-Ilan University in Ramat Gan, Israel, and colleagues. Males prefer words that identify or determine nouns (a, the, that) and words that quantify them (one, two, more).
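To illustrate the kind of features involved, here is a minimal sketch in Python that scores a text by the relative frequency of the pronoun and determiner/quantifier markers named above. Koppel's actual classifier used a much larger, weighted feature set learned from a corpus; the word lists and the simple difference-of-frequencies scoring rule here are illustrative assumptions.

```python
from collections import Counter

# Function-word lists drawn from the examples quoted in the article;
# the real feature set was far larger (these subsets are hypothetical).
FEMALE_MARKERS = {"i", "you", "she", "their", "myself"}
MALE_MARKERS = {"a", "the", "that", "one", "two", "more"}

def gender_score(text):
    """Crude score: positive leans 'male-marked', negative 'female-marked'.

    Marker frequencies are normalized per 1,000 words so that texts of
    different lengths are comparable.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)          # missing words count as zero
    per_kw = 1000.0 / len(words)
    male = sum(counts[w] for w in MALE_MARKERS) * per_kw
    female = sum(counts[w] for w in FEMALE_MARKERS) * per_kw
    return male - female

print(gender_score("the cat sat on the mat and that was that"))
print(gender_score("i told you she gave it to me myself"))
```

A real system would train a classifier (Koppel's group used machine-learned weights over hundreds of function words and part-of-speech patterns) rather than hand-picking a threshold, which is how the reported 80% accuracy was reached.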
My guess is that a lot of other kinds of patterns in writing styles will be found that provide indications about differences in personality characteristics and cognitive processes. While these patterns may correlate with gender they are likely to be found to correlate with other characteristics such as political leanings, aggressiveness, empathy, and other mental characteristics.
Prof Koppel said that when his research group first submitted one of two papers to be published to the publishing panel of the prestigious National Academy of Sciences in the United States, the referees rejected it "on ideological grounds". "They said, ‘What do you mean? You’re trying to make some claim about men and women being different, and we don’t know if that’s true. That’s just the kind of thing that people are saying in order to oppress women’," he told The Boston Globe.
As more empirical evidence is gathered about innate differences in cognitive processes between individuals, the ideological leftists who embrace a tabula rasa view of human nature are going to find themselves increasingly on the defensive. The steady advance of a truly scientific model of human nature will undermine all ideological belief systems about human nature and politics. The extent to which ideologies will be undermined by science will vary. Every ideology amounts to a set of simplifying assumptions about reality. The amount of simplification varies between ideologies, and the importance of each simplification for the overall structure of each ideology varies considerably as well. But at this point it seems clear that radical egalitarians who claim we all have equal innate abilities and equal innate desires and preferences are going to find their ideology to be badly mauled by scientific advances in our understanding of human genetics and neurobiology.
September 17, 2002
There is a sound neurological basis for the cliché that men are more aggressive than women, according to new findings by scientists at the University of Pennsylvania School of Medicine.
Using magnetic resonance imaging (MRI) scans, the Penn scientists illustrated for the first time that the relative size of the sections of the brain known to constrain aggression and monitor behavior is larger in women than in men.
The research, by Ruben C. Gur, PhD, and Raquel E. Gur, MD, PhD, and their colleagues in Penn's Department of Psychiatry and Department of Epidemiology, was published in a recent issue of the journal Cerebral Cortex.
The findings provide a new research path for therapies that may eventually help psychiatric patients control inappropriate aggression and dangerous patterns of impulsive behavior. They also bolster previous work by the Gurs demonstrating that although some gender differences develop as result of adaptive patterns of socialization, other distinctions are biologically based and probably innate.
"As scientists become more capable of mapping the functions of activity in various parts of the brain, we are discovering a variety of differences in the way men and women's brains are structured and how they operate," said Ruben Gur, first author of the study.
"Perhaps the most salient emotional difference between men and women, dwarfing all other differences, is aggression," he said. "This study affords us neurobiological evidence that women may have a better brain capacity than men for actually 'censoring' their aggressive and anger responses."
The Gurs' work relied on established scientific findings that human emotions are stimulated and regulated through a network that extends through much of the limbic system at the base of the brain (the region encompassing the amygdala, hypothalamus and mesocorticolimbic dopamine systems), and then upward and forward into the region around the eyes and forehead (the orbital and dorsolateral frontal area), and under the temples (the parietal and temporal cortex).
The amygdala is involved in emotional behavior related to arousal and excitement, while the orbital frontal region is involved in the modulation of aggression.
The Gurs' study measured the ratio of orbital to amygdala volume in a sample of 116 right-handed, healthy adults younger than 50 years of age; 57 subjects were male and 59 were female. Once the scientists adjusted their measurements to allow for the difference between men and women in physical size, they found that the women's brains had a significantly higher volume of orbital frontal cortex in proportion to amygdala volume than did the brains of the men.
"Because men and women differ in the way they process the emotions associated with perception, experience, expression, and most particularly in aggression, our belief is that the proportional difference in size in the region of the brain that governs behavior, compared to the region related to impulsiveness, may be a major factor in determining what is often considered 'gendered-related' behavior," Raquel Gur said.
Other Penn investigators participating in the study were Faith Gunning-Dixon, PhD, and Warren B. Bilker, PhD, of the Department of Epidemiology.
In fact, only one man had a "modulator" that was at least seven times larger than his "emotional stimulator," compared to eight women, and only three women had a really small modulator (less than 3.5 times the size of the stimulator) compared to about a quarter of the men. But oddly enough, one woman had the smallest modulator of all, less than two times the size of her amygdala, suggesting that it might not be a good idea to rile her up.
Here is the original paper Sex Differences in Temporo-limbic and Frontal Brain Volumes of Healthy Adults.
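Just to make those cutoffs concrete, here is a trivial sketch in Python. The function and thresholds below are purely illustrative restatements of the numbers quoted above, not anything taken from the paper itself:

```python
# Illustrative only: bucket the orbital-frontal to amygdala volume
# ratio using the cutoffs quoted above (>= 7 counts as an unusually
# large "modulator", < 3.5 as an unusually small one).
def modulator_category(orbital_volume: float, amygdala_volume: float) -> str:
    ratio = orbital_volume / amygdala_volume
    if ratio >= 7.0:
        return "large modulator"
    if ratio < 3.5:
        return "small modulator"
    return "typical"

print(modulator_category(7.5, 1.0))  # large modulator
print(modulator_category(2.0, 1.0))  # small modulator
print(modulator_category(5.0, 1.0))  # typical
```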
Smaller-than-average baby brains that grow very rapidly in the first year of life are seen as key to the development of autism.
Small head circumference at birth, followed by a sudden and excessive increase in head circumference during the first year of life, has been linked to development of autism by researchers at the University of California, San Diego (UCSD) School of Medicine and Children’s Hospital and Health Center, San Diego. Autism spectrum disorder occurs in one out of every 160 children and is among the more common and serious of neurological disorders of early childhood.
It was found that the head size of the autistic children at birth was, on average, in the 25th percentile, meaning that the circumference measurement for these children was smaller than that of 75 percent of other newborns. During the first year of life, however, these same children experienced sudden, rapid and excessive brain growth that put them in the 85th percentile at about 12 to 14 months of age. From then on, the brain growth slowed.
“This burst of overgrowth takes place in a brief period of time, between about two months and six to 14 months of age,” said study leader Eric Courchesne, Ph.D., of the UCSD Department of Neurosciences. “So, we know it cannot be caused by events that occur later, such as vaccinations for mumps, measles and rubella or exposure to toxins during childhood.”
Although no one has yet determined the biological cause of autism, the new findings “give us information about the timing of abnormal brain development,” said study co-author Ruth Carper, Ph.D., a post-doctoral researcher in the UCSD Department of Neurosciences and a research associate at Children’s. “This provides a timeframe for further research, to determine the exact brain abnormalities and the biological mechanisms which produce them.”
Is the rapid rate of brain growth a consequence of a brain growth regulatory system's sensing and responding to the fact that the brain is smaller than it ought to be?
This result will enable the detection of risk for autism at a much younger age. But what is needed is the ability to intervene in the regulatory systems that control brain growth. If research on autism leads to knowledge about how to control brain growth it may become possible to also use that knowledge to boost intelligence by intervening in baby brain development.
Richard Petty, professor of psychology at Ohio State University, conducted the study with Pablo Brinol, a former doctoral student at Ohio State now at the Universidad Autonoma de Madrid in Spain. The research appears in the current issue of the Journal of Personality and Social Psychology.
In one study, the researchers told 82 college students that they were testing the sound quality of stereo headphones – particularly how the headphones would perform when they are being jostled, as during dancing or jogging.
Half the participants were told to move their heads up and down (nodding) about once per second while wearing the headphones. The other half was told to move their heads from side to side (shaking) while listening on the headphones.
All of the participants listened to a tape of a purported campus radio program that included music and a station editorial advocating that students be required to carry personal identification cards.
After listening to the tape, the participants rated the headphones and gave their opinions about the music and the editorial that they heard. The study found that head movements did affect whether they agreed with the editorial. But the effect is more complicated than might be expected.
The study found that nodding your head up and down is, in effect, telling yourself that you have confidence in your own thoughts – whether those thoughts are positive or negative. Shaking your head does the opposite: it gives people less confidence in their own thoughts.
So participants in this study who heard an editorial that made good arguments agreed more with the message when they were nodding in a “yes” manner than shaking in a “no” manner. This is because the nodding movements increased confidence in the favorable thoughts people had to the good arguments compared to shaking.
However, students who heard an editorial that made poor arguments showed the reverse pattern. These students agreed less with the message when they were nodding than when shaking. This is because the nodding movements increased confidence in the negative thoughts they had to the poor arguments compared to shaking.
Want to increase your self-confidence on some subject? Think about it while nodding yes. Or listen to or read someone else talking about it while nodding yes. Of course, if you are reaching your opinions on some subject without sufficient critical thought or knowledge then it makes sense to decrease your confidence on that subject so that you try harder to learn enough to be right.
Also, if someone is trying to sell you on something and they are nodding yes while interacting with you then keep in mind that their self-confidence may be due more to the nodding than to their actually knowing what the heck they are talking about.
The Houston Chronicle has a good brief overview of the history of the development of the Space Shuttle with the politics and the decisions that caused it to be the dangerous and incredibly expensive spacecraft that it is. Near the end of the article Alex Roland sums up a view of the Shuttle with which I am in complete agreement:
Alex Roland, a space historian at Duke University who likens the shuttle to "a camel -- a horse designed by a committee," prefers a different route. He said NASA should concentrate its efforts on bringing down the cost of getting into space.
"Instead of doing that," Roland said, "NASA keeps on throwing away its money on a system that doesn't work and which doesn't do anything."
The money currently spent on Shuttle operations would be better spent on the development of technologies that hold the promise of greatly lowering space launch costs. Materials science research should be funded toward that end. Also, a series of experimental spacecraft should be designed, built, and flown in order to test concepts and to gather information about hypersonic flight for scramjet propulsion systems development. The continued operation of an old-tech spacecraft for a couple more decades is just a total waste of time and money.
In a paper appearing in the July 18 issue of Science magazine, Alex Farrell, assistant professor of energy and resources at UC Berkeley, and David Keith, associate professor of engineering and public policy at Carnegie Mellon University, present various short- and long-term strategies that they say would achieve the same results as switching from gasoline-powered vehicles to hydrogen cars.
"Hydrogen cars are a poor short-term strategy, and it's not even clear that they are a good idea in the long term," said Farrell. "Because the prospects for hydrogen cars are so uncertain, we need to think carefully before we invest all this money and all this public effort in one area."
Farrell and Keith compared the costs of developing fuel cell vehicles to the costs of other strategies for achieving the same environmental and economic goals.
"There are three reasons you might think hydrogen would be a good thing to use as a transportation fuel - it can reduce air pollution, slow global climate change and reduce dependence on oil imports - but for each one there is something else you could do that would probably work better, work faster and be cheaper," Farrell said.
The biggest problem with hydrogen as a means to reduce pollution is that it has to be produced from another energy source. But the most cost competitive energy sources are all forms of fossil fuels. The production of the hydrogen is not 100% efficient and producing it from fossil fuels produces pollution. The transportation and storage of the hydrogen also use substantial amounts of energy.
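Those losses compound multiplicatively, which is easy to see with a back-of-envelope sketch. Every stage efficiency below is an assumed round number for illustration, not a measured value:

```python
# Back-of-envelope well-to-wheels comparison. All per-stage
# efficiencies below are illustrative assumptions, not measured data.
from functools import reduce

def chain(*stages):
    """Multiply per-stage efficiencies into an overall efficiency."""
    return reduce(lambda a, b: a * b, stages)

# Hypothetical hydrogen pathway: reform natural gas to hydrogen,
# compress and distribute it, then convert it in a fuel cell vehicle.
hydrogen = chain(0.75,  # reforming natural gas to hydrogen (assumed)
                 0.90,  # compression, transport, storage (assumed)
                 0.50,  # fuel cell conversion (assumed)
                 0.90)  # electric motor and drivetrain (assumed)

# Hypothetical gasoline pathway: refine crude oil, burn in an engine.
gasoline = chain(0.85,  # refining and distribution (assumed)
                 0.30)  # spark-ignition engine (assumed)

print(f"hydrogen pathway: {hydrogen:.1%}")  # ~30%
print(f"gasoline pathway: {gasoline:.1%}")  # ~25%
```

Under these assumed numbers the upstream losses eat most of the fuel cell's efficiency advantage over the engine, which is the point being made above.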
Hydrogen is also more difficult to store and transport than liquid hydrocarbon fuels and takes up much more space. Nor is it the only conceivable approach to reducing net greenhouse gas emissions from vehicles. One alternative would be to develop a light-driven chemical process that fixes carbon out of atmospheric carbon dioxide to make hydrocarbon fuels. Or, if cheap photovoltaic solar cells could be developed, electricity from solar cells could drive the chemical process that fixes carbon from carbon dioxide. Effectively, gasoline would be generated from solar power and then burned in cars. This artificial carbon cycle would eliminate the net addition of carbon dioxide to the atmosphere.
Back in 2000 the MIT Sloan Automotive Laboratory report On The Road: A life-cycle analysis of new automobile technologies by Malcolm A. Weiss, John B. Heywood, Elisabeth M. Drake, Andreas Schafer, and Felix F. AuYeung registered reservations about the future of hydrogen fuel. (PDF Format)
Continued evolution of the traditional gasoline car technology could result in 2020 vehicles that reduce energy consumption and GHG emissions by about one third from comparable current vehicles and at a roughly 5% increase in car cost. This evolved “baseline” vehicle system is the one against which new 2020 technologies should be compared.
More advanced technologies for propulsion systems and other vehicle components could yield additional reductions in life cycle GHG emissions (up to about 50% lower than the evolved baseline vehicle) at increased vehicle purchase and use costs (up to about 20% greater than the evolved baseline vehicle).
If automobile systems with drastically lower GHG emissions are required in the very long run future (perhaps in 30 to 50 years or more), hydrogen and electrical energy are the only identified options for “fuels”, but only if both are produced from non-fossil sources of primary energy (such as nuclear or solar) or from fossil primary energy with carbon sequestration.
A more recent MIT study released in March 2003 voices even greater doubts about the viability and desirability of hydrogen as a fuel over the next couple of decades.
Published in MIT Tech Talk, March 5, 2003.
Even with aggressive research, the hydrogen fuel-cell vehicle will not be better than the diesel hybrid (a vehicle powered by a conventional engine supplemented by an electric motor) in terms of total energy use and greenhouse gas emissions by 2020, says a study recently released by the Laboratory for Energy and the Environment (LFEE).
And while hybrid vehicles are already appearing on the roads, adoption of the hydrogen-based vehicle will require major infrastructure changes to make compressed hydrogen available. If we need to curb greenhouse gases within the next 20 years, improving mainstream gasoline and diesel engines and transmissions and expanding the use of hybrids is the way to go.
These results come from a systematic and comprehensive assessment of a variety of engine and fuel technologies as they are likely to be in 2020 with intense research but no real "breakthroughs." The assessment was led by Malcolm A. Weiss, LFEE senior research staff member, and John B. Heywood, the Sun Jae Professor of Mechanical Engineering and director of MIT's Laboratory for 21st-Century Energy.
However, the researchers do not recommend stopping work on the hydrogen fuel cell. "If auto systems with significantly lower greenhouse gas emissions are required in, say, 30 to 50 years, hydrogen is the only major fuel option identified to date," said Heywood. The hydrogen must, of course, be produced without making greenhouse gas emissions, hence from a non-carbon source such as solar energy or from conventional fuels while sequestering the carbon emissions.
The full text of the March 2003 MIT study Comparative Assessment Of Fuel Cells is available as a PDF document.
Curiously, in spite of the drawbacks of hydrogen as a way to store and transport energy, hydrogen produced on board cars for immediate burning may be a way to increase the efficiency of internal combustion engines.
But the researchers want to take the concept a big step further, using plasma technology to turn cars into small-scale hydrogen-producing plants - and sharply boosting the spark-ignition engine's efficiency along the way.
"Spark-ignition engines are roughly 30 percent efficient and diesels are about 40 percent efficient," notes Cohn. "We want to approach a diesel level of efficiency while avoiding diesel's pollution problems."
The plasmatron - about the size of a half-gallon milk carton - would convert about a third of a vehicle's gasoline stream into hydrogen. In doing so, it would boost efficiency in varied ways.
I think the hydrogen fuel hype is vastly overblown. The US government spending on hydrogen development is money that would be better spent developing photovoltaic materials that can be made much more cheaply than current photovoltaics. The goal of US government-funded energy research ought to be to obsolesce fossil fuels by developing cheaper competitors.
Update: A big step forward in battery tech that lowered the cost and weight of batteries far enough to make hybrid vehicles competitive would allow reductions in emissions and in fossil fuel use while using all the existing infrastructure. Donald Sadoway of MIT says that such a step forward is achievable. On the subject of whether much better batteries could be developed for use in hybrid vehicles see my Energy Tech archives, and in particular see the bottom part of my post Is Hydrogen The Energy Of The Future? where I link to Sadoway's views.
On the question of whether photovoltaics would have to take up too much space, first of all, it will eventually be possible to achieve fairly high solar photovoltaic cell efficiency. See my post Material Discovered For Full Spectrum Photovoltaic Cell about some LBNL researchers who found a material that may enable cells of around 50% efficiency. Surely nanotubes will be able to achieve a still higher efficiency.
Also, I've done rough calculations on the surface area needed for photovoltaics, and the energy needed looks achievable with a fairly small portion of the Earth's surface. On my Parapundit.com blog, in the Grand Strategy archive, see the comment section of my post Energy Policy, Islamic Terrorism, And Grand Strategy where I introduce some rough calculations on the area needed for photovoltaics. I'd appreciate it if anyone could point to more accurate calculations of how much energy the United States currently uses and how much land in the southern parts of the US would be needed to collect enough energy for current consumption rates.
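For the record, here is the kind of rough calculation I have in mind. Every input is an order-of-magnitude assumption (about 100 EJ/year of US primary energy, a 200 W/m² round-the-clock average insolation for the sunny Southwest, 15% cell efficiency), so treat the output as a ballpark only:

```python
# Rough sizing of the photovoltaic area needed to supply US primary
# energy. All inputs are order-of-magnitude assumptions.
US_PRIMARY_ENERGY_J = 1.0e20      # ~100 EJ/year, roughly US usage circa 2000 (assumed)
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AVG_INSOLATION_W_M2 = 200.0       # 24-hour average for the sunny Southwest (assumed)
CELL_EFFICIENCY = 0.15            # modest assumed conversion efficiency

avg_power_w = US_PRIMARY_ENERGY_J / SECONDS_PER_YEAR
area_m2 = avg_power_w / (AVG_INSOLATION_W_M2 * CELL_EFFICIENCY)
area_km2 = area_m2 / 1e6
side_km = area_km2 ** 0.5

print(f"average power demand: {avg_power_w / 1e12:.1f} TW")
print(f"collector area: {area_km2:,.0f} km^2 (a square ~{side_km:.0f} km on a side)")
```

With these assumptions the script reports an average demand of roughly 3 TW and a collector area on the order of 100,000 km² — a square a bit over 300 km on a side, which is large but still a small fraction of the desert Southwest.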
Cambridge University researcher John Gurdon and colleagues have transplanted adult mouse and human nuclei into frog eggs and found that frog egg cytoplasm contains compounds that induce the production of Oct4 RNA, which is normally expressed only in pluripotent embryonic stem cells.
When the researchers injected the adult nuclei into frog egg nuclei, rather than into the surrounding cytoplasm, Oct4 levels shot up by a factor of ten. "The reprogramming activity is particularly concentrated here," says Gurdon. Molecules in the frog nucleus may be responsible for the eggs' revitalizing abilities, he speculates.
"We believe that the ability of amphibian oocyte components to induce stem cell gene expression in normal mouse and human adult somatic cells, and the abundant availability of amphibian oocytes, encourages the long-term hope that it may eventually be possible to directly reprogram cells, easily obtained from adult human patients, to a stem cell condition,"
Frog eggs are larger than mammalian eggs and much easier to work with. Also, since frog eggs are larger and have now been demonstrated to contain compounds that can cause mouse and human genomes to revert to a state more like the embryonic state, it will be much easier for scientists to isolate the responsible compounds. Trying to get enough human or mouse egg contents to fractionate and screen for active compounds would be much harder.
The obvious larger goal behind this research is to be able to take a sample of a person's cells and make those cells revert to an embryonic state. Those cells would then hold the potential to be coaxed into growing replacement organs or to supply various adult stem cell lines to replenish depleted aged stem cell reservoirs in the body.
Keep in mind that while a great deal of debate centers around whether human embryonic stem cell research should be allowed and in what ways it is ethical to acquire embryonic stem cells, there is also a great deal of research on the states of other cell types that needs to be done to produce useful therapies. Just as we need to understand better what exactly defines an embryonic stem cell, or how to make a cell become an embryonic stem cell, we also need to understand how cells become and maintain their state as other cell types.
Imagine you wanted to take some embryonic stem cells and convert them into liver cells in order to grow a new liver. Cells converted from one cell type to another using some manipulation may reach a state that makes them seem like liver cells. But since they would not have experienced the exact sequence and timing of signals that cells experience in a developing embryo, they may differ in some subtle way from real liver cells in the regulatory state of their genes. One potential danger is that they might revert to an embryonic state, convert into cancer cells, or become some other undesired cell type.
See also the Better Humans report on this for other relevant links.
This report comes on the heels of the discovery of the gene Nanog which can turn adult cells into embryonic stem cells.
The Christian Science Monitor has an interesting article on the increasing recognition among climate scientists of the role that variations in solar activity play in changes in Earth's climate.
Other researchers found that the sun appears to display variations in its magnetic field and solar wind that span longer time scales. According to some researchers, during the past century, the strengths of the solar wind and the sun's magnetic field have doubled.
The cosmic-ray proposition holds that when the sun's magnetic field and the field generated by the solar wind increase, Earth is increasingly shielded from cosmic rays. These charged particles are thought to have the ability to seed cloud formation by triggering processes at the micro level that generate the nuclei around which water vapor condenses. Thus, if Earth's "shield" had been strengthening over the past century, it should have led to lower average cloud cover and warmer temperatures.
Will the sun increase or decrease its output overall in the 21st century? It is not inconceivable that some day humans may seek to engage in large-scale measures to compensate for trends in solar output. Imagine, for instance, that the sun began again to behave as it did in the late 17th century. Variations in solar output are believed to have been at least partly responsible for the Little Ice Age.
Because the sun is the ultimate source of Earth's warmth, some researchers have looked to it for an answer. In the 1970s, solar researcher John Eddy, now at Saginaw Valley State University in Michigan, noticed the correlation of sunspot numbers with major ups and downs in Earth's climate. For example, he found that a period of low activity from 1645 to 1715, called the Maunder Minimum, matched perfectly one of the coldest spells of the Little Ice Age.
Judith Lean, a solar physicist at the Naval Research Laboratory in Washington, estimates that the sun may have been about a quarter of 1 percent dimmer during the Maunder Minimum. This may not sound like much, but the sun's energy output is so immense that 0.25 percent amounts to a lot of missing sunshine -- enough to cause most of the temperature drop, she says.
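To put Lean's figure in rough perspective, the standard back-of-envelope conversion from a dimming fraction to a global radiative forcing looks like this (the solar constant and albedo below are textbook round numbers, not values from her work):

```python
# Convert an estimated 0.25% solar dimming into an approximate global
# radiative forcing. Solar constant and albedo are standard round
# numbers; the result is illustrative, not from Lean's paper.
SOLAR_CONSTANT_W_M2 = 1366.0   # total solar irradiance at Earth
PLANETARY_ALBEDO = 0.3         # fraction of sunlight reflected back to space
DIMMING_FRACTION = 0.0025      # Lean's ~0.25% Maunder Minimum estimate

delta_tsi = SOLAR_CONSTANT_W_M2 * DIMMING_FRACTION   # deficit at top of atmosphere
# Spread over the whole sphere (divide by 4) and subtract the
# reflected fraction to get the forcing actually absorbed.
forcing = delta_tsi / 4.0 * (1.0 - PLANETARY_ALBEDO)

print(f"TSI deficit: {delta_tsi:.1f} W/m^2")
print(f"approximate global forcing: {forcing:.2f} W/m^2")
```

The result is a bit over half a watt per square meter of lost forcing — small globally, which is consistent with the modest global average cooling Shindell describes below.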
NASA climate researcher Drew Shindell believes the decrease in overall Earth temperature was fairly small during the Maunder Minimum but the changes were much more drastic in the North Atlantic and Europe.
Shindell noted that the effects of this period of a dimmer sun were concentrated more regionally than globally. "Global average temperature changes are small, approximately .5 to .7 degrees Fahrenheit (0.3-0.4C), but regional temperature changes are quite large." Shindell said that his climate model simulation shows the temperature changes occurring mostly because of a change in the Arctic Oscillation/North Atlantic Oscillation (AO/NAO).
Once photovoltaic cells and other means of producing energy become cheap the release of CO2 into the atmosphere will become optional. One can imagine scenarios in which solar output dropped enough that a large scale increase in CO2 produced by burning fossil fuels might usefully compensate for some of the decrease in solar radiation. On the other hand, if solar output increased one can also imagine strategies which could be used to take CO2 out of the atmosphere. Energy produced by solar cells or nuclear reactors could be used to drive chemical processes that fixed carbon from the atmosphere into a solid form.
This reminds me of an idea that I think deserves more attention than it gets: Since hydrogen is a less dense form of energy than hydrocarbon liquids it is a poor substitute for hydrocarbons in vehicles. An alternative would be to use energy from photovoltaics or nuclear reactors to drive the fixing of carbon from atmospheric carbon dioxide to hydrogen extracted from water (effectively to do what chloroplasts do in plants but without any oxygen in the resulting compounds). The resulting hydrocarbons could be used as an energy source for vehicles. This would produce what would be, in effect, an artificial carbon cycle operating in parallel with the natural one.
As the saying goes, "I am not making this up". The antidepressant SSRI citalopram controls compulsive shopping disorder.
In a study appearing in the July issue of the Journal of Clinical Psychiatry, patients taking citalopram, a selective serotonin reuptake inhibitor that is approved for use as an antidepressant, scored lower on a scale that measures compulsive shopping tendencies than those on a placebo. The majority of patients using the medication rated themselves "very much improved" or "much improved" and reported a loss of interest in shopping.
"I'm very excited about the dramatic response from people who had been suffering for decades," said Lorrin Koran, MD, professor of psychiatry and behavioral sciences and lead author of the study. "My hope is that people with this disorder will become aware that it's treatable and they don't have to suffer."
Compulsive shopping disorder, which is estimated to affect between 2 and 8 percent of the U.S. population, is characterized by preoccupation with shopping for unneeded items and the inability to resist purchasing such items. Although some people may scoff at the notion of shopping being considered an illness, Koran said this is a very real disorder. It is common for sufferers to wind up with closets or rooms filled with unwanted purchases (one study participant had purchased more than 2,000 wrenches; another owned 55 cameras), damage relationships by lying to loved ones about their purchases and rack up thousands of dollars in debt.
"Compulsive shopping leads to serious psychological, financial and family problems including depression, overwhelming debt and the breakup of relationships," Koran said. "People don't realize the extent of damage it does to the sufferer."
Earlier studies suggested that the class of medications known as SSRIs might be effective for treating the disorder, but this had not been confirmed through a trial in which participants didn't know whether they were taking a placebo or the actual medication. Koran and his team sought to test citalopram - the newest SSRI on the market at that time - by conducting a seven-week, open-label trial followed by a nine-week, double-blind, placebo-controlled trial.
The study involved 24 participants (23 women and one man) who were defined as suffering from compulsive shopping disorder based on their scores on the Yale-Brown Obsessive-Compulsive Scale-Shopping Version, or YBOCS-SV. Patients with scores above 17 are generally considered as suffering from compulsive shopping disorder. Most of the participants had engaged in compulsive shopping for at least a decade and all had experienced substantial financial or social adverse consequences of the disorder.
During the open-label portion of the study, each participant took citalopram for seven weeks. By the end of the trial, the mean score of the YBOCS-SV decreased from 24.3 at baseline to 8.2. Fifteen patients (63 percent) were defined as responders - meaning they self-reported as being "very much improved" or "much improved" and had a 50 percent or greater decrease in their YBOCS-SV scores. Three subjects discontinued their use of the medication because of adverse events such as headache, rash or insomnia.
The responders were randomized into the double-blind portion of the trial in which half took citalopram for nine weeks and the other half took a placebo. Five of the eight patients (63 percent) who took the placebo relapsed - indicated by self-reporting and YBOCS-SV scores above 17. The seven patients who continued the medication saw a decrease in their YBOCS-SV scores and also reported a continued loss of interest in shopping, cessation of browsing for items on the Internet or TV shopping channels, and the ability to shop normally without making impulsive purchases.
I've argued in the past that biomedical advances will lead to treatments that cause large changes in group-average behavior. Well, here's a drug that already exists which has the potential to affect the size of the GDP and the efficiency of the markets. My guess is that it would be a net benefit to the economy over a longer period of time since consumer debt in the US economy is much too high. People who do not make compulsive purchases probably tend to accumulate longer-lasting assets and to invest more.
The MIT Technology Review has a report on the efforts of Honeywell Laboratories in Minneapolis, MN, the Intel Proactive Health Research lab in Hillsboro, OR, and other labs to develop technology to monitor the health and activities of senior citizens. (free registration required)
The Intel consortium is developing even more sensitive ways to follow the activities of elderly people. Its research goes beyond motion detectors and pillbox sensors to include things like pressure sensors on an Alzheimer’s patient’s favorite chair, networks of cameras, and tiny radio tags embedded in household items and clothing that communicate with tag readers in floor mats, shelves, and walls. From the pattern of these signals, a computer can deduce what a person is doing and intervene—giving instructions over a networked television or bedside radio, or wirelessly alerting a caregiver. Dishman says Intel will install the first trial systems in the homes of two dozen Alzheimer’s patients by early next year.
In collaboration with Intel Research Seattle, the Proactive Health team is building an advanced smart-home system to help those like Carl and Thelma deal with Alzheimer’s. Researchers are integrating four main technology areas into a prototyping environment to be tested in the homes of patients: sensor networks, home networks, activity tracking, and ambient displays. The researchers wonder about developing a better pill-tracking system for Carl’s medications, about sensor networks to help his adult children look in on things from far away, and about computer-based coaches that help Carl keep his mind fresh.
Intel foresees the use of WiFi wireless networks to spread sensors and actuators throughout our physical environment. (PDF format).
Small, inexpensive, low-powered sensors and actuators, deeply embedded in our physical environment, can be deployed in large numbers, interacting and forming networks to communicate, adapt, and coordinate higher-level tasks. As we network these micro devices, we’ll be pushing the Internet not just into different locations but deep into the embedded platforms within each location. This will enable us to achieve a hundredfold increase in the size of the Internet beyond the growth we’re already anticipating. And it will require new and different methods of networking devices to one another and to the Internet.
The University of Rochester Center for Future Health is working on a model home in their Smart Medical Home Research Laboratory which they are using to try out a number of concepts for constantly measuring human health signs and activities.
The Center's overall goal is to develop an integrated Personal Health System, so all technologies are integrated and work seamlessly. This technology will allow consumers, in the privacy of their own homes, to maintain health, detect the onset of disease, and manage disease. The data collected 24/7 inside the home will augment the data collected by physicians and hospitals. The data collection modules in the home will start with the measurement of traditional vital signs (blood pressure, pulse, respiration) and work to include measurement of "new vital signs", such as gait, behavior patterns, sleep patterns, general exercise, rehabilitation exercises, and more. This five-room "house" is outfitted with infrared sensors, computers, biosensors, and video cameras for use by research teams to work with research subjects as they test concepts and prototype products.
There are a few things to note about these reports:
The last point is in many ways the most interesting. Even adults in perfect health in safe environments will want extensive automated sensing systems installed in their homes if those systems can save them time and effort. If automated systems can detect a dirty carpet and send out the automated vacuum, or detect a spill on the kitchen floor and send out an automated cleaning device to clean it up, then many people will want the sensor systems that make these things possible. Ditto for systems that can pick up dirty clothes and take them to the laundry, notice that the counter has lots of dirty dishes, or respond to a voice command to clear the table.
But less obvious sensor systems can be imagined. Picture a section of floor tile that can accurately weigh whatever is standing on it. If that tile were connected to a computer that also had several video cameras covering that position, then it could recognize what was standing on it and what it was dressed in or carrying (the weight has to be adjusted for clothes, pocketbooks, a plate of food, or whatever), and determine that Spot the dog is getting too fat or that daughter Kathy might be becoming anorexic.
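The weight-correction step is simple to sketch. Everything below (the item list, the typical weights, the idea of a camera classifier feeding in detected items) is invented for illustration, not an actual system:

```python
# Hypothetical sketch: the tile reports total weight, an (assumed) camera
# classifier reports what the person is wearing or carrying, and typical
# weights for those items are subtracted to estimate body weight.

TYPICAL_ITEM_WEIGHT_KG = {  # illustrative values only
    "winter coat": 2.0,
    "pocketbook": 1.2,
    "plate of food": 0.6,
}

def estimate_body_weight(tile_reading_kg, detected_items):
    """Subtract the estimated weights of detected clothing and objects."""
    correction = sum(TYPICAL_ITEM_WEIGHT_KG.get(item, 0.0)
                     for item in detected_items)
    return tile_reading_kg - correction

def weight_trend(history_kg, window=14):
    """Average change per reading over the last `window` readings --
    the kind of trend that would flag Spot's or Kathy's weight drift."""
    recent = history_kg[-window:]
    if len(recent) < 2:
        return 0.0
    return (recent[-1] - recent[0]) / (len(recent) - 1)
```

The interesting engineering problem, of course, is the classifier, not the subtraction.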
The Surveillance Society is going to become widespread more because of individual choices of hundreds of millions of private individuals than because of decisions taken by governments.
Update: MIT inventor Ted Selker has a smart futon that watches your face for cues about what you want.
The seemingly normal futon in the corner is actually a multimedia couch bed. By staring or blinking at images projected on the ceiling above the bed, you can turn on a radio or set an alarm clock without moving a major muscle. While the system could create the world’s worst couch potato, it could also be ideal for people with physical disabilities.
PHILADELPHIA -- Scientists at the University of Pennsylvania have found new support for the age-old advice to "sleep on it." Mice allowed to sleep after being trained remembered what they had learned far better than those deprived of sleep for several hours afterward.
The researchers also determined that the five hours following learning are crucial for memory consolidation; mice deprived of sleep five to 10 hours after learning a task showed no memory impairment. The results are reported in the May/June issue of the journal Learning & Memory.
"Memory consolidation happens over a period of hours after training for a task, and certain cellular processes have to occur at precise times," said senior author Ted Abel, assistant professor of biology at Penn. "We set out to pinpoint the specific window of time and area of the brain that are sensitive to sleep deprivation after learning."
Abel and his colleagues found that sleep deprivation zero to five hours after learning appeared to impair spatial orientation and recognition of physical surroundings, known as contextual memory. Recollection of specific facts or events, known as cued memory, was not affected. Because the brain's hippocampus is key to contextual memory but not cued memory, the findings provide new evidence that sleep helps regulate neuronal function in the hippocampus.
What conclusions can be drawn from this, aside from the obvious one that it is wise to get a full night of sleep? One possibility is that if you can choose when to study, it might help more to study in the evening, in the last 5 waking hours before you go to bed. It is best to have freshly learned information in your mind before going to sleep.
One problem with this advice is that schools typically teach classes during the day. If you really want to get radical in your approach to learning, one possibility to consider is to wake up and go to bed much earlier. One could learn during the day and then immediately go to sleep for the evening hours, then wake up in the early hours of the new day to start the day's activities. This isn't practical for most people. But if you are going to work your way thru college it might make more sense to have a job that starts at midnite and to stay awake at work until it is time for your morning classes. Then go to school, come home, and go to sleep.
A much less radical approach which would allow one to keep regular hours would be to do all the non-study activities (chores, errands, jobs, etc) before evening time and reserve all evening hours for studying.
An afternoon siesta might well help the learning process as well.
Speaking at the International Congress of Genetics in Melbourne, Australia, Nobel Prize winner Sydney Brenner said that biological evolution is obsolete.
Another laureate, Professor Sydney Brenner, who helped crack the DNA code, told the 2750 at the conference that biological evolution was an obsolete technology. "The brain is more powerful than the genome."
By this he means the random generation of mutations which then get sorted thru by natural selection, with survival of the genes that optimize reproductive fitness. As far as humans are concerned he is not quite right yet. But in another 20 or 30 years he will be. Natural selection is still happening in humans right now. Unfortunately, as demonstrated by analysis of data from the Australian Twin Registry (ATR) published a couple of years ago in Evolution, the genetic variations for higher intelligence and delayed childbearing are being selected against in industrialized societies.
University-educated women have 35% lower fitness than those with less than seven years education, and Roman Catholic women have about 20% higher fitness than those of other religions.
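Differentials of that size compound quickly. A rough sketch of the arithmetic (the starting share and generation counts are arbitrary illustrations; real population genetics is far messier than a single fixed fitness number):

```python
def share_after_generations(initial_share, relative_fitness, generations):
    """Population share of a subgroup whose per-generation relative
    fitness differs from the rest of the population (everyone else = 1.0)."""
    share = initial_share
    for _ in range(generations):
        grown = share * relative_fitness
        share = grown / (grown + (1.0 - share))
    return share

# A subgroup starting at 30% of the population with 35% lower fitness
# (relative fitness 0.65) shrinks generation by generation:
for g in (1, 2, 4):
    print(g, round(share_after_generations(0.30, 0.65, g), 3))
```

Even if the true differential is a fraction of the reported one, the direction of the trend is what matters over a handful of generations.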
In spite of the fact that the results are consistent with what we see happening around us in our daily lives, one of the researchers who co-authored the paper professed to be surprised by the results.
“I was staggered by the results we got,” said Dr Owens. “When we decided to control for these factors, I wasn’t expecting anything to come out of it. I thought, ‘let’s just run with the analysis’. But there was a massive difference in the number of children born to families with a religious affiliation. Many of the Catholic twins we studied had an average family of five children, where other families were having only one or two children.
“We also found that mothers with more education were typically having just one child at an older age. Their reproductive fitness was much lower than their peers who left school as early as possible. Again, and again, our analyses for these two factors came back with the same results.”
The researchers who published the study did not even mention the word "intelligence" but the conclusions are pretty plain to see. I expect higher intelligence to be selected against for the foreseeable future. The first change that might begin to swing the trend back toward selection for higher intelligence may come as a result of cheap DNA sequencing. When the genetic variations for higher intelligence are identified and it becomes cheap to check a prospective mate for genetic potential for producing high intelligence offspring then some people are going to start using the results of such tests as guides when choosing mates. As I've discussed in previous posts, cheap DNA sequencing will also increase the incentive for women to use sperm bank sperm.
The next big change will probably come when it becomes possible to do germ line genetic engineering to give one's progeny genetic variations that enhance intelligence. Then the vast bulk of all genetic changes that get introduced into progeny will be placed there as a result of conscious human intent and not as a result of the occurrence of random mutations. At that point we will be able to say that biological evolution by natural selection on randomly generated mutations will be obsolete.
George Mason University geography Ph.D. candidate Scott Gorman and research assistant professor Laurie Schintler have created a computer database that maps the entire United States telecommunications network overlaid with all major industries.
He can click on a bank in Manhattan and see who has communication lines running into it and where. He can zoom in on Baltimore and find the choke point for trucking warehouses. He can drill into a cable trench between Kansas and Colorado and determine how to create the most havoc with a hedge clipper. Using mathematical formulas, he probes for critical links, trying to answer the question: "If I were Osama bin Laden, where would I want to attack?" In the background, he plays the Beastie Boys.
I hope they are encrypting their data.
"Only by trying to understand critical infrastructure can we begin to formulate plans and policies designed to mitigate the effects that could occur as the result of a targeted physical and/or cyber attack on infrastructure in the developed world," wrote Schintler in an August, 2002 CIP Report article. "And we need understand our complex infrastructure even better than the enemy."
One problem here is that this sort of data needs to be collected in order to identify vulnerabilities for defensive purposes. But by collecting it the data becomes vulnerable to being stolen by the Bad Guys. Obviously, security precautions can be taken to make the data more difficult to steal. The Washington Post article mentions that the researchers are using some physical security devices to make it hard for anyone to get to their computers. But my guess is those devices could be defeated by sufficiently sophisticated thieves. Hence my hope that they are encrypting their data.
But there is a longer run problem here that probably can not be solved: it will become increasingly easy for anyone to collect the information that Gorman and Schintler have collected. Gorman collected much of his information using the internet. Well, companies and government agencies can remove some information from their web sites. But are electronic map databases going to remove the location information for companies? Also, some of the information ends up being fairly accessible because construction companies and government agencies have to know where it is not safe to dig. A large number of people need fairly easy access to the information.
There is also the separate question of whether important vulnerabilities, once identified, will be dealt with. Redundant systems cost money. Parallel fiber optic cables and switching facilities would have to be built. Companies compete with each other and need to keep their costs low. Who is going to have sufficient incentive to build in the amount of redundancy that would effectively protect against terrorist attacks? Also, can it even be done? If one switching station is too important and two more get built, then the terrorists have to blow up 3 buildings rather than 1. For an organization that can get the material together to turn one van into a bomb, the total effort needed to make and deliver 3 bomb vehicles will probably be less than 3 times the effort needed for one.
It might make more sense to approach the problem by developing better capabilities for doing really rapid repair. Mobile fiber optic switching equipment that could be transported in less than a day to anywhere in the country to be installed to replace destroyed facilities might make more sense than redundant fully installed equipment. Also, pre-installed hooks across major bridges and even on the sides of some buildings could be used to quickly extend a fiber optic cable into a place like Manhattan if major chunks of existing cables were cut.
The world's scientific establishment is frustrating research into cancer, which could probably be cured in 10 years if fought through a central agency, according to one of the world's most eminent scientists.
Along with one of Australia's top expatriate scientists, Bruce Stillman, Dr Watson is pushing for an international effort to map the genetic makeup of all cancers. It would be similar to the sequencing of the human genome, a task completed this year, but would cost much less - up to $A300 million compared to the $A4.5 billion spent on the human genome.
As I understand it, Watson's argument is that rather than parcelling out smaller amounts of money to many different scientists to study whichever facet of cancer they find interesting, there should be a big systematic effort to examine a large number of cancer cell lines and to look at their gene expression across a large number of genes.
Watson is saying, in essence, that we now have the tools to collect the genetic information we need in order to discover the changes in genetic regulatory mechanisms that cause cancer. Given that it is possible to do this, he says we should spend the money on a big effort to collect that information.
Hey, suppose he is right. But suppose the tools are expensive to use. If discovering a cure for cancer was going to cost, say, $500 billion, would you be for or against it? I do not think it would really cost that much. But even if it did I'd be for it. Put that number in perspective: the US economy produces about $10 trillion per year in goods and services. The health portion of that is around 14% (give or take a percentage point; I didn't look up the latest figures) and so is about $1.4 trillion per year. So what is $500 billion in the bigger scheme of things?
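The back-of-envelope arithmetic is worth writing down (the $500 billion figure is the hypothetical from the paragraph above, not an actual cost estimate):

```python
gdp = 10e12            # rough US output, dollars per year
health_share = 0.14    # health care's approximate share of the economy
health_spending = gdp * health_share          # about $1.4 trillion/year

project_cost = 500e9   # the hypothetical crash-program price tag
years_of_health_spending = project_cost / health_spending

print(health_spending)
print(years_of_health_spending)  # ~0.36, i.e. roughly four months of what
                                 # the US already spends on health care
```

A one-time outlay equal to about four months of ongoing health spending, in exchange for permanently cheaper cancer treatment, is an easy trade.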
The late Lewis Thomas, former director of Sloan-Kettering, observed in his book Lives Of A Cell: Notes Of A Biology Watcher that diseases are expensive to treat when we do not have effective treatments that get right at the causes. He cited, for example, tuberculosis sanitariums. People had to be kept in professionally staffed institutions for long periods of time and could not work or take care of family while sick. But along came drugs that cured TB and the people walked out in a few weeks. The cost savings were enormous. Similarly, the cost savings that will come from a cure for cancer will be enormous. Even if we spent hundreds of billions on Watson's Manhattan Project to cure cancer we'd gain it back many times over because effective treatments would be far cheaper than radiation therapy, chemotherapy, and the other treatments currently used that have horrible side-effects and which can not even cure most cancer patients.
But with the recently announced historic completion of the Human Genome Project, and other advances in molecular biology and proteomics, medical science is about to take its largest leap, probably since the discovery of antibiotics.
The results for the prevention, diagnosis and treatment of cancer are expected to be profound. "We are now in a position to rapidly and continuously accelerate the engine of discovery, so we can eliminate suffering and death from cancer by 2015," said Dr. Andrew von Eschenbach, Director of the National Cancer Institute. "We may not yet be in a position to eliminate cancer entirely," he continued, "but eliminating the burden of the disease by preemption of the process of cancer initiation and progression is a goal within our grasp."
Cancer can be thought of as an information problem. Each cancer has genes that have been deleted and other genes that have been upregulated or downregulated. Go thru a large number of cancer cells and collect the information about the state of a large number of genes and it may be possible to deduce exactly what genetic switching combinations can cause or stop a cell from being cancerous. From that information it may be possible to devise effective therapies using RNA interference and other gene therapies. Also, drugs could be developed to target any genetic switching mechanism which is identified to be important.
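As a toy illustration of that framing, suppose each cell line is reduced to a set of on/off gene states. A first pass at the "deduction" is just scoring how differently each gene behaves in cancerous versus normal lines. All gene names and data below are made up, and real analysis would use continuous expression levels and combinations of genes, not single on/off states:

```python
def gene_scores(samples):
    """samples: list of (label, {gene: 0 or 1}) pairs, label 'cancer' or
    'normal'. Returns, for each gene, |P(on | cancer) - P(on | normal)|;
    genes scoring near 1.0 switch state almost perfectly with disease."""
    groups = {"cancer": [], "normal": []}
    for label, expression in samples:
        groups[label].append(expression)
    scores = {}
    for gene in samples[0][1]:
        p = {k: sum(e[gene] for e in v) / len(v) for k, v in groups.items()}
        scores[gene] = abs(p["cancer"] - p["normal"])
    return scores

samples = [  # invented data
    ("cancer", {"growth_gene": 1, "housekeeping_gene": 1}),
    ("cancer", {"growth_gene": 1, "housekeeping_gene": 0}),
    ("normal", {"growth_gene": 0, "housekeeping_gene": 1}),
    ("normal", {"growth_gene": 0, "housekeeping_gene": 0}),
]
print(gene_scores(samples))  # growth_gene scores 1.0, housekeeping_gene 0.0
```

The point of Watson's proposed project is to collect enough cell lines and enough genes that this kind of analysis becomes decisive rather than suggestive.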
University of Texas researchers see influenza as a greater bioweapons threat than smallpox.
The Texas researchers wrote in the Journal of the Royal Society of Medicine that scientists are close to completing the blueprint of the 1918 Spanish Flu (search) that killed 20-40 million people around the world, including half a million in the United States. That blueprint, they said, could provide the recipe for terrorists looking for a deadly weapon.
The sequencing of the 1918 Spanish Flu DNA has been underway for quite some time as this 2001 PNAS paper demonstrates. Once it is complete the sequence will be available for use by any group capable of building a virus from scratch or by any group that can modify an existing influenza virus strain to put in the genetic variations that caused the level of virulence that was characteristic of the 1918 strain.
The complete sequence of the 1918 flu will be known within 2 years. Madjid says that it will become increasingly easier to build the 1918 virus from the sequence information.
Dr Madjid told BBC News Online: "Using influenza as a bioweapon is a probability.
"It's just a matter of technology. If it's difficult now, it will be easier in six months and much easier in a year's time."
He is right about the increasing ease of creating such a virus. This is more generally true of building almost any kind of weapon you can imagine. The more technology advances the easier it becomes to make things.
Here is the Pub Med entry for the paper by Mohammad Madjid MD, Scott Lillibridge MD, Parsa Mirhaji MD, and Ward Casscells MD on Influenza as a bioweapon. From the Royal Society of Medicine Press Release on this paper.
Many bioterror warnings have focused on diseases like smallpox, but flu has very different implications for public health. Influenza is far more easily available, and common enough that a cluster of cases would not cause alarm at first. Once an epidemic has begun, it is more difficult to immunize against, as the incubation period is short. The virus is very difficult to eradicate since birds, rats and pigs all carry flu.
A third difference is that the incubation period for influenza is short (1–4 days) versus 10–14 days for smallpox. Immunization after exposure to influenza is therefore not protective, and even the neuraminidase inhibitors such as oseltamivir must be administered before symptoms develop or within the first 48 hours after their appearance. Fourth, influenza is harder to eradicate, because of avian, murine, and swine reservoirs. Fifth, influenza outside of pandemics, has lower case-fatality (2.5% versus 25%, though the newly recognized triggering of cardiovascular events suggests that the true mortality may be much higher in ill or elderly persons). Finally, influenza poses a greater threat to world leaders than does smallpox, because they are older and prone to influenza and its cardiovascular complications, have some residual immunity to smallpox (whereas unvaccinated youth have none), and are often in public places.
As I've previously argued, we need facilities that are capable of being operated in a crisis mode to very rapidly sequence and make a vaccine for a new killer strain of influenza. That strain might arise naturally (this seems inevitable in fact) or it could be made by terrorists. But eventually we are going to be faced with it. The development of DNA vaccine technology for influenza has the potential of enabling the manufacture of much greater quantities of vaccine more quickly and cheaply than conventional vaccine manufacturing approaches. Therefore DNA vaccine development for more influenza strains should be a priority as a useful learning experience.
The Defense Advanced Research Projects Agency (DARPA) is developing a system for use in urban combat called "Combat Zones That See" (CTS) to better protect troops fighting in urban combat zones.
The project's centerpiece would be groundbreaking computer software capable of automatically identifying vehicles by size, color, shape and license tag, or drivers and passengers by face.
The Combat zones That See (CTS) Program explores concepts, develops algorithms, and delivers systems for utilizing large numbers (1000s) of cameras to provide the close-in sensing demanded for military operations in urban terrain. Automatic video understanding will reduce the manpower needed to view and manage this impossibly large collection of data and reduce the bandwidth required to exfiltrate the data to manageable levels. The ability to track vehicles across extended distances is the key to providing actionable intelligence for military operations in urban terrain. Combat zones That See will advance the state-of-the-art for multiple-camera video tracking, to the point where expected track lengths reach city-sized distances. Trajectories and appearance information resulting from these tracks are the key elements to performing higher-level inference and motion pattern analysis on video-derived information. Combat zones That See will assemble the video understanding, motion pattern analysis, and sensing strategies into coherent systems suited to Urban Combat and Force Protection.
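The track-linking problem at the heart of this can be caricatured in a few lines: join track fragments from different cameras when the time gap is plausible and the appearance descriptors match. Everything below (the fragment format, the thresholds, the greedy matching) is an invented simplification for illustration, not DARPA's method:

```python
def similarity(a, b):
    """Cosine similarity of two appearance descriptors (tuples of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

def link_tracks(fragments, max_gap=30.0, min_sim=0.9):
    """Greedily chain track fragments. Each fragment is a dict with
    't_start', 't_end', and 'appearance'. Returns chains of indices."""
    order = sorted(range(len(fragments)), key=lambda i: fragments[i]["t_start"])
    used, chains = set(), []
    for i in order:
        if i in used:
            continue
        chain, cur = [i], i
        used.add(i)
        for j in order:
            if j in used:
                continue
            gap = fragments[j]["t_start"] - fragments[cur]["t_end"]
            if 0 <= gap <= max_gap and similarity(
                    fragments[cur]["appearance"],
                    fragments[j]["appearance"]) >= min_sim:
                chain.append(j)
                used.add(j)
                cur = j
        chains.append(chain)
    return chains
```

Doing this reliably across thousands of cameras and city-sized distances, with changing lighting and partial views, is exactly the hard part the program is funding.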
This project really is motivated by military needs.
Military Operations in Urban Terrain are fraught with danger. Urban canyons and abundant hide-sites yield standoff sensing from airborne and space-borne platforms ineffective. Short lines-of-sight neutralize much of the standoff and situation-awareness advantages currently rendered by U.S. forces. Large civilian populations and the ever-present risk of collateral damage preclude the use of overwhelming force. As a result, combat in cities has long been viewed as something to avoid. However, modern asymmetric threats seek to capitalize on these limitations by hiding in urban areas and forcing U.S. Forces to engage in cities. We can no longer avoid the need to be prepared to fight in cities. Combat zones That See will produce video understanding algorithms embedded in surveillance systems for automatically monitoring video feeds to generate, for the first time, the reconnaissance, surveillance and targeting information needed to provide close-in, continuous, always-on support for military operations in urban terrain.
You can read DARPA's contractor FAQ as a PDF.
DARPA says this system is being developed for use in foreign urban battlefields and is not meant for domestic use. This certainly seems like an honest statement of their motivations. However, such a disclaimer tells us little about how the system will eventually be used (though it certainly will be used for military purposes). First of all, once it is working are they going to turn down requests from, say, the City of New York, to install some cameras to watch for known terrorists? Seems unlikely. Secondly, once DARPA demonstrates some capability companies not involved in the development will rush to produce equivalent systems if the demand exists among law enforcement agencies. There are lots of engineers and scientists who could assist in the development of such a system.
However, just because DARPA's project will eventually enable large scale surveillance of cities which are not war zones (okay, at least not military war zones) does not mean that the project should be opposed by those who are opposed to increased domestic surveillance by governments. Civil libertarians who may wish to try to stop the growth of the surveillance society by lobbying against government funding of the development of the enabling technologies in projects such as the DARPA CTS are at best fighting a delaying action. The ability to automatically recognize specific faces or cars or to read license plates is coming sooner or later as computers become faster, sensor quality improves, and visual pattern matching algorithms improve. DARPA's efforts might speed up the development of the needed technologies but their development is inevitable.
Update: The London Underground is about to test a software system called Intelligent Pedestrian Surveillance System that does automated computer monitoring of digital cameras at tube subway stops.
If the trial due to go live in two London Underground stations this week is a success, it could accelerate the adoption of the technology around the world. The software, which analyses CCTV footage, could help spot suicide attempts, overcrowding, suspect packages and trespassers. The hope is that by automating the prediction or detection of such events security staff, who often have as many as 60 cameras to monitor simultaneously, can reach the scene in time to prevent a potential tragedy.
The software is marketed by Ipsotek (Intelligent Pedestrian Surveillance and Observation Technologies), a firm spun off from the research done by its managing director, Dr. Sergio Velastin, at Kingston University.
Dr. Sergio A Velastin obtained his doctoral degree from the University of Manchester (UK) for research on vision systems for pedestrian and road-traffic analysis. Joining the Department of Electronic Engineering in Kings College London (University of London) in October 1990, he became a Senior Lecturer and founded and led the Vision and Robotics Laboratory (VRL). In October 2001, Dr. Velastin and his VRL team joined the Digital Imaging Research Centre in Kingston University, with which he is still associated, attracted by its size and growing reputation in the field.
Note how the basic research being funded by a variety of governments in vision processing leads inevitably to automated systems that can monitor and detect patterns in human behavior.
The Scientist has a good review of all the approaches being pursued to dramatically lower the cost of complete genome sequencing. (free registration required)
Some companies estimate that within the next five years, technical advances could drop the cost of sequencing the human genome low enough to make the "thousand-dollar genome" a reality. Whether or not that happens, new sequencing approaches could in the short term facilitate large-scale decoding of smaller genomes. In the long term, low-cost, rapid human genome sequencing could become a routine, in-office diagnostic test--the first step on the road to truly personalized medicine.
Companies discussed in the article which are pursuing approaches to radically lower the cost of DNA sequencing include VisiGen Biotechnologies, 454 Life Sciences, Solexa, and US Genomics. A number of university research labs are also pursuing approaches that may radically lower the cost of DNA sequencing including that of Daniel Branton at Harvard (using nanopores), George Church at Harvard (particularly his work on polymerase colony or polony technology) and Watt W. Webb of Cornell.
The approach being pursued by the group of Watt Webb, Cornell professor of applied and engineering physics, is particularly interesting because the ability to optically watch the behavior of a single biomolecule at a time could be used for many research purposes.
His report on watching individual molecules at work, "Zero-Mode Waveguides for Single-Molecule Analysis at High Concentrations," appears in the Jan. 31 issue of the journal Science. The article, which is illustrated on the cover of Science, also is authored by a multidisciplinary group of Cornell researchers: Michael Levene, an optics specialist and postdoctoral associate in applied and engineering physics; Jonas Korlach, a biologist who is a graduate student in biochemistry, molecular and cell biology; former postdoctoral associate Stephen Turner, now president and chief scientific officer of Nanofluidics, a Cornell spin-off (read the story); Mathieu Foquet, a graduate student in applied and engineering physics; and Harold Craighead, professor of applied and engineering physics.
"This is an example of the possibilities provided by integrating nanostructures with biomolecules," said Craighead, the C.W. Lake Jr. Professor of Productivity, who also is co-director of the National Science Foundation (NSF)-funded Nanobiotechnology Center at Cornell. "It represents a major step in the ability to isolate a single active biomolecule for study. This can be extended to other biological systems."
A new technique for the determination of the sequence of a single nucleic acid molecule is being developed in our laboratory. In its principle, the activity of a nucleic acid polymerizing enzyme on the template nucleic acid molecule to be sequenced is followed in real time. The sequence is deduced by identifying which base is being incorporated into the growing complementary strand of the target nucleic acid by the catalytic activity of the polymerase at each step in the sequence of base additions. Recognition of the time sequence of base additions is achieved by detecting fluorescence from appropriately labeled nucleotide analogs as they are incorporated into the growing nucleic acid strand.
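The deduction step described above is, at bottom, complementation of the time-ordered incorporation events. A toy version (a real instrument also has to reject spurious and missed events, which this ignores):

```python
# Each detected fluorescence event identifies the base just added to the
# growing complementary strand; the template sequence is deduced as the
# complement of that event stream, in the order the bases were read.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def deduce_template(incorporation_events):
    """incorporation_events: time-ordered string of bases incorporated
    into the new strand. Returns the template bases in the order read."""
    return "".join(COMPLEMENT[base] for base in incorporation_events)

print(deduce_template("TAGC"))  # ATCG
```

The hard part is not this bookkeeping but reliably detecting one fluorophore at a time, which is where the waveguides come in.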
Because efficient DNA synthesis occurs only at substrate concentrations much higher than the pico- or nanomolar regime typically required for single molecule analysis, zero-mode waveguide nanostructures have been developed as a way to overcome this limitation. They effectively reduce the observation volume to tens of zeptoliters, thereby enabling an inversely proportional increase in the upper limit of fluorophore concentration amenable to single molecule detection. Zero-mode waveguides thus extend the range of biochemical reactions that can be studied on a single molecule level into the micromolar range.
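The concentration claim is easy to check: the mean number of freely diffusing fluorophores in the observation volume is concentration times Avogadro's number times volume. The specific volumes below are rough illustrative figures, not the paper's exact numbers:

```python
AVOGADRO = 6.022e23  # molecules per mole

def mean_occupancy(conc_molar, volume_liters):
    """Average number of molecules in the observation volume."""
    return conc_molar * AVOGADRO * volume_liters

# A conventional diffraction-limited volume (~1 femtoliter) at 1 micromolar:
print(mean_occupancy(1e-6, 1e-15))   # hundreds of molecules at once
# A zero-mode waveguide volume (~20 zeptoliters) at the same concentration:
print(mean_occupancy(1e-6, 20e-21))  # ~0.01 -- usually zero or one molecule
```

Shrinking the volume by five orders of magnitude is what makes micromolar substrate concentrations compatible with watching one molecule at a time.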
The cost of DNA sequencing looks set to drop dramatically. The big question is just how fast the costs will drop. Sequencing a complete human genome still costs several million dollars per person. But that is orders of magnitude cheaper than it was 10 years ago. The newer approaches that attempt to read individual DNA molecules could be made very cheap, if only they can be made to work in the first place.
See also my previous posts on approaches to lower DNA sequence costs in the Biotech Advance Rates archive.
The SARS outbreak appears to have ended. But historically influenza epidemics have always waned in the summer, and for similar reasons higher temperatures may be blocking the spread of SARS.
There are also suspicions that the first outbreak in the southern Chinese province of Guangdong stopped so abruptly because of the onset of summer. The SARS virus does not survive well in a hot environment, and if most transmission is due to people touching contaminated surfaces, higher temperatures would have reduced transmission.
We aren't really going to know whether SARS has been stopped until the fall season comes to China, temperatures drop, and people spend more time indoors. Whether we see it again also depends in part on whether there is a significant animal reservoir for it. Did it mutate in a single animal for sale in a marketplace to be able to jump into humans? Did that animal get killed, thereby ending its presence in animals? Or are there many animals walking around that carry the coronavirus in a form that is infectious in humans? The answers to these questions are unknown.
Takeshi Satow, of the Kyoto University Graduate School of Medicine in Japan, has discovered that applying an electrical stimulation to an area on the surface of the inferior temporal gyrus region of the brain produces happiness and laughter.
Researchers found the tickle spot on one epileptic woman's brain when they realized that stimulating a specific brain region caused her to feel happy and laugh.
This brings to mind an excellent story written by Spider Robinson entitled God Is An Iron about a woman who was trying to commit suicide by starving to death while hooked up to a device that gave her intense continuous pleasure. That story became the second chapter of his novel Mindkiller.
While current drugs for making people happier are fairly crude, it seems inevitable that far more narrowly focused techniques for inducing happiness with fewer side effects will be developed. Whether, as in Robinson's story, someone will want to be happy and still want to kill themselves remains to be seen.
Scientists used a stimulation technique to improve the sensitivity of people's fingertips, and then gave them drugs that either doubled or deleted this effect. Similar skin stimulation/drug treatment combinations may eventually help the elderly or stroke victims button shirts and aid professional pianists according to the authors of a paper appearing in the 04 July issue of the journal Science, published by AAAS, the science society.
Finger stimulations and drugs can temporarily reorganize parts of the human brain. This stimulation, called co-activation, shuffles the synapses that link neurons. The stimulated area becomes more sensitive as more neurons are recruited to process encountered tactile information. The scientists showed that amphetamine doubled stimulation-induced gains in tactile acuity. In the presence of an alternate drug, an NMDA blocker, the improvements in tactile acuity, or perceptual learning, gained via finger stimulations were lost.
Dinse said that related treatments could improve a person's ability to read Braille and that drug-mediated muscle stimulation could help the elderly and chronic pain patients perform everyday tasks.
"We are at the beginning of an era where we can interact with the brain. We can apply what we know about brain plasticity to train it to alter behavior. People are always trying to find ways to improve learning. What we tested is unconscious skill learning. How far could this carry to cognitive learning?…that remains to be seen," said Dinse.
"My personal opinion," Dinse maintained, "is that progress in brain pharmacology will sooner or later result in implications that are equally or possibly more dramatic than the implications tied to discussions about genes and cloning."
Drugs will almost certainly be developed that will enhance the training of the mind to increase specific types of sensitivity and discernment of sensory signals. Musicians who learn to discern finer-grained differences between musical notes are another example. Also, drugs will be found - perhaps the same drugs - that will enhance the ability to learn new forms of coordination, such as when learning a musical instrument. Also, drugs will be found that will enhance general learning without causing harmful side effects (unless, of course, you use the drugs to learn truly harmful ideas such as a vile and dangerous religious belief).
"We were able to change the tactile acuity of 80-year-old subjects to a performance of a 50-year-old," Dinse said -- a 50 percent to 100 percent improvement.
Coactivation causes a remapping of somatosensory cortex, in which the area used to represent the index finger becomes larger and produces a stronger EEG signal. In the new study, this cortical reorganisation was considerably more dramatic in the group that had received amphetamine.
Imagine the possibilities once scientists manage to find ways to increase the sensitivity of sex organs. That's one kind of enhancement that will overcome public opposition to human brain enhancement.
While Dinse is using amphetamines for his studies, he recognises that they have too many effects, including some that are harmful. Another recent study reports that ex-users of methamphetamine show signs of neuronal damage.
But there is significant evidence that the drug can cause damage to the brain's neurons - the cells which are used for thinking.
Methamphetamine users have reduced concentrations of a chemical called N-acetyl-aspartate, which is a byproduct of the way neurons work.
A decrease in brain levels of N-acetyl-aspartate occurs in a number of neurological disorders such as multiple sclerosis and Alzheimer's disease. It is found at lower levels even in ex-methamphetamine users. Hence methamphetamine appears to cause prolonged neuronal changes that are likely signs of neurological damage.
In a review of selenium metabolism (which is worth a read if you have any interest in selenium metabolism) the authors mention in passing that methamphetamine use causes free radical generation and damage to dopaminergic neurons.
Methamphetamine (MA) exposure of animals results in enhanced formation of superoxide radical (O2–) and nitric oxide (NO), which interact to produce peroxynitrite (OONO–). Peroxynitrite is a potent oxidant, leading to dopaminergic damage (Imam and Ali 2000). Thus, multiple dose administration of MA to mice results in long-lasting toxic effects in the nigrostriatal dopaminergic system, which is a relevant model of PD.
So kids, the moral of this story is don't try this at home. It is bad for you.
A 13 year old girl in Wimbledon, England, named Kat Reid has become the first person to receive an artificial bone replacement that can be extended in length without surgery as she grows.
In November, orthopaedic surgeons Steve Cannon and Tim Briggs fitted the world's first bionic bone into the thigh of 13-year-old Kat Reid.
The extendable prosthesis, to give it its proper name, is a major breakthrough because it allows Kat's left leg to grow in pace with her right, without her having to undergo further surgery.
Application of an external magnetic field causes the implanted prosthetic part to grow by 1 millimeter at a time.
While the article does not say why Kat Reid needed a prosthetic replacement for her thigh bone, many children who must lose a piece of bone as part of a cancer treatment, or who were born with a bone defect, or who suffered severe damage to a bone in an accident, would benefit from an artificial replacement that can be extended without repeated surgeries.
Edward A. Lee of UC Berkeley's Department of Electrical Engineering and Computer Science has proposed a technological solution to the threat of aircraft being flown by hijackers into buildings: Program the avionics of aircraft to automatically engage auto-pilot to take over control and steer an aircraft away from any airspace which it is programmed not to enter.
Lee and his colleagues have an alternative. They propose modifying the avionics in aircraft so that the plane would fight any efforts by the pilot to fly into restricted airspace. So if a plane was flying with a no-fly-zone to the left, and the pilot started banking left to enter the zone, the avionics would counter by banking right. Lee's system, called "soft walls", would first gently resist the pilot, and then become increasingly forceful until it prevailed.
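The gradual blending described above - gently resisting at first, then prevailing as the plane nears the zone - can be sketched as a simple weighting function. This is only an illustration of the idea, not Lee's actual algorithm; the function name, the soft-radius distance, and the 30-degree avoidance bank are all made-up assumptions.

```python
def soft_wall_bank_command(pilot_bank_deg, distance_to_zone_km, soft_radius_km=10.0):
    """Hypothetical 'soft walls' blending of pilot input with an avoidance command.

    Outside the soft radius the pilot's commanded bank angle passes through
    unchanged. Inside it, an opposing bank is blended in with a weight that
    grows from 0 at the soft radius to 1 at the zone boundary, so the
    avoidance command eventually prevails over the pilot.
    """
    if distance_to_zone_km >= soft_radius_km:
        return pilot_bank_deg
    # Weight rises linearly as the aircraft closes on the restricted zone.
    w = 1.0 - max(distance_to_zone_km, 0.0) / soft_radius_km
    avoidance_bank_deg = 30.0  # bank away from the zone (positive = bank right)
    return (1.0 - w) * pilot_bank_deg + w * avoidance_bank_deg
```

So a pilot banking left toward the zone gets his command progressively diluted and then overridden, which matches the "first gently resist, then become increasingly forceful" description.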
Of course a bug in the software could be a cause for trouble: "Auto-pilot, I need to land the plane". "Sorry Dave, I can't let you do that".
But seriously, with GPS signals feeding into the aircraft avionics a hijacker would be hard pressed to thwart the decisions of an auto-pilot system. If the flight computer which passed along the fly-by-wire signals to the control surfaces was also the same computer that ran the auto-pilot algorithms, an effort to thwart it would not be trivial. Even if the computer or its outputs were physically reachable while in flight, one would need to bring along a computer of one's own running some sophisticated software in order to take over driving the outputs. Another alternative would be to get between the GPS receiver and the flight computer and replace the real GPS signals with fake ones.
A high tech attack on the flight controls could be defended against by using encryption. The use of encrypted communications within the network of computers in an aircraft could defeat most attempts to install a replacement computer. Unless the replacement computer knew the encryption keys and algorithms used by the various embedded computers the other computers would know to ignore it.
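The "ignore computers that don't know the keys" idea amounts to message authentication between the embedded computers. Here is a minimal sketch using a shared-key MAC; the command format and function names are illustrative assumptions, not anything from actual avionics systems.

```python
import hashlib
import hmac
import os

# A shared secret that would be provisioned into each legitimate avionics unit.
SHARED_KEY = os.urandom(32)

def sign(command: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so receivers can verify the sender holds the key.
    return command + hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify(message: bytes) -> bytes:
    # Split off the 32-byte tag and recompute it; a replacement computer
    # that lacks the key cannot produce a tag that passes this check.
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("unauthenticated command ignored")
    return command
```

A real system would also need to defend against replaying old authenticated messages (e.g. with sequence numbers), but the basic point stands: without the keys, an attacker's substitute computer would simply be ignored.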
Sperm turn out to do such a great job of packing in and coating their DNA that they can be dried, sealed, and stored at room temperature for use in In Vitro Fertilization (IVF).
Madrid, Spain: A novel method of preserving sperm through air drying is showing initial promise and has the potential to revolutionize sperm storage, allowing men awaiting in vitro fertilization (IVF) to take care of their sperm at home.
Dr Daniel Imoedemhe, a consultant in reproductive medicine and endocrinology, working in Saudi Arabia, told the annual meeting of the European Society of Human Reproduction and Embryology, that for the first time studies on human embryos fertilized with air-dried sperm have shown that the new technique does not impair the early stages of embryo cell division.
Dr Imoedemhe, from Erfan and Bagedo Hospitals, Centre for Assisted Reproduction, Jeddah, said that in the past it was believed that sperm "died" when allowed to dry in air because they were no longer motile and therefore unable to penetrate an egg. "But with the technique intracytoplasmic sperm injection (ICSI), the loss of motility doesn't necessarily mean the loss of ability to fertilize an egg, since this is largely dependent on the DNA (genetic material) that is tightly packed into the sperm head. We believe our study confirms that sperm DNA is resistant to damage by air drying."
Current techniques for freezing sperm are expensive.
Sperm are stored in large liquid nitrogen tanks that require regular top-ups to ensure that they remain in the desired condition. The tanks are expensive, large and occupy a great deal of laboratory space. In the current system, in order to prevent mis-identification during recovery from storage tanks, a rigorous labelling and coding system is required.
"These methods are time-consuming and cumbersome compared to our simple technique of air-drying that just requires re-suspension before use," said Dr Imoedemhe. "The process can be further simplified by allowing patients to take responsibility for storing their air-dried sperm at home."
The new air-drying technique involves smearing a sample pellet of washed sperm on to a glass slide and then leaving it to dry for two to three hours in a laminar flow cabinet that allows a directional flow of filtered air to ensure that the sample remained uncontaminated by airborne dust or micro-organisms. The dried sperm can then be stored at normal room temperature or in a normal refrigerator and do not seem to require any other special storage conditions. Just prior to injection by ICSI into an egg, the sperm film can be re-suspended with a large drop of special biological medium (similar to that in which the eggs are held in order to avoid osmotic changes).
This technique has not yet been tested using eggs of the same quality as those typically used for IVF. Therefore the poorer results with the air-dried sperm are at least partially attributable to the poorer quality eggs used for those experiments.
But although drying did not seem to interfere with fertilization, it was found that 72 hours after sperm injection, the therapeutic group (eggs fertilized with fresh un-dried sperms) had significantly more embryos advancing to the eight or more cell stage than the experimental group (air dried sperm) – 50.5% versus 18.2%.
However Dr Imoedemhe believes this may have more to do with the differences in experimental procedure necessitated by the difficulty of acquiring fresh mature eggs for the experimental group, than the effects of air-drying. In the treatment group all the eggs were mature (at metaphase II), whereas in the air-dried group the eggs were immature (at metaphase I) and had to be matured outside before ICSI. It is thought that such immature eggs may have less potential for development after fertilization compared to normally matured eggs.
If this becomes cheap and easy to do one can easily imagine why some men will arrange to have it done just as an insurance policy against the possibility of disease or injury.
This also opens up greater possibilities for sperm theft. If sperm can be stored more easily, then there will be greater incentive for a woman to get sperm from a man she has a brief affair with, arrange to have it air-dried, and then keep it for later fertilization. A woman could easily build up a large collection from all the men she ever slept with and then, once DNA sequencing is cheap, sequence a part of each sample and choose which man's sperm she wants to use for making a baby on her own.
Update: Dr. Nikolaos Sofikitis, professor of urology at Ioannina University in Greece, has taken germ cells from healthy testicles of men who had testicular cancer in the other testicle and transplanted the germ cells back into the testicular area after cancer treatment was completed. This restored sperm production in the cancer patients.
The new technique preserves the "germ cells" which make sperm, which are frozen and then transplanted back into the man when he is given the all-clear from the disease.
Remarkably, the frozen cells then "re-colonise" the testicle, and start producing enough sperm to allow fertility doctors to extract it from semen.
The germ cells included stem cells and those cells were able to recolonize the healthy testicle.
The germ cells were then thawed and injected back into the healthy testicles of three of the men.
Thirteen months later, the scientists found the testicles had been successfully recolonized with germ cells that produce sperm.
Technology Review has a good review article on various efforts to use cells grown in mechanical devices to create temporary organ replacements for kidneys and livers.
“Patients who are undergoing chronic dialysis become malnourished, and they sort of wither,” says Harmon. The solution, believes Humes, lies in harnessing kidney cells themselves—cells that can rapidly react to changes in the body’s environment in a way that machines simply can’t.
The kidney-in-a-cartridge, which is being developed by Lincoln, RI-based University of Michigan spinoff Nephros Therapeutics, could be ready for widespread use in as little as three years. And it’s only one example of the increasingly popular strategy of using living cells to do the heavy lifting in artificial organs. Several academic labs are developing similar devices packed with liver cells to chew up the toxins that accumulate in the blood when the liver suddenly fails. Already in human trials, these bioartificial livers could help patients in acute liver failure, whose only chance today is a rare organ transplant.
Nephros Therapeutics, Inc. announced today the successful completion of a Series C financing totaling $17 million to accelerate the clinical development of its lead product, the Renal Assist Device (RAD). Based on patented renal stem cell technology licensed from the University of Michigan and invented by Dr. H. David Humes, Nephros is developing the RAD for the potential treatment of Acute Renal Failure (ARF). Lurie Investments of Chicago, IL led the financing round. New investors participating in this round include CDP Capital of Montreal, QC, as well as Foster & Foster of Greenwich, CT. All of Nephros’ existing investors, including BD Ventures, Portage Venture Partners, North Coast Technology Investors, Palermo Group (an affiliate of the Apjohn Group), and the founding investor, Seaflower Ventures, also participated in this round.
Nephros’ Renal Assist Device (RAD) is a cellular replacement therapy system that leverages the Company’s proprietary Renal Proximal Tubule (RPT) cell technology for the potential treatment of ARF patients. RPT cells play a key role in the regulation of response to inflammation and stress and are critical to normal kidney function and the patient’s ability to fight infection. Nephros has established pioneering technologies to isolate and expand (ex vivo) kidney-derived stem cells and to then create delivery systems. In contrast to the limitations associated with current replacement therapy (e.g. hemodialysis), the RAD’s RPT therapy is being investigated for the potential to replace and maintain a full range of key functions of the kidney, including endocrine equilibrium, metabolic activity and immune surveillance. Nephros is currently supporting two Phase I/II physician-sponsored clinical trials for RAD, at the University of Michigan and the Cleveland Clinic, for the potential treatment of ARF.
David Humes, M.D., Professor of Internal Medicine at the University of Michigan, has worked for a decade to develop the bioartificial kidney that is going through clinical trials.
A bioartificial kidney device invented by Dr. Humes is being clinically evaluated for treatment of patients in acute renal failure. Phase II human trial of the device has been approved by FDA and is expected to commence in the Fall of 2003.
The same core technology using adult kidney stem cells will soon be tested in a device designed to ameliorate the hyperinflammation associated with End Stage Renal Disease. Hyperinflammation may lead to infections and cardiovascular problems, leading causes of early death in chronic renal failure patients.
Phase I trial recently concluded on the device for treating acute renal failure was led by principal investigator Robert Bartlett and co-investigators William (Rick) Weitzel and Fresca Swaniker in Ann Arbor and by Emil Paganini in Cleveland. The initial study demonstrated that the device is safe for further testing. The next investigations will measure the treatment's effectiveness.
Acute renal failure is a sudden onset of kidney failure brought on by accident or poisoning. Unlike chronic renal failure, acute renal failure is potentially reversible, if the patient can be sustained through the episode. Most cannot. The mortality rate of ARF is greater than 50%.
The poor survivability of ARF appears to be linked to the loss of certain functions of the kidney that reside in cells called renal proximal tubule (RPT) cells. The RAD conceived by Dr. Humes contains living human renal proximal tubule cells. In large animal studies [reported in the journal Nature Biotechnology (April 30 1999)], the Humes lab demonstrated that the cells in the RAD perform the metabolic and hormonal functions lost in ARF. Restoring these critical functions by use of the RAD may be key to helping patients survive acute renal failure.
Because the RAD contains living human tissue it is termed a bioartificial kidney.
Initial trials with patients suffering from acute kidney failure were more successful than expected, probably because the bioartificial kidney cells released chemical messengers that suppressed an immune response that was damaging the kidneys of the patients experiencing kidney failure.
In early clinical trials at the University of Michigan, a bioartificial kidney has been used in a handful of intensive-care patients who were deemed very likely to die because kidneys and other organs were failing. All but one recovered with normal kidney function.
Also see this previous post entitled Device Maintains External Liver Cells For Blood Filtration.
Current purely artificial kidney dialysis machines do only a subset of what a real kidney does and there is a clear need for functionally richer replacement devices. Bioartificial livers and kidneys which use living cells are going to reach the market well before fully functional purely artificial versions of those same organs are ready. In large part this is because we do not know all the functions that livers and kidneys carry out. Best to use cells that know how to do all the functions while scientists try to figure out how those organs work in greater detail.
Complete organs grown to replace diseased organs are also further into the future than bioartificial devices. The tissue engineering problems involved in growing complete organs are a lot tougher to solve than the problems involved in growing cells in artificial apparatuses. My guess is that for more complex organs such as the liver and the kidney, the ability to grow replacement organs will be achieved many years before the ability is developed to build a totally artificial organ that carries out all the functions the real organs do.