2004 February 29 Sunday
Ocean Gas Hydrate Estimates Seen As Too High

Previous estimates of natural gas bound up in ocean floor hydrates may be too high.

One widely cited estimate proposes that 10,000 gigatonnes (Gt) of methane carbon is bound up as hydrate on the ocean floor.

But Dr Alexei Milkov of BP America says his research shows reserves are between 500 and 2,500 Gt, a significantly smaller figure than has been previously estimated.

Gas hydrates are still very expensive to extract from the ocean floor.

"Drilling gas hydrates is estimated to be six times more expensive than exploitation of oil and other gas sources," said Prof Bahman Tohidi, director of the Centre for Gas Hydrate Research in Edinburgh.

Even the lower estimate is still a huge amount of energy. To put it in perspective, from 1850 through 2000 the total amount of natural gas burned in the world was only about 61 gigatonnes measured in oil equivalent weight. Converting that figure into the gigatonnes of methane carbon cited above takes a rough calculation, sketched below.
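Here is a back-of-envelope conversion (my own, not from the article), treating natural gas as essentially pure methane and using standard energy-content figures. On these assumptions the historical burn works out to a few tens of gigatonnes of methane carbon, so even Milkov's low-end figure of 500 Gt is more than ten times everything burned since 1850:

```python
# Back-of-envelope comparison (my own, not from the article): how does the
# natural gas burned worldwide from 1850 to 2000 compare with the hydrate
# estimates above?  Assumptions: natural gas treated as pure methane,
# 1 tonne of oil equivalent = 41.868 GJ, methane heating value ~50 MJ/kg.

TOE_JOULES = 41.868e9              # joules per tonne of oil equivalent
CH4_JOULES_PER_KG = 50.0e6         # approximate heating value of methane
CARBON_FRACTION = 12.011 / 16.043  # carbon share of methane by mass
KG_PER_GT = 1.0e12                 # kilograms per gigatonne

energy_burned = 61.0e9 * TOE_JOULES                        # joules, 1850-2000
methane_gt = energy_burned / CH4_JOULES_PER_KG / KG_PER_GT
carbon_gt = methane_gt * CARBON_FRACTION

print(f"Cumulative 1850-2000 burn: ~{carbon_gt:.0f} Gt of methane carbon")
print(f"Milkov's low estimate (500 Gt) is ~{500 / carbon_gt:.0f} times that")
```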

Gas hydrates researcher Anne Trehu of Oregon State University and her colleagues have found that some widely cited models estimating the total mass of methane trapped in marine sediments are probably too high.

On the other hand, some local concentrated deposits may be larger than previously thought. “There is still a lot of methane out there, even if the models were wrong,” Trehu said.

Arthur H. Johnson, Chairman and Chief Executive Officer of Hydrate Energy International, presented Congressional testimony in June 2003 on the potential of gas hydrates as an energy source.

Gas hydrate is a crystalline substance composed of gas and water. It forms when water and natural gas combine under conditions of moderately high pressure and low temperature. If gas hydrate is either warmed or depressurized it will revert back to water and natural gas, a process termed “dissociation”. Natural gas is concentrated in hydrate so that the dissociation of a cubic foot of hydrate will yield 0.8 cubic feet of water and approximately 160 cubic feet of natural gas. The conditions where hydrates occur are common in sediments off the coasts of the United States in water depths greater than approximately 1600 feet and at shallower depths in sediments associated with deep permafrost in the Arctic. Preliminary investigations indicate that considerable volumes of gas hydrate are present in at least some of these areas.

The total volume of gas hydrate in the United States is not known, although the results of a wide variety of investigations conducted over the past thirty years indicate that the volume is very large, on the order of hundreds of thousands of trillion cubic feet (TCF). More important, however, is the amount of hydrate that can be commercially recovered. Characterization of hydrate resources carried out, for example, in the Mackenzie Delta of Canada, the North Slope of Alaska, offshore Japan, and elsewhere indicates that the total in less explored areas of the U.S. hydrate province is likely in the range of many thousands of TCF.
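To get a feel for those volumes, here is a small sketch (my own illustration; the roughly 22 TCF per year of US gas consumption is my assumption for the approximate early-2000s figure, and the 160:1 gas-to-hydrate ratio comes from the testimony above):

```python
# Rough sense of scale (my own illustration).  Assumptions: US natural gas
# consumption of roughly 22 trillion cubic feet (TCF) per year, and the
# 160:1 gas-to-hydrate volume ratio quoted in the testimony above.

US_ANNUAL_TCF = 22.0
GAS_PER_HYDRATE = 160.0          # cubic feet of gas per cubic foot of hydrate
CUBIC_FEET_PER_CUBIC_MILE = 5280.0 ** 3

for resource_tcf in (1_000, 100_000, 300_000):
    years_of_supply = resource_tcf / US_ANNUAL_TCF
    hydrate_volume = resource_tcf * 1e12 / GAS_PER_HYDRATE / CUBIC_FEET_PER_CUBIC_MILE
    print(f"{resource_tcf:>7,} TCF ~ {years_of_supply:,.0f} years of US use, "
          f"~{hydrate_volume:,.0f} cubic miles of solid hydrate")
```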

Gas hydrate investigations have been undertaken by many Federal agencies during the past 30 years. These include the U.S. Geological Survey, Naval Research Laboratory, National Science Foundation, and Department of Energy. The Methane Hydrate Research and Development Act of 2000 initiated a new program to study several aspects of gas hydrates, including seafloor stability, global climate change, and the potential of gas hydrate as a commercial resource. The program's resource goal is commercial production by the year 2020. Funding for the new program, which is managed by the DOE, has typically been on the order of $10 million per year.

Given the potential of gas hydrates as a huge energy source, the $10 million per year spent on research by the US government strikes me as chump change. The United States spends tens of billions more on the military than it would if it were not dependent on Middle Eastern oil. Therefore basic research into alternative energy sources ought to be funded at a level commensurate with how much Middle Eastern oil costs us in defense spending, in aid money spent in the region to achieve foreign policy goals, and in increased spending on homeland defense against the threat of terrorism.

For national security arguments on why energy research should be accelerated see my previous ParaPundit posts Intervention In Liberia Linked To Oil Dependency and Michael Scott Doran: The Saudi Paradox and China Energy Consumption Growth Complicates Anti-Terrorist Efforts.

Also see my previous post Natural Gas May Be Extractable From Ocean Gas Hydrates.

By Randall Parker 2004 February 29 03:55 PM  Energy Tech
Entry Permalink | Comments(0)
2004 February 27 Friday
Cryo-Electron Microscopy Provides Clearer Picture Of Ribosomes

An advance in cryo-electron microscopy instrumentation will enable the mechanisms of antibiotic resistance due to bacterial ribosome mutations to be understood more rapidly.

By refining a technique known as cryo-electron microscopy, researchers from Imperial College London and CNRS-Inserm-Strasbourg University have determined how the enzyme RF3 helps prepare the protein-making factory for its next task following disconnection of the newly formed protein strand.

The team's success in capturing the protein-making factory, or ribosome, in action using cryo-electron microscopy will help scientists to begin deciphering the molecular detail of how many antibiotics interfere with the final steps of protein synthesis - an area not currently targeted in antibiotics research.

Professor Marin van Heel of Imperial's Department of Biological Sciences and senior author of the study says:

"Many antibiotics kill bacteria by interfering with their protein-making factories, or ribosomes. But bacteria can often become resistant by mutating their ribosome machinery. Observing ribosomes in action helps us understand which areas of the protein complex evolve such resistance quickly. This information could then be used to develop new antibiotics that target the more stable regions.

"We've used cryo-electron microscopy in a similar way to time lapse footage. It has allowed us to visualise how one cog in a cell's protein manufacturing plant operates. By refining the technique even further we hope to be able to visualise the molecular interactions on an atomic scale. This kind of knowledge has applications across the board when you are trying to work out key factors in diagnosis, treatment or cause of many diseases."

Professor van Heel pioneered cryo-electron microscopy 10 years ago. Since then it has become an essential part of many structural biologists' toolkit. It overcomes the problem of weak image contrast in electron microscopy and avoids the difficult and time-consuming process of growing crystals that can be analysed using X-ray diffraction.

As professor van Heel points out, this technique is applicable to the study of the shape and action of many other types of molecules in cells.

Rapid freezing provides a snapshot of what ribosomes were doing at that moment in time. Freezing also reduces the damage the electron beam does to the ribosomes, so a larger electron dose can be used to get a clearer picture. This is analogous to how a bright camera flashbulb provides more light to get a better picture.

Electron microscopy images are created by firing electrons at the sample but this process rapidly damages the biological material. To overcome this degradation problem researchers use a low dose of radiation, which leads to extremely poor image quality. "It's like trying to see in the dark," says Professor van Heel.

"Cryo-electron microscopy uses a sample in solution which is rapidly frozen by plunging it into liquid ethane and maintaining it at the same temperature as liquid nitrogen," he explains.

"This maintains the 3D structure of the molecule and gives you instant access to the cellular process of interest. Also, the effect of freezing the sample before electron microscopy reduces radiation damage. This makes it possible to apply a higher electron dose, which gives a clearer image."

The goal is to create a movie of the molecular level changes that happen to ribosomes as they perform protein synthesis.

"After the X-ray structure of the ribosome became available a few years ago, one might think we already know all there is to know about protein synthesis," explains Professor van Heel.

"But we've still got so much to learn about the precisely synchronised series of steps that occurs. Researchers only became aware of the existence of ribosomes 50 years ago but they've a mystery since the creation of life 3.5 billion years ago. By improving the high resolution images we can create using cryo-electron microscopy our long term goal is to create a movie of protein synthesis on an atomic scale."

Advances in instrumentation speed up the rate of advances in basic biological science, biomedical research, and biotechnology by providing scientists with better tools for watching and manipulating biological systems. As a result each year witnesses a faster rate of discovery than the year before. When people ask when various diseases will be cured, what they ought to ask is when the tools available to biologists will become advanced enough to let them figure out the causes and develop effective treatments. While that is perhaps a more precise question, it is still very difficult to answer.

By Randall Parker 2004 February 27 09:20 AM  Biotech Advance Rates
Entry Permalink | Comments(0)
2004 February 26 Thursday
Voice Stress Lie Detectors Do Not Work

Hand-held lie detectors appear to be useless.

"We tested one of the more popular voice-stress lie detection technologies and got dismal results, both in the system's ability to detect people actually engaged in deception and in its ability to exclude those not attempting to be deceptive," said Mitchell S. Sommers, an associate professor of psychology in Arts & Sciences at Washington University in St. Louis.

"In our evaluation, voice-stress analysis detected some instances of deception, but its ability to do so was consistently less than chance — you could have gotten better results by flipping a coin," Sommers said.

Sommers' research was supported by and conducted in conjunction with the Department of Defense Polygraph Institute (DODPI), located in Fort Jackson, S.C. Findings were presented at the World Congress of International Conference of Psychophysiology in July 2002. An academic paper on the study is under review for journal publication.

Sommers' study assessed the ability of Vericator, a computer-based system that evaluates credibility through slight variations in a person's speech, to detect deception in a number of different scenarios. Participants were questioned using different forms of interrogation and under conditions inducing various levels of stress.

...

"Voice-stress analysis is fairly effective in identifying certain variations in stress levels in human speech, but high levels of stress do not necessarily correlate with deception," Sommers said. "It may someday be possible to refine voice-stress analysis so that it is capable of distinguishing among various sources of stress and accurately identifying those that are directly related to deception. However, all the research that I've seen thus far suggests that it's wishful thinking, at best, to suggest that current voice-stress analysis systems are capable or reliably detecting deception."

My guess is that a high resolution image processing system that analyzed facial muscle changes would have a better chance of working. Take Paul Ekman's research into his Facial Action Coding System (FACS), develop an automated means of applying it, and it might be possible to build a useful lie detector.

By Randall Parker 2004 February 26 11:35 AM  Brain Surveillance
Entry Permalink | Comments(4)
Serotonin Receptor Concentration Correlates With Anxiety

People with fewer serotonin 5-HT1A receptors are more likely to suffer panic attacks and anxiety.

The finding is the first in humans to show that a receptor, which is pivotal to the action of widely prescribed anti-anxiety medications, may be abnormal in the disorder and help to explain how genes might influence vulnerability.

In the study, positron emission tomography (PET) determined that three brain areas of panic disorder patients are lacking in a key component of a chemical messenger system that regulates emotion, says Alexander Neumeister, MD, of the National Institute of Mental Health (NIMH). Brain scans revealed that the component, a type of serotonin receptor, is reduced by nearly a third in three structures straddling the center of the brain, according to the report in the current issue of The Journal of Neuroscience.

“This is the first time anyone has shown, in vivo, a decrease in serotonin binding in panic disorder patients. Eventually, this work could lead to new, more selective pharmacological treatments that would specifically target this receptor,” says Michael Davis, PhD, of Emory University, who studies anxiety disorders. “Clinical studies like this are extremely important for guiding basic research in animals to understand more fully the role of these receptors in anxiety.”

Each year, panic attacks strike about 2.4 million American adults “out of the blue,” with feelings of intense fear and physical symptoms sometimes confused with a heart attack. Unchecked, the disorder often sets in motion a debilitating psychological sequel syndrome of agoraphobia, avoiding public places. Panic disorder runs in families and researchers have long suspected a genetic component.

In the study, Neumeister and his colleagues used PET scans to visualize serotonin 5-HT1A receptors in the brains of 16 panic disorder patients – seven of whom also suffered from major depression – and 15 matched healthy controls. In the panic disorder patients, including those who also had depression, receptors were reduced by an average of nearly a third in the anterior cingulate in the front middle part of the brain, the posterior cingulate, in the rear middle part of the brain, and in the raphe, in the midbrain.

Unfortunately it doesn't sound like these researchers had the genes for 5-HT1A sequenced in this group of patients. Even if they had, it is possible that such a test wouldn't find the genetic difference causing this difference in receptor concentration. The genetic difference may be at a different site in the genome that codes for a regulatory protein or a piece of regulatory RNA (interference RNA) that regulates this gene.

A genetic difference is likely because anxiety runs in families.

Because the disorder can run in families, experts have suspected that certain genetic variations might make people more vulnerable to developing it. The new research gives weight to that idea.

"This is the first study that shows a very clear biological difference in patients and controls," Neumeister said.

Anxiety and related disorders are very widespread problems.

The illness, which most commonly begins between late adolescence and the mid-30's, is just one in a group of anxiety-inducing ailments that are relatively widespread. About 19 million Americans are afflicted by one of the diseases; obsessive-compulsive disorder, post-traumatic stress disorder and specific phobias are among the more well known.

It is interesting to note that a genetic variation of the 5-HT1A receptor gene is correlated with depression. Differences in the same receptor have been found to also correlate with differences in beliefs about spirituality.

By Randall Parker 2004 February 26 09:53 AM  Biological Mind
Entry Permalink | Comments(4)
2004 February 25 Wednesday
Adolescent Brains Are Less Motivated

The proverbial listless and directionless adolescents are wired up to be less motivated than adults.

In the MRI study, James Bjork, Ph.D., and others in the laboratory of Daniel Hommer, M.D., scanned the brains of twelve adolescents aged 12 to 17 years and twelve young adults aged 22 to 28 years. While being scanned, the subjects participated in a game-like scenario risking monetary gain or loss. The participants responded to targets on a screen by pressing a button to win or avoid losing 20 cents, $1, or $5.

For both age groups, the researchers found that the anticipation of potential gain activated portions of the ventral striatum, right insula, dorsal thalamus, and dorsal midbrain, with the magnitude of ventral striatum activation sensitive to gain amount. In adolescents, however, the researchers found lower activation of the right ventral striatum centered in the nucleus accumbens, a region at the base of the brain shown by earlier research (see Alcohol Researchers Localize Brain Region That Anticipates Reward August 3, 2001 at News Releases-http://www.niaaa.nih.gov) to be crucial for motivating behavior toward the prospect of rewards.

"Our observations help to resolve a longstanding debate among researchers about whether adolescents experience enhanced reward from risky behaviors--or seek out alcohol and other stimuli because they require enhanced stimulation. They also may help to explain why so many young people have difficulty achieving long-term goals," according to James Bjork, Ph.D., first author on the study.

When the researchers examined brain activity following gain outcomes, they saw that in both adolescents and young adults monetary gain similarly activated a region of the mesial frontal cortex. "These results suggest that adolescents selectively show reduced recruitment of motivational but not consummatory components of reward-directed behavior," state the authors.

In a nutshell: adolescents want stuff but they are too lazy to work to get as much as they want. Worse yet, they have few skills with which to work to get what they want. No wonder they are frustrated, depressed, and angry.

The earlier research mentioned above is here: Alcohol Researchers Localize Brain Region That Anticipates Reward

Researchers in the laboratory of Daniel Hommer, M.D., measured changes in blood oxygen level dependent contrast in a functional magnetic resonance (FMRI) scanner in order to track changes in brain activity that occurred while eight volunteers participated in a videogame task involving real money. In this monetary incentive delay (MID) task, participants saw cues that indicated that they might win or lose money, waited for a variable anticipatory delay period, then tried to either win or avoid losing money by pressing a button in response to a rapidly presented target. The researchers examined the response of the nucleus accumbens during anticipation of different amounts of potential rewards (i.e., gains of $0.20, $1.00, and $5.00) or punishments (i.e., losses of $0.20, $1.00, and $5.00). They found that nucleus accumbens activity increased as volunteers anticipated increasing monetary rewards but not punishments. Another nearby brain region, the medial caudate, showed increased activity not only during anticipation of increasing rewards but also during anticipation of increasing punishments.
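For readers unfamiliar with the paradigm, here is a schematic of a single MID trial as I read the description above; the timings, target window, and keyboard input are illustrative placeholders of my own, not the study's actual parameters:

```python
# Schematic of one monetary incentive delay (MID) trial as described above.
# Timings, the target window, and the keyboard input are illustrative
# placeholders, not the study's actual parameters.
import random
import time

AMOUNTS = (0.20, 1.00, 5.00)

def run_trial():
    amount = random.choice(AMOUNTS)
    valence = random.choice(("win", "avoid losing"))
    print(f"CUE: {valence} ${amount:.2f}")

    time.sleep(random.uniform(2.0, 4.0))     # variable anticipatory delay

    target_window = 0.3                      # target is presented only briefly
    print("TARGET - press Enter now!")
    start = time.time()
    input()                                  # stand-in for the button press
    hit = (time.time() - start) <= target_window

    if valence == "win":
        return amount if hit else 0.0        # gain only on a fast response
    return 0.0 if hit else -amount           # lose the amount on a slow response

if __name__ == "__main__":
    print(f"Trial outcome: ${run_trial():+.2f}")
```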

Imagine a drug or gene therapy that stimulates the growth or activity of the nucleus accumbens. It might make adolescents and even adults more motivated. The educational and economic effects of such therapies could be enormous.

By contrast, stimulation of growth of the appropriate portion of the medial caudate might be more useful for treating criminals. If criminals could be made to have a greater fear of punishment they might become less likely to violate the law. I'm betting that most criminals will eventually be found by brain scan studies to have a lower fear of punishment than the population as a whole.

For more on young brains see my previous posts Adolescence Is Tough On The Brain and Adolescent Mice More Sensitive To Addictive Drugs and Early Nicotine Exposure Increases Nicotine Craving.

By Randall Parker 2004 February 25 11:06 AM  Brain Development
Entry Permalink | Comments(4)
2004 February 24 Tuesday
Web Site Trades Human Eggs

John Gonzalez, creator of the existing website www.ManNotIncluded.com for trading human sperm, has now expanded into human donor egg trading with his new website www.WomanNotIncluded.com. His organization is London-based, and in Britain donors cannot be paid directly for their eggs.

The donor receives expenses, but in the UK they are not allowed to be paid directly for their eggs under the Human Fertilisation and Embryology Authority (HFEA) regulations.

The database is global, meaning a couple wishing to use a donor from another country could buy the eggs without the same limits on expense costs.

One consequence of this silly British government rule: Brits will buy more of their eggs abroad.

The fees for finding an egg donor using this service are quite a bit less than the in vitro fertilization costs for getting a pregnancy started using donor eggs.

Joining fee (access to database search): £145.00

Criteria-based search of database by staff, producing results: £537.00

Selected donor introduction: £620.00

Donor selection payments must be made before details are passed on to the clinic of your choice.

At the time of this writing the exchange rate is about $1.90 per pound. So US dollar pricing is not quite double the price in British pounds.
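A quick conversion of the fee schedule at that rate (a trivial sketch, using only the figures quoted above):

```python
# Converting the quoted fees at the $1.90-per-pound rate mentioned above.
USD_PER_GBP = 1.90

fees_gbp = {
    "Joining fee (database access)": 145.00,
    "Criteria-based search by staff": 537.00,
    "Donor introduction": 620.00,
}

for item, gbp in fees_gbp.items():
    print(f"{item}: £{gbp:,.2f} ~ ${gbp * USD_PER_GBP:,.2f}")
```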

So far, 40 donors have signed up to the site from the UK and France. Donors must supply details about their health history, ethnic origin, hair and eye colour - and can also include information about their academic achievements.

In the United States, where paying women to be egg donors is legal, the price for what are judged to be higher quality eggs (e.g. from Stanford and Ivy League undergraduates) can range as high as $50,000. Check out the comments section of that post and you will see many women posting comments offering to sell their eggs. Less highly sought donors may expect to be paid around $5000. Though there is not enough transparency in the market to be able to predict what any one woman might hope to get for selling her eggs.

My prediction for the future is that the fraction of pregnancies started with donor eggs and donor sperm will rise quite dramatically. The big incentive for using donors will come once personal DNA sequencing makes it possible to know exactly what genetic advantages one can gain by using particular donors. I predict that the use of donor sperm will become especially desirable for women because it is far easier to use donor sperm than donor eggs.

Though another option that may become popular (provided it is not outlawed by an international treaty) is what I call Cloning Plus. With Cloning Plus people will be able to reproduce themselves with clones which have many of their own genetic flaws removed. Imagine, for instance, the appeal to a woman of having a daughter who looks like her except for being a little bit prettier (straighten those teeth and make them perfectly white and well-shaped), resistant to allergies, resistant to acne, with slightly more blond hair, smarter, less prone to depression, and generally better in every way the woman wishes she were herself.

By Randall Parker 2004 February 24 01:14 PM  Biotech Reproduction
Entry Permalink | Comments(256)
2004 February 23 Monday
Compound Converts Stem Cells To Heart Muscle Cells

Scripps Research Institute researchers have discovered a molecule called cardiogenol C that will turn mouse embryonic stem cells into cardiomyocytes, the cells of heart muscle. (same article here)

A group of researchers from The Skaggs Institute for Chemical Biology at The Scripps Research Institute and from the Genomics Institute of the Novartis Research Foundation (GNF) has identified a small synthetic molecule that can control the fate of embryonic stem cells.

This compound, called cardiogenol C, causes mouse embryonic stem cells to selectively differentiate into "cardiomyocytes," or heart muscle cells, an important step on the road to developing new therapies for repairing damaged heart tissue.

Normally, cells develop along a pathway of increasing specialization. In humans and other mammals, these developmental events are controlled by mechanisms and signaling pathways we are only beginning to understand. One of scientists' great challenges is to find ways to selectively differentiate stem cells into specific cell types.

"It's hard to control which specific lineage the stem cells differentiate into," says Xu Wu, who is a doctoral candidate in the Kellogg School of Science and Technology at Scripps Research. "We have discovered small molecules that can [turn] embryonic stem cells into heart muscle cells."

Wu is the first author of the study to be published in an upcoming issue of the Journal of the American Chemical Society and which was conducted under the direction of Peter G. Schultz, Ph.D., who is a professor of chemistry and Scripps Family Chair of the Skaggs Institute for Chemical Biology at The Scripps Research Institute, and Sheng Ding, Ph.D, who is an assistant professor in the Department of Chemistry at Scripps Research.

The researchers developed a means to test 100,000 molecules in a fairly automated fashion to find a few compounds that appeared to have the ability to cause stem cells to convert into heart muscle cells.
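As an aside on what the automated part of such a screen looks like computationally, here is a generic hit-calling sketch (my own illustration, not the Scripps pipeline): wells whose reporter signal sits far above the plate-wide background get flagged for follow-up.

```python
# Generic hit-calling sketch (my illustration, not the Scripps pipeline):
# flag wells whose reporter signal sits far above the plate-wide background.
import numpy as np

rng = np.random.default_rng(0)

# Simulated luminescence for 100,000 wells: mostly background plus a few hits.
readings = rng.normal(loc=100.0, scale=15.0, size=100_000)
readings[rng.choice(readings.size, size=40, replace=False)] += 400.0

median = np.median(readings)
mad = np.median(np.abs(readings - median))        # robust estimate of spread
robust_z = (readings - median) / (1.4826 * mad)   # MAD rescaled to ~1 std dev

hits = np.flatnonzero(robust_z > 6.0)
print(f"{hits.size} candidate wells flagged for follow-up assays")
```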

Scripps Research scientists reasoned that if stem cells were exposed to certain synthetic chemicals, they might selectively differentiate into particular types of cells. In order to test this hypothesis, the scientists screened some 100,000 small molecules from a combinatorial small molecule library that they synthesized. Just as a common library is filled with different books, this combinatorial library is filled with different small organic compounds.

From this assortment, Wu, Ding, and Schultz designed a method to identify molecules able to differentiate the mouse embryonic stem cells into heart muscle cells. They engineered embryonal carcinoma (EC) cells with a reporter gene encoding a protein called luciferase, and they inserted this luciferase gene downstream of the promoter sequence of a gene that is only expressed in cardiomyocytes. Then they placed these EC cells into separate wells and added different chemicals from the library to each. Any engineered EC cells induced to become heart muscle cells expressed luciferase. This made the well glow, distinguishing it from tens of thousands of other wells when examined with state-of-the-art high-throughput screening equipment. These candidates were confirmed using more rigorous assays.

In the end, Wu, Ding, Schultz, and their colleagues found a number of molecules that were able to induce the differentiation of EC cells into cardiomyocytes, and they chose one, called Cardiogenol C, for further studies. Cardiogenol C proved to be effective at directing embryonic stem cells into cardiomyocytes. Using Cardiogenol C, the scientists report that they could selectively induce more than half of the stem cells in their tests to differentiate into cardiac muscle cells. Existing methods for making heart muscle cells from embryonic stem cells are reported to result in merely five percent of the stem cells becoming the desired cell type.

Now Wu, Ding, Schultz, and their colleagues are working on understanding the exact biochemical mechanism whereby Cardiogenol C causes the stem cells to differentiate into cardiomyocytes, as well as attempting to improve the efficiency of the process.

The article, "Small Molecules that Induce Cardiomyogenesis in Embryonic Stem Cells" was authored by Xu Wu, Sheng Ding, Qiang Ding, Nathanael S. Gray, and Peter G. Schultz and is available to online subscribers of the Journal of the American Chemical Society at: http://pubs.acs.org/cgi-bin/asap.cgi/jacsat/asap/abs/ja038950i.html. The article will also be published in an upcoming issue of the Journal of the American Chemical Society.

This is not the first use by Scripps researchers of an automated method to screen tens of thousands of compounds for activity that changes the differentiation state of cells. Some of the same Scripps researchers (Sheng Ding and Peter Schultz, mentioned above) recently discovered a molecule called reversine that will dedifferentiate muscle cells (convert them into a less specialized form) into stem cells.

A group of researchers from The Scripps Research Institute has identified a small synthetic molecule that can induce a cell to undergo dedifferentiation--to move backwards developmentally from its current state to form its own precursor cell.

This compound, named reversine, causes cells which are normally programmed to form muscles to undergo reverse differentiation--retreat along their differentiation pathway and turn into precursor cells.

The technique involved in the search for reversine also was able to test tens of thousands of compounds.

The team hit upon reversine by systematically treating mouse muscle cells with some 50,000 different candidate molecules that they hoped might stick to and switch on enzymes capable of producing dedifferentiation.

To do stem cell therapies we need the ability to put cells into various states of differentiation. Adult stem cells and progenitor cells can be thought of as being in partially differentiated states. We need the ability to put cells into those partially differentiated states in order to be able to replenish adult stem cell reservoirs. We also need the ability to shift cells into fully differentiated states. There are likely hundreds and perhaps even thousands of different states that cells can be in and we need the ability to put cells into many of those states. The Scripps researchers are making progress developing tools and techniques that automate the testing of compounds for the ability to change the differentiation state of cells into different cell types.

Automation is speeding up the rate of advance of biological science and biotechnology.

See this previous post for more on reversine.

By Randall Parker 2004 February 23 12:49 AM  Biotech Organ Replacement
Entry Permalink | Comments(2)
2004 February 20 Friday
Dogs Evolved To Read Human Cues

Dogs are better than chimpanzees at reading human signals.

A chimpanzee enters a room where food is hidden in one of two opaque containers. A human gazes at the container that hides the food. Reaches for it with outstretched arm. Marks the container with a wooden block. The chimp doesn't get the message, even though chimpanzees are one of Homo sapiens' two closest extant primate relatives and might be expected to figure it out. Biological anthropologist Brian Hare and colleagues tried this game with 11 chimps, and only two of the brainy apes used the conspicuous cues to find the food.

Dog owners may not be surprised to learn that nine of 11 dogs in the same situation correctly read the human signals and found the food. A control exercise established that odor was not a cue in either trial.

Humans served as a selective factor in canine evolution.

"Our new work provides direct evidence that dogs' lengthy contact with humans has served as a selection factor, leading to distinct evolutionary changes," says Hare, who recently completed his Ph.D. in anthropology in Harvard's Faculty of Arts and Sciences. "This is the first demonstration that humans play an ongoing role in the evolution of canine cognition."

Wolves do not look to humans for help but dogs do.

Ádám Miklósi led a group of researchers at Eötvös University in Budapest, Hungary who conducted the "shell game" tests on wolves. The test wolves were raised by humans and socialized to a comparable level as their dog counterparts. But although they could follow some signals, the wolves could not perform to the level of dogs.

Miklósi's test also included an important second step. He presented the animals with an unsolvable problem—a bowl of food that was impossible to access. The team found that while wolves continued to work at the unsolvable problem for long periods, dogs quickly looked at the humans for help.

Dogs branched off from wolves only 15,000 years ago.

Dec. 4 — The Eves of the dog world are five or six wolf females that lived in or near China nearly 15,000 years ago, according to a series of genetic studies.

The progenitor breeds from which all current breeds descend first appeared only 3,000 to 5,000 years ago.

The researchers believe that by 10,000 to 12,000 years later, 10 "progenitor breeds" of dog had been created to fulfill different roles alongside their masters. It took a further 5000 to 3000 years for people to create the 300 or so pure breeds known today.

What is interesting about this result from a human evolutionary perspective is that it demonstrates how, contrary to popular belief, 10,000 or 20,000 years of selective pressure from relatively new environmental factors can produce large changes in shape, cognitive function, and behavior of a species. The example of dogs changing so much under human influence suggests the possibility that humans have changed a great deal as they moved out of Africa and evolved to fit into various ecological niches around the world.

An example of a human evolutionary adaptation that may be as recent as the dog's divergence from wolves is found in the Andean population, which developed an adaptation to high altitudes.

Previous studies have shown that the Tibetan, Ethiopian and Andean populations have developed slightly different ways of boosting their oxygen levels to cope with the thin air. Those in the Andes pump out more haemoglobin - a molecule that carries oxygen around in the blood. The Tibetans, by contrast, have relatively low haemoglobin levels but breathe faster to take in more oxygen. "The slightest bit of exercise makes them really pant," Beall says.

The Tibetans probably had more time in which to develop high altitude adaptations, and the Ethiopians certainly did, since humans have been in Africa far longer. But the Andean adaptation could not begin until human populations came across the Bering Strait and then migrated all the way to South America.

Loren Cordain claims that the ability of adult northern Europeans to digest lactose sugar is a fairly recent adaptation that may have become widespread in just the last few hundred generations of humans.

Commentary: There are calculations which estimate how long it took to increase the gene for adult lactase persistence (ALP) in northern Europeans from a pre-agricultural incidence rate of 5% to its present rate of approximately 70% [Aoki 1991]. (Note: The enzyme lactase is required to digest the sugar lactose in milk, and normally is not produced in significant quantity in human beings after weaning.) In order for the gene frequency to increase from 0.05 to 0.70 within the 250 generations which have occurred since the advent of dairying, a selective advantage in excess of 5% may have been required [Aoki 1991].

Therefore, some genetic changes can occur quite rapidly, particularly in polymorphic genes (those with more than one variant of the gene already in existence) with wide variability in their phenotypic expression. ("Phenotypic expression" means the physical characteristic(s) which a gene produces.) Because humans normally maintain lactase activity in their guts until weaning (approximately 4 years of age in modern-day hunter-gatherers), the type of genetic change (neoteny) required for adult lactase maintenance can occur quite rapidly if there is sufficient selective pressure. Maintenance of childlike genetic characteristics (neoteny) is what occurred with the geologically rapid domestication of the dog during the late Pleistocene and Mesolithic [Budiansky 1992].
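To see how quickly a modest advantage can push an allele through a population, here is a deliberately simple deterministic recursion (my own toy model, not Aoki's more elaborate calculation, so the numbers will not match his exactly): carriers of the dominant persistence allele get a fitness edge s, and the allele frequency is updated each generation. With a 5% advantage the allele climbs from 5% to 70% frequency in well under the 250 generations cited above.

```python
# Toy deterministic model (mine, not Aoki's more elaborate calculation):
# carriers of a dominant advantageous allele have fitness 1+s, non-carriers 1.
def generations_to_reach(p0, target, s):
    p, generations = p0, 0
    while p < target:
        q = 1.0 - p
        mean_fitness = 1.0 + s * (1.0 - q * q)   # carriers are 1 - q^2 of the population
        p = p * (1.0 + s) / mean_fitness         # allele frequency after selection
        generations += 1
    return generations

# From 5% to 70% frequency with a 5% advantage to carriers:
print(generations_to_reach(p0=0.05, target=0.70, s=0.05), "generations")
```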

Domestication of animals for milk began only about 6,000 years ago and so the selection for adult human lactase enzyme synthesis began only then.

Influence of human culture on genetic selection pressures. However--and this is where it gets interesting--those population groups that do retain the ability to produce lactase and digest milk into adulthood are those descended from the very people who first began domesticating animals for milking during the Neolithic period several thousand years ago.[119] (The earliest milking populations in Europe, Asia, and Africa began the practice probably around 4,000 B.C.[120]) And even more interestingly, in population groups where cultural changes have created "selection pressure" for adapting to certain behavior--such as drinking milk in this case--the rate of genetic adaptation to such changes significantly increases. In this case, the time span for widespread prevalence of the gene for lactose tolerance within milking population groups has been estimated at approximately 1,150 years[121]--a very short span of time in evolutionary terms.

It is worth noting that domestication of milk animals was such a large selective advantage that it could cause the mutation for lactase expression in adults to be selected for in a relatively short period of time. But since the selective pressure for adult lactase expression was very strong, this suggests that any kind of behavior or other aspect of human physiology that was beneficial for herding and protecting milk animals would also have been selected for very strongly at the same time. We have to consider the possibility that the personality types most suited for herd-tending and herd-protection may have been fundamentally different from the personality types most suited for a hunter-gatherer lifestyle that involved no use of milk animals.

Another post-Africa adaptation in humans is the spread of a mitochondrial mutation for generating more heat in colder weather.

These lineages are not found at all in Africans but occur in 14 percent of people in temperate zones and in 75 percent of those inhabiting Arctic zones. Wallace and his colleagues say this correlation is evidence that the lineages were positively selected because they help the body generate more heat.

...

Wallace says that climatic selection may have operated on the human population from the moment it moved north of the African tropics. Most such pioneers died but two lineages, known as M and N, arose in northeast Africa some 65,000 years ago and might have been adapted to temperate climates. Almost everyone outside of sub-Saharan Africa has mitochondria descended from the M and N lineages.

The authors of the research paper on the heat-generating mtDNA variation speculate that mtDNA adaptations to local environmental conditions are now giving humans higher incidences of a number of diseases under modern environments and diets.

Evidence has already accumulated that different human mtDNA lineages are functionally different. Haplogroup T is associated with reduced sperm motility in European males (30), and the tRNAGln nucleotide position 4336 variant in haplogroup H is associated with late-onset Alzheimer's disease (31). Moreover, Europeans harboring the mild ND6 nucleotide position 14484 and ND4L nucleotide position 10663 Leber's hereditary optic neuropathy missense mutations are more prone to blindness if they also harbor the mtDNA haplogroup J (32, 33), and haplogroup J is associated with increased European longevity (34). Because haplogroup J mtDNAs harbor two missense mutations in complex I genes (Y304H in ND1 and A458T in ND5), in addition to the above-mentioned L236T variant in the cytb gene, these polymorphisms all could affect the efficiency of OXPHOS ATP production and thus exacerbate the energy defects of mildly deleterious new mutations.

Given that mtDNA lineages are functionally different, it follows that the same variants that are advantageous in one climatic and dietary environment might be maladaptive when these individuals are placed in a different environment. Hence, ancient regionally beneficial mtDNA variants could be contributing to modern bioenergetic disorders such as obesity, diabetes, hypertension, cardiovascular disease, and neurodegenerative diseases as people move to new regions and adopt new lifestyles.

In humans mitochondrial DNA (mtDNA) is only 16,569 DNA letters long whereas the DNA in the human cell nucleus is over 3 billion letters long. Note that while the mtDNA is very small it still manages to have many variations with different effects on disease risks and environmental adaptation. It seems likely that the heat-generating variation is not the only mtDNA variation that resulted from selective pressure to adapt humans to local conditions.

Another important thing to note about canine evolution is that to the extent that dog breeds developed special adaptations to perform various functions those dogs reduced the need for humans to do those functions and hence changed the selective pressure on humans.

"We know that dogs were useful for lots of things in Stone Age culture, as draft animals, in hunting, for warmth, and for protection," said Jennifer Leonard, a postdoctoral fellow at the Smithsonian Institution’s National Museum of Natural History. And in sharing food, shelter, survival and play, modem dogs have somehow genetically acquired an insight about humans that has earned them the title of man's best friend

For instance, a hunting dog that could smell prey reduced the need for humans to have an acute sense of smell for that purpose. Therefore the domestication of dogs must have changed the selective pressures on humans. Those changes in selective pressures must have been different depending on the types of dogs and the ecological niches various human groups found themselves in. Human groups that learned to train and work with dogs for various purposes had a selective advantage against human groups that did not do so. So just as humans have exerted selective pressures in dog evolution it seems highly likely that dogs have caused selective pressures in human evolution.

By Randall Parker 2004 February 20 08:57 PM  Trends, Human Evolution
Entry Permalink | Comments(12)
2004 February 19 Thursday
Cocaine Depresses Expression Of Protein Involved In Learning

Cocaine interferes with the expression of a gene for a protein involved in learning.

Howard Hughes Medical Institute investigators at Duke University Medical Center have linked a gene previously shown to play a role in learning and memory to the early manifestations of drug addiction in the brain. Although scientists had previously speculated that similar brain processes underlie aspects of learning and addiction, the current study in mice is the first to identify a direct molecular link between the two.

...

"There has been the idea that brain changes in response to psychostimulants may be similar to those critical for learning and memory," said Marc G. Caron, Ph.D., an HHMI investigator at Duke. "Now, for the first time, we have found a molecule that links drug-induced plasticity in one part of the brain to a mechanism that underlies learning and memory in another brain region." Caron is also interim director of the Center for Models of Human Disease, part of Duke's Institute for Genome Sciences and Policy, and James B. Duke professor of cell biology.

...

Previous work by other researchers revealed that exposure to cocaine triggers changes in a brain region called the striatum -- a reward center that also plays a fundamental role in movement and emotional responses. Cocaine leads to a sharp increase in communication among nerve cells in the striatum that use dopamine as their chemical messenger. This brain chemical surge is responsible for the feeling of pleasure, or high, that leads drug users to crave more.

"Drugs essentially hijack the brain's natural reward system," thereby leading to addiction, explained Wei-Dong Yao, Ph.D., an HHMI fellow at Duke and first author of the new study.

Humans have a problem with addictive drugs because we got little exposure to such drugs as we evolved. That limited exposure means there was little selective pressure for genetic variations that would make humans less susceptible to drug addiction. Addictive drugs in the quantity and quality now available are, evolutionarily speaking, new to human experience and humans are not adapted to deal with them.

Note the sheer number of genes whose activity was compared under different conditions and in different strains of mice. Most likely the researchers used gene array chips that allow the expression levels of thousands of genes to be compared at once. As gene array chip technology improves, the ability to do this kind of work becomes cheaper and easier.

The study sought to identify genes involved in the brain's heightened response after drug use. The researchers compared the activity of more than 36,000 genes in the striatum of mice that had "super-sensitivity" to cocaine due to a genetic defect or prior cocaine exposure, with the gene activity in the same brain region of normal mice. The genetic screen revealed six genes with consistently increased or decreased activity in super-sensitive versus normal mice, the team reported.
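To make the scale of that comparison concrete, here is a toy version of the kind of analysis such a screen involves (my own illustration, not the study's actual method or data): simulated expression values for 36,000 probes in two groups of mice, with probes flagged when they show both a large fold change and a small p-value.

```python
# Toy expression screen (my illustration, not the study's actual analysis):
# compare ~36,000 probes between super-sensitive and normal mice and keep
# probes with a large fold change and a small p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_probes, n_mice = 36_000, 8

normal = rng.normal(10.0, 0.2, size=(n_probes, n_mice))      # log2 expression
sensitive = rng.normal(10.0, 0.2, size=(n_probes, n_mice))
sensitive[:6] -= 1.0   # pretend six probes (a PSD-95-like halving) change

log2_fold = sensitive.mean(axis=1) - normal.mean(axis=1)
_, p_values = stats.ttest_ind(sensitive, normal, axis=1)

candidates = np.flatnonzero((np.abs(log2_fold) >= 1.0) & (p_values < 1e-4))
print(f"{candidates.size} probes show consistent expression changes")
```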

There is a difference between easily addicted mice and regular mice in the change of their PSD-95 gene expression when exposed to cocaine.

The protein encoded by one of the genes -- known as postsynaptic density-95 or PSD-95 -- dropped by half in the brains of super-sensitive mice, the researchers found. The protein had never before been linked to addiction, Caron said, but had been shown by Seth Grant, a member of the research team at the Wellcome Trust Sanger Institute, to play a role in learning. Mice lacking PSD-95 take longer than normal mice to learn their way around a maze. In other words, mice with normal amounts of PSD-95 appear less likely to become addicted and more likely to learn.

Two of the other five genes had earlier been suggested to play a role in addiction. The function of the remaining three genes is not known, Caron said, and will be the focus of further investigation.

If the human equivalent of the PSD-95 gene reacts to cocaine in the same manner then a fairly small amount of cocaine use may hobble learning for weeks and perhaps even for months.

Among the mice more responsive to the effects of cocaine, the decline in PSD-95 occurred only in the striatum, while levels of the protein in other brain regions remained unaffected. In normal mice, the protein shift occurred after three injections of cocaine and lasted for more than two months.

The researchers also measured the activity of nerve cells in brain slices from the different groups of mice. Neurons in the brains of super-sensitive mice exhibited a greater response to electrical stimulation than did the nerve cells of control mice. Neurons from mice lacking a functional copy of PSD-95 showed a similar increase in activity, the team reported.

Mice deficient in PSD-95 also became more hyperactive than normal mice following cocaine injection, further linking the protein to the drug's brain effects. However, the deficient mice failed to gain further sensitivity upon repeated cocaine exposure, as mice typically do.

"Drug abuse is a complex disorder and will therefore be influenced by multiple genes," Caron noted. "PSD-95 represents one cog in the wheel."

The brain protein likely plays a role in addiction to other drugs -- including nicotine, alcohol, morphine and heroin -- because they all exert effects through dopamine, Caron added. Natural variation in brain levels of PSD-95 might lead to differences in individual susceptibility to drugs of abuse, he suggested. The gene might therefore represent a useful marker for measuring such differences.

It would be interesting to know how PSD-95 expression responds to various drugs which are used to treat a variety of mental illnesses. For instance, how do SSRI (Selective Serotonin Reuptake Inhibitor) antidepressants such as Prozac and Zoloft change PSD-95 expression? Or how does Ritalin, which is used to treat youthful ADHD (Attention Deficit Hyperactivity Disorder), change PSD-95 expression?

By Randall Parker 2004 February 19 03:16 PM  Brain Addiction
Entry Permalink | Comments(0)
Smart Vivarium Technology To Automate Animal Studies

Advances in electronics and software are being harnessed at UC San Diego to automate the monitoring and analysis of lab animals used in research.

Computer scientists and animal care experts at the University of California, San Diego (UCSD) have come up with a new way to automate the monitoring of mice and other animals in laboratory research. Combining cameras and distributed, non-invasive sensors with elements of computer vision, information technology and artificial intelligence, the Smart Vivarium project aims to enhance the quality of animal research, while at the same time enabling better health care for animals.

The pilot project is led by Serge Belongie, an assistant professor in Computer Science and Engineering at UCSD’s Jacobs School of Engineering. It is funded entirely by the California Institute for Telecommunications and Information Technology [Cal-(IT)²], a joint venture of UCSD and UC Irvine. “Today a lot of medical research relies on drug administration and careful monitoring of large numbers of live mice and other animals, usually in cages located in a vivarium,” said Belongie. “But it is an entirely manual process, so there are limitations on how often observations can be made, and how thoroughly those observations can be analyzed.”

This work at UCSD is still at a fairly early stage and the project is open-ended by nature. For decades to come, advances in image processing algorithms, artificial intelligence algorithms, and other areas of computer science will combine with continuing advances in sensors, computer speed, and storage capacity to enable more useful information to be derived automatically from computerized monitoring systems. This project is definitely a step in a direction that promises to drastically lower costs and speed the rate of advance of behavioral and biomedical research.
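As a flavor of the kind of computer vision building block such a system rests on, here is a minimal frame-differencing sketch (my own illustration, not the Smart Vivarium code; the video filename is hypothetical): it scores activity in a cage by counting how many pixels change between consecutive video frames.

```python
# Minimal frame-differencing sketch (my illustration, not the Smart Vivarium
# code; the video filename is hypothetical): score activity in a cage by
# counting how many pixels change between consecutive video frames.
import cv2

def activity_scores(video_path, threshold=25):
    """Yield one motion score per frame: the number of changed pixels."""
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    if not ok:
        return
    previous = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        diff = cv2.absdiff(gray, previous)
        _, changed = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        yield cv2.countNonZero(changed)
        previous = gray
    capture.release()

# Usage idea: long runs of near-zero scores might flag a listless, sick animal.
# scores = list(activity_scores("cage_camera.avi"))
```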

The ability to collect more data in a single experiment will reduce the number of experiments that need to be done. This will both speed research and lower costs.

UCSD is a major biological sciences research center, and animal-care specialists believe the technology under development could dramatically improve the care of research animals. “The Smart Vivarium will make better use of fewer lab animals and lead to more efficient animal health care,” said Phil Richter, Director of UCSD’s Animal Care Program, who is working with Belongie on the project. “Sick animals would be detected and diagnosed sooner, allowing for earlier treatments.” The technology would also help to reduce the number of animals needed in scientific investigations. “In medical research, experiments are sometimes repeated due to observational and analytical limitations,” said Belongie. “By recording all the data the first time, scientists could go back and look for different patterns in the data without using more mice to perform the new experiment.”

For many of the same reasons, the underlying technology could be useful for the early diagnosis and monitoring of sick animals in zoos, veterinary offices and agriculture. (“Early detection of lameness in livestock,” noted Belongie, “could help stop the transmission of disease.”) The computer scientist also intends to seek collaboration with the San Diego Zoo and other local institutions for practical field deployment of the monitoring systems as part of an upcoming study.

The total amount of data collected per experiment will go up by orders of magnitude with this system.

As for improvements in medical research from the continuous monitoring of lab animals, Belongie expects at least an improvement of two orders of magnitude in the automated collection and processing of monitoring data. “Continuous monitoring and mining of animal physiological and behavioral data will allow medical researchers to detect subtle patterns expressible only over lengthy longitudinal studies,” noted Belongie. “By providing a never-before-available, vivarium-wide collection of continuous animal behavior measurements, this technology could yield major breakthroughs in drug design and medical research, not to mention veterinary science, experimental psychology and animal care.”

Advances in computer hardware and software technologies serve as major enablers for advances in biomedical research, environmental research, and other aspects of biological and behavioral research. Continued rapid advances in computing technologies in coming decades will improve the productivity of researchers by orders of magnitude above current levels. Therefore the rate of advance of all the biological sciences will accelerate dramatically.

By Randall Parker 2004 February 19 01:26 PM  Biotech Advance Rates
Entry Permalink | Comments(0)
2004 February 18 Wednesday
New Type Of Stem Cells Found In Adult Human Brain

Human brain astrocyte cells are capable of dividing and turning into all three types of mature brain cells.

February 18, 2004— Researchers have found an unexpected source of stem cells in the adult human brain. They have demonstrated for the first time that human astrocytes — brain cells thought to play more of a secondary role by providing a supportive, nurturing environment for the neuron — can actually function as stem cells. The astrocytes can form new stem cells and are able to generate all three types of mature brain cells.

But these astrocytes are different: They form a novel ribbon-like structure in the brain's lateral ventricle. Stem cells from comparable areas in the rodent brain follow a distinct path from their place of origin to the olfactory bulb (a brain region that processes smells), where they create new neurons.

The work, led by former HHMI medical student fellow Nader Sanai and Arturo Alvarez-Buylla, Heather and Melanie Muss Professor of Neurological Surgery at the University of California, San Francisco, opens the possibility that such stem cells could be harnessed and one day used to regenerate damaged areas in the central nervous system. The scientists reported their findings February 19, 2004, in the journal Nature.

“We've found a structure in the human brain that represents a significant departure from other species,” Sanai said. “The differences we see imply that this region in the human brain doesn't necessarily do the same things as its primate and rodent counterparts. This is a cell population that has the potential to regenerate parts of the brain, though it's not clear what regions those may be. Neurons generated in this area may migrate to other areas of the brain and potentially regenerate those areas.”

What is not clear to me is whether this exact same experiment has been tried in the primate species (chimpanzees and bonobos) that are closest to humans in evolution. Would a repeat of this experiment on chimp astrocytes from the same part of the brain yield the same result? Does anyone know whether this has been tried? I'm not quite ready to accept this as a feature of neurobiology that is unique to humans. Does any reader have enough expertise in the relevant areas of research to answer this question?

What seems surprising about this result is that only now in the year 2004 has anyone even checked to see if astrocytes can become nerve cells.

They studied brain tissue from the lateral ventricles - two cerebrospinal fluid-filled cavities in the center of the brain - available from either surgery patients or from pathology samples after autopsy. The researchers first stained the tissue to locate astrocytes, and immediately saw the ribbon of astrocytes lining the ventricle walls. They subsequently determined that cells within the ribbon were dividing, implying that they were part of a region of proliferative stem cells.

Next, the scientists decided to look for the stem cells. They took representative sections of tissue from the lining of the lateral ventricles, and found that these specimens could generate neurospheres in a dish. Neurospheres contain all of the precursors for the major central nervous system cell types the stem cell produces: neurons, astrocytes, and oligodendrocytes. They result from a stem cell being put in a culture dish with various growth factors.

To make sure, they subsequently isolated individual human astrocytes and put each in a dish with growth factors, showing they could form neurospheres as well.

This was the first time anyone had shown that a single human astrocyte could function as a stem cell. Alvarez-Buylla, Sanai, and their co-workers then found that single astrocytes from the lateral ventricle could generate neurons without added growth factors — direct evidence that a single astrocyte could generate a neuron.

The findings are provocative because astrocytes have traditionally been considered simple helper cells, Sanai said.

“This speaks to the plasticity of the human brain,” he said. “Certain cell types may have hidden potential.” These subtypes of astrocytes appear no different from any other astrocytes, implying that “it's possible that other astrocytes in other regions of the body have the same potential.”

The hippocampus was already known as a site of adult neural stem cells in the brain. The existence of stem cells in the adult brain was first discovered in canaries, a discovery that upset the decades-old received wisdom that the adult brain never gains new nerve cells. In fact, it has previously been reported that astrocytes provide growth factors that help hippocampal stem cells convert more rapidly into neurons. Now that astrocytes are known to be able to convert into neurons, this opens up the potential to stimulate astrocytes to divide and create neurons to repair damage from various neurological disorders and even to replace nerve cells lost to aging. This result also means that the future development of the ability to replenish astrocytes with rejuvenated replacement cells could turn out to be a useful rejuvenation therapy for the brain.

This particular discovery is also part of a larger pattern of discovery in which new sources of adult stem cells are being found in different parts of the body. It seems likely that many more sources of adult stem cells are still waiting to be discovered.

By Randall Parker 2004 February 18 11:50 AM  Biotech Organ Replacement
Entry Permalink | Comments(8)
2004 February 17 Tuesday
Plants Will Grow More Rapidly With Higher Carbon Dioxide

Soy will grow more rapidly in higher CO2.

Although ozone slows plant growth, the beneficial effect of the carbon dioxide more than compensates for this effect, Leakey found. His unpublished results predict an increase in soy yields of 13% by 2050. US farmers currently plant about 150 million acres of soybean a year.

The following press release emphasizes that the increased plant growth in the presence of higher CO2 is not enough to take all the CO2 out of the atmosphere. But the fact that the trees and plants grow more rapidly is economically valuable.

OAK RIDGE, Tenn., Feb. 16, 2004 -- Trees absorb more carbon dioxide when the amount in the atmosphere is higher, but the increase is unlikely to offset the higher levels of CO2, according to results from large-scale experiments conducted at Oak Ridge National Laboratory and elsewhere.

"Some people have used carbon dioxide fertilization to argue that this is a boon of the fossil fuel era and that it will lead to greater agricultural yields and carbon sinks," said Richard Norby of the Department of Energy's ORNL. "Some recent experiments, however, have suggested that there will be no lasting effect of carbon dioxide fertilization. As is often the case, the truth may lie in between."

Norby is among several scientists participating in a panel discussion titled "CO2 Fertilization: Boon or Bust?" Feb. 16 at the American Association for the Advancement of Science annual meeting in Seattle.

For the last six years, Norby and colleagues at ORNL have examined the responses to elevated carbon dioxide levels in a stand of sweetgum trees a few miles from ORNL. The experiment consisted of pumping tons of carbon dioxide into the plots, raising the concentration of carbon dioxide in the tree stand from the ambient level of about 370 parts per million to 550 ppm, and studying the effects.

...

In every year since the FACE project began, net primary productivity, which is the total amount of carbon dioxide fixed into organic matter such as leaves, stems and roots, has been higher in plots given extra carbon dioxide. The average increase has been 24 percent, and there is no indication that the increase will not continue. But, Norby notes, while his colleagues have observed a sustained increase in leaf photosynthesis, the response to carbon dioxide fertilization would not be apparent if only above-ground growth were measured. Wood production increased significantly during only the first year of treatment.

While Norby and colleagues have learned a great deal about above-ground allocation of carbon dioxide, in recent years they have focused their efforts on impacts on fine roots and soil sequestration of carbon dioxide. Fine root production has increased substantially in response to elevated carbon dioxide.

Fine roots are important for water and nutrient uptake, but they have a short life and their carbon returns to the soil within a year. Initial results suggest that the increase in carbon supply to fine roots has increased the carbon content of the soil. Norby cautions, however, that the positive effect of carbon dioxide fertilization is insufficient to halt the rising level of atmospheric carbon dioxide.

If some types of forest trees grow more rapidly, then higher atmospheric CO2 holds the prospect of lowering timber costs and hence lowering housing and furniture costs.

Another forest experiment shows CO2 raises tree growth rates.

SEATTLE -- A futuristic Duke University simulation of forest growth under the carbon dioxide-enriched atmosphere expected by 2050 does not reinforce the optimism of those who believe trees can absorb that extra CO2 by growing faster, said a spokesman for the experiment.

During seven years of exposure to carbon dioxide concentrations 1½ times higher than today's, test plots of loblolly pines have indeed boosted their annual growth rates by between 10 and 25 percent, found the researchers. But "the highest responses have been in the driest years, and the effect of CO2 has been much less in normal and wet years," said William Schlesinger, a professor of biogeochemistry and dean of Duke's Nicholas School of the Environment and Earth Sciences.

These counterintuitive findings suggest that nitrogen deficiencies common to forest soils in the Southeastern United States may limit the abilities of loblolly pine forests to use the extra CO2 to produce more tissues as they take in more of the gas, he said.

"In a dry year trees naturally grow less so the amount of nitrogen doesn't make any difference," he said. "In a wet year, when there's plenty of water, the amount of nitrogen does make a difference." Tree growth depends on the availability of nitrogen, which foresters routinely add to Southeastern soils in the form of fertilizer when they plant trees, he added.

One advantage the plants may have in dry years is that with more CO2 in the atmosphere the leaves do not have to open their pores as much to let in the CO2. This reduces water loss from evaporation and allows plants to grow in dry environments. This explanation has been put forward to explain plant growth into the Negev desert in Israel.

The really bad news? More poison ivy:

Meanwhile, some other species in Duke's CO2-bathed forest plots have grown at faster rates than the loblolly pines, scientists report. Still-unpublished data shows 70 percent growth increases for poison ivy, according to Schlesinger.

It seems likely that the growth increase caused by higher CO2 will differ by tree species. Some will experience larger increases in growth rates while others will benefit from higher CO2 to a lesser extent. Also, since water is more of a rate-limiting factor in some areas than in others, the benefit of higher CO2 for growth under low-water conditions will be greater in some geographic regions than in others. Higher CO2 probably will increase total tree cover in drier areas and may even make it possible to grow trees into deserts, as appears to be happening with the Negev.

Another factor to consider: It should be possible to select for or genetically engineer crop plants that will grow even faster in higher CO2 conditions. So the extent of the benefit of high CO2 seen with existing crop plants understates the size of the benefit likely to be achievable in the longer run.

Of course, higher atmospheric CO2 levels will cause many other effects. If higher CO2 raises global temperatures it could change precipitation patterns, total global precipitation, length of growing seasons (generally longer), wind patterns, and many other factors. How will all this work out in terms of benefits and costs? It seems impossible at this point to hazard a guess that will have any degree of accuracy. But it seems clear that rising atmospheric CO2 will generate not just costs but benefits as well.

By Randall Parker 2004 February 17 03:22 PM  Climate Trends
Entry Permalink | Comments(11)
2004 February 16 Monday
NASA May Use Nuclear Ion Propulsion In Jupiter Moon Probe

Nuclear electrical ion propulsion is being proposed for an unmanned mission to Jupiter.

LOS ALAMOS, N.M., Feb. 10, 2004 -- A proposed U.S. mission to investigate three ice-covered moons of Jupiter will demand fast-paced research, fabrication and realistic non-nuclear testing of a prototype nuclear reactor within two years, says a Los Alamos National Laboratory scientist.

The roots of this build and test effort have been under way at Los Alamos since the mid-1990s, said David Poston, leader of the Space Fission Power Team in Los Alamos' Nuclear Design and Risk Analysis Group.

NASA proposes using electrical ion propulsion powered by a nuclear reactor for its Jupiter Icy Moons Orbiter, an element of Project Prometheus, which is scheduled for launch after 2011. However, the United States hasn't flown a space fission system since 1965.

One advantage of a nuclear power source for propulsion is that the space probe would travel to its destination more quickly. However, another big advantage is that the probe would have a lot more power to run sensors, computers, and a transmitter. Hence it seems likely such a probe could gather much more and better quality data.

We cannot do more in space without much better propulsion systems, both for getting into orbit and for moving around once up there. It is great that NASA is seriously considering this proposal and I hope they go ahead with it. Definitely a step in the right direction.

By Randall Parker 2004 February 16 02:58 PM  Airplanes and Spacecraft
Entry Permalink | Comments(1)
2004 February 14 Saturday
Hostile Personalities More Prone To Nicotine Addiction

The brains of aggressive hostile people light up much more under PET scans when exposed to nicotine.

“We call this brain response a ‘born to smoke’ pattern,” said study leader Dr. Steven Potkin, professor of psychiatry and human behavior. “Based on these dramatic brain responses to nicotine, if you have hostile, aggressive personality traits, in all likelihood, you have a predisposition to cigarette addiction without ever having even touched a cigarette.” Study results appeared in the January issue of Cognitive Brain Research.

Potkin and Dr. James H. Fallon, professor of anatomy and neurobiology, gave study subjects standard psychiatric personality exams and separated them into two groups — those with high-hostility personality traits, which are marked by anger, aggression and anxiety, and those with low-hostility traits. Both groups included smokers and non-smokers. The groups were given nicotine patches of strengths of 3.5 or 21 milligrams, or placebo, and later subjected to PET scans to see if the nicotine triggered any responses in brain metabolism of glucose energy.

While the PET scans showed no metabolic changes in the low-hostility subjects, nicotine induced dramatic metabolic responses in the high-hostility group individuals in the limbic system and the cortical and subcortical sectors of the brain. Among members of the high-hostility group, smokers showed a metabolic reaction only to the more powerful 21 milligram nicotine patch, while non-smokers reacted to both patches.

The fact that non-smokers in the high-hostility group showed a significant metabolic response to nicotine provides the first biological evidence that people with high-hostility personalities are likely to become dependent on cigarettes because of their brains’ strong response to nicotine, said Potkin. “In turn, this might also help explain why other people have no compelling drive to smoke or can quit smoking with relative ease,” he added.

It is conceivable that a drug that can make a person less hostile and less aggressive could make it easier for that person to quit smoking.

Another speculation: the association between drug use and crime may in part be due to the fact that the kinds of personalities most prone to become drug addicts are more aggressive in the first place. What would be interesting to know is whether people with high levels of hostility who never try drugs or cigarettes are more or less likely to become criminals than those who do. The answer may depend in part on which drug a hostile person becomes addicted to. Some addictive drugs might even have net calming effects that make a hostile and aggressive person less hostile.

Another interesting question: Suppose smokers with criminal records who were trying to stop smoking were studied. Would criminals who have a hard time quitting cigarettes, but who finally manage to quit, become more or less likely to commit violent crimes than they were when they were still smoking?

One complication of studying links between nicotine and crime is that nicotine causes brain damage.

Nicotine causes degeneration in one part of the brain, according to professor of psychology Gaylord Ellison, who announced the finding in the journal Neuropharmacology, and at this year's meeting of the Society for Neuroscience.

Ellison found that nicotine causes selective degeneration of the fasciculus retroflexus, the part of the higher brain that primarily controls the dopamine and serotonin levels in the body.

Dopamine controls movement, emotional response, and the ability to experience pleasure and pain, while serotonin regulates a person's mood.

Suppose a person has a brain that is aggressive and hostile and that person becomes a nicotine addict and basically racks up a bunch of brain damage. Then suppose that person manages to quit smoking. Is that person then even more hostile as a result of the brain damage? Or does the type of damage done have the effect of reducing violent behavior? A similar question can be asked about other addictive drugs because lots of addictive drugs cause brain damage.

There is increasing evidence that the fasciculus retroflexus (FR) represents a 'weak link' following the continuous administration of drugs of abuse. A variety of drugs which predominantly potentiate dopamine, including D-amphetamine, methamphetamine, MDMA, cocaine, and cathinone, all induce degeneration in axons from lateral habenula, through the sheath of FR, to midbrain cells such as SN, VTA, and raphe. For some drugs, such as cocaine, this is virtually the only degeneration induced in brain. Continuous nicotine also selectively induces degeneration in FR, but in the other half of the tract, i.e. in axons from medial habenula through the core of the tract to interpeduncular nucleus. This phylogenetically primitive tract carries much of the negative feedback from forebrain back onto midbrain reward cells, and the finding that these descending control pathways are compromised following simulated drug binges has implications for theories of drug addiction but also psychosis in general.

I am a skeptic on the issue of addictive drug legalization because if the barrier to access to addictive brain-damaging substances is lowered then more people will become addicts and damage their brains. What will be the net result? The legalization advocates can't answer that question. It may depend on the drug. Some drugs might damage circuits that cause hostility. Other drugs might damage circuits that suppress hostility. Also, hostility is not the only factor in play here. Impulsiveness, happiness, anxiety, and other aspects of personality may be enhanced or decreased by the sorts of selective brain damage various addictive drugs cause.

By Randall Parker 2004 February 14 02:04 PM  Brain Addiction
Entry Permalink
2004 February 12 Thursday
Brain Has Separate Areas For Actual And Interpreted Sensory Data

But we decide which is real and which is an illusion.

But a new collaborative study involving a biomedical engineer at Washington University in St. Louis and neurobiologists at the University of Pittsburgh shows that sometimes you can't believe anything that you see. More importantly, the researchers have identified areas of the brain where what we're actually doing (reality) and what we think we're doing (illusion, or perception) are processed.

Daniel Moran, Ph.D., Washington University assistant professor of biomedical engineering and neurobiology, and University of Pittsburgh colleagues Andrew B. Schwartz, Ph.D., and G. Anthony Reina, M.D., focused on studying perception and playing visual tricks on macaque monkeys and some human subjects. They created a virtual reality video game to trick the monkeys into thinking that they were tracing ellipses with their hands, though they actually were moving their hands in a circle.

They monitored nerve cells in the monkeys, enabling them to see which areas of the brain represented the circle and which represented the ellipse. They found that the primary motor cortex represented the actual movement while the signals from cells in a neighboring area, called the ventral premotor cortex, were generating elliptical shapes.

The mind has the capability to create an interpreting facility to map between what it sees and how it perceives what it sees. This allows the mind to adjust for the effects of bifocals and other sense-distorting factors. While this capability is adaptive it can sometimes be tricked into creating erroneous interpretations of sensory input.

The research shows how the mind creates its sense of order in the world and then adjusts on the fly to eliminate distortions.

For instance, the first time you don a new pair of bifocals, there is a difference in what you perceive visually and what your hand does when you go to reach for something. With time, though, the brain adjusts so that vision and action become one. The ventral premotor complex plays a major role in that process.

Knowing how the brain works to distinguish between action and perception will enhance efforts to build biomedical devices that can control artificial limbs, some day enabling the disabled to move a prosthetic arm or leg by thinking about it.

Results were published in the Jan. 16, 2004 issue of Science.

"Previous studies have explored when things are perceived during an illusion, but this is the first study to show what is being perceived instead of when it is happening," said Moran. "People didn't know how it was encoded. And we also find that the brain areas involved are right next to each other."

Think back to childhood. We all had to learn to judge the distance of our hands from our faces by how the hands appear smaller at greater distances and by how the angles of our arms signal the hands changing position. We now all make those interpretations, and many similar interpretations of raw sense material, quite subconsciously. But we need the ability to change those interpretations as we grow older and our senses decline, or because we encounter new environments which create new patterns of sensory input.

By Randall Parker 2004 February 12 03:44 PM  Brain Memory
Entry Permalink | Comments(1)
2004 February 11 Wednesday
British Most Highly Monitored By Video Cameras

Security video cameras, known as Closed Circuit TV or CCTV in Britain, are so popular there that Britons are the most monitored by video cameras of any people on Earth.

The technology has become popular and widespread, with the result that Britons are by far the most watched people on earth, with one camera for every 14 people, according to recent estimates.

But questions remain as to their effectiveness.

A government review 18 months ago found that security cameras were effective in tackling vehicle crime but had limited effect on other crimes. Improved streetlighting recorded better results.

...

"I have talked to offenders about this," says Gill. "They say they are not concerned by security cameras, unless they were actually caught by one

My take: even if criminals are not deterred by the presence of cameras, if the cameras are of sufficiently high quality to enable identification of perpetrators of crimes then the cameras ought to increase conviction rates. What would be interesting would be to find data on what percentage of charges brought against suspected criminals use video evidence. Have video cameras increased clearance rates (i.e. the rate at which police can identify and charge a suspect) for various types of crimes? Also, is the rate of conviction higher in those cases which include CCTV evidence? Also, what percentage of all types of crimes in public places are caught by CCTV in areas where it is heavily deployed? Even when a crime isn't caught by a video camera there can be cameras pointing to areas nearby that could record images of those entering and leaving an area around the time a crime takes place. So how often does that happen?

The Midlothian and Borders Police claim CCTV crimes are solved at high rates.

Dalkeith and Penicuik are both reaping the benefits of town centre closed circuit television systems. With over 50 incidents recorded on camera this year and a 100% conviction rate in the courts, the cameras are undoubtedly helping deter anti-social behaviour on our streets.

But is the 100% for all 50 cases or for a smaller subset of cases for which charges were brought?

From some UK Home Office studies on lighting and CCTV for crime reduction and prevention:

The major findings from the reviews are:

- Street lighting and CCTV work in cutting crime particularly when used within a package of other crime reduction measures.
- Improved street lighting reduced crime by 20%.
- CCTV was especially effective in reducing vehicle crime in car parks, leading to a 41% reduction.

The UK government Home Office report on street lighting and crime prevention is a downloadable PDF. Also, the matching report on CCTV and crime prevention is available as a downloadable PDF as well. The report claims that CCTV works very well to reduce crime in car parks (in American English "parking lots").

Overall, the best current evidence suggests that CCTV reduces crime to a small degree. CCTV is most effective in reducing vehicle crime in car parks, but it had little or no effect on crime in public transport and city centre settings.

...

Both published and unpublished reports were considered in the searches, and the searches were international in scope and were not limited to the English language.

The search strategies resulted in 22 CCTV evaluations meeting the criteria for inclusion. The evaluations were carried out in three main settings: (1) city centre or public housing, (2) public transport, and (3) car parks.

Of the 22 included evaluations, half (11) found a desirable effect on crime and five found an undesirable effect on crime. Five evaluations found a null effect on crime (i.e., clear evidence of no effect), while the remaining one was classified as finding an uncertain effect on crime (i.e., unclear evidence of an effect).

Results from a meta-analysis provide a clearer picture of the crime prevention effectiveness of CCTV. From 18 evaluations – the other four did not provide the needed data to be included in the meta-analysis – it was concluded that CCTV had a significant desirable effect on crime, although the overall reduction in crime was a very small four per cent. Half of the studies (nine out of 18) showed evidence of a desirable effect of CCTV on crime. All nine of these studies were carried out in the UK. Conversely, the other nine studies showed no evidence of any desirable effect of CCTV on crime. All five North American studies were in this group.

The meta-analysis also examined the effect of CCTV on the most frequently measured crime types. It was found that CCTV had no effect on violent crimes (from five studies), but had a significant desirable effect on vehicle crimes (from eight studies).

Across the three settings, mixed results were found for the crime prevention effectiveness of CCTV. In the city centre and public housing setting, there was evidence that CCTV led to a negligible reduction in crime of about two per cent in experimental areas compared with control areas. CCTV had a very small but significant effect on crime in the five UK evaluations in this setting (three desirable and two undesirable), but had no effect on crime in the four North American evaluations.

The four evaluations of CCTV in public transportation systems present conflicting evidence of effectiveness: two found a desirable effect, one found no effect, and one found an undesirable effect on crime. For the two effective studies, the use of other interventions makes it difficult to say with certainty that CCTV produced the observed crime reductions. The pooled effect size for all four studies was a non-significant six per cent decrease in crime.
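To make the pooled-effect figures in the report above a bit more concrete, here is a minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis of the sort the Home Office review describes. The study names and numbers below are invented for illustration; this is not the report's data or its exact method.

```python
import math

# Hypothetical per-study results: odds ratios comparing crime in CCTV areas
# with control areas (an odds ratio below 1.0 is a desirable effect), plus the
# variance of each log odds ratio. These numbers are invented for illustration.
studies = [
    ("car park scheme", 0.59, 0.04),
    ("city centre scheme", 0.98, 0.02),
    ("public transport scheme", 1.06, 0.05),
]

# Fixed-effect (inverse-variance weighted) pooling of the log odds ratios.
weights = [1.0 / variance for _, _, variance in studies]
pooled_log_or = sum(
    w * math.log(odds_ratio) for (_, odds_ratio, _), w in zip(studies, weights)
) / sum(weights)
pooled_or = math.exp(pooled_log_or)

print(f"Pooled odds ratio: {pooled_or:.2f}")
print(f"Approximate overall change in crime: {(pooled_or - 1.0) * 100:+.1f}%")
```

By this arithmetic the report's overall four per cent reduction in crime would correspond to a pooled effect of roughly 0.96.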

Unfortunately the Home Office study on CCTV and crime says little about arrest rates and conviction rates. What portion of crimes of each type in an area with CCTV were recorded by CCTV? How many of those recordings were of sufficiently high quality to allow arrest of perpetrators? Is CCTV image quality a serious obstacle for the effective use of CCTV? My guess is that the answer to the final question is "Yes" and that advances in technology will improve image quality and perpetrator identification rates.

If CCTV is not helping to reduce crime rates then some comments on a BBC discussion board suggest obvious reasons why:

We had our car stolen in Dec 2000 in front of CCTV cameras. The police caught the thief by chance. He was convicted sentenced to community service (this was his EIGHTH offence), and ordered to pay us £80 compensation. We had seen nothing of the money and he has committed 4 more offences. He is only 18, which means he will probably carry out more serious crimes in the future. It is about time that the law was brought down hard on even first time offenders. First time means first time caught.
Anon, Scotland

I retired as a Chief Superintendent in 1996, having been a Divisional Commander for some years. By the time I retired I was ashamed of the service we were able to provide. A daily struggle to put out a minimum number of officers, sometimes as few as 8 or 9 from a paper total of more than 200. Where were they all? Attending courses, tied up in court, and dealing with time wasters complaints (every villain now complains as a routine, and boy does it use up police time). We need to get back to good old fashioned policing. It's time for us to return to the criminal being afraid, not the public.
John Lilley, England

I was mugged recently. The police turned up after quite some time. Records later showed that by the time they responded to my call my cards were already being used around Brixton. I was more than willing to give up my time to look at CCTV images near to where the mugging took place and where the cards were used to try to spot this guy. The police didn't seem to know how to respond to that suggestion - it was like it had never occurred to them.

I was more than willing to go out of my way to catch this guy who had caused me and doubtless many other people an awful trauma. The police just weren't interested. I'm a lawyer and I think I would have made a good witness. I am very sure about what I saw. Unfortunately, I was never given the opportunity to demonstrate this. I received three offers of counselling from the police. The best therapy they could have given me would have been to get the coward who did it in the dock.
Claire, England

There is a limit to what technology can do to counteract the decay of a culture that has lost belief in the right of law-abiding people to defend themselves. One of the hardest problems when trying to guess about the future is that there is no way of knowing whether any given culture will partially or totally decay and become very degenerate. More generally, what technology can make possible is a far larger set of possibilities than what people will choose to do with it.

By Randall Parker 2004 February 11 02:58 PM  Surveillance Cameras
Entry Permalink | Comments(5)
2004 February 10 Tuesday
Stanford Researchers Develop Fast Cheap Way To Silence Genes

A Stanford team has developed a way to do RNA interference (RNAi) on many genes in a way that is cheap and fast enough to allow much wider use in research laboratories.

STANFORD -- Sometimes the first step to learning a gene's role is to disable it and see what happens. Now researchers at the Stanford University School of Medicine have devised a new way of halting gene expression that is both fast and cheap enough to make the technique practical for widespread use. This work will accelerate efforts to find genes that are involved in cancer and the fate of stem cells, or to find genes that make good targets for therapeutic drugs.

The technique, published in the February issue of Nature Genetics and now available online, takes advantage of small molecules called short interfering RNA, or siRNA, which derail the process of translating genes into proteins. Until now, these molecular newcomers in genetics research have been difficult and expensive to produce. Additionally, they could impede the activity of known genes only, leaving a swath of genes in the genetic hinterlands unavailable for study.

"siRNA technology is incredibly useful but it has been limited by expense and labor. A better method for generating siRNA has been needed for the whole field to move forward," said study leader Helen Blau, PhD, the Donald E. and Delia B. Baxter Professor of Pharmacology. She said some companies are in the process of creating pools, or libraries, of siRNA molecules for all known genes in specific organisms but these libraries aren't yet available.

Pathology graduate students George Sen, Tom Wehrman and Jason Myers became interested in creating siRNA molecules as a way of screening for genes that alter the fate of stem cells -- cells that are capable of self-renewal and the primary interest of Blau's lab. The students hoped to block protein production for each gene to find out which ones play a critical role in normal stem cell function.

"I told them that creating individual siRNAs to each gene was too expensive," said Blau. Undaunted, the students came up with a protocol for making an siRNA library to obstruct expression of all genes in a given cell -- including genes that were previously uncharacterized. They could then pull individual molecules like books from a shelf to test each one for a biological effect.

The team had several hurdles to overcome in developing their protocol. The first was a size limit -- an siRNA molecule longer than 29 subunits causes wide-ranging problems in the cell. The key to overcoming this barrier was a newly available enzyme that snips potential siRNA molecules into 21-subunit lengths. A further step copied these short snippets into a form that could be inserted into a DNA circle called a plasmid. When the researchers put a single plasmid into a cell, it began churning out the gene-blocking siRNA molecule.
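As a rough computational analogy to the 21-subunit size limit described in the paragraph above (the real protocol does this digestion enzymatically, so nothing below is the Stanford group's method), here is a minimal sketch of chopping a long RNA sequence into 21-subunit fragments:

```python
def chop_into_sirnas(rna: str, fragment_length: int = 21):
    """Toy illustration of the size constraint described above: split a long
    RNA sequence into consecutive 21-subunit fragments, below the roughly
    29-subunit threshold that causes problems in the cell. The real protocol
    uses an enzyme for this digestion and then clones the fragments into
    plasmids; nothing here is the Stanford group's method."""
    seq = rna.upper().replace("T", "U")
    return [seq[i:i + fragment_length]
            for i in range(0, len(seq) - fragment_length + 1, fragment_length)]

# Example with a made-up transcript fragment.
example = "AUGGCUUACGAUCGAUCGGAUCCUAGCUAGGAUCGAAUUCCGGAUCG"
for fragment in chop_into_sirnas(example):
    print(len(fragment), fragment)
```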

The group tested their approach by creating a handful of siRNA molecules to genetically disable three known genes. In each case, their technique generated siRNA that effectively blocked the gene in question.

Wehrman said this technique of creating siRNA molecule libraries could be widely used to find genes that, when disabled, cause cells to become cancerous or alter how the cells respond to different drugs. These genes could then become potential targets for drugs to treat disease.

A paper in the same issue of Nature Genetics described a similar way of creating siRNA libraries. "Having two unrelated groups working on the same problem shows there has been a real need for the technology," Blau said. The Stanford group has filed a patent for its technique.

Here is yet another reason why the rate of advance in biological research is accelerating. Better tools and techniques speed the rate at which experiments can be done and increase the amount of information that can be collected.

The abstract for the Stanford team's work is here. The other team the press release mentions is a Japanese team from the University of Tokyo. You can read their abstract here.

On a related note also read my recent post on the results of another team's effort to develop a technique to interfere with the activity of many genes at once using RNA interference: Massively Parallel Gene Activity Screening Technique Developed

Update: Another report, this one from MIT Whitehead scientist David Bartel and MIT assistant professor of biology Chris Burge, covers computational methods for finding a type of RNA called microRNA which regulates gene expression.

CAMBRIDGE, Mass. (Jan. 28, 2004) – Research into the mechanics of microRNAs, tiny molecules that can selectively silence genes, has revealed a new mode of gene regulation that scientists believe has a broad impact on both plant and animal cells. Fascinated by the way microRNAs interfere with the chemical translation of DNA into protein – effectively silencing a targeted gene – scientists are exploring the role that these miniature marvels play in normal cell development and how they might be used to treat disease.

A critical component of understanding how microRNAs work in humans has been identifying which genes microRNAs silence and what processes they control. In a recent study, scientists identified more than 400 human genes likely targeted by microRNAs, taking an important step toward defining the relationship between microRNAs and the genes they target, including those linked to disease and other vital life functions.

...

In 2003, Bartel and Chris Burge, an assistant professor of biology at MIT, developed a computational method able to detect the microRNA genes in different animals. Using this method, they estimated that microRNAs constitute nearly 1 percent of genes in the human genome, making microRNA genes one of the more abundant types of regulatory molecules.

Bartel and Burge then set out to apply a similar approach to defining the relationship between microRNAs and the genes they target. Last month in the journal Cell, their labs reported that they have created a new computational method, called TargetScan, which does just that.

For each microRNA, TargetScan searches a database of messenger RNAs (mRNAs) – chemical messages that transcribe DNA into protein – for regions that pair to portions of the microRNA, and assigns a score to the overall degree of pairing that could occur between the microRNA and each mRNA. Those mRNAs that have high scores conserved in three or more organisms are predicted as targets of the microRNA.

Using this method, the team identified more than 400 genes in the human, mouse and rat genomes likely to be regulated by microRNAs. In addition, TargetScan predicted an additional 100 microRNA targets that are conserved in humans, mice, rats and the pufferfish.

According to Burge, 70 percent of targets predicted by TargetScan are likely to be authentic microRNA targets and the experimental data in the paper supports that a majority of their predictions are correct.
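As a simplified illustration of the kind of matching described above, here is a minimal toy sketch of scanning mRNA sequences for sites complementary to a microRNA's "seed" region and requiring the site to be conserved across species. This is my own illustration with invented sequences; it is not the actual TargetScan algorithm or its scoring scheme.

```python
# Toy sketch of microRNA target scanning: look for mRNA sites complementary to
# the "seed" (roughly positions 2-8) of a microRNA, and keep only sites found
# in the corresponding mRNA of every species examined. The sequences below are
# invented; real TargetScan uses a more elaborate pairing and scoring model.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_site(mirna: str, seed_length: int = 7) -> str:
    """Return the mRNA sequence that would base-pair with the miRNA seed."""
    seed = mirna[1:1 + seed_length]  # miRNA positions 2-8
    return "".join(COMPLEMENT[base] for base in reversed(seed))

def has_conserved_site(mirna: str, utrs_by_species: dict) -> bool:
    """True if the seed-match site occurs in the mRNA of every species."""
    site = seed_match_site(mirna)
    return all(site in utr for utr in utrs_by_species.values())

mirna = "UAGCAGCACGUAAAUAUUGGCG"  # example miRNA-like sequence, for illustration only
utrs = {
    "human": "AACUGCUGCUA",
    "mouse": "GGCUGCUGCUA",
    "rat":   "UUCUGCUGCUA",
}
print(has_conserved_site(mirna, utrs))  # True for this contrived example
```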

The take-home lesson here is that advances in the development of computer algorithms and the development of better tests and instrumentation are all accelerating the rate at which scientists can figure out systems of gene expression and genetic regulation in cells.

By Randall Parker 2004 February 10 11:02 AM  Biotech Advance Rates
Entry Permalink | Comments(2)
2004 February 09 Monday
MIT Researchers Find Molecular Pathway For Memory Formation

A kinase enzyme called Mitogen-Activated Protein Kinase (MAPK) has been found to play a crucial role in increasing the synthesis of a large assortment of different proteins needed for long-term memory formation. (A kinase transfers a phosphate onto a protein, which often switches the protein into a more or less active state.)

The MIT research team, led by Nobel laureate Susumu Tonegawa, director of the Picower Center for Learning and Memory, has now identified a crucial molecular pathway that allows neurons to boost their production of new proteins rapidly during long-term memory formation and synaptic strengthening.

"What we have discovered that hasn't been established before is that there is a direct activational signal from the synapse to the protein synthesis machinery," said Tonegawa, the Picower Professor of Biology and Neuroscience in MIT's Departments of Brain and Cognitive Sciences and Biology The central component of this pathway, an enzyme called "mitogen-activated protein kinase" (MAPK), effectively provides a molecular switch that triggers long-term memory storage by mobilizing the protein synthesis machinery.

Acting on a hunch that MAPK might be an important part of such a "memory switch," Ray Kelleher, a postdoctoral fellow in Tonegawa's laboratory and lead author of the study, created mutant mice in which the function of MAPK was selectively inactivated in the adult brain. Intriguingly, he found that these mutant mice were deficient in long-term memory storage. In contrast to normal mice's ability to remember a behavioral task for weeks, the mutant mice could remember the task for only a few hours. Similarly, the researchers found that synaptic strengthening was also much more short-lived in neurons from the mutant mice than in neurons from normal mice.

Realizing that the pattern of impairments in mutant mice suggested a problem with the production of new proteins, the researchers then performed an elegant series of experiments that revealed precisely how MAPK translates synaptic stimulation into increased protein synthesis. Based on molecular comparisons of neurons from normal and mutant mice, they found that synaptic stimulation normally activates MAPK, and the activated form of MAPK in turn activates several key components of the protein synthesis machinery. This direct regulation of the protein synthesis machinery helps explain the observation that activation of MAPK enhanced the production of a broad range of neuronal proteins.

"Many people had thought that long-term memory formation involved only boosting the synthesis of a very limited set of proteins," said Tonegawa. "But to our surprise, this process involves 'up-regulating' the synthesis of a very large number of proteins."

This information may be useful for researchers trying to develop memory formation enhancement drugs. A drug that upregulates MAPK synthesis or that turns on its activity might have the effect of enhancing memory formation.

As the steps in memory formation become identified and better understood they all become potential targets for drug therapies. The same holds true for emotional reactions and other aspects of cognitive function. As any biological system becomes better understood it becomes more manipulable. Where is this all going to lead? It seems likely that most people 30 or 40 years from now will be using drugs to enhance and fine-tune the performance of their brains in a variety of ways. While many people use drugs today either for recreational purposes or to treat mental disorders, it seems likely that the focus of drug use for altering the mind will shift toward cognitive enhancement in the future, both to improve thinking and to align one's emotional reactions more closely with the goals one wants to achieve.

By Randall Parker 2004 February 09 11:58 PM  Brain Memory
Entry Permalink | Comments(3)
2004 February 05 Thursday
Massively Parallel Gene Activity Screening Technique Developed

Researchers at Howard Hughes Medical Institute (HHMI), Harvard Medical School, the University of Heidelberg and the Max Planck Institute for Molecular Genetics in Germany have demonstrated in the fruit fly Drosophila a general technique usable in any organism to simultaneously assay thousands of genes to determine whether each gene is involved in a particular aspect of cell function.

“A major challenge now that many genome sequences have been determined, is to extract meaningful functional information from those projects,” said HHMI researcher Norbert Perrimon, who directed the study. “While there are a number of analytical approaches that can measure the level of gene expression or the interaction between proteins, ours is really the first high-throughput, full-genome screening method that allows a systematic interrogation of the function of every gene.”

The technique uses double-stranded RNA made to match every known gene of interest in the target organism. The double-stranded RNA triggers a phenomenon called RNA interference (RNAi), which blocks the action of the corresponding RNA made from each gene. Normally the cellular machinery reads a gene in the DNA and creates what is called messenger RNA (mRNA), which has a sequence matching that gene. The mRNA is then read to make proteins. But the dsRNA prevents that step and therefore blocks the creation of the protein. This effectively blocks the gene from having any effect, and the researchers' automated assay system then watches the effect on cell growth or on whatever other aspect of cellular activity the system is set up to measure.
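As a rough illustration of that readout step, here is a minimal sketch of how knockdown "hits" might be flagged from cell-count data in such a screen using a simple z-score cutoff. The gene names, counts, and threshold are invented for illustration; this is not the authors' analysis pipeline.

```python
import statistics

# Hypothetical screen results: cell counts for wells treated with each dsRNA.
# Gene names, counts, and the cutoff are invented for illustration only.
cell_counts = {
    "gene_0001": 9800,
    "gene_0002": 2100,   # strong drop: candidate growth or survival gene
    "gene_0003": 10150,
    "gene_0004": 9750,
    "gene_0005": 18500,  # strong rise: candidate growth-suppressing gene
}

counts = list(cell_counts.values())
mean = statistics.mean(counts)
stdev = statistics.stdev(counts)

# Flag knockdowns whose cell counts deviate strongly from the plate average.
hits = {gene: (count - mean) / stdev
        for gene, count in cell_counts.items()
        if abs(count - mean) / stdev > 1.0}

for gene, z_score in sorted(hits.items(), key=lambda item: item[1]):
    print(f"{gene}: z = {z_score:+.2f}")
```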

The screening technique developed by Perrimon and his colleagues builds on methods developed in one of the hottest areas of biology, RNA interference (RNAi) research. In RNAi, double-stranded RNA (dsRNA) that matches the messenger RNA produced by a given gene degrades that messenger RNA — in effect wiping out the function of that gene in a cell. RNAi is widely used as a research tool to selectively erase the cellular contributions of individual genes to study their function.

In their mass screening technique, Perrimon and his colleagues first created a library of 21,000 dsRNA that corresponded to each of the more than 16,000 genes in the Drosophila genome. They then applied each of these dsRNA molecules to cultures of Drosophila cells and assayed how knocking down the function of a targeted gene affected cell numbers in the cultures. This basic measure, said Perrimon, revealed genes that are not only involved in general cell growth, but also in the cell cycle, cell survival and other such functions.

The researchers then selected 438 genes for further characterization. The degradation of these genes produced profound effects on cell number. “Out of this subset, we found many that produced proteins involved in general metabolic processes such as the ribosomes that are components of the protein synthesis machinery,” said Perrimon. “But we also found genes that are more specific to cell survival.”

According to Perrimon, only 20 percent of the genes that were identified had corresponding mutations — an important characteristic for studying gene function. “The classic approach to studying gene function is to identify mutations in genes and select those that produce interesting phenotypes that yield insight into function,” said Perrimon. “But this approach has never really given us access to the full repertoire of genes. With this high-throughput technology, however, we can study the function of a complete set of genes. We can systematically identify all the genes involving one process.”

The technique can be used to screen for genes involved in intercellular communication, cancer cell proliferation, and other cellular activity. Combined with drug screening the technique can accelerate the search for drugs that operate on particular cellular pathways and processes.

The RNAi assay will contribute to the screening of new drugs, he said. “One exciting aspect of this approach is that we can combine our assay with screening of potential therapeutic compounds,” he said. “One of the big problems in the pharmaceutical industry is that researchers may discover pharmacologically active compounds but have no idea what their targets are in the cell. However, it would be possible to perform coordinated screens — one for compounds that interfere with a target pathway and an RNA interference screen for genes that act in that pathway. This correlation would allow you to match the compounds with the proteins they affect in a much more useful way.”

One can see by reading between the lines here how this technique has to be built on top of a lot of other existing tools that automate the creation of needed components. There has to be a fairly automated existing technique to generate all the different kinds of dsRNA strands used in this technique. Also, the technique must rely on automated tools for feeding cells, measuring cell growth, and doing other steps in this process.

Results of studies of new ways to treat diseases, and discoveries of how genes and cells work, get a lot of press attention. But the ability to automate and therefore accelerate massively parallel screening and manipulation of genes, proteins, and other parts of cells is what makes possible the faster rate of discoveries of disease causes and disease treatments. Cells are so complex, with so many pieces, subsystems, and types of interactions, that only with the development of massively parallel techniques can we hope to fully figure out how cells work and how to cure most diseases in the next few decades.

By Randall Parker 2004 February 05 12:13 PM  Biotech Advance Rates
Entry Permalink | Comments(4)
2004 February 04 Wednesday
What Brain Scans Of People Falling In Love Tell Us

Rutgers University evolutionary anthropologist Helen Fisher has written a new book titled Why We Love: The Nature and Chemistry of Romantic Love. In a very interesting interview she discusses results of her functional MRI (fMRI) brain scans of people in the early intense stages of falling in love.

On average, men tended to show more activity in two regions in the brain: One was associated with the integration of visual stimuli and the second was with penile erection. This really shouldn't come as a surprise. Everybody knows that men are highly visual -- men spend their lives commenting on women, looking at porn, and the like. I believe these visual networks evolved 1 or 2 million years ago because men needed to look at a woman and size up her ability to give him healthy babies. If he saw that she was young and healthy and happy, it would be adaptive for him to become aroused to start the mating process. Men definitely fall in love faster than women -- there's good psychological data on that. And I think that's because they are more visual.

And women?

Several regions associated with memory recall became active. And I couldn't figure out why at first, and then I thought to myself, my goodness -- for millions of years women have been looking for someone to help them raise their babies, and in order to do that you really can't look at someone and know whether they're honest or trustworthy or whether they can hit the buffalo in the head and share the meat with you. You've got to remember what they said yesterday, what they said three weeks ago, what they gave your mother two months ago at the midwinter festival. For millions of years women have had the hardest job on earth -- raising tiny helpless babies for as long as 20 years. That is an enormous job. There's no other animal on earth for whom motherhood is so complex. And if their husband died they'd have to expend an enormous amount of metabolic energy to find another one, and they're that much older, and the clock is ticking -- it's an adaptive strategy to remember all these details.

Fisher comments that the use of Selective Serotonin Reuptake Inhibitors (SSRIs) as antidepressants may reduce the capacity for falling in love by blocking some of the changes in serotonin metabolism that normally happen while one is falling in love. Regardless of just how well existing SSRIs produce this effect, if they can do it at all, this suggests that drugs can be developed in the future that can totally block falling in love. It also seems likely that the opposite effect could be aimed for. The love potion of mythical tales may eventually be attainable through coming advances in pharmaceuticals.

Fisher is also the author of Anatomy of Love: A Natural History of Mating, Marriage, and Why We Stray.

Update: Fisher's interview reminds me of an idea that I've been wanting to get out into the public realm for a long time: We need drugs that will keep people happily married. The cost of divorce and illegitimacy for society is terrible. In some societies marriage for child-rearing is becoming the exception. This means children are less well cared for and do not turn out as well in terms of educational attainment, crime rates, and general success in life. Household split-ups also lower living standards, since it costs more to maintain two separate households. If we accept the evolutionary psychology argument about why people fall in and out of love, it seems to me that the problem is that humans have not been selected to behave in a way optimal for extended child-raising, and this problem needs to be fixed pharmacologically. Everything from the declining strength of religious belief to mass media portrayals of tempting objects of affection is reducing the forces holding marriage together, with tragic results.

We cannot fix this problem with gene therapy because that is going to take a lot longer to develop. Many potential gene therapies will have to be done on fetuses and therefore their results will not be felt until the babies grow to be adults. Also, many people might oppose the idea of genetically engineering their children to be highly monogamous and faithful by nature. But we might be able to keep people together with pharmaceuticals.

Take whatever biochemical state people have in the initial flush of love. Imagine being able to maintain that feeling for years, with both partners agreeing to do so together. Imagine a drug which, if you took it while looking at a particular person, would make that person look very sexy to you. Think about how much happier everyone would be if they weren't all walking around thinking that the grass looked greener on that unattainable other side of the river. Imagine that the sexiness of a lover never wore out or got old. A lot of married people would stay together a lot longer, long enough to raise kids to adulthood, if they could use drugs to maintain their attraction to each other.

Science may eventually be able to produce the love potions of mythical stories and modern fantasy TV shows and movies. Love drugs could help prevent and reverse the decline of marriage. If this became possible the benefits would be substantial.

By Randall Parker 2004 February 04 01:35 AM  Brain Love
Entry Permalink | Comments(15)
2004 February 02 Monday
Bird Flu Virus Outbreak Has Infectious Disease Experts Worried

The A(H5N1) strain of avian influenza that has spread to many Asian countries and caused 7 deaths out of 11 known human cases has a lot of disease experts on edge. The fear is that the avian virus and the human H3N2 strain currently spreading in Asia could coinfect either a human or a bird, and the avian and human viral genes could reassort in the same infected cell to create a new virus strain that could spread rapidly in humans with fatal results. A World Health Organization (WHO) official says a combined human/avian flu strain could kill millions worldwide.

Shigeru Omi, director of the UN agency's Western Pacific office, warned last week that millions of people around the world could die if the H5N1 strain of bird flu mixes with the human H3N2 virus that was headed towards Asia.

The fear is for a reprise of the 1918 flu pandemic which killed 1% to 2% of the world's population.

Dr. Klaus Stöhr of the WHO says that so far the pattern of human cases of bird flu fits the previous patterns of bird flu outbreaks in Hong Kong.

Because the virus apparently "vanished" after causing the cluster of infections, Dr. Stöhr said, his agency does not consider the possible person-to-person spread a major public health threat. Similar transmission, limited to a short chain of people and with a definite end, occurred in earlier avian influenza outbreaks in Hong Kong, he said.

The fact that the bird flu virus has not rapidly spread from existing human cases suggests it hasn't mutated yet to be easily spreadable in humans.

The fact that just a handful of human cases have been reported this time despite the bird flu being around for several weeks was also fairly encouraging.

Government cover-ups and insufficient testing have allowed bird flu to spread for months before being detected.

W.H.O. first learned about the mutated A(H5N1) strain in January through reports from Vietnam, then learned that birds had begun getting infected elsewhere in Asia as early as April 2003. In addition to Vietnam, the affected countries are Cambodia, China, Indonesia, Japan, Laos, South Korea and Thailand. "We have no clue which species of bird first spread it," Dr. Stöhr said.

Better surveillance procedures are needed in many countries. Plus, there is not enough money to kill the chickens in countries where the bird flu is spreading. This increases the chance that bird flu viruses will combine with human viruses to produce a virus capable of killing millions in a human outbreak.

All flu viruses probably originate in birds, and the best environment for making the jump to humans is one where densely packed people live closely with birds and animals.

"In Asia we have a huge animal population, a huge bird population and two-thirds of the world's people living there,'' said Klaus Stohr, chief influenza scientist at the World Health Organisation.

The population of China alone is bigger than that of the whole of Africa, and 80 percent of the new human flu strains the last few decades appeared in China first.

Growth in egg and chicken consumption in Asia has increased the capacity of chickens to be hosts for avian influenza strains.

From the early 1970s to the early 1990s, per capita consumption of meat, eggs and milk grew about 50 per cent in developing countries, leading to big increases in animal herds. Over the last 25 years, the fastest growth has been in the numbers of chickens and pigs, the FAO says.

Asians' fondness for shopping at live animal markets also adds to the chances for flu jumping species, experts say.

The problem with vaccines is that they take too long to develop and manufacture. The method of growing vaccines in eggs takes months and requires that large numbers of the proper kinds of eggs and egg-growing facilities be available. It simply is not possible to scale up that quickly and it takes months to do so. Even worse, bird flu poses a special difficulty because unmodified avian influenza virus would kill the egg and therefore stop the growth process short.

The extra complication where bird flu is involved, Professor Gust says, is that the virus cannot be grown in eggs, as is usual practice in making vaccines, because it would simply kill the eggs. Instead it has to be put through a process known as reverse genetics technology to engineer a strain that both grows in eggs and protects against the bird virus.

Even if the manufacturers were ready to go when a human-to-human virus appeared, it would still take at least six months for a vaccine to be widely available. And in that time, the virus would be likely to have made its way very quickly around the world and caused many deaths.

Whether the bird flu strain currently spreading in Asia will mutate into a form easily transmittable in humans remains to be seen. But if this particular outbreak in chickens doesn't turn into a major human pandemic some flu strain will eventually mutate into a very deadly strain and could kill millions. What we need are much faster ways to develop and produce vaccines.

One bright light on the horizon is the development of cell culture techniques for growing vaccine viruses. Chiron Corporation CEO Howard Pien claims that Chiron cell culture flu vaccine will be able to halve the production time for flu vaccine.

A: The general estimate for a vaccine product is that it takes five to six years to develop it for the market. It is entirely possible we will do this faster, but that is assuming our test is very, very positive. Currently, it takes four months to make flu vaccine in chicken eggs. We believe that time can be reduced by 50%. The net effect will be to increase output.

Chiron says they will go into Phase III clinical trials for the cell culture version of their vaccine in 2004.

Chiron expects to enter Phase III studies for its flu cell-culture vaccine this year.

The US government is trying to accelerate development of faster virus production technologies but some doubt cell culture methods will be faster.

Health and Human Services Secretary Tommy Thompson says federal officials want to urge companies to move toward newer technologies that would allow faster production of vaccine, which currently takes at least six months from egg to vaccine.

In mid-December, Thompson said he hopes some of the expected $50 million in new federal funding for flu research will be used to encourage new companies to start making vaccine using newer, egg-free technologies.

A switch to cell culture technology, which Aventis already uses to make vaccines against polio and other diseases, wouldn't speed production of flu vaccine, says Michael Decker, vice president for scientific and medical affairs at Aventis. "Let's suppose we had no chickens and no eggs. Then, cell culture is faster." But for companies with established supplies of chicken eggs, there's no advantage. "The virus takes the same time to grow in either."

Using cell cultures to grow viruses for killed-virus vaccines is not the only imaginable approach to more rapid vaccine production. DNA vaccines could probably be produced more quickly.

Another approach is the use of DNA vaccines. Here, the gene for a pathogen protein is introduced into human cells and is then expressed to produce the protein inside the body. There are many advantages to the DNA vaccination method. For example, it is much cheaper to produce and distribute large amounts of DNA than it is to produce and distribute large amounts of protein. Also, the same strategy can be used to tackle virtually any pathogen, so multiple vaccinations are possible. Technical hurdles that need to be overcome include finding efficient ways of getting the DNA into human cells, making sure the gene is expressed once it is inside the cell, and making sure the DNA does not integrate into the genome and disrupt our own genes. There are many DNA vaccines in clinical and pre-clinical trials, including vaccines for HIV, herpes, hepatitis and influenza.

Until it becomes possible to develop and produce hundreds of millions of vaccine doses in a matter of weeks the human race is going to continue to live under the threat of a repeat of the 1918 pandemic with tens or even hundreds of millions killed. Research aimed at developing types of vaccines that can be produced more quickly ought to be a higher priority.

By Randall Parker 2004 February 02 11:47 PM  Dangers Natural Bio
Entry Permalink | Comments(1)