Ehud Shapiro and his team at the Weizmann Institute of Science in Israel have published a paper in Nature on a DNA computer that activates in the presence of genes that indicate a cell is cancerous.
Scientists have built a tiny biological computer that might be able to diagnose and treat certain types of cancer. The device, which only works in a test-tube, is years from clinical application. But researchers hope it will be the precursor of future 'smart drugs' that roam the body, fixing disease on the spot.
Prof Shapiro's device is a development of a biological computer that he first built in 2001. DNA is the software of life: it carries huge quantities of information, programs the operating system of every cell, controls the growth of the whole organism and even supervises the making of the next generation.
In one example, the computer determined that two particular genes were active and two others inactive, and therefore made the diagnosis of prostate cancer. A piece of DNA, designed to act as a drug by interfering with the action of a different gene, was then automatically released from the end of the computer.
This ability to use multiple inputs is needed to accurately detect cancers since any one gene can be on or off at different times in normal cells. We need the ability to check many indicators in each cell to decide whether it is a cancer cell. Still more precise activation mechanisms could be constructed using more gene expression levels as inputs and by making yet more complex sensor mechanisms that detect length of activation of genes or ratios of expression of genes.
The software molecules follow a simple computational path. If they attach to normal mRNA, they do nothing; if they attach to abnormal mRNA, indicating the presence of a disease cell, they initiate a process to unleash a DNA treatment molecule modelled on an anticancer drug.
The Weizmann Institute press release provides more details on the DNA computer state machine and its ability to measure concentrations of molecules as inputs to make a decision on whether to activate its treatment.
As in previous biological computers produced in Shapiro’s lab, input, output and “software” are all composed of DNA, the material of genes, while DNA-manipulating enzymes are used as “hardware.” The newest version’s input apparatus is designed to assess concentrations of specific RNA molecules, which may be overproduced or underproduced, depending on the type of cancer. Using pre-programmed medical knowledge, the computer then makes its diagnosis based on the detected RNA levels. In response to a cancer diagnosis, the output unit of the computer can initiate the controlled release of a single-stranded DNA molecule that is known to interfere with the cancer cell’s activities, causing it to self-destruct.
In one series of test-tube experiments, the team programmed the computer to identify RNA molecules that indicate the presence of prostate cancer and, following a correct diagnosis, to release the short DNA strands designed to kill cancer cells. Similarly, they were able to identify, in the test tube, the signs of one form of lung cancer. One day in the future, they hope to create a “doctor in a cell”, which will be able to operate inside a living body, spot disease and apply the necessary treatment before external symptoms even appear.
The original version of the biomolecular computer (also created in a test tube), capable of performing simple mathematical calculations, was introduced by Shapiro and colleagues in 2001. An improved system, which uses its input DNA molecule as its sole source of energy, was reported in 2003 and was listed in the 2004 Guinness Book of World Records as the smallest biological computing device.
Shapiro: “It is clear that the road to realizing our vision is a long one; it may take decades before such a system operating inside the human body becomes reality. Nevertheless, only two years ago we predicted that it would take another 10 years to reach the point we have reached today.”
I love this approach. Why? Because the use of a treatment that operates as a state machine attempts to solve the problem of cancer on the level that the mechanisms of cancer operate. Cells are really complex state machines. Our genome is a really complex computer program executing with biochemical mechanisms. Cancers result when that state machine becomes damaged in enough places to lose control of the process of cell division. What we need is a smaller state machine to go into cells, recognize which cells have damage to their programs that make them cancerous, and then to either order those cells to die or to fix the damaged pieces of the cellular program that make those cells cancerous in the first place.
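The state machine idea above can be sketched in ordinary software. The following Python sketch models the diagnostic logic of a molecular automaton of this general sort; the gene names, rule set, and two-state design are illustrative assumptions, not the actual markers or mechanism from the Weizmann paper:

```python
# Sketch of a diagnostic finite-state machine in the spirit of the DNA
# computer. Gene names and expression states are placeholders, not the
# actual markers used in the Weizmann experiments.

def diagnose(expression, rules):
    """Check each gene rule in sequence, as the molecular automaton
    does; any single mismatch drops the machine into the NO state."""
    state = "YES"  # start in the permissive state
    for gene, required in rules:
        observed = expression.get(gene, "inactive")
        if observed != required:
            state = "NO"
            break
    return state

# Hypothetical four-gene profile: two genes must be active, two inactive.
prostate_rules = [
    ("geneA", "active"),
    ("geneB", "active"),
    ("geneC", "inactive"),
    ("geneD", "inactive"),
]

cell = {"geneA": "active", "geneB": "active",
        "geneC": "inactive", "geneD": "inactive"}

if diagnose(cell, prostate_rules) == "YES":
    print("release single-stranded DNA drug")
else:
    print("do nothing")
```

The molecular version works the same way in spirit: each indicator is checked in turn, and one mismatch diverts the automaton into an inactive state, so the drug is released only when every indicator matches the disease profile.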
Genetic instructions are the right way to develop a complex state machine to use as a cancer treatment. DNA can have far more complex behavior than any conventional drug compound. The DNA can interact with the messenger RNA made by the cells to ascertain a cell's state and to change that state. Genes and other DNA fragments contain far more information than is contained in conventional chemical drug structures. Groups of genes can function as rather complex state machines. Since cancer cells and normal cells have so much in common, a high level of sophistication of behavior is needed in order to develop enough selectivity to identify and manipulate cancer cells.
What these Israeli scientists are doing is the future of medicine. There are still many hurdles in the way of making a DNA state machine drug treatment work. Most notably, scientists have been trying for years to develop delivery mechanisms for getting gene therapies into cells, and the advances have been slow in coming. A gene therapy approach that delivers a DNA state machine computer would need to be able to get into nearly all the cancer cells in the body. If effective gene therapy delivery mechanisms can be developed then DNA computer hackers can start hacking the human body to fix what is broken and improve us in numerous ways.
Alex Tabarrok of Marginal Revolution draws attention to a paper written for the Milken Institute’s Center for Accelerating Medical Solutions by Nobel Prize-winning economist Gary Becker on the desirability of removing the US Food and Drug Administration's (FDA) power to hold off drug approval until efficacy is demonstrated. (PDF format)
Treatment of serious diseases usually offers options that trade off various risks, for example, forcing comparisons between the likely quality of life and its expected length. No one is really able to read the thoughts of others well enough to make informed decisions on their behalf. This is why patients – not regulators or even physicians – should have ultimate control over treatment. I mention regulators because government authorities, in particular the Food and Drug Administration, make it extremely difficult for the very ill to gain access to unproven therapies. This barrier became more daunting when the FDA instituted regulations in 1962 that significantly raised the cost and lengthened the time it takes to bring new drugs to market.
Before that year, pharmaceutical makers had to show only that drugs appeared to be safe for the vast majority of patients likely to take them. Assuring safety required clinical trials along with other evidence and was not an easy obstacle to overcome. But I do not want to argue here the issue of how safe is safe enough, and I assume that an FDA safety standard, perhaps even a strengthened one, is desirable.
The 1962 regulations, however, went beyond safety to add an efficacy standard. That is, clinical trial evidence would from then on have to strongly support claims that products significantly aid in the treatment of specific diseases or conditions. Indeed, in the final stage of mandated clinical tests, randomized trials must show that those treated with a drug are significantly helped compared with a control group.
This efficacy test greatly lengthened the average time between discovery and approval. Although in recent years the FDA has maintained a “fast track” for high priority drugs, bringing a new therapy to market takes an average of 12 to 15 years. The typical drug must be tested on some 6,000 patients, increasing total development costs by about 40 percent. It follows that a return to a safety standard alone would lower costs and raise the number of therapeutic compounds available. In particular, this would include more drugs from small biotech firms that do not have the deep pockets to invest in extended efficacy trials. And the resulting increase in competition would mean lower prices – without the bureaucratic burden of price controls. In turn, cheaper and more diverse drugs would induce insurance companies and public providers to cover many more new drugs, even when their efficacy was uncertain.
Elimination of the efficacy requirement would give patients, rather than the FDA, the ultimate responsibility of deciding which drugs to try. Presumably, the vast majority of patients would continue to rely on the opinions of physicians about which drugs to use. But many people whose lives are at risk would want to decide for themselves, and the FDA could remain proactive in reporting what is known about the value of drugs in treating diseases, making data available through the Internet and other consumer-friendly media.
One of the more depressing aspects of serious disease is the sense of impotence – that very sick persons can do little to help themselves. This may explain why placebos sometimes generate positive effects. Indeed, Anup Malani of the University of Virginia found that patients receiving placebos in double-blind trials of ulcer drugs reported significantly less pain than they had before the trials.
Giving people whose lives are threatened by serious diseases greater access to safe, promising (albeit unproven) drugs and other treatments would help their psychological state. More important, it would lower the cost and hasten the development of therapies.
Even a simple safety standard seems excessive for a disease like cancer. The vast bulk of the treatments for cancer are highly toxic. The whole problem with trying to selectively kill cancer cells is that they share too much in common with normal cells aside from their uncontrolled growth.
Medical drugs and devices cannot be marketed in the United States unless the U. S. Food and Drug Administration (FDA) grants specific approval. We argue that FDA control over drugs and devices has large and often overlooked costs that almost certainly exceed the benefits. We believe that FDA regulation of the medical industry has suppressed and delayed new drugs and devices, and has increased costs, with a net result of more morbidity and mortality. A large body of academic research has investigated the FDA and with unusual consensus has reached the same conclusion.
Drawing on this body of research, we evaluate the costs and benefits of FDA policy. We also present a detailed history of the FDA, a review of the major plans for FDA reform, a glossary of terms, a collection of quotes from economists who have studied the FDA, and a bibliography with many webbed links.
From a rights perspective I am especially opposed to allowing the FDA to continue to have the power to slow the delivery of new drugs to the market for diseases that are fatal. If you have just been told you have 1 or 2 or 5 years to live because of some form of cancer why shouldn't you be free to try any drug against your disease? Why should the government have the power to "protect" you? Protect you from what? Death? You are already going to die. Sure, an experimental therapy may kill you sooner. But suppose you have a bone cancer that is going to take 5 or 6 or 7 disabling and extremely painful years to kill you. Isn't it your right as owner of your own body and life to gamble on some alternatives?
There is also the compelling economic case for fewer regulatory obstacles which Gary Becker and many other economists make. As it stands now there are very few organizations with the financial wherewithal to bring a drug all the way to market. This limits how many drugs will be brought all the way through all the steps. Medicinal chemist and blogger Derek Lowe delivers a daily source of sobering commentary on all the scientific, financial, and regulatory obstacles to bringing new drugs to market. The cost of bringing a new drug to market is over $800 million (more like $900 million). Such a large sum of money makes investors extremely risk averse, and any drug that is a longer-shot gamble just isn't going to be developed.
One of the costs is the work that must be done for the regulatory process itself. But another substantial cost is time to market: the longer development takes, the higher the interest cost on the money invested in developing the drug. If the efficacy stage were cut out of the regulatory process, both the cost of the efficacy work itself and the interest cost on all the earlier stages would drop.
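The time-to-market point can be made concrete with a rough capitalized-cost sketch. All figures below (annual spending, cost of capital, development times) are illustrative assumptions, not data from any particular study:

```python
# Rough sketch of how development time inflates the capitalized cost of
# a drug. All figures are illustrative assumptions.

def capitalized_cost(annual_spend, years, cost_of_capital):
    """Compound each year's spending forward to the launch date at the
    firm's cost of capital; money spent earlier accrues more interest."""
    total = 0.0
    for year in range(years):
        years_until_launch = years - year
        total += annual_spend * (1 + cost_of_capital) ** years_until_launch
    return total

spend = 40e6   # hypothetical $40M per year out of pocket
r = 0.11       # hypothetical 11% cost of capital

for years in (8, 12):
    cost = capitalized_cost(spend, years, r)
    print(f"{years} years to market: ${cost / 1e6:.0f}M capitalized")
```

Under these made-up numbers, stretching development from 8 to 12 years roughly doubles the capitalized cost at launch even though out-of-pocket spending rises only 50 percent; that extra interest burden is part of what the efficacy stage adds.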
Perhaps the "efficacy" stage of FDA approval could be left in place as an optional stage. Anyone who wanted to take only drugs that were approved for efficacy would be free to do so. Any drug company would be free to start selling once the safety stage (or some less restrictive stage in the case of cancer drugs) was passed. The drug company could start selling then and still go after "efficacy" approval in order to later use that additional approval in marketing to doctors and patients.
The result of this proposed reduction in FDA power would be to increase the rate at which new drugs are tried, to increase the number of organizations developing new drugs, and to lower the cost of drugs. The rate of advance of biomedical research and biotechnology will accelerate if the cost and time to market for drugs is reduced. We will have longer life expectancies and lower medical bills if the efficacy stage of testing is removed as a requirement for new drug approval.
Richard Jackson and Neil Howe have written an excellent report about demographic trends for the Center for Strategic and International Studies (CSIS) entitled The Graying of the Middle Kingdom.
CHINA IS ABOUT TO UNDERGO A STUNNING DEMOGRAPHIC TRANSFORMATION. Today, China is still a young society. In 2004, the elderly—here defined as adults aged 60 and over—make up just 11 percent of the population. By 2040, however, the UN projects that the share will rise to 28 percent, a larger elder share than it projects for the United States.(See Figure 1.) In absolute numbers, the magnitude of China’s coming age wave is staggering. By 2040, assuming current demographic trends continue, there will be 397 million Chinese elders, which is more than the total current population of France, Germany, Italy, Japan, and the United Kingdom combined.
China's shift from a younger to older population is far more rapid than has been occurring in Western nations.
The forces behind China’s demographic transformation—falling fertility and rising longevity—are causing populations to age throughout the world. China’s aging, however, is occurring with unusual speed. In Europe, the elder share of the population passed 10 percent in the 1930s and will not reach 30 percent until the 2030s, a century later. China will traverse the same distance in a single generation. How China navigates its demographic transformation will go a long way toward determining whether it achieves its aspiration of becoming a prosperous and stable developed country. In the near term, while its population is still young and growing, China must rush to modernize its economy and raise living standards. In the long term, it must find ways to care for a much larger number of dependent elderly without overburdening taxpayers or overwhelming families.
China will have a higher percentage of elderly than the United States will by 2040.
Elderly (Aged 60 & Over) as a Percent of the Population
        China    US
1970      7%    14%
2000     10%    16%
2015     15%    20%
2040     28%    25%
2050     31%    25%
There is one really big caveat that ought to be kept in mind when talking about the elderly in the year 2040: at least some of the ideas proposed by Aubrey de Grey as Strategies for Engineered Negligible Senescence (SENS) will be developed and widely used by the year 2040. A 60 year old body will be younger in 2040 than a 60 year old body is today. Methods of carrying out aging reversal will allow people to remain economically productive for decades longer.
Declining birthrates and the migration of children to cities are undermining the traditional role of children as the supporters and care-givers for the elderly.
The biggest problem, however, is that most of today’s workers—and hence most of tomorrow’s retirees— have no pension or health-care coverage at all. The great majority of Chinese continue to rely on the traditional form of old-age insurance: children. But as birthrates decline and urbanization breaks up extended families, this informal safety net is beginning to unravel. In China, demographers call it the “4-2-1 problem,” a reference to the fact that in many families one child will be expected to support two aged parents and four grandparents.
China is going to have a lower ratio of workers to elderly than the United States of America by 2040.
Aged Support Ratio
               2000   2040
US              3.9    2.3
China           6.4    2.0
South Korea     6.2    1.5
EU15            2.8    1.5
Singapore       6.4    1.5
Hong Kong       4.8    1.4
Japan           2.7    1.1
The UN projects that the size of China's working age population will peak in 2015. The rate of decline after that point depends on future fertility rates that are hard to predict.
Total Percentage Decline in the Working-Age Population, 2005-50:

Constant Fertility Scenario: -18%
Low Variant Scenario: -35%
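Spread over the 45 years from 2005 to 2050, those two scenario totals imply different average annual rates of shrinkage. A quick back-of-the-envelope conversion (my own arithmetic, not a figure from the report):

```python
# Convert the report's total working-age population declines (2005-2050)
# into the implied average annual rates of decline.

def annual_rate(total_decline, years):
    """Constant annual rate that compounds to the given total decline."""
    return 1 - (1 - total_decline) ** (1 / years)

years = 2050 - 2005
for name, total in [("Constant fertility", 0.18), ("Low variant", 0.35)]:
    print(f"{name}: {annual_rate(total, years) * 100:.2f}% per year")
```

The constant-fertility scenario works out to a decline of roughly 0.4 percent per year, and the low-variant scenario to nearly 1 percent per year, sustained for half a century.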
Pension coverage in China is largely limited to urban workers in the state-owned sector of the economy. In 2002, the “basic pension system” covered 45 percent of the urban workforce, mainly employees at state- and collectively owned enterprises. Although the government has begun to extend pension coverage to the private sector, participation remains minimal. A separate and more generous pension system for civil servants covers another 10 percent of the urban workforce. Rural workers are excluded from the basic pension system, although 11 percent participate in a small and voluntary rural pension system. All told, just 25 percent of China’s total workforce, urban and rural, have any pension provision at all. (See Figures 9 and 10.) By and large, government health insurance is limited to the same privileged groups, although overall coverage rates are somewhat higher than for pensions.
The whole report is 40 pages long and delves into many aspects of population aging, pension systems, tax rates, and other demographic trends in China including the rising male-to-female sex ratio. That rising male-to-female sex ratio will make China's elderly care problem even more difficult. Daughters-in-law provide most elderly care in China. But elders whose son(s) can find no wife will have no daughter-in-law to help care for them. At the same time, the movement of younger workers from the rural areas to the cities is separating sons and daughters from their aging parents.
Absent a functioning nationwide pension program, unforgiving arithmetic suggests there may be something approaching a one-to-one ratio emerging between elderly parents and the children obliged to support them. Even worse, from the perspective of a Confucian culture, a sizable fraction — perhaps nearly one-fourth — of these older Chinese will have no living son on whom to rely for sustenance. One need not be a novelist to imagine the intense social tensions such conditions could engender (to say nothing of the personal and humanitarian tragedies).
Second, and no less important, there is no particular reason to expect that older people in China will be able to make the same sort of contributions to economic life as their counterparts in Japan. In low-income economies, the daily demands of ordinary work are more arduous than in rich countries: The employment structure is weighted toward categories more likely to require intense manual labor, and even ostensibly non-manual positions may require considerable physical stamina. According to official Chinese statistics, nearly half of the country’s current labor force toils in the fields, and another fifth is employed in mining and quarrying, manufacturing, construction, or transport — occupations generally not favoring the frail. Even with continuing structural transformations, regular work in 2025 is sure to be much more strenuous in China than in Japan. Moreover, China’s older population may not be as hardy as peers from affluent societies — people likely to have been better fed, housed, and doctored than China’s elderly throughout the course of their lives.
In a new book, Bare Branches: Security Implications of Asia's Surplus Male Population (MIT Press), Valerie M. Hudson and Andrea M. den Boer warn that the spread of sex selection is giving rise to a generation of restless young men who will not find mates. History, biology, and sociology all suggest that these "surplus males" will generate high levels of crime and social disorder, the authors say. Even worse, they continue, is the possibility that the governments of India and China will build up huge armies in order to provide a safety valve for the young men's aggressive energies.
"In 2020 it may seem to China that it would be worth it to have a very bloody battle in which a lot of their young men could die in some glorious cause," says Ms. Hudson, a professor of political science at Brigham Young University.
Bare Branches offers some disheartening numbers: In 1993 and 1994, more than 121 boys were born in China for every 100 baby girls. (The normal ratio at birth is around 105; for reasons debated among biologists, humans seem naturally to churn out slightly more boys than girls.) In India during the period 1996 to 1998, the birth ratio was 111 to 100; in Taiwan in 2000, it was 109.5. In 1990 a town near New Delhi reported a sex ratio at birth of 156.
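The gap between a 121:100 birth ratio and the natural 105:100 ratio translates into a large surplus of males in each birth cohort. A quick calculation (my own arithmetic, ignoring differential mortality and marriage across age cohorts):

```python
# Fraction of boys in a birth cohort with no same-cohort bride available,
# given the sex ratio at birth (boys per 100 girls). Ignores mortality
# differences and marriage across cohorts.

def surplus_male_fraction(boys_per_100_girls):
    boys = float(boys_per_100_girls)
    girls = 100.0
    return (boys - girls) / boys

cohorts = [("Natural ratio", 105),
           ("China 1993-94", 121),
           ("India 1996-98", 111)]
for label, ratio in cohorts:
    pct = surplus_male_fraction(ratio) * 100
    print(f"{label} ({ratio}:100): {pct:.1f}% of boys unmatched")
```

At 121:100, roughly 17 percent of a male birth cohort has no same-cohort bride available, versus about 5 percent at the natural ratio (a surplus that, at the natural ratio, higher male mortality largely erases).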
Valerie Hudson argues that the shortage of females is not going to self-correct because the females and their parents cannot leverage the scarcity of females for self-benefit, and so there is no market incentive to have more female children. If certain free-market Ph.D. economists of my acquaintance (and the rest of you as well) have read this far, do you have any comments to offer on this point?
Given the imbalance in the sex ratio in China consider this question: Would China be more peaceful internally and less of a threat to the rest of the world if it has high marriage rates with lasting marriages and high sexual fidelity? Or would China be less of a problem if a sizable fraction of Chinese women never married or did not stay married? Consider the extremes. If all women marry at a young age, stay married, and remain sexually faithful to their husbands then any man who can't find a wife will never have a lover or wife. A society of high marital fidelity would make most male losers in the marriage sweepstakes permanent losers. This will create a group of men who are frustrated, bitter, and angry. But if the limited supply of women engage in a succession of relationships more men will have a chance of having a lover or wife at least part of the time. Would that make the society more peaceful and lower crime rates?
The answer to this question is not obvious to me. On the one hand, a society in which women are constantly shifting between relationships may seem to allow more men to be winners. But it would also cause men to continuously compete for women. In a society with the vast bulk of women paired up permanently to one man, some men will come to know that they are permanent losers. Those men might resign themselves and become passive and peaceful in defeat. Or will they? Will they stay ultra-competitive for decades because they don't know whether they might eventually win a woman?
Some argue we do not have to worry about China as a military power because an aging population is not a militarily vigorous one. But if rejuvenation therapies restore the youthful vitality to China's population even as China's sex ratio continues to be lopsided then a wealthier, more youthful China with a high ratio of males to females could become militarily aggressive. Advances in medical science could therefore produce a more dangerous China. On the other hand, Chinese people faced with the prospect of much longer lives might become opposed to aggressive military moves because they have too many future years of life to lose in a war.
The gender difference in appearance memory was not great, but it shows another area where women are superior to men in interpersonal sensitivity, said Terrence Horgan, lead author of the study and research fellow in psychology at Ohio State University.
“Women have an advantage when it comes to remembering things like the physical features, clothing and postures of other people,” Horgan said. “This advantage might be due to women being slightly more people-oriented than men are.”
The study also found that both men and women did better at remembering the appearance of women than they did remembering how men looked.
The appearance of women may be more memorable because women try to look different from each other.
Women in general may be more memorable than men because their hair and clothing styles and use of jewelry tends to be more varied than that of men. For example, in many offices men may look similar in their suits and ties. But women may be wearing necklaces and earrings, or have other jewelry or clothing that makes their appearance stand out more, Horgan said.
However, the results suggest women aren’t more memorable because people spend more time looking at them. The researchers measured how long participants in the last three studies looked directly at their partners. Overall, the participants didn’t look at women any longer than they looked at men.
Women are known to be better at reading body language and facial expressions. So they tend to focus more on visible qualities of a person.
For example, other studies have shown women have an advantage at using nonverbal cues to understand how others are feeling, and how they are likely to behave. Women also appear to be better at using nonverbal cues to understand someone’s personality traits.
Another reason that women may do a better job at remembering the appearance of other women than men do at remembering the appearance of other men is that heterosexual women are more sexually aroused by the sight of other women than heterosexual men are by the sight of other men. It would be interesting to see these researchers repeat their study using homosexuals and heterosexuals split out separately. It would also be interesting to see whether more variation in the men's clothing and less variation in the women's clothing would influence the outcome.
Kei Koizumi, US federal R&D budget analyst for the American Association for the Advancement of Science (AAAS), claims that most categories of research and development spending will see inflation-adjusted declines in the Bush Administration's long-term budget plan through FY2009.
The president has proposed budget decreases at nine of the 12 federal agencies with the largest R&D portfolios, with only the Department of Defense (DOD), the Department of Homeland Security (DHS), and the National Aeronautics and Space Administration (NASA) staying ahead of inflation. Large projected increases in NASA and DHS obscure the steep cuts in all other nondefense agencies. In fact, DHS will see a $100 million increase from FY2004 to FY2005, with small increases projected each following year, culminating in a 25 percent boost and record-breaking funding levels over five years after adjusting for inflation.
Although the space exploration programs at NASA will benefit from large funding increases, all other R&D areas will decline dramatically over the next five years, including Earth Science (down 15.9 percent), aeronautics (down 16.2 percent), and Biological and Physical Research (down 11.8 percent).
AAAS analysis shows that even a past favorite like the National Institutes of Health (NIH) is susceptible to cuts. Over the next five years, NIH's $27 billion portfolio will see a modest rise due to increases in biodefense research. But funding for non-biodefense programs will fall by seven percent.
Many R&D funding programs face steep cuts over the next five years:
- Department of Energy (DOE) programs will see dramatic decreases such as: energy R&D (down 21 percent by FY2009), fossil energy R&D (down 22 percent), and energy conservation (down 26 percent).
- Department of Agriculture (USDA) intramural research will decline by 19 percent and extramural research grants will see a 28 percent cut.
- At the Department of Commerce, the Bush Administration would eliminate the Advanced Technology Program (ATP), as well as cut the budgets of the National Oceanic and Atmospheric Administration (NOAA) and the National Institute of Standards and Technology (NIST) by 10.5 percent and 17.3 percent respectively by FY 2009.
"In order to meet deficit reduction targets, even agencies receiving modest increases like NIH and NSF will see their R&D funding fall beginning in FY 2006," Koizumi said.
Why does Bush think the US can not afford to spend more on science? Lots of reasons. Bush has signed into law a prescription drug benefit that is going to cost $534 billion over the next decade (and that estimate is probably low if past Medicare entitlement spending estimates are indicative). This is especially worrisome because as government spending on drugs increases the pressure to implement drug price controls will increase as well. By reducing the profitability of new drug development, price controls would lead to a drop in private sector funding of medical research. Other entitlements for the elderly are set to grow. The Iraq war and occupation are adding hundreds of billions of dollars in additional costs. Bush is effectively robbing the future to pay for more immediate demands of various interest groups and for his expensive foreign policy pursuits. See the chart at the bottom of the AAAS full report for how the projected changes in R&D funding break out through fiscal year 2009. See here for other formats for the same AAAS report.
With the President's FY 2005 budget proposal, total federal R&D investment during the first term will be increased 44%, to a record $132 billion in 2005, compared to $91 billion in FY 2001.
President Bush's 2005 budget request commits 13.5% of total discretionary outlays to R&D - the highest level in 37 years. Not since 1968 and the Apollo program have we seen an investment in science of this magnitude.
Of this, the Bush budget commits 5.7% of total discretionary outlays to non-defense R&D. This is the third highest level in the last 25 years.
Funding for Basic Research, the fuel for future technology development, is at an all-time high of $26.8 billion in FY 2005, a 26% increase from FY 2001.
The President has completed the doubling of funding for the National Institutes of Health (NIH). Funding for NIH during the four years of this Administration is increased more than 40% since FY 2001 to $28.6 billion.
Funding for NSF during the four years of this Administration is increased 30% over FY 2001 to $5.7 billion.
The White House leaves out the fact that most of the NIH budget doubling occurred during Clinton's term in office and was done in large part because some Republican and Democratic party US Senators decided it was a wise thing to do. Some of those Senators have since left office and NIH budget growth doesn't appear to be as well supported politically at this point. Sorry I don't have a URL for this brief aside on NIH budget politics but I read an account of how it happened a couple of years ago.
The White House also ignores what it intends to do in specific categories and what it intends to do beyond FY 2005. The growth in NASA and Department of Homeland Security spending is hiding cuts in other areas.
In my view most of the increase in NASA spending is a waste. The only exception is the plan to develop a nuclear powered space probe which will help to enable the development of defenses against asteroids. There are higher science priorities with much bigger financial and quality-of-life payoffs than a mission to the Moon and Mars.
Biological research can lengthen our lives and make us healthier, smarter, and generally more capable. It will eventually produce treatments that extend youth and middle age. That will increase the length of time people can work and would therefore allow us to avoid the looming financial catastrophe of tens of trillions of dollars in unfunded liabilities for care of the elderly as a growing fraction of the population becomes too old to work. Accelerating anti-aging and rejuvenation research is the best way to solve the demographic problem of aging populations. See Aubrey de Grey's writings on Strategies for Engineered Negligible Senescence (SENS) for a roadmap of the types of research we ought to be pursuing, research that could save us tens of trillions of dollars that will otherwise have to be spent on the aged. The ability to reverse aging would also unleash huge increases in productivity and economic growth, producing orders of magnitude more wealth than the cost of the research that makes it possible.
Energy research is another area that can pay for itself many times over. Newer energy technologies will reduce trade deficits, make our air healthier to breathe, and reduce the threat of terrorism by cutting the financial flows to the Middle East. Another benefit will be greatly reduced defense costs. Instead of cutting energy research we ought to launch a major effort, at an additional $10 billion per year, aimed at making oil obsolete by pursuing research into a number of alternatives. While Bush purports to be big on national defense he misses the obvious point that energy policy is an essential element of national security policy, and it is going to become even more important for national security in the future.
The Bush Administration's plans for research and development spending are short-sighted. Scientific advances can solve problems in ways that pay back orders of magnitude more than the original research costs to fund. Budget deficits and huge unfunded liabilities for those who will become elderly in the coming decades, combined with the threat of terrorism and greater global competition for a limited supply of oil, call for mammoth attempts to research and innovate our way to solutions.
Victor I. Klimov and Richard Schaller of the Los Alamos National Laboratory have shown that quantum dot nanocrystals can absorb light for conversion to electricity at a much higher efficiency than existing photovoltaic materials.
In ordinary photovoltaic cells, lots of sunlight goes to waste as it heats up the cell. New results suggest that solar cells made from nanocrystals can trade this wasteful heating for an electricity-generating boost.
Theoretical calculations indicate that nanocrystal-based solar cells could convert 60 percent of sunlight into electricity, say Richard D. Schaller and Victor I. Klimov of Los Alamos (N.M.) National Laboratory. The best solar cells today operate at an efficiency of about 32 percent.
Schaller and Klimov describe their results, the first observations of a long-sought cue ball effect in nanometer-scale crystals, in an upcoming Physical Review Letters.
A group at Lawrence Berkeley National Laboratory is pursuing a different approach that could convert over 50 percent of sunlight to electricity, and Wladek Walukiewicz of that group says it will take 3 years to determine whether their approach will be practical for production of solar cells. For more on that approach see my previous post Material Discovered For Full Spectrum Photovoltaic Cell.
The problem of how to cheaply convert sunlight to useful electric power is eventually going to be solved. It would be worth it to spend more on research in this area to make the needed breakthroughs happen sooner. The same holds true for battery technology and fuel cell technology.
An excellent article by Erin Anderssen and Anne McIlroy in the Canadian Globe And Mail summarizes research on child development and human violence. They report that Richard Tremblay has found that 2-year-olds are more physically aggressive than teenagers or adults but fortunately too uncoordinated to do much damage to others.
Are human beings born pure, as Rousseau argued, and tainted by the world around them? Or do babies arrive bad, as St. Augustine wrote, and learn, for their own good, how to behave in society?

Richard Tremblay, an affable researcher at the University of Montreal who is considered one of the world leaders in aggression studies, sides with St. Augustine, whom he is fond of quoting. Dr. Tremblay has thousands of research subjects, many studied over decades, to back him up: Aggressive behaviour, except in the rarest circumstances, is not acquired from life experience. It is a remnant of our evolutionary struggle to survive, a force we learn, with time and careful teaching, to master. And as if by some ideal plan, human beings are at their worst when they are at their weakest.
St. Augustine was obviously much closer to the truth.
What Dr. Tremblay and his colleagues around the world have now demonstrated is that the ability to feel rage exists the moment human beings take their first breaths. A four-month-old infant can show anger. And as they gain more control over their arms and legs, their mothers report increasing incidents of kicking and biting: They can also act in anger.
By the second year, aggressive behaviour peaks in temper tantrums, with slapping and pushing; according to Dr. Tremblay's work, a typical two-year-old, playing with others over the course of an hour, will commit one act of physical aggression for every four social interactions.
With teenagers, he says, researchers talk in terms of years or months or weeks between aggressive acts -- never hours -- though the incidents, obviously, are more severe. By their third birthdays, children have the motor skills to perform any of the acts of aggression an adult can. But at just that age, aggression begins to drop.
For almost everyone, it continues to drop for the rest of their lives. By Dr. Tremblay's calculation, only in about 5 per cent of men does the rate of aggression remain relatively stable into early adulthood. They are the most dangerous group to society.
The article tries to put what I consider to be an excessive environmental spin on the reason for the decline in physical aggression as children age. The fact that babies simultaneously become more physically coordinated and less violent at the same time strikes me as too much of a coincidence to be the result of teaching and discipline. Likely there is a genetically programmed stage of mental development that builds inhibiting neural circuits to control the physical outbursts.
The article vaguely refers to a study on the genetic and environmental factors that cause children to grow up to be antisocial. That is probably a reference to a New Zealand twins study which showed that the combination of a genetic variant for a low level of expression of monoamine oxidase A (MAOA) and childhood abuse produces much higher rates of adolescent and adult criminal violence. However, contrary to Anderssen and McIlroy, the study did not show that both the environmental and genetic factors had to be present to result in violence; it is just that the two factors together made children far more prone to grow up violent. Some children who were not abused still grew up to be violent, and some children who expressed MAOA at a high level did as well. There may be still more as yet unknown genetic variations, nutritional factors, toxins, social environmental factors, and other factors that contribute to a higher probability of violent behavior.
The article also refers to Adrian Raine's work using positron emission tomography (PET) scans that showed differences in the glucose consumption rates in the brains of murderers.
For the Biological Psychiatry study, Dr. Raine directed scientists at USC and the University of California at Irvine as they used positron emission tomography (PET) to scan the brains of 41 murderers who had pleaded not guilty by reason of insanity. The scientists also scanned the brains of 41 control subjects matched for known mental disorders and for age and gender. Mental disorders among the subjects included schizophrenia, organic brain damage and a history of head injury.
PET scans measure the uptake of blood sugar (glucose) in various brain areas during the performance of simple, repetitive tasks. (Glucose is the basic fuel that powers most cell functions. The amount used is directly related to the amount of cell activity.)
On average, the murderers showed significantly lower rates of glucose uptake in three areas of the brain -- the prefrontal cortex, the corpus callosum and the posterior parietal cortex. Their rates were 4, 18 and 4 percentage points lower, respectively, than the rates measured in control subjects performing the same tasks.
When the researchers compared the brain's two hemispheres for glucose uptake rates, they found that murderers consistently showed weaker activity in the amygdala and the hippocampus of the brain's left -- or more rational -- hemisphere. These glucose uptake rates were each 4 percentage points lower than the rates measured in control subjects performing the same tasks.
But the murderers showed stronger activity in the thalamus, the amygdala, and the hippocampus of the right -- or more emotional -- hemisphere. These glucose uptake rates were 6, 6 and 3 percentage points higher, respectively, than the rates measured in control subjects performing the same tasks.
Raine has also shown that psychopaths have distinct differences in the shapes of some parts of their brains and that violent criminals have less brain gray matter.
These discoveries together suggest that there are limits to how much violent tendencies can be reduced through environmental changes and different methods of teaching and disciplining children. It seems unlikely that teaching and socialization can fully compensate for less gray matter in those who commit violent acts as adults or for the larger corpus callosums and asymmetrical hippocampuses found in psychopaths.
In the next five to ten years, we think drugs that enhance memory are going to raise important issues of freedom of thought. Will you have a right to say no to these drugs if you are the only eyewitness to a crime? Could a future government say: "It's very important that you remember what you saw. We want you to take this drug at least until after you have testified in court."
How would you like to be forced to maintain an accurate memory of, say, a murder you witnessed? That would be difficult but understandable. However, imagine you were forced to more accurately remember your own rape. At the very least use of a memory-boosting technology for that purpose would be an argument for speedier trials and should come with an optional ability to reduce the clarity of the memory once the trial was completed.
I have previously argued with Boire on the question of whether there is an unlimited right to erase one's memories. However, in a follow-up clarification post on his position Boire acknowledged that the right to erase one's own memories should not be treated as unlimited regardless of circumstances.
Also, I should note that no right is "absolute." Not even something like freedom of speech or freedom of religion. I'm perfectly willing to accept reasonable "time, place, and manner" type restrictions on cognitive liberty.
As the mind becomes more malleable to both memory-erasure and false-memory-implant technologies, such technologies will pose a serious problem regardless of the positions governments take on cognitive rights. The advent of date rape drugs demonstrates that, once again, governments have no monopoly on the ability or willingness to violate the rights of individuals. My greater concern for the future in Western societies is that the drugs developed to alter and erase memories and change personalities will be used illegally by individuals against others without their knowledge.
We need effective technological defenses usable by ourselves against drugs and other technologies that alter our minds. Imagine, for instance, implanted sensors that would be able to signal us when the sensors detect drugs or other agents in our bodies. The ability to detect that we are under some form of cognitive attack would, for instance, allow a woman who has just consumed a date rape drug to call for help before losing consciousness. A really advanced implant would even be able to release counter-agents to block some or all of the effects of the cognitive state altering agents.
Boire is also concerned about a technology called brain fingerprinting, offered by a company called Brain Fingerprinting Laboratories. Its inventor claims the technique can detect whether a person is lying with far greater accuracy than a polygraph test.
Brain Fingerprinting, developed by Dr Larry Farwell, chief scientist and founder of Brain Fingerprinting Laboratories, is a method of reading the brain's involuntary electrical activity in response to a subject being shown certain images relating to a crime. Unlike the polygraph or lie detector to which it is often compared, the accuracy of this technology lies in its ability to pick up the electrical signal, known as a p300 wave, before the suspect has time to affect the output.
Boire's concern is that people could be compelled to take a brain fingerprinting test and that this would remove a person's right to remain silent. Keep in mind that in much (most?) of the world suspects and defendants do not have such a right in the first place. A technology for detecting deception therefore may not so much violate an existing right as coerce testimony toward a more accurate result. Though if interrogation becomes a much easier way to produce accurate confessions, one can expect at least some governments to make greater use of it.
At least in the West I see greater threats to cognitive liberty coming from the actions of non-state actors than from governments. William Gibson's narrator in Neuromancer said something along the lines of "The streets find uses for things". Reflecting my own view of reality, my misremembrance of that quote, which I will now claim as my own, is "The streets find their own uses for technology". In free societies those street uses of cognitive state-altering technologies are what I think we have the most to worry about.
A number of afflictions of humans in the modern era result from the fact that natural selection adapted us to environments very different from those of modern industrial societies. Obesity was not much of a problem in the past because for most of human history there was rarely much surplus food. The same is the case with salt cravings: in the past there was usually not enough sodium in the foods our ancestors had to eat. Similarly, drug and alcohol addiction are obviously the result of humans' lack of adaptation to substances they rarely encountered in the past. However, susceptibility to alcoholism is less frequent among Mediterranean populations that have been growing grapes and making wine for centuries. So there are human populations that have developed at least partial genetic adaptations to ethanol consumption.
Some scientists have been advancing theories that explain some autoimmune disorders as the result of lack of exposure to diseases that used to be common in the human past. Among the conditions suspected of having this cause are the painful digestive tract disorders inflammatory bowel disease (IBD) and Crohn's Disease. Joel Weinstock MD, a professor of internal medicine at the University of Iowa, and colleagues have demonstrated that eggs of the pig whipworm, when consumed by sufferers of Crohn's Disease (CD) and Ulcerative Colitis (UC), greatly reduce symptoms of those diseases.
To assess safety and efficacy with repetitive doses, two patients with CD and two with UC were given 2500 ova at 3-wk intervals as maintenance treatment using the same evaluation parameters. RESULTS: During the treatment and observation period, all patients improved clinically without any adverse clinical events or laboratory abnormalities. Three of the four patients with CD entered remission according to the Crohn's Disease Activity Index; the fourth patient experienced a clinical response (reduction of 151) but did not achieve remission. Patients with UC experienced a reduction of the Clinical Colitis Activity Index to 57% of baseline. According to the IBD Quality of Life Index, six of seven patients (86%) achieved remission. The benefit derived from the initial dose was temporary. In the maintenance period, multiple doses again caused no adverse effects and sustained clinical improvement in all patients treated every 3 wk for >28 wk. CONCLUSIONS: This open trial demonstrates that it is safe to administer eggs from the porcine whipworm, Trichuris suis, to patients with CD and UC. It also demonstrates improvement in the common clinical indices used to describe disease activity. The benefit was temporary in some patients with a single dose, but it could be prolonged with maintenance therapy every 3 wk. The study suggests that it is possible to downregulate aberrant intestinal inflammation in humans with helminths.
At the moment the concoction cannot be stored for long, so doctors or hospitals would have to prepare fresh batches of the eggs for their patients. But a new German company called BioCure, whose sister company BioMonde sells leeches and maggots for treating wounds, hopes it will soon solve the storage problem.
If you find this idea completely revolting and perversely want to be even more revolted then see a picture of these worms or see this other picture of the worms so you can intensify your feelings of being grossed out by the idea of eating digestive tract worm eggs. Yes, picture worms wriggling around in your guts and say "Oh that is so gross! That is so disgusting!"
Okay, are you done imagining swallowing slimy worms that wriggle around in your mouth? Back to the science.
A pair of researchers at Scripps Research Institute, Nora Sarvetnick and Cecile King, are proposing a mechanism by which a lack of exposure to pathogens triggers the autoimmune responses that cause diseases such as type 1 diabetes and rheumatoid arthritis.
According to the new hypothesis that Nora Sarvetnick and her colleague Cecile King are proposing, the root cause of autoimmunity is a failure to make an adequate response to an infection—in other words, an immune system that is not working hard enough (one that is hyporesponsive). This hyporesponsiveness creates a condition known as lymphopenia, where there is a reduction in the number of T cells in the body. Often people with autoimmune diseases like Type 1 diabetes, lupus, and rheumatoid arthritis have low T cell numbers.
If the body detects low levels of T cells, it resorts to homeostatic expansion, a mechanism that has never been associated with autoimmunity before. Under homeostatic expansion, growth signals stimulate the existing T cells in the body to divide and multiply.
This homeostatic process should normally fill the body, but sometimes that does not happen due to disrupted growth signals or a viral infection that causes the number of T cells to go down even as the body is trying to increase their numbers. These are the conditions that lead to autoimmunity, says Sarvetnick.
Sarvetnick, King, and colleagues Alex Ilic and Kersten Koelsch have shown that, in a mouse strain prone to developing type 1 diabetes (NOD mice), they can prevent the disease by feeding the mice bacterial cell wall components that keep up their T cell counts.
In their paper, Sarvetnick and her colleagues showed that NOD mice can be protected against diabetes by challenging them with a swill of bacterial cell wall components called CFA, which increased the T cell count and curtailed the development of diabetes in the mice.
To show that this effect was due to the increase in T cell count following the CFA administration and not some other cause, they passively stimulated the immune systems of NOD mice by infusing them with T cells. These infusions also prevented the NOD mice from developing diabetes.
According to Sarvetnick's and King's hypothesis, the protection against diabetes results from exposure to these pathogens because it keeps the body full of immune cells. Increased numbers of T cells act as a buffer against the emergence of self-reactive T cells by shutting down homeostatic expansion.
This hypothesis could explain a discrepancy in the number of cases of autoimmune disease in developed and developing countries. Disease rates have been on the rise in developed countries in the last 50 years compared to their developing neighbors, presumably because people in less developed countries are exposed to more pathogens.
Now, some of you may prefer to eat worm eggs, even wriggling worm eggs, to treat your autoimmune disorders. But I think Sarvetnick, King, and company are performing a useful public service by searching for a treatment based on bacterial cell wall components.
In the longer run expect to see the development of vaccines that stimulate the immune system in ways that greatly decrease the odds of development of autoimmune disorders. Some of these future vaccines may even cure existing autoimmune disorders.
Rattan Lal (another page of his here), director of the Ohio State University Carbon Management and Sequestration Center, argues that no-till farming could pull a substantial portion of carbon out of the atmosphere as an anti-global-warming strategy.
Traditional plowing, or tilling, turns over the top layer of soil. Farmers use it, among other reasons, to get rid of weeds, to make it easier to apply fertilizers and pesticides, and to plant crops. Tilling also enriches the soil as it hastens the decomposition of crop residue, weeds and other organic matter.
Still, the benefits of switching to no-till farming practices outweigh those of traditional planting.
Since the mechanization of agriculture began a few hundred years ago, scientists estimate that some 78 billion metric tons – more than 171 trillion pounds – of carbon once trapped in the soil have been lost to the atmosphere in the form of CO2.
Lal and his colleagues estimate that no-till farming is practiced on only 5 percent of all the world's cultivated cropland. Farmers in the United States use no-till methods on 37 percent of the nation's cropland, which results in saving an estimated 60 million metric tons of soil CO2 annually.
"If every farmer who grows crops in the United States would use no-till and adopt management practices such as crop rotation and planting cover crops, we could sequester about 300 million tons of soil carbon each year," said Lal, who is also a professor of soil science at Ohio State.
"Each year, 6 billion tons of carbon is released into the planet's atmosphere as fossil fuels are burned, and plants can absorb 20 times that amount in that period of time," he said. "The problem is that as organisms decompose and plants breathe, CO2 returns to the atmosphere. None of it accumulates in the soil."
Out of that 6 billion tons of carbon released into the atmosphere each year the United States probably accounts for approximately a quarter, since the US accounts for about a quarter of all energy use. Those are rough figures. But compare that approximately 1.5 billion tons released with the 300 million tons Lal says we could capture in the soil: about 20% of our current carbon release could be captured with changed farming practices. Of course, as population and per capita GDP grow, energy consumption will grow. The amount of carbon captured by changed farming practices would likely not increase along with the population and energy consumption increases.
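The arithmetic above can be made explicit with a short back-of-the-envelope script. The figures are the rough estimates quoted in this post (6 billion tons of carbon per year globally, a US share of about one quarter, and Lal's 300 million ton sequestration estimate), not precise data:

```python
# Back-of-the-envelope check of the carbon fractions discussed above.
# All figures are the rough estimates quoted in the post.

global_emissions_tons = 6e9     # carbon released per year, worldwide
us_energy_share = 0.25          # approximate US share of world energy use
no_till_capture_tons = 300e6    # Lal's estimate for full US no-till adoption

us_emissions_tons = global_emissions_tons * us_energy_share
fraction_captured = no_till_capture_tons / us_emissions_tons

print(f"US emissions: {us_emissions_tons / 1e9:.1f} billion tons of carbon")
print(f"Fraction capturable by no-till: {fraction_captured:.0%}")
# -> US emissions: 1.5 billion tons of carbon
# -> Fraction capturable by no-till: 20%
```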
Keep in mind that we may not want to reverse all the carbon dioxide increase caused by farming and the burning of fossil fuels. First of all, release of carbon dioxide caused by the human development of farming may already have prevented onset of an ice age. Also, higher carbon dioxide appears to be allowing plants to grow into the Negev Desert and other deserts. We may be able to genetically engineer crop plants to grow faster in the presence of higher atmospheric carbon dioxide and those plants may be more drought resistant.
Still, some carbon sequestration may eventually become desirable and changing of farming practices might be a cheap way to accomplish it. A friend points out that sequestration in soil even has the advantage over deep ocean sequestration in that the soil carbon is more easily accessible should we need it again in the much longer term to use as a warming gas. Given that the interglacial warming periods like the one we are living in now are the exceptions to the average colder periods that have characterized Earth's history keeping carbon accessible seems prudent.
On the other hand, for planet warming we could make a small amount of carbon go a much longer way. Methane is a more potent warming gas than carbon dioxide. If thousands of years from now it ever became necessary to release greenhouse gases into the atmosphere we could run nuclear fusion power plants to generate the power to reduce carbon dioxide with hydrogen to produce methane. Then we could release the methane into the atmosphere to serve as a warming gas.
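To see roughly how much further each ton of carbon would go as methane, here is a sketch of the stoichiometry and warming leverage. The reaction shown is standard Sabatier methanation, used only as an illustration of "reducing carbon dioxide with hydrogen," and the 100-year global warming potential of 25 for methane is an assumed round number; published values vary:

```python
# Rough sketch: warming leverage of releasing a ton of carbon as CH4
# rather than as CO2. Assumes Sabatier methanation,
#   CO2 + 4 H2 -> CH4 + 2 H2O,
# and a 100-year GWP of ~25 for methane (an assumption, not a fixed value).

C, H, O = 12.011, 1.008, 15.999       # atomic masses (g/mol)
mass_co2 = C + 2 * O                  # ~44.0 g/mol
mass_ch4 = C + 4 * H                  # ~16.0 g/mol
gwp_ch4 = 25                          # assumed 100-year warming potential

as_co2 = mass_co2 / C                 # tons CO2 per ton of carbon (~3.66)
as_ch4 = mass_ch4 / C                 # tons CH4 per ton of carbon (~1.34)
co2_equivalent = as_ch4 * gwp_ch4     # warming-equivalent tons of CO2

print(f"1 t C as CO2: {as_co2:.2f} t")
print(f"1 t C as CH4: {as_ch4:.2f} t, ~{co2_equivalent:.0f} t CO2-equivalent")
print(f"Warming leverage per ton of carbon: ~{co2_equivalent / as_co2:.0f}x")
```

Under these assumptions each ton of carbon released as methane causes roughly nine times the century-scale warming it would cause as carbon dioxide, which is the sense in which a small amount of carbon "goes a much longer way."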
Also see my previous post Iron Enriching Southern Ocean Pulls Carbon Dioxide From Atmosphere.
Teens are known to be more likely to smoke if their mothers smoked during pregnancy. Among the possible explanations for this phenomenon is a genetic predisposition that increased the odds of the mothers being smokers in the first place. However, epidemiological studies that have controlled for many such factors support the idea of a biological cause resulting from the prenatal exposure. Dr. Theodore Slotkin of Duke University has produced evidence in a rat model suggesting that prenatal nicotine exposure causes brain damage that creates a predisposition to nicotine addiction.
The rats exposed to nicotine before birth suffered loss of brain cells and a decline in brain activity that persisted throughout adolescence and into adulthood, the team found.
When given doses of nicotine for a two-week period as adolescents, the earlier exposed rats showed a weaker brain response in circuits using acetylcholine -- a natural chemical messenger that plays a critical role in learning and memory -- as compared to rats that did not experience the prenatal exposure. Nicotine's activity in the brain stems from its ability to mimic acetylcholine. The earlier exposure also worsened the decline in brain activity during nicotine withdrawal and led to an increase in the amount of brain cell injury induced by the drug, they reported.
"The current study suggests that the lasting neurotoxic effects of prenatal exposure to nicotine from maternal smoking during pregnancy may worsen the long-term consequences of adolescent smoking -- effects that may increase the likelihood that an individual will take up and keep smoking," Slotkin said.
Specifically, the team explained, the reduced response of acetylcholine systems in the adolescent brain following prenatal exposure might lead teens to self-administer nicotine in an attempt to replace the brain's functional loss. Furthermore, that deficient brain response might drive higher cigarette consumption.
Here is the abstract of the paper, which includes a link to the full paper.
Another group of Duke researchers has previously shown in a rat model that initial exposure to nicotine more strongly predisposed the rats toward later nicotine cravings if that exposure happened in adolescence rather than in adulthood. Brains that are still developing are generally more vulnerable to toxins, so this result is not too surprising.
It is also worth noting that hostile personalities are more prone to nicotine addiction. This latest result suggests the possibility that prenatal nicotine exposure might make people more hostile later in life.
UPTON, NY— Scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have produced new evidence that brain circuits involved in drug addiction are also activated by the desire for food. The mere display of food — smelling and tasting favorite foods without actually eating them — causes increases in metabolism throughout the brain. Increases of metabolism in the right orbitofrontal cortex, a brain region that controls drive and pleasure, also correlate strongly with self-reports of desire for food and hunger.
“These results could explain the deleterious effects of constant exposure to food stimuli, such as advertising, candy machines, food channels, and food displays in stores,” says Brookhaven physician Gene-Jack Wang, the study’s lead author. “The high sensitivity of this brain region to food stimuli, coupled with the huge number and variety of these stimuli in the environment, likely contributes to the epidemic of obesity in this country.” The study appears in the April 2004 issue of NeuroImage.
This reaction is an example of an evolutionary adaptation that is no longer adaptive in modern environments. While calorie malnutrition was a major cause of death for almost all of human history, in many parts of the world today it is rarely a cause of death. Human brain reactions to food are therefore more often maladaptive than adaptive. Humans need biotechnologies that help them adapt to the new environments they have created for themselves, and our response to plentiful food is a major source of the need for such adaptive biotechnologies.
Previous Brookhaven work has shown similarities between the brains of food addicts and drug addicts.
Brookhaven scientists have conducted previous research showing that the right orbito-frontal cortex is involved in compulsive behaviors characteristic of addictive states, and that this brain region is activated when addicted individuals crave drugs such as cocaine (see related story). They have also shown that food stimulation, as done in this study, increases levels of dopamine, a neurotransmitter involved in pleasure and reward, in the brain’s dorsal striatum (see related story). Additionally, to better understand the relationship of the dopamine system to obesity, they looked at the brain circuits of obese individuals and found that, like drug addicts, these individuals had fewer dopamine receptors than normal control subjects (see related story).
In their most recent research the Brookhaven scientists used positron emission tomography (PET) scans to study the response of normal people to food and found that food exposure increases metabolism in almost all major areas of the brain.
The researchers found that food stimulation significantly increased whole brain metabolism. Metabolism was higher in all regions of the brain examined, except for the occipital cortex, which controls vision and would not be affected. The areas most affected were the superior temporal, anterior insula, and orbitofrontal cortices. Food stimulation also resulted in increases in self-reports of hunger and desire for food. Increases in metabolism in the right orbitofrontal cortex were the ones that were most significantly correlated with increased reports of hunger.
People suffering from food addiction problems would benefit from removing reminders about food from their environment. That is hard to do given the sheer number of sources of food reminders: restaurant street signs, billboards, advertisements in magazines, on radio, and on TV, TV portrayals of people eating, and the presence of a kitchen in most dwellings. Still, there are ways to reduce one's exposure to food smells and images. For instance, remove all visible food in a house: no fruit basket on the table, no food cans visible on shelves. One could listen to commercial-free satellite radio as a way to avoid food ads. Also, avoid looking at the types of magazines that carry a lot of food ads.
Technology may eventually provide greater control over one's environmental stimuli. For instance, some web sites already provide ways to configure personalized ad profiles of areas of interest. Digital TV advances ought to allow one to separate what show one is going to watch from what ads one sees. Perhaps some day TV channels will let one specify the types of ads one wants to watch and thereby provide a way to avoid, say, food or alcohol ads.
In the longer run food addiction problems will be controllable by drugs that will suppress appetite. Also, it will eventually become possible to turn up the amount of brown fat cells that burn off excess calories.
Gene-Jack Wang has done other interesting work about addiction. Wang and his colleagues previously showed that methamphetamine-induced brain damage is long-lasting. For details see my previous post Partial Recovery From Methamphetamine-Induced Brain Damage.
It will be interesting to see whether the reduced dopamine receptor levels in the right orbito-frontal cortex of people with addiction problems in all cases precede the addiction, are caused by the addiction, or a mixture of both. Also, are obese people more prone to drug addictions? Given that some obese people feel motivated to take methamphetamine to suppress appetite, separating out the causes and effects is difficult.
Young women who took iron supplementation for 16 weeks significantly improved their attention, short-term and long-term memory, and their performance on cognitive tasks, even though many were not considered to be anemic when the study began, according to researchers at Pennsylvania State University.
The study, the first to systematically examine the impact of iron supplementation on cognitive functioning in women aged 18 to 35 (average age 21), was presented at Experimental Biology 2004, in the American Society of Nutritional Sciences' scientific program. Dr. Laura Murray-Kolb, a postdoctoral fellow in the lab of Dr. John Beard, says the study shows that even modest levels of iron deficiency have a negative impact on cognitive functioning in young women. She says the study also is the first to demonstrate how iron supplementation can reverse this impact in this age group.
Baseline cognition testing, looking at memory, stimulus encoding, retrieval, and other measures of cognition, was performed on 149 women who were classified as either iron sufficient, iron deficient but not anemic, or anemic. All of the women underwent a health history, and the research design controlled or took into account any differences in smoking, social status, grade point average, and other measures. The women were then given either 60 mg of elemental iron supplementation or placebo treatment for four months. At the end of that period, the 113 women remaining in the study took the same tests again.
On the baseline test, women who were iron deficient but not anemic completed the tasks in the same amount of time as iron sufficient women of the same age, but they performed significantly worse. Women who were anemic also performed significantly worse, but in addition they took longer. The more anemic a woman was, the longer it took her to complete the tasks. However, supplementation and the subsequent increase in iron stores markedly improved cognition scores (memory, attention, and learning tasks) and time to complete the tasks.
This finding has great implications, says Dr. Murray-Kolb, because the prevalence of iron deficiency remains at 9 percent to 11 percent for women of reproductive age and 25 percent for pregnant women. In non-industrialized countries, the prevalence of anemia is over 40 percent in non-pregnant women and over 50 percent for pregnant women and for children aged five to 14. According to current prevalence estimates, iron deficiency affects the lives of more than two billion people worldwide.
The findings also are important, say the researchers, because they illustrate the impact of even modest iron deficiency on cognitive functioning, including memory, attention, learning tasks, and time to complete tasks.
Some of the known consequences of iron deficiency are reduced physical endurance, an impaired immune response, temperature regulation difficulties, changes in energy metabolism, and in children, a decrease in cognitive performance as well as negative effects on behavior. While iron deficiency was once presumed to exert most of its deleterious effects only if it had reached the level of anemia, it has more recently become recognized that many organs show negative changes in functioning before there is any drop in hemoglobin concentration.
Authors of the study are Dr. Murray-Kolb, Dr. Beard, both of the Nutritional Sciences Department at Penn State, and Dr. Keith Whitfield, of Penn State's Biobehavioral Health Department.
Two billion people would operate at a higher level of cognitive function if they received adequate amounts of iron. That represents a huge amount of wasted potential, and that is from just one micronutrient deficiency. There may well be other widespread micronutrient deficiencies (choline, omega 3 fatty acids, zinc, and still others) that are reducing cognitive function. But even worse, some of these deficiencies are surely holding back brain development before and after birth and preventing full genetic potential from ever being realized. For people who grew up eating a nutritionally deficient diet, supplementation in adulthood will be beneficial. But if a person goes through a stage of development with deficiencies persisting throughout that stage, then supplementation that comes later in life will be too late to allow full brain development.
Unfortunately the taboos around IQ help to prevent a full airing of the implications of this sort of research. That is a shame. We need to talk about how to raise IQs and improve mental function by improving nutrition and in other ways as well. For instance, British researchers found that improved nutrition makes prisoners behave much better. Would it also lower crime rates if released prisoners were required to take supplements? Also, a recent report provides preliminary evidence that zinc will lower the severity of attention deficit hyperactivity disorder (ADHD) in children. It may be the case that millions of children would be learning more and disrupting classes less if they were getting more zinc in their diets.
In order to improve the average level of mental functioning I advocate both more aggressive fortification of foods and a more rapid reduction in the emissions of neurotoxic compounds such as mercury and lead. Also, a lot more research into nutritional state and cognitive function should be done. Our brains are our greatest assets and we should err on the side of doing whatever we can to ensure they develop to reach their fullest genetic potential.
Salting the Southern Ocean with iron results in increased growth of phytoplankton that take carbon dioxide from the atmosphere and extract the carbon into a form that eventually sinks deep into the ocean.
A remarkable expedition to the waters of Antarctica reveals that iron supply to the Southern Ocean may have controlled Earth's climate during past ice ages. A multi-institutional group of scientists, led by Dr. Kenneth Coale of Moss Landing Marine Laboratories (MLML) and Dr. Ken Johnson of the Monterey Bay Aquarium Research Institute (MBARI), fertilized two key areas of the Southern Ocean with trace amounts of iron. Their goal was to observe the growth and fate of microscopic marine plants (phytoplankton) under iron-enriched conditions, which are thought to have occurred in the Southern Ocean during past ice ages. They report the results of these important field experiments (known as SOFeX, for Southern Ocean Iron Enrichment Experiments) in the April 16, 2004 issue of Science.
Previous studies have suggested that during the last four ice ages, the Southern Ocean had large phytoplankton populations and received large amounts of iron-rich dust, possibly blown out to sea from expanding desert areas. In order to simulate such ice-age conditions, the SOFeX scientists added iron to surface waters in two square patches, each 15 kilometers on a side, so that concentrations of this micronutrient reached about 50 parts per trillion. This concentration, though low by terrestrial standards, represented a 100-fold increase over ambient conditions, and triggered massive phytoplankton blooms at both locations. These blooms covered thousands of square kilometers, and were visible in satellite images of the area.
Each of these blooms consumed over 30,000 tons of carbon dioxide, an important greenhouse gas. Of particular interest to the scientists was whether this carbon dioxide would be returned to the atmosphere or would sink into deep waters as the phytoplankton died or were consumed by grazers. Observations by Dr. Ken Buesseler of Woods Hole Oceanographic Institution and Dr. Jim Bishop of Lawrence Berkeley National Laboratories (reported separately in the same issue of Science) indicate that much of the carbon sank to hundreds of meters below the surface. When extrapolated over large portions of the Southern Ocean, this finding suggests that iron fertilization could cause billions of tons of carbon to be removed from the atmosphere each year. Removal of this much carbon dioxide from the atmosphere could have helped cool the Earth during ice ages. Similarly, it has been suggested that humans might be able to slow global warming by removing carbon dioxide from the atmosphere through a massive ocean fertilization program.
This report provides support for the idea that dissolving large quantities of iron into the ocean could be used as a technique to slow or perhaps even reverse the build-up of carbon dioxide in the atmosphere. The report of the Woods Hole scientists is particularly interesting because it suggests that some of the carbon pulled from the atmosphere by ocean iron fertilization will stay out of the atmosphere for long enough to be worthwhile as an approach for preventing global warming.
The controversial idea of fertilizing the ocean with iron to remove carbon dioxide from the atmosphere gained momentum in the 1980s. Climate and ocean scientists, as well as ocean entrepreneurs and venture capitalists, saw potential for a low-cost method for reducing greenhouse gases and possibly enhancing fisheries. Plankton take up carbon in surface waters during photosynthesis, creating a bloom that other animals feed upon. Carbon from the plankton is integrated into the waste products from these animals and other particles, and settles to the seafloor as "marine snow" in a process called the "biological pump." Iron added to the ocean surface increases the plankton production, so in theory fertilizing the ocean with iron would mean more carbon would be removed from surface waters to the deep ocean. Once in the deep ocean, the carbon would be "sequestered" or isolated in deep waters for centuries. The oceans already remove about one third of the carbon dioxide released each year due to human activities, so enhancing this ocean sink could in theory help control atmospheric carbon dioxide levels and thus regulate climate.
However, estimates have been produced suggesting a cost of only US$1-5 per tonne of carbon fixed. This compares very favorably with US$50-200 per tonne for other proposed sinks, so both commercial and political interest is high.
A controversial report by the National Academy of Sciences in 1992 looked at iron fertilization, among other geoengineering options. Although the NAS noted some caveats, it concluded that iron fertilization does have one attractive feature: a relatively cheap price tag. Running 360 ships full-time to fertilize 46 million square kilometers of ocean would cost somewhere between $10 billion and $110 billion a year.
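As a rough check on those NAS figures, the per-square-kilometer and per-ship averages below are my own derived numbers, not figures from the report itself:

```python
# Implied averages from the 1992 NAS iron fertilization cost estimate:
# 360 ships fertilizing 46 million km^2 at $10-110 billion per year.
area_km2 = 46e6
ships = 360
cost_low, cost_high = 10e9, 110e9  # annual cost range, US$

for cost in (cost_low, cost_high):
    per_km2 = cost / area_km2
    per_ship_millions = cost / ships / 1e6
    print(f"${per_km2:,.0f} per km^2 per year, "
          f"${per_ship_millions:,.1f}M per ship per year")
```

The spread (roughly $217 to $2,400 per km² per year) shows how wide the uncertainty band was even in that early estimate.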
Understanding of the impact on greenhouse gas balances through ocean fertilisation by the addition of iron or nutrients, such as nitrates and phosphates, is still limited (Annex B). With regard to iron fertilisation, there are substantial uncertainties about the overall response of the Southern Ocean ecosystem, and it is possible fertilisation could lead to increased emissions of other greenhouse gases, nitrous oxide and methane, significantly offsetting any increased uptake of carbon dioxide. Estimated costs of ocean fertilisation are highly uncertain. One source estimates £3 to £37/tC, but this may be rather optimistic due to uncertainties over the effectiveness of the process. The other option, ocean fertilisation by nutrients, appears to be even less practical than iron fertilisation. Costs are also uncertain but are likely to be higher, at £30 to £120/tC.
But those estimates will become more precise as additional research work more accurately measures how much of the carbon fixed by ocean plants ends up sinking into the ocean depths.
The fertilization of oceans with iron would need to be done continuously to maintain a steady rate of extraction of carbon from the atmosphere to counteract carbon dioxide emissions from fossil fuel burning. Cessation of seeding would very quickly lead to a return to lower levels of plankton.
Projections from this experiment indicate that if the polar oceans were completely seeded in such a fashion, atmospheric CO2 would decrease by about 10%. This would substantially mitigate the greenhouse effect caused by CO2. Such plankton growth has other benefits as well. One potential benefit may be that the increase in plankton would lead to an increase in the populations of other ocean fauna, such as whales and dolphins, that feed on plankton. Another benefit, again, is that it is relatively inexpensive. A continual iron-seeding program would cost only about $10 billion a year. Yet another benefit is that plankton growth stops about a week after seeding, so if the plankton were determined to have a detrimental effect, the effort could be quickly disbanded.
But why store carbon in the oceans?
Faced with the stark reality, even the International Panel on Climate Change has admitted that we may have to consider what it calls ‘carbon management strategies’ to complement reductions in greenhouse gas emissions. One option is to store the excess carbon on land; this is already being done in deep geological formations, abandoned mines and the like.
But it is the oceans that have the greatest natural capacity to absorb and store carbon. On an annual basis, the surface of the ocean absorbs about 30% of the carbon in the atmosphere, less during El Niño years. But over very long timescales, of thousands of years, as much as 85% is absorbed by the oceans. The ocean contains an estimated 40,000 billion tons of carbon, as compared to 750 billion tons in the atmosphere and about 2200 billion tons on land. This means that, were we to take all the atmospheric CO2 and put it in the deep ocean, the concentration of CO2 in the ocean would change by less than 2%.
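Those reservoir figures make the "less than 2%" claim easy to verify:

```python
# Carbon reservoir sizes quoted above, in billions of tons of carbon.
ocean, atmosphere, land = 40_000, 750, 2_200

# Moving the entire atmospheric carbon inventory into the deep ocean
# would change the ocean's carbon content by this fraction:
fractional_change = atmosphere / ocean
print(f"{fractional_change:.1%}")  # just under 2%, as the article states
```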
Iron ocean fertilization could be a boon for fish growth as the fish ate away at the much larger quantities of plankton. So part of the cost of this proposal might be paid for in larger fish catches. If a property rights system could be worked out for the fertilized ocean fisheries, then a portion of the sales of harvested fish could go toward paying for the ocean fertilization. A carbon tax on all oil, natural gas, and coal extraction could be levied as well.
My recent few days' absence here was due to attendance at a Liberty Fund conference on liberty, biological determinism, and Steven Pinker's The Blank Slate: The Modern Denial of Human Nature. The conference was also attended by bloggers Alex Tabarrok and Tyler Cowen of Marginal Revolution, the mentally nimble and facially expressive (really, his speedily changing mental state is visible for all to see) Daniel Drezner (see Dan's own description of the conference's highlights), the towering intellectual figure Megan McArdle, and heck-of-a-nice-guy art dealer and historian David Nishimura.
One very gratifying moment at the Liberty Fund conference came when Ph.D. academic economists Alex and Tyler both emphatically agreed with me that the search for asteroids that might strike the Earth is a woefully underfunded public good. Well, that leads to the topic for this post. One big asteroid hit could ruin your whole day and even end your life along with the lives of hundreds of millions or billions of others. I don't want to wake up some day and hear a news report that we all have about three days to live because a just-discovered asteroid is about to kill us all and cannot be stopped because we didn't prepare in advance.
Former NASA astronaut Russell L. "Rusty" Schweickart is currently Chairman of the B612 Foundation which is dedicated to the development of anti-asteroid defenses to protect planet Earth from Near Earth Asteroids (NEAs). The U.S. Senate Subcommittee on Science, Technology, and Space held hearings on April 7, 2004 about Near Earth Objects (NEO) which included discussion of what should be done about asteroids that may strike planet Earth. Schweickart recently testified before those hearings presenting arguments on the feasibility and desirability of developing a system for diverting any large asteroid found to be on a collision course with Earth.
It became immediately clear to us that the combination of advanced propulsion technologies and small space qualified nuclear reactors, both operating in prototype form already, would be powerful enough, with reasonable future development, to deflect most threatening asteroids away from a collision with the Earth, given a decade or more of advance warning.
Nevertheless we saw two immediate problems.
First we lack the specific knowledge of the characteristics of NEAs necessary to design anything approaching a reliable operational system. We could readily show that the technology would exist within a few years to get to and land on an asteroid. We also determined that after arriving at the asteroid we would have enough propulsive energy available to successfully deflect the asteroid from an Earth impact a decade or so later. What was missing however was knowledge about the structure and characteristics of asteroids detailed enough to enable successful and secure attachment to it.
Second we recognized that before we would be able to gather such detailed information about NEAs there would likely be many public announcements about near misses and possible future impacts with asteroids which would alarm the general public and generate a growing demand for action. We felt strongly that there needed to be some legitimate answer to the inevitable question which will be put to public officials and decision makers, "and what are you doing about this?"
These two considerations led us to the conclusion that the most responsible course of action would be to mount a demonstration mission to a NEA (one of our choosing) which would accomplish two essential tasks: 1) gather critical information on the nature of asteroid structure and surface characteristics, and 2) while there, push on the asteroid enough to slightly change its orbit, thereby clearly demonstrating to the public that humanity now has the technology to protect the Earth from this hazard in the future.
We furthermore determined that this demonstration mission could be done with currently emerging capabilities within 10-12 years.
We therefore adopted the goal of "altering the orbit of an asteroid, in a controlled manner, by 2015".
Astronaut Edward Lu, President of the B612 Foundation, also testified at the Senate hearings, arguing:
Recent developments have now given us the potential to defend the Earth against these natural disasters. To develop this capability we have proposed a spacecraft mission to significantly alter the orbit of an asteroid in a controlled manner by 2015.
Why move an asteroid? There is a 10 percent chance that during our lifetimes there will be a 60 meter asteroid that impacts Earth with energy 10 megatons (roughly equivalent to 700 simultaneous Hiroshima sized bombs). There is even a very remote one in 50,000 chance that you and I and everyone we know, along with most of humanity and human civilization, will perish together with the impact of a much larger kilometer or more sized asteroid. We now have the potential to change these odds.
There are many unknowns surrounding how to go about deflecting an asteroid, but the surest way to learn about both asteroids themselves as well as the mechanics of moving them is to actually try a demonstration mission. The first attempt to deflect an asteroid should not be when it counts for real, because there are no doubt many surprises in store as we learn how to manipulate asteroids.
Why by 2015? The time to test, learn, and experiment is now. A number of recent developments in space nuclear power and high efficiency propulsion have made this goal feasible. The goal of 2015 is challenging, but doable, and will serve to focus the development efforts.
How big of an asteroid are we proposing to move? The demonstration asteroid should be large enough to represent a real risk, and the technology used should be scaleable in the future to larger asteroids. We are suggesting picking an asteroid of about 200 meters. A 200 meter asteroid is capable of penetrating the atmosphere and striking the ground with an energy of 600 megatons. Should it land in the ocean (as is likely), it will create an enormous tsunami that could destroy coastal cities. Asteroids of about 150 meters and larger are thought to be comprised of loose conglomerations of pieces, or rubble piles, while smaller asteroids are often single large rocks. The techniques we test on a 200 meter asteroid should therefore also be applicable to larger asteroids.
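A back-of-envelope kinetic energy calculation reproduces Lu's figures reasonably well. The density (3000 kg/m³) and impact speed (20 km/s) below are typical assumed values for stony asteroids, not numbers from the testimony:

```python
import math

MEGATON_J = 4.184e15   # joules per megaton of TNT
HIROSHIMA_KT = 15      # approximate Hiroshima yield, kilotons

def impact_megatons(diameter_m, density_kg_m3=3000, speed_m_s=20_000):
    """Kinetic energy of a spherical impactor, in megatons of TNT.

    Density and speed are assumed typical values, not from the testimony.
    """
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * speed_m_s ** 2 / MEGATON_J

print(f"200 m asteroid: ~{impact_megatons(200):.0f} MT")  # roughly 600 MT, matching Lu
print(f"60 m asteroid:  ~{impact_megatons(60):.0f} MT")   # ~16 MT; Lu's 10 MT implies slower or lighter assumptions
print(f"10 MT is ~{10_000 / HIROSHIMA_KT:.0f} Hiroshima bombs")  # roughly the 700 Lu cites
```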
Lu argues that the nuclear propulsion system proposed for the Jupiter Icy Moons Orbiter spacecraft should be used to move an asteroid.
How can this be accomplished? This mission is well beyond the capability of conventional chemically powered spacecraft. We are proposing a nuclear powered spacecraft using high efficiency propulsion (ion or plasma engines). Such propulsion packages are currently already under development at NASA as part of the Prometheus Project. In fact, the power and thrust requirements are very similar to the Jupiter Icy Moons Orbiter spacecraft, currently planned for launch around 2012. The B612 spacecraft would fly to, rendezvous with, and attach to a suitably chosen target asteroid (there are many candidate asteroids which are known to be nowhere near a collision course with Earth). By continuously thrusting, the spacecraft would slowly alter the velocity of the asteroid by a fraction of a cm/sec – enough to be clearly measurable from Earth.
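To see why "a fraction of a cm/sec" is plausible, here is the Δv = F·t/m arithmetic. The ~1 N thrust level, one year of continuous thrusting, and the asteroid mass (a 200 m body at 3000 kg/m³) are my own illustrative assumptions, not figures from the testimony:

```python
import math

def delta_v_cm_s(thrust_n, years, asteroid_mass_kg):
    """Velocity change (cm/s) from continuous low thrust: dv = F*t/m."""
    seconds = years * 365.25 * 24 * 3600
    return thrust_n * seconds / asteroid_mass_kg * 100

# Assumed mass of a 200 m diameter asteroid at 3000 kg/m^3.
mass = 3000 * (4 / 3) * math.pi * 100 ** 3

print(f"{delta_v_cm_s(1.0, 1.0, mass):.2f} cm/s")  # a fraction of a cm/s, as Lu describes
```

Even that tiny velocity change, applied a decade before a predicted impact, shifts the asteroid's arrival position by thousands of kilometers.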
I have previously argued that the development of nuclear ion propulsion for the JIMO mission is an excellent idea. Well, development of a space nuclear propulsion system for any number of missions is a great idea because it then allows the propulsion system to be used on something that may some day save millions or perhaps even billions of lives. The development of a space nuclear propulsion system ought to be greatly accelerated so that we have a method to protect us against asteroids as soon as possible.
Encouragingly the Bush Administration has allocated $3 billion over the next 5 years for Project Prometheus. NASA has more on Prometheus.
Aside from moving too slowly to develop technologies to use against an asteroid on a collision course, NASA's current strategy has one big problem: the amount of money going into finding asteroids on a collision course with Earth is still chump change.
NASA spends a modest $3.5 million per year as part of the Spaceguard Survey search for large asteroids, the sort that could cause global damage, including a global "winter" that might last years and could kill off some species and possibly threaten civilization.
The current mission of the NASA Near-Earth Object Program is focused on finding only the bigger asteroids and not even all of them.
NASA’s Near-Earth Object Program Office will focus on the goal of locating at least 90 percent of the estimated 2,000 asteroids and comets that approach the Earth and are larger than about 2/3-mile (about 1 kilometer) in diameter, by the end of the next decade.
“These are objects that are difficult to detect because of their relatively small size, but are large enough to cause global effects if one hit the Earth,” said Dr. Donald K. Yeomans of JPL, who will head the new program office. “Finding a majority of this population will require the efforts of researchers at several NASA centers, at universities and at observatories across the country, and will require the participation by the international astronomy community as well.”
A panel of experts working at NASA's request has recommended a bold new search for potentially dangerous asteroids, including smaller objects that could cause regional damage in an Earth impact.
The price tag: At least $236 million.
The United States is spending hundreds of billions in Iraq for unclear benefit. The US government spends billions on many other undertakings of questionable benefit. In the space program both the Space Shuttle and the International Space Station come to mind as programs with dubious benefits and huge price tags. By contrast, we know that somewhere out there multiple asteroids are on collision paths with planet Earth and some are large enough to kill millions or billions of people. Efforts to discover them and plot their orbits will both help to protect human lives and further the advance of space science. Efforts to develop technologies to deflect asteroids will both protect human lives and increase our capabilities to do things in space for other purposes. The effort to discover and plot the future trajectories of all asteroids ought to be increased by a couple of orders of magnitude, and considerable funding should be allocated to the development of spacecraft capable of deflecting asteroid paths.
Update: Tyler Cowen draws my attention to a previous report he linked to from the Volokh Conspiracy about an effort to provide monetary incentives for private individuals and groups to discover asteroids.
"Amateur astronomers could receive awards of $3,000 for discovering and tracking near-Earth asteroids under legislation approved by the House of Representatives on Wednesday.
"Given the vast number of asteroids and comets that inhabits Earth's neighborhood, greater efforts for tracking and monitoring these objects are critical," said Rep. Dana Rohrabacher, sponsor of the legislation that passed 404-1.
Rohrabacher's legislation is H.R. 912, the Charles "Pete" Conrad Astronomy Awards Act.
H.R. 912, the Charles "Pete" Conrad Astronomy Awards Act, named for the third man to walk on the moon, establishes awards to encourage amateur astronomers to discover and track near-earth asteroids. The bill directs the NASA Administrator to make awards, of $3,000 each, based on the recommendations of the Smithsonian Minor Planet Center. Earth has experienced several near misses with asteroids that would have proven catastrophic, and the scientific community relies heavily on amateur astronomers to discover and track these objects.
There is a US Senate equivalent of that bill as S. 1855 for the 108th Congress. However, it is sitting in the Senate Committee on Commerce, Science, and Transportation. This bill deserves more attention.
Tyler also links to a Gregg Easterbrook Easterblogg post on how the threat from asteroids is, statistically speaking, roughly the same as that posed by commercial airliner risks.
Should humanity simply assume its luck will hold? Many don't think so. As Nathan Myhrvold, the chief technology officer at Microsoft, has written, "Most estimates of the mortality risk posed by asteroid impacts put it at about the same risk as flying in a commercial airliner. However, you have to remember that this is like the entire human race riding the plane."
Here is a potentially huge threat that could be protected against at a cost that probably ranges in the billions or at most a few tens of billions of dollars. In the process of developing the knowledge and technology needed to protect against it we will both gain scientific knowledge about the solar system and will gain technologies that are useful for doing other things up in space. Contrast the cost and benefit of doing this with the cost of the International Space Station which has costs for US taxpayers alone that many estimate to be as high as $100 billion if not more (also see here and here for more estimates in the $100 billion range). The ISS accomplishes very little in terms of science produced. An asteroid protection program might literally save all our lives. The risk is high enough to justify expenditures to protect us.
The way newborn lambs regulate their temperature in the first few weeks of life using a special deposit of brown fat could give clues for tackling obesity in humans, according to Imperial College London scientists.
Unlike normal white fat that stores surplus energy, brown fat generates heat in response to cold or excess caloric intake. While some mammals such as rodents maintain this 'good' fat throughout life, humans are similar to lambs: brown fat is present in the newborn to act as an internal central heating system maintaining body temperature and preventing hypothermia, but rapidly disappears as brown fat is irreversibly replaced by normal white fat.
Now, researchers based at Imperial's Wye campus in Kent are determining the molecular switch in lambs that transforms brown fat into normal white fat, and investigations are underway to determine whether this conversion could be reversed and used as a new fat busting technique.
Professor Michael Lomax, head of Imperial's Animal Science Research section at Wye and leader of the project says:
"Obesity is fast becoming one of the biggest problems for the Western world. In the UK statistics suggest 20 per cent of us are overweight and in the USA the situation is even worse with more than 60 per cent of the population either overweight or clinically obese.
"While researchers continue to investigate how to increase the body's natural appetite suppressants most of us at some point will have resorted to some kind of diet to move those couple of extra stubborn pounds. But by reactivating natural brown fat we could lose weight without even trying.
In most cells, mitochondria use the energy they liberate to make ATP, the fuel that drives chemical reactions in living organisms. But in brown fat cells, Uncoupling Protein 1 (UCP-1) interferes with this process, forcing the cells to release energy as heat. However, researchers are unsure how the molecular switch is flicked during development, turning UCP-1 expression off after birth in some mammals and not others.
During development so called master fat determination genes or PPARs govern whether an immature cell commits to becoming a fat cell, explains Dr Fouzia Sadiq of Imperial's Animal Science Research section.
"A further signal, PGC- 1 alpha is then needed to convert immature fat cells into brown fat that expresses UCP-1 rather than normal white fat," she says.
"But what we don't know is the underlying mechanism that regulates the loss of UCP-1 activity after birth. If we can establish this then we will be in a much better position to understand how to switch back on the signals that make immature fat cells develop into brown fat."
To establish whether PPARs and PGC-1 alpha play a role in switching UCP-1 off the researchers looked at expression levels during late pregnancy and over the first month after birth. The results indicate that levels of UCP-1 closely mirror levels of PPARs and PGC-1 alpha, suggesting that they are the key switches that control conversion of immature fat cells into brown rather than white fat.
"Having established the key role of PPARs and PGC-1 alpha we're now focusing on what drugs and natural compounds could reverse the process," says Dr Sadiq.
"Already the drug isoprenaline has been shown to increase levels of PGC-1 alpha and PPARs, with a subsequent increase in UCP-1 after birth. Now we're looking at whether synthetic and natural activators of the genes that express PPARs and PGC-1 alpha have the same effect."
Obesity is a consequence of the fact that we humans no longer live in the environments we evolved to be adapted to. The rising prevalence of obesity is a sign of just how maladapted we are to modern life. Another less noticed sign of maladaptation is the physical reaction many experience during an argument in a business meeting. When adrenaline rushes through your body and readies it for "fight or flight" you are experiencing a maladaptive response. You aren't going to fight and you aren't going to go running down the hallway to escape. The stress of each "fight or flight" response causes damage to accumulate that ages the body, and it would be good to be able to avoid that stress by suppressing the physiological reaction to anger and frustration.
To avoid getting overweight in the first place, the development of effective appetite suppression drugs seems to be what is called for. However, for those already overweight, the ability to turn up one's fat cells to burn off the fat as heat would be very valuable. Of course, that will generate more heat. So once methods of converting white fat cells into heat-generating brown fat cells are developed one can even imagine someone saying "I'm going to take off these pounds as soon as the weather gets cold enough".
Drugs that would turn up heat generation from fat cells would also have value for those who have to work outside in cold weather.
Think about what the future will look like once weight control becomes possible by use of safe pharmaceuticals. Obesity will become rare. Given that substantial advances are being made in our understanding of both appetite and fat cell operation, and also given that the rate of advance in biotechnology as a whole is accelerating, it seems most likely that 20 years from now obesity will be a rare condition, found mostly in people who need to gain weight for some purpose (e.g. for a movie role or for an extreme cold weather sport) or who have some unusual desire to be fat.
Before obesity becomes a rare condition I expect we will first witness the near total disappearance of both corrective glasses and contact lenses. While LASIK and other technologies for reshaping lenses are making some inroads in fixing eyesight problems, the really promising advances are coming from the ability to replace aged hard lenses with soft and flexible lenses. See my previous posts on this here and here.
Here is a pretty wild idea. Some day brain scans may be used to determine what each mind is in best condition to learn each day.
It's the future as imagined by Max Cynader, director of the Brain Research Centre at the University of British Columbia. "Forty or 50 years from now, a student will stick her head in a scanner and see what she could best learn that day," he says. "That's a dream. We aren't there, but we can see how to get to there from here."
Many types of scanners use radiation. The PET (Positron Emission Tomography) scan uses radioactive glucose. The CAT or CT scan uses X-rays and so involves radiation exposure. In fact, CT scan doses are much larger than the dose from conventional X-ray images. Well, one alternative is Magnetic Resonance Imaging. Leaving aside the question of whether the high magnetic fields might cause some damage (my guess: yes!) it faces a more immediate obstacle: MRI requires a person to lie still for up to 20 minutes. This is not practical since it would take too much time, the kids wouldn't lie still, and too many machines would be needed. But if a means could be developed to get a good view of the brain on a daily basis without causing any radiation damage then why not check each brain to see what it appears to be up for learning?
It seems more likely that methods will first be found to enhance a brain's capacity to learn each particular type of material. One idea I've previously suggested is to use drugs to cycle more rapidly between wakefulness and the sleeping brain states in which memories are consolidated. It makes sense to enhance memory formation when one is experiencing things that are worth remembering. The person who has a boring clerical job doesn't want to go to sleep only to consolidate every boring detail of every day of drudgery. But when studying for a test or learning new technologies it would be very helpful to enhance memory formation.
The amount of information we can remember from a visual scene is extremely limited and the source of that limit may lie in the posterior parietal cortex, a region of the brain involved in visual short-term memory, Vanderbilt psychologist René Marois and graduate student J. Jay Todd have found. Their results were published in the April 15 edition of Nature.
"Visual short-term memory is a key component of many perceptual and cognitive functions and is supported by a broad neural network, but it has a very limited storage capacity," Marois said. "Though we have the impression we are taking in a great deal of information from a visual scene, we are actually very poor at describing its contents in detail once it is gone from our sight."
Previous findings have determined that an extensive network of brain regions supports visual short-term memory. In their study, Todd and Marois showed that the severely limited storage capacity of visual short-term memory is primarily associated with just one of these regions, the posterior parietal cortex.
Todd and Marois used functional magnetic resonance imaging (fMRI), a technique that reveals the brain regions active in a given mental task by registering changes in blood flow and oxygenation in these regions, to identify where the capacity limit of visual short-term memory occurs.
The brains of research participants were scanned with fMRI while they were shown scenes containing one to eight colored objects. After a delay of just over a second, the subjects were queried about the scene they had just viewed.
While the subjects were good at remembering all of the objects in scenes containing four or fewer objects, they frequently made mistakes describing displays containing a larger number of objects, indicating that the storage capacity of visual short-term memory is about four.
The fMRI results revealed that activity in the posterior parietal cortex strongly correlated with the number of objects the subjects were able to remember. The magnitude of the neural response in this brain area increased with the number of objects viewed up to about four and leveled off after that, even when additional objects were presented.
A different team, led by Edward Vogel of the University of Oregon at Eugene, was able to use signals measured by electrodes attached to the scalp to precisely measure the size of each person's visual working memory.
A large increase in the subject's brain activity on the four-dot test indicated that his or her memory capacity had not been pushed to its limit. No increase in electrical activity indicated that his or her working memory had topped out on the two-dot test. By graphing these responses, the team worked out the exact size of each subject's working memory.
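The plateau logic described above — an individual's capacity is roughly the set size at which the neural response stops growing — can be sketched as a toy calculation. This is a minimal illustration with made-up numbers and an arbitrary rise threshold, not Vogel's actual analysis method.

```python
# Hedged sketch: estimate an individual's visual working memory capacity
# as the set size where the measured response stops rising appreciably.
# The threshold and the data below are illustrative assumptions.

def estimate_capacity(set_sizes, activity, min_rise=0.05):
    """Return the set size at which activity levels off."""
    capacity = set_sizes[0]
    for i in range(1, len(set_sizes)):
        if activity[i] - activity[i - 1] > min_rise:
            capacity = set_sizes[i]
        else:
            break
    return capacity

# Illustrative data: the response grows up to four objects, then plateaus.
sizes = [1, 2, 3, 4, 6, 8]
signal = [0.20, 0.45, 0.70, 0.90, 0.92, 0.91]
print(estimate_capacity(sizes, signal))  # 4
```

A subject whose signal plateaued after the two-dot test would, by the same logic, come out with a capacity of two.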
It is likely that the measured differences in visual memory have some genetic basis. With that in mind it would be interesting to use Vogel's technique to compare measured visual working memory with BDNF gene variations that affect visual and episodic memory capabilities.
Another team at Northwestern University and Drexel University has used fMRI to demonstrate that the problem solving mechanism that produces the "Eureka!" moment of discovering an answer works by a different mental mechanism than what is used to solve problems by more conventional methods.
Mark Jung-Beeman and colleagues mapped both the location and electrical signature of neural activity using functional magnetic resonance imaging (fMRI) and the electroencephalogram (EEG). Neural activity was mapped with fMRI while the participants were given word problems--which can be solved quickly with or without insight, and evoke a distinct Aha! moment about half the time they're solved. Subjects pressed a button to indicate whether they had solved the problem using insight, which they had been told leads to an Aha! experience characterized by suddenness and obviousness.
While several regions in the cerebral cortex showed about the same heightened activity for both insight and noninsight-derived solutions, only an area known as the anterior Superior Temporal Gyrus (aSTG) in the right hemisphere (RH) showed a robust insight effect. The researchers also found that 0.3 seconds before the subjects indicated solutions achieved through insight, there was a burst of neural activity of one particular type: high-frequency (gamma band) activity that is often thought to reflect complex cognitive processing. This activity was also mapped to the aSTG of the RH, providing compelling convergence across experiments and methods.
Problem-solving involves a complex network of brain regions to encode, retrieve, and evaluate information, but these results show that solving verbal problems with insight requires at least one additional component. Further, the fact that the effect occurred in RH aSTG suggests what that process may be: integration of distantly related information. Distinct neural processes, the authors conclude, underlie the sudden flash of insight that allows people to "see connections that previously eluded them."
Some problems are easier to solve because they require only straightforward procedures which one has been trained to use. For example, if one solves a math problem with a known method of solution there is no "Eureka!" moment when one calculates the answer. But when a solution is found by noticing previously unobserved connections there is more a sense of revelation. It is this latter case that involves a burst of activity in a part of the brain called the anterior Superior Temporal Gyrus (aSTG).
“For thousands of years, people have said that insight feels different from more straightforward problem-solving,” he said.
“We believe this is the first research showing that distinct computational and neural mechanisms lead to these breakthrough moments.”
The paper by Mark Jung-Beeman et al. is available online here: Neural Activity When People Solve Verbal Problems with Insight.
It would be interesting to know whether people who are considered more creative in their fields have a bigger anterior Superior Temporal Gyrus (aSTG). Every time a part of the brain is discovered as key for some function the obvious question that arises is just how valuable would it be to enhance that part of the brain. The aSTG for reaching insights and the posterior parietal cortex for working short-term visual memory both strike me as useful areas to enhance to make one better at scientific and engineering work.
On the subject of other brain functions that it would be great to enhance see my post Low Latent Inhibition Plus High Intelligence Leads To High Creativity?
Advocates see robots serving not just as helpers - carrying out simple chores and reminding patients to take their medication - but also as companions, even if the machines can carry on only a semblance of a real dialogue.
The ideal results: huge savings in medical costs, reduced burdens on family and caretakers, and old and sick people kept in better health.
"This technology is really needed for the global community," said Russell Bodoff, executive director at the Center for Aging Services Technologies in Washington, D.C. "If you look 30 years out, we have what I would call a global crisis in front of us: that we will have many more aging people than we could ever deal with."
MACHIDA, Japan- With an electronic whir, the machine released a dollop of "peach body shampoo," a kind of body wash. Then, as the cleansing bubbling action kicked in, Toshiko Shibahara, 89, settled back to enjoy the wash and soak cycle of her nursing home's new human washing machine.
Some argue for immigration to supply workers to care for a growing population of the aged. But for that solution to work in the short term the immigrants must be numerous enough to drive wages down to levels that elderly people on fixed incomes can afford. And if wages are that low, two problems immediately become apparent:
If import of cheap labor is not a viable solution then there are only two cost-effective solutions to the financial problems caused by aging populations:
It is pretty simple. Either we need to eliminate the use of human labor to take care of the elderly or we need to stop the transformation of younger bodies into elderly bodies. Now me, I prefer the second option (and I hope you do too). But since the development of rejuvenation therapies may take two or three decades it makes sense to also pursue the first option to reduce costs in the short and medium term. But since the projected costs of taking care of the elderly are going to become so huge it would also be very cost-effective to spend more on biomedical research aimed at developing rejuvenation therapies.
Video surveillance cameras are more widely used and popular in Britain than in the United States. The origins of this popularity can be traced back to a single incident in which a video camera (in Britain commonly called CCTV, for Closed Circuit Television) recorded two 10-year-old boys leading 2-year-old Jamie Bulger away to kill him. A CCTV recording at a shopping center led to the eventual identification and arrest of the suspects. Ever since then the British public has supported and pressed for ever wider installation of video surveillance cameras.
With some observers predicting the country will have more surveillance cameras than people within a decade, civil liberty groups foresee a bleak, Orwellian future, where privacy is a thing of the past.
The British, already more surveilled by video cameras than the population of any other country, may soon be watched by tens of millions of cameras.
Despite the pitfalls of blanket surveillance, though, industry analysts predict that the number of CCTV cameras in Britain will soar to 25 million by 2007.
Think about that 25 million number for video cameras used for surveillance. Is that realistic? The total population of the UK is over 60 million people. Where will all the cameras be installed that would allow the numbers to add up to 25 million? Some will be installed in buses, trains, taxi cabs, police cars, bus stations, train stations, airports, stores, banks, office buildings, and other commercial and public locations. Many such vehicles and facilities already have video cameras today in both the United States and Great Britain. A major airport or a large building could easily get hundreds or perhaps even thousands of cameras located in staircases, hallways, elevators, lobbies, and garages, and aimed outside at approaches. Street lampposts are another place where cameras can be installed. Given that the costs are dropping for cameras, recording media, and network bandwidth, the 25 million number seems plausible in the longer run, though it is hard to see how tens of millions will be added in just a few years.
While governments and commercial establishments are embracing video cameras, so are private citizens. Home CCTV for personal safety and convenience is also being embraced as costs fall and security concerns mount. In Britain at current exchange rates the cameras range anywhere from approximately $30 to $150 US dollars, with complete home starter kits running around $500.
One big limitation on the utility of video surveillance is that there are too many cameras providing video feeds and it is too expensive to pay watchers to monitor tens of millions of them simultaneously. Most cameras are chiefly useful for after-the-fact viewing, to identify who committed a crime only after it has been committed. Image processing that automates the identification of crimes in progress would allow costs to fall much further and lead to even more widespread use of video surveillance.
Another capability would increase the demand for video cameras: automated computer recognition of faces. A recent report from State University of New York at Stony Brook suggests a breakthrough on computer automated facial recognition by recognizing changes in the positions of facial muscles when a person makes different facial expressions.
Guan takes two snaps of a person in quick succession, asking subjects to smile for the camera. He then uses a computer to analyse how the skin around the subject's mouth moves between the two images. The software does this by tracking changes in the position of tiny wrinkles in the skin, each just a fraction of a millimetre wide.
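Guan's software is not described in enough detail to reproduce, but the core operation in the excerpt above — measuring how a tiny patch of skin shifts between two snapshots — resembles classic block matching. Below is a minimal sketch on synthetic frames; the patch size, search window, and sum-of-squared-differences criterion are my own illustrative assumptions, not the actual system.

```python
# Hedged sketch (not Guan's actual method): estimate the displacement of a
# small image patch between two frames by exhaustive block matching.

def best_offset(frame1, frame2, top, left, size=3, search=2):
    """Return the (dy, dx) shift minimizing sum of squared differences."""
    patch = [row[left:left + size] for row in frame1[top:top + size]]
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = 0
            for y in range(size):
                for x in range(size):
                    err += (frame2[top + dy + y][left + dx + x] - patch[y][x]) ** 2
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic 8x8 frames: a bright 3x3 "wrinkle" moves down 1 pixel, right 2.
f1 = [[0] * 8 for _ in range(8)]
f2 = [[0] * 8 for _ in range(8)]
for y in range(3):
    for x in range(3):
        f1[2 + y][2 + x] = 9
        f2[3 + y][4 + x] = 9
print(best_offset(f1, f2, top=2, left=2))  # (1, 2)
```

A per-person "signature" in such a scheme would presumably be the whole field of these displacement vectors, one per tracked wrinkle, rather than a single offset.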
As equipment costs drop and computer technologies for automated recognition of people and activities advance, the demand for automated video surveillance will grow and we will live with increasing numbers of cameras watching what we do.
In an interview with the MIT Technology Review biogerontologist Aubrey de Grey states that treatments that would triple mouse life expectancy could be developed within 10 years.
TR: You believe that tripling the remaining lifespan of two-year old mice is as little as 10 years away.
De Grey: That’s right, with adequate funding. The sort of funding that I tend to talk about is pretty modest, really—less than the amount the United States already spends on the basic biology of aging. I’m talking about a maximum of $100 million per year for 10 years. With that sort of money, my estimate is we would have a 90 percent chance of success in producing such mice.
Aubrey advocates use of an animal model to demonstrate that rejuvenation therapies could be developed for humans and he has founded the Methuselah Mouse Foundation to provide awards to scientists who break new records in mouse longevity.
It is very unfortunate that more money is not flowing into rejuvenation therapy development. With a level of funding for rejuvenation therapy development that is less than 3% of the current yearly NIH budget, tens or hundreds of millions more of us would have a chance to eventually become young again.
Aubrey has given previous interviews about reversing the aging process here and here. My Aging Reversal archive has many other posts about Aubrey's views on why we can reverse aging within the lifetimes of many people who are currently alive and why we ought to try much harder to do the research that will let us reverse aging. Also see Aubrey's website about Strategies for Engineered Negligible Senescence (SENS) and how bodies could be treated so that they do not become older from one year to the next.
Update: A dwarf mouse named Yoda has turned 4 which is equivalent to about 136 human years.
ANN ARBOR, MI - Yoda, the world's oldest mouse, celebrated his fourth birthday on Saturday, April 10, 2004. A dwarf mouse, Yoda lives in quiet seclusion with his cage mate, Princess Leia, in a pathogen-free rest home for geriatric mice belonging to Richard A. Miller, M.D., Ph.D., a professor of pathology in the Geriatrics Center of the University of Michigan Medical School.
Yoda was born on April 10, 2000 at the U-M Medical School. At 1,462 days old, Yoda is now the equivalent of about 136 in human-years. The life span of the average laboratory mouse is slightly over two years.
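The human-equivalent figure quoted above follows from a simple proportional scaling, sketched below. The lifespan constants (a roughly 750-day average mouse lifespan, per "slightly over two years", and a roughly 70-year human lifespan) are my assumptions for illustration, not figures from the press release.

```python
# Hedged sketch of the proportional mouse-to-human age conversion:
# scale the mouse's age by the ratio of typical human to mouse lifespans.

def mouse_age_in_human_years(mouse_days,
                             avg_mouse_lifespan_days=750,   # "slightly over two years"
                             avg_human_lifespan_years=70):  # assumed human figure
    return mouse_days / avg_mouse_lifespan_days * avg_human_lifespan_years

print(round(mouse_age_in_human_years(1462)))  # 136
```

As the post below notes, this kind of linear translation should be taken with a grain of salt, since Yoda's strain is not a natural mouse population.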
“Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He's the oldest mouse we've seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his fourth birthday. 100-year-old people are much more common than four-year-old mice.”
Genetic modifications of his pituitary and thyroid glands along with a reduced production of insulin make Yoda a dwarf who gets cold easily. Most of us have already reached our full sizes and so even when analogous forms of genetic engineering can be done to humans Yoda's modifications are not going to do us any good. However, every type of intervention that extends life provides insights that may lead to interventions that could be done to extend the lives of adult humans.
To put into perspective Yoda's human-equivalent of 136 years of life, consider that the record for the longest-lived human is generally accepted to be French woman Jeanne Calment, who lived over 122 years. But attempts to convert mouse years into human years have to be taken with a grain of salt. Genetically Yoda is not a natural mouse, and the genetic engineering done to create his strain effectively gives the entire strain an average life expectancy higher than that of regular mice. So why use natural mouse life expectancies to translate Yoda's age into human years?
The most important lesson demonstrated by Yoda's new mouse longevity record is that genetic manipulations can extend life expectancy. It may seem obvious to some readers to expect that, yes, life expectancy ought to be able to be improved by genetic manipulations. Still, scientists who demonstrate that mouse life extension can be done with today's biotechnology add weight to the argument that we can develop techniques to extend the lives of humans currently living rather than in some distant future.
Jonathan Gregory, a climatologist at the University of Reading, UK, says global warming could start runaway melting on Greenland within 50 years, and it will "probably be irreversible this side of a new ice age". The only good news is that a total meltdown is likely to take at least 1000 years.
We desperately need rejuvenation therapies that will allow us to live long enough to witness the complete melting of Greenland.
A meltdown of the massive ice sheet, which is more than 1.8 miles thick, would raise sea levels by an average seven yards, threatening countries such as Bangladesh, islands in the Pacific, and parts of Florida.
Further complicating the models are the contradictory effects a warmer climate may have on polar ice sheets. As temperatures rise, evaporation from the surrounding oceans will increase, sending more moisture inland. This will increase snowfall over high-elevation accumulation zones. On the other hand, the simultaneous increase in summer rains could accelerate the melting. So the net effect of increased precipitation is hard to predict.
One final uncertainty is the fate of the Gulf Stream, which delivers warm tropical water to the North Atlantic and keeps Europe much warmer than it would otherwise be. Some scientists believe that the Gulf Stream is weakening in response to the massive influx of fresh water from shrinking glaciers. If it became weak enough, they say, Greenland would chill down again and its ice sheet might stabilize.
In a nutshell we don't really know what is going to happen. But even if the prediction of melting is correct that does not mean humans in the future will not find a way to prevent the melting. It is inevitable (barring the destruction of the human race) that climate engineering techniques capable of preventing huge ice masses from melting will be developed, because the amount of scientific knowledge and technological capability will be so great in the future that even willful climate modification will become possible. One approach might be to find ways to increase the reflectivity or albedo of the seas around Greenland so that they absorb less sunlight and therefore become cooler. Another approach would be to develop devices that float on the surface and use wave energy to squirt water into the air. This would increase evaporation and hence could increase precipitation and ice build-up if used selectively during winters and upwind of Greenland.
Some people are clearly not worried about the prospect of flooded coastal cities decades hence. Not the least bit bothered by the idea that he was decreasing the albedo of an iceberg and causing it to melt more rapidly, Danish artist Marco Evaristti went searching for the ideal iceberg off western Greenland and painted it red.
Facing temperatures of minus 23 degrees C (minus 9 degrees F), it took about two hours for the 40-year-old artist to paint the exposed tip of the iceberg, which was about 900 square meters (1,080 square yards) in size.
He had a team of 20 people to help him and obviously at least 21 people are not afraid to accelerate the melting of a massive chunk of ice. See his red iceberg and an even nicer high resolution red iceberg picture on Evaristti's website.
Chicago police are more than doubling the number of video cameras watching city streets, with the total going from 30 to 80. At the same time the police are adding gunshot detectors to the cameras for pinpointing the locations of gunfire.
Saying they are improving something that works, Mayor Richard M. Daley and Chicago police officials Tuesday announced expansion of Operation Disruption, in which camera units are placed in areas to reduce violent crime and drug activity.
Fifty camera units to be equipped with gunshot-detection technology will be added to the 30 units installed in areas prone to gang violence and narcotics sales, Daley said at a news conference at police headquarters, 3510 S. Michigan Ave.
Existing pods will be retrofitted with the same technology as the new ones, and will be able to pinpoint gunshots within 20 feet and transmit the data via a microwave network to two police surveillance centers, officials said.
Expect many more types of detectors to be developed and deployed for public safety and law enforcement functions. How much longer will it be before there are detectors that can pinpoint a scream, a cry for help, or the sound of cars colliding? It is possible to conceive of image processing algorithms that can detect a person collapsing on the ground or a person being chased by another person.
Nellie Joyce Carter lives in the 800-block of North Harding, which has had a camera for seven months. She says the neighborhood’s safer since the very visible deterrent was put into place. She says parents were afraid to let their children play outside before. Now she says the camera keeps watch over the kids and a local park.
Like the initial $750,000 camera experiment, the $2.8 million expansion and upgrade is being paid for with drug forfeiture money. Drug dealers are literally paying for police to breathe down their necks.
If there were enough dirty money to go around, Mayor Daley said he would love to see cameras installed on every street corner in Chicago.
Coincidentally the US Army and Marines happen to be deploying the "Boomerang" gunshot location detector system to Iraq.
Sensors atop an aluminum pole on the back of a Humvee pick up supersonic shock waves to give an approximate location of gunfire, and sound waves measured from the muzzle blast narrow it some more.
A cigarette box-sized display on the dashboard or windshield then shows the findings. "Incoming, 5 o'clock," says a speaker inside the box.
This is not an expensive system. BBN Technologies is making these detectors for $10,000 apiece and the price is expected to drop to $3,000. Electronic detection and surveillance systems will continue to decline in price while becoming more sophisticated and precise. Therefore sensors will become ever more ubiquitous and will be used to detect an increasing number of types of events and activities.
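Neither BBN nor the Chicago vendors publish their algorithms, but the standard way to pinpoint a sound source from several sensors is time-difference-of-arrival multilateration. Below is a deliberately brute-force sketch of that idea on synthetic data; the grid search, sensor layout, and source position are all illustrative assumptions, not any deployed system's design.

```python
import math

# Hedged sketch of time-difference-of-arrival (TDOA) sound localization:
# pick the grid point whose predicted arrival-time differences across the
# sensor array best match the measured ones.

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air

def locate(sensors, arrival_times, grid_step=0.5, extent=50):
    """Grid-search the plane for the best-fitting source position."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = i * grid_step, j * grid_step
            dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            predicted = [(d - dists[0]) / SPEED_OF_SOUND for d in dists]
            err = sum((p - m) ** 2 for p, m in zip(predicted, measured))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic check: a source at (20, 10) meters heard by four sensors.
sensors = [(0, 0), (5, 0), (0, 5), (5, 5)]
source = (20.0, 10.0)
times = [math.hypot(source[0] - sx, source[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate(sensors, times))  # close to (20.0, 10.0)
```

Real systems must additionally cope with echoes, sensor noise, and (in Boomerang's case) the supersonic shock wave arriving before the muzzle blast, so this is only the geometric core of the problem.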
Biotech instrumentation company Affymetrix has announced a new instrument for mapping DNA Single Nucleotide Polymorphisms (SNPs) which Affymetrix claims is much cheaper to operate than previous instruments designed for this purpose. SNPs are locations in the genome where not all humans, or all members of another species, have the same DNA letter. The new Affymetrix instrument will reportedly lower the cost of human DNA SNP testing to 1 penny per DNA letter.
The 100K builds on the innovative, scalable, easy to use assay that Affymetrix pioneered with the GeneChip Mapping 10K Array. The 100K allows researchers to genotype over 100,000 SNPs using just two reactions. Previously, genotyping 100,000 SNPs would have required 100,000 PCR reactions, a hurdle that made this kind of research impractical. Before the advent of 100K, the commercial product for genotyping the most SNPs was Affymetrix' Mapping 10K.
“The power of 100,000 SNPs in a single experiment is enabling researchers to attempt unprecedented genetic studies at a genome-wide scale,” said Greg Yap, Sr. Marketing Director, DNA Analysis. “The GeneChip Mapping 100K Set is the first in a family of products that will enable scientists to identify genes associated with disease or drug response across the whole genome instead of just studying previously known SNPs or genes, and to study complex real-world populations instead of simple ones. To do this, we are making large-scale SNP genotyping not only quick and easy, but also affordable -- about 1 cent per SNP.”
About half of the SNPs on the 100K set are from public databases, while the other half are from the SNP database discovered by Perlegen Sciences, Inc. All of the SNPs on the 100K set are freely available and have been released into the public domain. Because the assays and arrays used in the 100K set are extremely scalable, more SNPs from both public sources and the Perlegen database will be added to next generation arrays.
Just a couple of years ago the cost of SNP assays was at about 50 cents per SNP. Industry analysts at the time were predicting SNP costs to fall to 1 cent per SNP within 2 years and sure enough if Affymetrix's press release is realistic those costs are just about to fall to the predicted 1 cent per SNP.
To put that number in some useful perspective, there may be about 10 million SNPs in humans but perhaps only about half a million of them lie in areas of the genome that affect human function. Most SNPs are in junk regions. The 500,000 estimate of functionally significant SNPs is a scientific guess at this point and could easily be off by a few hundred thousand in either direction. But once those 500,000 SNPs are identified (which could easily take a few years yet), at 1 penny per SNP a complete test of them all would cost about $5,000 per person. The real cost would be higher, and might easily be several times that amount, since samples of portions of the genome would need to be isolated for experimental runs. Even then the result wouldn't be a complete picture of a person's genome, since there are a few other kinds of genetic variation (e.g. short tandem repeats, or STRs). But SNPs are responsible for most genetic differences within a single species.
At this point the decline in SNP testing prices is useful chiefly to scientists since we don't know which SNPs are important, let alone how they are important. Still, the lower costs for SNP testing will accelerate the rate at which important SNPs are identified and will bring closer the day when it makes sense for individuals to get complete SNP testing.
Full DNA sequencing is also at about 1 cent per DNA letter.
Our price is competitive at just $.01 per base per sample, all inclusive. For example, total sequencing for a 7 Kb region on 48 samples would cost a total of 7000 x 48 x $.01 = $3,360. This pricing structure makes SNP discovery project estimates straight forward, avoiding the possible uncertainty of estimating the exact number of PCR amplicons, sequencing reactions, etc.
This price would make complete 2.9 billion letter human genome sequencing for a single person cost about $29 million. But it would really cost several times more than that, since sequencing has to be done multiple times to catch errors and there are sections of the genome that are hard to sequence. Since whole genome sequencing is much more expensive, ways to lower the cost of SNP testing are attracting the most interest. When SNP testing costs fall another order of magnitude the cost will be low enough for at least the more affluent among us to want to pay for personal SNP tests. Once the costs fall yet another order of magnitude SNP testing for the masses will become commonplace. My guess is that, given the rate at which SNP testing costs continue to fall and the predictions by industry figures of much cheaper SNP testing, we are at most 10 years away from large scale SNP testing of the general population.
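The whole-genome figure works the same way. A quick sketch using the quoted per-base price; the coverage depths are illustrative assumptions on my part, not numbers from the pricing announcement:

```python
# Whole-genome sequencing cost at the quoted all-inclusive per-base price.
genome_bases = 2_900_000_000  # roughly 2.9 billion letters
price_per_base = 0.01         # dollars per base

single_pass = genome_bases * price_per_base
print(f"Single-pass cost: ${single_pass:,.0f}")  # $29,000,000

# Sequencing must be repeated several times over to catch errors; the
# coverage depths below are assumptions, not quoted figures.
for coverage in (3, 5, 8):
    print(f"At {coverage}x coverage: ${single_pass * coverage:,.0f}")
```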
People who are plagued by irrational crippling fears have new hope. An extract of yohimbe tree bark helps mice adjust to and overcome fear four times faster than controls.
New findings at the UCLA Neuropsychiatric Institute demonstrate the potential of a substance found in yohimbe tree bark to accelerate recovery from anxiety disorders suffered by millions of Americans.
In the latest in a series of studies of how mice acquire, express and extinguish conditioned fear, the UCLA team finds yohimbine helps mice learn to overcome the fear faster by enhancing the effects of the natural release of adrenaline. Adrenaline prompts physiological changes such as increased heart and metabolism rates in response to physical and mental stress.
Writing in the March/April edition of the peer-reviewed journal Learning and Memory, the team reported that mice treated with yohimbine overcame their fear four times as fast as those treated with vehicle or propranolol, a medication commonly used to treat symptoms of anxiety disorders by blunting the physiological effects of adrenaline.
Yohimbine is most commonly used to treat erectile dysfunction. It can cause anxiety in susceptible persons, and should never be used without a doctor’s recommendation and supervision.
This is yet another small step down the road toward the eventual achievement of total control of emotional reactions. Some day it will become possible to adjust one's brain to have emotional reactions highly customized for any of a variety of desired purposes. It will become possible, for instance, to adjust emotional reactions to focus one's attention and energies on achieving ambitious goals, or to make it easier to bear situations one cannot escape from.
One big open question about this trend for the future is whether people will tend to choose emotional response patterns that make humanity more or less civilized and more or less likely to destroy ourselves.
Jeff Matovic, a 33 year old resident of Ohio, has experienced an enormous decrease in the twitches and uncontrollable movements caused by Tourette Syndrome.
CLEVELAND, April 1, 2004: A neurosurgical team at University Hospitals of Cleveland (UHC) has, for the first time in North America, applied a new surgical approach to the treatment of Tourette Syndrome, resulting in the immediate and nearly complete resolution of symptoms for the patient, who has suffered from this neurologic disorder since he was a child.
"We were genuinely amazed at the patient's response," says Robert J. Maciunas, MD, neurosurgeon at UHC and professor at Case Western Reserve University School of Medicine. He has used the technique called Deep Brain Stimulation (DBS) for the treatment of Parkinson's disease and tremor, and was impressed with this patient's dramatic reaction: the disappearance of the jerking motions, muscle tics and grunting associated with his Tourette's. "This technique holds great promise for patients suffering from this movement disorder, which often is diagnosed in childhood or early adolescence and can be completely debilitating."
Jeff Matovic, a Lyndhurst, Ohio, resident who grew up in Bay Village, was six years old when he was diagnosed with Tourette Syndrome, a neurobehavioral disorder characterized by sudden, repetitive muscle movements (motor tics) and vocalizations (vocal tics). Though standard therapy with medication controlled his movements for much of his boyhood, his condition severely worsened with age.
Prior to brain surgery, physicians at University Hospitals Movement Disorders Center mapped out regions of Jeff's brain, through MRI (magnetic resonance imaging) scans and 3-D computer images. Their goal was to locate the safest and most direct route to reach the cells inside the thalamus portion of the brain, involved in controlling Jeff's movements. By placing electrodes around those cells to deliver continuous high-frequency electrical stimulation, control messages are rebalanced throughout the movement centers in the brain. The electrodes are connected from the brain through wires under the skin (beneath the scalp, neck and upper chest) to an implanted battery just beneath the collarbone. In Jeff's case, since both sides of his body were affected by the movement disorder, he has electrodes implanted on both sides of his brain and tiny battery packs implanted on each side of his chest.
The doctors at University Hospitals of Cleveland are careful to point out that not everyone with Tourette Syndrome requires treatment. The first line of treatment is medication, which can be very effective. Surgical treatment is considered a last resort, and it is not clear how effective deep brain stimulation will ultimately prove for patients with this particular disorder.
In the United States, the Food and Drug Administration has approved deep brain stimulation for the treatment of Parkinson's disease, essential tremor and dystonia. "We've seen very positive responses in patients with Parkinson's disease. Studies of the DBS technique show that this stimulation can significantly reduce tremor and other symptoms in about three quarters of appropriately selected patients with Parkinson's." says Brian N. Maddux, MD, PhD, Jeff's neurologist at UHC and assistant professor at Case School of Medicine. "Patients with a different movement disorder called dystonia can take three months to respond to the electrical stimulation. We didn't know how Jeff would respond. Within hours after the stimulator was turned on, we observed the ceaseless movements become completely relaxed and he was able to walk normally. We were awestruck."
Matovic learned about the possibility of a surgical treatment and had to convince doctors to try this technique on his brain.
"They took a lot of convincing. They hadn't really done anything like this before for Tourette syndrome."
Even if only a small percentage of all Tourette sufferers need a surgical treatment, the potential number of candidates for this treatment still runs into the thousands.
An estimated 200,000 people in the United States have Tourette, but only a small percentage suffer symptoms as severe as Matovic's.
Matovic shows he has an incredible drive to triumph over his disorder. In spite of it he managed to go to college, graduate, and get married, and his wife is expecting a baby. Plus, he took it upon himself to find doctors willing to give him an experimental therapy involving brain surgery that had not previously been performed in the United States.
Most discussions of biotech center on medical uses, though agricultural uses also attract considerable attention and, especially in Europe, considerable opposition. However, there are plenty of potential industrial applications as well. A major category is the use of enzymes to catalyze complex reactions for which there are no conventional catalysts. One major reason this category of biotechnology hasn't taken off more rapidly is that enzymes break down fairly easily. However, recent research results from the Department of Energy's Pacific Northwest National Laboratory brighten the prospects for industrial applications of enzymes: these scientists have developed a method of using nanotech particles to stabilize enzymes so that they last for months.
RICHLAND, Wash. — Enzymes, the workhorses of chemical reactions in cells, lead short and brutal lives. They cleave and assemble proteins and metabolize compounds for a few hours, and then they are spent.
This sad fact of nature has limited the possibilities of harnessing enzymes as catalytic tools outside the cell, in uses that range from biosensing to toxic waste cleanup.
To increase the enzyme's longevity and versatility, a team at the Department of Energy's Pacific Northwest National Laboratory in Richland, Wash., has caged single enzymes to create a new class of catalysts called SENs, or single enzyme nanoparticles. The nanostructure protects the catalyst, allowing it to remain active for five months instead of hours.
"The principal concept can be used with many water-soluble enzymes," said Jungbae Kim, PNNL senior scientist who described the feat here today at the national meeting of the American Chemical Society.
"Converting free enzymes into these novel enzyme-containing nanoparticles can result in significantly more stable catalytic activity," added Jay Grate, PNNL laboratory fellow and SENs co-inventor.
Nanotech particle stabilized enzymes could be used for cleaning up toxic waste sites or for breaking down toxins that are continually produced by industrial processes. They could also be used to keep a wide variety of surfaces (including places within human bodies) from accumulating assorted crud and undesirable material.
Among the uses Kim noted for SENs is the breakdown of toxic waste; a single treatment could last months. Stabilized enzymes are also a prerequisite for many types of biosensors. And they may be of interest for coating surfaces, with applications ranging from medicine (protecting implants from protein plaques) to shipping (keeping barnacles off hulls). PNNL is investigating several other applications in the environmental and life sciences.
A way to stabilize enzymes increases the value of discovering enzymes with different forms of activity. So this advance is likely to spur searches through all manner of species to find enzymes for a variety of specific industrial and medical purposes.
Viagra seems to speed up the acrosome reaction, so that by the time the sperm reaches the egg it has no digestive enzymes left to penetrate the outer layer. Sperm that have undergone this process are known as fully "reacted".
The researchers tested 45 samples of semen. They found that up to 79% more sperm were fully "reacted" in samples treated with Viagra.
This fertility finding contrasts with human data from clinical trials: Viagra’s effect on sperm movement and function was extensively studied, says Larry Lipshultz of Baylor College of Medicine in Houston, Texas who has been involved in trials of Viagra and its competitors, Levitra and Cialis. "I've reviewed all the data and never seen a fertility problem," he says.
The team is now studying the effects of Viagra on the sperm of men actually taking the drug. The initial findings in 17 men show Viagra does speed up sperm.
It is interesting to note that some men whose sperm fail to fertilize in conventional IVF may have insufficient acrosome activity. So the use of Viagra by men who are trying to use IVF to start a pregnancy may be worsening the very problem that made IVF necessary in the first place.
Most sex is for reasons other than reproduction. But these results suggest that those intending to start a pregnancy and those who are not adequately protecting against the possibility of pregnancy ought to think twice about using Viagra. Certainly if you are going to try to do IVF think twice about using Viagra to help get the sperm sample.
This work obviously needs to be repeated for Levitra and Cialis as well.
Researchers at Mount Sinai School of Medicine are first to strongly link a specific gene with autism. While earlier studies have found rare genetic mutations in single families, a study published in the April issue of the American Journal of Psychiatry is the first to identify a gene that increases susceptibility to autism in a broad population.
Approximately 1 in 1,000 people have autism or autistic disorder. It appears to be the most highly genetic of all psychiatric disorders. If a family with one autistic child has another child, the chance that this child will be autistic is 50 to 100 times higher than would be expected by chance. However, it's clear that no single gene produces the disorder. Rather, the commonly accepted model holds that it results from the accumulation of five to ten genetic mutations.
"Identifying all or most of the genes involved will lead to new diagnostic tools and new approaches to treatment," said Joseph Buxbaum, PhD, Associate Professor of Psychiatry, Mount Sinai School of Medicine and lead author of the study.
Several studies have implicated a region on chromosome 2 as likely to be involved in autism. An earlier study by Dr. Buxbaum and his colleagues narrowed the target to a specific region on this chromosome. He and his colleagues conducted a systematic screening of this region in 411 families that have members with autism or autistic disorder. The families were recruited through The Seaver Autism Research Center at Mount Sinai School of Medicine and the Autism Genetic Resource Exchange.
They found genetic variations in one gene that occur with greater frequency in individuals with autism and their family members. This gene codes for a protein that is involved in production of ATP, the molecule that acts as fuel providing the energy cells need to function. The mutations identified lead to production of excessive amounts of this protein. Dysfunction of this gene could lead to irregularities in the production of molecules that fuel cells. Since brain cells consume large amounts of energy, even minor disruptions in production of such fuel can significantly affect the cells' ability to function normally.
The abstract of the paper identifies the gene as Mitochondrial Aspartate/Glutamate Carrier SLC25A12.
Why don't we already know all the genes that contribute to autism? Genetic sequencing costs too much. If the cost of DNA sequencing were about 3 orders of magnitude lower, the entire genomes of tens of thousands of autistic and non-autistic people could be sequenced and we'd find out very quickly which genetic variations contribute to autism, and to most other diseases as well.
Since the costs of DNA sequencing and of testing individual DNA letters (Single Nucleotide Polymorphisms or SNPs) continue to drop, and since it is reasonable to expect those costs to drop by two or three orders of magnitude within the next 10 years, I expect all the genetic variations that contribute to autism, Alzheimer's, Parkinson's, and most other neurological diseases to be known by 2015. The rate of discovery of contributing genetic variations will rise by orders of magnitude as the costs drop by orders of magnitude.
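To illustrate why order-of-magnitude cost declines matter so much for testing whole populations, here is a quick sketch. The $5,000 starting point is an assumed round figure for a full functional-SNP scan, not a quoted price:

```python
# Each tenfold price drop moves genetic testing into reach of a much
# larger population. The starting cost is an illustrative assumption.
start_cost = 5_000  # dollars; assumed cost of a full functional-SNP scan
for orders in range(4):
    cost = start_cost / 10**orders
    print(f"After {orders} order(s) of magnitude decline: ${cost:,.0f}")
```

At three orders of magnitude the test costs a few dollars, which is cheap enough for routine mass screening.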