2012 April 01 Sunday
Nanopore DNA Sequencer

While DNA sequencing costs have already plummeted by 6 orders of magnitude over roughly the last 10-12 years, the use of nanopores (where a single small DNA strand gets pulled thru a carefully crafted sensor to have its sequence read) hasn't reached the commercial stage. But nanopores seem like a logical next step in size and cost. Now some University of Washington researchers believe they've found a way to make a big stride toward workable, cheap nanopore DNA sequencers.

Researchers have devised a nanoscale sensor to electronically read the sequence of a single DNA molecule, a technique that is fast and inexpensive and could make DNA sequencing widely available.

The technique could lead to affordable personalized medicine, potentially revealing predispositions for afflictions such as cancer, diabetes or addiction.

"There is a clear path to a workable, easily produced sequencing platform," said Jens Gundlach, a University of Washington physics professor who leads the research team. "We augmented a protein nanopore we developed for this purpose with a molecular motor that moves a DNA strand through the pore a nucleotide at a time."

The researchers previously reported creating the nanopore by genetically engineering a protein pore from a mycobacterium. The nanopore, from Mycobacterium smegmatis porin A, has an opening 1 billionth of a meter in size, just large enough for a single DNA strand to pass through.

DNA sequencers have something very important in common with computers: The smaller you can make them, the more powerful and cheaper they become. Biotechnology has started following the same pattern that the computer industry has been going thru for decades: smaller is cheaper and smaller is more powerful. So, for example, microfluidic devices hold out the promise of cheap and highly automated lab-on-a-chip devices controlled by elaborate software. This makes me very optimistic that the rate of advance in biotechnology will accelerate and enable development of effective rejuvenation treatments.

By Randall Parker    2012 April 01 10:23 PM   Entry Permalink | Comments (3)
2012 March 07 Wednesday
The Transition To A DNA Data Rich Environment

A New York Times piece by John Markoff looks at the rapid rate of decline in DNA sequencing costs and the implications for the rate of progress in biomedical science.

“For all of human history, humans have not had the readout of the software that makes them alive,” said Larry Smarr, director of the California Institute of Telecommunications and Information Technology, a research center that is jointly operated by the University of California, San Diego, and the University of California, Irvine, who is a member of the Complete Genomics scientific advisory board. “Once you make the transition from a data poor to data rich environment, everything changes.”

We are living thru that transition. The flood of human genetic sequencing data is easily going to rise by 6 orders of magnitude this decade, and probably by much more than that. Not only will a substantial portion of the population get themselves sequenced, but each person with certain diseases (notably cancers) will get many different biopsies sequenced.

As an important example of how everything changes in a data rich environment, British researchers find that different samples from the same tumor have more genetic mutations that differ between samples than mutations shared by all of them.

They found around two thirds of genetic faults were not repeated across other biopsies from the same tumour. The research was published in the New England Journal of Medicine.

The ability to fully sequence a cancer cell's genome will allow sequencing of many different cells from the same cancer to identify all the mutations and then to figure out which of the mutations are important in enabling the spread of cancers. A cancer patient then could get a series of cancer treatments aimed at each cancer cell subpopulation that has specific mutations which require special handling.

By Randall Parker    2012 March 07 09:55 PM   Entry Permalink | Comments (3)
2012 January 10 Tuesday
$1000 Genome Sequencing Coming In 2012?

Faster than expected. Life Technologies of Carlsbad, California has announced a $149k genome sequencing machine that, with a chip upgrade coming before the end of 2012, will sequence an entire human genome in less than a day. The machine is cheap compared to other products in this market, and it is faster.

While the claim of a $1000 cost for sequencing a genome may be a premature exaggeration even for the end of 2012, the company is getting pretty close. The computer chip (which can be used for only one genome) and its associated biochemicals by themselves cost $1k. So the machine's price tag is in addition to that per-genome cost. Also, a company using this machine to sequence a genome will have added labor, marketing, and other costs on top of a profit margin. Still, we are probably going to at least be below $5k per genome by the end of 2012.
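To make that cost arithmetic concrete, here is a back-of-the-envelope sketch of the per-genome price. The $1k consumables figure comes from the announcement above, but the machine lifetime, throughput, labor overhead, and margin are my own assumptions for illustration, not Life Technologies numbers.

```python
# Back-of-the-envelope per-genome cost for a bench-top sequencer.
# Everything below the consumables line is an assumption for illustration,
# not a figure from Life Technologies.

machine_price = 149_000           # announced instrument price (USD)
consumables_per_genome = 1_000    # one-use chip plus reagents (USD)

# Assumed values:
genomes_per_year = 300            # roughly one genome per working day
machine_lifetime_years = 3
labor_overhead_per_genome = 500
profit_margin = 0.30

amortized_machine = machine_price / (genomes_per_year * machine_lifetime_years)
cost = consumables_per_genome + amortized_machine + labor_overhead_per_genome
price = cost * (1 + profit_margin)

print(f"Amortized machine cost per genome: ${amortized_machine:,.0f}")
print(f"Estimated price per genome:        ${price:,.0f}")
# With these assumptions the price lands well under $5k but well above $1k.
```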

An IEEE Spectrum Tech Talk blog post argues Moore's Law drove down the cost of DNA sequencing just as it has the cost of computer power. Since the Life Technologies Ion Proton Sequencer does use transistors in wells at a small scale made possible by the semiconductor industry, this seems like a correct analysis.

Since current market leader Illumina also just announced an upgrade to their existing sequencer that will sequence a genome in a day, the cost reductions are happening in a competitive market and should translate into price reductions as well.

Illumina, Life Technologies, and other competitors will continue to find ways to cut costs. So I'm thinking 2013 looks like the year to get my full genome sequenced. We will need to look at not just price but also accuracy and extent of the sequencing done. How will each service compare a year from now in terms of error rates and thoroughness? Will they detect large copy number variations? Once prices go below $1k it might make sense to get yourself sequenced by a couple of competing services and compare the results.

These low prices are going to drive up the rate of full genome sequencing. Therefore expect to see an explosion of discoveries on what the many genetic variants mean. What I'm especially looking forward to: genetically derived personal advice on ideal diet, exercise, sleep, and other lifestyle choices.

By Randall Parker    2012 January 10 10:32 PM   Entry Permalink | Comments (2)
2011 June 21 Tuesday
DNA Synthesis Costs Dropping Rapidly

An article in Technology Review about synthetic biology (e.g. creating custom organisms to make stuff) includes an interesting fact at the end: the cost of DNA synthesis is dropping as fast as the cost of DNA sequencing.

Fortunately, the cost of DNA synthesis technology, much like that of DNA sequencing technology, is dropping rapidly. George Church, director of the Center for Computational Genomics at Harvard, noted in his talk that the costs of both DNA synthesis and sequencing technologies have been decreasing at an astonishing rate—lately by a factor of 10 each year.

The dropping costs for DNA synthesis will accelerate the rate at which scientists try out new designs of genes and organisms.

Where does this lead in the long run? The biggest wild card: far more people will be able to do genetic engineering. Just as computing power spread from large organizations to anyone who can afford a cell phone, the number of people who can make new organisms will grow by orders of magnitude. Where could that lead? Ecological disasters where introduced genetically engineered species wipe out natural species? Count me concerned.

By Randall Parker    2011 June 21 12:40 AM   Entry Permalink | Comments (15)
2011 June 09 Thursday
Bigger Genome Projects Undertaken

At the end of a press release from UC Davis about a research cooperation deal struck with a big genomics research institute in China, the Chinese center's genome sequencing capacity is mentioned, and it is quite large.

BGI was founded in 1999 as the Beijing Genomics Institute. It now has several branches and subsidiaries including: BGI-Shenzhen, a nonprofit research institute; BGI-Hong Kong, a private institute that manages international collaborations and transfers profits to BGI; and BGI-Americas, located in Boston, which just celebrated its one-year anniversary and announced new joint projects with the Broad Institute and the United Kingdom. BGI has about 4,000 employees and the capacity to sequence the equivalent of 1,600 complete human genomes each day.

What caught my eye: The ability to sequence 1,600 complete human genomes per day. In a year that is 584,000 genomes. Back in the 1990s it took years to sequence a single human genome. Now 1,600 human genomes or equivalent numbers of genomes from other species could be sequenced in a day. Rapidly declining sequencing costs will bring us to 1 million genomes sequenced per year pretty soon. Check out this graph showing how DNA sequencing costs have gone into a steep dive:

[Graph: DNA sequencing cost per genome over time, showing a steep decline]

This huge decline in sequencing costs is making possible some pretty ambitious sequencing efforts to find genetic variants that control important human traits. In particular, the mention of BGI above reminds me that BGI-Shenzhen is doing a big DNA sequencing effort on very smart Chinese kids in order to find genetic variants that contribute to high intelligence. In 5 years they might have many such variants identified.

As long as the US Food and Drug Administration and similar regulatory agencies in Europe do not outlaw direct-to-consumer genetic testing, it is going to become useful to get yourself genetically tested and, in the longer run, get your full genome sequenced.

By Randall Parker    2011 June 09 12:26 AM   Entry Permalink | Comments (9)
2011 April 09 Saturday
Big Gene Search Turns Up Obesity Gene

A mildly interesting discovery turns up a gene that might some day help lead to a treatment for insulin-resistant diabetes. But the actual discovery isn't the most important part of it.

LA JOLLA, CA – New research by scientists at The Scripps Research Institute and collaborating institutions has identified a key regulator of fat cell development that may provide a target for obesity and diabetes drugs.

In a paper published in the latest issue of Cell Metabolism, the scientists describe a protein called TLE3 that acts as a dual switch to turn on signals that stimulate fat cell formation and turn off those that keep fat cells from developing. TLE3 works in partnership with a protein that is already the target of several diabetes drugs, but their use has been plagued by serious side effects.

What is especially interesting: the scientists were able to test 18,000 genes to discover the importance of just one gene.

To find additional players in adipocyte formation, Saez, Tontonoz, and colleagues induced cells growing in a dish to differentiate into adipocytes. The scientists then individually tested the ability of 18,000 genes to augment the conversion of undifferentiated cells into fully functioning adipocytes, looking for genes that might play a role in this process.

In this way, they identified the gene encoding the TLE3 protein, which had never before been linked to fat development.

Our cells and bodies are enormously complex. Studying one gene at a time won't get us to many major treatments in our lifetimes. Only massive parallel searches and parallel interventions (e.g. with large numbers of gene arrays and microfluidic devices) can provide the massive amounts of information we need to do detailed reverse-engineering of the human body.

The speed with which research tools get more powerful is the rate-limiting factor for how soon we will get major rejuvenation therapies. Given sufficiently powerful tools the body can be reverse-engineered orders of magnitude faster.

By Randall Parker    2011 April 09 11:14 AM   Entry Permalink | Comments (0)
2011 January 15 Saturday
Time For Million Genomes Sequencing Project

Razib points to a debate about how fast and how far DNA sequencing costs will drop. John Hawks expects $50 for full genome sequencing in less than 5 years.

The inevitability of the $1000 genome has already made it irrelevant. We should expect a $1000 genome announcement this year. This will be hype, because the real $1000 genomes won't be here until...next year! Before the end of 2014, whole genome sequences at 4x coverage will cross the $100 mark. I think there's a good chance they will be less than $50 at that time.

Based on numbers I've seen, those numbers are around six months optimistic. Geneticists are already planning projects anticipating $100 genomes -- some suggest that the next big project should be a "Million Genomes", because there isn't any sense bothering with a hundred thousand.

My take: once sequencing gets really cheap, scientists should stop trying to get funding for the sequencing itself. Just ask people to get their own genomes sequenced and submit the results to the scientists for their research. People who submit a sample for genetic sequencing should be able to check boxes on a web page to specify which research projects should be able to get their genetic sequencing info (and this could be done for any medical tests we order for ourselves).

Bottom line: It is time for massive medical research projects that are organized virtually.

An enterprising private medical research foundation (the Howard Hughes Medical Institute comes to mind) could fund the development of a web site where people could upload their genetic testing and sequencing results along with lots of measurement and medical history data about them. People could enroll themselves in a massive web-based research project on genetics, diet, lifestyle, drugs, diseases, health problems, physical accomplishments, intellectual achievements, and other aspects of their lives. For example, people could use digital cameras to upload pictures of their eyes, face and body at different angles, the inside of their mouth, and other views. People could enter in basic info such as sex, birth date, weight, and other info as well as go thru forms that ask lots of health history questions.

People could also enroll in such a research project thru their doctor. They could elect to have all their medical tests, diagnoses, and drug prescriptions provided to the research project. Drug store chains could provide the option of having drug purchase histories uploaded to the massive medical history database built from all the data that flows into it from the web, doctors' offices, medical testing labs, and other data sources.
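To make the idea concrete, here is a minimal sketch of what a self-submitted participant record for such a virtual study might look like. Every field name, URI, and value below is hypothetical, invented for illustration rather than taken from any existing project.

```python
# Hypothetical participant record for a web-based, self-enrolled genetics
# and lifestyle study. Field names and structure are illustrative only.
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class ParticipantRecord:
    participant_id: str
    sex: str
    birth_year: int
    genome_vcf_uri: str                  # pointer to uploaded sequencing results
    consented_projects: List[str]        # research projects allowed to use the data
    health_history: Dict[str, str] = field(default_factory=dict)
    prescriptions: List[str] = field(default_factory=list)
    photo_uris: List[str] = field(default_factory=list)

record = ParticipantRecord(
    participant_id="anon-000123",
    sex="F",
    birth_year=1975,
    genome_vcf_uri="s3://example-bucket/anon-000123.vcf.gz",
    consented_projects=["diet-genetics-study", "longevity-cohort"],
    health_history={"type_2_diabetes": "no", "hypertension": "yes"},
    prescriptions=["lisinopril"],
)
print(record.participant_id, record.consented_projects)
```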

By Randall Parker    2011 January 15 06:47 PM   Entry Permalink | Comments (10)
2010 October 29 Friday
Order Of Magnitude More Gene Sequencing In 2011

Nature estimates that the number of sequenced human genomes will go up by about an order of magnitude between now and the end of 2011.

Although far from comprehensive, the tally indicates that at least 2,700 human genomes will have been completed by the end of this month, and that the total will rise to more than 30,000 by the end of 2011.

This is an example of why I keep saying that the floodgates on genetic data are opening, that the rate of discovery of what genetic mutations mean is rapidly accelerating, and that we will soon learn enormous amounts about what many thousands of our genetic variants mean. The utility of getting yourself genetically tested is going to rise sharply.

Whole genome sequencing has started to enter into clinical medical practice.

It may be small-scale and without fanfare, but genomic medicine has clearly arrived in the United States. A handful of physicians have quietly begun using whole-genome sequencing in attempts to diagnose patients whose conditions defy other available tools.

The cost of full genome sequencing is now below $20k and falling. The ability of moderately affluent individuals to pay for their own genome sequencing for diagnostic purposes creates an additional source of genetic data. Unless regulators get in the way, the ability of individuals to get their genomes sequenced will soon make individuals rather than large research centers the largest source of demand for genome sequencing services. This is a healthy development because individual demand will generate more sequence data and therefore more data to analyze to discover the meaning of human genetic variations. Anyone who pays to get their own genome sequenced and volunteers to allow their genetic data to be used by researchers will help speed the search for the functional significance of all the genetic variants in humans.

Rapid changes often elicit opposition. In this case some commentators raise objections to personal genetic profiling services. But I see the direct-to-consumer (DTC) model of genetic testing and genetic sequencing as a great accelerator of the rate of production of genetic sequence information. I want the right to get myself thoroughly genetically tested. Tell your elected officials the law should recognize your right to get yourself genetically tested.

We are all genetically unique. You probably carry about 60 genetic mutations that neither of your parents has. So the search for all the genetic variants is not going to end until all humans have their genomes sequenced.

Earlier this year, Jorde, who is on the 1000 Genomes Project steering committee, was part of the team that was the first to sequence the genome of an entire family – two parents and two children who live in Utah. As part of that study, published in March in Science, he estimated the rate at which genetic mutations are passed from generation to generation at 60 – meaning each parent passes 30 genetic mutations to their offspring. Most gene mutations are harmless, but understanding the rate at which mutations are passed among generations is an essential part of understanding the human biological clock, according to Jorde. To confirm his estimated mutation rate, which was half of what had been estimated previously by indirect methods, researchers in the current study sequenced the genomes of two families of three people each.

"We were delighted that the mutation rate estimate obtained from the 1000 Genomes Project was exactly the same as our estimate," Jorde said.

Scientists involved with the 1000 Genomes Project think they've now identified 95% of all genetic variations. Next comes the meaning of all these genetic differences and the ability to get yourself genetically tested for a low price. You can already get hundreds of thousands of genetic differences checked for several hundred dollars. I'm starting to think seriously about getting detailed genetic tests.

By Randall Parker    2010 October 29 12:18 AM   Entry Permalink | Comments (8)
2010 July 18 Sunday
Illumina Full Genome Sequencing Costs Below $20k

The cost of genome sequencing continues its rapid descent.

Illumina now will charge $19,500 for the service. Users must follow a physician-mediated process. That's down from the $48,000 the firm was charging when actress Glenn Close revealed in March that Illumina had sequenced her genome.

But Illumina's price falls to $14,500 per person for groups of at least five that are referred by the same physician. And it falls to just $9,500 when a referring physician certifies that the sequencing could lead to a treatment for a patient's disease or condition.

Note the bit about "a physician-mediated process". Regulators won't let you find out your own genetic sequence without a visit to a doctor to get permission. What's the point of that visit? Will the doctor say no? If so, then what, you just don't get to know your own DNA sequence? If the doctor says no, you can pay again to visit another doctor to try to get a yes. You can ask friends (or perhaps a social networking site) which doctors always say yes. The gatekeepers want to control the flow of medical information. I am opposed. What about you?

The US Food and Drug Administration has sent a letter to Illumina and other genetic sequencing companies telling them that their sequencing services for individuals fall under FDA authority. That does not bode well, as I've previously discussed. Think my fears are overblown? Nope. In 2008 both California and New York State tried to prevent individuals from getting tests done on their own DNA. See this article from Wired and click thru on the links if you have any doubt.

When some overprotective Luddites from the California Department of Health Services sent cease-and-desist letters to thirteen genetic testing companies, they proved that someone in their office must have a single nucleotide polymorphism that causes poor judgment. Interfering with the nascent industry is not a good idea for a plethora of reasons.

The messages, which were sent on June 9th, warned that each test must be ordered by a doctor and carried out in a laboratory that meets a strict set of standards. If not, the direct-to-consumer shops will face up to $3,000 per day in fines.

Hence the "physician-mediated process".

How most medical testing should work: Any drug store with a pharmacy department would be able to draw blood to submit for testing. Any pharmacist or suitably trained pharmacist's assistant should be able to take a blood sample (or a urine sample if they want to branch out). Then you pay and have the option of getting your results via email, web site, physical mail, or a return visit. You should have the option of specifying which doctor(s) should also get your test results. You should also be able to have your results sent to a medical expert system web server that tracks all the medical information about you. Your medical history should reside in a cloud server that is independent of any individual doctor's office.

Meanwhile researchers are getting ready to sequence and compare a few thousand genomes.

The 1000 Genomes Project, an international public-private consortium to build the most detailed map of human genetic variation to date, announces the completion of three pilot projects and the deposition of the final resulting data in freely available public databases for use by the research community. In addition, work has begun on the full-scale effort to build a public database containing information from the genomes of 2,500 people from 27 populations around the world.

This sounds like an impressive scientific effort, right? In a few short years 100 times that many genomes will be sequenced each year, and then sequencing rates will go up by more orders of magnitude. Though if the FDA stands in the way, Americans will have to go on vacation to other countries to get their genomes sequenced.

Update: To the extent that the US FDA and like-minded agencies reduce the availability of DNA sequencing services, they slow the rate of technological advance. Fewer people paying to get their DNA sequenced means less revenue to fund the development of future generations of sequencing devices. This delays the day various benefits of sequencing will be realized. We will be less healthy than we otherwise would be absent regulatory meddling.

I am looking forward to the day when full genome sequencing becomes affordable. If millions of people can get their DNA sequenced then the meaning of genetic variants will become known much sooner. We will be able to alter our diets and lifestyles and choose drugs and other treatments suited to our personal genetic profiles sooner. We will avoid more diseases. We will live with fewer ailments and pains.

Cheap DNA sequencing available to the masses will enable voluntary sharing of health and DNA sequencing information in order to discover which genetic variants matter. Imagine web sites where you submit your DNA (with appropriate privacy safeguards) along with disease history and physical and cognitive characteristics. Such a large scale cooperative effort will enable much faster discovery of what each genetic variant does. This can make DNA sequencing results more useful to more people sooner.

By Randall Parker    2010 July 18 11:37 AM   Entry Permalink | Comments (9)
2010 July 17 Saturday
Projection On Future Rates of DNA Sequencing

Razib Khan points to a projection for the rate at which full human genome DNA sequencing will be done over the next 10 years.

The firm GenomeQuest has a blog, and on that blog they have a post, Implications of exponential growth of global whole genome sequencing capacity. In that post there are some bullet points with numbers. Here they are:

* 2001-2009: A Human Genome

* 2010: 1,000 Genomes – Learning the Ropes

* 2011: 50,000 Genomes – Clinical Flirtation

* 2012: 250,000 Genomes – Clinical Early Adoption

* 2013: 1 Million Genomes – Consumer Awareness

* 2014: 5 Million Genomes – Consumer Reality

* 2015-2020: 25 Million Genomes And Beyond – A Brave New World

In light of the many orders of magnitude drop in DNA sequencing costs we've already witnessed in the last 30 years, I find these numbers plausible. The 2010s are going to be a period in which the functional significance of tens or hundreds of thousands of genetic variants becomes known and DNA sequencing becomes affordable for the middle class.
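Taking the GenomeQuest bullet points above at face value, a quick calculation shows what growth rate they imply. The ratios are the same whether you read the figures as annual output or cumulative totals; the script below just treats them as a single series.

```python
# Implied year-over-year growth in sequenced genomes from the
# GenomeQuest projection quoted above (2010 through 2014).
projection = {2010: 1_000, 2011: 50_000, 2012: 250_000, 2013: 1_000_000, 2014: 5_000_000}

years = sorted(projection)
for prev, curr in zip(years, years[1:]):
    factor = projection[curr] / projection[prev]
    print(f"{prev} -> {curr}: {factor:.0f}x")

# Average growth factor over the four-year span:
overall = (projection[2014] / projection[2010]) ** (1 / (2014 - 2010))
print(f"Average annual growth factor 2010-2014: about {overall:.1f}x")
```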

There are many reasons to do genome sequencing. Not all of these reasons involve clinical applications by health care providers, though most do.

  • Scientific discovery. The discovery drives the usefulness of sequencing for other purposes.
  • Future disease prediction coupled with efforts to avoid your genetic fate.
  • Genetic compatibility of drug choices. Some genetic variants make certain drugs harmful, e.g. a genetic variant that makes an antibiotic toxic to the liver.
  • Mating choices. Choose mates or IVF embryos based on genetic tests.
  • Career choices. Find out if you have the genes needed to excel in a sport or in an intellectual career or even other career choices.
  • Self understanding. Find out why you are what you are and how much of who you are is due to your genes.
  • Cancer treatments. Aim therapies at the mutations which a cancer has.

I expect DNA sequencing of cancer cells will become routine. Knowledge about cancer genome mutations will be used to choose treatments. I would not be surprised if DNA sequencing data gets used to design antibodies to attack cancer cells without attacking normal cells.

Since a person can have cancer for months or years, and since cancer cells constantly mutate, I expect oncologists to track cancers by repeatedly sequencing many cancer cell samples from each patient. Once full genome DNA sequencing costs just a few hundred dollars, the cost will become a small fraction of total cancer treatment costs. So I do not see money as an obstacle to this style of usage of sequencing machines.

By Randall Parker    2010 July 17 10:29 PM   Entry Permalink | Comments (4)
2010 June 06 Sunday
Genetic Activity Mapped In Mouse Hypothalamus

The genes that are active in the hypothalamus of the mouse brain have been identified.

By analyzing all the roughly 20,000 genes in the mouse genome, the team identified 1200 as strongly activated in developing hypothalamus and characterized the cells within the hypothalamus in which they were activated. The team then characterized the expression of the most interesting 350 genes in detail using another gene called Shh, for sonic hedgehog, as a landmark to identify the precise region of the hypothalamus in which these genes were turned on. This involved processing close to 20,000 tissue sections - painstakingly sliced at one-fiftieth of a millimeter thickness and then individually examined.

While the hypothalamus is small compared to the brain as a whole, it does many things, including regulation of temperature, hunger, thirst, and other bodily functions. Since it is complex with many functions (e.g. it releases some hormones that regulate the pituitary gland), the scientists had to cut it into small slices to look for signs of different cells carrying out different functions.

But what's most interesting here isn't the particular genes turned on in various parts of the hypothalamus (though that is interesting and quite useful information). No, what's most interesting is that the technology exists to do this type of research.

Think about it: genes were checked for activity in 20,000 tissue slices removed from the mouse hypothalamus. That it is even possible to do such sensitive testing of gene expression on such a massive scale tells us that this sort of research can be done on many other tissue types. The development of gene chips and microfluidic devices is enabling orders of magnitude increases in the rates of measurement of gene activity and other activity of cells. This bodes well for the goal of really getting control of our cells to manipulate them to do repair and rejuvenation.

By Randall Parker    2010 June 06 09:57 AM   Entry Permalink | Comments (0)
2010 May 20 Thursday
Optical Nanopore DNA Sequencer

Faster DNA sequencing by feeding single DNA strands thru nanopores one at a time.

BOSTON (5-19-10) -- Sequencing DNA could get a lot faster and cheaper – and thus closer to routine use in clinical diagnostics – thanks to a new method developed by a research team based at Boston University. The team has demonstrated the first use of solid state nanopores — tiny holes in silicon chips that detect DNA molecules as they pass through the pore — to read the identity of the four nucleotides that encode each DNA molecule. In addition, the researchers have shown the viability of a novel, more efficient method to detect single DNA molecules in nanopores.

"We have employed, for the first time, an optically-based method for DNA sequence readout combined with the nanopore system," said Boston University biomedical engineer Amit Meller, who collaborated with other researchers at Boston University, and at the University of Massachusetts Medical School in Worcester. "This allows us to probe multiple pores simultaneously using a single fast digital camera. Thus our method can be scaled up vastly, allowing us to obtain unprecedented DNA sequencing throughput."

The cost of DNA sequencing has fallen by orders of magnitude, and it keeps hitting new lows, each followed quickly by still lower ones. Nanopore sequencers are going to send costs still lower. The $1000 genome is on the horizon as a stepping stone to the $500 genome. Smaller stuff costs less. This is the same pattern of advance that makes semiconductor computer chips so cheap.

But what to do with this sequencing capability? If you get yourself sequenced in 2010, the genetic variants you carry that have well understood effects are few and far between. Costs have fallen far enough that we now have to wait for scientists to use these lower costs (which they are doing) to figure out what all the differences mean. In a few years the argument for getting yourself sequenced will be stronger, more due to better understanding of what all the genetic differences mean than to further drops in costs.

Cheap genetic sequencing isn't just about knowing what foods are best suited for your metabolism or whether your prospective spouse is likely to cheat on you (though it will get used for those purposes). Accelerated human evolution is likely to be the biggest impact in the long term. Detailed knowledge of genes that influence intelligence, beauty, athletic performance, and other characteristics will be used in selecting embryos to implant when doing in vitro fertilization (IVF). Babies born from IVF 10 years from now will be smarter, better looking, healthier, and possessed of other advantages over the average baby born from old style sex.

The biggest way that cheap genome sequencing will lengthen our lives might turn out to be by sequencing cancer cells in order to figure out which mutations drive the development of cancer. Detailed knowledge of cancer-causing mutations will enable scientists to develop gene therapies and RNA interference drugs that turn cancer cells back to normal or that very selectively kill cancer cells. Cheap DNA sequencing therefore might save your life.

By Randall Parker    2010 May 20 11:08 PM   Entry Permalink | Comments (0)
2010 January 19 Tuesday
New Diabetes Genetic Variants Found

More genetic variants that influence blood sugar and insulin have been identified.

A major international study with leadership from Massachusetts General Hospital (MGH) researchers has identified 10 new gene variants associated with blood sugar or insulin levels. Two of these novel variants and three that earlier studies associated with glucose levels were also found to increase the risk of type 2 diabetes. Along with a related study from members of the same research consortium, associating additional genetic variants with the metabolic response to a sugary meal, the report will appear in Nature Genetics and has been released online.

"Only four gene variants had previously been associated with glucose metabolism, and just one of them was known to affect type 2 diabetes. With more genes identified, we can see patterns emerge," says Jose Florez, MD, PhD, of the MGH Diabetes Unit and the Center for Human Genetic Research, co-lead author of the report. "Finding these new pathways can help us better undertand how glucose is regulated, distinguish between normal and pathological glucose variations and develop potential new therapies for type 2 diabetes.

This study illustrates how declining costs of genetic testing enable much larger steps forward in the discovery of significant genetic variants. The 4 previously discovered genetic variants were probably not all reported in the same scientific paper. Then along comes one piece of research that reports two and a half times more genetic variants associated with blood sugar and insulin levels than were previously known. The rate of discovery goes up because the scale of genetic testing has risen so much due to big drops in costs.

122,000 people and 2.5 million locations (SNPs or single nucleotide polymorphisms) in human genomes were examined.

Both studies were conducted by the Meta-Analyses of Glucose and Insulin-related Traits Consortium (MAGIC), a collaboration among researchers from centers in the U.S., Canada, Europe and Australia that analyzed gene samples from 54 previous studies involving more than 122,000 individuals of European descent. The study co-led by MGH scientists – along with colleagues from Boston University, University of Cambridge, University of Oxford and the University of Michigan – began by analyzing about 2.5 million gene variations (called SNPs) from 21 genome-wide searches for variants associated with glucose and insulin regulation in more than 46,000 nondiabetic participants. The 25 most promising SNPs from the first phase were then tested in more than 76,000 nondiabetic participants in 33 other studies, leading to new associations of nine SNPs with fasting glucose levels and one with fasting insulin and with a measure of insulin resistance.
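The two-stage design described above (screen millions of SNPs in a discovery cohort, then follow up only the most promising hits in an independent replication cohort) is a standard way to keep false positives under control. Here is a toy sketch of that idea with simulated data and invented thresholds; it is not the MAGIC consortium's actual analysis pipeline.

```python
# Toy two-stage association screen: discovery then replication.
# Data, sample sizes, and thresholds are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_snps, n_discovery, n_replication = 5_000, 2_000, 3_000

# Genotypes coded as 0/1/2 copies of the minor allele; phenotype = fasting glucose.
geno_disc = rng.integers(0, 3, size=(n_discovery, n_snps))
glucose_disc = rng.normal(5.0, 0.5, n_discovery) + 0.1 * geno_disc[:, 0]  # SNP 0 truly associated

def assoc_pvalues(genotypes, phenotype):
    """Per-SNP p-value from a simple correlation test."""
    return np.array([stats.pearsonr(genotypes[:, j], phenotype)[1]
                     for j in range(genotypes.shape[1])])

# Stage 1: keep the most promising SNPs from the discovery cohort.
p_disc = assoc_pvalues(geno_disc, glucose_disc)
top_snps = np.argsort(p_disc)[:25]

# Stage 2: test only those SNPs in an independent replication cohort,
# with a Bonferroni threshold over the 25 follow-up tests.
geno_rep = rng.integers(0, 3, size=(n_replication, n_snps))
glucose_rep = rng.normal(5.0, 0.5, n_replication) + 0.1 * geno_rep[:, 0]
p_rep = assoc_pvalues(geno_rep[:, top_snps], glucose_rep)
replicated = top_snps[p_rep < 0.05 / len(top_snps)]
print("Replicated SNP indices:", replicated)
```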

More genetic variants affecting glucose and insulin levels remain to be found.

"We were delighted that we were able to find so many SNPs associated with raised levels of glucose," says Dr Inês Barroso, from the Wellcome Trust Sanger Institute, "but amazed that we found only one strong association with levels of insulin. We don't think this is a technical difference, but that the genetics is telling us that the two measures, insulin and glucose, have different architectures, with fewer genes, rarer variants or greater environmental influence affecting insulin resistance."

The team have strong evidence that other genetic factors remain to be found: their study explains about ten per cent of the genetic effect on fasting glucose. They believe that there will be rarer variants with a larger impact that would not be found by a study such as this.

In the next 5 years we'll witness more discoveries about the meaning of genetic variants than we've seen in all previous history because genetic testing costs have fallen so far so fast. Costs have dropped by orders of magnitude and continue to drop rapidly. Scientists face a flood of data from which they will be able to tease out many discoveries.

By Randall Parker    2010 January 19 12:34 AM   Entry Permalink | Comments (6)
2009 November 15 Sunday
Genome 10K Project To Sequence 10000 Species

Out of the 60,000 vertebrate species still in existence, an international group of scientists wants to sequence 10,000 of them.

Scientists have an ambitious new strategy for untangling the evolutionary history of humans and their biological relatives: a genetic menagerie made of the DNA of more than 10,000 vertebrate species. The plan, proposed by an international consortium of scientists, is to obtain, preserve, and sequence the DNA of approximately one species for each genus of living mammals, birds, reptiles, amphibians, and fish.

A bigger effort is needed to collect samples from many individual animals of each species so that their genetic diversity can be preserved in the face of declining numbers. Habitat loss is cutting into the numbers of many species. For some, only the DNA samples will remain as the living species go extinct.

They think they can do this for about $5000 per species.

Known as the Genome 10K Project, the approximately $50 million initiative is “tremendously exciting science that will have great benefits for human and animal health,” Haussler said. “Within our lifetimes, we could get a glimpse of the genetic changes that have given rise to some of the most diverse life forms on the planet.”

The idea is to compare DNA sequences across the many vertebrate species to get an idea of which genes can be traced back to common ancestors hundreds of millions of years ago. This effort will likely change the way that trees get drawn to show the relationships between species.

The orders-of-magnitude decline in DNA sequencing costs makes this project possible.

The primary impetus behind the proposal is the rapidly expanding capability of DNA sequencers and the associated decline in sequencing costs. “We’ll soon be in a situation where it will cost only a few thousand dollars to sequence a genome,” Haussler said. “At that point, most of the cost will be getting samples, managing the project, and handling data.”

By Randall Parker    2009 November 15 09:48 PM   Entry Permalink | Comments (2)
2009 November 05 Thursday
Genome Sequencing Cost Drops Below $5000

Futuristic speculative questions sometimes become present day practical questions. Have you asked yourself what price you'd be willing to pay to get your genome fully sequenced?

Complete Genomics, a start-up based in Mountain View, CA, has again lowered the stick in the financial limbo dance of human genome sequencing, announcing in the journal Science that it has sequenced three human genomes for an average cost of $4,400. The most recently sequenced genome--which happens to be that of genomics pioneer George Church--cost just $1,500 in chemicals, the cheapest published yet.

This doesn't mean you can get your genome sequenced for $4400. They also had labor, equipment, and lab space costs as well as data post-processing costs. But the overall costs are still very low.

Their accuracy rate exceeds that of previous efforts at complete genome sequencing.

In order to estimate their error rate, the researchers tested 291 random novel non-synonymous variants by targeted sequencing in sample NA07022. Based on the results, they calculated an error rate of about one in 100,000 bases, which the company claims "exceeds the accuracy rate achieved in other published complete genome sequences."

While the price keeps dropping, the practical value of a person's genetic sequence is rising with more discoveries about what all the genetic differences mean. The rapid descent in genome sequencing costs has advanced so far that lower prices matter less than what you can do with the information. At this point the bigger improvements to the value equation for getting a full genome sequenced will come from scientific discoveries about what all the genetic variations mean.

The lower prices will produce a flood of genetic data that will in turn lead to discoveries about what the data means. In a few years knowing your full genetic sequence will become quite useful.

Recently Pauline C. Ng, Sarah S. Murray, Samuel Levy and J. Craig Venter sent genetic samples to genetic testing services Navigenics and 23andMe and wrote a paper in Nature comparing the results. The two companies were pretty accurate in their testing. But their interpretations of the results differed and were speculative. Click thru and read the details. We do not yet know enough about the real significance of the vast bulk of the genetic differences.

By Randall Parker    2009 November 05 10:36 PM   Entry Permalink | Comments (5)
2009 September 10 Thursday
$20000 Per Genome Sequencing For 8 At A Time

Just a month ago Stephen Quake sequenced his genome for $50,000. That represents a drop of 80% from the $250k cost of a year ago and is orders of magnitude lower than the cost 10 years ago. But if you go out and pay $50k to get your genome sequenced you are probably spending too much. MIT's Technology Review reports that a company called Complete Genomics has dropped the cost of genome sequencing even further.

And CEO Clifford Reid says the company will soon start charging $20,000 per genome for an order of eight genomes or more, and $5,000 apiece for an order of 1,000 or more-with variable pricing in between.

How low does the price have to get for you to pay to get your genome sequenced?

The biggest problem at this point is just what do you do with the information? We are going to go thru a period where genome sequencing is really cheap but the information about your DNA letter sequence isn't of much help to most people. We need to know what all the differences mean.

Once the significance of lots of genetic sequence information becomes known, how useful will it be in daily life? We'll certainly know ourselves better. But if, say, you've got some genetic variants that increase your risk of cancer, what do you do with this information? Perhaps get colonoscopies more often if your risk of colon cancer is elevated. But not all cancers lend themselves to preventative testing and not all tests are easy to do.

I expect understanding of genetic variations will play a big role in changing mating choices and in embryo selection. Knowledge of genetic variants will help some in dietary choices too. Got any ideas on how detailed knowledge of your genetic variations will be useful to you?

By Randall Parker    2009 September 10 08:41 PM   Entry Permalink | Comments (3)
2009 August 10 Monday
Stephen Quake Sequences His Genome For $50k

Stanford microfluidics researcher Stephen Quake used a new DNA sequencing machine from Helicos Biosciences (a company he co-founded) to sequence his own genome in a week to 95% completeness for only $50,000.

The first few times that scientists mapped out all the DNA in a human being in 2001, each effort cost hundreds of millions of dollars and involved more than 250 people. Even last year, when the lowest reported cost was $250,000, genome sequencing still required almost 200 people. In a paper published online Aug. 9 by Nature Biotechnology, a Stanford University professor reports sequencing his entire genome for less than $50,000 and with a team of just two other people.

In other words, a task that used to cost as much as a Boeing 747 airplane and required a team of people that would fill half the plane, now costs as much as a mid-priced luxury sedan and the personnel would fill only half of that car.

“This is the first demonstration that you don’t need a genome center to sequence a human genome,” said Stephen Quake, PhD, professor of bioengineering. “It’s really democratizing the fruits of the genome revolution and saying that anybody can play in this game.”

What we see here is a rate of cost decline that is much faster than the rate at which computing power drops in cost. The cause is very similar: using smaller and smaller devices to manipulate matter at a smaller scale. Small is cheap. This trend makes me optimistic that the rate of advance in the development of rejuvenation therapies is accelerating. More powerful and cheaper tools should give us better treatments faster.
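To put a rough number on that comparison, here is a quick calculation of the implied halving time for sequencing cost versus the classic Moore's Law period. Treating the $250,000 and $50,000 price points quoted above as being about one year apart is my own assumption.

```python
# Rough comparison of cost-halving times: DNA sequencing vs. Moore's Law.
# Assumes the $250k and $50k price points quoted above are about one year apart.
import math

cost_then, cost_now, years_elapsed = 250_000, 50_000, 1.0

# Time for cost to halve, given a steady exponential decline.
decline_rate = math.log(cost_then / cost_now) / years_elapsed   # per year
sequencing_halving_months = 12 * math.log(2) / decline_rate

moore_halving_months = 24  # classic Moore's Law doubling/halving rule of thumb

print(f"Sequencing cost halving time: ~{sequencing_halving_months:.0f} months")
print(f"Moore's Law halving time:     ~{moore_halving_months} months")
```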

Quake discovered he has a rare gene that causes heart problems. But he also has genes that make him respond well to statins. It is not clear whether those genes predict how much his cholesterol will drop on statins or whether he'll have side effects from taking them. Given that statins sometimes cause damaging side effects on muscle and also on memory, the ability to get yourself genetically checked to predict drug side effects will be one of the earliest uses of widespread gene sequencing.

Quake also has a gene associated with disagreeableness. I wonder whether this gene makes smart, driven people more creative and productive by making them less accepting of the status quo.

The sequencing machine detected both single letter differences (single nucleotide polymorphisms) and larger copy number variations where sections of the genome are repeated.

The study, conducted by Dmitry Pushkarev, Norma Neff and Stephen Quake was carried out using less than four runs of a single HeliScope™ Single Molecule Sequencer, and achieved 28X average coverage of the human genome. The sequencing allowed the detection of over 2.8 million single nucleotide polymorphisms (SNPs), of which over 370,000 were novel. Validation with a genotyping array demonstrated 99.8% concordance. The unbiased nature of the single-molecule sequencing approach also allowed the detection of 752 copy number variations (CNVs) in this genome.

Note the point about how many were novel. We have lots of SNPs still to discover.

Once this technology enters into wide use a flood of genetic data will be produced. We then need to do a massive comparison of people across their genetic differences, health differences, personalities, intelligence, preferences, appearances, and other qualities to try to pin down which genetic variations matter and how.

By Randall Parker    2009 August 10 08:40 PM   Entry Permalink | Comments (3)
2009 August 04 Tuesday
UCLA Chip Does 1024 Chemical Reactions At Once

Small, fast, cheap, and automated microfluidic chips are cutting the cost of research and drug development. A team at UCLA has developed a chip that can screen for binding of many drugs in parallel against drug targets such as enzymes.

A team of UCLA chemists, biologists and engineers collaborated on the technology, which is based on microfluidics — the utilization of miniaturized devices to automatically handle and channel tiny amounts of liquids and chemicals invisible to the eye. The chemical reactions were performed using in situ click chemistry, a technique often used to identify potential drug molecules that bind tightly to protein enzymes to either activate or inhibit an effect in a cell, and were analyzed using mass spectrometry.

This chip can do over 1000 chemical reactions at once to check for inhibitors of an enzyme.

While traditionally only a few chemical reactions could be produced on a chip, the research team pioneered a way to instigate multiple reactions, thus offering a new method to quickly screen which drug molecules may work most effectively with a targeted protein enzyme. In this study, scientists produced a chip capable of conducting 1,024 reactions simultaneously, which, in a test system, ably identified potent inhibitors to the enzyme bovine carbonic anhydrase.

The 1,024 chemical reactions were all done in parallel in a few hours. Next the scientists intend to develop automated means to measure the results.

A thousand cycles of complex processes, including controlled sampling and mixing of a library of reagents and sequential microchannel rinsing, all took place on the microchip device and were completed in just a few hours. At the moment, the UCLA team is restricted to analyzing the reaction results off-line, but in the future, they intend to automate this aspect of the work as well.

The cost cutting and experiment acceleration that come from microfluidic chips lead me to expect a big acceleration in the rate of advance on many biomedical research problems. There's an obvious parallel here with smaller, faster, and cheaper computer chips. Small mass-produced chips cut costs and speed progress.

By Randall Parker    2009 August 04 10:41 PM   Entry Permalink | Comments (3)
2009 July 27 Monday
Gene Editing Sped Up By Orders Of Magnitude

Another example of why the future is coming sooner than you might expect.

BOSTON, Mass. (July 26, 2009) — High-throughput sequencing has turned biologists into voracious genome readers, enabling them to scan millions of DNA letters, or bases, per hour. When revising a genome, however, they struggle, suffering from serious writer's block, exacerbated by outdated cell programming technology. Labs get bogged down with particular DNA sentences, tinkering at times with subsections of a single gene ad nauseam before moving along to the next one.

A team has finally overcome this obstacle by developing a new cell programming method called Multiplex Automated Genome Engineering (MAGE). Published online in Nature on July 26, the platform promises to give biotechnology, in particular synthetic biology, a powerful boost.

Led by a pair of researchers in the lab of Harvard Medical School Professor of Genetics George Church, the team rapidly refined the design of a bacterium by editing multiple genes in parallel instead of targeting one gene at a time. They transformed self-serving E. coli cells into efficient factories that produce a desired compound, accomplishing in just three days a feat that would take most biotech companies months or years.

We can't predict future rates of progress based on past rates of progress because enabling technologies can pop up (like the one above) that can suddenly shift the rate of progress into very high gear.

Imagine applying this technique above to reengineering bacteria or algae to make liquid biofuels such as biodiesel. That's still not easy to do because scientists do not yet know which genetic changes they'd need to make to achieve a useful genetically engineered biodiesel producing organism. But the actual genetic modification won't be the hard part. The hard part will be knowing which mods to make.

Biomass energy production with ponds of genetically engineered organisms is probably one of the harder problems for which genetic engineering might be done. Biomass energy production requires very high efficiencies at very large scale. Relatively easier problems include producing drugs which get used in milligram or gram doses.

Where could very rapid automated gene customization deliver its greatest punch in medical treatment? How about antibody production against cancers? Large numbers of antibodies could be produced to try against each cancer. Might work.

By Randall Parker    2009 July 27 10:02 PM   Entry Permalink | Comments (8)
2009 June 12 Friday
Personal Genome Sequencing Hits $48,000 Price

How cheap does DNA sequencing have to get before you'll pay to get sequenced?

The cost of a personal genome has dropped from about the price of a luxury sedan to, well, the price of a slightly less luxurious nice car. Illumina, a genomics technology company headquartered in San Diego, announced the launch of a $48,000 genome-sequencing service at the Consumer Genetics Conference in Boston on Wednesday.

The declining costs first have to enable scientists to discover what a lot more of the genetic variations mean before I'll seriously consider ponying up the money to get my own genome sequenced. But the day is coming when getting your own genome sequenced will make sense.

Because of the huge price drops, the amount of sequencing getting done has gone up by orders of magnitude. This means the flood of discoveries will rise for years to come. I'm especially looking forward to this flood of data providing useful personal nutrigenomics advice.

By Randall Parker    2009 June 12 12:09 AM   Entry Permalink | Comments (13)
2009 April 05 Sunday
Robot Formulates Hypotheses And Does Experiments

Using algorithms for a limited form of artificial intelligence, a robot named Adam used knowledge about yeast genetics to formulate hypotheses and carry out experiments.

As reported in the latest issue of the journal Science, Adam autonomously hypothesized that certain genes in the yeast Saccharomyces cerevisiae code for enzymes that catalyze some of the microorganism's biochemical reactions. The yeast is noteworthy, as scientists use it to model more complex life systems.

Adam then devised experiments to test its prediction, ran the experiments using laboratory robotics, interpreted the results, and used those findings to revise its original hypothesis and test it out further. The researchers used their own separate experiments to confirm that Adam's hypotheses were both novel and correct--all the while probably wondering how soon they'd become obsolete.
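The loop Adam runs (propose hypotheses, pick an experiment, run it, interpret the result, revise, repeat) is the core pattern of automated science. Below is a bare-bones toy sketch of such a loop with a simulated growth assay standing in for the lab robotics; it does not reflect Adam's actual software, and a real system would also have to cope with noisy assay results.

```python
# Toy hypothesize-test-revise loop in the spirit of the Adam robot.
# The "experiment" is simulated; in the real system this step would drive
# lab robotics and the analysis would have to handle noisy data.

GENES = [f"gene_{i}" for i in range(10)]
TRUE_ENZYME_GENE = "gene_7"   # hidden ground truth the loop has to discover

def run_experiment(gene):
    """Simulated knockout growth assay: deleting the required gene stops growth."""
    return gene != TRUE_ENZYME_GENE   # True means the knockout strain still grows

# Every gene starts as a live hypothesis: "this gene encodes the missing enzyme."
candidates = set(GENES)
for cycle, gene in enumerate(sorted(GENES), start=1):
    grows = run_experiment(gene)
    if grows:
        candidates.discard(gene)      # strain grows without it, so gene not required
    else:
        candidates = {gene}           # knockout is lethal: this gene is required
    print(f"cycle {cycle}: tested {gene}, growth={grows}, {len(candidates)} hypotheses left")
    if len(candidates) == 1:
        break

print("Inferred enzyme-encoding gene:", candidates.pop())
```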

The automation of lab work is the wild card that makes the future progress of biological science hard to predict. How fast will robots take over lab work? Will they hit a critical mass of capabilities at some point where they suddenly enable an order of magnitude speed-up (or even greater) in the rate of progress of biomedical research? Will this happen in the 2020s? 2030s?

The robots will take over intellectually easier tasks first.

Ross King from the department of computer science at Aberystwyth University, and who led the team, told BBC News that he envisaged a future when human scientists' time would be "freed up to do more advanced experiments".

Robotic colleagues, he said, could carry out the more mundane and time-consuming tasks.

"Adam is a prototype but, in 10-20 years, I think machines like this could be commonly used in laboratories," said Professor King.

By Randall Parker    2009 April 05 11:11 PM   Entry Permalink | Comments (5)
2009 April 04 Saturday
Lifestyle After Cancer Diagnosis Matters For Survival

Smokers, problem drinkers, couch potatoes, and people who don't eat much fruit all have lower survival rates once diagnosed with cancer.

ANN ARBOR, Mich. — Head and neck cancer patients who smoked, drank, didn't exercise or didn't eat enough fruit when they were diagnosed had worse survival outcomes than those with better health habits, according to a new study from the University of Michigan Comprehensive Cancer Center.

"While there has been a recent emphasis on biomarkers and genes that might be linked to cancer survival, the health habits a person has at diagnosis play a major role in his or her survival," says study author Sonia Duffy, Ph.D., R.N., associate professor of nursing at the U-M School of Nursing, research assistant professor of otolaryngology at the U-M Medical School, and research scientist at the VA Ann Arbor Healthcare System.

Each of the factors was independently associated with survival. Results of the study appear online in the Journal of Clinical Oncology.

The researchers surveyed 504 head and neck cancer patients about five health behaviors: smoking, alcohol use, diet, exercise and sleep. Patients were surveyed every three months for two years then yearly after that.

Smoking was the biggest predictor of survival, with current smokers having the shortest survival. Problem drinking and low fruit intake were also associated with worse survival, although vegetable intake was not. Lack of exercise also appears to decrease survival.
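For readers curious what "independently associated with survival" means in practice, here is a sketch of the kind of multivariate survival model (a Cox proportional hazards fit, using the lifelines package) typically behind such a statement. The data below are simulated and the effect sizes invented; this is not the Michigan team's dataset or analysis.

```python
# Illustration of a multivariate Cox model on simulated data resembling the
# study design above (504 patients, several health behaviors, 2-year follow-up).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 504
df = pd.DataFrame({
    "smoker": rng.integers(0, 2, n),
    "problem_drinker": rng.integers(0, 2, n),
    "low_fruit_intake": rng.integers(0, 2, n),
    "no_exercise": rng.integers(0, 2, n),
})
# Simulated survival times: smoking given the largest (invented) effect.
hazard = np.exp(0.8 * df.smoker + 0.4 * df.problem_drinker
                + 0.3 * df.low_fruit_intake + 0.2 * df.no_exercise)
df["survival_months"] = rng.exponential(60.0 / hazard)
df["observed_death"] = (df["survival_months"] < 24).astype(int)  # 2-year follow-up
df["survival_months"] = df["survival_months"].clip(upper=24)

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="observed_death")
print(cph.summary[["coef", "exp(coef)", "p"]])  # each behavior's independent effect
```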

It could be that the smokers get more deadly cancer in the first place. All these influences might act on the body before one gets cancer. For example, sustained oxidative stress will age the immune system more rapidly. So once you get cancer your immune response to it will be weaker if you've been living a dissipated lifestyle.

What I'd like to see: Compare cancer survival time to telomere length at time of cancer diagnosis. The other factors above might work by shortening telomeres and worsening the ability of the body to fight off cancer. Obesity and stress accelerate chromosome telomere tip aging and chronic stress shortens immune cell telomeres. Regarding the reference above to lack of exercise: a sedentary lifestyle shortens telomeres too. Plus, getting less vitamin D appears to shorten telomeres as well.

By Randall Parker    2009 April 04 04:11 PM   Entry Permalink | Comments (2)
2009 February 21 Saturday
Mass Spectrometer To Speed Bone Nutrition Studies

Why pay for years-long and expensive diet studies on bone health when a powerful scientific instrument can get you answers in 7 weeks?

The proposal also takes advantage of the analytical expertise of a company in the Purdue Research Park, Bioanalytical Systems Inc., and Purdue's PRIME Lab, a one-of-a-kind rare isotope laboratory. The PRIME Lab's accelerator mass spectrometer will allow researchers to monitor bone loss in 50 days that otherwise would take two to four years, Weaver said.

"For our osteoporosis study, for example, we'll be able to use just nine people and test them for seven different products in two years," Weaver said. "Without the PRIME lab, it would take two years to test just one product."

A general trend that continues into the future: faster and cheaper ways to do scientific and medical research. But I still want a time machine that'll let me jump ahead 30 years to get rejuvenation therapies immediately.

H.G. Wells missed this. But the biggest benefit of a time machine would be full body rejuvenation. Just jump far enough ahead that you come out when stem cell therapies and other strategies for engineered negligible senescence have become mature, safe, and cheap. Of course, you might land in a police state or a Borg consciousness.

By Randall Parker    2009 February 21 03:45 PM   Entry Permalink | Comments (2)
2009 February 10 Tuesday
2019 All Babies Will Get DNA Sequencing At Birth?

DNA sequencing costs are falling so far so fast that in 10 years DNA sequencing of babies will be commonplace at birth. Cuckolds will learn of their plight while standing outside hospital delivery rooms.

Every baby born a decade from now will have its genetic code mapped at birth, the head of the world's leading genome sequencing company has predicted.

A complete DNA read-out for every newborn will be technically feasible and affordable in less than five years, promising a revolution in healthcare, says Jay Flatley, the chief executive of Illumina.

Only social and legal issues are likely to delay the era of “genome sequences”, or genetic profiles, for all. By 2019 it will have become routine to map infants' genes when they are born, Dr Flatley told The Times.

Of course, this won't be commonplace in the poorer countries. But in industrialized countries a complete DNA sequence at birth will come to be seen as prudent for many reasons. Most obvious: Before the father's name gets placed on the birth certificate the hospital will verify just who is dad.

Genetic diseases that cause damage when the wrong foods are consumed will be known about from the start. Also, knowledge of genetic factors that contribute to autism might eventually become useful to help initiate treatment that'll alter the direction of brain development to make the disorder less severe.

Why else get sequenced at birth? To transfer the data to exclusive competitive kindergartens and grade schools which will of course evaluate applications for admission of Jill and Johnnie at least partially based on their genetic potential.

By Randall Parker    2009 February 10 12:15 AM   Entry Permalink | Comments (7)
2008 October 06 Monday
$5000 Complete Gene Sequencing In 2Q 2009

Our biotechnological future is coming even faster than I expected. A Mountain View, California biotech start-up, Complete Genomics, operating in stealth mode since its 2006 founding, has announced availability of personal complete DNA sequencing for $5000 in 2Q 2009. If they pull this off it will be an amazing achievement.

The cost of determining a person’s complete genetic blueprint is about to plummet again — to $5,000.

That is the price that a start-up company called Complete Genomics says it will start charging next year for determining the sequence of the genetic code that makes up the DNA in one set of human chromosomes. The company is set to announce its plans on Monday.

Cheap! Starting to feel tempted yet?

Some scientists associated with this start-up are heavy hitters.

"I have great confidence that it's right," said Lawrence Berkeley National Lab geneticist Michael Eisen. "I don't know exactly what the underlying method is, but George Church isn't a kidder."

George Church is a Harvard University geneticist who helped found the Human Genome Project and was responsible for the first commercial genome sequence. He's also an adviser to the Mountain View, California-based Complete Genomics, provider of the $5,000 genome — and joining Church is Illumina co-founder Mark Chee, Institute for Systems Biology president Leroy Hood and Massachusetts Institute of Technology bioengineer Douglas Lauffenburger.

But Complete Genomics is going to start out aiming at institutional customers. You might have to wait for individual customer access.

Based in Mountain View, Calif., Complete Genomics has raised $46 million in three rounds of financing since its incorporation in 2006. Unlike its commercial next-gen sequencing rivals – Roche/454, Illumina, Applied Biosystems (ABI) and Helicos – Complete Genomics will not be selling individual instruments, but rather offer a service aimed initially at big pharma and major genome institutes.

Complete Genomics is building what Reid calls “the world’s largest complete human genome sequencing center so we can sequence thousands of complete human genomes, so that researchers can conduct clinical trial-sized studies.” If all goes according to plan, that 32,000-square-feet facility will deliver 1,000 human genomes in 2009 and an eye-popping 20,000 genomes in 2010.

At $5000 per genome I think they'll easily sell out their capacity for the first 1000 genomes in 2009.

With materials costs of only $1000 per genome they think they'll sequence 1 million genomes in the next 5 years.

The company also said it intends to open additional genome sequencing centers across the U.S. and abroad. Over the next five years, the company projects that 10 such centers will be able to sequence 1 million complete human genomes.
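
As a rough sanity check on those projections, here is a minimal back-of-envelope calculation (assuming, purely for illustration, that the 1 million genomes would be spread evenly across the 10 centers and 5 years):

    # Implied throughput of Complete Genomics' projection (numbers quoted above;
    # the even split across centers and years is my own simplifying assumption).
    genomes_total = 1_000_000   # projected genomes over five years
    centers = 10                # projected sequencing centers
    years = 5

    genomes_per_center = genomes_total / centers
    genomes_per_center_per_day = genomes_per_center / (years * 365)

    print(f"{genomes_per_center:,.0f} genomes per center over {years} years")
    print(f"~{genomes_per_center_per_day:.0f} genomes per center per day")
    # ~100,000 genomes per center, or ~55 genomes per center per day --
    # a big jump from the 1,000 genomes planned for all of 2009.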

Our problem then becomes how to make sense of all this DNA sequencing data. That data needs to be matched with lots of physical measurements, medical histories, psychometric tests, exercise tests, and other data gathering to allow correlation of all the DNA sequencing differences with various human characteristics.

The approach uses DNA nanoballs. These are called concatamers, a term which presumably refers to concatenation.

The first step is to prepare a gridded array of up to a billion DNA nanoballs, or DNBs. These DNBs are concatamers of 80-basepair (bp) mate-paired fragments of genomic DNA, punctuated with synthetic DNA adapters. The 80-bp fragments are derived from a pair of roughly 40-bp fragments that reside a known distance apart (say 500 bases or 10,000 bases). “We insert an adapter to break the 40 bases into 20 bases or 25 bases,” which acts like “a zip code or address into the DNA,” says Drmanac.

The sample preparation amplifies the DNA templates in solution rather than an emulsion or on a platform. It produces about 10 billion DNBs – each about 290 nm in diameter – in just 1 ml solution. “We spent lots of energy to make them small and sticky to the surface,” says Drmanac. The DNBs are spread onto a surface gridded with 300-nm diameter wells (prepared using photolithography) spaced just 1 micron apart. The DNBs settle into the wells like so many balls dropping into the pockets of a roulette wheel.
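
To get a sense of scale, here is a quick back-of-envelope calculation of the array area implied by the quoted numbers. The square grid and one-well-per-square-micron packing are my own simplifying assumptions for illustration:

    import math

    # Rough area of a gridded array holding a billion DNA nanoballs.
    wells = 1_000_000_000   # "up to a billion DNA nanoballs"
    pitch_um = 1.0          # wells "spaced just 1 micron apart" (assumed square grid)

    area_um2 = wells * pitch_um ** 2
    area_cm2 = area_um2 * 1e-8          # 1 cm^2 = 1e8 square microns
    side_cm = math.sqrt(area_cm2)

    print(f"array area ~{area_cm2:.0f} cm^2 (a square ~{side_cm:.1f} cm on a side)")
    # ~10 cm^2: a billion individually readable DNA spots fit on a chip a few cm across.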

One promising use of cheap DNA sequencing data is in the study of cancer cells. Sequencing of tens of thousands or even hundreds of thousands of cancers will lead to much better identification of how DNA mutations contribute to cancer development.

So how cheap would DNA sequencing have to get before you'd pony up to get yourself sequenced? Once the price gets down to a few thousand dollars, I'll be waiting more for the useful information the sequencing data can provide than for a further price reduction. We are going to have to wait a few more years before we know enough about genetic differences for personal sequencing to provide useful information to most of us.

By Randall Parker    2008 October 06 10:16 PM   Entry Permalink | Comments (10)
2008 July 25 Friday
DNA Sequencing Technology Continues Rapid Advance

Technological progress in DNA sequencing continues to amaze. But political obstacles to widespread gene testing threaten to prevent full use of these advances. First off, Alexis Madrigal surveys the rapid progress in DNA sequencing technology.

A prominent genetics institute recently sequenced its trillionth base pair of DNA, highlighting just how fast genome sequencing technology has improved this century.

Every two minutes, the Wellcome Trust Sanger Institute sequences as many base pairs as all researchers worldwide did from 1982 to 1987, the first five years of international genome-sequencing efforts.

That speed is thanks to the technology underlying genomics research, which has been improving exponentially every couple of years, similar to the way computer tech improves under Moore's Law.

The DNA sequencing technologies under development promise to bring more huge strides in speed and cost reduction in the next decade.

What's clear is that the DNA sequencing technology pipeline is deep and ready to deliver innovation and reduced cost for years to come. Within the next decade, nanopores, tiny holes about 1.2 nanometers across, combined with new microscopy techniques, could even allow scientists to "read" individual DNA bases as easily as we read the letters A, C, T, G.

DNA sequencing company Pacific Biosciences just got an infusion of $100 million from Intel and other investors. Their technology might make 15-minute human DNA sequencing possible by 2013. We are that close to an enormous explosion in available DNA sequencing data.

The Menlo Park, California-based company believes SMRT will lead to a transformation in the field of DNA sequencing that will facilitate sequencing of individual genomes as part of routine medical care. Pacific Bioscience has estimated its next-generation sequencer will be available as early as 2010 and has anticipated that by 2013, its technology will be able to sequence a genome in 15 minutes.

The Pacific Biosciences technology watches individual DNA polymerase enzymes holding onto nucleotides. We are talking way small scale.

But in order to be able to detect fluorescence from just a single nucleotide without interference from others that also float around in the system, the observation volume must be made much smaller.

Enter zero-mode waveguides, or ZMWs, which are tiny wells with metal sides and a glass bottom that are made by punching holes tens of nanometers wide in a 100-nanometer aluminum film that is deposited on glass. When a laser is shone at the wells from below, it cannot penetrate them because its wavelength is bigger than the hole. The effect is similar to how microwaves cannot exit the perforated screen of a microwave oven door.

However, some attenuated light forms an evanescent field just inside the well near its bottom, creating a tiny illuminated detection volume of 20 zeptoliters, small enough to observe a single molecule of DNA polymerase holding on to a nucleotide, but no surrounding fluorescent molecules.
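
To get a feel for how tiny 20 zeptoliters is, here is an illustrative calculation. The aperture width and effective depth below are my own assumed numbers, chosen only to be roughly consistent with "holes tens of nanometers wide"; they are not the published ZMW dimensions:

    import math

    # Illustrative volume of a cylindrical detection zone at the bottom of a ZMW.
    diameter_nm = 70.0        # assumed aperture width ("tens of nanometers")
    effective_depth_nm = 5.0  # assumed reach of the evanescent field above the glass

    volume_nm3 = math.pi * (diameter_nm / 2) ** 2 * effective_depth_nm
    volume_zl = volume_nm3 / 1_000.0    # 1 zeptoliter (1e-21 liter) = 1,000 nm^3

    print(f"~{volume_nm3:,.0f} nm^3, i.e. ~{volume_zl:.0f} zeptoliters")
    # ~19,000 nm^3, roughly 20 zeptoliters -- room for one polymerase and not much else.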

So you'll be able to afford to get your DNA sequenced within 5 years. Now our problem is to figure out what all the genetic differences mean. We need to collect detailed medical histories and other information about a large number of volunteers (preferably millions) so that we can compare that data along with DNA sequencing data to discover which genetic differences cause functional differences.

Update: Looking forward to getting your DNA sequenced? The states of California and New York require a doctor's permission for you to get genetic testing and sequencing done. How dare they!

You may think you own your blood and saliva and that you're free to take some of it and send it to a lab for whatever type of analysis you want.

The state of California disagrees.

If you're a California resident, the state department of health has forbidden companies that do direct-to-consumer genetic analysis from selling their services to you -- unless a doctor has given you permission to learn about your own DNA.

You aren't allowed to get information about yourself without a doctor's supervision? We are considered incapable of interpreting the results. I say that is beside the point. We aren't always capable of making the other decisions we make either. What damage is the state trying to prevent by making this rule?

The state of California is trying to get web-based gene testing services to stop serving Californians.

California becomes the second big state to crack down on companies that offer gene tests to consumers via the Web. This week, the state health department sent cease-and-desist letters to 13 such firms, ordering them to immediately stop offering genetic tests to state residents.

Could other bloggers please take up this issue and write posts complaining about this regulatory action?

By Randall Parker    2008 July 25 11:57 PM   Entry Permalink | Comments (1)
2008 April 20 Sunday
$100 Human Genome Sequencing Within Sight?

Long-time readers know that I expect much more rapid advances in biotechnology because biological research is coming to resemble the computer industry, with miniature lab devices designed for low-cost mass manufacture and automated use. The devices operate on biological systems at the scale of individual cells and molecules. Here's another example of how much this trend cuts costs and speeds progress. Microfluidic devices will enable personal complete DNA sequencing for only $100.

It currently costs roughly $60,000 to sequence a human genome, and a handful of research groups are hoping to achieve a $1,000 genome within the next three years. But two companies, Complete Genomics and BioNanomatrix, are collaborating to create a novel approach that would sequence your genome for less than the price of a nice pair of jeans--and the technology could read the complete genome in a single workday. "It would have been absolutely impossible to think about this project 10 years ago," says Radoje Drmanac, chief scientific officer at Complete Genomics, which is based in Mountain View, CA.

Such a low cost will of course be achieved using nanofluidic devices. Basically, something like computer chips but designed to manipulate individual molecules of DNA.

Each DNA molecule will be threaded into a nanofluidics device, made by Philadelphia-based BioNanomatrix, lined with rows of tiny channels. The narrow width of the channels--about 100 nanometers--forces the normally tangled DNA to unwind, lining up like a train in a long tunnel and giving researchers a clear view of the molecule.

Cheap DNA sequencing will revolutionize the way many people mate. People will surreptitiously check the DNA sequences of prospective mating partners. "Does she have the genes I want to give to my children? If not, I'll make up some excuse about how we have different goals in life and just move on." Or "Does he have the genetic right stuff? If not, I'll tell him he's not spiritual enough for me and say I have to end it". Just how will people lie in order to avoid telling someone they are too genetically inferior for baby making?

Then there are the markets for donor sperm and eggs. With the ability to select among large numbers of egg donors and a far larger number of sperm donors, DNA testing will enable buyers to get much closer to their ideal genetic profile. Expect the resulting kids to be smarter, healthier, with different personalities (how exactly?) and far better looking. People who use donor sperm and eggs will, on average, produce smarter and more successful kids than people who choose mates to help them raise their genetically own children.

How much and how soon will microfluidic devices speed up the development of stem cell therapies? Genetic selection of sperm, eggs, and fertilized embryos will certainly speed up human evolution. But stem cell therapies will let us rev up and rejuvenate our existing old natural design bodies.

By Randall Parker    2008 April 20 10:31 AM   Entry Permalink | Comments (10)
2008 March 14 Friday
DNA Sequencing Method Uses $60k Of Reagents Per Person

While the cost does not include labor or capital equipment, the $60,000 for the reagents is an impressive achievement.

FOSTER CITY, Calif. -- Applied Biosystems (NYSE:ABI), an Applera Corporation business, today announced a significant development in the quest to lower the cost of DNA sequencing. Scientists from the company have sequenced a human genome using its next-generation genetic analysis platform. The sequence data generated by this project reveal numerous previously unknown and potentially medically significant genetic variations. It also provides a high-resolution, whole-genome view of the structural variants in a human genome, making it one of the most in-depth analyses of any human genome sequence. Applied Biosystems is making this information available to the worldwide scientific community through a public database hosted by the National Center for Biotechnology Information (NCBI).

Does anyone reading this know (or have a way to find out) how many days or weeks this sequencing took to do?

Applied Biosystems was able to analyze the human genome sequence for a cost of less than $60,000, which is the commercial price for all required reagents needed to complete the project. This is a fraction of the cost of any previously released human genome data, including the approximately $300 million1 spent on the Human Genome Project. The cost of the Applied Biosystems sequencing project is less than the $100,000 milestone set forth by the industry for the new generation of DNA sequencing technologies, which are beginning to gain wider adoption by the scientific community.

The earliest automated DNA sequencing machine developed at CalTech (using a mass spectrometer design developed for a Mars mission) required a full-time lab technician to purify the highest quality existing reagents to the even higher purity that the sequencing machine needed.

These scientists sequenced the same genome multiple times, which is needed in order to get good accuracy.

Under the direction of Kevin McKernan, Applied Biosystems' senior director of scientific operations, the scientists resequenced a human DNA sample that was included in the International HapMap Project. The team used the company's SOLiD System to generate 36 gigabases of sequence data in 7 runs of the system, achieving throughput up to 9 gigabases per run, which is the highest throughput reported by any of the providers of DNA sequencing technology.

The 36 gigabases includes DNA sequence data generated from covering the contents of the human genome more than 12 times, which helped the scientists to determine the precise order of DNA bases and to confidently identify the millions of single-base variations (SNPs) present in a human genome. The team also analyzed the areas of the human genome that contain the structural variation between individuals. These regions of structural variation were revealed by greater than 100-fold physical coverage, which shows positions of larger segments of the genome that may vary relative to the human reference genome.
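
The coverage figure follows directly from the quoted totals. A minimal sketch, assuming the usual ~3.1 gigabase figure for the haploid human genome (that genome size is not a number from the press release):

    # Sequence coverage implied by the quoted SOLiD run totals.
    total_gb = 36.0        # gigabases generated across 7 runs (quoted above)
    runs = 7
    genome_size_gb = 3.1   # approximate haploid human genome size (my assumption)

    avg_per_run_gb = total_gb / runs
    coverage = total_gb / genome_size_gb

    print(f"~{avg_per_run_gb:.1f} Gb per run on average, ~{coverage:.0f}x sequence coverage")
    # ~5 Gb per run on average (the best runs hit 9 Gb) and roughly 12x coverage --
    # enough redundancy to call single-base variants with confidence.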

"We believe this project validates the promise of next-generation sequencing technologies, which is to lower the cost and increase the speed and accuracy of analyzing human genomic information," said McKernan. "With each technological milestone, we are moving closer to realizing the promise of personalized medicine."

Before we get to personalized medicine we are going to discover what a huge number of genetic variations do to make us different in mind and body. Our perceptions of what we are as humans will be fundamentally altered. Most notably people will come out on the other side of this wave of discoveries with an altered and reduced view of the power of free will.

By Randall Parker    2008 March 14 10:59 PM   Entry Permalink | Comments (3)
2008 January 23 Wednesday
1000 Genomes Project To Accelerate Genetic Discoveries

Remember when sequencing the DNA of just a single person was a great achievement? Now an international project will sequence 1000 times as many human genomes.

An international research consortium today announced the 1000 Genomes Project, an ambitious effort that will involve sequencing the genomes of at least a thousand people from around the world to create the most detailed and medically useful picture to date of human genetic variation. The project will receive major support from the Wellcome Trust Sanger Institute in Hinxton, England, the Beijing Genomics Institute, Shenzhen (BGI Shenzhen) in China and the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH).

Drawing on the expertise of multidisciplinary research teams, the 1000 Genomes Project will develop a new map of the human genome that will provide a view of biomedically relevant DNA variations at a resolution unmatched by current resources. As with other major human genome reference projects, data from the 1000 Genomes Project will be made swiftly available to the worldwide scientific community through freely accessible public databases.

“The 1000 Genomes Project will examine the human genome at a level of detail that no one has done before,” said Richard Durbin, Ph.D., of the Wellcome Trust Sanger Institute, who is co-chair of the consortium. “Such a project would have been unthinkable only two years ago. Today, thanks to amazing strides in sequencing technology, bioinformatics and population genomics, it is now within our grasp. So we are moving forward to build a tool that will greatly expand and further accelerate efforts to find more of the genetic factors involved in human health and disease.”

Scientists think they've found the genetic variations which are carried by at least 10% of the human population. Now they want to look for rarer variations that are carried by as few as 1% of the population.

The scientific goals of the 1000 Genomes Project are to produce a catalog of variants that are present at 1 percent or greater frequency in the human population across most of the genome, and down to 0.5 percent or lower within genes. This will likely entail sequencing the genomes of at least 1,000 people. These people will be anonymous and will not have any medical information collected on them, because the project is developing a basic resource to provide information on genetic variation. The catalog that is developed will be used by researchers in many future studies of people with particular diseases.
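
Why roughly a thousand people? A variant at 1 percent frequency sits on about 1 in 100 chromosomes, so a simple expected-count calculation (a sketch using only the frequencies quoted above) shows why this sample size is about the minimum needed to see such variants more than a handful of times:

    # Expected number of observations of a variant at a given allele frequency.
    people = 1_000
    chromosomes = 2 * people    # each person carries two copies of each autosome

    for freq in (0.10, 0.01, 0.005):
        expected = chromosomes * freq
        print(f"allele frequency {freq:.1%}: expect ~{expected:.0f} copies among {people} people")
    # 10.0% -> ~200 copies (already well cataloged)
    # 1.0%  -> ~20 copies  (detectable with 1,000 genomes)
    # 0.5%  -> ~10 copies  (why the project pushes deeper within genes)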

“This new project will increase the sensitivity of disease discovery efforts across the genome five-fold and within gene regions at least 10-fold,” said NHGRI Director Francis S. Collins, M.D., Ph.D. “Our existing databases do a reasonably good job of cataloging variations found in at least 10 percent of a population. By harnessing the power of new sequencing technologies and novel computational methods, we hope to give biomedical researchers a genome-wide map of variation down to the 1 percent level. This will change the way we carry out studies of genetic disease.”

Within a few years this project will be collecting more sequence information in 2 days than was collected in all of last year.

“This project will examine the human genome in a detail that has never been attempted – the scale is immense. At 6 trillion DNA bases, the 1000 Genomes Project will generate 60-fold more sequence data over its three-year course than have been deposited into public DNA databases over the past 25 years,” said Gil McVean, Ph.D., of the University of Oxford in England, one of the co-chairs of the consortium’s analysis group. “In fact, when up and running at full speed, this project will generate more sequence in two days than was added to public databases for all of the past year.”

DNA sequencing technology is advancing much faster than the Moore's Law rate of improvement in computer power, which takes a couple of years to achieve a single doubling. DNA sequencing capacity is increasing by orders of magnitude every few years.
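
One way to compare the two rates is to convert each into an equivalent doubling time. This is a minimal sketch: the 10-fold-every-2-years figure for sequencing below is an illustrative assumption, not a number from the announcement:

    import math

    def doubling_time(fold_improvement: float, period_years: float) -> float:
        """Years needed to double capacity, given a fold improvement over a period."""
        return period_years * math.log(2) / math.log(fold_improvement)

    # Moore's Law: roughly 2x every ~2 years.
    print(f"computing power: ~{doubling_time(2, 2):.1f} year doubling time")

    # Sequencing throughput: assume (illustratively) 10x every 2 years.
    print(f"sequencing:      ~{doubling_time(10, 2):.1f} year doubling time")
    # ~2.0 years vs ~0.6 years: sequencing capacity doubles several times over
    # in the time it takes computing power to double once.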

The 1000 Genomes Project will probably be followed by the Million Genomes Project to find very rare genetic variations. Plus, at the same time we are witnessing a flood of discoveries about what each of the genetic variations mean in terms of disease risk and about which genetic variations cause which differences between people. We are getting very close to the discovery of large numbers of genetic variations that determine cognitive abilities and behavioral tendencies. Within 10 years embryo selection guided by genetic testing will become the rage among those who want to have the highest performing offspring.

By Randall Parker    2008 January 23 09:28 PM   Entry Permalink | Comments (1)
2008 January 20 Sunday
Venture Capital Biotech Investments Surged In 2007

Venture capitalists see biotechnology as about to take off.

Venture capitalists pumped a record $9.1 billion into privately held U.S. biotechnology and medical device companies last year, in hopes of making discoveries they can sell to larger drugmakers.

Biotechnology and medical device companies raised 20 percent more cash in the U.S. last year than in 2006, according to a report by accounting firm PricewaterhouseCoopers and the National Venture Capital Association.

This bodes well for the development of rejuvenation therapies. Biotechnology is going to advance much more rapidly with lots of venture capital investment flowing into start-ups. The amount of money getting invested suggests the venture capitalists think biotechnology has finally advanced far enough that it can really start delivering large returns on investment.

If you look at the chart on page 3 of the full report (PDF) you will see that the second quarter of 2007 (2Q 07) was a stronger quarter than 3Q 07 for venture capital investment overall and for biotechnology and for medical devices and equipment.

But you will also notice one category is leaping upward very rapidly: Industrial/Energy. It jumped about 70%, from $543 million in 2Q 07 to $921 million in 3Q 07. That puts it close to the $1,091 million for biotech in 3Q 07. High oil prices are probably causing a shift of investment from biotech and other areas toward energy. As we move past the peak of oil production and the worldwide decline in available oil starts to take hold, that shift could intensify. So Peak Oil is an obstacle to the development of rejuvenation therapies.

By Randall Parker    2008 January 20 04:02 PM   Entry Permalink | Comments (3)
2007 December 12 Wednesday
Scientists Simulate DNA Nanopore Sequencer

The trend of using computer semiconductor technologies to manipulate biological material promises to revolutionize biological science and biotechnology. Orders of magnitude cost reductions become possible when very small devices are fabricated to manipulate cells and components of cells. Researchers at the University of Illinois have created a simulated design for a nanopore-based DNA sequencer that could drastically cut DNA sequencing costs.

CHAMPAIGN, Ill. — Using computer simulations, researchers at the University of Illinois have demonstrated a strategy for sequencing DNA by driving the molecule back and forth through a nanopore capacitor in a semiconductor chip. The technique could lead to a device that would read human genomes quickly and affordably.

Being able to sequence a human genome for $1,000 or less (which is the price most insurance companies are willing to pay) could open a new era in personal medicine, making it possible to precisely diagnose the cause of many diseases and tailor drugs and treatment procedures to the genetic make-up of an individual.

“Despite the tremendous interest in using nanopores for sequencing DNA, it was unclear how, exactly, nanopores could be used to read the DNA sequence,” said U. of I. physics professor Aleksei Aksimentiev. “We now describe one such method.”

Cheap DNA sequencing is going to change reproductive practices most dramatically. Once embryos can be fully DNA tested and the meaning of all genetic variations becomes known, a substantial fraction of the population will use in vitro fertilization and pre-implantation genetic diagnosis (PIGD or PGD) to select embryos to start pregnancies with. That act of selection will speed up human evolution by orders of magnitude even before we start introducing genetic variations with genetic engineering.

By Randall Parker    2007 December 12 11:05 PM   Entry Permalink | Comments (17)
2007 October 21 Sunday
Orders Of Magnitude Advances In DNA Sequencing Technologies

An article in The Scientist provides a sense of how much DNA sequencing costs have fallen. At the bottom of that page they show the cost of sequencing the Drosophila fly genome on 3 different sequencing instruments. The established ABI 3730 has a sequencing cost for this job of $650,000. The 454 Life Sciences instrument costs $132,000 for the same job. Big cut in cost, right? But if you paid $132,000 you paid too much. Using the Solexa instrument costs $12,500 for the same job. Wow.

The article states that each of these instruments is more appropriate for a different class of problems. For example, RNA sequencing is one kind of problem, and the article reports on a huge advance in how much RNA sequencing one MIT lab can now do with newer machines:

David Bartel at MIT's Whitehead Institute for Biomedical Research and colleagues have been using new sequencing technologies to investigate new classes of small RNAs. With standard sequencing in 2003, Bartel says he was happy to get 4,000 RNAs sequenced. In 2006, using 454 sequencing he could get 400,000, and this year, using the Solexa instrument, he'll get 50 million.

So Bartel is getting 4 orders of magnitude more data per year over just 4 years' time. He can ask questions and look for answers in areas that were totally beyond his reach just 4 years ago. Of course, 4 years from now he'll be able to ask still more questions he can't ask now and get answers at an even faster rate. This pattern of advance makes me very optimistic about how much scientists and bioengineers will be able to accomplish in 10 and 20 years' time. These tools have become so powerful because they've become smaller. The pattern is very similar to the one we see in the computer industry: successive waves of technology become smaller, faster, cheaper, more powerful.
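
Plugging in only the numbers quoted in this entry, a quick calculation backs up both the cost comparison at the top of the post and the 4-orders-of-magnitude estimate:

    import math

    # Cost of sequencing the Drosophila genome on three instruments (quoted above).
    costs = {"ABI 3730": 650_000, "454": 132_000, "Solexa": 12_500}
    for newer in ("454", "Solexa"):
        fold = costs["ABI 3730"] / costs[newer]
        print(f"ABI 3730 -> {newer}: {fold:.0f}-fold cheaper "
              f"(~{math.log10(fold):.1f} orders of magnitude)")

    # RNA reads per project in the Bartel lab: 2003, 2006, 2007 (quoted above).
    reads = {2003: 4_000, 2006: 400_000, 2007: 50_000_000}
    read_fold = reads[2007] / reads[2003]
    print(f"2003 -> 2007: {read_fold:,.0f}-fold more reads "
          f"(~{math.log10(read_fold):.1f} orders of magnitude in 4 years)")
    # ~5-fold, then ~52-fold cheaper; ~12,500-fold more reads, about 4 orders of magnitude.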

When do these advances reach a point where, say, stem cell manipulation to produce useful therapies becomes really easy? There's a point on the road ahead where therapies we can only dream about today become easy to create. Once we can produce replacement parts using cell therapies and organs grown in vats, full body rejuvenation (with the unfortunate exception of the brain) will be within reach. We'll also need really excellent gene therapies to take on the more difficult task of brain rejuvenation. Though cell therapies will still deliver benefits to the brain, for example in the form of rejuvenated blood vessels.

By Randall Parker    2007 October 21 02:02 PM   Entry Permalink | Comments (0)
2007 October 02 Tuesday
Micro-Incubator For Cells Automates Experiments

Silicon technology applied to microfluidics is going to revolutionize biological science.

Integrating silicon microchip technology with a network of tiny fluid channels, some thinner than a human hair, researchers at The Johns Hopkins University have developed a thumb-size micro-incubator to culture living cells for lab tests.

In a recent edition of the journal IEEE Transactions on Biomedical Circuits and Systems, the Johns Hopkins researchers reported that they had successfully used the micro-incubator to culture baby hamster kidney cells over a three-day period. They said their system represents a significant advance over traditional incubation equipment that has been used in biology labs for the past 100 years.

"We don't believe anyone has made a system like this that can culture cells over a period of days autonomously," said Jennifer Blain Christen, lead author of the journal article. "Once it's set up, you can just walk away."

Note the lack of need for daily labor-intensive care. The system is automated. Automation speeds progress, cuts costs, increases consistency and quality.

I expect that the rate of advance in biological sciences and biotechnology is going to greatly accelerate in the next few decades because of microfluidics and computer simulations. Experiments will get done more rapidly and with larger numbers of experiments done in parallel as cheap devices lower the material and labor costs of each experiment.

This ability to accelerate advances makes me very optimistic about the prospects for the development of rejuvenation therapies. Automation will enable the development of much more powerful manipulations of cells and tissues. The automation and miniaturization will enable cheap ways to introduce experimental conditions and measure the results automatically.

By Randall Parker    2007 October 02 11:01 PM   Entry Permalink | Comments (8)
2007 September 04 Tuesday
Genome Testing Market Set To Double?

One market analysis projects the market for gene sequencing and testing equipment to double in the next 5 years.

While the days of high market growth, driven by the human genome project, are behind us, the era of personal genomics is yet to begin. Next generation genomics technologies are breathing new life into the market, and are expected to contribute to the robust growth of the U.S. genomics market between 2005 and 2012. Top industry participants are successfully developing specific applications for each evolutionary stage of the genomics research process, and are likely to maintain revenue streams, while strategically positioning themselves to penetrate the future markets for clinical applications of genomic technologies.

New analysis from Frost & Sullivan (drugdiscovery.frost.com), Strategic Analysis of U.S. Genomics Markets, reveals that revenues in this market totaled $1.85 billion in 2006, and is likely to reach $3.69 billion in 2012.

That doubling in revenue will occur along with a huge increase in the amount of DNA sequence produced per dollar spent. Leading industry figures expect a 3-order-of-magnitude drop in sequencing costs, perhaps as soon as 5 years from now.

Scientists are doing most of the DNA sequencing for their own research purposes today. But at some point in the next 5 to 10 years the desire to learn one's own personal genome sequences will become the biggest source of demand for DNA sequencing services. Also demand will grow for surreptitious DNA sequencing services so that people can learn the DNA sequences of love interests, prospective employees, celebrities, and business competitors. Science will turn up all sorts of practical uses of DNA sequence information and your genetic privacy will become very hard to protect.

By Randall Parker    2007 September 04 11:42 PM   Entry Permalink | Comments (0)
2007 August 30 Thursday
Neurons Grown In Microfluidic Chambers

Microfluidic chips are going to speed up the rate of biological experimentation by orders of magnitude. Here is another example of the power of microfluidics for studying biological systems.

CHAMPAIGN, Ill. — Researchers at the University of Illinois have developed a method for culturing mammalian neurons in chambers not much larger than the neurons themselves. The new approach extends the lifespan of the neurons at very low densities, an essential step toward developing a method for studying the growth and behavior of individual brain cells.

The technique is described this month in the journal of the Royal Society of Chemistry – Lab on a Chip.

“This finding will be very positively greeted by the neuroscience community,” said Martha Gillette, who is an author on the study and the head of the cell and developmental biology department at Illinois. “This is pushing the limits of what you can do with neurons in culture.”

The small scale allows much greater sensitivity of measurement.

First, the researchers scaled down the size of the fluid-filled chambers used to hold the cells. Chemistry graduate student Matthew Stewart made the small chambers out of a molded gel of polydimethylsiloxane (PDMS). The reduced chamber size also reduced – by several orders of magnitude – the amount of fluid around the cells, said Biotechnology Center director Jonathan Sweedler, an author on the study. This “miniaturization of experimental architectures” will make it easier to identify and measure the substances released by the cells, because these “releasates” are less dilute.

“If you bring the walls in and you make an environment that’s cell-sized, the channels now are such that you’re constraining the releasates to physiological concentrations, even at the level of a single cell,” Sweedler said.

The method used to create the microfluidic chambers also had to be improved:

Second, the researchers increased the purity of the material used to form the chambers. Cell and developmental biology graduate student Larry Millet exposed the PDMS to a series of chemical baths to extract impurities that were killing the cells.

This technique allows measurement of cellular secretions.

Millet also developed a method for gradually perfusing the neurons with serum-free media, a technique that resupplies depleted nutrients and removes cellular waste products. The perfusion technique also allows the researchers to collect and analyze other cellular secretions – a key to identifying the biochemical contributions of individual cells.

This technique allows neurons to live longer in culture. Hence more experimental data can be collected and more kinds of processes studied.

This combination of techniques enabled the research team to grow postnatal primary hippocampal neurons from rats for up to 11 days at extremely low densities. Prior to this work, cultured neurons in closed-channel devices made of untreated, native PDMS remained viable for two days at best.

The development of microfluidic devices will bring changes in biotechnology as revolutionary as the changes miniaturization has caused in the electronics industry. Microfluidics will enable massive parallelism and automation of experiments at very low cost.

By Randall Parker    2007 August 30 11:22 PM   Entry Permalink | Comments (0)
2007 August 22 Wednesday
Microfluidic Chip Manipulates Lab Worms

MIT researchers have developed a microfluidic chip that automates research on the worm Caenorhabditis elegans (C. elegans).

Genetic studies on whole animals can now be done dramatically faster using a new microchip developed by engineers at MIT.

The new "lab on a chip" can automatically treat, sort and image small animals like the 1-millimeter C. elegans worm, accelerating research and eliminating human error, said Mehmet Yanik, MIT assistant professor of electrical engineering and computer science.

The rate of advance in biotechnology is going to accelerate because technologies developed by the computer industry to work at ever smaller scales are getting reused to develop chips that can do biological research. The "lab on a chip" approach is going to allow an automation and acceleration of biological experiments that will speed up research by orders of magnitude.

Each worm can get routed through the chip and manipulated in different ways to do a very large variety of experiments in an automated fashion.

"Normally you would treat the animals with the chemicals, look at them under the microscope, one at a time, and then transfer them," Yanik said. "With this chip, we can completely automate that process."

The tiny worms are flowed inside the chip, immobilized by suction and imaged with a high resolution microscope. Once the phenotype is identified, the animals are routed to the appropriate section of the chip for further screening.

The worms can be treated with mutagen, RNAi or drugs before they enter the chip, or they can be treated directly on the chip, using a new, efficient delivery system that loads chemicals from the wells of a microplate into the chip.

"Our technique allows you to transfer the animals into the chip and treat each one with a different gene silencer or a different drug," Yanik said.

Chips can be mass produced at low cost. Chips that can manipulate whole worms can probably manipulate cells or small groups of cells. So this chip has application beyond C. elegans.

By Randall Parker    2007 August 22 12:08 AM   Entry Permalink | Comments (0)
2007 August 17 Friday
Scientists Use Accelerated Evolution To Develop New Enzyme

Natural selection can only select between mutations that occur naturally. The number of mutations that might occur naturally in humans is limited by the number of humans and by which mutations occur in each human. In theory, if one could search through a much larger set of mutations one should be able to find genes which code for better enzymes and better versions of other components of our body. Some scientists have now shown that they can generate and test a large number of potential enzymes to find new designs.

Living cells are not the only place where enzymes can help speed along chemical reactions. Industrial applications also employ enzymes to accelerate reactions of many kinds, from making antibiotics to removing grease from clothing. For the first time, scientists have created a completely new enzyme entirely in vitro, suggesting that industrial applications may one day no longer be limited to enzymes that can be derived from natural biological sources.

HHMI investigator Jack W. Szostak and Burckhard Seelig, a postdoctoral associate in his Massachusetts General Hospital and Harvard Medical School laboratory, show in a paper published in the August 16, 2007, issue of the journal Nature the steps they took to create the artificial enzyme, an RNA ligase that catalyzes a reaction joining two types of RNA chains.

This group at Harvard thinks they can develop better tools to select for enzymes that rise to a higher level of performance.

Szostak's approach relies instead on evolution. The technique enabled the researchers to generate a new RNA ligase without any pre-existing model of how it would work. According to Szostak, “There is no known biological enzyme that carries out this reaction.”

To create one, the researchers assembled a library of 4 trillion small protein molecules - each with slight variations on an initial protein sequence — then subjected those molecules to evolutionary selection in the laboratory. “Here,” Szostak says, “the hard work is in designing a good starting library, and an effective selection process. Since we do not impose a bias on how the enzyme does its job, whatever mechanism is easiest to evolve is what will emerge.”

The enzyme that emerged from the group's experiments is what Szostak characterizes as “small and not very stable, and not very active compared to most biological enzymes.” Nevertheless, Szostak's group is optimistic about their ability to select for versions of the enzyme that are more stable and more active.

Evolution by selection between whole organisms is too slow a way to turn up better designs. Computer simulations and automated lab equipment that generate more real-life variations of proteins will some day allow us to search much more deeply through the space of all possible protein shapes to turn up much better genes. In order to give ourselves higher-performing bodies we will some day replace some human genes with variants found in labs.

By Randall Parker    2007 August 17 12:02 AM   Entry Permalink | Comments (2)
2007 June 11 Monday
Cheaper Method Finds More Genes Involved In Disease

A new report on a set of genes discovered to contribute to a form of heart disease is less interesting for the discovery itself than for the tools that made the discovery possible. Development of a much cheaper and very sensitive technique for measuring messenger RNA expression levels enabled the discovery.

The one-gene, one-disease concept is elegant, but incomplete. A single gene mutation can cause many other genes to start—or stop—working, and it may be these changes that ultimately cause clinical symptoms. Identifying the complete set of affected genes used to appear impossible. Not anymore.

Studying genetically modified mice, researchers led by Christine E. Seidman, a Howard Hughes Medical Institute investigator at Brigham and Women's Hospital, and her husband Jonathan G. Seidman, who is at Harvard Medical School, have identified hundreds of genes with altered expression in preclinical hypertrophic cardiomyopathy. The study, which is coauthored by colleagues at Harvard Medical School, is published in the June 9, 2007, issue of the journal Science. The discovery could help scientists define the pathways that lead to the disease and lead to the discovery of targets for early detection, prevention, and treatment.

A new technique provides a highly sensitive way of measuring gene expression levels.

To obtain a complete picture of the genetic changes associated with the disease, the researchers developed a new gene sequencing technique called polony multiplex analysis of gene expression, or PMAGE. The technique can find messenger RNA transcripts—the directions for making a protein, spun out from the DNA of an active gene—that occur as rarely as one copy for every three cells.

PMAGE drops costs by an order of magnitude.

The industry standard for gene sequencing is serial active gene expression, or SAGE. "There are a couple of labs that have been dedicated to developing this technology," Seidman said, including HHMI investigator Bert Vogelstein at Johns Hopkins and George Church at Harvard. But PMAGE analysis costs between 1/20 and 1/9 of a comparable SAGE analysis, making it more appropriate for the kind of large-scale expression profiling undertaken in this study, she explained. "With SAGE, you can't afford to sequence 4 million transcripts."
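
The "order of magnitude" characterization comes straight from those quoted ratios; a quick check makes it concrete:

    import math

    # PMAGE is quoted as costing between 1/20 and 1/9 of a comparable SAGE analysis.
    for fraction in (1 / 20, 1 / 9):
        savings_fold = 1 / fraction
        print(f"cost ratio {fraction:.3f}: {savings_fold:.0f}-fold cheaper, "
              f"~{math.log10(savings_fold):.1f} orders of magnitude")
    # 20-fold and 9-fold cheaper (about 1.3 and 1.0 orders of magnitude) --
    # the difference between sequencing 4 million transcripts and not affording to.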

These order of magnitude cost drops in assorted techniques for measuring genetic sequences and gene expression levels just keep coming. As the costs of measurement and data collection keep falling the rate at which scientists figure out what genes do keeps accelerating.

Many more order-of-magnitude cost drops for genetics and molecular biology lie in store in the future. A coming enormous flood of discoveries enabled by biotechnological advances will sweep through and revolutionize medicine.

By Randall Parker    2007 June 11 11:56 PM   Entry Permalink | Comments (0)
2007 June 02 Saturday
James Watson DNA Sequence Marks Drop In Costs

DNA double helix co-discoverer James D. Watson has had his DNA sequenced at a much lower cost than previous genome sequencing attempts.

On Thursday, James Watson was handed a DVD containing his entire genome, sequenced in the past few months by 454, a company based in Branford, CT, that's developing next-generation technologies for efficiently reading the genome. At a cost of $2 million, 454 sequenced Watson's genome for roughly an order of magnitude less than it would have cost using traditional machines.

...

The $2 million and two months that it took to sequence Watson's genome is a far cry from the more than ten years and $3 billion required for the Human Genome Project's reference genome, released in 2003.

454 Life Sciences claims their DNA sequencing cost for Watson's genome was only $1 million.

454 Life Sciences Corporation, in collaboration with scientists at the Human Genome Sequencing Center, Baylor College of Medicine, announced today in Houston, Texas, the completion of a project to sequence the genome of James D. Watson, Ph.D., co-discoverer of the double-helix structure of DNA. The mapping of Dr. Watson's genome was completed using the Genome Sequencer FLX(TM) system and marks the first individual genome to be sequenced for less than $1 million.

But 454 and other sequencing technology companies say that costs have already dropped another order of magnitude.

And technology companies like Illumina, Applied Biosystems and 454 Life Sciences, which solicited Dr. Watson’s DNA to prove its abilities, say the price of a complete human genome has already dropped to $100,000. They are competing for a $10 million “X prize” to sequence 100 human genomes within 10 days. (Dr. Watson’s took about two months.)

The rapid advance of DNA testing technologies is possible because DNA is small and DNA testing relies on computer chip technologies. While I've made this claim for years, this latest news provides a much more dramatic demonstration that the trend is really happening. This rate of advance bodes well for future advances across a wide range of biotechnologies.
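
Putting the dollar figures from this entry side by side shows just how steep the curve is. This is a minimal sketch using only the costs quoted above (the Human Genome Project figure is the commonly cited ~$3 billion total):

    import math

    # Approximate cost per human genome at each milestone mentioned above.
    milestones = [
        ("Human Genome Project reference", 3_000_000_000),
        ("Watson genome (reported cost)",  2_000_000),
        ("Watson genome (454's claim)",    1_000_000),
        ("Vendors' 2007 claims",           100_000),
    ]

    _, reference_cost = milestones[0]
    for name, cost in milestones[1:]:
        drop = reference_cost / cost
        print(f"{name}: ${cost:,} "
              f"({drop:,.0f}-fold, ~{math.log10(drop):.1f} orders of magnitude cheaper)")
    # ~1,500-fold, ~3,000-fold, and ~30,000-fold below the reference genome's cost.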

What will come from very cheap DNA sequencing? Lots of things:

  • Discovery of genetic variations that contribute to disease risk.
  • Discovery of genetic variations that contribute to intelligence, personality characteristics, and behavioral tendencies including criminal tendencies.
  • Discovery of "best of breed" genetic variations that contribute to superior athletic performance, vocal ability, dancing ability, and other areas where humans can excel.
  • Discovery of genetic variations that contribute to appearances such as genes for eye and hair color, complexion, hair texture, facial shape, and other attributes that contribute to visual desirability.
  • Acceleration of cancer research as researchers gain the ability to identify many more genetic mutations that occur in cancers as the cancers develop.
  • Acceleration of research into accumulation of mutations that cause aging as researchers gain the ability to cheaply compare youthful and old cells to identify mutations that most contribute to aging.
  • Changes in mating choices as people start testing prospective mates for suitable genetic traits for offspring and even for genetic attributes that contribute to personality attributes.
  • Changes in choices for sperm and egg donors based on genetic testing results.
  • Cheap DNA testing will make use of sperm donors more attractive to women who can't find genetically suitable men to marry.
  • More people will use in vitro fertilization (IVF) to start pregnancies since they'll be able to genetically test a set of embryos to choose one that has more of what they want to pass along to their offspring.

Most of us will live to see full genome testing become commonplace.

By Randall Parker    2007 June 02 10:23 PM   Entry Permalink | Comments (9)
2007 May 15 Tuesday
Large DNA Structural Variations Target Of Study

Single DNA letter differences have garnered most of the popular and scientific attention for the study of human genetic differences. But larger genetic differences such as large copy variations (where people differ in how many copies they have of genes and sections of genes) have come under greater scrutiny as researchers have developed techniques to measure these differences. Studies of large DNA structural variations have begun to bear fruit.

A major new effort to uncover the medium- and large-scale genetic differences between humans may soon reveal DNA sequences that contribute to a wide range of diseases, according to a paper by Howard Hughes Medical Institute investigator Evan Eichler and 17 colleagues published in the May 10, 2007, Nature. The undertaking will help researchers identify structural variations in DNA sequences, which Eichler says amount to as much as five to ten percent of the human genome.

Past studies of human genetic differences usually have focused on the individual “letters” or bases of a DNA sequence. But the genetic differences between humans amount to more than simple spelling errors. “Structural changes — insertions, duplications, deletions, and inversions of DNA — are extremely common in the human population,” says Eichler. “In fact, more bases are involved in structural changes in the genome than are involved in single-base-pair changes.”

Efforts to estimate the amount of genetic difference between people and groups have produced underestimates of the real differences. The newer studies of genetic differences which measure large copy variations (e.g. differences in the number of copies of genes or sections of genes) are finding much larger differences between humans. I suspect these differences show how much local selective pressure humans experienced in each local environmental niche they moved into. We are not as alike as the politically correct would have us believe.

Eichler and colleagues are searching for large copy variations in DNA taken from 62 people.

Using DNA from 62 people who were studied as part of the International HapMap Project, they are creating bacterial “libraries” of DNA segments for each person. The ends of the segments are then sequenced to uncover evidence of structural variation. Whenever such evidence is found, the entire DNA segment is sequenced to catalog all of the genetic differences between the segment and the reference sequence.

The result, says Eichler, will be a tool that geneticists can use to associate structural variation with particular diseases. “It might be that if I have an extra copy of gene A, my threshold for disease X may be higher or lower.” Geneticists will then be able to test, or genotype, large numbers of individuals who have a particular disease to look for structural variants that they have in common. If a given variant is contributing to a disease, it will occur at a higher frequency in people with the disease.

Their use of DNA from people studied in the International HapMap Project creates synergies between the databases generated by each effort. The International HapMap Project involves measuring single letter differences. Some of those single letter differences correlate with structural variations such as deletion mutations and differences in the number of copies of each gene.

We live in the twilight of the era in which little has been known about how genetic variations create human variations in health, appearance, intelligence, personality, and other human qualities. 20 years from now we are going to know in enormous detail which genetic variations matter and how they matter. Continued declines in the cost of DNA testing will provide scientists with orders of magnitude more genetic data than they have now.

Once we know what most of the genetic variations mean I expect many changes in how we live our lives. For example, I expect those involved in romantic courtships to either surreptitiously get DNA samples from potential mates or demand DNA testing info as a prelude to serious courting.

By Randall Parker    2007 May 15 09:38 PM   Entry Permalink | Comments (0)
2007 May 07 Monday
Chip Speeds Molecule Receptor Binding Affinity Search

Here's another example of the trend toward massive parallelism and micro-miniature devices for manipulating biological systems. A chip can monitor the binding affinity of 12,000 molecules at a time.

May 1, 2007 -- A chemist at Washington University in St. Louis is making molecules the new-fashioned way — selectively harnessing thousands of minuscule electrodes on a tiny computer chip that do chemical reactions and yield molecules that bind to receptor sites. Kevin Moeller, Ph.D., Washington University professor of chemistry in Arts & Sciences, is doing this so that the electrodes on the chip can be used to monitor the biological behavior of up to 12,000 molecules at the same time.

Moeller thinks he can automate the production of a variety of molecules and the testing of their affinity to receptors.

But, with an electrochemically addressable computer chip, provided in great abundance by one of his sponsors, CombiMatrix in Seattle, Moeller saw a way of probing the binding of a library with a receptor without the need for washing by putting each member of the molecular library by an electrode that can then be used to monitor its behavior.

The electrochemically addressable chips being used represent a new environment for synthetic organic chemistry, changing the way chemists and biomedical researchers make molecules, build molecular libraries and understand the mechanisms by which molecules bind to receptor sites.

"We believe we can move most of modern synthetic organic chemistry to this electrochemically addressable chip. In this way, a wide variety of molecules can be generated and then probed for their biological behavior in real-time," said Moeller. "It's a tool, still being developed, to map receptors. We're right at the cusp of things."

Cells are covered with and contain a large variety of receptors. The ability to automate the production and screening of compounds that might bind at each kind of receptor can accelerate the search for new drugs and other biomedically useful compounds.

Biochips controllable by computers open up the prospect of highly automated science. Rather than mess around with test tubes, beakers, and the like, scientists will run software that'll create and automatically test millions of chemicals looking for desired interactions.

By Randall Parker    2007 May 07 11:22 PM   Entry Permalink | Comments (0)
2007 May 01 Tuesday
$10 Device Synthesizes DNA

The polymerase chain reaction (PCR) is widely used to copy (amplify) DNA as part of DNA sequencing work. Dr. Victor Ugaz at Texas A&M University has found a way to speed up the PCR copying process at very low cost.

A pocket-sized device that runs on two AA batteries and copies DNA as accurately as expensive lab equipment has been developed by researchers in the US.

The device has no moving parts and costs just $10 to make. It runs polymerase chain reactions (PCRs), to generate billions of identical copies of a DNA strand, in as little as 20 minutes. This is much faster than the machines currently in use, which take several hours.
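
To put the "billions of copies" claim in perspective, here is a rough Python sketch of PCR's exponential amplification; the 30-cycle count and the 90% per-cycle efficiency are my own illustrative assumptions, not figures from the article:

def pcr_copies(start_copies, cycles, efficiency=0.9):
    # Each cycle multiplies the copy count by (1 + efficiency); efficiency = 1.0 is perfect doubling.
    return start_copies * (1.0 + efficiency) ** cycles

print(pcr_copies(1, 30))        # ~2.3e8 copies from one template at 90% efficiency
print(pcr_copies(1, 30, 1.0))   # ~1.07e9 copies at the ideal doubling limit (2**30)
# At an assumed 30 cycles, a 20-minute run works out to roughly 40 seconds per cycle.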

The development of cheap miniature devices is the future of biotechnology and is going to do for it what miniaturization has done for computer technology. Therefore we should expect a huge acceleration in the rate at which biological science advances and in the development of very cheap methods for repairing aged bodies.

Dr. Ugaz uses convection to move the PCR process through a series of steps.

Currently, PCR faces a time issue, as it is typically run in a thermocycler, averaging between one and three hours. Using a convective flow system, the process runs faster and more efficiently, using natural convection and buoyancy forces to create the required temperature cycles.

The abstract for Dr. Ugaz's grant application for this research from 2004 shows he's been working on this problem for a few years:

By eliminating the need for dynamic external temperature control, a convective flow-based system is capable of achieving performance equal to or exceeding that of conventional thermocyclers in a greatly simplified format. This level of simplicity is a significant departure from previous attempts to construct novel thermocycling equipment, where added complexities often far outweigh any potential performance gains. We propose a research effort targeted at developing a new generation of thermocycling equipment offering improved performance at a significantly lower cost, thereby making PCR practical for use in a wider array of settings.

Researchers like Dr. Ugaz who work on methods to speed up and miniaturize the technology used to manipulate biological materials are going to catalyze a revolution in biomedical science and biotechnology.

Thanks to Brock McCusick for the tip.

By Randall Parker    2007 May 01 10:38 PM   Entry Permalink | Comments (12)
2007 April 10 Tuesday
Nanopore Channels Detect DNA Molecules

Some Purdue University researchers have developed a silicon-based device that can anchor strands of DNA in nanopores for use in DNA testing.

WEST LAFAYETTE, Ind. - Researchers at Purdue's Birck Nanotechnology Center have shown how "nanopore channels" can be used to rapidly and precisely detect specific sequences of DNA as a potential tool for genomic applications in medicine, environmental monitoring and homeland security.

The tiny channels, which are 10 to 20 nanometers in diameter and a few hundred nanometers long, were created in silicon and then a single strand of DNA was attached inside each channel.

Other researchers have created such channels in the past, but the Purdue group is the first to attach specific strands of DNA inside these silicon-based channels and then use the channels to detect specific DNA molecules contained in a liquid bath, said Rashid Bashir, a professor in the School of Electrical and Computer Engineering and the Weldon School of Biomedical Engineering.

The reuse of computer industry technologies to manipulate and measure biological materials at very small scales promises to accelerate the rate of advance of biotechnology and biological science.

The method makes use of known sequences to detect affinities between anchored and floating strands of DNA.

"When the DNA molecules in the bath are perfectly complementary to those in the channels, then this current pulse is shorter compared to when there is even a single base mismatch," Iqbal said. Being able to detect specific DNA molecules quickly and from small numbers of starting molecules without the need to attach "labels" represents a potential mechanism for a wide variety of DNA detection applications.

Note that this isn't really sequencing, in which any order of DNA letters can be read. This approach requires strands of DNA with known sequences. So it won't work well for detecting relatively rare genetic variations (and we each carry some rare genetic variations). But nanopore-based DNA sequencers might eventually perform full sequencing of genomes so that all the genetic variations present in one organism can be detected.

By Randall Parker    2007 April 10 09:43 PM   Entry Permalink | Comments (1)
2007 February 22 Thursday
Method Measures Hundreds Of Thousands Of Gene Interactions

Scientists at the University of California at San Francisco have developed a way to quickly measure hundreds of thousands of interactions between genes in parallel.

Sometimes it helps to have a “cheat sheet” when you are working on a problem as difficult as deciphering the relationships among hundreds of thousands of genes. At least that's the idea behind a powerful new technique developed by Howard Hughes Medical Institute (HHMI) researchers to analyze how genes function together inside cells.

The new approach is called epistatic miniarray profiles (E-MAP). The scientists who developed it — HHMI investigator Jonathan S. Weissman, HHMI postdoctoral fellow Sean Collins, and colleague Nevan Krogan, who are all at the University of California, San Francisco — have used E-MAP to unravel a key process that prevents DNA damage during cellular replication.

In the first use of this technique researchers tested for 200,000 different gene interactions.

Using the new technique, which enabled them to rapidly analyze more than 200,000 gene interactions, the researchers have made a discovery that helps explain how cells mark which sections of DNA have been replicated during cell division. If the marking process goes awry, DNA becomes damaged as it is copied.

Hundreds of yeast colonies can be grown on the same agar plate, and their speed of growth can be measured and analyzed automatically with software. Since yeast share many genes with humans, these studies will turn up interactions that provide insight into human biology as well.

The key to E-MAPs is the ability to eliminate single genes or gene pairs and then analyze how each change impacts the growth of yeast colonies. Each yeast colony grows in a tiny spot on an agar plate, and each plate holds around 750 colonies. Software makes it possible to determine the growth rate of each colony and then compare the effect on growth of eliminating one gene at a time with the effect when two genes are simultaneously disabled.
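
That comparison boils down to an epistasis score: how far the double mutant's growth departs from what the two single deletions predict on their own. A minimal sketch with invented fitness values (the multiplicative expectation below is the standard convention for this kind of analysis, not a detail taken from the article):

def epistasis_score(fitness_a, fitness_b, fitness_ab):
    # Deviation of the double mutant from the multiplicative expectation.
    # Near zero: the genes act independently. Strongly negative: synthetic sickness.
    # Positive: one gene's effect masks the other's.
    return fitness_ab - fitness_a * fitness_b

# Invented colony growth rates, normalized so the wild type = 1.0
print(round(epistasis_score(0.9, 0.8, 0.70), 2))  # -0.02: roughly independent genes
print(round(epistasis_score(0.9, 0.8, 0.25), 2))  # -0.47: strong negative interaction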

The scientists looked only at the genes involved in maintaining and replicating chromosomes.

The end result is a database that details the functional relationship of each gene to every other gene studied, revealing cases where the product of one gene depends on a second gene in order to carry out its cellular functions. In this experiment, Weissman's team looked at 743 yeast genes involved in basic chromosome biology. “We wanted to look at everything that had to do with chromosome biology, including DNA replication, DNA repair, transcription to RNA, and so on,” said Weissman. “These are very basic cellular processes that are conserved from yeast to man.”

But this same technique could be applied to other subsets of genes to study other aspects of cellular metabolism. This is the way biology is going: rather than studying one or two things at a time, thousands of genes or interactions get measured at once. Automated equipment and methods for working with large numbers of very small samples allow massive parallelism and orders of magnitude more data collected per experiment.

By Randall Parker    2007 February 22 11:33 PM   Entry Permalink | Comments (0)
2007 February 15 Thursday
Thin Silicon Slice Makes Great Biochemical Filter

A slice of semiconductor silicon turns out to make a useful filter for small biological molecules.

A newly designed porous membrane, so thin it's invisible edge-on, may revolutionize the way doctors and scientists manipulate objects as small as a molecule.

The 50-atom thick filter can withstand surprisingly high pressures and may be a key to better separation of blood proteins for dialysis patients, speeding ion exchange in fuel cells, creating a new environment for growing neurological stem cells, and purifying air and water in hospitals and clean-rooms at the nanoscopic level.

At more than 4,000 times thinner than a human hair, the new barely-there membrane is thousands of times thinner than similar filters in use today.

This silicon is from the crystals routinely grown for computer semiconductor chip manufacturing. So here's yet another example of how the computer semiconductor industry is producing materials moldable into biologically useful devices.

The membrane is a 15-nanometer-thick slice of the same silicon that's used every day in computer-chip manufacturing. In the lab of Philippe Fauchet, professor of electrical and computer engineering at the University, Striemer discovered the membrane as he was looking for a way to better understand how silicon crystallizes when heated.

He used such a thin piece of silicon—only about 50 atoms thick—because it would allow him to use an electron microscope to see the crystal structure in his samples, formed with different heat treatments.

Back in the 1950s, 1960s, and well into the 1970s all computers were seen as large devices that filled up large rooms. But beneath the surface a technological revolution of doublings in power and halvings in costs kept repeating again and again. Suddenly the computer chips became cheap enough to put into desktop personal computers and computing became useful for the masses. Well, the same is going to happen with microfluidic devices and DNA gate arrays.

After years of technological changes visible only inside research labs, the advances in making miniature biochips will reach a critical mass where they suddenly spread out into the mass market. Personal DNA testing in the privacy of your own home will give you your DNA sequence uploaded into your home computer. Also, implantable biochips will let you watch your blood chemistry in real time and microfluidic devices will make it possible for you to synthesize your own drugs and other treatments.

What I see coming: downloadable free software that'll program your home microfluidic biochips to make unapproved and restricted drugs and biochemical components. Just as we can download software that enhances what our computers can do, we will be able to download an ever growing set of programs with instructions for orchestrating microfluidic biochips to make more and more kinds of biochemical products.

By Randall Parker    2007 February 15 12:12 AM   Entry Permalink | Comments (2)
2007 February 13 Tuesday
Chip Separates Single DNA Strands

As regular readers know, I keep arguing that the biological sciences and biotechnology are going to advance at a rate similar to the rate of advance in the computer industry. Why? Computer technologies adapted to labs such as microfluidic devices and DNA gate arrays will displace old style flasks, beakers, human-viewed microscopes, and the like. Here's another example of this trend. Some scientists at U Wisc-Madison have used computer chip fabrication technologies to produce a nanoscale device that can separate out individual strands of DNA in preparation for sequencing them.

Now, however, scientists have developed a quick, inexpensive and efficient method to extract single DNA molecules and position them in nanoscale troughs or "slits," where they can be easily analyzed and sequenced.

The positioning in troughs is a needed precursor step before reading the DNA letters in each strand. So these scientists have moved a big (or incredibly small) step closer toward very small and therefore very cheap DNA sequencing devices.

The technique, which according to its developers is simple and scalable, could lead to faster and vastly more efficient sequencing technology in the lab, and may one day help underpin the ability of clinicians to obtain customized DNA profiles of patients.

The new work is reported this week (Feb. 8, 2007) in the Proceedings of the National Academy of Sciences (PNAS) by a team of scientists and engineers from the University of Wisconsin-Madison.

"DNA is messy," says David C. Schwartz, a UW-Madison genomics researcher and chemist and the senior author of the PNAS paper. "And in order to read the molecule, you have to present the molecule."

Since computer technologies will drive biological technology forward at a rate similar to the computer industry's own pace, the future rate of development of new knowledge, and eventually of new treatments, will far exceed what we've seen in the past.

The computer industry is providing the technologies that are accelerating the rate of biotechnological advancement. Semiconductor fabrication technology provided these researchers the tools they needed to fabricate a device that can separate out single strands of DNA.

To attack the problem, Schwartz and his colleagues turned to nanotechnology, the branch of engineering that deals with the design and manufacture of electrical and mechanical devices at the scale of atoms and molecules. Using techniques typically reserved for the manufacture of computer chips, the Wisconsin team fabricated a mold for making a rubber template with slits narrow enough to confine single strands of elongated DNA.

Sequencing individual DNA strands will cost less than sequencing larger amounts of material. Mass production of chips that can sequence DNA from a single cell will make personal DNA profiles commonplace. Also, the ability to sequence a single cell's DNA will find use in criminology, cancer research, and in the choice of custom cancer treatments.

By Randall Parker    2007 February 13 09:18 PM   Entry Permalink | Comments (8)
2007 January 30 Tuesday
Bioengineers Simulate Human Metabolism

Some bioengineers at UCSD are building a model to simulate parts of human metabolism.

Bioengineering researchers at UC San Diego have painstakingly assembled a virtual human metabolic network that will give researchers a new way to hunt for better treatments for hundreds of human metabolic disorders, from diabetes to high levels of cholesterol in the blood. This first-of-its-kind metabolic network builds on the sequencing of the human genome and contains more than 3,300 known human biochemical transformations that have been documented during 50 years of research worldwide.

Note that these people are engineers, not scientists. They are treating the human body as just another complex system to engineer. They are using simulation just as engineers simulate airplanes, cars, and other systems designed by humans. Their simulations are a prelude to efforts to re-engineer the human body.

Simulations allow more rapid testing of much larger combinations of conditions. For human bodies, simulations will allow testing of drugs and other treatments without the huge sums of money required for real human trials and without waiting for large amounts of real wall-clock time to pass. Plus, simulations can check out dangerous scenarios that would be far too risky to try with real humans.

An increasing portion of all biomedical research and development will take place in simulations. The cost of computing will continue to decline as the software becomes more complex and the data from real lab experiments feed in to make the models increasingly realistic.

The simulation can predict the behavior of actual human cells.

In a report in the Proceedings of the National Academy of Sciences (PNAS) made available on the journal's website on Jan. 29, the UCSD researchers led by Bernhard Ø Palsson, a professor of bioengineering in the Jacobs School of Engineering, unveiled the BiGG (biochemically, genetically, and genomically structured) database as the end product of this phase of the research project.

Each person's metabolism, which represents the conversion of food sources into energy and the assembly of molecules, is determined by genetics, environment, and nutrition. In a demonstration of the power and flexibility of the BiGG database, the UCSD researchers conducted 288 simulations, including the synthesis of testosterone and estrogen, as well as the metabolism of dietary fat. In every case, the behavior of the model matched the published performance of human cells in defined conditions.
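
Metabolic networks like this are typically explored with constraint-based simulation: a stoichiometric matrix S enforces mass balance (S·v = 0 at steady state) and an objective flux is maximized by linear programming. The three-reaction toy model below is my own illustration of that general approach, not the BiGG network itself:

import numpy as np
from scipy.optimize import linprog

# Toy steady-state network: nutrient uptake -> A, A -> B, B -> biomass
# Rows = internal metabolites (A, B); columns = reaction fluxes (uptake, conversion, biomass)
S = np.array([[1, -1,  0],
              [0,  1, -1]])

bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units
c = [0, 0, -1]                            # linprog minimizes, so maximize biomass flux via its negative

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(result.x)  # expected optimum: every flux at 10, i.e. the uptake cap limits biomass production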

This simulation is limited to known interactions and transformations carried out by cellular components. As more interactions are discovered and characterized, these additional pieces of the puzzle can be added to existing simulations such as this one at UCSD. Fortunately, biochips which measure proteins and genes keep getting more powerful. For example, see my post Chip Measures Protein Binding Energies In Parallel.

By Randall Parker    2007 January 30 10:43 PM   Entry Permalink | Comments (2)
2007 January 08 Monday
Chip Measures Protein Binding Energies In Parallel

To accelerate the pace of biological research we need automation and miniaturization to drive down costs. The development of miniature silicon devices that can measure biological systems with a high degree of parallelism is going to drive down costs by orders of magnitude, just as happened in the computer industry. The trend toward labs on a chip continues to accelerate. In a recent example of this trend Stanford microfluidics researcher Stephen Quake and collaborator Sebastian Maerkl have developed a silicon chip that can measure the affinity of transcription factor proteins (which regulate gene expression) for sections of DNA, with simultaneous measurements of 2,400 pairs of proteins and DNA fragments.

To understand complex biological systems and predict their behavior under particular circumstances, it is essential to characterize molecular interactions in a quantitative way, Quake said. Binding energy—the energy with which one protein binds to another or to DNA—is one important quantitative measurement researchers would like to know. But these interactions are highly transient and often involve extremely low binding affinities, so they are difficult to measure on a large scale. To overcome this hurdle, Quake and Maerkl set out to develop a microlaboratory that could trap a type of protein known as a transcription factor. Once the transcription factor was trapped, the scientists hoped to measure the binding energy of the transcription factor bound to specific DNA sequences.

But simply measuring the binding energy between a transcription factor and a single DNA sequence is not enough, Quake said. He said it would be more meaningful to know the energy involved in a transcription factor binding to many different DNA sequences. This would give researchers a more complete picture of the “DNA binding energy landscape” of each transcription factor.

To determine the binding energy landscape, Quake and Maerkl's microlaboratory needed to conduct thousands of binding-energy experiments at once. The apparatus they created, which they called mechanically induced trapping of molecular interactions (MITOMI), consists of 2,400 individual reaction chambers, each controlled by two valves and including a button membrane. Each of the chambers is less than a nanoliter in volume. That's one-billionth of a liter—enough to hold a snippet of human hair only as long as the hair's diameter. The MITOMI apparatus fits over a 2,400-unit DNA microarray, or gene chip, onto which the researchers can dab minute amounts of DNA sequences. Each spot of DNA is then enclosed in its own reaction chamber.
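
To give a sense of how a binding energy falls out of such measurements, here is a hedged sketch that fits a dissociation constant Kd to fraction-bound data and converts it via ΔG = RT·ln(Kd); the titration points are fabricated, and the actual MITOMI analysis may differ in detail:

import numpy as np
from scipy.optimize import curve_fit

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)
T = 298.0           # temperature in kelvin

def fraction_bound(dna_conc_molar, kd):
    # Simple one-site binding isotherm
    return dna_conc_molar / (kd + dna_conc_molar)

# Invented titration: DNA concentrations (molar) and the measured fraction of protein bound
conc = np.array([1e-9, 5e-9, 2e-8, 1e-7, 5e-7])
bound = np.array([0.05, 0.20, 0.48, 0.82, 0.96])

(kd_fit,), _ = curve_fit(fraction_bound, conc, bound, p0=[1e-8])
delta_g = R_KCAL * T * np.log(kd_fit)  # more negative means tighter binding
print(f"Kd ~ {kd_fit:.1e} M, binding energy ~ {delta_g:.1f} kcal/mol")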

Quake wants to use this approach to map all the protein-protein binding energies of a single organism. The ability to use semiconductor industry manufacturing processes to cheaply mass produce silicon chips will make this possible.

The ability to conduct many measurements cheaply and in parallel will eventually enable the use of simulations to carry out much biological research. The measurements of biological phenomena made by silicon chips will serve as useful data to feed into computer simulations.

According to Quake, MITOMI brings scientists closer to an important goal. “To test theories of systems biology, we should now be able to predict biology without making any measurements on the organism itself,” he said.

Technologies from the computer industry are accelerating the rate of advance of biomedical science. This trend is why I expect the defeat of almost all diseases in the lifetimes of some people who are already alive. Technologies to achieve full body rejuvenation will even stop the process of aging.

By Randall Parker    2007 January 08 06:26 PM   Entry Permalink | Comments (0)
2006 December 28 Thursday
Gene Chips Will Lower Genetic Testing Costs

Illumina has a gene chip that can check for the presence of 650,000 distinct single letter differences in human genetic code.

One of the makers of these new gene chips, San Diego-based Illumina, is now looking ahead to the next phase of medical genetics. The company has recently acquired new diagnostic and sequencing technologies, which it plans to use to better identify medically relevant genes. Ultimately, the goal is to diagnose risk of specific diseases and identify the best treatment options for certain patients.

The Illumina chip contains 650,000 short sequences of DNA that can identify SNPs (single nucleotide polymorphisms), carefully selected from a map of human genetic variation known as the HapMap (see "A New Map for Health"). Each SNP represents a spot of the genome that frequently varies among individuals and acts as a signpost for that genomic region. Scientists use the chip to search for genetic variants that are more common in a group of people with the disease of interest.
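
In practice, "more common in a group of people with the disease" is quantified from allele counts, most simply as an odds ratio with a confidence interval. A bare-bones sketch with invented counts for a single SNP:

import math

# Invented allele counts at one SNP: risk allele vs. the alternative allele
case_risk, case_other = 450, 550        # chromosomes from people with the disease
control_risk, control_other = 380, 620  # chromosomes from unaffected controls

odds_ratio = (case_risk * control_other) / (case_other * control_risk)
se_log_or = math.sqrt(1/case_risk + 1/case_other + 1/control_risk + 1/control_other)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# An odds ratio whose confidence interval stays above 1 flags that genomic region for follow-up.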

This chip illustrates why the application of computer industry semiconductor process technologies to biology is going to lower the cost of doing biological research and of biomedical testing and treatment. The computer industry has developed the technology to produce chips in bulk at low and declining cost.

Initially these chips will be used for research. Their lower costs will speed up the search for the meaning of genetic variations. Same sized research budgets will produce more genetic testing results each year as gene chip prices fall. Already the ability to look at 650,000 genetic variations in a single person with a single chip is going to cause a huge increase in the rate of genetic testing.

As these chips help scientists discover the significance of an increasing number of genetic variations the result will be discovery of variations whose existence becomes useful for each individual to know. For example, prospective parents wanting a particular eye or hair color or facial shape will be able to use gene chips to do pre-implantation genetic diagnosis (PGD or PIGD) on embryos fertilized in a lab (in vitro fertilization or IVF).

As soon as SNPs (single nucleotide polymorphisms or single letter differences in genetic code) are discovered for facial features, hair texture, hair and eye color, height, musculature, intelligence, and other attributes the use of gene chips to test for these attributes in embryos will explode. We could be 5 years away from the start of extensive genetic testing of embryos.

A Stanford team says the reading device of their genetic chip design could fit in a shoe box.

Stanford researchers have integrated an array of tiny magnetic sensors into a silicon chip containing circuitry that reads the sensor data. The magnetic biochip could offer an alternative to existing bioanalysis tools, which are costly and bulky.

"The magnetic chip and its reader can be made portable, into a system the size of a shoebox," says Shan Wang, professor of materials science and electrical engineering at Stanford University, in Palo Alto, CA. Its small size, he says, could make it useful at airports for detecting toxins, such as anthrax, and at crime scenes for DNA analysis.

Reductions in the size of genetic testing equipment also reduce the ability of governments to regulate the use of genetic testing. Want to ban genetic testing of employees and prospective employees? Kinda hard to do if a device the size of a shoe box can let you test dandruff flakes or hair droppings from a job interviewee. Easily find out whether the guy or gal has genetic variations associated with greater honesty or a greater proclivity to steal. Throw in the identification of some genetic variations that affect level of work motivation and lots of smaller employers especially will do secret genetic testing of job prospects.

If some governments try to ban genetic testing of embryos expect to see other countries keep embryo genetic testing unregulated. Then watch how a lot more babies get conceived on "vacations" to Caribbean islands or other countries that see big profits in medical tourism. Then for that fraction of the human race which embraces gene testing of embryos the rate of evolution will skyrocket. Anyone who doesn't jump on this will find their offspring left behind in the job market.

By Randall Parker    2006 December 28 01:26 PM   Entry Permalink
2006 November 25 Saturday
One Third Of MIT Engineers Work On Biology Problems

Here's another example of why I expect we will see great increases in the rate of progress in biotechnology.

One-third of the engineers at MIT now work on biological problems, according to Graham C. Walker, MIT biology professor. Yet it can be challenging for biology and engineering students to understand each other.

The divide, deeper than mere semantics, can touch on basic cultural differences, he says. "Even among top-level scientists, our fundamental ways of conducting inquiry differ, depending on our interests and training."

Teaching introductory biology to MIT undergraduates, Walker experiences the disciplinary disconnect firsthand. "It's a constant challenge," he says, "to find ways to make biology comprehensible and relevant to students who think like engineers."

Professor Walker has a $1 million grant from the Howard Hughes Medical Institute to figure out better ways to teach biology to engineers. MIT now has a biological engineering degree program. These are signs of the times.

Biology used to advance at a snail's pace because its tools were so primitive. The influx of talent from semiconductor engineering and other engineering disciplines has greatly sped up the rate of progress in the field and promises to speed it up by orders of magnitude in the future. The field of microfluidics chases the idea of highly automated and cheap labs on a chip.

Imagine a chip made using semiconductor processes that has lots of reaction vessels and miniature tubes and valves, all digitally controllable. No more pipettes. No petri dishes. No lab techs making mistakes from the tedium. Software will be able to carry out long experimental sequences. Computer programs with limited domain-specific artificial intelligence will even be able to generate hypotheses and carry out experiments. That's where the world of biology is going.

Pure simulation is also going to play a larger role in biological research. Rather than use real cells and real organisms, an increasing fraction of biological research will take place in computer simulations using math and the known rules of behavior of biological components and systems. The faster computers become, the more biological research will become doable in mathematical models written in software.

By Randall Parker    2006 November 25 01:49 PM   Entry Permalink | Comments (4)
2006 October 21 Saturday
Biochip Speeds Cell Electrical Measurements 60 Times

Advances in instrumentation are accelerating the rate at which scientists can do experiments.

WEST LAFAYETTE, Ind. — Purdue University researchers have developed a biochip that measures the electrical activities of cells and is capable of obtaining 60 times more data in just one reading than is possible with current technology.

In the near term, the biochip could speed scientific research, which could accelerate drug development for muscle and nerve disorders like epilepsy and help create more productive crop varieties.

"Instead of doing one experiment per day, as is often the case, this technology is automated and capable of performing hundreds of experiments in one day," said Marshall Porterfield, a professor of agricultural and biological engineering who leads the team developing the chip.

The device works by measuring the concentration of ions — tiny charged particles — as they enter and exit cells. The chip can record these concentrations in up to 16 living cells temporarily sealed within fluid-filled pores in the microchip. With four electrodes per cell, the chip delivers 64 simultaneous, continuous sources of data.

This additional data allows for a deeper understanding of cellular activity compared to current technology, which measures only one point outside one cell and cannot record simultaneously, Porterfield said. The chip also directly records ion concentrations without harming the cells, whereas present methods cannot directly detect specific ions, and cells being studied typically are destroyed in the process, he said. There are several advantages to retaining live cells, he said, such as being able to conduct additional tests or monitor them as they grow.

One (I think mistaken) argument made against the practicality of pursuing Aubrey de Grey's SENS (Strategies for Engineered Negligible Senescence) proposal to reverse aging is that the problems we need to solve in order to reverse aging won't become solvable in the next few decades. Specifically, one group of critics recently argued that a rate of biotechnological advance that is faster than the semiconductor industry's Moore's Law would be required in order to solve the problems needed to reverse the aging process within the lifetimes of people currently alive. But I think these critics are missing an obvious reason why biotechnology can advance more rapidly than computer semiconductor technology.

The biochip reported above speeds up the collection of cellular metabolic information with a leap forward many times greater than the rate at which Intel co-founder Gordon Moore predicted computers would become faster. It is very important to notice why this advance was possible: the semiconductor industry's advances in manipulating matter at very small scales, which took decades to achieve, are now being harnessed to make sensors and other automated instrumentation for biological experimentation. The development of biochips which manipulate and measure matter at small scales can therefore happen much more rapidly than the underlying semiconductor advances did.

In a nutshell, we have the technology to do lots of small scale manipulations and measurements. Scientists and engineers who apply that technology to biological problems can therefore make huge leaps in the development of capabilities to study and manipulate biological systems.

By Randall Parker    2006 October 21 10:06 PM   Entry Permalink | Comments (3)
2006 October 04 Wednesday
10 Million Dollar Archon X Prize For Genomics

The X Prize Foundation has announced the largest medical prize in modern history with the goal of speeding up the development of DNA sequencing technology.

Washington D.C. (October 4, 2006) — The X PRIZE Foundation announced today the $10 million Archon X PRIZE for Genomics — A multi-million dollar incentive to create technology that can successfully map 100 human genomes in 10 days. The prize is designed to usher in a new era of personalized preventative medicine and stimulate new avenues of research and development of medical sciences.

Lots of big names have lined up in support of this prize.

On hand to help the X PRIZE Foundation make this historic announcement were some of the industry's top minds representing the full landscape of this exciting new foray into biotechnology. Speakers at the press conference included Dr. J. Craig Venter, Chairman and CEO of the J Craig Venter Institute; Dr. Francis Collins, Director of the National Human Genome Research Institute; Anousheh Ansari, First Female Private Space Explorer and Co-Founder & Chairman of Prodea Systems, Inc.; Sharon Terry, President and CEO of the Genetic Alliance; Billy Tauzin, President and CEO of the Pharmaceutical Research and Manufacturing Association; and Dr. Stewart Blusson, President of Archon Minerals. Archon Minerals is the title sponsor of the Archon X PRIZE for Genomics after a generous multi-million dollar donation by Dr. Blusson.

Some argue that cheap DNA sequencing will revolutionize medicine by making personalized treatments possible.

Rapid genome sequencing is widely regarded as the next great frontier for science and will eventually allow doctors to determine an individual's susceptibility to disease and even the genetic links to cancer. Mapping your genetic code is like taking an X-ray, allowing doctors to see inside your genetic past and eventually help determine your genetic future.

Only after we have access to affordable and fast genome sequencing will we be able to take advantage of the countless benefits. This technology helps us refine and perfect our knowledge and practice of preventive medical treatments and procedures. Preventing disease is the next best thing to curing disease.

The ability to compare the DNA sequences and medical histories of millions of people will lead to the identification of genetic variations that provide many different advantages. But I suspect the biggest benefit will come from identification of genetic variations that determine levels of intelligence and differences in personality.

The X Prize Foundation founder thinks the prize model will speed up medical advances.

"The X PRIZE Foundation has created a unique philanthropic prize model designed to stimulate research and accelerate development of radical breakthroughs that will benefit humanity," explains Dr. Peter H. Diamandis, Founder and Chairman of the X PRIZE Foundation. "The Archon X PRIZE for Genomics will revolutionize personalized medicine and custom medical treatment, forever changing the face of medical research and making genome sequencing affordable and available in every hospital and medical care facility in the world."

Personalized medicine will come in many forms. For example, some drugs are dangerous to a small fraction of the population and now are kept off the market because there's no way to identify who is at risk. If we all knew our DNA sequences then doctors could choose drugs that are compatible with our personal sequences and optimized for our sequences.

Preventive measures could be tailored to our individual risks too. If we each knew which genetic variations we carry that increase or decrease our risks for various diseases, we could choose lifestyles that reduce some of our greatest risks. Though I have to say the potential to do this has been overstated. For some genetic risks there's not a diet or exercise program that is going to help.

Drugs tailored to our personal genetic sequences are still only going to be drugs. Risk profiles for diseases by themselves won't prevent the diseases. What we need are repair capabilities and for that we need stem cell therapies and gene therapies. Lots of DNA sequencing information will help in the development of stem cell and gene therapies. But the development of those therapies will depend more heavily on instrumentation advances in areas other than DNA sequencing.

Three teams have already signed up for the competition. VisiGen Biotechnologies, Inc. is based in Houston, TX and is led by Susan Hardin, Ph.D. 454 Life Sciences is a Connecticut-based company headed by Christopher McLeod. The third team is made up of the Westheimer Institute for Science and Technology, the Foundation for Applied Molecular Evolution, and Firebird Biomolecular Sciences LLC; it is based in Gainesville, FL, with Steve Benner as team leader. Many other companies have inquired and more teams are expected to register soon.

Highly visible competition is a good thing. Lots of teams will work harder not just for money but for fame too.

Is faster DNA sequencing technology the greatest tool we need to accelerate the rate of advance of biotechnology? I do not think so. What we really need are better tools for watching how genes control each other. Conceptually what we need is a genomic debugger that lets scientists watch how each step of genetic regulation takes place. Which gene activation leads to which other genes getting activated or deactivated and by what mechanisms?

We also need faster and cheaper ways to measure methylation patterns on DNA. Methyl groups (a carbon with 3 hydrogens attached to it) get placed on cytosine bases along the DNA double helix to control which genes get turned on. DNA methylation patterns are part of a larger category of information called epigenetic state. The epigenetic state of a cell determines whether it is a liver cell or kidney cell or embryonic stem cell or other cell type.

In order to develop stem cell therapies and to grow replacement organs and other body parts we need the ability to cheaply and rapidly read and manipulate epigenetic state. Prizes which reward the development of better tools for reading and setting epigenetic state would do more to accelerate biomedical progress than prizes for faster DNA sequencing. But DNA sequencing is easier to describe and has gotten far more publicity.

The X Prize Foundation has not yet worked out exact criteria for what constitutes success in sequencing the genomes.

Some experts foresee a medical revolution if the cost of DNA sequencing could be brought down low enough that a person’s genome could be decoded as part of routine treatment. Several companies have developed novel methods of sequencing, with the eventual goal of decoding a human genome for as little as $10,000.

The X Prize Foundation has not yet determined a critical parameter of its prize, that of how complete the genomes need to be. The present “complete” human genome has many gaps and is only as complete as present technology can make it.

The prize needs criteria on how to check the error rate of sequencing and also what percentage of the genome has to be sequenced. Some parts of the genome are extremely hard to sequence and also have little value. So it does not make sense to require contestants to sequence those parts.

Thanks to Methuselah Mouse Prize co-founder David Gobel for the heads-up on this announcement.

By Randall Parker    2006 October 04 09:16 PM   Entry Permalink | Comments (1)
2006 October 01 Sunday
Dip Pen Nanolithography Will Accelerate Biosciences

A device that can write very small patterns of material will accelerate research in the biological sciences.

Researchers have developed a device that uses 55,000 perfectly aligned, microscopic pens to write patterns with features the size of viruses. The tool could allow researchers to study the behavior of cells at a new rate of speed and level of detail, potentially leading to better diagnostics and treatments for diseases such as cancer.

The device builds on a technique called dip-pen nanolithography, which was first developed in 1999 by Chad Mirkin, professor of chemistry, medicine, and materials science and engineering at Northwestern University. In that system, the tip of a single atomic force microscope (AFM) probe is dipped in selected molecules, much as a quill pen would be dipped in ink. Then the molecules slip from the tip of the probe onto a surface, forming lines or dots less than 100 nanometers wide. Their size is controlled by the speed of the pen.

Because it operates at room temperature, the dip-pen tool is particularly useful for working with biological materials, such as proteins and segments of DNA that would be damaged by high-energy methods like electron beam lithography.

Both biological research and computing will benefit from this device.

"This development should lead to massively miniaturized gene chips, combinatorial libraries for screening pharmaceutically active materials and new ways of fabricating and integrating nanoscale or even molecular-scale components for electronics and computers," said Chad A. Mirkin, director of Northwestern's International Institute for Nanotechnology and George B. Rathmann Professor of Chemistry, who led the research.

"In addition, it could lead to new ways of studying biological systems at the single particle level, which is important for understanding how cancer cells and viruses work and for getting them to stop what they do," he said. "Essentially one can build an entire gene or protein chip that fits underneath a single cell."

The rate of advance of biological research and biotechnology is increasingly driven by technology developed in the semiconductor industry. The same technological trends that make computer power grow so rapidly are now accelerating the rate at which biotechnology advances.

By Randall Parker    2006 October 01 10:04 PM   Entry Permalink | Comments (0)
Superoxide Assay To Accelerate Aging Studies

A new method to measure superoxide promises to accelerate the study of how free radicals cause aging.

The OSU scientists, in collaboration with Molecular Probes-Invitrogen of Eugene, Ore., found a chemical process to directly see and visualize "superoxide" in actual cells. This oxidant, which was first discovered 80 years ago, plays a key role in both normal biological processes and – when it accumulates to excess – the destruction or death of cells and various disease processes.

"In the past, our techniques for measuring or understanding superoxide were like blindly hitting a box with a hammer and waiting for a reaction," said Joseph Beckman, a professor of biochemistry and director of the OSU Environmental Health Sciences Center. "Now we can really see and measure, in real time, what's going on in a cell as we perform various experiments."

This technique allowed them to learn in 3 months as much as they did in the previous 15 years. So that's a factor of 60 speed up in the rate at which they can figure out certain aspects of how cells work.

In research on amyotrophic lateral sclerosis, or Lou Gehrig's Disease, which is one of his lab's areas of emphasis, Beckman said they have used the new technique to learn as much in the past three months about the basic cell processes as they did in the previous 15 years. Hundreds of experiments can now rapidly be done that previously would have taken much longer or been impossible.

Theories of aging that cast the mitochondrion as a sort of Achilles' heel will be testable using this new method of measuring superoxide.

"This will enable labs all over the world to significantly speed up their work on the basic causes and processes of many diseases, including ALS, arthritis, diabetes, Parkinson's disease, Alzheimer's disease, heart disease and others," Beckman said. "And it should be especially useful in studying aging, particularly the theory that one cause of aging is mitochondrial decay."

The rate of advance of biological science and biotechnology is accelerating. Previously untestable hypotheses are becoming testable. Previously highly time-consuming methods of measurement are being replaced with faster, cheaper, and more automated methods of measurement as new sensors and new assays are developed. This is why I'm optimistic that many who are alive today will live to see the defeat of aging.

By Randall Parker    2006 October 01 09:22 PM   Entry Permalink | Comments (0)
2006 September 25 Monday
Faster Method To Read Epigenetic Information Tested

Methyl groups (a carbon and 3 hydrogens attached to a larger molecule) get placed on cytosine bases of the DNA double helix in order to control gene expression. A group of scientists has found that embryonic stem cells have unique patterns of methyl group attachment (methylation) that make them different from other cell types.

San Diego, Calif. -- Scientists from the Burnham Institute for Medical Research (BIMR) and Illumina Inc., in collaboration with stem cell researchers around the world, have found that the DNA of human embryonic stem cells is chemically modified in a characteristic, predictable pattern. This pattern distinguishes human embryonic stem cells from normal adult cells and cell lines, including cancer cells. The study, which appears online today in Genome Research, should help researchers understand how epigenetic factors contribute to self-renewal and developmental pluripotence, unique characteristics of human embryonic stem cells that may one day allow them to be used to replace diseased or damaged cells with healthy ones in a process called therapeutic cloning.

Embryonic stem cells are derived from embryos that are undergoing a period of intense cellular activity, including the chemical addition of methyl groups to specific DNA sequences in a process known as DNA methylation. The methylation and demethylation of particular DNA sequences in the genome are known to have profound effects on cellular behavior and differentiation. For example, DNA methylation is one of the critical epigenetic events leading to the inactivation of one X chromosome in female cells. Failure to establish a normal pattern of DNA methylation during embryogenesis can cause immunological deficiencies, mental retardation and other abnormalities such as Rett, Prader-Willi, Angelman and Beckwith-Wiedemann syndromes.

This result is entirely unsurprising. Methylation is an expected way that cells get controlled to act like one cell type rather than another. For example, methyl groups can prevent proteins from binding to a section of DNA and thereby prevent the cellular machinery from reading specific genes that have been methylated.

The way more exciting result here is the development of technology for measuring methylation of hundreds of sites on DNA at a time.

Until recently, DNA methylation could only be studied one gene at a time. But a new microarray-based technique developed at Illumina enabled the scientists conducting this new study to simultaneously examine hundreds of potential methylation sites, thereby revealing global patterns. "Analyzing the DNA methylation pattern of hundreds of genes at a time opens a new window for epigenetic research," says Dr. Jian-Bing Fan, director of molecular biology at Illumina. "Exciting insights into development, aging, and cancer should come quickly from understanding global patterns of DNA methylation."

The ability to rapidly read large amounts of epigenetic information is more important than the results from any one set of experiments that collect epigenetic data.

This report provides yet another illustration of how advances in instrumentation for biological systems are accelerating the rate of advance of biological science and technology.

To examine global DNA methylation patterns in human embryonic stem cells, the researchers analyzed 14 human embryonic stem cell lines from diverse ethnic origins, derived in several different labs, and maintained for various times in culture. They tested over 1500 potential methylation sites in the DNA of these cells and in other cell types and found that the embryonic stem cells shared essentially identical methylation patterns in a large number of gene regions. Furthermore, these methylation patterns were distinct from those in adult stem cells, differentiated cells, and cancer cells.
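
A hedged sketch of how such shared methylation signatures show up computationally: compute pairwise distances between per-site methylation profiles and cluster the samples. The six-sample, five-site matrix below is invented purely for illustration:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Invented methylation fractions (rows = samples, columns = CpG sites)
profiles = np.array([
    [0.9, 0.1, 0.8, 0.2, 0.7],  # embryonic stem cell line 1
    [0.9, 0.2, 0.8, 0.1, 0.7],  # embryonic stem cell line 2
    [0.8, 0.1, 0.9, 0.2, 0.6],  # embryonic stem cell line 3
    [0.2, 0.8, 0.1, 0.9, 0.3],  # differentiated cell type 1
    [0.1, 0.9, 0.2, 0.8, 0.2],  # differentiated cell type 2
    [0.2, 0.7, 0.2, 0.9, 0.3],  # differentiated cell type 3
])

tree = linkage(pdist(profiles), method="average")
print(fcluster(tree, t=2, criterion="maxclust"))
# Expected: the three stem cell lines fall into one cluster, the differentiated cells into the other.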

"Our results suggest that therapeutic cloning of patient-specific human embryonic stem cells will be an enormous challenge, as nuclei from adult cells will have to be epigenetically reprogrammed to reflect the specific DNA methylation signature of normal human embryonic stem cells," explains Dr. Jeanne Loring, co-director of the stem cell center at BIMR. "This reinforces the need for basic research directed at understanding the fundamental biology of human embryonic stem cells before therapeutic uses can be considered."

Some day techniques to change methylation patterns across the genome will be found. Those techniques will make it much easier to change cells into any desired cell type for therapeutic purposes. The ability to rapidly read methylation patterns will make it easier to test candidate techniques in the development of ways to change methylation patterns. So advances in reading methylation patterns will lead to advances in growing replacement organs and in creating stem cell therapies.

Another point: the reported increase in the ability to read methylation patterns sounds like it was orders of magnitude. Some people argue that anti-aging therapies are a distant prospect because even at Moore's Law rates of advance (a doubling of computer power about every 18 to 24 months) it will take a long time before biotechnology can reverse full body aging. But the advance reported above for reading methylation patterns sounds much faster than Moore's Law, and biotechnology can advance more rapidly than computer technology did because biotechnology is in the process of harnessing techniques first developed for the computer industry over a period of decades.

Think of it this way: The move to put biochemical tests and sensors on chips amounts to jumping biotechnology over onto computer semiconductor technology. But that semiconductor technology took decades to develop and now biotechnology is starting to get moved over onto semiconductor devices. This allows biotech to capture in a relatively short period of time the gains of decades of semiconductor technology. So I'm not surprised to read about sudden orders of magnitude increases in the ability to do biological experiments using silicon chips.

By Randall Parker    2006 September 25 11:46 PM   Entry Permalink | Comments (0)
2006 September 07 Thursday
Project To Produce Knockouts For All Mouse Genes

Efforts to collect information about gene, cell, and organism function are becoming increasingly systematic and comprehensive. For example, complete genetic sequencing is being done on an ever increasing list of species. Activity levels of thousands of genes get measured simultaneously using gene array chips. As costs fall and gene array chips become more powerful, gene expression gets measured in more tissue types and under more conditions (e.g. at different ages and in the presence of different kinds of illnesses). New techniques allow quick measurement of the presence of many proteins and other compounds at the same time.

Efforts to develop lab animal strains are similarly becoming more ambitious. For many years scientists have created mouse breeds in which specific genes are deactivated. This allows scientists to more easily discover what purposes each gene serves in cells and organisms. The US National Institutes of Health are joining 2 other efforts in an ambitious project to produce gene knockout mouse strains for each of the genes found in mice.

BETHESDA, Md., Thurs., Sept. 7, 2006 – The National Institutes of Health (NIH) today awarded a set of cooperative agreements, totaling up to $52 million over five years, to launch the Knockout Mouse Project. The goal of this program is to build a comprehensive and publicly available resource of knockout mutations in the mouse genome. The knockout mice produced from this resource will be extremely useful for the study of human disease.

Some knockouts will be fatal; embryos will fail to develop without some key genes. But discovering which genes are absolutely essential is itself quite useful knowledge. Also, once each gene has been knocked out, knocking out pairs of genes in the same mouse can yield yet more useful knowledge. What is harder and done less often: creating mice that express too much of each gene. Also, genes can be engineered so that they can be turned on and off by administered drugs.

The NIH effort joins two other efforts already underway.

The NIH Knockout Mouse Project will work closely with other large-scale efforts to produce knockouts that are underway in Canada, called the North American Conditional Mouse Mutagenesis Project (NorCOMM), and in Europe, called the European Conditional Mouse Mutagenesis Program (EUCOMM). The objective of all these programs is to create a mutation in each of the approximately 20,000 protein-coding genes in the mouse genome.

"Knockout mice are powerful tools for exploring the function of genes and creating animal models of human disease. By enabling more researchers to study these knockouts, this trans-NIH initiative will accelerate our efforts to translate basic research findings into new strategies for improving human health," said NIH Director Elias A. Zerhouni, M.D. "It is exciting that so many components of NIH have joined together to support this project, and that the NIH Knockout Mouse Project will be working hand-in-hand with other international efforts. This is scientific teamwork at its best."

This ambitious project has become possible due to technological advances in methods to manipulate DNA.

Knockout mice are lines of mice in which specific genes have been completely disrupted, or "knocked out." Systematic disruption of each of the 20,000 genes in the mouse genome will allow researchers to determine the role of each gene in normal physiology and development. Even more importantly, researchers will use knockout mice to develop better models of inherited human diseases such as cancer, heart disease, neurological disorders, diabetes and obesity. Recent advances in recombinant DNA technologies, as well as completion of the mouse genome sequence, now make this project feasible.

The technological advances in tools will keep coming. Advances in science do not come from a simple constant-rate accumulation of knowledge. Advances in science and technology produce techniques and tools that accelerate the rate at which experiments can be done and make it possible to do new kinds of experiments and to measure and manipulate more kinds of systems and phenomena.

Three quarters of the genes in mice do not yet have knockout versions.

To date, academic researchers around the world have created mouse knockouts of about 4,000 genes. In addition, a random disruption strategy has been used by the International Gene Trap Consortium to mutate 8,000 mouse genes. Due to some overlap between these efforts, about 15,000 genes remain to be knocked out in the mouse genome.

Genetic similarities between species mean the identification of purposes for mouse genes will yield insights into corresponding genes in humans and other organisms. We will learn the purposes of human genes much sooner due to the ability of scientists to knock out genes in mice.

By Randall Parker    2006 September 07 07:48 PM   Entry Permalink | Comments (1)
2006 May 01 Monday
Berkeley Group Puts DNA Sequencer On Chip

UC Berkeley chemistry professor Richard A. Mathies, along with his Ph.D. candidates Robert G. Blazej and Palani Kumaresan, has taken the standard Sanger process for DNA sequencing and shrunk a DNA sequencer down to a chip.

The upshot, Mathies says, is that the chips' small size and integration should reduce reagent and personnel costs to the point where it should be possible to sequence a complete genome for as little as $50,000. Mathies, whose team publishes its work online this week in Proceedings of the National Academy of Sciences, says UC Berkeley has licensed patents on the technology to Microchip Biotechnologies of Dublin, California.

They've combined all the steps on a chip.

The hand-held device is able to combine these three main sequencing steps – thermal cycling (to generate the different length DNA strands); sample purification; and capillary electrophoresis – into a single automated process. The size of the device means it requires a fraction of the expensive chemical reagents normally needed for DNA sequencing, greatly reducing the running costs.

Small means cheap. The smaller the cheaper.

Here's the abstract of the PNAS paper:

An efficient, nanoliter-scale microfabricated bioprocessor integrating all three Sanger sequencing steps, thermal cycling, sample purification, and capillary electrophoresis, has been developed and evaluated. Hybrid glass-polydimethylsiloxane (PDMS) wafer-scale construction is used to combine 250-nl reactors, affinity-capture purification chambers, high-performance capillary electrophoresis channels, and pneumatic valves and pumps onto a single microfabricated device. Lab-on-a-chip-level integration enables complete Sanger sequencing from only 1 fmol of DNA template. Up to 556 continuous bases were sequenced with 99% accuracy, demonstrating read lengths required for de novo sequencing of human and other complex genomes. The performance of this miniaturized DNA sequencer provides a benchmark for predicting the ultimate cost and efficiency limits of Sanger sequencing.
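
For scale, reading the 99% figure as a per-base accuracy (my assumption) maps it onto the Phred quality convention, Q = -10·log10(error probability), and gives an expected error count for a 556-base read:

import math

read_length = 556   # bases, from the abstract
accuracy = 0.99     # treated here as a per-base accuracy, which is an assumption

error_prob = 1.0 - accuracy
phred_q = -10.0 * math.log10(error_prob)
expected_errors = read_length * error_prob

print(f"Q{phred_q:.0f}, ~{expected_errors:.1f} expected miscalled bases per read")
# -> Q20, roughly 5-6 erroneous base calls in a 556-base read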

Here's the company which is commercializing the UC Berkeley work:

Microchip Biotechnologies Inc. (MBI) has been formed to commercialize leading-edge microfluidic and nanofluidic sample preparation technologies for the DNA sequencing, biodefense, and other life sciences applications. Founded in July 2003 by leaders in the field of life science instrumentation who helped create the genomics revolution and by leaders in microfluidics, MBI is developing breakthrough patent-pending nanofluidic technologies into products that meet the needs for high-quality nanoscale sample preparation.

Based in part on technologies exclusively optioned from the University of California at Berkeley, MBI is creating a product platform to produce a family of scaleable NanoBioProcessor™ products that perform sample preparation as a front-end for existing and future analytical instruments. The NanoBioProcessor™ will introduce mini-robotics with on-chip nanofluidic processing controlled by on-chip MOV™ valves and pumps. This novel technology to create arrays of valves and pumps has the potential to revolutionize fluidics and make complex devices manufacturable and affordable. MBI is also developing bead-based technologies to capture and purify biological materials before on-chip bioprocessing.

With so many academic and commercial research groups trying to drive down the cost of DNA sequencing by orders of magnitude, and with so many demonstrating promising technologies, the days of high priced DNA sequencing look to be short-lived. I'll be surprised if DNA sequencing for a person costs more than $10,000 ten years from now.

Really cheap DNA sequencing will lead to massive comparisons of DNA sequence differences between people in combination with large numbers of details collected about each person. Medical histories, life histories, IQs, personality tests, and other measures of each person will get compared in combination with their DNA sequences in order to identify all the genetic variations that cause differences in who we are.

By Randall Parker    2006 May 01 10:06 PM   Entry Permalink | Comments (1)
2006 April 30 Sunday
New Technology To Lower DNA Synthesis Costs

Xiaolian Gao, a University of Houston biology and biochemistry professor and adjunct professor in chemistry and biomedical engineering, says she's developing technology that will lower the cost of gene synthesis by two orders of magnitude.

This developing technology by Gao and her associates has the potential to significantly reduce the economic barrier to make complete functioning organisms that can produce energy, neutralize toxins and make drugs and artificial genes. These organisms may eventually be used in alternative energy sources, natural product synthesis and discovery of novel protein therapeutic molecules, as well as in gene therapy procedures to treat genetic disorders, such as Parkinson's and diabetes, that could yield profound benefits for human health and quality of life.

"Synthetic genes are like a box of Lego building blocks," Gao said. "Their organization is very complex, even in simple organisms. By making programmed synthesis of genes economical, we can provide more efficient tools to aid the efforts of researchers to understand the molecular mechanisms that regulate biological systems. There are many potential biochemical and biomedical applications."

Using current methods, programmed synthesis of a typical gene cluster costs thousands of dollars. The system developed by Gao and her partners employs digital chemistry technology similar to that used in making computer chips and thereby reduces cost and time factors drastically. Her group estimates that the new technology will be about one hundred times more cost- and time-efficient than current technologies.

The harnessing of electronic technologies to solve problems in biological science and biotechnology will lower costs and accelerate the rate of advance by orders of magnitude. Both the reading of DNA (sequencing) and the writing (synthesis) will become extremely cheap.

By Randall Parker    2006 April 30 02:14 PM   Entry Permalink | Comments (3)
TJ Rodgers Sees Demand For Nanopore DNA Sequencers

T.J. Rodgers, founder and CEO of Cypress Semiconductor, says start-up companies working to automate DNA sequencing are looking for semiconductor manufacturers that can build nanopore devices for them.

EET: A few colleges--Stanford, MIT, others--have created interdisciplinary programs to marry electrical engineering and biology. Do you see a coming together of those disciplines in the commercial space anytime soon?

Rodgers: I have two data points. In gene sequencing, once you understand the sequence of genes in an organism, with bioengineering you can go in and change one gene and modify the characteristic of a plant, as opposed to [just doing] a crude DNA swap where you put two plants together. For instance, you can [engineer] a grape that's more drought-tolerant than it was before but still makes great pinot noir.

[The technology needed] to understand the gene sequence--that's going to go to silicon. There are startups in Silicon Valley coming into our company saying they want us to build holes so small that one DNA molecule will fit in them. They want to watch it fluoresce and find out what it is. And they want millions of chips.

Advances in semiconductor technology are driving advances in biotechnology. Once DNA sequencing can be done on chips the cost will plummet by orders of magnitude.

Parenthetically, Cypress's ownership of photovoltaics maker SunPower now provides Cypress with most of its market capitalization.

Cypress owns 85 percent of SunPower, which went public in November. It is valued near $2.5 billion, with its stock trading at $17.24. SunPower's capitalization is about $2.38 billion; since its offering, its stock has risen from $24.42 to a closing high of $44.07. This suggests that much of the value of Cypress these days comes from SunPower.

If Rodgers could find a way to build useful semiconductors for a Cypress-funded DNA sequencing start-up then he could repeat the tentative success of his SunPower investment.

By Randall Parker    2006 April 30 02:03 PM   Entry Permalink | Comments (0)
Founder Population Genetic Scans Accelerate

An article in MIT's Technology Review reports on a Canadian company, Genizon Biosciences, that is using the genetic homogeneity of the French Quebec founder population to investigate genetic causes of disease, just as deCODE Genetics does in Iceland. To someone with an interest in the accelerating rate of biotechnological advance (FuturePundit, and I hope quite a few readers of FuturePundit), the most interesting part of the article mentions that Genizon has used improvements in gene chip technology to speed up its genetic studies by more than an order of magnitude.

The initial Genizon map, completed in 2004, was created from 1,500 members of the Quebec founder population and had about 81,000 markers. Genizon has now improved its gene hunting capabilities even further, by using a gene chip produced by Illumina, a genetic toolkit company in California, which incorporates markers from both the HapMap and original Quebec map, for a total of more than 350,000 markers per individual. Studies that initially took scientists three months now take just a week, says John Hooper, president and chief executive officer of Genizon.

3 months is 13.5 weeks versus 1 week for the same test run now. So in just a few years they sped up their testing by over an order of magnitude. I keep running into reports where researchers mention their experiments use new technology that has sped up their experiments by orders of magnitude and that they can now collect more data and more quickly. These boosts in productivity are going to produce discoveries and effective treatments for diseases which have long been incurable.

To uncover genetic variants that increase risk for a disease, scientists start with DNA from patients and use the gene chips to sift through the markers, searching for particular variants that appear more frequently in people with the disease. Once scientists have identified genes of interest, they create a map of the interacting genes.
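The core of that sifting step is a simple frequency comparison between patients and healthy controls. Here is a minimal, purely illustrative Python sketch of the idea; the marker names, frequencies, and threshold are invented, not Genizon data, and real studies apply formal statistical tests across hundreds of thousands of markers:

# Toy case/control comparison: flag markers that appear much more often in
# patients than in healthy controls. All numbers here are invented.

# fraction of people carrying each marker, in cases vs. controls
case_freq = {"marker_A": 0.41, "marker_B": 0.12, "marker_C": 0.33}
control_freq = {"marker_A": 0.18, "marker_B": 0.11, "marker_C": 0.30}

def interesting_markers(cases, controls, min_difference=0.10):
    """Return markers enriched in cases by at least min_difference."""
    hits = []
    for marker, freq in cases.items():
        if freq - controls.get(marker, 0.0) >= min_difference:
            hits.append(marker)
    return hits

print(interesting_markers(case_freq, control_freq))  # ['marker_A']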

Among the diseases they are looking at for genetic risk factors: Crohn's disease, asthma, schizophrenia, baldness, longevity, Attention Deficit Hyperactivity Disorder, Type II diabetes, osteoporosis, and macular degeneration.

The Genizon web site also has an interesting table of genetically homogeneous founder populations around the world. I had no idea that north east Finland has a genetically homogeneous population only about 15 to 20 generations old. The people of Newfoundland are also fairly genetically homogeneous, and Costa Ricans from the central valley of Costa Rica are homogeneous and of recent enough vintage to make them good genetic study candidates.

The costs of DNA studies will drop by more orders of magnitude in the next couple of decades. Scientists will identify the vast bulk of genetic variations that affect our disease risk, physical performance, intelligence, behavior, and other characteristics.

By Randall Parker    2006 April 30 12:05 PM   Entry Permalink | Comments (2)
2006 April 06 Thursday
Nanopore Design For Cheap DNA Sequencing Identified

A nanopore that has electrodes that can measure the electrical differences between genetic code letters could enable very cheap and fast DNA sequencing.

A team led by physicists at the University of California, San Diego has shown the feasibility of a fast, inexpensive technique to sequence DNA as it passes through tiny pores. The advance brings personalized, genome-based medicine closer to reality.

The paper, published in the April issue of the journal Nano Letters, describes a method to sequence a human genome in a matter of hours at a potentially low cost, by measuring the electrical perturbations generated by a single strand of DNA as it passes through a pore more than a thousand times smaller than the diameter of a human hair. Because sequencing a person’s genome would take several months and millions of dollars with current DNA sequencing technology, the researchers say that the new method has the potential to usher in a revolution in medicine.

“Current DNA sequencing methods are too slow and expensive for it to be realistic to sequence people’s genomes to tailor medical treatments for each individual,” said Massimiliano Di Ventra, an associate professor of physics at UCSD who directed the project. “The practical implementation of our approach could make the dream of personalizing medicine according to a person’s unique genetic makeup a reality.”

The physicists used mathematical calculations and computer modeling of the motions and electrical fluctuations of DNA molecules to determine how to distinguish each of the four different bases (A, G, C, T) that constitute a strand of DNA. They based their calculations on a pore about a nanometer in diameter made from silicon nitride—a material that is easy to work with and commonly used in nanostructures—surrounded by two pairs of tiny gold electrodes. The electrodes would record the electrical current perpendicular to the DNA strand as the DNA passed through the pore. Because each DNA base is structurally and chemically different, each base creates its own distinct electronic signature.
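The base-calling idea in that description comes down to matching a measured electrical signature against the closest known signature for A, G, C or T. A toy Python sketch of that matching step, with invented current values rather than anything from the UCSD simulations:

# Toy base caller for a transverse-current nanopore reading. Each base is
# assumed to produce a characteristic current level, and a measurement is
# assigned to whichever reference level it sits closest to.
# All numbers here are invented for illustration.

REFERENCE_CURRENTS = {"A": 1.0, "G": 1.8, "C": 0.6, "T": 1.3}  # arbitrary units

def call_base(measured_current):
    """Return the base whose reference current is nearest the measurement."""
    return min(REFERENCE_CURRENTS,
               key=lambda base: abs(REFERENCE_CURRENTS[base] - measured_current))

def call_sequence(measurements):
    return "".join(call_base(value) for value in measurements)

print(call_sequence([1.02, 1.75, 0.58, 1.31, 1.79]))  # AGCTG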

While these researchers have identified the needed configuration of the nanopore electrodes, they haven't yet built the device. But they say the problem might be solved soon.

The researchers caution that there are still hurdles to overcome because no one has yet made a nanopore with the required configuration of electrodes, but they think it is only a matter of time before someone successfully assembles the device. The nanopore and the electrodes have been made separately, and although it is technically challenging to bring them together, the field is advancing so rapidly that they think it should be possible in the near future.

The researchers also expect this method to have a lower error rate than the current Sanger method for sequencing DNA.

Cheap DNA sequencing will bring many benefits and changes. A massive comparison of the DNA sequences of millions of people along with recording a large quantity of other information about each person (physical shape, hair/eye/skin color, health records, IQ, education, values, preferences, etc) will lead to the fast identification of genetic variations that contribute to a huge range of human differences.

By Randall Parker    2006 April 06 10:12 PM   Entry Permalink | Comments (9)
2006 February 24 Friday
On Falling DNA Sequencing Costs

The latest 454 Life Sciences DNA sequencer might lower the cost of sequencing a complete human genome to below $10 million.

454’s Genome Sequencer 20 ($500,000) uses an on-bead sequencing-by-synthesis approach to generate some 40 million bases of raw data per four-hour run, meaning I could squeeze out a human genome in just under a month. With per-run reagent costs of $6,000, my genome would cost a mere $900,000.

But that’s just one sequence pass, and according to vice president for molecular biology Michael Egholm, “It’s simply ludicrous to say you can sequence a human genome with 1x coverage.” He suggests 8x or 15x coverage, which would boost my costs to between $7.2 million and $13.5 million, and increase my sequencing time to about a year.
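The arithmetic behind those figures is easy to reconstruct. The $900,000 single-pass price implies about 150 runs at $6,000 each (roughly 6 gigabases of raw data per pass at 40 million bases per four-hour run), and cost and time then scale linearly with coverage. A rough Python sketch under those assumptions:

# Rough reconstruction of the cost and time figures quoted above.
# The $900,000 single-pass figure implies 150 runs at $6,000 each.

cost_per_run = 6_000          # dollars of reagents per four-hour run
hours_per_run = 4
runs_per_pass = 900_000 // cost_per_run   # 150 runs for one full pass

for coverage in (1, 8, 15):
    runs = runs_per_pass * coverage
    dollars = runs * cost_per_run
    days = runs * hours_per_run / 24
    print(f"{coverage:>2}x coverage: ${dollars:,} over about {days:.0f} days")

# 1x  ->    $900,000, ~25 days (just under a month)
# 8x  ->  $7,200,000, ~200 days
# 15x -> $13,500,000, ~375 days (about a year)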

The whole article is worth reading and covers some of the academic and commercial efforts to drive down the cost of DNA sequencing.

While some rich people could afford to get their DNA sequenced now, they'd derive very little benefit from doing so. The meaning of only a small number of genetic variations is known, and those can be tested for individually.

Harvard professor George Church may have found a way to drive DNA sequencing costs down by a factor of 5 or more.

Church knew that a key to making gene sequencing fast and affordable lay in miniaturizing the process. He coats a slide with millions of microscopic beads, each impregnated with chemicals that light up when exposed to DNA base pairs. A digital camera fitted to a microscope photographs the pattern, and software decodes the results. His process is more than 250 times faster than conventional technology. In short, rather than take seven years to sequence the human genome, Church's machines can theoretically do it in less than a week. He says "theoretically" because he and his students have only decoded the DNA of E. coli, which is 1/1000th the size of the human genome. Based on his current costs, he thinks he could decode a human genome for about $2.2 million.

On Church's $2.2 million estimate see my August 2005 post "Harvard Group Lowers DNA Sequencing Cost Order Of Magnitude".

The first human genome cost $3 billion to sequence and costs have already fallen by over 2 orders of magnitude since then.

The PGP is an offshoot of the Human Genome Project, the massive government effort to read and put in proper sequence all 3 billion bits of human DNA. The project was completed in 2003 at about $3 billion - about $1 for each of the tiny chemical units, called bases, that make up the human genome.

Since then, better technology and greater efficiency have brought down the cost to $10 million - less than a penny per base - for a complete DNA sequence, according to Jeffery Schloss, the director of technology development at the National Human Genome Research Institute, a federal agency in Bethesda, Md.

The institute is financing a campaign to cut the cost of sequencing a genome to $10,000 by 2009 and drive it all the way down to $1,000 by 2014. An affordable $1,000 genome is biology’s next dream.
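The per-base figures quoted above are straightforward division against the roughly 3 billion bases of the human genome. A quick Python check:

# Cost per base at the milestones mentioned above (approximate).
GENOME_BASES = 3_000_000_000

milestones = [
    ("Human Genome Project (completed 2003)", 3_000_000_000),
    ("cost circa early 2006", 10_000_000),
    ("NHGRI target for 2009", 10_000),
    ("NHGRI target for 2014", 1_000),
]

for label, total_cost in milestones:
    cents_per_base = 100 * total_cost / GENOME_BASES
    print(f"{label}: ${total_cost:,} total, {cents_per_base:.5f} cents per base")

# $3 billion works out to about $1 (100 cents) per base, and $10 million to
# about a third of a cent per base, matching the "less than a penny" figure.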

The cost of sequencing fell steadily during the first sequencing of the human genome, so most of that $3 billion was spent at a higher cost per base than had been achieved by the time the sequence was completed.

Church is trying to recruit people into his Personal Genome Project at the Harvard Medical School where the recruits would allow comparison of their DNA sequences with lots of other information about them. We need this sort of research and on a massive scale with tens of thousands or even hundreds of thousands of enrollees. People should be enrolled long before DNA sequencing becomes cheap because some connections between genes and other characteristics will require longitudinal studies (i.e. studies that follow people for years and even decades).

Update: I see that we are on the cusp of a big change as a result of dropping DNA sequencing costs. 10 or 15 years from now people will use genetic analyses to formulate custom diet advice (nutritional genomics) to reduce the risk of diseases or to choose the best diet for losing weight or putting on muscle. They will surreptitiously get DNA samples from their romantic interests to decide whether the other person has a genetic profile good enough to warrant trying to marry (genetic tendency to cheat? intelligence? disease risks? violent tendencies? genetic tendencies toward laziness or conscientiousness?). Surreptitious DNA sample collection will find use for other purposes as well. Some sharp smaller employers will collect DNA from job interviewees (e.g. from a coffee cup) to analyse it for personality tendencies and approximate level of intelligence.

Drugs will get developed for specific genetic profiles. Some will get preventative genetic therapies based on their genetic profiles to avoid diseases before they get sick.

Of course, mate selection is a pretty slow way to get the genes that you desire for your offspring and most will not be able to secure genetically ideal mates. More women will turn toward using sperm donors to get exactly what they want. Eventually gene therapies on sperm, eggs, and embryos will replace much of the coming increased use of sperm donors. But my guess is there will be a 5 to 15 year period during which use of sperm donors will soar before gene therapies provide better alternatives.

By Randall Parker    2006 February 24 10:13 PM   Entry Permalink | Comments (17)
2006 January 30 Monday
X Prize For DNA Sequencing Announced

The X Prize Foundation folks have decided to offer a new prize for DNA sequencing.

The Santa Monica, Calif., foundation plans to offer a $5 million to $20 million prize to the first team that completely decodes the DNA of 100 or more people in a matter of weeks, according to foundation officials and others involved.

Such speedy gene sequencing would represent a technology breakthrough for medical research. It could launch an era of "personal" genomics in which ordinary people can learn their complete DNA code for less than the cost of a wide-screen television.

Details of the award are being worked out, and officials say they don't expect anyone to claim the prize for at least five to 10 years.

I am all for orders of magnitude faster and cheaper DNA sequencing. However, I question the granularity of this prize. I'd rather see prizes for advances in microfluidics and other component technologies, goals that can be achieved more quickly and by smaller groups. Multiple research teams could very easily make contributions that get used in the final effort to win this prize and yet not be the actual team that travels the final distance. University groups with limited resources that are working on various aspects of the larger problem, but not on pieces big enough to solve the entire problem, aren't going to be incentivized by this prize.

Still, this prize will certainly increase the attention on the DNA sequencing problem and will probably hasten the day cheap DNA sequencing technology is developed. Plus, the sort of big-money people being attracted to prizes (e.g. Google co-founder Larry Page joined the X Prize board) bodes well for the future funding of big science and technology prizes.

Prizes legitimize and promote technological goals to the public.

"Prizes change the public perception about an issue," says Peter Diamandis, founder and chairman of the X Prize Foundation in Santa Monica, Calif. People begin to believe that a problem is solvable. "The more prize money, the more the issue is seen as important by the public."

Last June, the Bill & Melinda Gates Foundation put an exclamation point after "grand challenge" when it announced one of the richest in history. The Grand Challenges for Global Health pledged $436.6 million (including $31.6 million from British and Canadian sources) toward solving some of the world's worst health problems. Preliminary funds have been granted to 43 groups attacking 14 challenges. They include: developing vaccines to prevent malaria, tuberculosis, and HIV that don't require refrigeration, needles, or multiple doses; finding new ways to stop the spread of insect-borne diseases; and developing more nutritious crops to feed the hungry.

Of course, the coolest prize with the most relevance to the eventual stopping and reversing of the aging process is the Methuselah Mouse Prize for finding ways to make laboratory mice live longer. The latter article above even quotes Methuselah Mouse Prize cofounder David Gobel, whom some of you have noticed posting in the comments section of FuturePundit posts. The general buzz from big new prize announcements helps promote each prize. The Ansari X Prize success with Burt Rutan's SpaceShipOne flight into space has relegitimized prizes as a way to accelerate technological development.

Update: Aside: Corporations ought to use internal prizes for achieving various desired technological breakthroughs. Internally, corporations are like command economies. Engineers and managers who come up with ways to save or make the company large sums of money rarely get much reward for their efforts. Companies could get a lot more innovation if they offered internal prizes for technological developments and cost-saving ideas.

By Randall Parker    2006 January 30 09:16 PM   Entry Permalink | Comments (2)
2006 January 22 Sunday
Sanger DNA Database Doubles Every 10 Months

One reason why I am optimistic that rejuvenation therapies using Strategies for Engineered Negligible Senescence can reverse the aging process within the lifetimes of most people reading this is that the rate of advance of biotechnology is increasingly resembling the rate of advance of electronic technology. The rate of accumulation of DNA sequence information in a public database is doubling every 10 months.

The Archive is 22 Terabytes in size and doubling every ten months - perhaps the largest single scientific database in Europe, if not the world.
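A ten-month doubling time compounds very quickly. A rough Python extrapolation (pure compound-growth arithmetic, not a Sanger Institute projection):

# Extrapolating a 22 terabyte archive that doubles every 10 months.
# Sustaining the trend is an assumption, not a prediction.

current_size_tb = 22.0
doubling_time_months = 10

for years in (1, 2, 5, 10):
    doublings = years * 12 / doubling_time_months
    projected = current_size_tb * 2 ** doublings
    print(f"after {years:>2} years: about {projected:,.0f} TB")

# 1 year -> ~51 TB, 2 years -> ~116 TB, 5 years -> ~1,400 TB,
# 10 years -> ~90,000 TB if the doubling rate held.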

The database is large even compared to major non-DNA computer databases.

Martin Widlake, Database Services Manager at the Wellcome Trust Sanger Institute said: "At 22 000 GB the Trace Archive is in the Top Ten UNIX databases in the world. That's not bad for a research organisation of 850 employees in the countryside just outside Cambridge."

"It is possibly the biggest single (acknowledged) scientific RDBMS database in Europe, if not the world."

All the data are freely available to the world scientific community (http://trace.ensembl.org/), as a resource to geneticists all over the globe. When a researcher is studying a disease or gene, they can download the genetic information known about the area they are studying.

Luckily, computer speeds and hard disk capacities are undergoing their own rapid doublings, so computers will probably keep up with the storage and processing needs of projects to reverse engineer and understand the genomes of humans and other species.

Ray Kurzweil's argument that the pace of technological advance will eventually accelerate to a rate we can't even comprehend (see The Singularity Is Near: When Humans Transcend Biology) seems plausible to me because of the doubling rates we see in computer speed, storage capacities, and fiber optic information transmission rates. On top of that we now have biotechnology advancing at rates that are highly analogous to the rates we've been watching in semiconductor technology for decades. Our perception of the rate of change from year to year or decade to decade up to this point does not tell us much about the rate of change 20 years from now because the rate of change is accelerating.

By Randall Parker    2006 January 22 10:21 PM   Entry Permalink | Comments (9)
2005 August 26 Friday
Public DNA Sequence Databases Reach 100 Gigabases Of Data

The big public DNA sequencing databases have hit the 100 gigabase mark.

The world’s three leading public repositories for DNA and RNA sequence information have reached 100 gigabases [100,000,000,000 bases; the ’letters’ of the genetic code] of sequence. Thanks to their data exchange policy, which has paved the way for the global exchange of many types of biological information, the three members of the International Nucleotide Sequence Database Collaboration [INSDC, www.insdc.org] – EMBL Bank [Hinxton, UK], GenBank [Bethesda, USA] and the DNA Data Bank of Japan [Mishima, Japan] all reached this milestone together.

Graham Cameron, Associate Director of EMBL’s European Bioinformatics Institute, says "This is an important milestone in the history of the nucleotide sequence databases. From the first EMBL Data Library entry made available in 1982 to today’s provision of over 55 million sequence entries from at least 200,000 different organisms, these resources have anticipated the needs of molecular biologists and addressed them – often in the face of a serious lack of resources."

Does 100,000,000,000 sequenced DNA letters sound like a lot? I find this disappointing. The human genome is estimated to be in the neighborhood of 2.9 gigabases. So 100 gigabases is only enough to represent the DNA sequences of about 34 people. Suddenly it sounds a lot less staggering.
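The arithmetic, in a couple of lines of Python (using the 2.9 gigabase estimate above):

# How far does 100 gigabases go, at roughly 2.9 gigabases per human genome?
GENOME_GB = 2.9
archive_gb = 100.0
print(f"people representable: {archive_gb / GENOME_GB:.0f}")   # about 34

# And the flip side: sequencing 100 million people, even once each with no
# extra coverage, would take about 290 million gigabases.
print(f"gigabases for 100 million people: {100_000_000 * GENOME_GB:,.0f}")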

All this information from many organisms helps scientists in many ways. However, in 5 or 10 years DNA sequencing costs will drop down to $1000 or perhaps even $100 per person. Then literally hundreds of millions of people will get their DNA sequenced and orders of magnitude more sequencing of other species will get done.

Cheap DNA sequencing would lead very quickly to the identification of DNA sequences that contribute to many disease risks, longevity, personality, intelligence, and assorted abilities and aspects of appearance. Identification of genetic variations that contribute to differences in disease risks and longevity will help guide the genetic engineering of stem cells to create stem cells which maximize longevity and health improvements.

With cheap DNA sequencing drug side effects due to genetic variations would become much more avoidable and drug development efforts would hit fewer failures in late stage testing due to harmful side effects in small portions of populations. Hence the rate of drug development would accelerate.

Money spent by government DNA sequencing projects on sequencing organisms with today's technology would be better spent on research to develop cheaper sequencing methods, though efforts already underway promise cheaper DNA sequencing in the not too distant future. Check out previous posts in my Biotech Advance Rates category for reports on efforts to cut DNA sequencing costs by orders of magnitude.

By Randall Parker    2005 August 26 03:45 PM   Entry Permalink | Comments (1)
2005 August 05 Friday
Harvard Group Lowers DNA Sequencing Cost Order Of Magnitude

Biotechnology increasingly resembles electronics technology as costs fall by multiples.

BOSTON-August 4, 2005-The theoretical price of having one's personal genome sequenced just fell from the prohibitive $20 million dollars to about $2.2 million, and the goal is to reduce the amount further--to about $1,000--to make individualized prevention and treatment realistic.

The sharp drop is due to a new DNA sequencing technology developed by Harvard Medical School (HMS) researchers Jay Shendure, Gregory Porreca, George Church, and their colleagues, reported on August 4 in the online edition of Science. The team sequenced the E. coli bacterial genome at a fraction of the cost of conventional sequencing using off-the-shelf instruments and chemical reagents. Their technology appears to be even more accurate and less costly than a commercial DNA decoding technology reported earlier this week.

The commercial DNA decoding technology they are referring to is from 454 Life Sciences Corporation and you can read about it in my post "New Tool Speeds Up DNA Sequencing By 100 Times". Whether the Harvard or 454 Life Sciences approach can go further in lowering DNA sequencing costs in the long run remains to be seen. But these are not the only two efforts aimed at lowering DNA sequencing costs and another company or academic group might yet bypass both of them.

The Church group built their sequencer using an assembly of existing technologies. How creative.

The Church group's technology is based on converting a widely available and relatively inexpensive microscope with a digital camera for use in a rapid automated sequencing process that does not involve the much slower electrophoresis, a mainstay of the conventional Sanger sequencing method.

"Meeting the challenge of the $1,000 human genome requires a significant paradigm shift in our underlying approach to the DNA polymer," write the Harvard scientists.

The new technique calls for replicating thousands of DNA fragments attached to one-micron beads, allowing for high signal density in a small area that is still large enough to be resolved through inexpensive optics. One of four fluorescent dyes corresponding to the four DNA bases binds at a specific location on the genetic sequence, depending on which DNA base is present. The fragment then shines with one of the four colors, revealing the identity of the base. Recording the color data from multiple passes over the same sequences, a camera documents the results and routes them to computers that reinterpret the data as a linear sequence of base pairs.

In their study, the researchers matched the sequence information against a reference genome, finding genetic variation in the bacterial DNA that had evolved in the lab.

"These developments give the feeling that improvements are coming very quickly," said HMS professor of genetics Church, who also heads the Lipper Center for Computational Genetics, MIT-Harvard DOE Genomes to Life Center, and the National Institutes of Health (NIH) Center for Excellence in Genomic Science.

"The cost of $1,000 for a human genome should allow prioritization of detailed diagnostics and therapeutics, as is already happening with cancer," Church said.

The Church lab is a member of the genome sequencing technology development project of the NIH-National Human Genome Research Institute.
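The decoding step described above, stripped to its essence, is a lookup from dye color to base repeated over many imaging passes. A toy Python sketch; the color-to-base assignments and the observations are invented for illustration, not the actual dye chemistry used by the Church group:

# Toy version of the decoding step: a camera records which of four dye colors
# lights up at each bead on each pass, and software turns the color calls into
# a base sequence. The color-to-base mapping below is invented.

DYE_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}

def decode_bead(color_calls_per_cycle):
    """Convert one bead's per-cycle dye colors into a DNA sequence string."""
    return "".join(DYE_TO_BASE[color] for color in color_calls_per_cycle)

# One imaginary bead observed over six sequencing cycles:
observed = ["green", "red", "red", "yellow", "blue", "green"]
print(decode_bead(observed))  # CAATGC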

I predict that within 10 years most of us who survive the coming killer flu pandemic will know our primary DNA sequences. Many debates about the degree of heritability of various human characteristics - especially cognitive characteristics - will then be resolved very quickly. The knowledge will spawn many uses, and not just medical uses (valuable though they will be). Social science as a field will become orders of magnitude more productive as genetics becomes a much better controlled variable in social science studies.

I'm also looking forward to improvements in stem cell therapy development when many genetic sequences that affect longevity are identified. We'll be able to use stem cells not only to build replacement parts but also to build longer lasting replacement parts.

Thanks to Brock Cusick for the heads-up.

By Randall Parker    2005 August 05 09:43 AM   Entry Permalink | Comments (4)
2005 August 01 Monday
New Tool Speeds Up DNA Sequencing By 100 Times

DNA sequencing keeps getting faster and cheaper.

454 Life Sciences Corporation, a majority-owned subsidiary of CuraGen Corporation , today announced the publication of a new genome sequencing technique 100 times faster than previous technologies. This is the first new technology for genome sequencing to be developed and commercialized since Sanger-based DNA sequencing. 454's proprietary technology is described in the paper "Genome sequencing in microfabricated high-density picoliter reactors," in the July 31, 2005, online issue of Nature, with the print edition of the paper to follow later in the year. The technique was demonstrated by repeatedly sequencing the bacterial genome Mycoplasma genitalium in four hours, with up to and exceeding 99.99% accuracy. With a 100-fold increase in throughput over current sequencing technology, 454 Life Sciences' instrument system opens up new uses for sequencing, including personalized medicine and diagnostics, oncology research, understanding third world diseases, and providing fast responses to bioterrorism threats and diagnostics.

"It is clear that sequencing technology needs to continue to become smaller, faster and less expensive in order to fulfill the promise of personalized medicine," said Francis S. Collins, M.D., Ph.D., Director of the National Human Genome Research Institute. "We are excited that our support of sequencing technology development is yielding results and we look forward to the applications of such innovative technologies in biomedical research and, ultimately, the clinic."

In May 2004, the NHGRI awarded a grant to 454 Life Sciences to help fund the scale-up of 454 Life Sciences' technique toward the sequencing of larger genomes, starting with bacterial genomes, and to develop the Company's ultraminiaturized technology as a method to sequence routinely individual human genomes. The scalable, highly parallel system described in this article sequenced 25 million base pairs, at 99% or better accuracy, in a single four hour run. The researchers illustrated the technique by sequencing the genome of the Mycoplasma genitalium bacterium.

"Much like the personal computer opened up computing to a larger audience, this work will enable the widespread use of sequencing in a number of fields, and ultimately place machines in your doctor's office," stated Jonathan Rothberg, Ph.D., senior author and 454 Life Sciences' Founder and Chairman of the Board of Directors. "This sequencing technique, leveraging the power of microfabrication, is 100 times faster than standard sequencing methods at the start of its development cycle. We expect, as with computers, for it to get more powerful and cheaper each year, as we continue to advance and miniaturize the technology."

To repeat a frequent FuturePundit theme: Biotechnology is going to advance at the rate of computer technology because biotechnology is shifting toward the use of very small scale devices. The current cost of human DNA sequencing is in the tens of millions of dollars per person. But that high cost won't last for much longer.

The half million dollar sequencing machine uses 1.6 million tiny reaction wells in parallel.

The novel sequencing technique, designed by Jonathan M. Rothberg of 454 Life Sciences Corp. in Branford, Conn., and his colleagues, uses tiny fiber-optic reaction vessels that measure just 55 micrometers deep and 50 micrometers across--a slide containing 1.6 million wells takes up just 60 square millimeters.

The 25 million base pairs that this machine sequenced in 4 hours should be compared to the approximately 3 billion base pairs in the human genome. If the machine's ability to process 25 million base pairs in 4 hours scales up to larger genomes then the human genome would take 480 hours or 20 days on this machine. But there are additional challenges in sequencing a large genome such as breaking it down into doable pieces. Also, the genome has to be read several times to correct for errors.
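Here is that scale-up arithmetic as a short Python sketch (a naive extrapolation that ignores the sample preparation and repeat-coverage issues just mentioned):

# Naive scale-up of the 454 run rate to a human-sized genome.

bases_per_run = 25_000_000
hours_per_run = 4
human_genome = 3_000_000_000

runs = human_genome / bases_per_run          # 120 runs
hours = runs * hours_per_run                 # 480 hours
print(f"{runs:.0f} runs, {hours:.0f} hours, {hours / 24:.0f} days")  # 120, 480, 20

# With the several-fold repeat coverage needed to correct errors, multiply the
# time (and reagent cost) accordingly.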

This article has details on the process used by this sequencing instrument.

The 454 approach involves shearing the starting material DNA using a nebulizer. Rothberg explains: “[We] nebulize the DNA into little fragments, shake it in oil and water, so each DNA fragment goes into a separate water droplet. So instead of bacteria, we separate the DNA into drops. Then we do PCR, so every drop has 10 million copies. Then we put in a bead, drive the DNA to the bead, so instead of the cloning and robots, one person can prepare any genome.”

The DNA-covered beads are loaded into the microscopic hexagonal wells of a fiber-optic slide, which contains about 1.6 million wells. In 454’s benchtop instrument, chemicals and reagents flow over the beads in the wells. Solutions containing each nucleotide are applied in a repetitive cycle, in the order T-C-A-G. Excess reagent is washed away using a nuclease, before a fresh solution is applied. This cycle is repeated dozens of times.
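To make the flow-order idea concrete, here is a toy Python model of that cycle: nucleotide solutions are flowed in the repeating T-C-A-G order, and whenever the flowed nucleotide matches the next unread base(s) of the template those bases incorporate and give off a signal. This is only a cartoon of the chemistry, with an invented six-base template:

# Toy model of reading a DNA fragment by flowing nucleotides in T-C-A-G order.

FLOW_ORDER = "TCAG"

def flow_signals(template, cycles=8):
    """Return (flowed_nucleotide, number_incorporated) for each flow."""
    position = 0
    signals = []
    for _ in range(cycles):
        for nucleotide in FLOW_ORDER:
            count = 0
            while position < len(template) and template[position] == nucleotide:
                count += 1
                position += 1
            signals.append((nucleotide, count))
    return signals

for nucleotide, count in flow_signals("TTAGGC", cycles=2):
    if count:
        print(nucleotide, count)
# Prints: T 2, A 1, G 2, C 1 -- recovering the template TTAGGC in flow order.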

The researchers see their technology following a similar pattern to the development of integrated circuits, which have sped up at the rate predicted by Intel co-founder Gordon Moore with his famous Moore's Law.

“Future increases in throughput, and a concomitant reduction in cost per base, may come from the continued miniaturization of the fibre-optic reactors, allowing more sequence to be produced per unit area – a scaling characteristic similar to that which enabled the prediction of significant improvements in the integrated circuit at the start of its development cycle.”

Future iterations of their design will increase the level of parallelization while at the same time keeping costs the same per instrument or even lowering costs per instrument. So these folks have an approach that will drive down DNA sequencing costs by orders of magnitude.

Rothberg expects individual DNA sequencing for medical purposes once DNA sequencing costs fall to $20,000.

Jonathan Rothberg, board chairman of 454 Life Sciences, said the company was already able to decode DNA 400 units at a time in test machines. It was working toward sequencing a human genome for $100,000, and if costs could be further reduced to $20,000 the sequencing of individual genomes would be medically worthwhile, Dr. Rothberg said.

Another very important application of cheap DNA sequencing technology which is rarely mentioned is in social sciences. Cheap DNA sequencing will allow controlling for genetic influences on behavior in social science experiments. Most existing social science research results will be discredited by experiments that control for genetic influences. The excessive assumption of environmental influences on human behaviors and abilities will be discredited. This will lead to the disproof of key assumptions underlying beliefs of factions on both the political Left and the political Right.

By Randall Parker    2005 August 01 10:14 AM   Entry Permalink | Comments (16)
2005 May 05 Thursday
Helicos Biosciences May Offer $5000 DNA Sequencing By 2007

Helicos Biosciences, a venture capital start-up founded in Cambridge, Massachusetts in 2003, claims that by 2007 it will be selling a machine that can sequence a person's genome for $5000.

Helicos’s first commercial sequencing machines will be ready for sale by the end of 2006 or early 2007, says president and CEO Stan Lapidus.

Get your DNA sequenced in 3 days for $5000.

When Helicos’s commercial machine is released, says Lapidus, it will sequence a whole genome start to finish in three days and for a cost of $5,000.

Currently the cost is in the tens of millions of dollars. If Helicos achieves its goal then the price of DNA sequencing will drop by more than three orders of magnitude.

Helicos has licensed technology developed by then CalTech biophysicist Stephen Quake's group (Quake is now at Stanford Medical School). Quake's group published a 2003 paper in the Proceedings of the National Academy of Sciences that is probably what led to the founding of Helicos. Here is the abstract which describes a way to read individual bases (letters in the DNA alphabet) from a single strand of DNA.

The completion of the human genome draft has taken several years and is only the beginning of a period in which large amounts of DNA and RNA sequence information will be required from many individuals and species. Conventional sequencing technology has limitations in cost, speed, and sensitivity, with the result that the demand for sequence information far outstrips current capacity. There have been several proposals to address these issues by developing the ability to sequence single DNA molecules, but none have been experimentally demonstrated. Here we report the use of DNA polymerase to obtain sequence information from single DNA molecules by using fluorescence microscopy. We monitored repeated incorporation of fluorescently labeled nucleotides into individual DNA strands with single base resolution, allowing the determination of sequence fingerprints up to 5 bp in length. These experiments show that one can study the activity of DNA polymerase at the single molecule level with single base resolution and a high degree of parallelization, thus providing the foundation for a practical single molecule sequencing technology.

Solexa, which is mentioned in the original article above as planning to get to market with a DNA sequencing machine before Helicos, is also pursuing a technology for reading individual strands of DNA.

Helicos BioSciences, a tiny company in Cambridge, Mass., is beginning an ambitious effort to sequence single molecules of DNA by running them through microscopically small channels. (Other techniques generally require billions upon billions of copies of the target DNA.) So is Solexa, a U.K. company whose technique involves attaching stretches of a single DNA molecule to the surface of a chip and analyzing them via laser light and fluorescent tags that identify particular DNA "letter" sequences.

Incredibly cheap DNA sequencing will be the occasion for a massive biomedical and social science project to compare DNA sequences and large amounts of biomedical, behavioral, and other information across millions of people. By comparing DNA sequences alongside large quantities of detailed information we will be able to find genetic variations for general and specialized intelligence, personality, disease risks, criminal records, career paths, other behavioral tendencies, esthetic preferences, and numerous other human characteristics.

The DNA sequence comparisons combined with detailed comparisons of other characteristics of individuals will produce results that will shatter the politically correct beliefs that now dominate academic social science. The political Left will suffer a greater undermining of their beliefs than the Right. But conservatives and libertarians will also find many of their beliefs challenged by science. Many religious beliefs about human nature will also be challenged by contradictory evidence found in our DNA. Humans will come out looking far more determined by nature. If Helicos achieves their 2007 goal then by 2010 the political debate in America, Europe, and in many other parts of the world will be irrevocably changed.

Thanks to Brock Cusick for the tip on Helicos.

By Randall Parker    2005 May 05 09:22 AM   Entry Permalink | Comments (26)
2005 April 28 Thursday
Leroy Hood Sees Great Advances In Biomedical Testing Devices

Leroy Hood of the Institute for Systems Biology says that within 5 to 8 years nanotech-based devices will be able to make thousands of measurements from a person's blood sample.

"We will over the next five to eight years have a measurement device, a nanotechnology device, that can make thousands of measurements very rapidly and very inexpensively," he said, referring to blood protein analysis.

Patterns in blood proteins and other compounds will be used to ascertain the status of every organ in the body.

"Each human organ has, through the blood, a unique molecular fingerprint that reports the status of that organ. Hence, if we can read these blood molecular fingerprints, we will have the capacity to assess health and diseases," he said.

Back around 1980 or 1981, while Hood was still at Cal Tech (i.e. before Bill Gates put up a lot of money to get him to move to the University of Washington), he developed the first automated DNA sequencer using a mass spectrometer that was originally developed for a Mars probe (Mariner, I think). I saw Hood deliver a talk about this device at that time when he visited a different university. One thing stood out in my mind: the instrument was so sensitive that he had a lab tech working full time taking the purest available commercial grade reagents and purifying them further to make them pure enough for this instrument. Yet the instrument was an absolute marvel compared to the more manual methods of DNA sequencing then in use. Well, since then DNA sequencing has become several orders of magnitude cheaper and faster, and in the next couple of decades it will most certainly become still several more orders of magnitude cheaper and easier. So will a larger array of other biological and medical tests.

Testing costs will plummet in the next decade and the range of what is testable will expand by orders of magnitude. Within 20 years (and probably much sooner for people managing some chronic diseases) we'll have minilabs embedded in us measuring our blood that will be readable by radio signals. We'll also be able to spit and breathe into devices in our bathrooms that will quickly analyse for signs of hundreds and maybe even thousands of illnesses. Even toilets will eventually have embedded disease detection sensors.

Hood also foresees a coming shift in medicine from reacting to diseases to predicting them before they happen so that they can be prevented.

Hood predicts that in the next 10 to 20 years, systems biology will provide two breakthroughs: First, it will allow physicians to predict an individual's health makeup -- his genetic predispositions and other key indicators that might make him healthy or sick. Second, it will provide powerful new tools for preventing disease.

"We'll move from a mode of medicine that's largely reactive to one that's predictive and preventive," he says.

Diseases will be detected at much earlier stages. This will not always help. Some known diseases exist in all adults at very early stages and yet we currently can't do anything about them. For example, most middle aged people already have cancerous cells in their bodies. Really, I'm not making this up. We have cancer cells that are stuck in small areas because they have not yet mutated to start secreting compounds which cause blood vessels to grow (pro-angiogenesis compounds as distinct from anti-angiogenesis compounds which are used against some forms of cancer). So our cancers are stuck in half millimeter nodes between capillaries and have reached the limits of their ability to grow given the amount of food and oxygen they can acquire from existing blood vessels nearby.

The mutational events that allow early stage cancers to start secreting angiogenesis enhancing compounds are probably impossible to predict. Too much randomness is involved in mutation to allow precise predictions to be made. Though some genetic sequences will no doubt be found that make such mutations more or less likely to happen. What we need at this stage is the ability to kill those early stage cancers that sit there for decades waiting for a mutation that will free them from their food limits and set them off growing. We also need the ability to kill senescent cells that secrete compounds which encourage the growth of cancer cells (and that is not the only bad thing senescent cells do btw).

Still, precise disease prediction will be feasible for many diseases. For example, I would expect disease prediction to be much more feasible for osteoarthritis, heart disease, kidney disease, and other ailments that strike once sufficient damage has accumulated over time. Also, predictive capabilities will become more useful as our toolbox of treatments expands. Why wait for a knee to become painful and severely damaged if we can detect the problem years earlier and send in stem cells to do repairs before repair becomes much more difficult?

Disease prediction will also provide much greater incentive for changes in behavior. If your doctor can tell you that you will definitely get kidney failure in 12 to 15 years or heart disease in 20 to 25 years if you do not change your ways you'll have a much greater incentive to shape up and take blood pressure medicine or cholesterol lowering drugs or to improve your diet and get more exercise. Embedded sensors could even be combined with your PDA or wrist watch to warn you when your stress level from lousy food, lack of sleep, or lack of exercise has gotten high enough to accelerate your damage accumulation rate above some level that you decide is acceptable.

By Randall Parker    2005 April 28 09:56 PM   Entry Permalink | Comments (5)
2005 April 04 Monday
PCSK9 Gene New Target For Cholesterol Lowering Drugs

The gene for the protein PCSK9 is a promising new target for the development of a new class of cholesterol lowering drugs.

DALLAS – March 29, 2005 – Mice lacking a key protein involved in cholesterol regulation have low-density lipoprotein, or "bad" cholesterol, levels more than 50 percent lower than normal mice, and researchers suggest that inhibiting the same protein in humans could lead to new cholesterol-lowering drugs.

In a study to be published in the Proceedings of the National Academy of Sciences and available online this week, researchers at UT Southwestern Medical Center deleted the Pcsk9 gene in mice. The gene, present in both mice and humans, makes the PCSK9 protein, which normally gets rid of receptors that latch onto LDL cholesterol in the liver. Without this degrading protein, the mice had more LDL receptors and were thus able to take up more LDL cholesterol from their blood.

"The expression of LDL receptors is the primary mechanism by which humans lower LDL cholesterol in the blood," said Dr. Jay Horton, associate professor of internal medicine and molecular genetics and senior author of the study. "This research shows that in mice, deleting the PCSK9 protein results in an increase in LDL receptors and a significant lowering of LDL cholesterol."

The results of this study illustrate how new drug development begins. When a protein is found to play a key role in regulating some part of metabolism that contributes to disease development or disease prevention, the pharmaceutical companies suddenly have a new target for drug development. Pharmaceutical firms may well react to this report by screening tens of thousands of chemical compounds to look for ones that bind to the PCSK9 protein. Or, more expensively, they could grow cells that express PCSK9 in culture and then introduce compounds to see whether those compounds increase or decrease the amount of PCSK9 found in the cells.
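Conceptually such a screen is a big filter over assay results. A cartoon Python sketch of that workflow; the compound names, readouts, and threshold are all invented for illustration:

# Cartoon of the screening workflow described above: run an assay on many
# compounds and keep the ones whose readout suggests they inhibit PCSK9.

# Assay readout: fraction of normal PCSK9 activity remaining after treatment.
assay_results = {
    "compound_001": 0.97,
    "compound_002": 0.22,   # strong apparent inhibitor
    "compound_003": 0.85,
    "compound_004": 0.48,
}

def pick_hits(results, max_remaining_activity=0.5):
    """Return compounds that knock PCSK9 activity below the threshold."""
    return sorted(name for name, activity in results.items()
                  if activity <= max_remaining_activity)

print(pick_hits(assay_results))  # ['compound_002', 'compound_004']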

Another important thing to note about this study: The investigation relied on the ability to produce a mouse strain which has the PCSK9 gene deleted or disabled. The techniques used to knock out genes and produce special strains of mice are incredibly valuable and have sped up the process of identifying what functional purposes are served by the tens of thousands of genes shared by humans and mice.

It is not hard to imagine other ways that genes could be modified in mice to study gene function. For example, every gene has a promoter region which controls when a gene is expressed. Special promoter sequences can be placed in front of each gene to change when it will be turned on. One can place a special promoter on a gene that will turn it on only in the presence of a particular drug. Or multiple copies of a gene could be inserted to increase the level of expression of the gene.

In the future the process of producing gene knock-out mice will become more automated. Also, implantable miniature blood sensors will allow automated and cheap checking of blood hormones, gasses, proteins, and other compounds. Work like the research done in the PCSK9 gene knock-out study will some day be done in a much more automated way that will allow many effects of many genes to be checked in parallel.

Humans have been identified who make abnormally low levels of the PCSK9 protein and who also have low LDL cholesterol.

On average, mice lacking the Pcsk9 gene, called knockout mice, had blood LDL cholesterol levels of 46 milligrams per deciliter, while wild-type mice had levels around 96 mg/dl, a difference of 52 percent.

Dr. Horton's research is consistent with findings from another recent UT Southwestern study showing that humans with mutations in their PCSK9 gene, which prevented them from making normal levels of PCSK9 protein, had LDL cholesterol levels 40 percent lower than individuals without the mutation. That study, based on data gathered from nearly 6,000 participants in the Dallas Heart Study, was published in February in Nature Genetics. The research was led by Dr. Helen Hobbs, director of the Dallas Heart Study and of the Eugene McDermott Center for Growth and Development, and Dr. Jonathan Cohen, associate professor of internal medicine.

"The lower cholesterol levels of humans with mutations in PCSK9, combined with the results of our studies in mice, suggest that variations in the levels of the PCSK9 protein significantly affect blood cholesterol levels, and compounds that inhibit this protein may be useful for the treatment of high cholesterol," Dr. Horton said.

The latter study on humans illustrates another way that proteins and genes will be identified for drug development: genetically compare humans who have a condition or a disease with humans who do not. See if there are any consistent genetic differences between the groups. Advances that lower the cost of DNA sequencing and other genotyping technologies (e.g. single nucleotide polymorphism testing using gene chips) will eventually lower the cost of doing human genetic comparison studies by orders of magnitude. This too will accelerate the identification of targets for drug development.

Advances that accelerate the rate of identification of functions of genes will produce larger numbers of targets for drug development. Further into the future the identification of the purposes of various genes will provide targets for the development of gene therapies as well. Why take a drug for decades to keep your cholesterol low when you will be able to get a gene therapy that will modify your PCSK9 gene to lower your LDL cholesterol to a range optimal for long term vascular health?

By Randall Parker    2005 April 04 03:23 PM   Entry Permalink | Comments (0)
2005 February 25 Friday
Purdue Researchers Develop Miniaturized Membrane Tester Chip

Purdue University researchers are reporting the demonstration of a prototype chip that allows thousands of chemical compounds to be tested simultaneously for their ability to change the operation of cell membrane pumps.

Researchers at Purdue University have built and demonstrated a prototype for a new class of miniature devices to study synthetic cell membranes in an effort to speed the discovery of new drugs for a variety of diseases, including cancer.

The researchers created a chip about one centimeter square that holds thousands of tiny vessels sitting on top of a material that contains numerous pores. This "nanoporous" material makes it possible to carry out reactions inside the vessels.

The goal is to produce "laboratories-on-a-chip" less than a half-inch square that might contain up to a million test chambers, or "reactors," each capable of screening an individual drug, said Gil Lee, the project's leader and an associate professor of chemical engineering.

"What we are reporting now is a proof of concept," said Lee, one of three researchers who wrote a paper that details new findings in the current issue (Feb. 15) of the journal Langmuir. The two other researchers are Zhigang Wang, a postdoctoral fellow at Purdue; and Richard Haasch, a research scientist at the University of Illinois at Urbana-Champaign.

The work is part of overall research being carried out by an interdisciplinary team of scientists and engineers who are members of a Center for Membrane Protein Biotechnology. The center was created at Purdue in 2003 through a grant from the Indiana 21st Century Research and Technology Fund, established by the state of Indiana to promote high-tech research and to help commercialize innovations.

The vessels discussed in the research paper are cylindrical cavities that are open at the top and sealed at the bottom with a material called alumina, which contains numerous pores measured in nanometers, or billionths of a meter.

Researchers are working to duplicate how cell membranes function on chips in order to test the potential effectiveness of new drugs to treat diseases. Membranes, which surround cells and regulate the movement of molecules into and out of the cells, contain a variety of proteins, some of which are directly responsible for cancer's ability to resist anti-tumor chemotherapy drugs. These proteins act as tiny pumps that quickly remove chemotherapy drugs from tumor cells, making the treatment less effective. Cancer cells exposed to chemotherapy drugs produce a disproportionately large number of the pumps, causing the cells to become progressively more resistant to anticancer drugs.

Engineers and scientists in the Purdue center are trying to find drugs that deactivate the pumps, which would make the chemotherapy drugs more effective. The researchers are developing synthetic cell membranes to mimic the real thing and then plan to use those membranes to create chips containing up to 1 million test chambers. Each chamber would be covered with a membrane containing the proteins, and the chambers could then be used to search for drugs that deactivate the pumps, Lee said.

Such an advanced technology could be used to quickly screen millions of untested drug compounds that exist in large pharmaceutical "libraries." The chips could dramatically increase the number of experiments that are possible with a small amount of protein.

"It's been very hard to study these proteins because they are difficult to produce in large quantities," Lee said. "The devices we have created offer the promise of making chips capable of running thousands of reactions with the same amount of protein now needed to run only about 10 reactions."

Findings being reported in the paper detail how researchers created the device with the same "microfabrication" techniques used to make computer chips. The reactors range in diameter from about 400 to 60 microns, or millionths of a meter. Human hairs are about 100 microns wide.

Note that this chip has two advantages. First, more tests can be run at once in parallel. Second, because the protein used in the test is difficult to isolate or produce, the small size of each cell on the chip matters: far less protein is needed per test, so a given amount of protein can be used to run orders of magnitude more tests.
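
A chip like this is, in effect, a massively parallel screen, and the data-analysis side of such a screen is simple to sketch. The snippet below is a hypothetical version of that step only: given a normalized pump-activity reading for each tested compound, flag the ones that look like pump inhibitors. The compound names, readings, and hit threshold are all invented for illustration and have nothing to do with the Purdue device itself.

```python
# Hypothetical screen readout: pump activity per compound, normalized so that
# 1.0 means "same as untreated control" and lower means the pump is inhibited.
readings = {
    "compound_001": 0.97,
    "compound_002": 0.31,   # looks like a strong inhibitor
    "compound_003": 1.05,
    "compound_004": 0.58,
    "compound_005": 0.44,
}

HIT_THRESHOLD = 0.5   # arbitrary cutoff chosen for this sketch

hits = {name: activity for name, activity in readings.items()
        if activity <= HIT_THRESHOLD}

for name, activity in sorted(hits.items(), key=lambda item: item[1]):
    print(f"{name}: residual pump activity {activity:.2f}")
```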

The ability to run orders of magnitude more tests in parallel will speed up many types of experiments. Notably, it will speed up screening for drugs to use against cancer, as noted above. Cancer cells develop many mutations that give them resistance to anti-cancer chemotherapies and other anti-cancer agents. Once the tools are developed to very rapidly test resistant cancers against drugs that might block their mechanisms of resistance, it becomes possible that anti-cancer drugs could be developed quickly enough that, once drug-resistant cancer cells emerge, drugs that block the mechanisms of resistance could be found before patients would otherwise be expected to die.

In a recent report on how lung cancer becomes resistant to the anti-cancer drug gefitinib (Iressa), scientists guessed that the mechanism of resistance involved a mutation in the epidermal growth factor receptor (EGFR), which is where gefitinib binds to stop cancer growth. These scientists at Harvard's Beth Israel Deaconess Medical Center (BIDMC) sequenced the EGFR gene in a patient with gefitinib-resistant cancer and, sure enough, their hunch was right. Meanwhile, people are dying from their gefitinib-resistant cancers and other drug-resistant cancers. What we need are DNA sequencing technologies and other technologies for testing cancer cells and screening drug candidates that are extremely fast. Then, once a drug-resistant mutation shows up in a patient, the identification of the mechanism of drug resistance and the search for a new drug could be done before the patient dies.

Once we have biochips and nanotechnology fast enough to adapt to cancers more quickly than the cancers can adapt to each new round of drug treatment, it will be possible to cure cancer. The shift toward using technologies from the semiconductor industry to test and manipulate biological systems is going to lead to orders-of-magnitude faster and cheaper ways to develop drugs and other disease treatments.

By Randall Parker    2005 February 25 12:28 AM   Entry Permalink | Comments (1)
2005 January 17 Monday
Proposal For Open Source Drug Development

How about open source development of drugs for Third World tropical diseases?

Only about 1% of newly developed drugs are for tropical diseases, such as African sleeping sickness and dengue fever. While patent incentives have driven commercial pharmaceutical companies to make Western health care the envy of the world, the commercial model only works if companies can sell enough patented products to cover their R&D costs and produce profits for shareholders. The model thus fails in the developing world, where few patients can afford to pay patented prices for drugs. The solution to this devastating problem, say Stephen Maurer, Arti Rai, and Andrej Sali in the premier open-access medical journal PLoS Medicine, is to adopt an "open source" approach to discovering new drugs for neglected diseases.

They call their approach the Tropical Diseases Initiative (www.tropicaldisease.org), or TDI. "We envisage TDI as a decentralized, Web-based, community-wide effort where scientists from laboratories, universities, institutes, and corporations can work together for a common cause."

What would open-source drug discovery look like? "As with current software collaborations, we propose a website where volunteers could search and annotate shared databases. Individual pages would host tasks such as searching for new targets, finding chemicals to attack known targets, and posting data from related chemistry and biology experiments. Volunteers could use chat rooms and bulletin boards to announce discoveries and debate future research directions. Over time, the most dedicated and proficient volunteers would become leaders."

The key to TDI's success, they argue, is that any discovery would be off patent. An open-source license would keep all discoveries freely available to researchers and--eventually--manufacturers. The absence of patents, and the use of volunteer staff, would contain the costs of drug development.

Reflecting the range of issues that a proposal like this must address, the three fellows making this proposal, Stephen Maurer, Arti Rai and Andrej Sali, are two lawyers and a computational biologist respectively.

You can read the full journal article for free. Note that open source drug development is becoming possible because of advances in computer and communications technology.

Ten years ago, TDI would not have been feasible. The difference today is the vastly greater size and variety of chemical, biological, and medical databases; new software; and more powerful computers. Researchers can now identify promising protein targets and small sets of chemicals, including good lead compounds, using computation alone. For example, a SARS protein similar to mRNA cap-1 methyltransferases—a class of proteins with available inhibitors—was recently identified by scanning proteins encoded by the SARS genome against proteins of known structure [9]. This discovery provides an important new target for future experimental validation and iterative lead optimization. More generally, existing projects such as the University of California at San Francisco's Tropical Disease Research Unit (San Francisco, California, United States) show that even relatively modest computing, chemistry, and biology resources can deliver compounds suitable for clinical trials [10]. Increases in computing power and improved computational tools will make these methods even more powerful in the future.

As computers and modelling software become steadily cheaper the rate of advance of biomedical research will accelerate. More work will be done in computer simulations. More collaborations across great distances will take place. Ideas and results will be shared far more rapidly.

However, I see at least one big problem with this approach: if patent royalties cannot be earned from drugs developed this way then no company will have the financial incentive to pay the hundreds of millions of dollars it would take to fund taking an open source drug through the drug approval processes of the more industrialized countries. So the drug would never be developed for First World uses. Whether the potential First World uses were identical or for a completely different purpose, the financial incentive to pay for getting a drug through First World clinical trials and regulatory hoops would be lacking.

Take, for example, artemisinin. Henry Lai and Narendra Singh at the University of Washington Department of Bioengineering have found that artemisinin, a compound used in less developed countries to treat malaria, works against cancer cells. But while preliminary research has found anti-cancer effects in animals and desperate cancer patients are taking it on their own (since it comes from an herb it is available over the counter as an herbal extract), the fact that artemisinin is unpatentable prevents it from attracting major investment from big pharma companies. It may be possible for companies to develop similar compounds that are patentable and work the same way, or to take artemisinin or one of its active forms and attach it to antibodies that target cancer cells and thereby get a patentable treatment. But the fact that artemisinin and artemether (another form of the compound) are not patentable has slowed development of this therapy for cancer.

By Randall Parker    2005 January 17 12:44 PM   Entry Permalink | Comments (8)
2005 January 15 Saturday
Rapid Gene Synthesizer Will Enable Custom Microbe Construction

A new method to synthesize long sequences of DNA lowers costs and increases speed of synthesis by orders of magnitude. (same article here)

HOUSTON, Dec. 22, 2004 – Devices the size of a pager now have greater capabilities than computers that once occupied an entire room. Similar advances are being made in the emerging field of synthetic biology at the University of Houston, now allowing researchers to inexpensively program the chemical synthesis of entire genes on a single microchip.

Xiaolian Gao, a professor in the department of biology and biochemistry at UH, works at the leading edge of this field. Her recent findings on how to mass produce multiple genes on a single chip are described in a paper titled "Accurate multiplex gene synthesis from programmable DNA microchips," appearing in the current issue of Nature, the weekly scientific journal for biological and physical sciences research.

"Synthetic genes are like a box of Lego building blocks," Gao said. "Their organization is very complex, even in simple organisms. By making programmed synthesis of genes economical, we can provide more efficient tools to aid the efforts of researchers to understand the molecular mechanisms that regulate biological systems. There are many potential biochemical and biomedical applications."

Most immediately, examples include understanding the regulation of gene function. Down the road, these efforts will improve health care, medicine and the environment at a fundamental level.

Long time FuturePundit readers have heard me argue that the rate of advance of biotechnology increasingly resembles the rate of advance of electronics technologies such as silicon chip fabrication, hard drive fabrication, and fiber optic fabrication, where gains in capacity and speed come with doubling times measured in months to years. Well, for just this one biotechnological capability - the rate at which DNA can be synthesized - the advance is two whole orders of magnitude in a single step. This rate of increase is, at least briefly, many times faster than the rates of increase of the electronics technologies.
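
To put that one-step jump into the doubling-time terms used for electronics, here is a quick back-of-the-envelope calculation. The 18-month doubling pace is just an illustrative Moore's-law-style assumption, not a measured rate for any of these industries.

```python
import math

improvement = 100.0                 # roughly two orders of magnitude in one step
doublings = math.log2(improvement)  # about 6.6 doublings
months_per_doubling = 18            # illustrative Moore's-law-style assumption

print(f"a {improvement:.0f}x gain is equivalent to {doublings:.1f} doublings")
print(f"at one doubling every {months_per_doubling} months that would normally "
      f"take about {doublings * months_per_doubling / 12:.1f} years")
```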

Using current methods, programmed synthesis of a typical gene costs thousands of dollars. Thus, the prospect of creating the most primitive of living organisms, which requires synthesis of several thousand genes, would be prohibitive, costing millions of dollars and years of time. The system developed by Gao and her partners employs digital technology similar to that used in making computer chips and thereby reduces cost and time factors drastically. Gao's group estimates that the new technology will be about one hundred times more cost- and time-efficient than current technologies.

With this discovery, Gao and her colleagues have developed a technology with the potential to make complete functioning organisms that can produce energy, neutralize toxins and make drugs and artificial genes that could eventually be used in gene therapy procedures. Gene therapy is a promising approach to the treatment of genetic disorders, debilitating neurological diseases such as Parkinson's and endocrine disorders such as diabetes. This technology may therefore yield profound benefits for human health and quality of life.

Is this a sign of more things to come? Will the rate of improvement of DNA sequencing technologies take a step forward as great as the step just made by DNA synthesis technologies? A couple of orders of magnitude decrease in DNA sequencing costs would put personal DNA sequencing within reach, at least for wealthy people, and would produce a great acceleration in the rate at which the effects of various genetic sequence variations are identified.

This advance in DNA synthesis sounds great, right? But some of you must be thinking that this technology could be used for nefarious purposes to construct dangerous pathogens such as smallpox or a massive killer influenza strain. Well, this fear is not limited to circles of educated laymen. Nicholas Wade of the New York Times reports that some scientists are concerned that fast DNA synthesis technology will make construction of dangerous pathogens too easy. (same article here and here)

"This has the potential for a revolutionary impact in the ease of synthesis of large DNA molecules," said Richard Ebright, a molecular biologist at Rutgers University with an interest in bioterrorism.

"This will permit efficient and rapid synthesis of any select agent virus genome in very short order," he added, referring to the list of dangerous pathogens and toxins that must be registered with the Centers for Disease Control and Prevention.

George Church of Harvard, one of the collaborators in the development of these machines, is so concerned about the potential danger of this technology that he would like to see the machines sold only to labs that register with the government.

Most of the really dangerous pathogens have had their DNA sequenced and published in the public domain. For example, the genomes of at least a dozen pox viruses, including smallpox, have been sequenced and published. Feel safe with the knowledge that some governments are contracting production of large amounts of smallpox vaccine? Well, growing understanding of many viruses (very likely smallpox among them) will eventually make it possible to construct virus variations with surface antigen structures different enough that existing vaccines will not provide much if any protection.

Now, of course, the problem with highly transmissible pathogens such as smallpox as bioterrorism agents is that they are likely to spread across the world and into the societies that terrorists may be seeking to protect against perceived threats from other societies in a Clash of Civilizations. Most terrorist groups are therefore likely to rule out the use of highly transmissible pathogens as terrorism weapons. A guy like Osama Bin Laden must understand that if he releases a large amount of smallpox in America then good Muslims will die when the disease inevitably spreads across international borders.

Still, it is not at all impossible that some religious fringe group could decide God has called on it to kill people all over the world because the human race has rejected some special message that all humans should recognize as obvious. Other terrorists could decide that God has told them that only the unfaithful will be felled by some pathogen.

Technologies and capabilities are needed to enable better responses to a lethal natural or human-made pandemic. We need biotechnologies that accelerate by orders of magnitude the rate at which new vaccines and drug treatments can be developed. Also, we need to develop more capabilities to "harden" society against a major pandemic in the same way that militaries harden bunkers against bombs. We need ways to reduce inter-human contacts by orders of magnitude while still allowing the bulk of the normal operations of society to continue. We need rapidly manufacturable face masks, building air filters, and other technologies that could reduce the ease of transmission of airborne pathogens.

By Randall Parker    2005 January 15 09:34 PM   Entry Permalink | Comments (9)
2004 October 14 Thursday
NHGRI Aims For $100,000 Genome Sequencing Cost In 5 Years

The US government's National Human Genome Research Institute (NHGRI) is allocating $38.4 million over the next few years to the development of cheaper DNA sequencing technologies (and FuturePundit thinks this is still too little, too late).

BETHESDA, Md., Thurs., Oct. 14, 2004 – The National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH), today announced it has awarded more than $38 million in grants to spur the development of innovative technologies designed to dramatically reduce the cost of DNA sequencing, a move aimed at broadening the applications of genomic information in medical research and health care.

NHGRI's near-term goal is to lower the cost of sequencing a mammalian-sized genome to $100,000, which would enable researchers to sequence the genomes of hundreds or even thousands of people as part of studies to identify genes that contribute to cancer, diabetes and other common diseases. Ultimately, NHGRI's vision is to cut the cost of whole-genome sequencing to $1,000 or less, which would enable the sequencing of individual genomes as part of medical care. The ability to sequence each person's genome cost-effectively could give rise to more individualized strategies for diagnosing, treating and preventing disease. Such information could enable doctors to tailor therapies to each person's unique genetic profile.

DNA sequencing costs have fallen more than 100-fold over the past decade, fueled in large part by tools, technologies and process improvements developed as part of the successful effort to sequence the human genome. However, it still costs at least $10 million to sequence 3 billion base pairs – the amount of DNA found in the genomes of humans and other mammals.

"These grants will open the door to the next generation of sequencing technologies. There are still many opportunities to reduce the cost and increase the throughput of DNA sequencing, as well as to develop smaller, faster sequencing technologies that meet a wider range of needs," said NHGRI Director Francis S. Collins, M.D., Ph.D. "Dramatic reductions in sequencing costs will lead to very different approaches to biomedical research and, eventually, will revolutionize the practice of medicine."

In the first set of grants, 11 teams will work to develop "near term" technologies that, within five years, are expected to provide the power to sequence a mammalian-sized genome for about $100,000. In the second set, seven groups will take on the longer-term challenge of developing revolutionary technologies to realize the vision of sequencing a human genome for $1,000 or less. The approaches pursued by both sets of grants have many complementary elements that integrate biochemistry, chemistry and physics with engineering to enhance the whole effort to develop the next generation of DNA sequencing and analysis technologies.

"These projects span an impressive spectrum of novel technologies – from sequencing by synthesis to nanopore technology. Many of these new approaches have shown significant promise, yet far more exploration and development are needed if these sequencing technologies are to be useful to the average researcher or physician," said Jeffery Schloss, Ph.D., NHGRI's program director for technology development. "We look forward to seeing which of these technologies fulfill their promise and achieve the quantum leaps that are needed to take DNA sequencing to the next level."

Note that to get from $10 million to $100,000 per genome in 5 years would be a 2-order-of-magnitude drop, almost as large a drop in scale as happened over the previous 10 years. But maybe the various research teams can pull it off.

Getting to the $1,000 genome requires a further 2-order-of-magnitude drop in costs. Note that the press release does not provide any indication of when that goal might be reached. If you click through and read the full press release you will notice that the bulk of the funding ($31.5 million by my calculations) is for achieving the short-term $100,000 genome goal. Much smaller amounts of money are allocated toward the development of technologies that would enable much more radical advances. This seems like a mistake to me.

In my opinion too much money has been spent on using sequencing technologies and not enough on developing new sequencing technologies. Even this $38 million is not much for development of new sequencing technologies, since on a per-year basis it amounts to well less than $20 million per year (it is hard to calculate an exact amount since some of the grants run 2 years and some run 3 years). When the federal government is spending many hundreds of millions per year (I'm too lazy to look up NHGRI's total yearly budget but this is a very small fraction of it) using sequencing technologies that are orders of magnitude more expensive than what we could have in a few years, it seems obvious to me that the money spent over the last few years on sequencing should mostly have gone to develop cheaper technologies. The focus on short-term results using current technologies is very suboptimal.
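
The arithmetic behind these complaints is worth writing down explicitly. The sketch below restates it: the number of orders of magnitude between the cost milestones, and a rough per-year grant figure under the stated assumption that the $38.4 million is spread over two to three years.

```python
import math

current_cost = 10_000_000   # dollars per mammalian genome today (per the release)
near_term_goal = 100_000
long_term_goal = 1_000

print(f"$10M -> $100K: {math.log10(current_cost / near_term_goal):.0f} orders of magnitude")
print(f"$100K -> $1K:  {math.log10(near_term_goal / long_term_goal):.0f} more orders of magnitude")

total_grants = 38_400_000   # total NHGRI award
for years in (2, 3):        # grants run roughly 2 to 3 years
    print(f"spread over {years} years: about ${total_grants / years / 1e6:.1f}M per year")
```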

Update: To put the spending for faster DNA sequencing techniques in perspective, the National Human Genome Research Institute has a total budget of almost half a billion dollars.

Mr. Chairman, I am pleased to present the President's budget request for the National Human Genome Research Institute for fiscal year 2005, a sum of $492,670,000, which reflects an increase of $13,842,000 over the FY 2004 Final Conference appropriation.

The National Institutes of Health are spending over $28 billion per year.

President Bush yesterday (February 2) sent to Congress a $28.6 billion budget request for the National Institutes of Health (NIH) in fiscal year 2005, a 2.6% increase of $729 million over the current year's funding. The National Science Foundation (NSF) would receive a 2.5% increase of around $140 million to $5.7 billion, but the Centers for Disease Control and Prevention (CDC) would be cut by 8.9% to $4.3 billion, a reduction of $408 million.

Aside: As the baby boomers begin to retire and an enormous fiscal crisis erupts I expect total NIH spending will go down, not up. More money will go toward treating the already sick with existing technologies rather than doing the scientific research and technological research that could so revolutionize medicine that people will rarely get sick.

One reason that biomedical scientists ought to get on the aging-reversal rejuvenation SENS (Strategies for Engineered Negligible Senescence) bandwagon is that when the fiscal crisis erupts medical and biological researchers need to have a rosier future achievable by research to sell to the public. Nothing less than an incredibly rosy scenario of rejuvenation and the end of most diseases will be enough of an enticement to keep the research bucks flowing and growing when the strains on the US federal budget become enormous.

By Randall Parker    2004 October 14 02:06 PM   Entry Permalink | Comments (3)
2004 August 09 Monday
Fluorescent Labelling Tracks Activity Of Multiple Genes In Single Cell

A team at UC San Diego has developed a technique in Drosophila fruit fly cells that can watch the level of gene expression for several genes at once in living cells.

“Multiplex labeling has allowed us to directly map the activation patterns of micro-RNA genes, which were hitherto undetectable,” says William McGinnis, a professor of biology at UCSD and co-principal investigator of the study. “Micro-RNAs were known to be important in development, but this is the first evidence indicating that these genes can control the embryonic body plan.”

Different colored fluorescent molecules can be used to identify transcripts from different genes in the same cell. It works even if one gene is much more active than another, because the amount of fluorescence of each color is quantified separately.

“When using the microscope to measure the fluorescence, the light is fanned out into a rainbow, and each color is read through a separate channel,” explains Bier. “That way if the light is 90 percent blue and ten percent yellow, it might look blue to the naked eye, but the microscope detects each color present.”

According to Bier, multiplex labeling fills a gap in developmental biologists’ toolkit between gene chips, which can identify several hundred gene transcripts at a time, but not their location, and methods that can reveal the identity and location of up to three gene transcripts simultaneously—though not if they are in the same cell. So far the researchers have used multiplex labeling to visualize the activity of up to seven genes at the same time, but they predict it will be possible to increase this to 50.

Newly developed, ultra-bright fluorescent molecules make the multiplex labeling technique possible. The fluorescent molecules were provided by Molecular Probes, Inc., and the company’s scientists also shared their expertise with the UCSD researchers. Developing an effective way to attach the fluorescent molecule to the RNAs complementary to the gene transcripts, and perfecting the overall labeling process were also pivotal in the development of the technique.

These researchers say additional work has to be done to adjust their technique to work in other species. They foresee its eventual use in the study of cancer tumor development and in other diseases and normal biological processes.

This research is yet another example of how biologists are developing techniques that speed up the rate at which biological systems can be studied and understood.
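
The "read each color through a separate channel" step quoted above is, at bottom, a small linear-algebra problem: model the measured signal as a weighted sum of each fluorophore's known emission spectrum and solve for the weights by least squares. The sketch below shows that idea with two invented fluorophores measured across four detection channels; the numbers are made up and the real instrument's unmixing is surely more sophisticated.

```python
import numpy as np

# Columns: reference emission spectra of two hypothetical fluorophores,
# measured across four detection channels (values invented for illustration).
reference = np.array([
    [0.70, 0.05],
    [0.25, 0.15],
    [0.04, 0.50],
    [0.01, 0.30],
])

# A "measured" spectrum that is mostly fluorophore A with a little of B.
measured = 0.9 * reference[:, 0] + 0.1 * reference[:, 1]

# Least-squares unmixing recovers how much of each label is present.
weights, *_ = np.linalg.lstsq(reference, measured, rcond=None)
print("estimated contributions:", np.round(weights, 2))   # roughly [0.9, 0.1]
```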

By Randall Parker    2004 August 09 01:48 AM   Entry Permalink | Comments (1)
2004 July 27 Tuesday
MIT Electron Microscope Offers Higher Resolution For Biological Molecules

Paul Matsudaira, a Whitehead Institute Member, professor of biology, and professor of bioengineering at MIT, has just set up a unique new electron microscope with the ability to image biological molecules at near-atomic resolution.

Deep in MIT’s labyrinthine campus, the Whitehead/MIT BioImaging Center, a collaboration launched a few years ago with seed funding from the W. M. Keck Foundation, and headed by Matsudaira, has set up its new digs. Right in the middle, sequestered in a specially designed, environmentally isolated room, is the Center’s prize possession: a $2M cryoelectron microscope, the JEOL 2200FS.

The first one of its kind in the world for biology problems, the microscope is designed to image the smallest biological molecules at near-atomic resolution, surpassing what most other microscopes can offer.

Like all electron microscopes, this one images electrons as they pass through an object. Placing the microscope in a climate-controlled room isolated from vibrations, magnetic fields, and even people—the microscope is operated remotely—helps stabilize these easily perturbed electrons, thus improving image quality. In spite of these environmental safeguards, some electrons lose energy simply by colliding with atoms, often clouding the image that the microscope detects. A built-in energy filter acts as a sort of funnel, collecting only the electrons that have not lost energy. Put another way, it only photographs electrons that are in focus. Knowing a protein’s shape is intrinsic to understanding its function, so this sort of imaging makes for more than just a pretty picture.

The energy filter makes this electron microscope unique.

Although a handful of other microscopes in the world are capable of imaging at such a resolution, the lack of an energy filter forces a reliance on computer applications to complete the images. Says Matsudaira, “This one just doesn’t have to work as hard as the others to get the same results.”
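
A crude way to see why collecting only the electrons that have not lost energy helps: simulate a stream of detected electrons, some of which scattered inelastically, and count how many survive a small energy-loss cutoff for the filtered image. Everything below is invented for illustration and is not a model of the JEOL instrument.

```python
import random

random.seed(0)
ZERO_LOSS_CUTOFF_EV = 5.0   # arbitrary cutoff for this toy example

# Simulated energy loss (in eV) for detected electrons: most lose almost
# nothing, while a minority scatter inelastically and lose tens of eV.
losses = [random.expovariate(1 / 2.0) if random.random() < 0.7
          else random.gauss(25.0, 8.0)
          for _ in range(10_000)]

kept = [loss for loss in losses if loss < ZERO_LOSS_CUTOFF_EV]
print(f"kept {len(kept)} of {len(losses)} electrons "
      f"({100 * len(kept) / len(losses):.0f}%) for the filtered image")
```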

The tools keep getting better and so the rate of bioscientific and biotechnological progress keeps on accelerating.

By Randall Parker    2004 July 27 11:22 PM   Entry Permalink | Comments (0)
2004 July 15 Thursday
Stanford Team Develops New Way To Generate Potential Drug Compounds

A team in the Pehr Harbury lab at Stanford has developed a method that may allow the automated generation of a large number of organic compounds as drug candidates through molecular breeding.

Traditionally, developing small molecules for research or drug treatments has been a painstaking enterprise. Drugs work largely by binding to a target protein and modifying or inhibiting its activity, but discovering the rare compound that hits a particular protein is like, well, finding a needle in a haystack. With a specific protein target identified, scientists typically either gather compounds from nature or synthesize artificial compounds, then test them to see whether they act on the target.

The birth of combinatorial chemistry in the early nineties promised to revolutionize this laborious process by offering a way to synthesize trillions of compounds at a time. These test tube techniques have been refined to "evolve" collections of as many as a quadrillion different proteins or nucleic acids to bind a molecular target. These techniques are called molecular breeding, because like traditional livestock and crop breeding techniques, they combine sets of genotypes over generations to produce a desired phenotype. Molecular breeding has been restricted to selecting protein or nucleic acid molecules, which have not always been the best lead compounds for drugs. Conventional synthetic organic chemistry, which has traditionally been a better source of candidate drugs, has not been amenable to this type of high throughput molecular breeding.

But this bottleneck has potentially been overcome and is described in a series of three articles by David Halpin et al. in this issue of PLoS Biology. By inventing a genetic code that acts as a blueprint for synthetic molecules, the authors show how chemical collections of nonbiological origin can be evolved. In the first article, Halpin et al. present a method for overcoming the technical challenge of using DNA to direct the chemical assembly of molecules. In the second, they demonstrate how the method works and test its efficacy by creating a synthetic library of peptides (protein fragments) and then showing that they can find the "peptide in a haystack" by identifying a molecule known to bind a particular antibody. The third paper shows how the method can support a variety of chemistry applications that could potentially synthesize all sorts of nonbiological "species." Such compounds, the authors point out, can be used for drug discovery or as molecular tools that offer researchers novel ways to disrupt cellular processes and open new windows into cell biology. While medicine has long had to cope with the evolution of drug-resistant pathogens, it may now be possible to fight fire with fire.

The first, second, and third articles are available online. All PLoS Biology articles are available to read without any cost to the reader.

Peptides (which are just short sequences of amino acids and serve as components of larger protein molecules) and DNA are hard to get into the body because they tend to get broken down before absorption. Even if they are injected into the bloodstream they stand a pretty good chance of being broken down before they reach a desired target. By contrast, many synthetic compounds can be absorbed and reach their targets more easily without getting broken down by enzymes. So the most interesting aspect of these papers is the claim (at least as far as I understand it) that this technique can be used to generate chemical compounds that are not DNA or peptides.

The punch line is in the third article.

Beyond the direct implications for synthesis of peptide–DNA conjugates, the methods described offer a general strategy for organic synthesis on unprotected DNA. Their employment can facilitate the generation of chemically diverse DNA-encoded molecular populations amenable to in vitro evolution and genetic manipulation.

The need they are trying to meet is the more rapid generation of large numbers of different compounds to test as potential antibiotics against bacteria. The Holy Grail would be a high-volume automated means of generating compounds, testing them against pathogens, and then feeding the results back into the generator to make more variations most like those that had the strongest effects against the pathogens. The hope is that when a new drug-resistant pathogen pops up, sheer brute force would allow so many compounds to be tried against it so rapidly that in a relatively short period of time antibiotics effective against the new strain would be identified.
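
That generate-test-feed-back loop is essentially a genetic algorithm. The toy sketch below runs such a loop over strings standing in for DNA-encoded compounds, with an invented scoring function in place of a real binding or pathogen-inhibition assay; it shows only the shape of the selection-and-variation cycle, not any chemistry from the papers.

```python
import random

random.seed(1)
ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"   # stands in for an unknown optimal binder

def score(candidate):
    """Invented fitness: fraction of positions matching the hidden target."""
    return sum(a == b for a, b in zip(candidate, TARGET)) / len(TARGET)

def mutate(candidate, rate=0.1):
    """Randomly change each position with the given probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in candidate)

# Start from random "compounds", then repeat: score, keep the best, vary them.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(30):
    population.sort(key=score, reverse=True)
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=score)
print(f"best after 30 generations: {best}  score = {score(best):.2f}")
```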

By Randall Parker    2004 July 15 04:46 PM   Entry Permalink | Comments (2)
2004 July 13 Tuesday
CD ELISA Medical Test Achieves First Success

A test commonly done in medical testing and in research may some day be done more quickly and cheaply on the surface of compact discs.

COLUMBUS, Ohio – Ohio State University engineers and their colleagues have successfully automated a particular medical test on a compact disc (CD) for the first time -- and in a fraction of the normal time required using conventional equipment.

The ELISA biochemical test -- one of the most widely used clinical, food safety, and environmental tests -- normally takes hours or even days to perform manually. Using a specially designed CD, engineers performed the test automatically, and in only one hour.

The patent-pending technology involves mixing chemicals inside tiny wells carved into the CD surface. The spinning of the CD activates the tests.

In a recent issue of the journal Analytical Chemistry, the engineers report that the CD successfully detected a sample of rat antibody -- a standard laboratory test -- using only one-tenth the usual amount of chemicals.

This first demonstration paves the way for CDs to be used to quickly detect food-borne pathogens and toxins, said L. James Lee, professor of chemical and biomolecular engineering at Ohio State. The same technology could one day test for human maladies such as cancer and HIV, using a very small cell sample or a single drop of blood.

Lee estimated that the first commercial application of the concept is at least two years away.

“This study shows that the technology is very promising, but there are challenges to overcome,” he said. “We have been working on designing special valves and other features inside the CD, and better techniques for controlling the chemical reactions.”

“When we work on the micro-scale, we can perform tests faster and using less material, but the test also becomes very sensitive,” he explained. As chemicals flow through the narrow channels and reservoirs carved in the CD, interactions between individual molecules become very important, and these can affect the test results.

These scientists are working on automating the ELISA test which is a very widely used type of biological test.

ELISA, short for enzyme linked immunosorbent assay, is normally conducted in much larger reservoirs inside a microtiter plate -- a palm-sized plastic grid that resembles an ice cube tray.

Microtiter plates are standard equipment in chemical laboratories, and ELISA testing is a $10-billion-per-year industry. It is the most common test for HIV. Still, the test is tedious and labor-intensive, in part because of the difficulty in mixing chemicals thoroughly enough to get consistent results.

“Everyone working in the life sciences labs would fall in love with this revolutionary CD system for ELISA because it's easier, faster and cheaper to use,” said Shang-Tian Yang, professor of chemical and biomolecular engineering at Ohio State and collaborator on the project. Yang and Lee are founding a company to commercialize the CD technology. Until then, product development is being handled by Bioprocessing Innovative Company, Inc., a company in which Yang is a part owner.

Automated techniques that scale down the size of the devices that do the testing make tests easier, faster, and cheaper to do while at the same time making the tests more sensitive. This will speed up the rate of scientific progress while also lowering the costs of doing science. But it is also going to change the way medicine is done. Rather than one trip to the doctor to give blood and other samples and a follow-up trip to get the results, the trend is going to be toward in-office testing. You will walk into a doctor's office, the doctor will decide which tests to do, you will get the tests done, and then you will see the doctor again to review the results during the same office visit. This will save both time and money.

For more on the use of CDs to do biological and medical tests see my previous posts CD Will Simultaneously Test Concentrations Of Thousands Of Proteins and CD Player Turned Into Bioassay Molecule Detection Instrument.

By Randall Parker    2004 July 13 02:35 PM   Entry Permalink | Comments (0)
2004 June 30 Wednesday
New Magnetic Resonance Imaging Method Measures Electrical Activity

Whitehead Institute Fellow Alan Jasanoff at MIT is developing new techniques to use functional Magnetic Resonance Imagining (fMRI) to study electrical activity in the brain.

Identifying such networks is a goal that drives Jasanoff, who is pioneering new fMRI techniques that go beyond blood flow to expose the brain’s electrical activity—a series of impulses that transmits messages between neurons. The techniques are still experimental, so Jasanoff works with laboratory animals to isolate neural circuits involved in simple behaviors. “What we learn about simple behaviors in animals guides us toward an understanding of more complex behaviors in humans,” Jasanoff says. “Our findings can influence the direction of human research.”

Researchers trying to “get inside the brain” during experimental research traditionally have relied on electrodes wired directly into neural tissue. This process is not only invasive and cumbersome, it’s also limited in terms of its spatial coverage—electrodes gather data only from the area to which they are attached. Jasanoff’s research is offering another option, namely, a set of MRI contrasting, or imaging, agents that can selectively be activated by the brain’s electrical currents. “My approach will provide a direct assay for neural activity deep within the brain,” Jasanoff says. “This is unlike anything that is currently available.”

To date, Jasanoff’s focus has been on establishing a way to test imaging agents for fMRI in single brain cells of an oversized housefly called a “blowfly.” He presented the blowfly brain imaging approach in a 2002 article in the Journal of Magnetic Resonance, and demonstrated an oxygen imaging application using the setup in a 2003 article in the journal Magnetic Resonance in Medicine. Now Jasanoff is completing work on two new brain imaging agents, and intends to adapt the agents so they can be used safely in higher organisms, for instance, rodents. Studies in animals are necessary before the agents can be used in experiments with human subjects, a step in the research that Jasanoff notes is many years away.

The current use of fMRI to measure blood flow limits how much information can be discovered. Imaging agents that make possible the measurement of actual patterns of electrical activity would allow fMRI to measure brain activity far more directly than is currently the case. So Jasanoff's work is potentially very important for brain studies.

While this work is years away from being applied to humans it illustrates a larger trend familiar to long time FuturePundit readers: biological assay tools are becoming more powerful. Our ability to measure biological structures and activity is steadily increasing. Phenomena that are now difficult to study and to manipulate will become increasingly easier to watch and to change.

By Randall Parker    2004 June 30 10:15 AM   Entry Permalink | Comments (1)
2004 June 29 Tuesday
Harvard Team Pursues Polony Method Of Rapid Genetic Sequencing

The National Human Genome Research Institute (NHGRI) portion of the US National Institutes of Health (NIH) has awarded a Centers of Excellence in Genomic Science (CEGS) to a team at Harvard Medical School to develop cheaper and faster DNA sequencing technologies.

At Harvard, a team led by George Church, Ph.D., will address the biomedical research community's need for better and more cost-effective technologies for imaging biological systems at the level of DNA molecules (genomes) and RNA molecules (transcriptomes). The center will receive $2 million annually in CEGS funding for five years.

Specifically, the Harvard center plans to further develop polymerase colony sequencing technologies for studying sequence variation in biological systems. In this highly parallel method of nucleic acid analysis, a sample of DNA is dispersed as many short fragments in a polyacrylamide gel affixed to a microscope slide. Researchers then add an enzyme called DNA polymerase, which copies each DNA fragment repeatedly, forming tiny, localized sets of identical fragments. These sets of fragments are embedded in the gel in a manner reminiscent of bacterial colonies, which has prompted scientists to refer to them as "polonies."

Next, the polonies are exposed sequentially to free DNA bases tagged with fluorescent markers in the presence of a different enzyme, and the incorporation of those bases into the polonies is monitored with a scanning machine. This produces a read-out of the DNA sequence from each polony. A computer program then assembles the DNA sequences from the individual polonies into an order that reflects the complete sequence of the original DNA sample. The ordering process is accomplished by aligning the sequences of the individual polonies with a reference DNA sequence, such as the sequence produced by the Human Genome Project. In addition to its application in DNA sequencing, polony technology can be used to study the transcriptome (RNA content) of cells and to determine differences in genome sequence between different individuals (genotypes and haplotypes).

The technology developed by Church's team currently can read a slide with 10 million polonies in about 20 minutes, making it one of the swiftest DNA sequencing methods now available. With the further development planned at the center, the technology has the potential to lead to quicker, more cost-effective ways of sequencing individual genomes for use in research or clinical settings. Producing a high-quality draft of a mammalian-sized genome currently costs about $20 million, but NHGRI's aim is to dramatically reduce that cost to $1,000 over the next 10 years.

"In order to reach that ambitious goal, we will need to develop a completely integrated system that requires very small volumes and utilizes very inexpensive instruments. Ideally, the system would cost no more than a good desktop computer," said Dr. Church.

Cheap fast DNA sequencing will allow individuals to have their DNA sequenced. That ability will usher in a new era where the knowledge of a given person's DNA sequences will lead to many changes. Among the practices that will become commonplace once personal DNA sequencing becomes cheap:

  • Nutritional genomics. Each individual will be given dietary advice customized to their genetic predispositions.
  • Pharmacogenomics. One aspect of this will be personalized drug selection to avoid adverse reactions and to enhance therapeutic effects.
  • Development of drugs aimed at subsets of the general population who will most benefit from particular mechanisms of drug action.
  • Identification of individuals who are especially susceptible to particular environmental toxins so they know to avoid such toxins.
  • Identification of all genetic variations that influence intelligence and personality. This will lead initially to changing mating rituals as people pursue potential mates with the most heavily desired genetic variations.
  • As part of changes in mating strategies people will surreptitiously steal tissue samples from each other in order to sequence each other's DNA. Genetic privacy will become impossible to protect.
  • Many women will opt to use sperm bank sperm to get their ideal male genetic donation.
By Randall Parker    2004 June 29 08:24 PM   Entry Permalink | Comments (3)
2004 June 15 Tuesday
MIT Group Develops Automated Means For Testing Stem Cells

A recurring theme of this blog is that automation and miniaturization of laboratory devices are speeding up the rate of advance of biological science and biotechnology. Miniaturization is being done in conjunction with parallelization to enable faster and cheaper testing and manipulation of cells and materials. This trend is analogous to the acceleration of computers achieved by making their parts ever smaller, and many technologies developed by the computing industry are helping to enable the acceleration of biological advances. Well, yet another example of this trend is an MIT report on the development of miniaturized arrays for growing and testing embryonic stem cells.

CAMBRIDGE, Mass.--An MIT team has developed new technology that could jump-start scientists' ability to create specific cell types from human embryonic stem cells, a feat with implications for developing replacement organs and a variety of other tissue engineering applications.

The scientists have already identified a simple method for producing substantially pure populations of epithelial-like cells from human embryonic stem cells. Epithelial cells could be useful in making synthetic skin.

Human embryonic stem cells (hES) have the potential to differentiate into a variety of specialized cells, but coaxing them to do so is difficult. Several factors are known to influence their behavior. One of them is the material the cells grow upon outside the body, which is the focus of the current work.

"Until now there has been no quick, easy way to assess how a given material will affect cell behavior," said Robert Langer, the Germeshausen Professor of Chemical and Biomedical Engineering. Langer is the senior author of a paper on the work that will appear in the June 13 online issue of Nature Biotechnology.

The new technique is not only fast; it also allows scientists to test hundreds to thousands of different materials at the same time. The trick? "We miniaturize the process," said Daniel G. Anderson, first author of the paper and a research associate in the Department of Chemical Engineering. Anderson and Langer are coauthors with Shulamit Levenberg, also a chemical engineering research associate.

The team developed robotic technology to deposit more than 1,700 spots of biomaterial (roughly 500 different materials in triplicate) on a glass slide measuring only 25 millimeters wide by 75 long. Twenty such slides, or microarrays, can be made in a single day. Exposure to ultraviolet light polymerizes the biomaterials, making each spot rigid and thus making the microarray ready for "seeding" with hES or other cells. (In the current work, the team seeded some arrays with hES and some with embryonic muscle cells.)

Each seeded microarray can then be placed in a different solution, including such things as growth factors, to incubate. "We can simultaneously process several microarrays under a variety of conditions," Anderson said.

Another plus: the microarrays work with a minimal number of cells, growth factors and other media. "That's especially important for human embryonic stem cells because the cells are hard to grow, and the media necessary for their growth are expensive," Anderson said. Many of the media related to testing the cells, such as antibodies, are also expensive.

In the current work, the scientists used an initial screening to find especially promising biomaterials for the differentiation of hES into epithelial cells. Additional experiments identified "a host of unexpected materials effects that offer new levels of control over hES cell behavior," the team writes, demonstrating the power of quick, easy screenings.
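
One payoff of spotting each material in triplicate, as described above, is that replicate readouts can be averaged and their spread checked before a material is trusted as a hit. The sketch below does only that bookkeeping step, with invented readout numbers for three hypothetical materials; nothing here comes from the actual paper.

```python
from statistics import mean, stdev

# Hypothetical readouts (say, the fraction of cells expressing an epithelial
# marker) for each biomaterial spotted in triplicate on the slide.
triplicates = {
    "polymer_A": [0.62, 0.58, 0.65],
    "polymer_B": [0.12, 0.09, 0.15],
    "polymer_C": [0.48, 0.47, 0.55],
}

for material, values in triplicates.items():
    print(f"{material}: mean = {mean(values):.2f}, sd = {stdev(values):.2f}")
```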

My guess is that this technique is also usable for testing adult stem cells and more differentiated cell types.

This report brings to mind a pair of recent reports by UCSD researchers who tested tens of thousands of molecules to find a molecule called cardiogenol C that will turn embryonic stem cells into heart muscle cells and a molecule called reversine that will turn adult muscle cells into stem cells. In each case the ability to develop screening methods to test the effects of large numbers of molecules on cells in culture enabled the discovery of useful molecules.

The development of the enabling technologies that accelerate by orders of magnitude the search for compounds that change the internal regulatory state of cells is more important than any particular discovery made with the tools. There are many useful discoveries that could be made sooner if only ways can be developed to do tests more cheaply, more rapidly, more accurately, and with greater sensitivity.

By Randall Parker    2004 June 15 03:14 AM   Entry Permalink | Comments (0)
2004 June 03 Thursday
Naked DNA Gene Therapy Technique Advance

Reports of advances in gene therapy research have not been coming anywhere near as fast as reports about stem cell research advances. However, some University of Wisconsin researchers have achieved exciting successes in delivering genes into laboratory animals.

One group, consisting of researchers from the Medical School, the Waisman Center and Mirus Bio Corporation, now reports a critical advance relating to one of the most fundamental and challenging problems of gene therapy: how to safely and effectively get therapeutic DNA inside cells.

The scientists have discovered a remarkably simple solution. They used a system that is virtually the same as administering an IV (intravenous injection) to inject genes and proteins into the limb veins of laboratory animals of varying sizes. The genetic material easily found its way to muscle cells, where it functioned as it should for an extended period of time.

“I think this is going to change everything relating to gene therapy for muscle problems and other disorders,” says Jon Wolff, a gene therapy expert who is a Medical School pediatrics and medical genetics professor based at the Waisman Center. “Our non-viral, vein method is a clinically viable procedure that lets us safely, effectively and repeatedly deliver DNA to muscle cells. We hope that the next step will be a clinical trial in humans."

Wolff conducted the research with colleagues at Mirus, a biotechnology company he created to investigate the gene delivery problem. He will be describing the work on June 3 at the annual meeting of the American Society of Gene Therapy in Minneapolis, and a report will appear in a coming issue of Molecular Therapy. The research has exciting near-term implications for muscle and blood vessel disorders in particular.

Love that bit about "near-term implications". This technique could be used to treat Duchenne’s muscular dystrophy and a number of other diseases.

Duchenne’s muscular dystrophy, for example, is a genetic disease characterized by a lack of muscle-maintaining protein called dystrophin. Inserting genes that produce dystrophin into muscle cells could override the defect, scientists theorize, ensuring that the muscles with the normal gene would not succumb to wasting. Similarly, the vein technique can be useful in treating peripheral arterial occlusive disease, often a complication of diabetes. The disorder results in damaged arteries and, frequently, the subsequent amputation of toes.

What’s more, Wolff says, with refinements the technique has the potential to be used for liver diseases such as hepatitis, cirrhosis and PKU (phenylketonuria).

In the experiments, the scientists did not use viruses to carry genes inside cells, a path many other groups have taken. Instead, they used “naked” DNA, an approach Wolff has pioneered. Naked DNA poses fewer immune issues because, unlike viruses, it does not contain a protein coat (hence the term “naked”), which means it cannot move freely from cell to cell and integrate into the chromosome. As a result, naked DNA does not cause antibody responses or genetic reactions that can render the procedure harmful.

Researchers rapidly injected “reporter genes” into a vein in laboratory animals. Under a microscope, these genes brightly indicate gene expression. A tourniquet high on the leg helped keep the injected solution from leaving the limb.

“Delivering genes through the vascular system lets us take advantage of the access blood vessels have — through the capillaries that sprout from them — to tissue cells,” Wolff says, adding that muscle tissue is rich with capillaries. Rapid injection forced the solution out of the veins into capillaries and then muscle tissue.

The injections yielded substantial, stable levels of gene activity throughout the leg muscles in healthy animals, with minimal side effects. “We detected gene expression in all leg muscle groups, and the DNA stayed in muscle cells indefinitely,” notes Wolff.

In addition, the scientists were able to perform multiple injections without damaging the veins. “The ability to do repeated injections has important implications for muscle diseases since to cure them, a high percentage of therapeutic cells must be introduced,” he says.

The researchers also found that they could use the technique to successfully administer therapeutically important genes and proteins. When they injected dystrophin into mice that lacked it, the protein remained in muscle cells for at least six months. Similar lasting power occurred with the injection of erythropoietin, which stimulates red blood cell production.

Furthermore, in an ancillary study, the researchers learned that the technique could be used effectively to introduce molecules that inhibit — rather than promote — gene expression, a powerful new procedure called RNA interference.

Given a way to reliably and safely deliver genes, there are literally thousands of different diseases that could potentially be treated with gene therapy. The lack of good mechanisms for delivering genes has been the major factor holding back the development of gene therapies. Whether this mechanism will turn out to be safe remains to be seen. For a gene therapy technique to work one has to avoid causing gene expression in cell types other than the desired target type, avoid overexpression in the desired target cell type, and achieve fairly even distribution to all cells of the target type. Plus, there is a very real worry about damage to genomes that could cause cancer. Whether this latest technique will get tripped up by one or more of these problems remains to be seen.

By Randall Parker    2004 June 03 12:38 PM   Entry Permalink | Comments (0)
2004 May 19 Wednesday
CD Will Simultaneously Test Concentrations Of Thousands Of Proteins

A report from Purdue University drives home the point that demand for consumer mass market electronics is funding the development of technologies that are greatly improving scientific and medical instrumentation.

A team led by physicist David D. Nolte has pioneered a method of creating analog CDs that can function as inexpensive diagnostic tools for protein detection. Because the concentration of certain proteins in the bloodstream can indicate the onset of many diseases, a cheap and fast method of detecting these biological molecules would be a welcome addition to any doctor's office. But with current technology, blood samples are sent to laboratories for analysis – a procedure that only screens for a few of the thousands of proteins in the blood and also is costly and time-consuming.

"This technology could revolutionize medical testing," said Nolte, who is a professor of physics in Purdue's School of Science. "We have patented the concept of a 'bio-optical CD,' which could be a sensitive and high-speed analog sensor of biomolecules. Technology based on this concept could provide hospitals with a fast, easy way to monitor patient health."

CDs ordinarily store digital information – such as computer data or music – as billions of tiny "pits" in their surface. These microscopic pits, which represent binary ones or zeroes depending on their size, are etched in concentric tracks circling the midpoint from the inner to the outer edge of a CD.

"It is these pits which we transform into miniature test tubes," Nolte said. "Each pit can hold a trace quantity of a chemical that reacts to a certain protein found in the blood."

Blood contains more than 10,000 proteins that physicians would like to monitor, and Nolte said up to 10,000 tracks on a CD could be paired up with a different protein.

"Each ring of pits, or 'track,' on the CD could be coated with a different protein," he said. "Once the surface of a BioCD has been exposed to a blood serum sample – which would not need to be larger than a single drop – you could read the disk with laser technology similar to what is found in conventional CD players. Instead of seeing digital data, the laser reader would see how concentrated a given protein had become on each track."

Each pit is only a few micrometers – millionths of a meter – in diameter, but is nevertheless large enough to hold many thousands of individual detector molecules, each of which could pair up with and bond to a single protein molecule. The pits' capacity, Nolte said, would make the Bio-CDs an analog, rather than merely digital, screening tool.
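
To make the readout idea concrete, here is a toy sketch of how per-track laser signals might be turned into a protein concentration report. This is not Nolte's actual signal processing; the protein names, calibration factors, and signal values are all invented for illustration.

    # Toy model of an analog BioCD readout: each track is coated with a capture
    # molecule for one protein, and the laser signal from that track is assumed
    # to scale (after calibration) with how much protein has bound to it.
    # All protein names and numbers here are invented for illustration.

    calibration = {          # hypothetical signal units per ng/mL of bound protein
        "protein A": 0.8,
        "protein B": 1.2,
        "protein C": 2.5,
    }

    raw_signal = {           # hypothetical laser signals read from three tracks
        "protein A": 3.6,
        "protein B": 0.6,
        "protein C": 0.05,
    }

    def concentrations(signals, cal):
        """Convert per-track analog signals into estimated concentrations (ng/mL)."""
        return {name: signals[name] / cal[name] for name in signals}

    for name, conc in concentrations(raw_signal, calibration).items():
        print(f"{name}: ~{conc:.2f} ng/mL")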

An argument familiar to long-time FuturePundit readers is that the rate of advance in biological science and biotechnology is accelerating because instrumentation is improving in speed, cost, and sensitivity by orders of magnitude. Much of this acceleration is happening for reasons unrelated to funding levels in the biological sciences because many of the enabling technologies are being developed in other fields and industries for other purposes. This report about the use of consumer CD technology to read thousands of proteins at once illustrates the idea rather nicely.

This isn't to say that basic research funding for biology is a waste of money. These Purdue researchers wouldn't be building their device if they didn't have the funding to do so. Also, once this device is functional it will be useful as a tool to carry out basic research more productively. Plus, there are plenty of potential new types of biological instrumentation, for instance using microfluidics, where it is taking years of basic research to solve the problems that block attempts to build useful devices.

Also read about earlier work by scientists at UC San Diego: CD Player Turned Into Bioassay Molecule Detection Instrument

By Randall Parker    2004 May 19 10:40 AM   Entry Permalink | Comments (2)
2004 May 15 Saturday
New DNA Testing Method Holds Promise For Clinical Use

A new DNA testing method promises to enable DNA testing in doctors' offices.

EVANSTON, Ill. --- Since the advent of the polymerase chain reaction (PCR) nearly 20 years ago, scientists have been trying to overturn this method for analyzing DNA with something better. The “holy grail” in this quest is a simple method that could be used for point-of-care medical diagnostics, such as in the doctor’s office or on the battlefield.

Now chemists at Northwestern University have set a DNA detection sensitivity record for a diagnostic method that is not based on PCR -- giving PCR a legitimate rival for the first time. Their results were published online today (April 27) by the Journal of the American Chemical Society (JACS).

“We are the first to demonstrate technology that can compete with -- and beat -- PCR in many of the relevant categories,” said Chad A. Mirkin, director of Northwestern’s Institute for Nanotechnology, who led the research team. “Nanoscience has made this possible. Our alternative method promises to bring diagnostics to places PCR is unlikely to go -- the battlefield, the post office, a Third World village, the hospital and, perhaps ultimately, the home.”

The new selective and ultra-sensitive technology, which is based on gold nanoparticles and DNA, is easier to use, considerably faster, more accurate and less expensive than PCR, making it a leading candidate for use in point-of-care diagnostics. The method, called bio-bar-code amplification (BCA), can test a small sample and quickly deliver an accurate result. BCA also can scan a sample for many different disease targets simultaneously.

The Northwestern team has demonstrated that the BCA method can detect as few as 10 DNA molecules in an entire sample in a matter of minutes, making it as sensitive as PCR. The technology is highly selective, capable of differentiating single-base mismatches and thereby reducing false positives.

As the term "polymerase chain reaction" indicates, the existing test relies upon the polymerase enzyme. Enzymes break down, work best in narrow temperature ranges, and are usually a pain to deal with. While some progress has been made of late on how to stabilize enzymes, they are still best avoided if the goal is to make a cheap reusable test with a long shelf-life.

Some of the DNA testing methods developed for use in clinical settings will likely turn out to be amenable to further technological refinement to become simple enough for mass market consumer use as well. Once DNA testing kits become available over the counter genetic privacy will be impossible to protect. Even fingerprints will be used as sources of DNA samples.

There are plenty of precedents for the development of home medical testing kits. Consider the blood sugar testing kits used by diabetics or the pregnancy testing kits that are advertised so widely on television. Mass market DNA testing kits will be developed because the market demand is there and, as this latest report demonstrates, technologies can be developed that will drive down costs and increase ease of use.

By Randall Parker    2004 May 15 03:06 PM   Entry Permalink | Comments (0)
2004 April 29 Thursday
Less Regulation Would Increase Number Of Drugs Developed

Alex Tabarrok of Marginal Revolution draws attention to a paper by Nobel Prize winning economist Gary Becker of the Milken Institute’s Center for Accelerating Medical Solutions on the desirability of removing the US Food and Drug Administration's (FDA) power to hold off drug approval until efficacy is demonstrated. (PDF format)

Treatment of serious diseases usually offers options that trade off various risks, for example, forcing comparisons between the likely quality of life and its expected length. No one is really able to read the thoughts of others well enough to make informed decisions on their behalf. This is why patients – not regulators or even physicians – should have ultimate control over treatment. I mention regulators because government authorities, in particular the Food and Drug Administration, make it extremely difficult for the very ill to gain access to unproven therapies. This barrier became more daunting when the FDA instituted regulations in 1962 that significantly raised the cost and lengthened the time it takes to bring new drugs to market.

Before that year, pharmaceutical makers had to show only that drugs appeared to be safe for the vast majority of patients likely to take them. Assuring safety required clinical trials along with other evidence and was not an easy obstacle to overcome. But I do not want to argue here the issue of how safe is safe enough, and I assume that an FDA safety standard, perhaps even a strengthened one, is desirable.

The 1962 regulations, however, went beyond safety to add an efficacy standard. That is, clinical trial evidence would from then on have to strongly support claims that products significantly aid in the treatment of specific diseases or conditions. Indeed, in the final stage of mandated clinical tests, randomized trials must show that those treated with a drug are significantly helped compared with a control group.

This efficacy test greatly lengthened the average time between discovery and approval. Although in recent years the FDA has maintained a “fast track” for high priority drugs, bringing a new therapy to market takes an average of 12 to 15 years. The typical drug must be tested on some 6,000 patients, increasing total development costs by about 40 percent. It follows that a return to a safety standard alone would lower costs and raise the number of therapeutic compounds available. In particular, this would include more drugs from small biotech firms that do not have the deep pockets to invest in extended efficacy trials. And the resulting increase in competition would mean lower prices – without the bureaucratic burden of price controls. In turn, cheaper and more diverse drugs would induce insurance companies and public providers to cover many more new drugs, even when their efficacy was uncertain.

Elimination of the efficacy requirement would give patients, rather than the FDA, the ultimate responsibility of deciding which drugs to try. Presumably, the vast majority of patients would continue to rely on the opinions of physicians about which drugs to use. But many people whose lives are at risk want proactive in reporting what is known about the value of drugs in treating diseases, making data available through the Internet and other consumer-friendly media.

One of the more depressing aspects of serious disease is the sense of impotence – that very sick persons can do little to help themselves. This may explain why placebos sometimes generate positive effects. Indeed, Anup Malani of the University of Virginia found that patients receiving placebos in double-blind trials of ulcer drugs reported significantly less pain than they had before the trials.

Giving people whose lives are threatened by serious diseases greater access to safe, promising (albeit unproven) drugs and other treatments would help their psychological state. More important, it would lower the cost and hasten the development of therapies.

Even a simple safety standard seems excessive for a disease like cancer. The vast bulk of cancer treatments are highly toxic. The whole problem with trying to selectively kill cancer cells is that, aside from their uncontrolled growth, they have too much in common with normal cells.

Alex and Daniel B. Klein have a web site dedicated to examining FDA regulation, FDAReview.org. From the web site:

Medical drugs and devices cannot be marketed in the United States unless the U. S. Food and Drug Administration (FDA) grants specific approval. We argue that FDA control over drugs and devices has large and often overlooked costs that almost certainly exceed the benefits. We believe that FDA regulation of the medical industry has suppressed and delayed new drugs and devices, and has increased costs, with a net result of more morbidity and mortality. A large body of academic research has investigated the FDA and with unusual consensus has reached the same conclusion.

Drawing on this body of research, we evaluate the costs and benefits of FDA policy. We also present a detailed history of the FDA, a review of the major plans for FDA reform, a glossary of terms, a collection of quotes from economists who have studied the FDA, and a bibliography with many webbed links.

From a rights perspective I am especially opposed to allowing the FDA to continue to have the power to slow the delivery of new drugs to the market for diseases that are fatal. If you have just been told you have 1 or 2 or 5 years to live because of some form of cancer, why shouldn't you be free to try any drug against your disease? Why should the government have the power to "protect" you? Protect you from what? Death? You are already going to die. Sure, an experimental therapy may kill you sooner. But suppose you have a bone cancer that is going to take 5 or 6 or 7 disabling and extremely painful years to kill you. Isn't it your right as owner of your own body and life to gamble on some alternatives?

There is also the compelling economic case for fewer regulatory obstacles which Gary Becker and many other economists make. As it stands now there are very few organizations with the financial wherewithal to bring a drug all the way to market. This limits how many drugs will be brought through all the steps. Medicinal chemist and blogger Derek Lowe delivers a daily dose of sobering commentary on all the scientific, financial, and regulatory obstacles to bringing new drugs to market. The cost of bringing a new drug to market is over $800 million (more like $900 million). Such a large sum of money makes investors extremely risk averse, and any drug that looks like a longer-shot gamble just isn't going to be developed.

One of the costs is the work that must be done for the regulatory process itself. But another substantial cost is time to market: the longer approval takes, the more interest accrues on the money spent developing the drug. If the efficacy stage were cut out of the regulatory process then both the direct cost of the efficacy trials and the interest carried on all the earlier stages would drop.
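
To illustrate the time-to-market point, here is a back-of-the-envelope sketch with invented numbers (these are not figures from Becker's paper): money spent on development carries a cost of capital for every additional year spent waiting for approval, so cutting years out of the process lowers the capitalized cost even before counting the direct cost of the efficacy trials.

    # Back-of-the-envelope capitalized cost of a drug program at launch.
    # Assumes (purely for illustration) that $500 million of direct spending
    # is incurred up front and then carried at an 11% annual cost of capital
    # until the drug reaches market.

    def capitalized_cost(direct_cost, annual_rate, years_to_market):
        """Direct spending grown at the cost of capital until launch."""
        return direct_cost * (1 + annual_rate) ** years_to_market

    direct = 500e6   # hypothetical direct development spending, dollars
    rate = 0.11      # hypothetical annual cost of capital

    for years in (14, 8):   # e.g. with versus without a lengthy efficacy phase
        total = capitalized_cost(direct, rate, years)
        print(f"{years} years to market: ${total / 1e9:.2f} billion")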

Perhaps the "efficacy" stage of FDA approval could be left in place as an optional stage. Anyone who wanted to take only drugs that were approved for efficacy would be free to do so. Any drug company would be free to start selling once the safety stage (or some less restrictive stage in the case of cancer drugs) was passed. The drug company could start selling then and still go after "efficacy" approval in order to later use that additional approval in marketing to doctors and patients.

The result of this proposed reduction in FDA power would be to increase the rate at which new drugs are tried, increase the number of organizations developing new drugs, and lower the cost of drugs. The rate of advance of biomedical research and biotechnology will accelerate if the cost and time to market for drugs is reduced. We will have longer life expectancies and lower medical bills if the efficacy stage of testing is removed as a requirement for new drug approval.

By Randall Parker    2004 April 29 12:28 PM   Entry Permalink | Comments (5)
2004 April 07 Wednesday
DNA Sequencing Costs Continue to Decline

Biotech instrumentation company Affymetrix has announced a new instrument for mapping DNA Single Nucleotide Polymorphisms (SNPs) which Affymetrix claims is much cheaper to operate than previous instruments designed for this purpose. SNPs are locations in the genome where not all humans (or all members of another species) have the same DNA letter. The new Affymetrix instrument will reportedly lower the cost of human SNP testing to 1 penny per DNA letter.

The 100K builds on the innovative, scalable, easy to use assay that Affymetrix pioneered with the GeneChip Mapping 10K Array. The 100K allows researchers to genotype over 100,000 SNPs using just two reactions. Previously, genotyping 100,000 SNPs would have required 100,000 PCR reactions, a hurdle that made this kind of research impractical. Before the advent of 100K, the commercial product for genotyping the most SNPs was Affymetrix' Mapping 10K.

“The power of 100,000 SNPs in a single experiment is enabling researchers to attempt unprecedented genetic studies at a genome-wide scale,” said Greg Yap, Sr. Marketing Director, DNA Analysis. “The GeneChip Mapping 100K Set is the first in a family of products that will enable scientists to identify genes associated with disease or drug response across the whole genome instead of just studying previously known SNPs or genes, and to study complex real-world populations instead of simple ones. To do this, we are making large-scale SNP genotyping not only quick and easy, but also affordable -- about 1 cent per SNP.”

About half of the SNPs on the 100K set are from public databases, while the other half are from the SNP database discovered by Perlegen Sciences, Inc. All of the SNPs on the 100K set are freely available and have been released into the public domain. Because the assays and arrays used in the 100K set are extremely scalable, more SNPs from both public sources and the Perlegen database will be added to next generation arrays.

Just a couple of years ago the cost of SNP assays was about 50 cents per SNP. Industry analysts at the time were predicting that SNP costs would fall to 1 cent per SNP within 2 years, and sure enough, if Affymetrix's press release is realistic, those costs are just about to hit the predicted 1 cent per SNP.

To put that number in some useful perspective, there may be about 10 million SNPs in humans but perhaps only about half a million SNPs in areas of the genome that affect human function. Most of the SNPs are in junk regions. The 500,000 estimate of functionally significant SNPs is a scientific guess at this point and could easily be off by a few hundred thousand in either direction. But once those 500,000 SNPs are identified (which could easily take a few years yet), at 1 penny per SNP testing them all would cost about $5,000 per person. The real cost would be higher, easily several times that amount, since portions of the genome would need to be isolated for the experimental runs. That still wouldn't provide a complete picture of a person's genome since there are a few other kinds of genetic variations (e.g. short tandem repeats or STRs). But SNPs are responsible for most genetic differences within a single species.

At this point the decline in SNP testing prices is useful chiefly to scientists since we don't know which SNPs are important, let alone how they are important. Still, lower costs for SNP testing will accelerate the rate at which important SNPs are identified and that will bring closer the day when it makes sense for individuals to get complete SNP testing.

Full DNA sequencing is also at about 1 cent per DNA letter.

Our price is competitive at just $.01 per base per sample, all inclusive. For example, total sequencing for a 7 Kb region on 48 samples would cost a total of 7000 x 48 x $.01 = $3,360. This pricing structure makes SNP discovery project estimates straight forward, avoiding the possible uncertainty of estimating the exact number of PCR amplicons, sequencing reactions, etc.

This price would make a complete 2.9 billion letter human genome sequencing for a single person cost about $29 million. But it would really cost several times more than that since sequencing has to be done multiple times to catch errors and there are sections of the genome that are hard to sequence. Since whole genome sequencing is so much more expensive, ways to lower the cost of SNP testing are attracting the most interest. When SNP testing costs fall by another order of magnitude the cost will be low enough for at least the more affluent among us to want to pay for personal SNP tests. Once the costs fall yet another order of magnitude SNP testing for the masses will become commonplace. My guess is that, given the rate at which SNP testing costs continue to fall and the predictions of industry figures of much cheaper SNP testing, we are at most 10 years away from general population large scale SNP testing.
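
For anyone who wants to check the arithmetic in the last two paragraphs, here it is in a few lines of Python. The per-letter price comes from the quotes above; the 500,000 SNP count and the coverage multiplier are the rough guesses already discussed.

    # Back-of-the-envelope costs at roughly 1 cent per SNP / per DNA letter.
    price_per_letter = 0.01        # dollars, from the pricing quoted above

    functional_snps = 500_000      # rough guess at functionally significant SNPs
    print(f"SNP panel: ${functional_snps * price_per_letter:,.0f}")

    genome_letters = 2.9e9         # approximate human genome size
    print(f"Single-pass genome: ${genome_letters * price_per_letter:,.0f}")

    # Real sequencing is done several times over to catch errors, so multiply
    # by an assumed coverage factor for a more realistic figure.
    coverage = 5
    print(f"With {coverage}x coverage: ${genome_letters * price_per_letter * coverage:,.0f}")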

Note: The Affymetrix press release link was found in a post on the Personal Genome blog.

By Randall Parker    2004 April 07 02:53 AM   Entry Permalink | Comments (3)
2004 March 27 Saturday
Special Computer Speeds Protein Folding Calculations

A parallel computer designed for high energy physics is speeding calculations for protein folding by 3 orders of magnitude.

MONTREAL, CANADA -- Scientists at the U.S. Department of Energy's Brookhaven National Laboratory are proposing to use a supercomputer originally developed to simulate elementary particles in high-energy physics to help determine the structures and functions of proteins, including, for example, the 30,000 or so proteins encoded by the human genome. Structural information will help scientists better understand proteins' role in disease and health, and may lead to new diagnostic and therapeutic agents.

Unlike typical parallel processors, the 10,000 processors in this supercomputer (called Quantum Chromodynamics on a Chip, or QCDOC, for its original application in physics) each contain their own memory and the equivalent of a 24-lane superhighway for communicating with one another in six dimensions. This configuration allows the supercomputer to break the task of deciphering the three-dimensional arrangement of a protein's atoms -- 100,000 in a typical protein -- into smaller chunks of 10 atoms per processor. Working together, the chips effectively cut the computing time needed to solve a protein's structure by a factor of 1000, says James Davenport, a physicist at Brookhaven. This would reduce the time for a simulation from approximately 20 years to 1 week.

"The computer analyzes the forces of attraction and repulsion between atoms, depending on their positions, distances, and angles. It shuffles through all the possible arrangements to arrive at the most stable three-dimensional configuration," Davenport says.

This is a familiar theme to long-time FuturePundit readers: the rate of advance in biological science and technology is accelerating because technological advances are producing tools which allow scientists to find answers literally orders of magnitude faster. While I post more often about instrumentation advances, the ability of computers to simulate biological systems may turn out to be more important in the long run. Many experiments that are now done through lab work will in the future instead be done with computer simulations.
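
As a quick sanity check on the numbers quoted above (a sketch only, using nothing but the figures in the press release):

    # Sanity check of the quoted QCDOC figures: 100,000 atoms spread across
    # 10,000 processors, and a 1000x reduction in time-to-solution.
    atoms, processors = 100_000, 10_000
    print(f"Atoms per processor: {atoms // processors}")           # 10

    years, speedup = 20, 1000
    print(f"20 years / 1000 = {years * 52 / speedup:.1f} weeks")   # about 1 week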

By Randall Parker    2004 March 27 02:01 PM   Entry Permalink | Comments (1)
2004 March 15 Monday
Cal Tech Quake Lab Extracts DNA From Single Cell

The Stephen Quake lab at the California Institute of Technology (CalTech) has developed a microfluidic device that will extract the DNA from a single cell.

By shrinking laboratory machines to minute proportions, California scientists have built a postage stamp-sized chip that drags DNA from cells. The device might one day shoulder some of scientists' routine tasks.

Microfluidic devices are going to revolutionize biological science and medicine because they will lower costs and increase automation by orders of magnitude.

The chip requires thousands to millions times less of the expensive chemicals required to isolate and process nucleic acids such as DNA and RNA. Once commercialized, it could have a profound impact on the nucleic acid isolation market, which is worth $232 million per year in the United States alone. Current leaders in that market include Qiagen in Germany, Sigma-Aldrich in St. Louis and Amersham Biosciences in Britain.

Parallel processing samples on a chip will speed up the rate of analysis and lower costs.

Steve Quake's team describes the general architecture for parallel processing of nanoliter fluids for biotechnology in a letter in the March 15 Nature Biotechnology. “We think it's an important milestone in general biological automation,” he told The Scientist.

Automation lowers costs, and the ability to use smaller samples and smaller amounts of reagents also lowers costs. But another advantage of microfluidics is that it enables the measurement of things that otherwise could not be measured at all. It is often very difficult to get large samples in the first place. Many cell types are difficult or impossible to grow in culture. Also, growing cells in culture will change their internal chemical state. The ability to look inside and measure the contents and structure of a single cell will allow many types of experiments and tests that are just not possible today.

Also see my previous posts on the Quake lab's work: Cal Tech Chemical Lab On A Silicon Chip and Microfluidics to Revolutionize Biosciences.

By Randall Parker    2004 March 15 11:13 AM   Entry Permalink | Comments (0)
2004 February 27 Friday
Cryo-Electron Microscopy Provides Clearer Picture Of Ribosomes

An advance in cryo-electron microscopy instrumentation will enable the mechanisms of antibiotic resistance due to bacterial ribosome mutations to be understood more rapidly.

By refining a technique known as cryo-electron microscopy, researchers from Imperial College London and CNRS-Inserm-Strasbourg University have determined how the enzyme RF3 helps prepare the protein-making factory for its next task following disconnection of the newly formed protein strand.

The team's success in capturing the protein-making factory, or ribosome, in action using cryo-electron microscopy will help scientists to begin deciphering the molecular detail of how many antibiotics interfere with the final steps of protein synthesis - an area not currently targeted in antibiotics research.

Professor Marin van Heel of Imperial's Department of Biological Sciences and senior author of the study says:

"Many antibiotics kill bacteria by interfering with their protein-making factories, or ribosomes. But bacteria can often become resistant by mutating their ribosome machinery. Observing ribosomes in action helps us understand which areas of the protein complex evolve such resistance quickly. This information could then be used to develop new antibiotics that target the more stable regions.

"We've used cryo-electron microscopy in a similar way to time lapse footage. It has allowed us to visualise how one cog in a cell's protein manufacturing plant operates. By refining the technique even further we hope to be able to visualise the molecular interactions on an atomic scale. This kind of knowledge has applications across the board when you are trying to work out key factors in diagnosis, treatment or cause of many diseases."

Professor van Heel pioneered cryo-electron microscopy 10 years ago. Since then it has become an essential part of many structural biologists' toolkit. It overcomes the problem of weak image contrast in electron microscopy and avoids the difficult and time-consuming process of growing crystals that can be analysed using X-ray diffraction.

As professor van Heel points out, this technique is applicable to the study of the shape and action of many other types of molecules in cells.

Rapid freezing provides a snapshot of what the ribosomes were doing at that moment in time. Also, freezing reduces the damage the electron beam does to the ribosomes, so a larger electron dose can be used to get a clearer picture. This is analogous to how a bright camera flashbulb provides more light to get a better picture.

Electron microscopy images are created by firing electrons at the sample but this process rapidly damages the biological material. To overcome this degradation problem researchers use a low dose of radiation, which leads to extremely poor image quality. "It's like trying to see in the dark," says Professor van Heel.

"Cryo-electron microscopy uses a sample in solution which is rapidly frozen by plunging it into liquid ethane and maintaining it at the same temperature as liquid nitrogen," he explains.

"This maintains the 3D structure of the molecule and gives you instant access to the cellular process of interest. Also, the effect of freezing the sample before electron microscopy reduces radiation damage. This makes it possible to apply a higher electron dose, which gives a clearer image."

The goal is to create a movie of the molecular level changes that happen to ribosomes as they perform protein synthesis.

"After the X-ray structure of the ribosome became available a few years ago, one might think we already know all there is to know about protein synthesis," explains Professor van Heel.

"But we've still got so much to learn about the precisely synchronised series of steps that occurs. Researchers only became aware of the existence of ribosomes 50 years ago but they've a mystery since the creation of life 3.5 billion years ago. By improving the high resolution images we can create using cryo-electron microscopy our long term goal is to create a movie of protein synthesis on an atomic scale."

Advances in instrumentation speed up the rate of advances in basic biological science, biomedical research, and biotechnology by providing scientists with better tools for watching and manipulating biological systems. As a result each year witnesses a faster rate of discovery than the previous year. When people ask when various diseases will be cured, what they ought to ask is when the tools available to biologists will become advanced enough to let them figure out the causes and develop effective treatments. While that is perhaps a more precise question it is still very difficult to answer.

By Randall Parker    2004 February 27 09:20 AM   Entry Permalink | Comments (0)
2004 February 19 Thursday
Smart Vivarium Technology To Automate Animal Studies

Advances in electronics and software are being harnessed at UC San Diego to automate the monitoring and analysis of lab animals used in research.

Computer scientists and animal care experts at the University of California, San Diego (UCSD) have come up with a new way to automate the monitoring of mice and other animals in laboratory research. Combining cameras and distributed, non-invasive sensors with elements of computer vision, information technology and artificial intelligence, the Smart Vivarium project aims to enhance the quality of animal research, while at the same time enabling better health care for animals.

The pilot project is led by Serge Belongie, an assistant professor in Computer Science and Engineering at UCSD’s Jacobs School of Engineering. It is funded entirely by the California Institute for Telecommunications and Information Technology [Cal-(IT)²], a joint venture of UCSD and UC Irvine. “Today a lot of medical research relies on drug administration and careful monitoring of large numbers of live mice and other animals, usually in cages located in a vivarium,” said Belongie. “But it is an entirely manual process, so there are limitations on how often observations can be made, and how thoroughly those observations can be analyzed.”

This work at UCSD is still at a fairly early stage and the project is really of a rather open-ended nature. For decades to come advances in image processing algorithms, artificial intelligence algorithms, and other areas of computer science will combine with continuing advances in sensors and in computer speed and storage capacity to enable more useful information to be automatically derived from computerized automated monitoring systems. This project is definitely a step in a direction that promises to drastically lower costs and speed the rate of advancement of behavioral and biomedical research.

The ability to collect more data in a single experiment will reduce the number of experiments that need to be done. This will both speed research and lower costs.

UCSD is a major biological sciences research center, and animal-care specialists believe the technology under development could dramatically improve the care of research animals. “The Smart Vivarium will make better use of fewer lab animals and lead to more efficient animal health care,” said Phil Richter, Director of UCSD’s Animal Care Program, who is working with Belongie on the project. “Sick animals would be detected and diagnosed sooner, allowing for earlier treatments.” The technology would also help to reduce the number of animals needed in scientific investigations. “In medical research, experiments are sometimes repeated due to observational and analytical limitations,” said Belongie. “By recording all the data the first time, scientists could go back and look for different patterns in the data without using more mice to perform the new experiment.”

For many of the same reasons, the underlying technology could be useful for the early diagnosis and monitoring of sick animals in zoos, veterinary offices and agriculture. (“Early detection of lameness in livestock,” noted Belongie, “could help stop the transmission of disease.”) The computer scientist also intends to seek collaboration with the San Diego Zoo and other local institutions for practical field deployment of the monitoring systems as part of an upcoming study.

The total amount of data collected per experiment will go up by orders of magnitude with this system.

As for improvements in medical research from the continuous monitoring of lab animals, Belongie expects at least an improvement of two orders of magnitude in the automated collection and processing of monitoring data. “Continuous monitoring and mining of animal physiological and behavioral data will allow medical researchers to detect subtle patterns expressible only over lengthy longitudinal studies,” noted Belongie. “By providing a never-before-available, vivarium-wide collection of continuous animal behavior measurements, this technology could yield major breakthroughs in drug design and medical research, not to mention veterinary science, experimental psychology and animal care.”

Advances in computer hardware and software technologies serve as major enablers for advances in biomedical research, environmental research, and other aspects of biological and behavioral research. Continued rapid advances in computing technologies in coming decades will improve the productivity of researchers by orders of magnitude above current levels. Therefore the rate of advance of all the biological sciences will accelerate dramatically.

By Randall Parker    2004 February 19 01:26 PM   Entry Permalink | Comments (0)
2004 February 10 Tuesday
Stanford Researchers Develop Fast Cheap Way To Silence Genes

A Stanford team has developed a way to do RNA interference (RNAi) on many genes in a way that is cheap and fast enough to allow much wider use in research laboratories.

STANFORD -- Sometimes the first step to learning a gene's role is to disable it and see what happens. Now researchers at the Stanford University School of Medicine have devised a new way of halting gene expression that is both fast and cheap enough to make the technique practical for widespread use. This work will accelerate efforts to find genes that are involved in cancer and the fate of stem cells, or to find genes that make good targets for therapeutic drugs.

The technique, published in the February issue of Nature Genetics and now available online, takes advantage of small molecules called short interfering RNA, or siRNA, which derail the process of translating genes into proteins. Until now, these molecular newcomers in genetics research have been difficult and expensive to produce. Additionally, they could impede the activity of known genes only, leaving a swath of genes in the genetic hinterlands unavailable for study.

"siRNA technology is incredibly useful but it has been limited by expense and labor. A better method for generating siRNA has been needed for the whole field to move forward," said study leader Helen Blau, PhD, the Donald E. and Delia B. Baxter Professor of Pharmacology. She said some companies are in the process of creating pools, or libraries, of siRNA molecules for all known genes in specific organisms but these libraries aren't yet available.

Pathology graduate students George Sen, Tom Wehrman and Jason Myers became interested in creating siRNA molecules as a way of screening for genes that alter the fate of stem cells -- cells that are capable of self-renewal and the primary interest of Blau's lab. The students hoped to block protein production for each gene to find out which ones play a critical role in normal stem cell function.

"I told them that creating individual siRNAs to each gene was too expensive," said Blau. Undaunted, the students came up with a protocol for making an siRNA library to obstruct expression of all genes in a given cell -- including genes that were previously uncharacterized. They could then pull individual molecules like books from a shelf to test each one for a biological effect.

The team had several hurdles to overcome in developing their protocol. The first was a size limit -- an siRNA molecule longer than 29 subunits causes wide-ranging problems in the cell. The key to overcoming this barrier was a newly available enzyme that snips potential siRNA molecules into 21-subunit lengths. A further step copied these short snippets into a form that could be inserted into a DNA circle called a plasmid. When the researchers put a single plasmid into a cell, it began churning out the gene-blocking siRNA molecule.
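
As a rough illustration of the size constraint described above, here is a sketch of how a long candidate sequence gets broken into 21-subunit pieces. The Stanford group of course does this enzymatically on RNA, not in software, and the sequence below is made up.

    # Illustration only: slice a long candidate sequence into 21-subunit pieces,
    # mimicking the enzymatic step that keeps siRNAs below the ~29-subunit size
    # at which they start causing wide-ranging problems in the cell.
    SIRNA_LENGTH = 21

    def chop_into_sirnas(sequence, length=SIRNA_LENGTH):
        """Return non-overlapping pieces of the given length; drop any short remainder."""
        return [sequence[i:i + length]
                for i in range(0, len(sequence) - length + 1, length)]

    example = "AUGGCUACGUUAGCCUAGGCAUCGAUGGCUAAGCUAGGCUAACG"   # made-up sequence
    for piece in chop_into_sirnas(example):
        print(piece, len(piece))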

The group tested their approach by creating a handful of siRNA molecules to genetically disable three known genes. In each case, their technique generated siRNA that effectively blocked the gene in question.

Wehrman said this technique of creating siRNA molecule libraries could be widely used to find genes that, when disabled, cause cells to become cancerous or alter how the cells respond to different drugs. These genes could then become potential targets for drugs to treat disease.

A paper in the same issue of Nature Genetics described a similar way of creating siRNA libraries. "Having two unrelated groups working on the same problem shows there has been a real need for the technology," Blau said. The Stanford group has filed a patent for its technique.

Here is yet another reason why the rate of advance in biological research is accelerating. Better tools and techniques speed the rate at which experiments can be done and increase the amount of information that can be collected.

The abstract for the Stanford team's work is here. The other team the press release mentions is a Japanese team from the University of Tokyo. You can read their abstract here.

On a related note also read my recent post on the results of another team's effort to develop a technique to interfere with the activity of many genes at once using RNA interference: Massively Parallel Gene Activity Screening Technique Developed

Update: Another report, from MIT Whitehead scientist David Bartel and MIT assistant professor of biology Chris Burge, covers computational methods for finding a type of RNA called microRNA which regulates gene expression.

CAMBRIDGE, Mass. (Jan. 28, 2004) – Research into the mechanics of microRNAs, tiny molecules that can selectively silence genes, has revealed a new mode of gene regulation that scientists believe has a broad impact on both plant and animal cells. Fascinated by the way microRNAs interfere with the chemical translation of DNA into protein – effectively silencing a targeted gene – scientists are exploring the role that these miniature marvels play in normal cell development and how they might be used to treat disease.

A critical component of understanding how microRNAs work in humans has been identifying which genes’ microRNAs silence and what processes they control. In a recent study, scientists identified more than 400 human genes likely targeted by microRNAs, taking an important step toward defining the relationship between microRNAs and the genes they target, including those linked to disease and other vital life functions.

...

In 2003, Bartel and Chris Burge, an assistant professor of biology at MIT, developed a computational method able to detect the microRNA genes in different animals. Using this method, they estimated that microRNAs constitute nearly 1 percent of genes in the human genome, making microRNA genes one of the more abundant types of regulatory molecules.

Bartel and Burge then set out to apply a similar approach to defining the relationship between microRNAs and the genes they target. Last month in the journal Cell, their labs reported that they have created a new computational method, called TargetScan, which does just that.

For each microRNA, TargetScan searches a database of messenger RNAs (mRNAs) – chemical messages that transcribe DNA into protein – for regions that pair to portions of the microRNA, and assigns a score to the overall degree of pairing that could occur between the microRNA and each mRNA. Those mRNAs that have high scores conserved in three or more organisms are predicted as targets of the microRNA.
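
As a greatly simplified illustration of the pairing idea, here is a toy search for mRNAs that contain a match to the reverse complement of a microRNA's "seed" region. This is nothing like the real TargetScan scoring, which also weighs the overall degree of pairing and requires conservation across species; the sequences below are examples only.

    # Toy microRNA target search: find mRNAs containing a perfect match to the
    # reverse complement of the microRNA "seed" (taken here as bases 2 through 8).
    # Real target prediction also scores the degree of pairing and requires
    # conservation across species; this sketch does only the simplest seed match.
    COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def reverse_complement(rna):
        return "".join(COMPLEMENT[base] for base in reversed(rna))

    def seed_matches(microrna, mrnas, seed_start=1, seed_len=7):
        """Return names of mRNAs that contain a site matching the microRNA seed."""
        seed = microrna[seed_start:seed_start + seed_len]   # bases 2-8
        site = reverse_complement(seed)                     # what the mRNA must contain
        return [name for name, seq in mrnas.items() if site in seq]

    mirna = "UGAGGUAGUAGGUUGUAUAGUU"          # example microRNA sequence
    mrna_db = {                               # made-up mRNA fragments
        "geneA": "AAACUACCUCAAAGGGUUU",       # contains the complementary site
        "geneB": "GGGCCCAAAUUUGGGCCCA",       # does not
    }
    print(seed_matches(mirna, mrna_db))       # expect ['geneA']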

Using this method, the team identified more than 400 genes in the human, mouse and rat genomes likely to be regulated by microRNAs. In addition, TargetScan predicted an additional 100 microRNA targets that are conserved in humans, mice, rats and the pufferfish.

According to Burge, 70 percent of targets predicted by TargetScan are likely to be authentic microRNA targets and the experimental data in the paper supports that a majority of their predictions are correct.

The take-home lesson here is that advances in the development of computer algorithms and the development of better tests and instrumentation are all accelerating the rate at which scientists can figure out systems of gene expression and genetic regulation in cells.

By Randall Parker    2004 February 10 11:02 AM   Entry Permalink | Comments (2)
2004 February 05 Thursday
Massively Parallel Gene Activity Screening Technique Developed

Researchers at Howard Hughes Medical Institute (HHMI), Harvard Medical School, the University of Heidelberg and the Max Planck Institute for Molecular Genetics in Germany have demonstrated in the fruit fly Drosophila a general technique usable in any organism to simultaneously assay thousands of genes to determine whether each gene is involved in a particular aspect of cell function.

“A major challenge now that many genome sequences have been determined, is to extract meaningful functional information from those projects,” said HHMI researcher Norbert Perrimon, who directed the study. “While there are a number of analytical approaches that can measure the level of gene expression or the interaction between proteins, ours is really the first high-throughput, full-genome screening method that allows a systematic interrogation of the function of every gene.”

The technique uses double-stranded RNA made to match every known gene of interest in the target organism. The double-stranded RNA triggers a phenomenon called RNA interference (RNAi) which blocks the action of the corresponding RNA strand made from each gene. Normally cellular machinery reads a gene in the DNA and creates messenger RNA (mRNA) with a matching sequence, and that mRNA is then read to make proteins. But dsRNA prevents that step and therefore blocks the creation of the protein. This effectively blocks the gene from having any effect, and the researchers' automated assay system then watches what the effect is on cell growth or on whatever other aspect of cellular activity the system is set up to measure.

The screening technique developed by Perrimon and his colleagues builds on methods developed in one of the hottest areas of biology, RNA interference (RNAi) research. In RNAi, double-stranded RNA (dsRNA) that matches the messenger RNA produced by a given gene degrades that messenger RNA — in effect wiping out the function of that gene in a cell. RNAi is widely used as a research tool to selectively erase the cellular contributions of individual genes to study their function.

In their mass screening technique, Perrimon and his colleagues first created a library of 21,000 dsRNA that corresponded to each of the more than 16,000 genes in the Drosophila genome. They then applied each of these dsRNA molecules to cultures of Drosophila cells and assayed how knocking down the function of a targeted gene affected cell numbers in the cultures. This basic measure, said Perrimon, revealed genes that are not only involved in general cell growth, but also in the cell cycle, cell survival and other such functions.
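
To give a feel for what the downstream analysis of such a screen looks like, here is a simplified hit-calling sketch of my own, not the Perrimon lab's actual pipeline. The idea is just to flag the dsRNAs whose wells end up with far fewer cells than is typical for the plate; the counts below are invented.

    # Simplified hit-calling for a cell-number RNAi screen: flag dsRNAs whose
    # wells deviate strongly from the plate-wide typical count. Counts are invented.
    import statistics

    cell_counts = {            # dsRNA id -> cells counted in its well
        "dsRNA_0001": 9800,
        "dsRNA_0002": 10250,
        "dsRNA_0003": 2100,    # big drop: candidate growth/survival gene
        "dsRNA_0004": 9600,
        "dsRNA_0005": 10100,
    }

    counts = list(cell_counts.values())
    median = statistics.median(counts)
    spread = statistics.stdev(counts)

    def hits(threshold=2.0):
        """dsRNAs whose cell count falls more than `threshold` deviations below the median."""
        return [name for name, n in cell_counts.items()
                if (median - n) / spread > threshold]

    print(hits())   # expect ['dsRNA_0003']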

The researchers then selected 438 genes for further characterization. The degradation of these genes produced profound effects on cell number. “Out of this subset, we found many that produced proteins involved in general metabolic processes such as the ribosomes that are components of the protein synthesis machinery,” said Perrimon. “But we also found genes that are more specific to cell survival.”

According to Perrimon, only 20 percent of the genes that were identified had corresponding mutations — an important characteristic for studying gene function. “The classic approach to studying gene function is to identify mutations in genes and select those that produce interesting phenotypes that yield insight into function,” said Perrimon. “But this approach has never really given us access to the full repertoire of genes. With this high-throughput technology, however, we can study the function of a complete set of genes. We can systematically identify all the genes involving one process.”

The technique can be used to screen for genes involved in intercellular communication, cancer cell proliferation, and other cellular activity. Combined with drug screening the technique can accelerate the search for drugs that operate on particular cellular pathways and processes.

The RNAi assay will contribute to the screening of new drugs, he said. “One exciting aspect of this approach is that we can combine our assay with screening of potential therapeutic compounds,” he said. “One of the big problems in the pharmaceutical industry is that researchers may discover pharmacologically active compounds but have no idea what their targets are in the cell. However, it would be possible to perform coordinated screens — one for compounds that interfere with a target pathway and an RNA interference screen for genes that act in that pathway. This correlation would allow you to match the compounds with the proteins they affect in a much more useful way.”

One can see by reading between the lines here how this technique has to be built on top of a lot of other existing tools that automate the creation of needed components. There has to be a fairly automated existing technique to generate all the different kinds of dsRNA strands used in this technique. Also, the technique must rely on automated tools for feeding cells, measuring cell growth, and doing the other steps in the process.

Results of studies of new ways to treat diseases and discoveries of how genes and cells work get a lot of press attention. But the ability to automate, and therefore accelerate, massively parallel screening and manipulation of genes, proteins, and other parts of cells is what makes possible the faster rate of discovery of disease causes and treatments. Cells are so complex, with so many pieces, subsystems, and types of interactions, that only with the development of massively parallel techniques can we hope to fully figure out how cells work and how to cure most diseases in the next few decades.

By Randall Parker    2004 February 05 12:13 PM   Entry Permalink | Comments (4)
2003 December 08 Monday
Optical Force Clamps Allow Observation Of Single RNA Polymerase Enzyme

Steven M. Block, a professor of biological sciences and of applied physics at Stanford University, and his team have developed two-dimensional optical force clamps that can monitor the action of a single RNA polymerase (RNAP) enzyme.

In a new study in the journal Nature, Block and his colleagues present strong evidence to support this proofreading hypothesis. Their results -- based on actual observations of individual molecules of RNAP -- are posted on Nature's website: http://www.nature.com. In another set of experiments published in the Nov. 14 issue of Cell magazine, the researchers discovered that RNAP makes thousands of brief pauses as it pries open and copies the DNA double helix.

"Together these two papers push the study of single proteins to new limits," Block said. "We've been able to achieve a resolution of three angstroms -- the width of three hydrogen atoms -- in our measurements of the progress of this enzyme along DNA. In so doing, we've been able to visualize a backtracking motion of just five bases that accompanies RNAP error-correction or proofreading."

Both studies were conducted using two-dimensional optical force clamps -- unique instruments designed and built by the Block lab. Located in soundproofed and temperature-controlled rooms in the basement of Stanford's Herrin labs, these devices allow researchers to trap a single molecule of RNAP in a beam of infrared light, and then watch in real time as it moves along a single molecule of DNA.

"We've been able to reduce drift and noise in our instruments to such an extent that we can see the tiniest motions of these molecules, through distances that are less than their own diameters," Block explained. "Studying one macromolecule at a time, you learn so much more about its properties, but these kinds of experiments were just pipedreams 15 years ago."

This is an example of why the rate of advance in biological science is not constant. The development of instrumentation that can study the components of biological systems down at the scale at which they operate will allow these systems to be figured out orders of magnitude more quickly. The biggest reason we still know only a small fraction of what there is to understand about cells and diseases is that we can't watch what happens down at the level where events actually take place. Continued advances in the ability to build smaller devices and smaller sensors will make observable that which it has previously never been possible to observe.

By Randall Parker    2003 December 08 07:06 PM   Entry Permalink | Comments (2)
2003 November 19 Wednesday
On The Declining Costs Of DNA Sequencing

DNA sequencing costs have already dropped by orders of magnitude and promise to drop by many more orders of magnitude in the future. (The Scientist website, free registration required - and an excellent site worth the time to register)

Sequencing costs have dropped several orders of magnitude, from $10 per finished base in 1990 to today's cost, which Green estimates at about 5 or 6 cents per base for finished sequence and about 2 to 4 cents for draft sequence. For some comparisons, draft sequence is adequate. Last spring NHGRI projected future cost at about a cent per finished base by 2005.

Although the plummeting price of sequencing is welcome, it is due to incremental improvements on the basic technology. “What we're all praying for is one of those great breakthroughs—a new technology that will allow us to read single-molecule sequences, or whatever the trick is going to be that will give us several orders of magnitude increase in speed and reduced cost,” Robertson said.

The article quotes Eric Green, scientific director of the National Human Genome Research Institute (NHGRI), as saying that it costs about $50 million to $100 million to sequence each vertebrate species. This is an argument for spending a lot more money on basic research in areas such as microfluidics and nanopore technology that will lead to orders of magnitude cheaper DNA sequencing technologies.

The lengthy NHGRI vision statement includes a section on dreams of future basic biotechnology advances that would be of particular value. (also see same article here)

During the course of the NHGRI's planning discussions, other ideas were raised about analogous 'technological leaps' that seem so far off as to be almost fictional but which, if they could be achieved, would revolutionize biomedical research and clinical practice.

The following is not intended to be an exhaustive list, but to provoke creative dreaming:

  • the ability to determine a genotype at very low cost, allowing an association study in which 2,000 individuals could be screened with about 400,000 genetic markers for $10,000 or less;
  • the ability to sequence DNA at costs that are lower by four to five orders of magnitude than the current cost, allowing a human genome to be sequenced for $1,000 or less;
  • the ability to synthesize long DNA molecules at high accuracy for $0.01 per base, allowing the synthesis of gene-sized pieces of DNA of any sequence for between $10 and $10,000;
  • the ability to determine the methylation status of all the DNA in a single cell; and
  • the ability to monitor the state of all proteins in a single cell in a single experiment.

Methylation of DNA is used by cells to regulate gene expression. The DNA methylation pattern in a cell is part of the epigenetic state of the cell, which is basically the information state of the cell outside of the primary DNA sequence. The ability to determine the methylation state of all the DNA in a cell would be incredibly valuable for understanding cell differentiation, and that in turn would be incredibly valuable for the development of cell therapies and of the ability to grow replacement organs. Plus, the ability to read DNA methylation patterns in cancer cells would be very valuable for understanding the changes that cause cells to become cancerous. Methylation pattern changes may be essential for carcinogenesis in some or all cancer types. The development of the ability to reverse a methylation pattern change may eventually be useful as an anti-cancer treatment.

Speedier and cheaper bioassay technologies have a great many applications both in research and in clinical treatment. For instance, lab-on-a-chip technology promises to allow instant diagnosis of bacterial infections and more precise and lower cost choices of antibiotics to treat them.

ST's polymerase chain-reaction-on-chip system, announced at last year's Chips to Hits conference, is a good example of the type of heterogeneous integration required in this new field. The chip contains microfluidic channels and reaction chambers heated with electronic resistors. DNA samples are amplified on chip in the chambers and piped to a DNA sample array for optical analysis.

"We want to transfer the complexity of large-scale labs onto these chips and use volume manufacturing to reduce cost," LoPriore said. Once it is in volume production, the MobiDiag system will replace lab equipment costing more than $10,000 with a small-scale unit costing only a few thousand, he said.

Equally significant is the short response time of the diagnostic system. Today, a patient with an unspecified infection needs to take a broad-spectrum antibiotic until a diagnosis is obtained that allows a switch to a narrow-spectrum antibiotic. With the new system, the doctor would know immediately which pathogen to target. "This could save billions of dollars per year, just in the cost of antibiotics," LoPriore said.

A lot of press reports are dedicated to reporting various discoveries about such things as how cells work, whether some drug works, or what is the best diet. But what excites me the most are advances in instrumentation and assay technologies. With a much better set of tools all discoveries could be made much sooner and with less effort and all treatments could be developed much more rapidly.

By Randall Parker    2003 November 19 12:12 PM   Entry Permalink | Comments (3)
2003 October 08 Wednesday
Nutrients Change Embryonic DNA Methylation, Risk Of Disease

Fairly small changes in nutrient levels caused dramatic developmental changes in mice.

In a study of nutrition's effects on development, the scientists showed they could change the coat color of baby mice simply by feeding their mothers four common nutritional supplements before and during pregnancy and lactation. Moreover, these four supplements lowered the offspring's susceptibility to obesity, diabetes and cancer.

Results of the study are published in and featured on the cover of the Aug. 1, 2003, issue of Molecular and Cellular Biology.

"We have long known that maternal nutrition profoundly impacts disease susceptibility in their offspring, but we never understood the cause-and-effect link," said Randy Jirtle, Ph.D., professor of radiation oncology at Duke and senior investigator of the study. "For the first time ever, we have shown precisely how nutritional supplementation to the mother can permanently alter gene expression in her offspring without altering the genes themselves."

In the Duke experiments, pregnant mice that received dietary supplements with vitamin B12, folic acid, choline and betaine (from sugar beets) gave birth to babies predominantly with brown coats. In contrast, pregnant mice that did not receive the nutritional supplements gave birth predominantly to mice with yellow coats. The non-supplemented mothers were not deficient in these nutrients.

That choice of nutrients was not just a wild guess on the part of the researchers. Those nutrients are all known to serve as methyl donors in a number of metabolic pathways. Methylation happens to many other compounds besides DNA, and most of the pathways that use these nutrients as methyl donors serve other purposes, such as converting one kind of hormone into another (and in some cases methyl groups are removed rather than added).

What is interesting to note about folic acid's role is that the US government and other governments have authorized the addition of folic acid to grain foods in order to lower the risk of spina bifida. At the same time, many people are trying to get more folic acid in their diets to reduce the risk of Alzheimer's disease, heart disease (on the theory that folic acid drives pathways that lower blood homocysteine and thereby slow atherosclerotic build-up), and other diseases whose risks may be lower with higher folic acid intake. So if increased folic acid changes human development by increasing DNA methylation, we will eventually see unexpected differences in health between generations. Whether those changes will be for good or ill, or a mixture of both, remains to be seen; at this point we can only guess. If this worries you, keep in mind that industrialization has already caused large changes in the nutritional composition of our diets. Folic acid fortification of food is just one of many changes in our diets and environments that are probably altering human development in ways that have not yet been identified.

A study of the cellular differences between the groups of baby mice showed that the extra nutrients reduced the expression of a specific gene, called Agouti, to cause the coat color change. Yet the Agouti gene itself remained unchanged.

Just how the babies' coat colors changed without their Agouti gene being altered is the most exciting part of their research, said Jirtle. The mechanism that enabled this permanent color change – called "DNA methylation" -- could potentially affect dozens of other genes that make humans and animals susceptible to cancer, obesity, diabetes, and even autism, he said.

This ability to change the expression of genes without actually changing their primary sequence is very important. Some genetic engineering in the future aimed at changing offspring will likely be done by causing epigenetic changes rather than changes in the primary DNA sequence. What will be interesting to see is whether more precise methods of controlling methylation can be developed. The use of diet to increase the amount of methylating nutrients that a fetus is exposed to is an intervention that has rather broad effects. Some of the resulting methylations may have desired effects but other methylations near other genes might cause effects that are harmful.

"Our study demonstrates how early environmental factors can alter gene expression without mutating the gene itself," said Rob Waterland, Ph.D., a research fellow in the Jirtle laboratory and lead author of the study. "The implications for humans are huge because methylation is a common event in the human genome, and it is clearly a malleable effect that is subject to subtle changes in utero."

During DNA methylation, a quartet of atoms -- called a methyl group – attaches to a gene at a specific point and alters its function. Methylation leaves the gene itself unchanged. Instead, the methyl group conveys a message to silence the gene or reduce its expression inside a given cell. Such an effect is referred to as "epigenetic" because it occurs over and above the gene sequence without altering any of the letters of the four-unit genetic code.

In the treated mice, one or several of the four nutrients caused the Agouti gene to become methylated, thereby reducing its expression – and potentially that of other genes, as well. Moreover, the methylation occurred early during gestation, as evidenced by its widespread manifestation throughout cells in the liver, brain, kidney and tail.

"Our data suggest these changes occur early in embryonic development, before one would even be aware of the pregnancy," said Jirtle. "Any environmental condition that impacts these windows in early development can result in developmental changes that are life-long, some of them beneficial and others detrimental."

If such epigenetic alterations occur in the developing sperm or eggs, they could even be passed on to the next generation, potentially becoming a permanent change in the family line, added Jirtle. In fact, data gathered by Swedish researcher Gunnar Kaati and colleagues indicates just such a multi-generational effect. In that study of nutrition in the late 1800s, boys who reached adolescence (when sperm are reaching maturity) during years of bountiful crop yield produced a lineage of grandchildren with a significantly higher rate of diabetes. No cause-and-effect link was established, but Jirtle suspects epigenetic alterations could underlie this observation.

What is frustrating about a report like this is that it is a reminder that environmental factors we can easily manipulate (e.g. the amount of folic acid in the diet at each stage of development) are probably creating a large variety of effects which we simply don't know enough about to manipulate intelligently. Some of us might have gotten lucky because our mothers happened to eat more folic acid rich lentils and greens at a stage of development when that helped us in some way, and perhaps ate fewer of them at another stage when more methylation activity would have been harmful. But others happened to get dosed with methylating nutrients at the wrong stages of development and therefore got stuck with a worse result in terms of predisposition for obesity, diabetes, cancer, hair texture, or who knows what else.

"We used a model system to test the hypothesis that early nutrition can affect phenotype through methylation changes," said Jirtle. "Our data confirmed the hypothesis and demonstrated that seemingly innocuous nutrients could have unintended effects, either negative or positive, on our genetic expression."

For example, methylation that occurs near or within a tumor suppressor gene can silence its anti-cancer activity, said Jirtle. Similarly, methylation may have silenced genes other than Agouti in the present study – genes that weren't analyzed for potential methylation. And, the scientists do not know which of the four nutrients alone or in combination caused methylation of the Agouti gene.

Herein lies the uncertainty of nutrition's epigenetic effects on cells, said Jirtle. Folic acid is a staple of prenatal vitamins, used to prevent neural tube defects like spina bifida. Yet excess folic acid could methylate a gene and silence its expression in a detrimental manner, as well. The data simply don't exist to show each nutrient's cellular effects.

Moreover, methylating a single gene can have multiple effects. For example, the Agouti gene regulates more than just coat color. Mice that over-express the Agouti protein tend to be obese and susceptible to diabetes because the protein also binds with a receptor in the hypothalamus and interferes with the signal to stop eating. Methylating the Agouti gene in mice, therefore, also reduces their susceptibility to obesity, diabetes and cancer.

Not only can increased methylation of a single gene have multiple effects, but increased methylation caused by nutritional supplementation would likely increase methylation of other genes and thereby have still more effects. If that is not complicated enough already, note that in this experiment the nutrients caused mice that would otherwise be obese to be skinny instead. Not all mice are genetically predisposed toward obesity, and supplementation with these same nutrients may have different effects in other strains. For instance, a gene that is barely expressed in this mouse strain might be expressed more strongly in another strain, and folic acid might down-regulate it there while having no noticeable effect here.

The Duke researchers say much more work of this sort will be forthcoming from many labs.

"One of the things we need to do now is refine this model and see how generalizable it is," Waterland said. He noted that he and Jirtle still don't know which one or combination of the nutritional supplements created the changes in the experimental mice. They also want to know if other genes might be involved, and if the supplements caused any negative effects.

"You're going to see more of this kind of work," Jirtle said, "not just from our group, but from all over."

Variations in nutrient levels are just one cause of changes in methylation patterns that can produce enduring effects. Environmental stimuli can cause changes in methylation patterns that cause changes in personality and behavior.

Methylation that occurs after birth may also shape such behavioral traits as fearfulness and confidence, said Dr. Michael Meaney, a professor of medicine and the director of the program for the study of behavior, genes and environment at McGill University in Montreal.

For reasons that are not well understood, methylation patterns are absent from very specific regions of the rat genome before birth. Twelve hours after rats are born, a new methylation pattern is formed. The mother rat then starts licking her pups. The first week is a critical period, Dr. Meaney said. Pups that are licked show decreased methylation patterns in an area of the brain that helps them handle stress. Faced with challenges later in life, they tend to be more confident and less fearful.

For more on Dr. Meaney's work see here and here.

While back in the 1950s Dr. Spock confidently held forth on the best child-raising practices, he really didn't know what he was talking about. A half century later there are still lots of ways in which child-rearing practices might be having unrecognized subtle effects upon human offspring. For instance, there might be some way of handling babies in the first few months of life which, analogous to the licking behavior of rat mothers, will increase or decrease the risk of anxiety or depression because the practice changes methylation patterns in the brain. As of yet we just don't know what these ways are.

Fortunately, just as there was a large effort to sequence the human genome and there is now a large effort to map all of its genetic variations, there is also a project to map all the places where methylation happens on the human genome and what the effects of methylation are at each of those sites.

The Human Epigenome Project follows the completion of the Human Genome Project and aims to map the way methyl groups are added to DNA across the entire human genome. These "epigenetic" changes are believed to turn genes on and off. Scientists at the Wellcome Trust Sanger Institute in Cambridge, UK, and Epigenomics, a Berlin-based company, are leading the project.
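To picture the kind of data such a mapping project would produce, here is a minimal sketch in Python. The genomic positions, tissues, and methylation fractions below are invented purely for illustration and are not the project's actual data or formats.

```python
# Minimal sketch of the kind of data an epigenome map would hold.
# All positions and values below are invented for illustration only.
methylation_map = {
    # (chromosome, CpG site position): {tissue: fraction of cells methylated}
    ("chr11", 2021055): {"liver": 0.92, "brain": 0.15, "kidney": 0.88},
    ("chr15", 7483321): {"liver": 0.05, "brain": 0.78, "kidney": 0.10},
}

def silenced_in(tissue, threshold=0.8):
    """Return sites that look heavily methylated (likely silenced) in a tissue."""
    return [site for site, levels in methylation_map.items()
            if levels.get(tissue, 0.0) >= threshold]

print(silenced_in("liver"))  # -> [('chr11', 2021055)]
```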

Update: If, as I expect, many nutritional and other factors are discovered that influence human development in enduring ways, the result will be a wide embrace of much closer management of maternal diet and environment and of the environment of newborns. The speculative advice of the pop psych baby books of the past and present will be replaced with much more precise and accurate advice derived from a scientifically rigorous model of human development. Genetic profiles of mothers and embryos will be used to develop highly customized instructions for diet and environment in order to increase the odds that a parentally chosen epigenetic outcome is achieved for each baby. Greater knowledge of human development may therefore result in a more regimented and demanding approach to having and caring for babies. While this may make having a baby an even greater ordeal than it already is, it should be expected that pharmaceuticals will eventually be developed that can ensure a particular epigenetic outcome with less work by mothers and fathers to manage the pre- and post-natal environment.

By Randall Parker    2003 October 08 01:16 AM   Entry Permalink | Comments (4)
2003 September 26 Friday
First Draft Sequence Of Dog Genome Is Published

A first draft sequence of a dog genome has been published.

Dog Genome Published by Researchers at TIGR, TCAG

New technique, partial shotgun-genome sequencing at 1.5X coverage (6.22 million reads) of genome, provides a useful, cost-effective way to increase number of large genomes analyzed

Analysis reveals that 650 million base pairs of DNA are shared between dog and humans including fragments of putative orthologs for 18,473 of 24,567 annotated human genes; Data provide necessary tools for identifying many human and dog disease genes

Since not all of the dog genome has been sequenced this surely represents a minimum estimate of the percentage of genes held in common. Also, not all human genes have been identified and some of the undiscovered ones might turn out to be shared with dogs as well.
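Using the figures quoted above, a quick calculation (a sketch only, based on the press release's own numbers) shows the ortholog fraction works out to roughly 75 percent, and that figure is a floor rather than a ceiling:

```python
# Fraction of annotated human genes with putative dog orthologs,
# using the counts quoted in the press release above.
orthologs_found = 18_473
annotated_human_genes = 24_567

fraction = orthologs_found / annotated_human_genes
print(f"{fraction:.1%} of annotated human genes have a putative dog ortholog")
# ~75.2% -- and since only ~80% of the dog genome was covered at 1.5X,
# the true shared fraction is presumably higher.
```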

September 25, 2003

Rockville, MD - Researchers at The Institute for Genomic Research (TIGR) and The Center for the Advancement of Genomics (TCAG) have sequenced and analyzed 1.5X coverage of the dog genome. The research, published in the September 26th edition of the journal Science, asserts that a new method of genomic sequencing, partial shotgun sequencing, is a cost-effective and efficient method to sequence and analyze many more large eukaryotic genomes now that there are a number of reference genomes available with which to compare. This important new study was funded by the J. Craig Venter Science Foundation.

The TIGR/TCAG team assembled 6.22 million sequences of dog DNA for nearly 80% coverage of the genome. Comparing the dog sequence data with current drafts of the human and mouse genome sequences showed that the dog lineage was the first to diverge from the common ancestor of the three species and that the human and dog are much more similar to each other at the genetic level than to the mouse. The group also identified 974,400 single nucleotide polymorphisms (SNPs) in the dog. These genetic variations are important in understanding the genes that contribute to diseases and traits among various breeds of dogs.

The identified SNPs are probably only a fraction of the total number of SNPs that dogs have. If humans are a reliable indicator then we can expect that eventually millions of dog SNPs will be found. So, yes, your dog really is unique.

The dog genome sequencing project, led by Ewen Kirkness, Ph.D., investigator at TIGR, revealed that more than 25% or 650 million base pairs of DNA overlap between human and dog. The sequence data was used to identify an equivalent dog gene for 75% of known human genes. In addition to this core set of common genes, the sequence data has revealed several hundred gene families that have expanded or contracted since divergence of the dog lineage from our common ancestor. For example, the dog genome is predicted to encode a much greater diversity of olfactory receptors than we find in human - which may contribute to their keen sense of smell.

"In little more than a decade genomics has advanced greatly and we now have approximately 150 completed genomes, including the human, mouse and fruit fly, in the public domain," said J. Craig Venter, Ph.D., president, TCAG. "With each sequenced genome the level of information gleaned through comparative genomics is invaluable to our understanding of human biology, evolution, and basic science research. Our new method is an efficient and effective way of sequencing that will allow more organisms to be analyzed while still providing significant information."

Most of those 150 completed genomes are for bacteria and other species that have smaller genomes. So that is not as impressive as it first sounds.

Conservation of the dog and human genome sequences is not restricted to genes, but includes an equally large fraction of the genomes for which functions are not yet known. "Understanding why these sequences are so highly conserved in different species after millions of years of divergent evolution is now one of the most important challenges for mammalian biologists," says Kirkness.

Comparing genomes between species is a great way to find important functional regions. A region that remains conserved across species after millions of years of divergent evolution is unlikely to be unused junk.

The first rough draft sequence of the dog genome was done using DNA from Craig Venter's standard poodle Shadow. It shows that even though our common ancestor with dogs is more distant we are genetically more similar to dogs than to mice.

The study confirms that, while dogs and wolves diverged from the common ancestor of all mammals long before early humans and mice did, dogs are much more closely related to humans than mice are.

"The wolf line diverged a little earlier, but the mouse is evolving faster," Venter said.

One likely reason for the more rapid divergence is that mice have shorter lifespans and shorter reproductive cycles. So mice have gone thru more generations since they split off than have dogs or humans. Another reason could be that their ecological niches exerted more selective pressures on them.
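A rough back-of-the-envelope calculation makes the generation-count point concrete. The divergence time and generation times below are loose assumptions for illustration only, not measured values:

```python
# Rough illustration of why mice diverge faster: more generations per unit time.
# All numbers are rough assumptions for illustration, not measured values.
years_since_split = 90e6        # assumed time since the common ancestor, in years

generation_time_years = {
    "mouse": 0.25,   # a few months per generation (assumption)
    "dog":   3.0,    # a few years per generation (assumption)
    "human": 22.0,   # roughly two decades per generation (assumption)
}

for species, gen_time in generation_time_years.items():
    generations = years_since_split / gen_time
    print(f"{species}: ~{generations:.1e} generations since the split")
# Mice rack up one to two orders of magnitude more generations than dogs or
# humans, giving mutation and selection many more rounds to act.
```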

Because so many people love their doggies a great deal of medical knowledge has been amassed about dogs.

"Dogs enjoy a medical surveillance and clinical literature second only to humans, succumbing to 360 genetic diseases that have human counterparts," comment O'Brien and Murphy. "Dogs have been beneficial for standard pharmaceutical safety assessment and also for ground-breaking gene therapy successes."

Dogs get a lot of the same symptoms for many disorders. Though unfortunately they aren't as good at explaining how they feel.

This study demonstrates how much faster DNA sequencing technology has become.

It is a point echoed by Dr Stephen O'Brien from the US National Cancer Institute: "NHGRI recently estimated that in the next four years, US sequencing centres alone could produce 460 billion bases - the equivalent of 192 dog-sized genomes at [just under the Tigr/TCAG] coverage."

Since there are 4,600 mammalian species we are still a long way from having a complete sequencing library of all mammals. Plus, the genetic variations are as important as the basic sequences, and we still do not know all the genetic variations for even a single species, let alone what they all mean.
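To see just how far away, here is a back-of-the-envelope sketch using the figures from the quote above (the per-genome base count is simply implied by dividing 460 billion bases by 192 genomes):

```python
# Back-of-the-envelope: how far 460 billion bases over four years goes
# toward low-coverage drafts of all mammals. Numbers come from the quote
# above; the per-genome figure is implied by it (460e9 / 192 genomes).
total_bases_4yr = 460e9
genomes_per_4yr = 192
mammal_species = 4_600

bases_per_draft = total_bases_4yr / genomes_per_4yr   # ~2.4 billion
years_needed = mammal_species / genomes_per_4yr * 4
print(f"~{bases_per_draft:.2e} bases per low-coverage draft")
print(f"~{years_needed:.0f} years at that rate to cover {mammal_species} species")
# Roughly a century at 2003 capacity -- hence the need for further cost drops.
```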

In about 10 or 20 years the cost of DNA sequencing will fall so far and the speed of DNA sequencing machines will increase so much that the sequencing of a genome will be doable in less than a day. At that point what seems like an amazing accomplishment today will seem pretty commonplace.

By Randall Parker    2003 September 26 02:20 PM   Entry Permalink | Comments (1)
2003 September 23 Tuesday
Nanolasers To Speed Up Search For Mitochondria Protecting Drugs

Paul Gourley and colleagues at the US Department of Energy's Sandia National Laboratories have developed a nanolaser technique that can be used to accelerate screening for drugs that protect mitochondria and neurons.

“Our goal is make the brain less susceptible to diseases like Lou Gehrig’s,” says Sandia researcher Paul Gourley, a physicist who grew up in a family of doctors.

Preliminary work thus far has shown the biolaser (which recently won first place in the DOE’s annual Basic Energy Sciences’ competition for using light to quantify characteristics of anthrax spores) is able to measure mitochondrial size through unexpected bursts of light given up by each mitochondrion. The laser, using the same means, can also measure the swelling effect caused by the addition of calcium ions — the reaction thought to be the agent of death for both mitochondria and their host cells. The researchers expect to introduce neuroprotectant drugs into experiments this month, and be able to test hundreds of possible protective substances daily instead of two or three formerly possible.

“If we can use this light probe to understand how mitochondria in nerve cells respond to various stimuli, we may be able to understand how all cells make life or death decisions — a step on the road, perhaps, to longer lives,” says Gourley.

To do that, he says, scientists must understand how a cell self-destructs, which means understanding how mitochondria send out signals that kill cells as well as energize them.

If compounds can be found that protect mitochondria those compounds may protect neurons from the effects of many kinds of mitochondrial dysfunction and perhaps slow the accumulation of damage in mitochondria that occurs with age.

“Cyclosporin protects mitochondria better than anything else known, but it is not a perfect drug,” says Keep. “It has side effects, like immunosuppression. Unrelated drugs may have a similar protective effect on mitochondria. Gourley’s device will lead to a rapid screening device for hundreds of cyclosporin derivatives or even of chemical compounds never tested before.”

While testing with conventional methods would take many people and many batches of mitochondria, says Keep, the nanolaser requires only tiny amounts of mitochondria and drug to test.

“With one tube on the left flowing in a number of mitochondria per second, and microliters of different drugs in different packets flowing in to join them on the right, we could rapidly run through hundreds of different compounds. Each mitochondrion scanned through the analyzer would show if there were a change in its lasing characteristics. That would determine the effectiveness of chemical compounds and identify new and even better neuroprotectants.”

Of course this is just one step in a drug development process. All the compounds to be screened would still have to be synthesized. Also, any compounds which look good using this assay method would still need to be tested in whole cells, lab animals, and eventually humans. But this report is a sign of the times. Techniques that use micro-scale and nano-scale components to speed various laboratory procedures by orders of magnitude represent the future of biological science and biotechnological development.

By Randall Parker    2003 September 23 02:57 PM   Entry Permalink | Comments (0)
2003 September 21 Sunday
Paul Allen Brain Atlas To Map All Brain Genes In Mouse

Paul Allen is ponying up $100 million to map all the genes that are activated in mouse brain cells within 3 to 5 years.

Microsoft co-founder Paul Allen has donated $100 million to launch a private research organization in Seattle devoted to deciphering the links between our genes and our brain.

Dr. Thomas Insel, director of the National Institute of Mental Health, says that approximately 6,000 genes are thought to be expressed only in brain with many more that are expressed in the brain and also in other parts of the body as well.

Insel and his team at the NIH have been working on a gene map of the mouse brain since 1999 and hope to soon publish the location of several hundred genes. Allen's project, Insel noted, is on a much larger scale aiming to identify 10,000 genes a year.

Note that Allen is therefore accelerating work on identifying genes active in the brain by at least an order of magnitude over the pace of the NIMH Brain Molecular Anatomy Project.

While the anatomy project can analyze 600 to 800 genes a year, Dr. Boguski's team is shooting for about 10,000 genes a year.

This work will make it easier to find the equivalent genes that control human brain development and on-going operation.

Because mice and humans have 99 percent of the same genes, scientists hope the map of the mouse brain will provide a template for comparison with the human brain.

Insel says these genes would all have been identified over the next couple of decades but that Allen's money is going to compress the amount of time to discover them down to only a few years. Well then hurray for Paul Allen!

Once all the genes which are expressed in the brain are identified the bigger job of figuring out how each affects the brain will still remain to be done.

"It's like opening a box filled with parts to build two tables and there are 30,000 parts and no instructions. There is no map," says Mark Boguski, a longtime genomics researcher who is the senior director of the Allen Brain Atlas team. "We have to figure out which are for the brain, and then we have to figure out how they are put together or what they do."

Still, just knowing which genes are expressed in the brain will allow the next steps to be done much more quickly. Being able to compare people who carry different variations of brain genes will lead to much more rapid identification of genetic variations that affect intelligence, personality type, tendencies toward specific forms of behavior, and susceptibility to a large assortment of neurological and mental disorders such as Alzheimer's Disease, Parkinson's Disease, depression, and anxiety.

This first effort by the Allen Institute for Brain Science is known as the Allen Brain Atlas project.

The first endeavor of the Allen Institute for Brain Science is the Allen Brain Atlas project, the planning for which has been underway for two years. For decades, scientists have been eager for an intense, focused effort to develop a compendium of information that could serve as a foundation for general brain research. Instead of researching genes one at a time, the Allen Brain Atlas project will give scientists an unprecedented view of that portion of the genome that is active in the brain.

From the Allen Brain Atlas web site:

Why build a brain atlas?
The human brain has been an object of mystery and wonder since antiquity. It defines who we are as a species and as individuals—our emotions, thoughts and desires—and controls many of the body’s essential, but unconscious functions, such as breathing and heart rate.

Our understanding of how the brain is organized and how it works is still in the very early stages. Basic processes of memory and cognition remain a mystery. While it is estimated that the human brain contains a trillion different nerve cells or neurons, capable of making up to a thousand different connections each, scientists don’t know how many subtypes of neurons exist, how they are linked up in circuits, or how they work.

Despite more than a century of research, classical neuroanatomists still cannot agree on the boundaries of different brain regions or even their names. In some regions of the brain, there is such fundamental disagreement about mapping regional boundaries that it is almost like comparing maps of Western Europe from 100 years ago to today. An accurate, definitive map is of utmost importance if we want to develop new therapies for neurological disorders such as Alzheimer’s, schizophrenia, depression, and addiction, or simply to understand the essence of what makes us human.

This project demonstrates the value of "Big Science" funding in molecular biology and genetics to achieve major goals. Note that DNA double helix co-discoverer James Watson is serving as one of Allen's advisors on this Brain Atlas project. Watson is advocating a Manhattan Project style effort to map all the genes expressed in each type of cancer in order to rapidly develop far more effective treatments for cancer. Watson thinks his proposed project could be done for a few hundred million dollars. We are at the point where the instrumentation and techniques for DNA sequencing and for gene expression measurement with gene arrays have gotten fast enough that such ambitious projects can be completed within a few years and provide substantial benefits fairly rapidly.

By Randall Parker    2003 September 21 04:40 PM   Entry Permalink | Comments (8)
2003 August 12 Tuesday
All Lipid Metabolites To Be Characterized In Macrophages

A consortium of researchers is going to identify and discover interactions of all lipid metabolites in macrophages.

The five-year, $35 million grant from the National Institute of General Medical Sciences (NIGMS) will support more than 30 researchers at 18 universities, medical research institutes, and companies across the United States, who will work together in a detailed analysis of the structure and function of lipids. The principal investigator of this collaboration is Edward Dennis, Ph.D, professor of chemistry and biochemistry in UCSD’s Division of Physical Sciences and UCSD’s School of Medicine.

Dennis notes that while sequencing the human genome was a scientific landmark, it is just the first step in understanding the diverse array of systems and processes within and among cells. Establishment of this consortium is a significant step in an emerging field called “metabolomics,” or the study of metabolites, chemical compounds that “turn on or off cellular responses to food, friend, or foe,” he explained.

Lipids are a water-insoluble subset of metabolites central to the regulation and control of normal cellular function, and to disease. Stored as an energy reserve for the cell, lipids are vital components of the cell membrane, and are involved in communication within and between cells. For example, one class of lipids, the sterols, includes estrogen and testosterone.

The initial phases of the project, known as Lipid Metabolites And Pathways Strategy (LIPID MAPS), will be aimed at characterizing all of the lipid metabolites in one type of cell. The term “Lipidomics” is used to describe the study of lipids and their complex changes and interactions within cells. Because this task is too extensive for a single laboratory to complete, researchers at participating centers will each focus on isolating and characterizing all of the lipids in a single class. This information will then be combined into a database (at http://www.lipidmaps.org) to identify networks of interactions amongst lipid metabolites and to make this information available to other researchers. Shankar Subramaniam, Ph.D., professor of chemistry and bioengineering at UCSD’s Jacobs School of Engineering and San Diego Supercomputer Center, will coordinate this aspect of the project.

The cell type selected for study is the macrophage, best known for its role in immune reactions, for example scavenging bacteria and other invaders in the body. Macrophage cells from mice will be used, rather than human cells, because there exists a “library” of mouse cells with specific genetic mutations. By studying cells missing certain genes, the research team will attempt to identify what genes code for those enzymes key in synthesis and processing of lipid metabolites. Christopher Glass, M.D., Ph.D., professor of cellular and molecular medicine at UCSD’s School of Medicine, will coordinate the macrophage biology and genomics aspects of the consortium.

What is interesting about this project is that it typifies a trend in the biological sciences toward more systematic and thorough collection of information on all the parts of each subsystem or category of cellular metabolism. Systematic efforts to collect data on cellular metabolites and components will provide the raw data needed to construct far more detailed models of cellular metabolism. Coded up as computer programs, these models will eventually be able to predict how each imaginable intervention in cellular metabolism will affect all the subsystems in a cell and in the larger organism that the cell is part of. Computer models built with sufficient detail will allow simulation runs to serve in place of laboratory experiments that currently require real cells and real organisms. The systematic collection of data on the subsystems and components of cells will therefore greatly accelerate the rate of advance in biological science and biological engineering.
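To make the idea of models "coded up as computer programs" concrete, here is a toy sketch of a two-step pathway simulated with simple mass-action kinetics. The rate constants and the "drug" scenario are invented for illustration; real whole-cell models would be vastly larger:

```python
# Toy illustration of a metabolic model "coded up as a computer program":
# a two-step pathway A -> B -> C with simple mass-action kinetics.
# Rate constants and the inhibition scenario are invented for illustration.
def simulate(k1=0.5, k2=0.3, dt=0.01, steps=2000):
    a, b, c = 1.0, 0.0, 0.0          # initial concentrations (arbitrary units)
    for _ in range(steps):
        v1 = k1 * a                  # flux A -> B
        v2 = k2 * b                  # flux B -> C
        a, b, c = a - v1 * dt, b + (v1 - v2) * dt, c + v2 * dt
    return a, b, c

baseline = simulate()
# "Intervention": a hypothetical drug that halves the second enzyme's activity.
inhibited = simulate(k2=0.15)
print("baseline A,B,C:", [round(x, 3) for x in baseline])
print("inhibited A,B,C:", [round(x, 3) for x in inhibited])
# The inhibited run piles up more of intermediate B than the baseline -- the
# kind of prediction a detailed whole-cell model could make before any
# wet-lab experiment.
```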

Update: In another sign of the times the LIPID MAPS project has been made possible by instrumentation advances.

The LIPID MAPS project has become possible thanks to the refinement of mass spectrometers, which determine the type and quantity of lipids in a mixture. Today, machines can identify hundreds of lipids in a sample simultaneously using electron spray ionization mass spectrometry. Particles are fired at lipid molecules to chisel off shards, the chemical structures of which are determined one by one.

The development of better instrumentation does more to accelerate the rate of advance in biology than any other factor.

By Randall Parker    2003 August 12 01:29 AM   Entry Permalink | Comments (1)
2003 July 13 Sunday
James D. Watson Calls For Manhattan Project To Cure Cancer In 10 Years

Nobel Prize-winning co-discoverer of the DNA double helix James D. Watson says a large effort should be made to systematically collect information on the genetic makeup of many cancers.

The world's scientific establishment is frustrating research into cancer, which could probably be cured in 10 years if fought through a central agency, according to one of the world's most eminent scientists.

...

Along with one of Australia's top expatriate scientists, Bruce Stillman, Dr Watson is pushing for an international effort to map the genetic makeup of all cancers. It would be similar to the sequencing of the human genome, a task completed this year, but would cost much less - up to $A300 million compared to the $A4.5 billion spent on the human genome.

As I understand it, Watson's argument is that rather than parcelling smaller amounts of money out to many different scientists to study whichever facet of cancer they find interesting, there should be a big systematic effort to examine a large number of cancer cell lines and to look at their gene expression across a large number of genes.

Watson is saying, in essence, that we now have the tools to collect the genetic information we need in order to discover the changes in genetic regulatory mechanisms that cause cancer. Given that it is possible to do this, he says we should spend the money on a big effort to collect that information.

Hey, suppose he is right. But suppose the tools are expensive to use. If discovering a cure for cancer were going to cost, say, $500 billion would you be for or against it? I do not think it would really cost that much. But even if it did I'd be for it. Put that number in perspective: the US economy produces about $10 trillion per year in goods and services. The health care share of that is around 14% (give or take a percentage point - didn't look up the latest figures), or about $1.4 trillion per year. So what is $500 billion in the bigger scheme of things?

The late Lewis Thomas, former director of Sloan-Kettering, observed in his book Lives Of A Cell: Notes Of A Biology Watcher that diseases are expensive to treat when we do not have effective treatments that get right at the causes. He cited, for example, tuberculosis sanitariums. People had to be kept in professionally staffed institutions for long periods of time and could not work or take care of family while sick. But along came drugs that cured TB and the people walked out in a few weeks. The cost savings were enormous. Similarly, the cost savings that will come from a cure for cancer will be enormous. Even if we spent hundreds of billions on Watson's Manhattan Project to cure cancer we'd gain it back many times over because effective treatments would be far cheaper than radiation therapy, chemotherapy, and the other treatments currently used that have horrible side-effects and which can not even cure most cancer patients.

The Director of the US National Cancer Institute says death from cancer can be stopped in 15 years.

But with the recently announced historic completion of the Human Genome Project, and other advances in molecular biology and proteomics, medical science is about to take its largest leap, probably since the discovery of antibiotics.

The results for the prevention, diagnosis and treatment of cancer are expected to be profound. "We are now in a position to rapidly and continuously accelerate the engine of discovery, so we can eliminate suffering and death from cancer by 2015," said Dr. Andrew von Eschenbach, Director of the National Cancer Institute. "We may not yet be in a position to eliminate cancer entirely," he continued, "but eliminating the burden of the disease by preemption of the process of cancer initiation and progression is a goal within our grasp."

Cancer can be thought of as an information problem. Each cancer has genes that have been deleted and other genes that have been upregulated or downregulated. Go thru a large number of cancer cells and collect the information about the state of a large number of genes and it may be possible to deduce exactly what genetic switching combinations can cause or stop a cell from being cancerous. From that information it may be possible to devise effective therapies using RNA interference and other gene therapies. Also, drugs could be developed to target any genetic switching mechanism which is identified to be important.
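Here is a toy sketch of that information-problem framing: represent each tumor as a set of gene states and count which alterations recur across tumors. The genes and states listed are illustrative examples, not data from any actual study:

```python
# Toy sketch of cancer as an information problem: each tumor is a vector of
# gene states, and recurrent alterations across many tumors hint at the
# switching combinations that matter. Genes and states are illustrative only.
from collections import Counter

# gene -> state per tumor: "del" deleted, "up" upregulated, "down" downregulated
tumors = [
    {"TP53": "del", "MYC": "up", "PTEN": "down"},
    {"TP53": "del", "MYC": "up"},
    {"MYC": "up", "PTEN": "down"},
    {"TP53": "del", "CDKN2A": "del"},
]

alteration_counts = Counter(
    (gene, state) for tumor in tumors for gene, state in tumor.items()
)
for (gene, state), n in alteration_counts.most_common():
    print(f"{gene} {state}: seen in {n} of {len(tumors)} tumors")
# Alterations that recur across most tumors are candidate drivers and
# candidate targets for RNA interference or other gene therapies.
```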

By Randall Parker    2003 July 13 10:20 AM   Entry Permalink | Comments (3)
2003 July 08 Tuesday
Progress Toward $1000 Per Person DNA Sequencing

The Scientist has a good review of all the approaches being pursued to dramatically lower the cost of complete genome sequencing. (free registration required)

Some companies estimate that within the next five years, technical advances could drop the cost of sequencing the human genome low enough to make the "thousand-dollar genome" a reality. Whether or not that happens, new sequencing approaches could in the short term facilitate large-scale decoding of smaller genomes. In the long term, low-cost, rapid human genome sequencing could become a routine, in-office diagnostic test--the first step on the road to truly personalized medicine.

Companies discussed in the article which are pursuing approaches to radically lower the cost of DNA sequencing include VisiGen Biotechnologies, 454 Life Sciences, Solexa, and US Genomics. A number of university research labs are also pursuing approaches that may radically lower the cost of DNA sequencing including that of Daniel Branton at Harvard (using nanopores), George Church at Harvard (particularly his work on polymerase colony or polony technology) and Watt W. Webb of Cornell.

The approach being pursued by the group of Cornell professor of applied and engineering physics Watt Webb is particularly interesting because the ability to optically watch the behavior of a single biomolecule at a time could be used for many research purposes.

His report on watching individual molecules at work, "Zero-Mode Waveguides for Single-Molecule Analysis at High Concentrations," appears in the Jan. 31 issue of the journal Science. The article, which is illustrated on the cover of Science, also is authored by a multidisciplinary group of Cornell researchers: Michael Levene, an optics specialist and postdoctoral associate in applied and engineering physics; Jonas Korlach, a biologist who is a graduate student in biochemistry, molecular and cell biology; former postdoctoral associate Stephen Turner, now president and chief scientific officer of Nanofluidics, a Cornell spin-off; Mathieu Foquet, a graduate student in applied and engineering physics; and Harold Craighead, professor of applied and engineering physics.

"This is an example of the possibilities provided by integrating nanostructures with biomolecules," said Craighead, the C.W. Lake Jr. Professor of Productivity, who also is co-director of the National Science Foundation (NSF)-funded Nanobiotechnology Center at Cornell. "It represents a major step in the ability to isolate a single active biomolecule for study. This can be extended to other biological systems."

More on Webb's approach.

A new technique for the determination of the sequence of a single nucleic acid molecule is being developed in our laboratory. In its principle, the activity of a nucleic acid polymerizing enzyme on the template nucleic acid molecule to be sequenced is followed in real time. The sequence is deduced by identifying which base is being incorporated into the growing complementary strand of the target nucleic acid by the catalytic activity of the polymerase at each step in the sequence of base additions. Recognition of the time sequence of base additions is achieved by detecting fluorescence from appropriately labeled nucleotide analogs as they are incorporated into the growing nucleic acid strand.

Because efficient DNA synthesis occurs only at substrate concentrations much higher than the pico- or nanomolar regime typically required for single molecule analysis, zero-mode waveguide nanostructures have been developed as a way to overcome this limitation. They effectively reduce the observation volume to tens of zeptoliters, thereby enabling an inversely proportional increase in the upper limit of fluorophore concentration amenable to single molecule detection. Zero-mode waveguides thus extend the range of biochemical reactions that can be studied on a single molecule level into the micromolar range.
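A quick worked example shows why such tiny observation volumes matter. The 20 zeptoliter figure is just a representative value within the "tens of zeptoliters" range mentioned above:

```python
# Why tens-of-zeptoliter observation volumes matter: at micromolar
# concentration, such a volume holds far less than one fluorophore on
# average, so single incorporation events remain resolvable.
AVOGADRO = 6.022e23          # molecules per mole

def expected_molecules(conc_molar, volume_liters):
    return conc_molar * volume_liters * AVOGADRO

zeptoliter = 1e-21
for conc in (1e-9, 1e-6):                           # 1 nM vs 1 uM
    n = expected_molecules(conc, 20 * zeptoliter)   # ~20 zL observation volume
    print(f"{conc:.0e} M in 20 zL -> ~{n:.3g} molecules on average")
# 1 uM in 20 zL -> ~0.012 molecules: still effectively single-molecule
# territory, which is the point of the zero-mode waveguide approach.
```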

The cost of DNA sequencing looks set to drop dramatically. The big question is just how fast the costs will drop. DNA sequencing still costs several million dollars per person, but that is orders of magnitude cheaper than it was 10 years ago. The newer approaches that attempt to read single DNA molecules directly could be made very cheap, if only they can be made to work in the first place.

See also my previous posts on approaches to lower DNA sequence costs in the Biotech Advance Rates archive.

By Randall Parker    2003 July 08 11:08 AM   Entry Permalink | Comments (0)
2003 June 25 Wednesday
Leroy Hood: $1000 Personal DNA Sequencing In 10 Years

Speaking at the Biotechnology Industry Organization (BIO) Convention 2003 in Washington DC Leroy Hood, president of the Institute for Systems Biology in Seattle, predicts personal genome sequencing for under $1000 within 10 years.

Within about 10 years, advances in nanotechnology and other predictive models will allow the fast and cheap sequencing of individuals' genomes, Hood said, which in turn will lead to advances in predictive medicine. As scientists are able to look at 30,000 or more genes for each patient, doctors could use such genome sequences to predict what health problems the individual patient is likely to face, he said.

"I think we will have an instrumentation that could well bring sequencing of the human genome ... down to a 20-minute process and do it for under $1,000," Hood said. "This changes the way we think about predictive medicine."

Suppose the 10 year goal for reducing DNA sequencing costs is achieved. Will the ability to get one's own DNA sequenced be useful? Francis Collins, director of the National Human Genome Research Institute at the U.S. National Institutes of Health, is promoting the goal of identifying all the genetic risk factors for major diseases in 7 or 8 years.

Collins called on the biotechnology and medical communities to identify the risk factors for all major common diseases in the next seven to eight years.

Once we can cheaply sequence each person's DNA, genetic variations that increase the risk of side effects from specific drugs will be identified for a large number of drugs. Combine the ability to guide drug choices with the ability to more accurately predict the odds of developing various illnesses, and it will become possible to devise low-risk strategies for reducing the risk of heart disease, cancer, and other diseases.

Once DNA sequencing is cheap and the risk factors are all identified the big pay-off will come from the development of treatments that cancel out and eliminate the risks that particular genetic variations cause. Drugs, gene therapy, and other approaches will be developed to reduce or eliminate specific risk factors. A great deal gets written about the potential ways that individual genetic privacy needs to be protected in order to prevent discrimination against people who have genetic risk factors for diseases. But individual genetic risk profiles offer far more potential for benefit than harm.

Look at it this way: if one person is a risk for a variety of illnesses due to genetic risk factors and another person has little in the way of genetic risks then who is in a better position to benefit from the knowledge of their genetic risk profile? The person who has the risks needs to know about them in order to act to somehow mitigate some of those risks. The person who doesn't have the genetic risks benefits far less from being told they do not have the risks because the knowledge really doesn't help them do anything to protect their health.

Someone who has a genetic endowment that'll keep them alive til age 95 even if they smoke and eat junk food is far less in need of biotechnological advances designed to deal with genetically-caused health risks. Someone who has a genetic variation that puts them at enormous risk of getting heart disease or cancer by age 55 is in desperate need of the treatments that will be designed to cancel out genetic risks. The good news is that leading figures in biotechnology and science are increasingly of the opinion that the day is not far off when we will each individually begin to benefit from detailed knowledge of our individual genetic make-ups.

By Randall Parker    2003 June 25 06:04 PM   Entry Permalink | Comments (4)
2003 June 14 Saturday
Industry Continues To Advance DNA Assaying Technologies

The Scientist has published their annual review of the burgeoning microfluidics industry which includes a description of the 454 Life Sciences DNA sequencing technology. (free registration required)

Multiplexed amplification and sequencing reactions take place in the 75-picoliter wells of the company's high-density (~300,000 well) PicoTiter™ plates. Sequencing is accomplished with synthesis reactions, which produce light that is captured by the instrument's detection system. The current configuration will support resequencing of many strains of viruses and bacteria, and ultimately the de novo sequencing of bacterial and viral genomes.

454 Life Sciences' goal is to cut the time needed to sequence an entire genome by orders of magnitude.

As a majority-owned subsidiary of CuraGen, 454 Life Sciences follows CuraGen's tradition of innovative, industrialized, high-throughput solutions to bio-product development bottlenecks. As CuraGen's focus is on genomics-based pharmaceutical development to address unmet medical needs, 454 Life Sciences' mission is to develop and commercialize instrument systems to conduct whole genome analysis in a massively parallel fashion. Together, engineers and scientists from both companies are actively working together to develop and refine technology that can analyze entire genomes in days, instead of years, thus addressing bottlenecks currently impeding product development across the life sciences industry. In essence, CuraGen receives the benefits from being the "first user" of 454 Life Sciences' instruments and technology.

The Scientist also has published an article on 4 highly parallel approaches to whole genome Single Nucleotide Polymorphism (SNP) testing. The approach followed by Affymetrix illustrates how rapidly the rate of SNP testing is accelerating.

In 1999, the Santa Clara, Calif.-based company released its GeneChip HuSNP™, capable of profiling 1,200 SNPs simultaneously. Now its GeneChip Mapping10K Array (in early access) genotypes 10,000 SNPs per assay. By the end of the year, Affymetrix expects to begin offering early access to next-generation products that can genotype 100,000 SNPs per assay across multiple arrays.
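A quick sketch using the figures quoted above shows that per-assay capacity is not just growing but accelerating:

```python
# Fold increases in SNPs-per-assay implied by the Affymetrix figures quoted
# above: 1,200 (1999), 10,000 (early 2003), 100,000 (promised later in 2003).
snps_per_assay = [("1999 HuSNP", 1_200),
                  ("2003 Mapping10K", 10_000),
                  ("late 2003 (announced)", 100_000)]

for (name0, n0), (name1, n1) in zip(snps_per_assay, snps_per_assay[1:]):
    print(f"{name0} -> {name1}: {n1 / n0:.1f}x more SNPs per assay")
# Roughly 8x over four years, then another 10x within a single year.
```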

Another company mentioned in the second article is Illumina, which claims its Sentrix™ 96 multi-array matrix enables parallel processing of up to 150,000 SNPs.

BeadArray fiber bundle arrays contain nearly 50,000 individual, light-conducting fiber strands which are chemically etched to create a microscopic well at the end of each strand. Each bead in the array contains multiple copies of covalently attached oligonucleotide probes, and up to 1,500 unique probe sequences are represented in each array, with approximately 30-fold redundancy of each bead type.

Note the use of bundles of large numbers of optical fibers. It illustrates how communications and electronics technologies are being used in biological instrumentation systems because they are of the right scale to do what is being attempted: make things smaller in order to make them more sensitive, faster, more parallel, and cheaper.

What is perhaps most encouraging about these reports is that the story of advances in DNA sequencing and SNP assaying is increasingly a story about industrial technology developers rather than academic researchers. All the assaying systems described in these articles are made by companies, not university research groups (though it would not be surprising if university research groups collaborate with some of them in their development). The development of much cheaper and faster DNA assaying tools is not a distant prospect waiting on unpredictable advances in basic science. Rather, it is happening now as dozens of companies refine existing products and roll out new products that offer dramatic improvements over previous generation products that are just a couple of years old.

By Randall Parker    2003 June 14 01:03 PM   Entry Permalink | Comments (0)
2003 June 12 Thursday
Microfluidic Pump May Work As Insulin Delivery System

C.J. Zhong of Binghamton University has developed a microfluidic pump with no moving parts.

An assistant professor of chemistry at Binghamton since 1998, Zhong refers to the invention as a "pumpless pump" because it lacks mechanical parts. The pumping device is the size of a computer chip and could be fabricated at a scale comparable to an adult's fingernail. The device comprises a detector, a column filled with moving liquid, and an injector. The pumping action is achieved when a wire sends an electrical voltage to two immiscible fluids in a tiny column, perhaps as small as the diameter of a hair. Applying opposite charges to each side of the column causes the fluids to oscillate, thereby simulating the action of a pump. In some ways, the tiny system works like a thermostat: it takes a small sample, analyzes it, and tells other components how to act in response.

Zhong's device has significant potential in the treatment of diabetes because it is small enough to be inserted into and remain in the body where it would conduct microfluidic analysis, constantly measuring the need for insulin and, then, delivering precise amounts of insulin at the appropriate times. Because the detector would remain constantly at work, the device could eliminate the need for regular blood tests. Moreover, because less time would have passed between infusions of insulin, it is likely that insulin levels could be better maintained, without soaring and surging as dramatically as they sometimes do with present day treatment strategies. While his device is not an "artificial pancreas," Zhong says that it could well prove to be an integral part of a system that could someday become just that.

Diabetics are not the only ones who will benefit from the tiny pumping device, developed by Zhong and his research team of undergraduate and graduate students and a post-doctoral researcher. Any small, closed environment could benefit from tiny equipment that requires little fuel and produces no waste, he said.

Zhong sees the use of microfluidics to automate and shrink down the size of science laboratory equipment as offering substantial advantages.

Making lab equipment smaller and more efficient is one of Zhong's chief research goals. It's a goal he sees as highly achievable.

"Look at the computer," he said. "Twenty years ago, it was huge. Now it's tiny." He eventually hopes to create what he calls a "lab on a chip," by shrinking down all of the equipment in a chemistry lab to the size of computer chips. Smaller equipment not only uses fewer resources, he said, but creates less waste.

While microfluidics will provide much better methods to do drug delivery this will not be the source of the greatest benefits from microfluidics. The absolutely revolutionary benefit from microfluidics will be that it will speed up the rate of advance of basic biological science. The biggest problem holding back medical advance is not a need for better ways to deliver drugs or to monitor a person's body for signs of disease. Our biggest problem is that we do not know enough about how genes and cells and organisms operate. We need more automated, faster, and cheaper ways to take apart biological systems to understand how they work in much greater detail. The greatest promise of microfluidics is that it will be able to lower costs, speed up experiments, and make experiments far more sensitive.

A lot of people wonder why, after decades of trying, we still do not have a general cure for cancer. The reason is pretty simple: we do not have tools that are sophisticated enough to understand cancer and normal cells well enough to be able to target cancer cells with enough selectivity. Microfluidics will provide much better tools for taking apart biological systems whether the purpose is to study cancer, the aging process, or any other biological problem.

By Randall Parker    2003 June 12 03:14 PM   Entry Permalink | Comments (0)
2003 May 29 Thursday
Venter, Duke U Initiate Search For Genetic Causes Of Disease

Craig Venter says at his new Center for the Advancement of Genomics DNA sequencing will cost only 1 dollar per 800 base pairs.

At his new center, the cost of sequencing DNA will be as low as $1 for 800 DNA units, he said, a substantial saving on current costs.

That works out to about 0.125 cents per base pair. Let us put that in some recent historical perspective. In 1998 DNA sequencing cost 50 cents per base pair.

When we started the project in the late '80s, it cost about $5 to sequence a base pair; that has dropped to about 50 cents per base pair,

As of November 2002 the U.S. Human Genome Research Project of the US Department of Energy was quoting a cost of 9 cents per finished base pair. This makes comparisons a bit difficult. The effort to sequence the human genome involved repeated sequencing to look for errors. Is Venter quoting a verified sequencing cost of 0.125 cents per base pair or just a first pass cost that low? Either way his cost is at least an order of magnitude lower than the DNA sequencing costs of just a couple of years ago. However, his cost still puts the cost of sequencing a person's complete genome (about 2.9 billion base pairs) in the millions of dollars. Costs are still a few orders of magnitude too high for the sequencing of one's own genome to become commonplace.
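Here is the arithmetic behind that, as a quick sketch using the per-base costs cited in this post and a genome size of about 2.9 billion base pairs:

```python
# Per-genome cost implied by the figures above ($1 per 800 bases from Venter
# vs. earlier per-base costs); genome size of ~2.9 billion base pairs.
genome_bp = 2.9e9

costs_per_bp = {
    "late 1980s":  5.00,         # dollars per base pair
    "1998":        0.50,
    "Nov 2002":    0.09,
    "Venter 2003": 1.0 / 800,    # $1 per 800 bases = 0.125 cents per base
}

for era, cost in costs_per_bp.items():
    print(f"{era}: ~${cost * genome_bp:,.0f} per genome")
# Venter's quoted rate still implies roughly $3.6 million per genome --
# orders of magnitude cheaper than a few years earlier, but far from $1,000.
```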

There are research groups and venture capital start-up companies working on more radical advances in DNA sequencing. See here and here for example.

Venter's institute has signed an agreement with Duke University to collaborate to discover the genetic contributions to various diseases and to develop faster and cheaper tests for genetic variations that contribute to disease.

Part of their goal is to identify genetic hiccups found in major illnesses such as heart disease, cancer, infectious diseases, even sickle cell anemia. But it’s also to find accurate, inexpensive tests that will tell individuals what’s likely to make them ill long before they’re in danger, so they can opt for preventive measures — maybe even genetic "repair patches."

As the cost of DNA sequencing continues to drop the scale and number of efforts to discover the genetic causes of diseases will continue to rise. Most importantly, the rate at which the genetic causes of disease are discovered will steadily accelerate year after year until the vast bulk of the genetic variations that contribute to disease are identified.

By Randall Parker    2003 May 29 12:56 AM   Entry Permalink | Comments (0)
2003 May 17 Saturday
Lab-On-A-Chip Designs Start To Become Useful

Small Times has a good article with interviews of researchers and industry leaders in the field of microfluidic chip development for biological science and biotechnology applications.

To illustrate the level of integration achieved with lab-on-a-chip, Knapp points to the Agilent 2100 Bioanalyzer. Using Caliper developed LabChip technology, the device measures out a specific quantity of protein sample, separates the protein mixture by size, stains the mixture with a fluorescent dye, and then de-stains the protein so that only the proteins are labeled. It then presents those results in a timed fashion to the optical detector. "You don't have to pour a gel. You don't have to load a gel. You don't have to stain or de-stain a gel. You don't have to scan the gel," he says. "All of those functions are integrated into the device."

The microfluidic chip industry is going to produce successive generations of designs that are progressively more complex, smaller, cheaper, and longer lasting. Just as microprocessors became useful for an increasingly large number of applications as they went thru this cycle, so it shall be with microfluidics.

The microfluidics industry has developed more slowly than analysts had forecast.

Perhaps most unfortunate of all, microfluidics companies never were able to convince customers of the merits of the technology. What Frost & Sullivan predicted would be a $3.4 billion market by 2004 is just $175 million in 2003, according to the market research firm's analyst Nate Cosper.

But novel applications of microfluidics technology are being developed. For instance, a new prototype device will sort out the most viable sperm for use in in vitro fertilization.

University of Michigan researchers have developed prototype microfluidic devices that can automatically and rapidly sort sperm and isolate the most viable swimmers for injection into an egg. The Microscale Integrated Sperm Sorter does it all on one disposable device.

Many top university labs are working on microfluidics and more generally on what is called BioMEMS (where MEMS stands for MicroElectroMechanical Systems). For instance, the Quake group at Caltech is working on "a DNA sequencing technology based on microfabricated flow channels and single molecule fluorescence detection". The promise of this technology is to allow faster and cheaper DNA sequencing using much smaller sample sizes.

By Randall Parker    2003 May 17 08:21 PM   Entry Permalink | Comments (0)
2003 May 12 Monday
Scott Gottlieb On The Real Future of Medicine

Writing in The New Atlantis, Scott Gottlieb surveys how computer technology, gene arrays, and other advances are transforming how drugs are developed, diseases are detected, and treatments are delivered. His essay is entitled The Future of Medical Technology.

This new ability to diagnose and treat certain diseases early, from infectious agents like hepatitis C to degenerative ailments such as Alzheimer’s and Parkinson’s, may obviate the need for the types of tissue, organ, or stem cell therapies that often attract the most public attention. Moving from wet lab to computer, from random to rational drug design, from species biology to the individual unique DNA profile, companies adopting the in silico paradigm are unlocking the long-hyped promise of genomic medicine, making targeted drugs and diagnosis a reality and drug development faster, cheaper, and better.

While the ability to detect diseases earlier is helpful the real problem with a disease like Parkinson's or Alzheimer's is that there is currently no way to halt disease progression regardless of when it is discovered. Early detection of a neurodegenerative disease at this point pretty much just allows you to start worrying about it sooner.

Of course, some day there probably will be treatments that will halt disease progression for some diseases and early intervention with, say, gene therapy may allow later cell therapy or growth of replacement organs to be avoided. But early detection is not going to eliminate most of the demand for stem cell therapies and replacement organs. Organs grow old. Adult stem cell pools become senescent. Also, accidents in the form of everything from physical trauma to toxic chemical exposures happen. There are going to be plenty of uses for stem cells and replacement organs no matter how many advances are made in drug development and in gene therapy.

The only thing that is going to reduce the demand for embryonic stem cells is the development of techniques that allow adult cell types (and not just adult stem cell types) to be transformed into other cell types including other stem cell types. This will come with time. The ways that cell differentiation state is controlled will be elucidated. An increasing number of techniques for manipulating cell differentiation state (e.g. gene therapies, hormones, drugs developed for that purpose) will be found.

Gottlieb is on firmer ground when he describes the future potential of computers to speed drug development and generally to speed the rate at which biological systems are figured out.

In the future, a supercomputer sitting in an air-conditioned room will work day and night, crunching billions of bits of information to design new drugs. Multiplying at the speed of Moore’s Law, which predicts that computer processing power doubles every three years, this drug discovery machine will never need to rest or ask for higher pension payments. It will shape how we use the abundance of genomic information that we are uncovering and will be the deciding factor for the success of medicine in an age of digitally driven research.

The big challenge of biological systems is that they are complex and small. They are hard to watch. We do not know most of what there is to know about what goes on in cells. We can not predict how molecules we might introduce will interact with the existing systems in cells. Our problem is that we need tools commensurate with the systems we are trying to understand. We need the ability to sense more things at the same time, continuously and cheaply. We need faster DNA sequencing. We need better tools for manipulating biological molecules very precisely on their own scale. After decades of chasing cancer, neurodegenerative diseases, and other diseases, what is changing is that we can begin to see the day coming when we will have tools that operate at the scale of biological systems and that will make it far easier to take them apart, manipulate them, and predict their behavior.

Computers are great general enablers for the development of instrumentation. They collect data, control actuators, and process the data. But semiconductors are being used in ways that go beyond just connecting to sensors and collecting data from them. Semiconductor technology is being used to scale sensing and manipulating systems down to the level of biological systems. Silicon chips are being used to sense and interact with biological systems. Silicon chips are even being made into mini-chemistry labs. Tools are being developed that operate on the same level as the systems under study.

The other way that computers are contributing is in simulations. But the "rational drug design" process that Gottlieb reports on in his article is still an ideal to strive toward, not a present reality. There have been a few success stories. But computers are not yet fast enough, and we do not have enough information about all the proteins in cells, to simulate how a drug will interact with a real biological system. For a sense of how drug development is currently done, read Derek Lowe. When he describes the inability to get drugs to where they are desired, and only where they are desired, he gives a real sense of where drug development stands.

We have enough trouble just getting our compounds out of the intestines and into the blood; subtleties past that are often out of our range. As far as targeting things to specific spots inside the cell, that's generally not even attempted. What we shoot for is selectivity against the enzyme or receptor we're targeting (as much as we can assay for it, which sometimes isn't much.) Then we just try to get the compound into the cells and hope for the best.

Numerous unforeseeable problems come up. Cell exteriors and interiors present enormous numbers of different surfaces. They have many different proteins that are constantly changing shape and presenting new surfaces for possible drug binding. A drug developed to aim at a particular receptor might also end up having affinity for other types of receptors whose existence is not even suspected. Truly rational drug design will happen when it becomes possible to predict in advance whether a drug will reach the desired target receptor and whether it will bind only to that receptor. We are a long way from being able to do that. All these unforeseeable problems mean that there is still a very large element of luck at every stage of drug development.

Derek also has a great recent post on the same theme that I've struck above: we need tools that get down inside a cell to watch and manipulate it on the scale that a cell operates.

But I think the general trend is unstoppable. If we're going to understand the cell, we're going to have to get inside it and mess with it on its level. There are doubtless plenty of great ideas out there that haven't been hatched yet (or have been and are being kept quiet until they've been checked out.) For example, I'd be surprised if someone isn't trying to mate nanotechnology with RNA interference in some way. (There's a hybrid of two hot fields; I'll bet that grant application gets funded!) It all bears watching - or participating in, if you're up for it.

Here is the most important point I'd like to make about the future of medicine: the most powerful future treatments will not be classical drugs. Cell therapies will be incredibly powerful and of course they will be cells, not drugs. Granted, cells will be manipulated by drugs as part of the preparation to make them suitable for delivery. But the cells, properly programmed in their DNA, will be the main agents of therapy when they are used. Also, replacement organs are going to become incredibly important. Another major type of treatment will be gene therapy. The gene therapy will be more akin to a computer program than a classical chemical drug.

The biggest change in medicine is coming as a consequence of a fundamental limitation of classical drug compounds: they do not carry enough information. Cells, organs, and complete bodies are very complex information processing systems. The genome is akin to an extremely complex computer program. It seems unreasonable to expect that when a very complex information processing system goes seriously awry the most serious problems that arise can be dealt with using molecules which have such low information content. By contrast, gene therapy and cell therapy should both be thought of as therapeutic agents that carry much higher information content.

By Randall Parker    2003 May 12 12:48 AM   Entry Permalink | Comments (1)
2003 February 12 Wednesday
Cornell Group Can Watch One Molecule At A Time

Cornell University scientists have developed the means to optically watch a single biological molecule at a time.

Until now, researchers were constrained from seeing individual molecules of an enzyme (a complex protein) interacting with other molecules under a microscope at relatively high physiological concentrations -- their natural environment -- by the wavelength of light, which limits the smallest volume of a sample that can be observed. This, in turn, limits the lowest number of molecules that can be observed in the microscope's focal spot to more than 1,000. Internal reflection microscopes have managed to reduce the number of molecules to about 100. But because this number is still far too high to detect individual molecules, significant dilution of samples is required.

The researchers have discovered a way around these limitations, and in the process reduced the sample being observed 10,000-fold to just 2,500 cubic nanometers (1 nanometer is the width of 10 hydrogen atoms, or 1 billionth of a meter), by creating a microchip that actually prevents light from passing through and illuminating the bulk of the sample. The microchip, engineered from aluminum and glass in the Cornell Nanoscale Science and Technology Facility, a NSF-funded national center, contains 2 million holes (each called a waveguide), some as tiny as 40 nanometers in diameter, or one-tenth of the wavelength of light.

Small droplets of a mixture containing enzymes and specially prepared molecules were pipetted into wells on the microchip, and the chip was placed in an optical microscope. Each of the chip's holes is so tiny that light from a laser beam is unable to pass through and instead is reflected by the microchip's aluminum surface, with some photons "leaking" a short distance into the hole, on the bottom of which an enzyme molecule is located.

These few leaking photons are enough to illuminate fluorescent molecules, called fluorophores, attached as "tags" to nucleotides (molecules that make up the long chains of DNA) in the sample. In this way, the researchers were able to observe, for the first time, the interaction between the ligand (the tagged nucleotide) and the enzyme in the observation volume (the region of the mixture that can be seen).

The problem until now has been seeing exactly how long an interaction between a biological molecule and an enzyme takes and how much time elapses between these interactions. This is complicated by the need to distinguish those molecules interacting with the protein and those just passing by. "A freely moving molecule will come in and out of the observation volume very quickly -- on the order of a microsecond. But if it interacts with the enzyme it will sit there for a millisecond," says Levene. "There are three orders of magnitude difference in the length of time that we see this burst of fluorescence. So now it's very easy to discriminate between random occurrences of one ligand and a ligand interacting with the enzyme."

Says Webb: "We see only one fluorescent ligand at a time, so we can now follow the kinetics [movement and behavior] in real time of individual reactions." He adds, "We can actually see the process of interaction."

This is pretty impressive. The problem with biology has always been that the most important mechanisms take shape and operate on such a small scale that it is hard to figure out exactly how biological systems function. Anything that makes it easier to watch smaller scale phenomena can be very beneficial in speeding up the rate at which biological systems can be taken apart and figured out.

A logical extension of this technique would probably be to use quantum dots in place of the fluorophores. Quantum dots last longer and they can be tuned to emit light at many different frequencies. That way different molecules emitting at different frequencies can be watched at the same time.

By Randall Parker    2003 February 12 05:10 PM   Entry Permalink | Comments (0)
2003 February 05 Wednesday
RNA Interference Speeds Discovery Of Purposes Of Genes

RNA-mediated interference (RNAi) is being used as a technique to more easily turn genes off in order to discover their purposes. Caenorhabditis elegans (or C. elegans) is a perfect organism to use for RNAi experiments.

A quirk of the physiology of C. elegans means that such gene inactivation can occur simply if the RNAi molecule is eaten by the worm. And luckily for the researchers, the preferred diet of this little worm is the bug that for decades has been used in thousands of lab experiments - the bacterium E coli. Simply inserting the RNAi sequences into E coli and allowing the worms to feed resulted in the chosen gene being knocked out.

The technique is remarkably fast. "It used to take a year to knock out a gene, now with RNAi one person can knock-out every gene in just a few months," says Ahringer.

To support this work the scientists had to develop a way to grow bacterial strains, each able to make a different RNAi aimed at knocking out a different target gene.

"The worms eat the bacteria ... silencing the gene in the worm and her progeny," Julie Ahringer, of the Wellcome Trust/Cancer Research UK Institute of Cancer and Developmental Biology at the University of Cambridge in England, told UPI. "We optimized this ... technique and then worked out methods to efficiently engineer the large number of bacterial strains needed (one for each gene)."

Since it is so easy to deliver RNAi molecules into C. elegans it is being used for experiments to rapidly discover what many genes do. This has sped up experiments that rely on knocking out specific genes by orders of magnitude. Recently the use of RNA interference led to the discovery of about 400 genes in the C. elegans worm that affect fat storage.

Scientists at Massachusetts General Hospital (MGH) and their colleagues have scoured thousands of genes in the C. elegans worm and have come up with hundreds of promising candidates that may determine how fat is stored and used in a variety of animals. The findings, published in the Jan. 16 issue of Nature, represent the first survey of an entire genome for all genes that regulate fat storage.

The research team led by Gary Ruvkun, PhD, of the MGH Department of Molecular Biology, and postdoctoral fellow Kaveh Ashrafi, PhD, identified about 400 genes encompassing a wide range of biochemical activities that control fat storage. These studies were conducted using the tiny roundworm Caenorhabditis elegans, an organism that shares many genes with humans and has helped researchers gain insights into diseases as diverse as cancer, diabetes, and Alzheimer's disease.

Many of the fat regulatory genes identified in this study have counterparts in humans and other mammals. "This study is a major step in pinpointing fat regulators in the human genome," says Ruvkun, who is a professor of Genetics at Harvard Medical School. "Of the estimated 30,000 human genes, our study highlights about 100 genes as likely to play key roles in regulation of fat levels," he continued. Most of these human genes had not previously been predicted to regulate fat storage. This prediction will be tested as obese people are surveyed for mutations in the genes highlighted by this systematic study of fat in worms.

In addition, this study points to new potential therapies for obesity. Inactivation of about 300 worm genes causes worms to store much less fat than normal. Several of the human counterparts of these genes encode proteins that are attractive for the development of drugs. Thus, the researchers suggest that some of the genes identified could point the way for designing drugs to treat obesity and its associated diseases such as diabetes.

Of the 400 genes that RNAi-based screening identified as affecting fat metabolism, about half have known human counterparts.

To discover this treasure trove of fat regulators, the researchers inactivated genes one at a time and looked for increased or decreased fat content in the worms. Through this time-consuming process, they identified about 300 worm genes that, when inactivated, cause reduced body fat and about 100 genes that cause increased fat storage when turned off. The identified genes were very diverse and included both the expected genes involved in fat and cholesterol metabolism as well as new candidates, some that are expected to function in the central nervous system.

About 200 of the 400 fat regulatory worm genes have counterparts in the human genome. "A number of these worm genes are related to mammalian genes that had already been shown to be important in body weight regulation. But more importantly, we identified many new worm fat regulatory genes, and we believe that their human counterparts will play key roles in human fat regulation as well," says lead author Ashrafi. "The work was done in worms because you can study genetics faster in worms than in other animal models, such as mice," says Ashrafi. "The model is a great tool for discovering genes."

RNAi allowed the relevant genes to be identified out of a much larger set of genes.

The work was dependent on the use of an RNA-mediated interference (RNAi) library constructed by the MGH team's collaborators at the Wellcome/Cancer Research Institute in England. The library consists of individual genetic components that each disrupt the expression of one particular gene. With this tool, the researchers were able to systematically screen almost 17,000 worm genes for their potential roles in fat storage.

Now that bacterial strains have been created that make an RNAi against each C. elegans gene, many other effects of genes can be examined. Already the original researchers have used this technique to look at genes that affect longevity.

In another paper, Dr. Ruvkun and Dr. Ahringer have used the RNA method to screen the worm's genome for genes that increase longevity. With two of the six chromosomes tested, they have found that genes in the mitochondria, the energy-producing structure, are particularly important in determining life span.

This result demonstrates how RNAi can support rapid, massive screening of a large number of genes in order to identify a relevant subset for a particular purpose. This is not the only recent result of this kind.

RNAi is being used to control the expression of the gene for p53, a crucial protein for regulating cell proliferation. Mutations in areas of the genome that control p53 expression are known to be crucial in the development of some types of cancer.

The study showed that establishing different levels of p53 in B-cells by RNAi produces distinct forms of lymphoma. Similar to lymphomas that form in the absence of p53, lymphomas that formed in mice with low p53 levels developed rapidly (reaching terminal stage after 66 days, on average), infiltrated lung, liver, and spleen tissues, and showed little apoptosis or "programmed cell death."

In contrast, lymphomas that formed in mice with intermediate p53 levels developed less rapidly (reaching terminal stage after 95 days, on average), did not infiltrate lung, liver, or spleen tissues, and showed high levels of apoptosis. In mice with high B-cell p53 levels, lymphomas did not develop at an accelerated rate, and these mice did not experience decreased survival rates compared to control mice.

The study illustrates the ease with which RNAi "gene knockdowns" can be used to create a full range of mild to severe phenotypes (something that geneticists dream about), as well as the potential of RNAi in developing stem cell-based and other therapeutic strategies.

Along with a recent study by Hannon and his colleagues that demonstrated germline transmission of RNAi, the current study establishes RNAi as a convenient alternative to traditional, laborious, and less flexible homologous recombination-based gene knockout strategies for studying the effects of reduced gene expression in a wide variety of settings.

RNA interference will be used in a newly announced effort to look at 10,000 genes for signs that they play roles that would make them relevant in understanding the causes of cancer.

This has been made possible by the discovery of a process called RNA interference which is used by the body to switch off individual genes while leaving all others unaffected.

The charity Cancer Research UK and the Netherlands Cancer Institute plan to join forces to exploit this knowledge to inactivate almost 10,000 genes one at a time in order to find out precisely what they do - and how they might contribute to cancer's development.

RNAi is being used as a tool to study the effects of shutting down a gene that causes soybean allergies.

Last September, for example, Anthony J. Kinney, a crop genetics researcher at DuPont Experimental Station in Wilmington, Del., and his colleagues reported using a technique called RNA interference (RNAi) to silence the genes that encode p34, a protein responsible for causing 65 percent of all soybean allergies. RNAi exploits the mechanism that cells use to protect themselves against foreign genetic material; it causes a cell to destroy RNA transcribed from a given gene, effectively turning off the gene.

How the double-stranded RNA gets used by the cell to turn off genes.

When double-strand RNA is detected, an enzyme called dicer, discovered at the Cold Spring Harbor Laboratory on Long Island, chops the double-strand RNA into shorter pieces of about 21 to 23 bases. The pieces are known as small interfering RNAs or siRNAs. Each short segment attracts a phalanx of enzymes.

Together, they seek out messenger RNA that corresponds to the small RNA and destroy it. In plants and roundworms, the double-strand RNA can spread through the organism like a microscopic Paul Revere.

The cell's reaction to double-stranded RNA in this manner may have evolved as a defense mechanism against double-stranded RNA virus invaders.
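
To make the matching logic concrete, here is a toy sketch in Python of how diced siRNA fragments pair with a target messenger RNA (purely illustrative: the sequences are invented, and real RNA interference works through the Dicer and RISC enzyme machinery, not string searching):

    # Toy sketch of RNAi sequence matching. Illustrative only.

    def reverse_complement(rna):
        pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
        return "".join(pairs[base] for base in reversed(rna))

    def dice(double_stranded_rna, size=21):
        # Dicer chops long double-stranded RNA into short ~21-23 base pieces (siRNAs).
        return [double_stranded_rna[i:i + size]
                for i in range(0, len(double_stranded_rna) - size + 1, size)]

    # Hypothetical sequences, invented for illustration.
    target_mrna = "AUGGCUACGUUACGGAUCCGAUACGGCUUACGAUCGUAGCUAGGCUAAUGA"
    trigger = target_mrna[5:47]   # a double-stranded RNA made against part of the target gene

    for sirna_sense in dice(trigger):
        guide = reverse_complement(sirna_sense)        # the antisense "guide" strand
        site = target_mrna.find(reverse_complement(guide))
        if site >= 0:
            print("guide strand pairs with the mRNA at position %d -> mRNA marked for destruction" % site)

The point is simply that the specificity comes from base pairing: any messenger RNA containing a stretch complementary to the guide strand gets flagged for destruction, which is why a designed double-stranded RNA can silence one chosen gene while leaving the rest of the genome alone.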

This page has links to some published papers that involve working with RNAi.

RNA interference (RNAi) is the process where the introduction of double stranded RNA into a cell inhibits gene expression in a sequence dependent fashion. RNAi is seen in a number of organisms such as Drosophila, nematodes, fungi and plants, and is believed to be involved in anti-viral defence, modulation of transposon activity, and regulation of gene expression.

By Randall Parker    2003 February 05 12:59 AM   Entry Permalink | Comments (3)
2003 January 13 Monday
Emerging World Changing Technologies

MIT's Technology Review has an article entitled 10 Emerging Technologies That Will Change The World. Here is the summary list of the 10 technologies.

Technologies pinpointed to change the future include glycomics, injectable tissue engineering, molecular imaging, grid computing, wireless sensor networks, software assurance, quantum cryptography, nanoimprint lithography, nano solar energy and mechatronics. For each technology, Technology Review has profiled one researcher or research team whose work exemplifies the field’s possibilities.

Molecular imaging will be greatly helped by quantum dots. Nanotech for solar is important because nanotech manufacturing techniques show promise for huge reductions in manufacturing costs. The biggest factor holding back the widespread use of solar photovoltaics is their cost (yes, energy storage is another problem but nanotech fabrication techniques for batteries and fuel cells will similarly reduce their costs).

Wireless sensor networks have implications for privacy that science fiction writer David Brin has fleshed out in both his non-fiction book The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? and in his fun fiction read Earth. Brin argues advancing technology will make the use of surveillance technologies ubiquitous and that our choice is between just letting only the government watch everyone or letting everyone use surveillance technologies to watch everyone else. I think he's right about this and agree with him that the latter option is preferable.

Here is the more detailed description of each of the technologies. In particular, nanoimprint lithography sounds especially promising as a way to make nanotech device manufacture affordable.

“Right now everybody is talking about nanotechnology, but the commercialization of nanotechnology critically depends upon our ability to manufacture,” says Princeton University electrical engineer Stephen Chou.

A mechanism just slightly more sophisticated than a printing press could be the answer, Chou believes. Simply by stamping a hard mold into a soft material, he can faithfully imprint features smaller than 10 nanometers across. Last summer, in a dramatic demonstration of the potential of the technique, Chou showed that he could make nano features directly in silicon and metal. By flashing the solid with a powerful laser, he melted the surface just long enough to press in the mold and imprint the desired features.

Although Chou was not the first researcher to employ the imprinting technique, which some call soft lithography, his demonstrations have set the bar for nanofabrication, says John Rogers, a chemist at Lucent Technologies’ Bell Labs. “The kind of revolution that he has achieved is quite remarkable in terms of speed, area of patterning, and the smallest-size features that are possible. It’s leading edge,” says Rogers. Ultimately, nanoimprinting could become the method of choice for cheap and easy fabrication of nano features in such products as optical components for communications and gene chips for diagnostic screening. Indeed, NanoOpto, Chou’s startup in Somerset, NJ, is already shipping nanoimprinted optical-networking components. And Chou has fashioned gene chips that rely on nano channels imprinted in glass to straighten flowing DNA molecules, thereby speeding genetic tests.

Nanotechnology's big challenge is how to manufacture nanotech devices. Chou's technique sounds useful for fabrication of a wide range of nanotech devices, notably including nanopore DNA sequencers. Even if Chou's technology enables nothing more than the construction of nanopore DNA sequencing devices, that alone will make it extremely worthwhile. The ability to cheaply do full personal DNA sequencing would allow the collection of data on each person's DNA sequence. As a consequence the efforts to run down what each sequence variation does will be accelerated enormously. In addition to providing valuable information about the causes of almost all types of diseases, detailed personal DNA sequence information will affect everything from mating choices to medical insurance to privacy.

There are other approaches to nanotech fabrication involving the use of proteins and biological systems to make nanotech structures that might turn out to be equally or even more promising for nanotech manufacturing in the longer run.

One item that I think should have been on the list is microfluidics. The ability to miniaturize chemical, biochemical, and molecular biological experiments will greatly accelerate the rate of advance of biotechnologies and of chemistry as well.

In terms of life extension and rejuvenation the most important technology on the list is injectable tissue engineering. What is especially needed there is the ability to make youthful non-embryonic stem cells to replenish the various non-embryonic stem cell reservoirs in the body. One big challenge in achieving that goal is to understand, for each non-embryonic stem cell type, exactly what regulatory state its genes must be in for it to differentiate into that particular stem cell type. Non-embryonic stem cells are not pluripotent (i.e. they can not become all cell types) because their job in the various parts of the body is to make new cells of the particular types that each part needs. It is hard to say just how long it will take to develop sufficient control of cellular genetic regulation to make exactly the kinds of non-embryonic stem cells that are desired for each reservoir type.

Another application of tissue engineering is the growth of replacement organs. This too will be used for life extension and rejuvenation, though in cases where injectable stem cells will do the job they will be preferred, because stem cell therapy is a lot easier than surgery.

Another important emerging technology that went unmentioned in the MIT list is gene therapy. Many cell types can't simply be replaced as you get older (e.g. your brain!). The ability to do repair in situ is essential. Gene therapy will make this possible many years before nanotech repair bots become workable.

By Randall Parker    2003 January 13 02:03 PM   Entry Permalink | Comments (6)
2002 December 24 Tuesday
Microfluidics to Revolutionize Biosciences

Forbes has a nice write-up on the microfluidic chip designs of Caltech biophysicist Stephen R. Quake.

He and his group, along with Caltech's Axel Scherer, added a few clever twists. His chips, the size of a half-dollar or smaller, are made with two layers of rubber, relying on a technique similar to injection molding used to make toys. The bottom layer has hundreds or thousands of tiny intersecting liquid-handling channels, each about the width of a human hair (100 microns). The top layer contains hundreds of control channels through which pressurized water is pumped. Valves are formed where the control channels cross over the fluid channels. When pressurized water is fed over such an intersection, the pressure pushes down the thin layer of rubber, separating it from the fluid below, and it clamps shut the fluid channel below, like stepping on a hose. Quake's lab can make the chips with $30 bottles of rubber, an ultraviolet light to create molds and a convection oven to cure the rubber. A grad student can design and make a new chip in less than two days.

Quake predicts that his chips will have 100 times the number of cells and valves in a few years. These chips will be used for handheld instant blood chemistry testers, mini DNA sequencers that are orders of magnitude smaller and cheaper than today's models, mini-labs for analyzing the state of single cells, testing large numbers of drugs against large numbers of cells in parallel, and countless other biochemical tasks that can be made orders of magnitude less expensive and less time-consuming.

Quake's chip technology is being commercialized by venture capital start-up Fluidigm. They have a picture of one of the chips on their web site. Fluidigm is developing this technology to lower the cost of polymerase chain reaction (PCR), which is widely used for DNA sequencing.

South San Francisco, CA, September 26, 2002 - Fluidigm Corporation and The California Institute of Technology announced today major advancements in complexity and function of microfluidic device technology. Using its novel fabrication technology, the MSL™(multi-layer soft lithography) process, Fluidigm has demonstrated a fluidic microprocessor that can run 20,000 PCR assays at sub-nanoliter volumes, the smallest documented volume of massively parallel PCR assays. This technology is being developed in the near term to run over 200,000 parallel assays. Fluidigm believes this fluidic architecture will make significant contributions in cancer detection research as well as in large scale genetic association studies.

At the same time, a group led by Dr. Stephen Quake, Associate Professor in the Department of Applied Physics at the California Institute of Technology and co-founder of Fluidigm, published an article in Science today describing a paradigm for large scale integration of microfluidic devices. These devices are capable of addressing and recovering the contents from one among thousands of individual picoliter chambers on the microfluidic chip.

Using new techniques of multiplexed addressing, Quake's group built chips with as many as 6,000 integrated microvalves and up to 1000 individually addressable picoliter chambers. These chips were used to demonstrate microfluidic memories and tools for high throughput screening. Additionally, on a separate device with over 2000 microvalves, they demonstrated the ability to load two different reagents and perform distinct assays in 250 sub-nanoliter reaction chambers and then recover the contents.

"We now have the tools in hand to design complex microfluidic systems and, through switchable isolation, recover contents from a single chamber for further investigation. These next-generation microfluidic devices should enable many new applications, both scientific and commercial," said Dr. Quake.

"Together, these advancements speak to the power of MSL technology to achieve large scale integration and the ability to make a commercial impact in microfluidics," said Gajus Worthington, President and CEO of Fluidigm. "PCR is the cornerstone of genomics applications. Fluidigm's microprocessor, coupled with the ability to recover results from the chip, offers the greatest level of miniaturization and integration of any platform," added Worthington.

Fluidigm hopes to leverage these advancements as it pursues genomics and proteomics applications. Fluidigm has already shipped a prototype product for protein crystallization that transforms decades-old methodologies to a chip-based format, vastly reducing sample input requirements and improving cost and labor by orders of magnitude.

Note as well Fluidigm's development of a prototype to automate protein crystallization, which is used to determine the 3-dimensional structure of proteins. Fluidigm is selling their Topaz prototype protein crystallization kit and they list the following benefits for it:

  • Sample input is reduced by 2 orders of magnitude, increasing experimentally accessible proteins by 100%.
  • As many as 144 crystallization experiments can be conducted in parallel.
  • Reagent consumption is reduced 100x.
  • Labor and storage costs are reduced 300x compared to the 20-year old technology currently in place.

There are countless uses for smaller, cheaper mini chemistry labs. As this technology advances it will accelerate the rate of advance of biological science and biotechnology literally by orders of magnitude. The impact will be greater than the impact of computers to date because it will make possible the cure of diseases, the reversal of aging, and the enhancement of human intellectual and physical performance.

Also see my previous post on the Quake lab's work.

By Randall Parker    2002 December 24 02:47 PM   Entry Permalink | Comments (0)
2002 December 20 Friday
Gene Chips Will Accelerate Drug Development

By watching the effects that experimental drugs have on gene expression, gene chips allow drugs which cause dangerous side effects to be identified at an earlier stage and at lower cost.

How is Merck using these things? Rosetta President Stephen Friend, who is now an executive vice president at Merck's labs, laid the groundwork. Friend used DNA chips to examine several potential medicines, some of which Merck had axed because animal studies showed risks of side effects. The DNA chips, in combination with Rosetta's software, flagged the duds from the drugs as well as the animal studies, but more quickly and cheaply. This means that medicines that are likely to fail will be less likely to make it into clinical trials.

Kim sees another opportunity down the road. DNA chips can be used to find genetic differences between people who respond to a drug and those who do not, starting in Phase II, or mid-stage, clinical trials. Since many drugs only seem to work for certain people, this would allow companies to target medicines to patients who would be helped--making clinical trials cheaper and easier.

Another way that gene chips (aka DNA chips or gene microarrays) will accelerate drug development is by identifying genes and gene products as targets for new drugs.

Microarray technologies, or DNA chips, provide a high-density, high-throughput platform for measuring and analyzing the expression patterns of thousands of genes in parallel. Comparing expression levels of healthy and diseased tissues will reveal genes with a role in a disease process that can help researchers further accelerate discovery and validation of gene targets.
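
As a crude illustration of the healthy-versus-diseased comparison described above, here is a toy sketch in Python (the gene names and expression numbers are made up, and real microarray analysis involves normalization and statistics well beyond a simple fold-change cutoff):

    # Toy differential-expression comparison with made-up numbers.
    healthy  = {"geneA": 120, "geneB":  40, "geneC": 300}   # expression in healthy tissue
    diseased = {"geneA": 115, "geneB": 400, "geneC":  30}   # expression in diseased tissue

    for gene in healthy:
        fold_change = diseased[gene] / float(healthy[gene])
        if fold_change > 2 or fold_change < 0.5:
            # Genes whose expression differs sharply between the two tissues
            # become candidate targets for further validation.
            print("%s: fold change %.1f" % (gene, fold_change))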

While gene chips and bioinformatics will accelerate drug development, we are approaching the age in which drugs will not be the most important form of medical treatment. The biggest benefits for health and longevity will come from cell therapy and gene therapy. Cell therapy will be far more powerful because it will allow the replacement of aged, damaged, and dead cells. Gene therapy will be more powerful because the added genes will effectively program cells to become healthy again and even to replicate and replace other cells that have died. Neither of these therapies is what we've traditionally called a drug. Still, gene chips will accelerate the development of cell therapies and gene therapies as well.

Update: Here's a nice collection of microarray gene chip links.

By Randall Parker    2002 December 20 02:05 AM   Entry Permalink | Comments (0)
2002 December 17 Tuesday
Disease Simulations Speeding Drug Development

A survey of the growing use of computer simulation models of disease processes and metabolism includes a report on the success of a pair of asthma simulation models, named Bill and Allen, in predicting that an approach to asthma treatment wouldn't work.

Because Bill's asthma didn't seem to reflect real life and Allen didn't respond to the interleukin-5 blockers, Aventis didn't pursue these compounds as potential asthma therapies. The Entelos model seems to have been accurate. Despite promising animal studies, when other companies recently tested interleukin-5 blockers in people, they found that the compounds have much less effect than the researchers had originally expected.

Each simulation of a disease begins by modeling the normal physiology and interaction of the organs involved. "We are striving for a whole-body approach to health and disease," says Jeff Trimmer of Entelos. "We want to use [our models] to understand how a person gets sick." Even when models don't seem to simulate what happens in real life—as in Bill—the findings can help researchers better understand physiological factors that are important in causing diseases, says Trimmer.

Computer simulations will eventually speed the rate of biomedical advance by orders of magnitude.

By Randall Parker    2002 December 17 12:56 AM   Entry Permalink | Comments (0)
2002 December 14 Saturday
Quantum Dots To Speed Up Biological Science

In order to advance our understanding of biological systems we need better tools for measuring what goes on in cells and between cells. Tools that let us watch more things at once, at a smaller scale, for longer periods of time, and with greater sensitivity can greatly speed up the rate at which the functioning of biological systems can be puzzled out. Quantum dots can do all those things, as a number of recent reports have shown.

A team at Rockefeller University and the US Naval Research Laboratory has developed a way to use quantum dots to label different kinds of proteins in living cells so that they fluoresce at different colors, allowing the internal components of cells to be tracked and imaged for long periods of time.

Quantum dots are nano-sized crystals that exhibit all the colors of the rainbow due to their unique semiconductor qualities. These exquisitely small, human-made beacons have the power to shine their fluorescent light for months, even years. But in the near-decade since they were first readily produced, quantum dots have excluded themselves from the useful purview of biology. Now, for the first time, this flexible tool has been refined, and delivered to the hands of biologists.

Quantum dots are about to usher in a new plateau of comparative embryology, as well as limitless applications in all other areas of biology.

Two laboratories at The Rockefeller University -- the Laboratory of Condensed Matter Physics, headed by Albert Libchaber, Ph.D., and the Laboratory of Molecular Vertebrate Embryology, headed by Ali Brivanlou, Ph.D. -- teamed up to produce the first quantum dots applied to a living organism, a frog embryo. The results include spectacular three-color visualization of a four-cell embryo.

The scientists' results appear in the Nov. 29 issue of Science.

"We always knew this physics/biology collaboration would bear fruit," says co-author Brivanlou, "we just never knew how sweet it would be. Quantum dots in vivo are the most exciting, and beautiful, scientific images I have ever seen."

To exploit quantum dots' unique potential, the Rockefeller scientists needed to make a crucial modification to existing quantum dot technology. Without it, frog embryos and other living organisms would be fallow ground for the physics-based probes.

"Quite simply, we cannot do this kind of cell labeling with organic fluorophores," says Brivanlou. Organic fluorophores (synthetic molecules such as Oregon Green and Texas Red) don't have the longevity of quantum dots. What's more, organic fluorophores and fluorescent proteins (such as green fluorescent protein, a jellyfish protein, and luciferase, a firefly protein) represent a small number of colors, subject to highly specific conditions for effectiveness. Quantum dots can be made in dozens of colors just by slightly varying their size. The application potential in embryology alone is monumental.

Hydrophobic, but not claustrophobic

Benoit Dubertret, Ph.D., a postdoctoral fellow working with Libchaber, toiled for two years with quantum dots' biggest problem: their hydrophobic (water-fearing) outer shell. This condition, a by-product of quantum dots' synthesis, makes them repellent to the watery environment of a cell, or virtually any other biological context.

The ability to track cells as they differentiate has enormous value for the development of stem cell therapies and the growth of replacement organs.

These scientists have developed a way to get cells to take up the quantum dots via endocytosis, so that injection into a cell is no longer necessary. They have also developed a way to link quantum dots to antibodies that have affinity for specific proteins.

The unique physical properties of quantum dots overcome these obstacles. Simply by altering their size, scientists can manufacture them to produce light in any color of the rainbow, and, additionally, only one wavelength of light is required to illuminate all of the different-colored dots. Thus, spectral overlap no longer limits the number of colors that can be used at once in an experiment. In addition, quantum dots do not stop glowing even after being visualized for very long periods of time: compared to most known fluorescent dyes, they shine for an average of 1,000 times longer.

Water-loving coats

But while quantum dots solve these problems, they have limitations of their own - the biggest one being their water-fearing or "hydrophobic" nature. For quantum dots to mix with the watery contents of a cell, they have to possess a water-loving, or "hydrophilic" coat. Three years ago, Simon and Jaiswal's colleagues at the U.S. Naval Research Laboratory made their dots biocompatible by enveloping them in a layer of the negatively charged dihydroxylipoic acid (DHLA).

In the same study, the researchers overcame a second major obstacle of making quantum dots biologically useful - building protein-specific dots. By linking antibodies specific for an experimental protein to the DHLA-capped dots, they were able to demonstrate protein-specificity in a test tube.

In the present study, the Rockefeller scientists in collaboration with their U.S. Naval Research Laboratory colleagues have again synthesized protein-specific quantum dots, but this time they have shown their efficacy in living cells - a first for this budding technology. To do this, the researchers employed two different methods of synthesizing the quantum dots, both of which involved linking the negatively charged DHLA-capped dots to positively charged molecules - either avidin or protein G bioengineered to bear a positively charged tail. Because avidin and protein G can be made to readily bind antibodies, the researchers could then attach the dots to their protein-specific antibody of choice.

The critical test was to determine specificity: can quantum dots achieve the same exquisite selectivity that occurs when a protein is synthesized fused to GFP? To answer this question, Simon and colleagues engineered a population of cells growing together in a dish to randomly produce different levels of a membrane protein fused to GFP. When these cells were incubated with quantum dots conjugated to an antibody specific for that membrane protein, the pattern of GFP fluorescence matched the fluorescence of the quantum dots. However, the fluorescence of quantum dots lasted immeasurably longer, and the proteins could now be imaged in a rainbow of colors.

"Researchers should now be able to rapidly create an assortment of quantum dots that specifically bind to several proteins of interest," says Jaiswal.

Uncharted cellular terrain

Proteins aren't the only subjects the researchers successfully lit up with quantum dots: cells too were labeled and observed in their normal setting for very long periods of time. In the Nature Biotechnology paper, the researchers monitored human tissue culture cells tagged with quantum dots over two weeks with no adverse effects on cells. They also continuously observed slime mold cells labeled with quantum dots through 14 hours of growth and development without detecting any damage. This type of cell-tracking approach would allow researchers to study cell fate either outside the body in culture, or in whole developing organisms.

Quantum Dot Corporation researchers use quantum dots to detect cancer cells.

Hayward, CA, December 2, 2002 - Quantum Dot Corporation (QDC), the leader in Qdot(tm) biotechnology applications and products, announced today the publication of a seminal scientific paper in the prestigious journal Nature Biotechnology. The paper, entitled "Immunofluorescent labeling of cancer marker Her2 and other cellular targets with semiconductor quantum dots", was published in the on-line version of Nature Biotechnology, following collaborative work performed by scientists at Genentech and QDC. The print version will be published in January 2003.

"The promise of Qdot conjugates to revolutionize biological detection has now become a reality. Our work with Genentech is the first practical application of the Qdot technology in an important biological system - specific detection of breast cancer markers. These results demonstrate the dramatic sensitivity and stability benefits enabled using Qdot detection," said Xingyong Wu, Ph.D., senior staff scientist at QDC, and the lead author of the paper. "We have also demonstrated cancer marker detection in live cancer cells, an extremely difficult task using conventional methods," continued Dr. Wu.

Small Times has an article that provides an overview of some of these recent results with quantum dots.

A third team of researchers reported their solution to the biocompatibility problem in Science. They sheathed the dots in phospholipid membranes and hooked them to DNA to produce clear images in growing embryos, where the nanocrystals appeared stable and nontoxic.

"These three papers combined indicate that bioconjugate nanocrystals will have major applications in biology and medicine," said Shuming Nie, director of nanotechnology at Emory University's Winship Cancer Institute.

Emory University biomedical engineer Shuming Nie argues that nanotechnology will provide benefits for biomedical applications many years before nanotech becomes beneficial in electronics applications.

Biomedical engineer Shuming Nie is testing the use of nanoparticles called quantum dots to improve clinical diagnostic tests for the early detection of cancer. The tiny particles glow and act as markers on cells and genes, potentially giving scientists the ability to rapidly analyze biopsy tissue from cancer patients so that doctors can provide the most effective therapy available.

Nie, a chemist by training, is an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and director of cancer nanotechnology at Emory's Winship Cancer Institute.

His research focuses on the field of nanotechnology, in which scientists build devices and materials one atom or molecule at a time, creating structures that take on new properties by virtue of their miniature size. The basic building block of nanotechnology is a nanoparticle, and a nanometer is one-billionth of a meter, or about 100,000 times smaller than the width of a human hair.

Nanoparticles take on special properties because of their small size. For example, if you break a piece of candy into two pieces, each piece will still be sweet, but if you continue to break the candy until you reach the nanometer scale, the smaller pieces will taste completely different and have different properties.

Until recently, nanotechnology was primarily based in electronics, manufacturing, supercomputers and data storage. However, Nie predicted several years ago in a paper published in Science that the first major breakthroughs in the field would be in biomedical applications, such as early disease detection, imaging and drug delivery.

"Electronics may be the field most likely to derive the greatest economic benefit from nanotechnology," Nie said. "However, much of the benefit is unlikely to occur for another 10 to 20 years, whereas the biomedical applications of nanotechnology are very close to being realized."

By Randall Parker    2002 December 14 11:22 PM   Entry Permalink | Comments (0)
2002 December 12 Thursday
Bioinformatics To Cut Drug Development Costs

The cost per drug brought to market is $880 million. The ability of computers to analyze greater quantities of information will cut costs and development time.

Paradoxically, the biggest gains are to be made from failures. Three-quarters of the cost of developing a successful drug goes to paying for all the failed hypotheses and blind alleys pursued along the way. If drug makers can kill an unpromising approach sooner, they can significantly improve their returns. Simple mathematics shows that reducing the number of failures by 5% cuts the cost of discovery by nearly a fifth. By enabling researchers to find out sooner that their hoped-for compound is not working out, bioinformatics can steer them towards more promising candidates. Boston Consulting believes bioinformatics can cut $150m from the cost of developing a new drug and a year off the time taken to bring it to market.

By Randall Parker    2002 December 12 01:47 PM   Entry Permalink | Comments (0)
2002 December 05 Thursday
Mouse Genome Sequenced

It's official: the mouse genome has been sequenced, and this is a very good thing. You can read the official announcement on the NIH site (which has the best copy for the click-thru to supporting docs), on Eurekalert, and on ScienceDaily. From the announcement:

The sequence shows the order of the DNA chemical bases A, T, C, and G along the 20 chromosomes of a female mouse of the "Black 6" strain - the most commonly used mouse in biomedical research. It includes more than 96 percent of the mouse genome with long, continuous stretches of DNA sequence and represents a seven-fold coverage of the genome. This means that the location of every base, or DNA letter, in the mouse genome was determined an average of seven times, a frequency that ensures a high degree of accuracy.

Earlier this year, the mouse consortium announced that it had assembled the draft sequence of the mouse and deposited it into public databases. The consortium's paper this week reports the initial description and analysis of this text and the first global look at the similarities and the differences in the genomic landscapes of the human and mouse. The analysis was led by the Mouse Genome Analysis Group. Below are some of the highlights.

  • Human Sequence: It's Bigger, But Is It Better? The mouse genome is 2.5 billion DNA letters long, about 14 percent shorter than the human genome, which is 2.9 billion letters long. But bigger doesn't always mean better, say scientists. The human genome is bigger because it is filled with more repeat sequences than the mouse genome. Repeat sequences are short stretches of DNA that have been hopping around the genome by copying and inserting themselves into new regions. They are not thought to have functional significance. The mouse genome, it seems, is more fastidious with its housecleaning than the human. Although it is actually accumulating repeat sequence at a greater rate than humans, it is losing them at an even greater rate.

  • Shuffling the Chapters of an Ancestral Book. The mouse and human genomes descended from a common ancestor some 75 million years ago. Since then there has been considerable shuffling of the DNA order both within and between chromosomes. Nonetheless, when scientists compared the human and mouse genomes, they discovered that more than 90 percent of the mouse genome could be lined up with a region on the human genome. That is because the gene order in the two genomes is often preserved over large stretches, called conserved synteny. In fact, the mouse genome could be parsed into some 350 segments, or chapters for which there is a corresponding chapter in the human genome. For example, chromosome 3 of the mouse has chapters from human chromosomes 1, 3, 4, 8 and 13, and chromosome 16 of the mouse has chapters from human chromosome 3, 21, 22 and 16.

  • Heavy Editing at the Level of Sentences. Although virtually all of the human and mouse sequence can be aligned at the level of large chapters, only 40 percent of the mouse and the human sequences can be lined up at the level of sentences and words. Even within this 40 percent, there has been considerable editing, as evolution relentlessly tinkers with the genome. The change is so great in most places that only with very sensitive tools can scientists discern the relationships.

  • Preserving the Gems. Despite the heavy editing, about 5 percent of the genome contains groups of DNA letters that are conserved between human and mouse. Because these DNA sequences have been preserved by evolution over tens of millions of years, scientists infer that they are functionally important and under some evolutionary selection. Interestingly, the proportion of the genome comprised by these functionally important parts is considerably higher than what scientists had expected. In particular, it is about three times as much as can be explained by protein-coding genes alone. This implies that the genome must contain many additional features (such as untranslated regions, regulatory elements, non-protein coding genes, and chromosomal structural elements) that are under selection for biological function. Discovering their meaning will be a major goal for biomedical research in the coming years.

  • The Gene Number. When the human genome consortium concluded last year that the human sequence contains only 30,000 to 40,000 protein-coding genes, the news elicited a collective international gasp. Humans, it seems, have only about twice as many genes as the worm or the fly, and fewer genes than rice. Many wondered how human complexity could be explained by such a paucity of genes. The prediction has since been the subject of debate with some researchers suggesting much higher gene counts. The human-mouse comparison will likely put the yearlong speculation to rest, indicating that if anything, the gene numbers may be at the low end of the range. Today's paper suggests that the mouse and the human genomes each seem to contain in the neighborhood of 30,000 protein coding genes.

  • Sex, Smell and Infectious Disease. Although the mouse and the human contain virtually the same set of genes, it seems that some families of genes have undergone expansion - or multiplied - in the mouse lineage. These involve genes related to reproduction, immunity and olfaction, suggesting that these physiological systems have been the focus of extensive innovation in rodents. It seems that sex, smell, and pathogens are most on the mouse's evolutionary mind. Scientists do not yet know the reasons for this, but they speculate that a shorter generation time, changes in living environment, lack of verbal and visual cues, and differences in reproduction may account for this.

  • Uneven Landscape of the Genomes. Since the two species diverged, the ancestral text has changed considerably, with substitutions occurring in both species. Twice as many of these substitutions have occurred in the mouse compared with the human lineage. A great surprise is that mutation rates seem to vary across the genome in ways that cannot be explained by any of the usual features of DNA.

  • Empowering Mouse as a Disease Model. The laboratory mouse has long been used to study human diseases. There are more than a hundred mouse models of Mendelian disorders, where a mutation in mouse counterparts of human disease genes results in a constellation of symptoms highly reminiscent of the human disorder. But there are many more such models to be found, and the availability of the mouse genome sequence will make their discovery only a few "mouse" clicks away. Furthermore, hundreds of additional mouse models of non-Mendelian diseases such as epilepsy, asthma, obesity, colon cancer, hypertension, and diabetes, which have been more difficult to pin down, will now be much more accessible to the tools of the molecular geneticist.

  • Understanding the Mouse. The mouse genome sequence will also open new paths of scientific endeavor aimed at understanding how the mouse genome directs the biology of this mammal. Scientists will no longer be working on genes in isolation, but will view individual genes in the context of all other related genes and in the context of a whole organism. They will be able to study many, even all, genes simultaneously, speeding the understanding of the mouse in molecular terms. Scientists say such molecular understanding of the mouse will be essential to realize the full benefits of the human genome sequence.

The sequence information from the mouse consortium has been immediately and freely released to the world, without restrictions on its use or redistribution. The information is scanned daily by scientists in academia and industry, as well as by commercial database companies, providing key information services to biotechnologists.

The work reported in this paper will serve as a basis for research and discovery in the coming decades. Such research will have profound long-term consequences for medicine. It will help elucidate the underlying molecular mechanisms of disease. This in turn will allow researchers to design better drugs and therapies for many illnesses.

"The mouse genome is a great resource for basic and applied medical research, meaning that much of what was done in a lab can now be done through the Web. Researchers can access this information through www.ensembl.org, where all the information is provided with no restriction," says Ewan Birney, Ph.D., Ensembl coordinator at the European Bioinformatics Institute.

The Washington Post write-up emphasises the importance of the discovery of more conserved sequence sections than expected.

The big surprise in the research, however, was that about 5 percent of the genetic material of mice and people is highly conserved, and matching genes alone can account for only about 2 percent of it. That means as much as 3 percent of the genetic material is playing a critical but mysterious role--one so important nature has kept that genetic information largely intact for 75 million years.

It's only speculation now, but most scientists think those stretches of DNA will prove to be regulatory regions--instructional segments that somehow govern the behavior of genes. More and more, to cite one example, it looks as though mice and people will turn out to have very different brains not because the genes encoding their brain cells are so different, but because the instructions that regulate how many times those cells reproduce during development are different--producing a far bigger brain in a human than in a mouse.

The discovery of the larger-than-expected conserved areas is the most important thing to come out of the mouse DNA sequencing so far. Another interesting finding is the roughly 300 genes that are unique to mice:

But the comparison has also revealed genetic differences too. Mice have around 300 genes humans do not and vice versa. The biggest disparities are linked to sex, smell, immunity and detoxification.

All are genes which help animals adapt to new environments, infections and threats. "All the fast things that happen in evolution are down to life-or-death conflicts, either with other organisms, or within species for mate selection," says Chris Ponting, head of a team at the MRC Functional Genetics Unit in Oxford, UK.

I will be very curious to see whether some scientists eventually track some of those genes to viruses. It is quite possible that viral infections left genes behind at some point and that those genes turned out to do useful things for mice.

These results suggest a much bigger role for RNA that does not code for peptides. Much of the DNA that was unexpectedly found to be conserved (not changed by the accumulation of random mutations) between humans and mice, yet does not code for proteins, may instead code for regulatory RNA molecules.

RNA, a more ancient chemical version of DNA, performs many basic tasks in a cell, one of which is to form a copy or transcript of a gene and direct the synthesis of the gene's protein. Recently, some of these RNA transcripts have been found to have executive roles all their own, without making any protein. An RNA gene is responsible for the vital task of shutting all the genes on one of the two X chromosomes in each female cell, ensuring that women get the same dose of X-based genes as men, who have just one X chromosome.

The mouse genome sequencing results have provided an immediate benefit for understanding the human genome by helping to identify an additional 1200 human genes that had gone unrecognized.

More than 2,000 of the shared regions identified in this study (out of 3,500) do not contain genes. What precisely these non-gene regions, sometimes called 'junk DNA', are doing in the genome is not yet known.

The consortium researchers discovered about 9,000 previously unknown mouse genes and about 1,200 previously unknown human genes. The mouse genome is 14 percent smaller than the human genome and contains about 2.5 billion letters of DNA.

The genetic differences between humans and mice turn out to be greater than expected:

In the Dec. 5 issue of the journal Nature, Pevzner and other scientists in the 31-institution Mouse Genome Sequencing Consortium published a near-final genetic blueprint of a mouse, together with the first comparative analysis of the mouse and human genomes. (Read NIH news release at http://www.genome.gov/page.cfm?pageID=10005831.) In a companion paper published in today's Genome Research journal, Pevzner and Tesler (in collaboration with Michael Kamal and Eric Lander at the Whitehead/MIT Center for Genome Research) analyze human-mouse genome rearrangements for insights about the evolution of mammals, and outline their development of a new algorithm to differentiate macro- and micro-level genome rearrangements.

Their conclusion: although the mouse and human genomes are very similar, genome rearrangements occurred more commonly than previously believed, accounting for the evolutionary distance between human and mouse from a common ancestor 75 million years ago. "The human and mouse genome sequences can be viewed as two decks of cards obtained by re-shuffling from a master deck--an ancestral mammalian genome," said Pevzner. "And in addition to the major rearrangements that shuffle large chunks of the gene pool, our research confirmed another process that shuffles only small chunks." "We now estimate over 245 major rearrangements that represent dramatic evolutionary events," added Tesler. “In addition, many of those segments reveal multiple micro-rearrangements, over 3,000 within these major blocks—a much higher figure than previously thought (even though some of them may be caused by inaccuracies in the draft sequences)."

To go along with the announcement of the mouse genome sequencing, Nature has a collection of articles on the importance of mice in biomedical research. I don't like the funky page design, where each choice on that page brings up a pop-up from which one must then click through to the various articles. But some of the articles are quite interesting. For instance, in this article various scientists describe how the mouse genome sequence data speeds up their work.

While Jenkins and Copeland look back fondly on those early days, the mouse genome sequence (see page 520) is accelerating their research in ways that make their past achievements seem pedestrian. Back in the 1980s, if Jenkins and Copeland were interested in investigating a spontaneous mutation presented at the Jackson Lab's weekly 'Mutant Mouse Circus', it was a laborious process. Identifying the gene involved meant crossing about 1,000 mice to map it to a stretch of chromosome bearing about 20 candidate genes. From there, a postdoc would have to sequence all of them in both normal and mutant mice to find out which was mutated.

"It used to be one postdoc project per mutation...," says Jenkins, "...and it was like looking for a needle in a haystack," adds Copeland. But since the mouse genome sequence became available in May (at http://www.ensembl.org/Mus_musculus), researchers can simply go to the database after the initial breeding experiments and look up all the genes in the relevant chromosomal region. By knowing from their sequences what types of proteins most of them encode, they can choose one or two that look most promising to search for the mutation.

"It took us 15 years to get 10 possible cancer genes before we had the sequence," says Copeland. "And it took us a few months to get 130 genes once we had the sequence." What's more, Jenkins points out, going back and forth between the mouse and human genomes will help to target related human genes that could be candidates for drug development.

This Nature article is especially interesting because it gives a sense of the sheer size of the job of figuring out how mouse cells function and of what methods may help to make the problem more tractable.

One experimental approach in which thousands of genes can be analysed in parallel is to isolate messenger RNA and to display the gene-expression profile on a chip. When this technique is applied to tissues, data are lost because aspects of the three-dimensional structures of multiple cell types are destroyed in the biochemical extraction. Data from in situ analyses contain more detailed information about each gene, but the generation of these data is serial and significantly slower.

Gene expression is being systematically examined at the transcriptional level by several groups, for instance in the 9.5-day-old mouse embryo and in adult tissues (see Box). Two other papers in this issue3, 4 report large-scale analyses of gene expression in embryonic and adult stages, but so far have examined just 0.5% of the genes in the genome, the homologues of the genes on chromosome 21. Transcription studies in situ have relatively limited resolution, and the tissues constituting a multicellular organism are complex mixtures of different cell types. Unless each cell is individually visualized for gene expression in combination with histological criteria, important information relating to biological function is lost, for instance the subcellular compartment(s) occupied by a protein.

The Sanger Institute's Atlas project is being established to systematically examine the expression pattern of every gene product at tissue-, cellular- and subcellular-level resolution, to provide a permanent, definitive and accessible record of the molecular architecture of normal tissues and cells. The ultimate goal is to define protein expression patterns for all 30,000 mouse genes in hundreds of different tissues, all gathered in archival data sets to support research projects worldwide. Data will be collected electronically and archived with a vocabulary allowing complex queries.

A recent report that is quite independent of the mouse genome sequencing effort demonstrates how mice are viewed as such a useful tool that scientists will transfer human genes into mice in order to be able to study the genes more easily.

Philadelphia, PA –Researchers at the University of Pennsylvania School of Medicine have bred a mouse to model human L1 retrotransposons, the so-called "jumping genes." Retrotransposons are small stretches of DNA that are copied from one location in the genome and inserted elsewhere, typically during the genesis of sperm and egg cells. The L1 variety of retrotransposons, in particular, are responsible for about one third of the human genome.

The mouse model of L1 retrotransposition is expected to increase our understanding of the nature of jumping genes and their implication in disease. According to the Penn researchers, the mouse model may also prove to be a useful tool for studying how a gene functions by knocking it out through L1 insertion. Their report is in the December issue of Nature Genetics and currently available online (see below for URL).

"There are about a half million L1 sequences in the human genome, of which 80 to 100 remain an active source of mutation," said Haig H. Kazazian, Jr., MD, Chair of Penn's Department of Genetics and senior author in the study. "This animal model will help us better understand how this happens, as well as provide a useful tool for discovering the function of known genes."

In humans, retrotransposons cause mutations in germ line cells, such as sperm, which continually divide and multiply. Like an errant bit of computer code that gets reproduced and spread online, retrotransposons are adept at being copied from one location and placed elsewhere in the chromosomes. When retrotransposons are inserted into important genes, they can cause disease, such as hemophilia and muscular dystrophy. On the other hand, retrotransposons have been around for 500 to 600 million years, and have contributed a lot to evolutionary change.

It's worth noting that, according to the mouse and human genome sequencing scientists, humans carry more junk DNA than mice do, and mice may actually have just as much functional DNA as humans even though the human genome is bigger in total size. The human transposons mentioned in this report may have something to do with that state of affairs. Humans may have been under less selective pressure to keep down the amount of genetic waste (really, probably parasitic DNA) that builds up, or the human transposons might serve a more useful purpose than mouse transposons do. It will be interesting to see how work in this area progresses.

The availability of the mouse genome sequence is already accelerating efforts to understand the human genome. The sequence data will also be very helpful for scientists who use mice to study general phenomena in mammalian metabolism and cellular genetic regulation, and efforts to create genetically engineered mouse models of human illnesses will be greatly helped by the identification of mouse equivalents of human genes. Still, most of the hard work remains to be done. It is much easier to determine the primary sequence of a genome than to figure out how the expression of all its genes is controlled or how all its proteins function and interact with each other. Many more advances are needed in laboratory techniques, instrumentation, and computer modelling before we can fully understand how a single cell functions in all its complexity.

Update: 'We even have the genes that could make a tail.'

Yesterday's release also continues a pattern of humbling genetic revelations. Earlier research showed that humans had scarcely more genes than the lowly roundworm. Now there's proof that people are closely related to tiny, furry rodents.

''We even have the genes that could make a tail,'' said Dr. Jane Rogers, of the Wellcome Trust Sanger Institute in Cambridge, England.

By Randall Parker    2002 December 05 05:41 PM   Entry Permalink | Comments (0)
2002 November 14 Thursday
Dielectrophoresis Used To Move Fluids On A Chip

Many research and development groups in academia and industry are working on the design of chips that can function as micro-labs. These will make biomedical tests and manipulations of biological materials less expensive and more widely available. That will lower the cost of medical testing, lower the cost and speed the pace of basic research, and allow new types of therapies to be developed. The cost of labor and materials for biochemical manipulations can be lowered by orders of magnitude once these kinds of chips become functional. A University of Rochester group is working on the problem of how to get liquids to move around on micro-lab chips:

University of Rochester researchers are working on a new way to move and distribute microscopic amounts of fluid around a chip, essentially mimicking the work of scientists testing dozens of samples in a laboratory. The research is in response to a growing demand for "laboratories on a chip," programmable devices that automatically perform the multiple tests on much smaller amounts of material-on site and more efficiently than ever before. Researchers around the world are already working to develop chips that will allow instant glucose monitoring, DNA testing, drug manufacturing, and environmental monitoring.

In order to work, all of these chips need some sort of plumbing system to move liquid. Thomas B. Jones, professor of electrical engineering, and his team have developed a way to use the electrostatic attraction of water to electric fields, called dielectrophoresis, to divide a single drop of water into dozens of incredibly tiny droplets and move them to designated sites on a chip. The droplets can be mixed with specialized testing chemicals or biological fluids, or positioned for diagnostic tests with lasers or electrical pulses. Essentially, any laboratory test that can be shrunk to fit on a chip will be able to be serviced by the new plumbing system.

"Microchemical analysis is a rapidly advancing field, but while there are ways to test minuscule liquid volumes, no one has yet come up with a practical way to dispense and move these liquid samples around a chip," says Jones. "We're hoping to change all that. We've been able to take a single drop of water and split it up into as many as 30 droplets of specific sizes, route them around corners, send different droplets to different points on a chip and even mix different drops together."

Other microfluidic schemes use tiny channels and passages machined into substrates, but these are not only hard to make, but the pressure needed to move the fluid inside means that the slightest defect in fabrication could produce leaks. Jones' system uses narrow electrodes etched onto glass-so thin that they're almost invisible to the naked eye. AC voltage at about 60 kilohertz is applied to the electrodes and the resulting electrical force causes a "finger" to project from the drop. The finger stretches out along the electrode until it reaches the end, sort of a widened cul-de-sac. When the voltage is then switched off, the surface tension of the water itself pulls about half of the finger of water back toward the initial drop while half is left to form the droplets. This cul-de-sac can be quite a distance away across the chip-close to a centimeter in Jones' laboratory-and the path to it can even take sharp turns with ease.

Mixing different droplets together is as simple as setting the cul-de-sacs of two paths next to each other and then changing the electrical connections so that the droplets are attracted toward each other. To produce multiple droplets from a single finger, Jones widens the wires at certain areas along the path, making the finger bulge in that area and accumulating a droplet when the finger retracts.

In the same way that miniaturization changed computers from room-sized machines to pocket calculators, a similar change is coming to chemistry and the biological sciences. Familiar laboratory procedures are being automated and scaled down to the size of microchips. Some companies are even looking to such chips to manipulate and investigate individual cells, while others could benefit from a chip's ability to carry out possibly hundreds of tests on a new drug in just minutes. As the field expands, scientists are finding more uses for such micro-labs.
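
For readers who want the physics behind the "electrostatic attraction of water to electric fields" invoked above: in the textbook point-dipole approximation, the time-averaged dielectrophoretic force on a small polarizable body of radius $r$ suspended in a medium of permittivity $\varepsilon_m$ is

\[
\langle F_{\mathrm{DEP}} \rangle \;=\; 2\pi \varepsilon_m r^{3}\, \mathrm{Re}[K(\omega)]\, \nabla |E_{\mathrm{rms}}|^{2},
\qquad
K(\omega) \;=\; \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}},
\]

where $\varepsilon_p^{*}$ and $\varepsilon_m^{*}$ are the complex permittivities of the body and the medium. The force points up the gradient of field intensity when the liquid is more polarizable than its surroundings, as water in air is. Jones' finger-of-liquid actuation is governed by a related force-per-unit-length expression rather than this particle form, so treat the equation as an illustration of the mechanism, not as his working model.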

By Randall Parker    2002 November 14 11:41 AM   Entry Permalink | Comments (0)
2002 November 11 Monday
US Genomics May Drive DNA Sequencing Costs Way Down

US Genomics is developing technology to do linear single-strand DNA analysis. They are not the only ones attempting this, and it is not clear how successful they will be, but they have made progress clearing some of the hurdles their approach faces. Notice the claimed read speed in the excerpt below: at that rate they could in theory read the entire 3 billion base sequence of a human genome in less than 11 days.

Chan has developed a way to spool out the tangle of DNA in a chromosome using a 'nanofluidic' chip smaller than a computer key. Fluid flowing through the chip draws the DNA through an array of pegs like bowling pins. One end works loose and is drawn into a funnel at the end.

Rather than sequencing every letter, Chan and his team spot the differences between individuals — and use the reference genome to fill in the rest. Fluorescent tags stick to variable spots; a detector reads their order as they flow past. The speed-reading technique gets through around 200,000 letters a minute, he claims.
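
A quick back-of-the-envelope check of the "less than 11 days" figure above, taking the claimed 200,000 letters per minute at face value:

\[
\frac{3\times 10^{9}\ \text{bases}}{2\times 10^{5}\ \text{bases/min}}
\;=\; 1.5\times 10^{4}\ \text{min}
\;\approx\; 250\ \text{hours}
\;\approx\; 10.4\ \text{days}.
\]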

They are trying to develop the ability to unravel and read thru a genome as fast as a regular cell can when it replicates its genome during cell division.

The company's technologies are premised upon the direct and linear reading of large sections of genomes. Linear analysis is powerful because there is no upper limit on the size of DNA that is read. Furthermore, this is the method which nature has perfected over millions of years. DNA, during cell division, is replicated with DNA polymerase, an enzyme that tracks along DNA in a linear fashion. Identification of the bases is mediated by base-pairing and enzyme-DNA specific interactions. By reproducing nature's method of DNA reading, the highest readout speeds are possible. A human cell can replicate and read its DNA in less than thirty minutes. The company's technology is a biophysical rendering of the polymerase-DNA interaction and allows for speeds on the same time scale as nature's DNA polymerases.

U.S. Genomics's technology platform, the GeneEngine™, has two components, (1) nanotechnology systems for positioning DNA so that it can be read linearly (broadly termed DNA Delivery Mechanism(s)™) and (2) detection technologies that allow the reading of information from the DNA Delivery Mechanism(s)™. The combination of different DNA Delivery Mechanism(s)™ with particular technologies makes possible different applications in genomic analysis, such as complete genome analysis, sequencing, polymorphism analysis, and gene expression determination.

Here's the announcement for their patent for moving single DNA strands past a reader sensor.

Woburn, MA (JUNE 13 2001) – U.S. Genomics announced today that its first patent has been granted by the United States Patent and Trademark Office (6,210,896 Molecular Motors). The issued patent covers the first of a suite of proprietary techniques that U.S. Genomics has developed to allow the direct, linear reading of extremely long sequences of DNA.

Specifically, the patent covers the Company's technology for using molecules that interact with cellular polymers (such as nucleic acid -- DNA or RNA) in such a way that the molecules cause the polymers to move. The segments of the polymer that are moved by the "molecular motor" flow past a fixed point, emitting specific signals that reveal genetic information embedded on the strand of nucleic acid.

Eugene Chan, Chairman and CEO of U.S. Genomics, commented, "The granting of this first patent for U.S. Genomics is a validation of our approach to direct linear analysis of DNA. Modeled after the nearly instantaneous readings of DNA that natural cellular machinery executes, our approach to deciphering and understanding genetic information is directed towards complete-genome analysis - reading the entire sequence of genetic coding contained in a full, unbroken strand of DNA."

U.S. Genomics has developed the GeneEngine™, a set of laboratory devices that enable researchers to uncurl and separate individual strands of DNA or RNA which are then run through a microarray sequencer in extremely long, unbroken, linear segments. The genetic information captured through such direct linear readings is relatively much more comprehensive and integrated than data available through other current techniques. The molecular motors covered in this first patent provide the physical mechanism for moving the strands of DNA through the sequencer.

US Genomics is getting US military money to develop their technology to detect bioweapons attacks:

Woburn, MA (September 04, 2002) – U.S. Genomics announced today it was awarded a $499,500 contract by the Defense Advanced Research Projects Agency (DARPA) to examine the use of the Company’s direct linear DNA analysis technology to detect Class A pathogens, such as anthrax and smallpox. The contract will enable the company to study the use of its GeneEngine™ technology as a tool to create genomic maps or signatures of organisms; such maps have the potential to enable very rapid detection and identification of deadly bacteria.

While they do not sound like they will be ready to ship fully working products any time soon, they have entered into an agreement with The Wellcome Trust Sanger Institute to try out their GeneEngine technology in the study of genetic variations.

It is hard to interpret the announcement with The Wellcome Trust Sanger Institute. When will US Genomics deliver usable technology and what will that initial technology be capable of?

Woburn, MA (January 28, 2002) – U.S. Genomics and The Wellcome Trust Sanger Institute have entered into a collaboration to examine the use of U.S. Genomics’ direct, linear DNA analysis technology in research on the human genome. The partnership will study the use of this new technology to investigate human genetic data at a level of complexity, comprehensiveness, and accuracy not previously studied. The collaboration marks the first application of U.S. Genomics’ technology in an outside research setting.

Under the agreement, The Wellcome Trust Sanger Institute and U.S. Genomics will jointly employ their scientific expertise to conduct genetic research using the GeneEngine™ technology and other aspects of U.S.Genomics’ technology platform. The research collaboration will explore the application of U.S. Genomics’ technology to human genetic analysis at the highest level of detail and complexity. Financial terms of the agreement were not disclosed.

Stepping back to look at it from a higher level, what is interesting about this company and others like it is that venture capitalists are funding attempts to drive down the cost and increase the speed of DNA sequencing by orders of magnitude. Some of these companies will succeed. A lot of progress has already been made.

The US Genomics press releases are here.

By Randall Parker    2002 November 11 11:06 PM   Entry Permalink | Comments (0)
2002 November 10 Sunday
STMicro Releases Silicon DNA Analysis Chip

The semiconductor industry is going to do to biotech what it has already done to computers: make things dramatically smaller, faster, cheaper, and more powerful:

Philadelphia, October 31, 2002 - At the Chips-to-Hits conference in Philadelphia today, STMicroelectronics (NYSE:STM), the world's third largest semiconductor maker, presented a prototype silicon chip for DNA analysis that integrates both DNA amplification and detection on the same chip. This device is based on Micro-Electro-Mechanical-System (MEMS) technology that applies silicon-chip manufacturing technologies to produce miniature devices with a combination of mechanical, electrical, fluidic and optical elements.

The primary end use targeted by the DNA analysis chip is in medical diagnostics, to detect genetically related disease directly at the point of care without the delays of laboratory testing. Other applications of the DNA analysis chip include drug discovery - the search for more effective new drugs, the testing of livestock for genetic disease, and the monitoring of water supplies for biological contamination.

"The advantage of using silicon rather than plastic or glass for this function is that it has excellent thermal properties, which is extremely useful in analysis techniques like the Polymerase Chain Reaction (PCR) which are based on temperature cycling," said Benedetto Vigna, Manager of ST's MEMS Development Unit. "In addition it can be 'micromachined' readily using well-known and cost effective silicon-chip manufacturing techniques."

Compared to traditional tests, the ST silicon MEMS device offers a very compact solution that reduces the overall testing cost and delivers results in minutes. Using this technology, extremely small quantities of fluid can be analyzed; the limitation is in the external hardware used to transfer samples.

One of the world's leading manufacturers of conventional electronic silicon chips, ST also develops and manufactures silicon MEMS devices using in-house-developed technologies covering a broad range of applications. The microfluidic technology used in the DNA analysis device builds on the company's long experience in the manufacture of inkjet printer chips combining electronic and fluidic elements.

"Our goal in presenting this device to the life sciences community here at Chips-to-Hits," said Barbara Grieco, Business Development Manager in ST's Printhead and Microfluidics Business Unit, "is to identify potential partners in the biomedical field for the joint development of new devices that combine ST's knowhow in silicon MEMS technology with the partners expertise in biomedical technologies and markets." ST currently partners with leading companies in other fields for the joint development of MEMS-based devices for inkjet printers and optical switches.

The prototype DNA analysis device presented at Chips-To-Hits performs DNA amplification in microscopic channels buried in the silicon and then identifies DNA fragments in the sample. DNA amplification is performed using the Polymerase Chain Reaction technique. A prepared DNA sample mixed with suitable reagents flows into the buried channels in the chip where it is repeatedly cycled through three temperatures which doubles the quantity of DNA with each cycle. When the sample has been amplified sufficiently, it flows into a detection area on the same chip where gold electrodes are pre-loaded with DNA fragments. Fragments in the sample attach to matching fragments on the electrodes and are detected optically.
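
Since each thermal cycle ideally doubles the amount of target DNA, the amplification after $n$ cycles is roughly $2^{n}$ (real reactions fall somewhat short of perfect doubling):

\[
2^{30} \;\approx\; 1.1\times 10^{9},
\]

so on the order of 30 cycles turn a handful of template molecules into roughly a billion copies, which is why a chip that can push a small volume through the three PCR temperatures quickly can go from sample to detectable signal in minutes.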

By Randall Parker    2002 November 10 12:41 AM   Entry Permalink | Comments (0)
2002 November 06 Wednesday
Fluorescence Imaging Chip System for Massive Parallel DNA Sequencing

From the Columbia University web site of Associate Professor Jingyue Ju comes a description of a massively parallel DNA sequencing method.

Fluorescence Imaging Chip System for Massive Parallel DNA Sequencing. The use of electrophoresis for DNA sequencing has been a major bottleneck for high-throughput DNA sequencing projects. The need for electrophoresis is eliminated when sequencing DNA by synthesis, that is, when detecting the identity of each nucleotide as it is incorporated into the growing strand of DNA in a polymerase reaction. Such a scheme, if coupled to the chip format, has the potential to markedly increase the throughput of sequencing projects. Our laboratory is developing a chip-based 'sequencing by synthesis' platform. This DNA sequencing system includes the construction of a chip with immobilized single stranded DNA templates that can self prime for the generation of the complementary DNA strand in polymerase reaction, and 4 unique fluorescently labeled nucleotide analogues with 3'-OH capped by a small chemical moiety to allow efficient incorporation into the growing strand of DNA as terminators in the polymerase reaction. A 4-color fluorescence imager is then used to identify the sequence of the incorporated nucleotide on each spot of the chip. Upon removing the dye photochemically and the 3'-OH capping group, the polymerase reaction will proceed to incorporate the next nucleotide analogue and detect the next base. It is a routine procedure now to immobilize high density (>10,000 spots per chip) single stranded DNA on a 4cm x 1cm glass chip. Thus, in the chip based DNA sequencing system, more than 10,000 bases will be identified after each cycle and after 100 cycles million of base pairs will be generated from one sequencing chip. Massively parallel DNA sequencing promises to bring genetic analysis to the next level where we can envision, for example, the comparison on individual genome profiles.
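
The throughput arithmetic behind that last claim is straightforward: with more than 10,000 immobilized templates read in parallel, each cycle identifies one base per template, so

\[
10^{4}\ \text{bases/cycle} \times 10^{2}\ \text{cycles} \;=\; 10^{6}\ \text{bases per chip},
\]

before accounting for failed spots or incomplete incorporation, which would lower the usable yield.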

By Randall Parker    2002 November 06 05:39 PM   Entry Permalink | Comments (0)
2002 October 31 Thursday
HapMap Genetic Variation Mapping Project Begins

It is now believed that in most people genetic variations are inherited in groups. By looking at a relatively small number of locations in the genome it should be possible to predict (albeit with less than complete certainty) what genetic variants will be found between the tested locations (these ranges are called haplotypes). To map out and identify the marker locations from which the other locations can be predicted, an ambitious 3-year, $100 million international project is setting out to collect and analyze the DNA of several hundred humans. This project is known as the HapMap project:

Genetic information is physically inscribed in a linear molecule called deoxyribonucleic acid (DNA). DNA is composed of four chemicals, called bases, which are represented by the four letters of the genetic code: A, T, C and G. The Human Genome Project determined the order, or sequence, of the 3 billion A’s, T’s, C’s and G’s that make up the human genome. The order of genetic letters is as important to the proper functioning of the body as the order of letters in a word is to understanding its meaning. When a letter in a word changes, the word’s meaning can be lost or altered. Variation in a DNA base sequence – when one genetic letter is replaced by another – may similarly change the meaning.

More than 2.8 million examples of these substitutions of genetic letters – called single nucleotide polymorphisms or SNPs (pronounced snips) – are already known and described in a public database called dbSNP (http://www.ncbi.nlm.nih.gov/SNP/), operated by NIH. The major source of this public SNP catalog was work done by The SNP Consortium (TSC), a collaborative genomics effort of major pharmaceutical companies, the Wellcome Trust and academic centers.

The human genome is thought to contain at least 10 million SNPs, about one in every 300 bases. Theoretically, researchers could hunt for genes using a map listing all 10 million SNPs, but there are major practical drawbacks to that approach.

Instead, the HapMap will find the chunks into which the genome is organized, each of which may contain dozens of SNPs. Researchers then only need to detect a few tag SNPs to identify that unique chunk or block of genome and to know all of the SNPs associated with that one piece. This strategy works because genetic variation among individuals is organized in "DNA neighborhoods," called haplotype blocks. SNP variants that lie close to each other along the DNA molecule form a haplotype block and tend to be inherited together. SNP variants that are far from each other along the DNA molecule tend to be in different haplotype blocks and are less likely to be inherited together.

"Essentially, the HapMap is a very powerful shortcut that represents enormous long-term savings in studies of complex disease," said David Bentley, Ph.D., of the UK's Wellcome Trust Sanger Institute.

Since all humans descended from a common set of ancestors that lived in Africa about 100,000 years ago, there have been relatively few generations in human history compared to older species. As a result, the human haplotype blocks have remained largely intact and provide an unbroken thread that connects all people to a common past and to each other. Recent research indicates that about 65 to 85 percent of the human genome may be organized into haplotype blocks that are 10,000 bases or larger.

The exact pattern of SNP variants within a given haplotype block differs among individuals. Some SNP variants and haplotype patterns are found in some people in just a few populations. However, most populations share common SNP variants and haplotype patterns, most of which were inherited from the common ancestor population. Frequencies of these SNP variants and haplotype patterns may be similar or different among populations. For example, the gene for blood type is variable in all human populations, but some populations have higher frequencies of one blood type, such as O, while others have higher frequencies of another, such as AB. For this reason, the HapMap consortium needs to include samples from a few geographically separated populations to find the SNP variants that are common in any of the populations.

Charles Rotimi, Ph.D., leader of the Howard University group collecting the blood samples in Nigeria, said, "We need to be inclusive in the populations that we study to maximize the chance that all people will eventually benefit from this international research effort."

Because of the block pattern of haplotypes, it will be possible to identify just a few SNP variants in each block to uniquely mark, or tag, that haplotype. As a result, researchers will need to study only about 300,000 to 600,000 tag SNPs, out of the 10,000,000 SNPs that exist, to efficiently identify the haplotypes in the human genome. It is the haplotype blocks, and the tag SNPs that identify them, that will form the HapMap.
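
The quoted numbers hang together on a back-of-the-envelope reading (my arithmetic, not the consortium's): one SNP per roughly 300 bases across a 3-billion-base genome gives about 3,000,000,000 / 300 = 10,000,000 SNPs, and haplotype blocks of 10,000 bases or more imply on the order of 3,000,000,000 / 10,000 = 300,000 blocks. Tagging each block with one or two SNPs lands in the quoted range of 300,000 to 600,000 tag SNPs, a 17- to 33-fold reduction in genotyping work compared with typing all 10 million SNPs directly.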

By Randall Parker    2002 October 31 10:38 PM   Entry Permalink | Comments (0)
2002 October 30 Wednesday
Cal Tech Chemical Lab On A Silicon Chip

The research group of California Institute of Technology biophysicist Stephen Quake has built a silicon chip that can function as a mini chemistry lab:

To show what such a device is capable of, Quake's team have made an array in which 3,574 microvalves can separate an injected fluid into 1,000 tiny chambers in a 25x40 grid. Each chamber contains just a quarter of a billionth of a litre of liquid. If all the chambers were full, they'd contain less than a hundredth of a raindrop. Each chamber can be individually emptied.

This is from the research paper's abstract.

Microfluidic Large-Scale Integration

Todd Thorsen,1 Sebastian J. Maerkl,1 Stephen R. Quake 2*

We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory.
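
To make the multiplexor idea concrete, here is a minimal sketch of the bookkeeping, assuming the binary addressing scheme the abstract describes, in which each address bit is realized as a complementary pair of control lines so that N flow channels need about 2 log2(N) control inputs; the function name is mine, not the paper's:

    import math

    def control_lines_needed(n_channels: int) -> int:
        # Binary fluidic multiplexing: one complementary pair of valve
        # control lines per address bit, so 2 * ceil(log2(N)) lines total.
        return 2 * math.ceil(math.log2(n_channels))

    print(control_lines_needed(1000))  # -> 20

That exponential leverage (roughly 20 control inputs addressing 1,000 chambers) is what the abstract means by allowing complex fluid manipulations with a minimal number of inputs.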

The Quake group's web site at Cal Tech has all sorts of interesting information. You can read PDF reprints of their published papers here. There is a list of the group's major areas of research here. One of the projects is a miniaturized DNA sequencing device:

Novel DNA Sequencing Techniques

Marc Unger

The current paradigm in DNA sequence determination is Sanger Dideoxy sequencing by electrophoresis on polyacrylamide gels. This has limitations both in terms of speed (running a gel takes several hours) and read frame (a maximum of approximately 500 base pairs may be sequenced at one time). In order to surpass these limitations, we are developing novel a DNA sequencing technology based on microfabricated flow channels and single molecule fluorescence detection. Both microfabrication and single molecule detection have advanced to the point where straightforward techniques are readily available in the literature, and the equipment required can be purchased off-the-shelf. Work to date has focussed on microchannel preparation and calibrating our optical detection system. The picture below is a fluorescence image of single dye molecules (tetramethylrhodamine isothiocyanate) on a glass coverslip, at a magnification of approximately 1000x on your screen. Our microchannels are also working quite well - fabrication is now reliable, reproducible, and fairly easy. We are now beginning work on the chemistry of attaching molecules to the surfaces of the flow channels.

By Randall Parker    2002 October 30 12:05 AM   Entry Permalink | Comments (0)
2002 October 28 Monday
Sequenom Searching For SNP Disease Risk Factors

Wired News writer David Ewing Duncan paid a visit to Sequenom of San Diego to become the first person to be tested for all known genetic markers thought to contribute to diseases. While a couple of risk factors for high blood pressure and heart problems were uncovered, his genetic screening results came out looking favorable for a longer-than-average life.

Toni Schuh, CEO of Sequenom, told Genomics & Proteonomics magazine that Sequenom is rapidly scaling up the number of genetic markers it can watch in a large group of people in order to identify genetic variations that contribute to disease:

“We are doing large-scale genetics discovery studies to find the genes that harbor the predisposition to disease and to nail down the variations in these genes that turn them into risk genes. A year ago if a geneticist wanted to do this, he would have 400 to 800 microsatellite markers to cover the entire human genome. Two years ago, if somebody had these 400 markers to do a study on 500 people, that was considered a big genetic study. We have 11,000 people in our healthy population now and our total base of DNA markers is more than 100,000. The dramatic change in scale in terms of markers is 100 times more than a few years ago. That’s the very significant inflection point in the power of pharmacogenetics and medical genetics in general,” says Schuh.

Update: The article at the previous link claims that 4 million SNPs have been discovered so far, with possibly millions more yet to be found. It also reports on many biotech instrumentation companies that are rapidly introducing new products to further accelerate and automate DNA assaying.

While Sequenom hasn't identified as many SNPs as Perlegen, it is offering more than 2 million designed SNP assays to its customers.

At the end of last year, Sequenom Inc., San Diego, completed a portfolio of 400,000 different working SNP assays, which is now available to their customers on its recently launched Web site, www.realSNP.com. The site is named as such “because the SNPs are real,” says Charles Cantor, chief scientific officer at Sequenom. The Web site contains information on how to run the assays, as well as information on population frequency of SNPs in various populations. Sequenom continues to design SNP assays for every SNP in the public domain, so the RealSNP.com Web site currently has more than 2 million designed assays. “And we know from past experience that about 90% of those will work the first time they’re tried without any optimization,” says Cantor.

We are on the edge of an explosion in the number of known genetic risk factors for diseases.

By Randall Parker    2002 October 28 11:57 PM   Entry Permalink | Comments (2)
Perlegen finds 1.7 million SNPs

Perlegen, a two-year-old spin-off of Affymetrix, has announced that it has compared the DNA sequences of 50 people and identified all the single nucleotide polymorphisms (single genetic letter differences) found in that group. Perlegen is keeping that information to itself, to use in business deals with pharmaceutical companies aimed at identifying which genetic variations cause adverse drug reactions:

Yet there was only a muted celebration a few weeks ago, when Perlegen's scientists decided they'd found the last of the 1,717,015 SNPs that biotech firms have been seeking since the human genome was sequenced in 2000.

"We cracked a single bottle of cheap champagne," quipped Perlegen chief scientist David Cox.

Two simple reasons explain Perlegen's restraint.

Although the company claims to have found just about every SNP in creation, the scientific community hasn't the proof. Perlegen won't publish its SNP map. Instead it will try to recoup its investment by helping drug firms use these subtle genetic variations to determine why some people react badly to medicines -- or get sick in the first place.

Perlegen will also use this data to look for genetic variations that predispose people to various illnesses. It has just announced a large collaborative effort to search for genetic risk factors for type 2 diabetes.

Perlegen will use human genetic variations (single nucleotide polymorphisms or SNPs) it has discovered and its high-density oligonucleotide array-based SNP genotyping capability to assist researchers from around the world in intensifying their search for genes involved in a disease that affects approximately 15 million people in the United States and millions more around the world. The study includes geneticists at the University of Michigan, the University of Southern California, the National Public Health Institute of Finland, and the National Human Genome Research Institute in Bethesda, Maryland. During the past nine years, by studying the DNA of type 2 diabetes patients and their families, the FUSION group (Finland-United States Investigation Of Non-insulin-dependent diabetes mellitus genetics) has narrowed its search for the genes that play a causal role in the disease to certain areas of the human genome. An area of particularly high interest falls on the long arm of chromosome 6. Now, with help from Perlegen's scientists and the company's innovative genotyping technology, the FUSION group hopes to discover the gene or genes in that region.

What I find curious about their approach is that they used only 50 people as DNA sample donors. Surely it is not possible to find all the genetic variations that matter by looking at just 50 people: there are sequence variants that show up in less than 2% of the population, and one would need DNA from more than 50 people to reliably catch them all. Likely they took this approach for cost reasons. Most medically important SNPs are probably among the 1.7 million they identified, and this is a cost-effective way to search for them.
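
A quick calculation backs up that intuition. Assuming the 50 donors are unrelated, so the sample holds 100 chromosomes drawn independently from the population, the chance that a variant with allele frequency q never shows up at all is (1 - q) raised to the 100th power. The helper below is my own illustration, not anything Perlegen has published:

    def prob_variant_missed(q: float, n_donors: int = 50) -> float:
        # Chance that a variant with population allele frequency q appears on
        # none of the 2 * n_donors sampled chromosomes (unrelated donors,
        # independent sampling assumed).
        return (1.0 - q) ** (2 * n_donors)

    for q in (0.05, 0.02, 0.01):
        print(f"allele frequency {q:.0%}: missed with probability {prob_variant_missed(q):.1%}")

Under those assumptions a 5% variant is almost never missed, but roughly 13% of variants at 2% frequency, and over a third of those at 1% frequency, would not appear in the sample at all.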

Update: Perlegen can identify all the unique sequences (or perhaps only those among the 1.7 million SNPs they've mapped?) in an individual's genome in 10 days:

In theory, that goal is within reach, because researchers can now scan the entire genome to look for DNA variants of interest. Perlegen Sciences, a closely held company in Mountain View, Calif., recently announced that it can parse a person's genome in about ten days using so-called DNA chips--an astounding advance, given that it took an international army of scientists all of the 1990s to create the first draft of the human genome.

Update: Keep in mind that just as DNA sequencing technology advanced to make SNP detection faster and cheaper for Perlegen's recent work, it will continue to advance, and SNP detection costs will probably fall by orders of magnitude over the next ten years. Many more research groups and companies will be able to collect SNP maps of larger groups of people for less money in the future. A company that seeks to make money off SNP maps of smaller groups of people had better find useful SNP variations fairly quickly if it wants to profit from its information, because collecting that information will only become cheaper going forward. If you want to get a sense of just how rapidly biotech assay tools are going to advance, read the FuturePundit Biotech Advance Rates archive.

By Randall Parker    2002 October 28 11:22 PM   Entry Permalink | Comments (0)
2002 October 24 Thursday
Rate Of Biotech Medicine Advance Accelerating

The rate of new biotech product development will increase even further as the ability to manipulate genes improves. Advances in the power of the technological tools will lower the cost of new product development and enable new types of products to be developed:

A record number of biotech medicines has reached the final stage of clinical trials, positioning the industry to produce as many products in the next few years as it has during the past 20.

Data compiled by the Pharmaceutical Research and Manufacturers of America show that of 371 biotech medicines now undergoing commercial tests, 116 have reached Phase III clinical trials -- the last step before the U.S. Food and Drug Administration decides whether they are safe and effective enough to sell to consumers.

This is the original press release from the Pharmaceutical Research and Manufacturers of America that probably inspired the San Francisco Chronicle article:

371 Biotechnology Medicines in Testing Offer Hope of New Treatments for Nearly 200 Diseases

October 21, 2002

Washington, D.C. – More than 250 million people have already benefited from medicines and vaccines developed through biotechnology, and a new survey by the Pharmaceutical Research and Manufacturers of America (PhRMA) identifies 371 more biotechnology medicines in the pipeline. Nearly 200 diseases are being targeted by this research conducted by 144 companies and the National Cancer Institute.

These new medicines – all of which are in human clinical trials or are awaiting FDA approval –include 178 new medicines for cancer, 47 for infectious diseases, 26 for autoimmune diseases, 22 for neurologic disorders, and 21 for HIV/AIDS and related conditions.

Approved biotechnology medicines already treat or help prevent heart attacks, stroke, multiple sclerosis, leukemia, hepatitis, rheumatoid arthritis, breast cancer, diabetes, congestive heart failure, lymphoma, kidney cancer, cystic fibrosis and other diseases.

"These medicines are the result of extensive efforts to understand the human genome and penetrate the molecular basis of disease," said PhRMA President Alan F. Holmer. "The cutting-edge medicines in development – many of which attack or prevent disease in fundamentally different ways – offer hope to patients with diseases for which we have no cures."

Among the new biotechnology medicines in development are an epidermal growth factor inhibitor that targets and blocks signaling pathways used to promote the growth and survival of cancer cells; monoclonal antibodies – or laboratory-made versions of one of the body’s own weapons against disease – that target asthma, Crohn’s disease, rheumatoid arthritis, lupus, various types of cancer, and other diseases; and therapeutic vaccines, designed to jump start the immune system to fight such diseases as AIDS, diabetes, and several types of cancer.

Researchers are also pursuing antisense medicines – which interfere with the signaling process that triggers disease pathways for AIDS, several types of cancer, Crohn’s disease, heart disease, and psoriasis, and gene therapies, which augment normal gene functions or replace or inactivate disease-causing genes, for hemophilia, several cancers, cystic fibrosis, heart disease, and other diseases.

PhRMA represents the country’s leading research-based pharmaceutical and biotechnology companies, which are devoted to inventing medicines that allow patients to live longer, healthier, and more productive lives. The industry invested more than $30 billion in 2001 in discovering and developing new medicines. PhRMA companies are leading the way in the search for new cures.

By Randall Parker    2002 October 24 12:19 PM   Entry Permalink | Comments (0)
2002 October 21 Monday
Gene Vaccines Accelerating Vaccine Development

The article discusses why gene vaccines are cheaper, faster to develop, usable for more purposes, and capable of being delivered in more ways than standard vaccines. Gene vaccines may even help slow aging:

Gene vaccines hold special promise as weapons against diseases too complex or dangerous for traditional immunology. Already, they've proven successful in hundreds of animal trials against bioweapons like anthrax and the plague, as well as against pandemics like malaria and TB, which claim millions of lives each year. In July, Oxford scientist Adrian Hill began testing a gene-based malaria vaccine on hundreds of at-risk people in Gambia.

Closer to home, a gene vaccine against melanoma has completed three rounds of clinical trials on humans and appears ready to be submitted to the FDA for final approval. When injected directly into cancerous tumors, the vaccine, called Allovectin-7, causes proteins to grow on the tumor's surface — which in turn stimulates the immune system. The drug's manufacturer, Vical, is reviewing data from the experiments in hopes of presenting them to the FDA.

By Randall Parker    2002 October 21 11:34 AM   Entry Permalink | Comments (0)
2002 October 19 Saturday
Keeping Brain Cells Alive Longer In Vitro

Imagine scaling this up to an even longer period of time and even more cells. Eventually they'll have the ability to keep Spock's brain alive:

A way of keeping slices of living brain tissue alive for weeks has been developed by a biotech company. This will allow drug developers to study the effect of chemicals on entire neural networks, not just individual cells.

"We are building stripped-down mini-brains, if you will, directly on a chip," says Miro Pastrnak, business development director of Tensor Biosciences of Irvine, California.

By Randall Parker    2002 October 19 06:00 PM   Entry Permalink | Comments (0)
2002 October 11 Friday
Personal DNA Sequencing Affordability

This article projects it will take at least 5 years before personal DNA sequencing is affordable:

US Genomics in Massachusetts has developed a machine that scans a single DNA molecule 200,000 bases long in milliseconds. For now, it untangles the DNA and scans the molecule by picking out fluorescent tags located every 1000 base pairs or so.

But chief executive Eugene Chan says the company expects to be able to read sequences one base at a time in three or four years. "Our goal is to sequence the genome instantaneously," he says.

Other firms, such as Texas-based VisiGen Biotechnologies and British company Solexa of Essex are also trying the single-molecule approach. The consensus is that it will take at least five years before sequencing technology reaches the point where it's fast and cheap enough to make personal genomics feasible. What's more, it also has to be highly accurate.

You can find my previous post about Solexa here and one about nanopore technology for rapid DNA sequencing here. Also, once personal DNA sequencing becomes cheap the mating dance will change, and personal DNA privacy will become impossible to protect.

By Randall Parker    2002 October 11 07:43 PM   Entry Permalink | Comments (0)
2002 October 10 Thursday
Microtiter Plates Next DNA Sequencing Cost Reduction?

Genome Therapeutics Corp. has won an NIH grant to try to reduce DNA sequencing costs by an order of magnitude:

Reflecting a commitment to delivering high-quality genomics services, the commercial services division of Genome Therapeutics Corp. (Nasdaq: GENE), GenomeVision(TM) Services, has received a $1.6 million grant from the National Human Genome Research Institute (NHGRI) for the advanced development of genomic technologies. As the only commercial sequencing center in the federally-funded Human Genome Project and a major participant in the Rat Genome Project, GenomeVision Services has continually worked to advance its own technologies and practices in order to help streamline critical parts of the genome sequencing process, such as sample preparation and DNA analysis.

This is a refinement of current techniques:

The goal of this two-year grant, which is separate from previous awards from the NHGRI, is to achieve a five to ten-fold reduction in the sequencing costs for large-scale genomic sequencing projects. Specifically, GenomeVision Services is working to reduce the minimum amount of DNA needed, from microliters to nanoliters, for standard instruments to perform analysis using microtiter plates. In addition to plates that use smaller sample amounts, GenomeVision Services is also developing plates that allow the removal of contaminants while still enabling the retrieval of the DNA in the sample for additional analysis. Genome Therapeutics retains all rights to the microtiter plates, which are available for licensing.

By Randall Parker    2002 October 10 02:18 PM   Entry Permalink | Comments (0)
2002 September 23 Monday
Millionaires Paying For Gene Sequencing

Craig Venter says he's already signed up several millionaires as customers:

The newspaper said that for £400,000 (US$621,500), a person would get details of their entire genetic code within 1 week. "Armed with such information, the individual would be able to check for mutations linked with illnesses such as cancer and Alzheimer's," the Sunday Times reported.

By Randall Parker    2002 September 23 02:23 PM   Entry Permalink | Comments (0)
UK Start-up: Your genetic code in a day

The company's name is Solexa:

A British company says it is close to perfecting a gene sequencing method that could "read" someone's genome in a day.

From Solexa's web site:

Solexa was established in 1998 to develop and commercialize a revolutionary new nanotechnology, called the Single Molecule Array™, that allows simultaneous analysis of hundreds of millions of individual molecules.

We are applying this technology to develop a method for complete personal genome sequencing, called TotalGenotyping™. This will overcome the cost and throughput bottlenecks in the production and application of individual genetic variation data that are holding back the benefits to medicine that can flow from the genome revolution. Solexa’s technology will offer a potential five order of magnitude efficiency improvement, well beyond the range possible from existing technologies.

Our technical approach combines proprietary advances in synthetic chemistry, surface chemistry, molecular biology, enzymology, array technology and optics. Based on Single Molecule Arrays with the equivalent of hundreds of millions of sequencing lanes, we will deliver base-by-base sequencing on a chip without any need for amplification of the DNA sample.

To date we have raised over £15 million (€22 million; $23 million) in venture capital investment that has enabled us to make rapid progress with the development of our technologies. We have attracted a talented and multidisciplinary team of scientists to accelerate prototype development.

Solexa occupies its own customized 14,000 sq ft facilities in Cambridge, UK.

You can also find more on their technology here.

By Randall Parker    2002 September 23 12:31 PM   Entry Permalink | Comments (0)
Venter: Personal DNA sequencing $1000 in 10 years

Sounds like Craig Venter is expanding The Institute for Genomic Research to develop faster DNA sequencing machines:

And you expect to be able to get that cost down to $1,000?

That’s the goal.

How far off is that?

Somebody could make a discovery tomorrow, and it could be a year from now -- or it could take 20 years.

If you take the extrapolation of the 15 to 20 years of the public genome project and $5 billion, to Celera doing it for less than $100 million in nine months, to within this year, we’d be able to sequence the essential components of your genome in less than a week for about a half-million dollars.

If you extrapolate from that curve, it’s totally reasonable to expect with new technology development within five years, we should be there. I’ve given it a margin of five to 10 years.

However, he's still denying the obvious link between genes and personality types. Oh well, doesn't matter. Lots of neurobiologists are chasing down those links.

By Randall Parker    2002 September 23 11:10 AM   Entry Permalink | Comments (0)
2002 September 04 Wednesday
Nanopore technology: sequence your DNA in two hours!

Current DNA sequencing techniques involve taking the DNA from a person or other organism and then making billions of copies of it to run thru sequencing machines. This is slow, expensive, and error prone. Back in 1989 UCSC professor David Deamer first conceived of the idea of making nanopores thru which a single strand of DNA would pass. As the strand passes thru the nanopore, sensors built into the nanopore structure read each successive DNA base (each letter position in the genetic code) from the strand's changing electrical signal. This approach holds the potential to miniaturize sequencers, eliminate lots of expensive reagents, and speed sequencing by many orders of magnitude.
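
To make the signal-to-base idea concrete, here is a minimal toy sketch in Python. It assumes, purely for illustration, that each base blocks the pore's ionic current by a distinct fixed amount and that the decoder simply picks the nearest reference level; the current values are invented and do not come from any real nanopore device.

# Toy nanopore base-caller. Current levels are invented for illustration;
# a real device would use calibrated signatures for each nucleotide.
REFERENCE_LEVELS = {"A": 60.0, "C": 52.0, "G": 45.0, "T": 38.0}  # hypothetical picoamps

def call_base(current_pa: float) -> str:
    """Assign a single current reading to the nearest reference level."""
    return min(REFERENCE_LEVELS, key=lambda base: abs(REFERENCE_LEVELS[base] - current_pa))

def call_sequence(readings: list[float]) -> str:
    """Decode one reading per base into a DNA sequence string."""
    return "".join(call_base(r) for r in readings)

# A short burst of (made-up) readings as one strand transits the pore:
print(call_sequence([59.1, 51.7, 44.2, 39.0, 60.5]))  # -> ACGTA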

One of the teams attempting to develop nanopore DNA sequencing technology is at Harvard. From Harvard Biology Professor Daniel Branton's home page:

A novel technology for probing, and eventually sequencing, individual DNA molecules using single-channel recording techniques has been conceived. Single molecules of DNA are drawn through a small channel or nanopore that functions as a sensitive detector. The detection schemes being developed will transduce the different chemical and physical properties of each base into a characteristic electronic signal. Nanopore sequencing has the potential of reading very long stretches of DNA at rates exceeding 1 base per millisecond.

Biophysics Ph.D. candidate Lucas Nivon, who works in the lab of Professor Dan Branton has this to say about the potential for nanopore technology:

Professors Dan Branton and David Deamer developed a new way to sequence single-stranded DNA by running it through a protein nanopore. Using this method, we could potentially sequence a human genome in 2 hours.

Well, 1 base per millisecond translates into about 86 million bases per day. With a human genome of roughly 2.9 billion bases, a single pore would take slightly over a month to sequence an entire genome. But Nivon's 2 hour estimate is plausible because many nanopores could be placed into a single device: with 500 nanopores in one device the human genome could be decoded in less than 2 hours. The first article in the list below uses the 500 nanopore example though it quotes a 24 to 48 hour sequencing time. Possibly the different predicted sequencing times refer to different generations of this technology.
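
The arithmetic behind those estimates is easy to check. Here is a quick sketch using only the figures from the paragraph above (1 base per millisecond, a roughly 2.9 billion base genome, and a hypothetical 500-pore device):

# Back-of-the-envelope nanopore sequencing times from the figures above.
BASES_PER_SECOND_PER_PORE = 1_000    # 1 base per millisecond
GENOME_SIZE = 2.9e9                  # approximate human genome, in bases

bases_per_day = BASES_PER_SECOND_PER_PORE * 86_400        # ~86.4 million bases/day per pore
single_pore_days = GENOME_SIZE / bases_per_day            # ~33.6 days for one pore
hours_with_500_pores = single_pore_days * 24 / 500        # ~1.6 hours for a 500-pore device

print(f"{bases_per_day / 1e6:.1f} million bases/day, "
      f"{single_pore_days:.1f} days per pore, "
      f"{hours_with_500_pores:.1f} hours with 500 pores")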

For a more detailed discussion of this topic see these articles:


By Randall Parker    2002 September 04 12:16 PM   Entry Permalink | Comments (1)
2002 August 31 Saturday
Will biological technologies advance as rapidly as electronic technologies?

How fast will biotechnology advance? Will it be extremely difficult and time consuming to discover the genetic causes of various human characteristics or the genetic variations that contribute to disease? We will start by taking a look at some of the known rates of technological advance in the electronics industry. Then we'll look at biotech and see if we can find similar rates of advance in crucial biotechnologies.

In the electronics industry it is well known that microprocessor speed doubles about every 18 months. Intel co-founder Gordon Moore in 1965 famously stated Moore's Law (more about it here) which predicted a microprocessor speed doubling rate that would last for decades. He originally predicted a 1 year doubling rate. But the rate of progress slowed to an 18 month doubling rate in the late 1970s. Gordon Moore is now predicting that in a few generations the microprocessor speed doubling rate will slow to a three year interval.

While the microprocessor speed doubling rate has attracted the most attention in the popular press there are other electronics technology doubling rates that are of equal or greater importance. Two big ones are hard disk storage capacity and fiber optic transmission bandwidth. In contrast to Moore's Law for microprocessor speed the hard disk storage doubling rate has actually accelerated in recent years:

Throughout the 1970s and '80s, bit density increased at a compounded rate of about 25 percent per year (which implies a doubling time of roughly three years). After 1991 the annual growth rate jumped to 60 percent (an 18-month doubling time), and after 1997 to 100 percent (a one-year doubling time).

Fiber optic capacity is doubling at an even faster rate. The number of pulses per laser is doubling once every 18 months while the number of laser frequencies per optical fiber is doubling once every 12 months. The combination of more lasers per fiber and more information sent per laser yields a combined doubling period of less than 8 months. So in 3 years we can expect the transmission capacity of a single optical fiber to go thru 5 doublings, which translates into a 32 times increase in capacity per fiber. This is an astounding rate of progress.
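
These figures are straightforward to verify. The sketch below converts a compound annual growth rate into a doubling time (matching the disk density parentheticals quoted above) and combines the two fiber optic doubling rates into a single effective doubling period, assuming simple exponential growth throughout:

import math

def doubling_time_years(annual_growth: float) -> float:
    """Years to double at a compound annual growth rate (e.g. 0.60 for 60%)."""
    return math.log(2) / math.log(1 + annual_growth)

for rate in (0.25, 0.60, 1.00):
    print(f"{rate:.0%}/year -> doubles every {doubling_time_years(rate):.1f} years")
# 25% -> ~3.1 years, 60% -> ~1.5 years, 100% -> 1.0 year

# Fiber: pulses per laser double every 18 months, laser frequencies per fiber
# every 12 months. The doublings add in the exponent, so the combined period is:
combined_months = 1 / (1 / 18 + 1 / 12)                      # 7.2 months
doublings_in_3_years = 36 / combined_months                  # 5 doublings
print(f"Combined doubling every {combined_months:.1f} months, "
      f"{2 ** doublings_in_3_years:.0f}x capacity in 3 years")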

So what does all of this have to do with the future of biotechnology? Well, certainly computers and computer networks are vital for doing biological research and biotech development work. So biology will advance more rapidly in the future than it has in the past simply because supporting technologies that are not specific to biology are advancing so rapidly. But what is more interesting is the progress of technologies that are specific to understanding and manipulating biological systems. While what follows is far from a complete picture of biotech rates of advance, even this partial picture makes clear that we can expect revolutionary advances in biotechnology within the lifetimes of most of us.

This recent article in the New York Times is about a project in Iceland to do SNP (Single Nucleotide Polymorphism - a single letter position in the genome that can vary from person to person) mapping to hunt for genetic causes of diseases. They mention that their cost of doing each SNP position analysis is 50 cents. Okay, some scientists estimate that the number of important SNPs in humans is about 100,000 SNPs (other estimates range as high as 400,000). These are SNPs that occur in areas of the genome that get expressed. That means that if you happen to have a spare $50,000 (in US dollars) lying around you can have your own SNP map done now. Pretty pricey, but literally millions of millionaires today could afford to have their SNPs mapped (hey you multimillionaires: be the first person in your social circle to know your DNA SNP map!). I will leave for a later post how we personally and collectively will be able to benefit some day from having our personal SNP maps done.

Since $50,000 is a large chunk of cash for most of us the really interesting question is this: How fast will SNP mapping costs fall? To start with, it would help to have data on how SNP mapping costs have fallen in the past. It looks like SNP mapping costs haven't fallen at all in the last 3 years: this Wired article from 1999 quotes a cost for SNP mapping of 50 cents, which is the same as the price quoted in the June 2002 NY Times article. However, the Wired article claims that Glaxo's Luminex Bead Technology may eventually reduce SNP costs to one one-thousandth of a cent per SNP. At that price, having 100,000 SNPs checked would cost you one whole US dollar. The doctor's visit to draw the blood (or perhaps to take a skin sample) and send it to the lab would cost more than the test itself. One can imagine mass screening programs run at work places and schools as a way to drive the total cost closer to the cost of the test itself. When something like the Luminex Bead Technology makes it to market, the vast bulk of the populations of the industrialized countries will be able to get their personal SNP maps done. In later posts we'll explore the many ways this information will be used.

That Wired article makes no claim as to when this huge reduction in SNP mapping costs will happen. But the Cambridge Healthtech Institute site claims the biotech industry is targeting 1 cent per SNP within 2 years. That would put the cost per person for SNP mapping at about $1000, or, if one accepts the higher estimate of 400,000 important SNPs in the genome, about $4000 per person. Quite affordable for the affluent person who really must have everything. The CHI article also cites an SNP assay system available now from Affymetrix, using their gene array chip technology, that lowers SNP assay costs to 30 cents per location.
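
Putting the per-SNP price points from the last few paragraphs side by side makes the cost trajectory clear. This quick sketch uses only the prices and panel-size estimates cited above:

# Cost of a personal SNP map at the per-SNP prices discussed above.
PRICES_PER_SNP = {                       # US dollars per SNP
    "today (50 cents)":             0.50,
    "Affymetrix array (30 cents)":  0.30,
    "2-year industry target":       0.01,
    "Luminex bead (projected)":     0.00001,   # one one-thousandth of a cent
}
PANEL_SIZES = (100_000, 400_000)         # low and high estimates of important SNPs

for label, price in PRICES_PER_SNP.items():
    costs = ", ".join(f"${price * n:,.2f} for {n:,} SNPs" for n in PANEL_SIZES)
    print(f"{label}: {costs}")
# 50 cents/SNP -> $50,000 or $200,000; 1 cent -> $1,000 or $4,000;
# a thousandth of a cent -> $1 or $4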

Since not all DNA sequence differences are SNP differences there will still be uses for other types of DNA assaying technology. Basic sequencing of entire genomes will continue to have uses for human health and other purposes, and advances are happening in basic DNA sequencing technology as well. The Cambridge Healthtech Institute page mentions a nanopore sequencing method that may eventually be able to sequence over 1000 DNA locations (with each location referred to as a base pair or bp) per second. That would be over 86 million bp/day per instrument, which translates into the ability to sequence the entire human genome in a little over a month. Compare that to the several year time period that the human genome sequencing project took (and that used many DNA sequencing machines - anyone know how many?). In the short term they cite a 1 to 2 year industry goal to produce machines that can sequence over 1 million bp/day, as compared to the current high end of 200,000-300,000 bp/day per instrument. To put this in perspective, they are projecting an advance in DNA sequencing throughput that yields a doubling rate faster than the Moore's Law doubling rate for microprocessors.
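
To check that last claim, here is the doubling time implied by moving from roughly 250,000 bp/day (the midpoint of the current range cited above) to the 1 million bp/day goal over 1 to 2 years, against an 18 month Moore's Law benchmark:

import math

CURRENT_BP_PER_DAY = 250_000      # midpoint of the 200,000-300,000 range above
TARGET_BP_PER_DAY = 1_000_000     # stated 1-2 year industry goal

doublings = math.log2(TARGET_BP_PER_DAY / CURRENT_BP_PER_DAY)    # 2 doublings
for years in (1, 2):
    months_per_doubling = years * 12 / doublings
    print(f"Goal in {years} year(s) -> doubling every {months_per_doubling:.0f} months "
          f"(Moore's Law benchmark: ~18 months)")
# 1 year -> 6 months per doubling; 2 years -> 12 months per doubling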

DNA sequencing can be thought of as a way to read structure. It doesn't by itself explain how that structure functions or when it is functioning in a particular way. However, advances are also being made in methods for monitoring the activity level or state of each of the genes in a cell. It used to be that just measuring the activity level of a single gene was quite difficult; now Affymetrix GeneChip arrays can measure the activity of tens of thousands of genes at once.

These are all signs that biotechnology is going to advance at rates which are analogous to the way electronics technology has been advancing for decades. If that is the case we should expect to see the costs for taking apart and manipulating biological systems to drop by orders of magnitude while the speed of doing so rises by orders of magnitude as well.

The lack of ability to rapidly read the contents and state of our DNA has kept molecular biology advancing for decades at a veritable snail's pace. Without easy access to the basic code that governs cells we had little prospect of ever fully understanding degenerative diseases, aging, or how and why we differ from each other physically and mentally. But as sequencing and assaying techniques increase in speed and fall in cost, the very complex biological processes within cells that have remained a mystery for most of human history are suddenly becoming accessible to dissection.

Later posts will explore the many practical uses, and the dangerous abuses, that these advances in capability will make possible.

By Randall Parker    2002 August 31 12:12 PM   Entry Permalink | Comments (3)