November 08, 2007
Peer Review Of Grants Stifles Scientific Innovation?
Former Intel CEO Andy Grove has Parkinson's Disease and he believes the rate of advance in biomedical research is far too slow and one of the reasons is the conformity enforced by peer grant reviews.
What stands in the way of more and faster success in getting cures to patients?
The peer review system in grant making and in academic advancement has the major disadvantage of creating conformity of thoughts and values. It's a modern equivalent of a Middle Ages guild, where you have to sing a particular way to get grants, promotions and tenure. The pressure to conform [to prevailing ideas of what causes diseases and how best to find treatments for them] means you lose the people who want to get up and go in a different direction. There is no place for the wild ducks. The result is more sameness and less innovation. What we need is a cultural revolution in the research community, academic and non-academic. We need to give wild ducks the opportunity to emerge and quack their way to success. But cultural change can be driven only by action at the top.
I would like to know what Grove would propose as an alternative to grant peer reviews. Scientific research is sufficiently distant from product development that markets (at least markets for cures) can't provide the needed guidance or method of distributing rewards.
I don't have a good idea on this one. How would you distribute research money in a way that would accelerate the rate of progress?
What I would really like to do is contact Andy Grove and suggest that he use his semiconductor knowledge to help a company release far more inexpensive and more capable lab-on-a-chip testing systems.
Labs on a chip that could track biomarkers daily or hourly and eventually in real time.
Initially they could be for large clinical trials and other studies. Like TV ratings boxes.
But they could also be part of his health plan by getting them to all pharmacies, clinics, and doctors' offices.
We should have blood testing for cancer detection as well as for disease detection, tracking, and monitoring of treatment effects.
Eventually they should reach all people's homes as part of regular monitoring of health.
I have no real understanding of medical research but in the handful of journal articles I've read, to me it seems like there is an academic research culture that worships particular forms of experimental design that are well-suited for figuring out the impact of one or two known variables as opposed to attempting to efficiently try wild ideas to uncover the unknown. There's quite a difference between testing a hypothesis and coming up with a bold new one.
Also, I'm sure a lot of research proposals are written with a keen political eye toward getting as many resources for the research department as possible. I think this means there are a lot of unproductive distractions from the "aimless playing" necessary for genuine new discovery.
There has to be more room in medicine for risk taking. Let patients choose to take risky treatments, and watch what happens. Medical practice of this sort might be set up in some other country where there is no medical red tape. To attract customers, a company with a pristine reputation for honesty would have to oversee treatments.
One way to make the incentives to reviewers work more like the incentives to VCs would be to reward the reviewers in proportion to a citation count for the papers that result from the grant. But I'm unsure what form the rewards should take. If it's money, it's not obvious who ought to provide it (maybe the grantors, but it isn't easy to modify the existing grant system to make grantors care about those incentives). Reputational rewards would fit the research culture better, but I suspect it's hard to change researchers' attitudes so that a measure of the value of one's reviews confers prestige.
Funding medical research runs into a number of substantial inter-related barriers. These barriers are much less important in the physical sciences -- say condensed matter physics.
Barrier 1. The large majority of (perhaps 4 out of 5) carefully considered biomedical experiments (that have not been pretested) yield null results. That is, they do not allow one to come to any conclusions at all about the hypothesis.
Barrier 2. Biological systems can be generally approximated as being in a steady state or equilibrium. This means you get the squishy balloon effect. If you push on the system here, it will bulge out in an unexpected way. Think about COX2 inhibitors such as Vioxx and heart attacks. Even if your treatment reduces mortality from a major killer such as cancer it may induce even greater mortality from another mechanism such as diabetes. Furthermore, you generally can't tell ahead of time -- sometimes you need to wait 10 or 20 or 30 years to find out what the consequences of your actions have been. Tough.
Barrier 3. Funding agencies are unwilling (for political reasons) to have 4 out of 5 grants produce no results. Consequently, investigators must have positive data saved up before they write the proposal. This ensures that the sponsor gets "something" for his money. However, since the result is already known (at least to a small group), you get much less bang for your buck.
Solution: I don't know. My prejudice is that the sponsors should try to find the most able and successful investigators they can, then give them the money and tell them to do the best science they know how. This way the investigators (relatively more able scientists) make the decisions rather than the relatively less able sponsors.
The semiconductor-focused approach that I am suggesting would transform both the research of disease and health and the monitoring and detection of disease.
Currently doctors and researchers do not get a constant close look at what is happening with individuals.
The monitoring of biomarkers would need to be combined with some basic environmental monitoring (cigarettes, air pollution, etc.) and monitoring of substances (food, alcohol, drugs, etc.).
Currently the monitoring of health greatly lags the monitoring of TV, shopping habits and online activity.
For online activity, companies perform studies with "heat maps" of words and where people are looking in an ad. Near real time.
For health, blood and other testing is inconsistent even when someone is at high risk for a disease.
For drugs, the prescription is based on statistical samples. It is like: I recommend the TV show Golden Girls because a study we performed of people in your age group suggests that it would be beneficial. We can do a checkup after a few months and see how that is going. Let us know if you have an adverse reaction, such as vomiting, but otherwise stick to the prescription. If it does not work we will switch in a few months to 60 Minutes and then the Tonight Show.
More data will help what Jim Rose is talking about.
If we are able to have large scale tracking of kidney function, heart function, lung function, arterial health, blood levels, other biomarkers etc... then we can start making the connections to overall wear on the system and when something is deteriorating.
We are able to identify abnormal wear on parts in a car. But well before someone becomes diabetic there are things going on that lead up to that point. We need to trace back to the health equivalent of: you have a misalignment, and the tires still look great, but the alignment problem will cause abnormal wear.
If we have all of the data, then when something starts going wrong across a population the doctor/researcher can start making the correlation earlier. Appliance and car companies data-mine and analyze customer service call transcripts to make such correlations: 20 calls mentioned shorting or smoke, which means there was an electrical issue. If we track the factory dates we determine that they all came from a particular production line eight years ago, on Mondays in the second quarter. We will then need to check all other appliances with that profile for a common assembly line and production issue.
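A minimal sketch of the appliance-style correlation described above: group complaint records that mention an electrical symptom by production line and quarter, so a common manufacturing cause surfaces. All records, field names, and keywords here are invented for illustration.

```python
# Group electrical-symptom complaints by (production line, quarter) to find
# a cluster pointing at a common manufacturing cause. Data is made up.
from collections import Counter

complaints = [
    {"text": "unit started smoking", "line": "B", "quarter": "1999-Q2"},
    {"text": "shorting at the plug", "line": "B", "quarter": "1999-Q2"},
    {"text": "door handle broke",    "line": "A", "quarter": "2001-Q4"},
    {"text": "smoke from the back",  "line": "B", "quarter": "1999-Q2"},
]

# Crude keyword filter for electrical symptoms
electrical = [c for c in complaints
              if any(k in c["text"] for k in ("smoke", "smoking", "short"))]

# Count which production origin the electrical complaints cluster around
by_origin = Counter((c["line"], c["quarter"]) for c in electrical)
print(by_origin.most_common(1))   # [(('B', '1999-Q2'), 3)]
```

A real system would of course use text classification rather than keyword matching, but the grouping step is the same idea.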
There are Westgard-style statistical process control rules to identify developing bad trends earlier.
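A minimal sketch of two such multirules, assuming the rules meant are the Westgard-style checks used in clinical lab quality control (the "1-3s" and "2-2s" rules). The baseline, standard deviation, and readings below are invented for illustration.

```python
# Flag biomarker readings that violate two simple Westgard-style rules.
def westgard_flags(readings, mean, sd):
    """Return (index, rule) pairs violating the 1-3s or 2-2s rules."""
    flags = []
    for i, x in enumerate(readings):
        z = (x - mean) / sd
        # 1-3s: a single reading more than 3 SD from the baseline mean
        if abs(z) > 3:
            flags.append((i, "1-3s"))
        # 2-2s: two consecutive readings beyond 2 SD on the same side
        if i > 0:
            z_prev = (readings[i - 1] - mean) / sd
            if (z > 2 and z_prev > 2) or (z < -2 and z_prev < -2):
                flags.append((i, "2-2s"))
    return flags

# A creatinine-like series drifting upward from a baseline of 1.0 (SD 0.1)
readings = [1.0, 1.05, 0.95, 1.22, 1.25, 1.40]
print(westgard_flags(readings, mean=1.0, sd=0.1))
```

Rules like these are how a stream of home biomarker readings could be turned into an early warning rather than a pile of numbers.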
The most potent restrictions on research mentioned in the article or in the above comments seem to revolve around the small number of paths to research-supporting funds. Whether it is political problems with getting NIH money, or institutional problems with getting the lab space needed to produce that initial "positive data" Jim Rose mentions, the ways around bottlenecks are too few.
There is also the makeup of the peer review committees themselves. In all too many cases, when there are people doing grant peer review who "have a dog in the fight", either personally, or from their own institutional hierarchy, there is a reluctance to ascribe competence to people working on a different treatment path than one the reviewer is familiar with or holds to themselves. This is only exacerbated when the peer review committees working to funnel NIH money to a research community all have someone from one institution or foundation sitting on them.
In Type I Diabetes research this situation has become evident in recent years in research on the immune system treatments that point towards a cure without transplantation, through enabling protection of stem cell growth into new islets. Much name-calling and backbiting in the press has ensued, instead of faster research.
There is a distinct lack of starting points for research funding. Too much comes from the one institution in society that can take the resources it needs, but must be politically delicate in what it gives its blessing to, lest it be said to be wasting "public funds". There are also the fifty states, but they labor under those same political restrictions, with legislators not wanting to lose that winning five percent margin of the vote totals.
We need not one, or fifty, but five hundred stable sources of funds, each with its own peer review committee, with little or no cross-committee institutional dominance. That can only be obtained out in the rest of civil society. There will still be fads. There will still be sources where one institution or treatment path has a lock on their money. Still, alternate sources will remain for the innovators who find their paths blocked today.
While this means laws making it easier to set up charitable funds for research, it also means doing our utmost to connect researchers with these funds. The current time spent on grant applications is the single biggest part of the time schedule of many researchers. Estimates of thirty percent are common.
Making research progress a cheaper commodity has received far too little attention. Some relief will indeed come from the accelerating speeds at which the electronics industry is making test procedures and scanning procedures faster, easier and cheaper. In fact for the next ten years that may be the dominant accelerator of biomedical research. Eventually, as both test and production equipment for a given level of capability become cheaper, this will produce an environment where people can research their own conditions, in cooperation with others around the world. The idea that institutionalized researchers with huge labs behind them are the only way to make progress may well be as much a passing fad as the dominance over transportation held by railroads 100 years ago. In fact, I have seen those huge institutions often making available only a tiny cubbyhole for a researcher doing world class work.
The railroads of research dominate today, but we are getting the ability to build the aircraft of research in the near future. Faster, Please!
Interesting idea. Incentivize the grant reviewers to pick winners.
I see problems with it, but maybe fixable problems. A reviewer will become more incentivized to review more grants. Also, a reviewer will be more incentivized to rate highly the grants he does review: the more people out there doing research he has a stake in, the better the chance he'll come up a winner.
So limit how many grants a reviewer can place a bet on. Make reviewers rank all reviewed grants in a year and only allow a financial stake in the top N grants that get approved.
Also, for established researchers why not just use the number of citations in the last few years to determine how much research money they get in the next few years? Why bother using reviewer scores? Push more money at winners.
But will that corrupt the citation process?
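The citation-proportional idea above can be sketched in a few lines: divide a fixed budget among established researchers in proportion to their recent citation counts. The names and counts below are invented, and any real scheme would have to deal with the gaming worry just raised.

```python
# Split a fixed budget in proportion to each researcher's recent citations.
def allocate(budget, citations):
    """Return each researcher's share of the budget, proportional to count."""
    total = sum(citations.values())
    return {name: budget * count / total for name, count in citations.items()}

# Hypothetical citation counts over the last three years
citations = {"Alice": 300, "Bob": 120, "Carol": 80}
grants = allocate(1_000_000, citations)
print(grants)   # {'Alice': 600000.0, 'Bob': 240000.0, 'Carol': 160000.0}
```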
I like the idea of breaking up some of the granting agencies. With no overlap between the reviewers in different granting agencies the effect would be to reduce the effects of conformity. More schools of thought would thrive.
I just read Craig Venter's autobiography and was fascinated by how many different funding methods he used over his career. First he was funded via peer grant reviews; then he went to the NIH and had funding without peer review. But he found that he still couldn't get enough money, so he got a VC to pay for his sequencing lab in return for first access to that data. While at that company he applied for an NIH grant and was denied because they didn't think his technology would work. Fortunately for him he had other funding and was able to prove them wrong.
Anyway, if you are interested in the topic, this book was an interesting case study in different ways to fund science and the pros and cons of each.
As he explained the peer review process, all reviewers must be in agreement on a project for it to get funded. This gives every member de facto veto power over any project. One suggestion I would have is to change the system to be more like ice skating scoring, where you throw out the high and the low score. That way it would take at least two people voting against a proposal to deep-six it.
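A minimal sketch of the ice-skating-style scoring suggested above: drop the single highest and single lowest reviewer score and average the rest, so no one reviewer can sink a proposal alone. The scores are illustrative.

```python
# Score a proposal after discarding the highest and lowest reviewer scores.
def trimmed_score(scores):
    """Average the reviewer scores after dropping one high and one low."""
    if len(scores) < 3:
        raise ValueError("need at least 3 reviewers to trim high and low")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# One hostile reviewer (score 1) no longer vetoes a strong proposal
print(trimmed_score([1, 8, 9, 9, 7]))   # averages 7, 8, 9 -> 8.0
```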
Maybe fewer peers should review each grant proposal. The more people review a grant proposal the more conformity will be enforced.
An idea: Let individual scientists with high citation scores each judge 5 randomly chosen grants and choose 2. Basically, have people who have demonstrated talent do the judging. But do not use committees or consensus voting. Let individuals make the call.
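The individual-judge mechanism above can be sketched as follows: each high-citation judge is dealt 5 random proposals and funds their top 2, with no committee vote. The proposals, judges, and scoring lambdas are all made up; a real system would use actual reviewer assessments.

```python
# Each judge independently funds their top picks from a random hand of
# proposals -- no consensus step, so no single-reviewer veto.
import random

def run_round(proposals, judges, per_judge=5, picks=2, seed=0):
    """Each judge scores a random sample of proposals; their top picks win."""
    rng = random.Random(seed)   # seeded for reproducibility
    funded = set()
    for judge_score in judges:  # judge_score maps proposal -> score
        hand = rng.sample(proposals, per_judge)
        best = sorted(hand, key=judge_score, reverse=True)[:picks]
        funded.update(best)
    return funded

proposals = [f"P{i}" for i in range(20)]
# Two hypothetical judges with different tastes
judges = [lambda p: int(p[1:]) % 7, lambda p: -int(p[1:]) % 5]
print(sorted(run_round(proposals, judges)))
```

Because each judge sees a different random hand, two judges with different tastes naturally fund different proposals, which is exactly the diversity-of-schools effect being argued for.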
Peer Review And Innovation In Science
A. "The new face of peer review", in "Funding Opportunities and Advice" forum, at
refers to "changes to the peer review process".
B. However, the "peer review process" is the least disturbing aspect of "peer review" in science.
Samples of factual observations of other negative aspects of peer review in science:
"A U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research. Far from filtering out junk science, peer review may be blocking the flow of innovation, and corrupting public support of science."
- "Peer review stifles innovation, perpetuates the status quo, and rewards the prominent. Peer review tends to block work that is either innovative or contrary to the reviewers' perspective."
C. "Peer Review" is, factually, a tool of a "Subversive Activities Control Board"
The most revoltingly corrupt aspect of peer review in science is its exploitation by the Science Establishment to tightly clamp its political and financial rule and control over everything, including the stifling of any shred of scientific innovation.
D. The corruption is not inherent in the tool, but in the nature of the Science Establishment
"Implications Of Science And Technology Evolution"
The peer review process is but a tool of the Establishment. The corruption is not inherent in the tool, but in the nature of the Science Establishment.
As long as Science and Technology are considered and handled, conceptually and administratively, as one realm and one faculty, this corruption cannot and will not be overcome. This conception and attitude is THE CORRUPTION OF SCIENCE BY THE 21st CENTURY TECHNOLOGY CULTURE.
(A DH Comment From The 22nd Century)