In today's world, brimful as it is with opinion and falsehoods masquerading as facts, you'd think the one place you can depend on for verifiable facts is science.
You'd be wrong. Many billions of dollars' worth of wrong.
A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.
The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test — that the original results couldn't be reproduced because the findings were especially novel or described fresh therapeutic approaches.
But what they found was startling: Of the 53 landmark papers, only six could be proved valid.
"Even knowing the limitations of preclinical research," observed C. Glenn Begley, then Amgen's head of global cancer research, "this was a shocking result."
Unfortunately, it wasn't unique. A group at Bayer HealthCare in Germany similarly found that only 25% of published papers on which it was basing R&D projects could be validated, suggesting that projects in which the firm had sunk huge resources should be abandoned. Whole fields of research, including some in which patients were already participating in clinical trials, are based on science that hasn't been, and possibly can't be, validated.
"The thing that should scare people is that so many of these important published studies turn out to be wrong when they're investigated further," says Michael Eisen, a biologist at UC Berkeley and the Howard Hughes Medical Institute. The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That's how much could be at risk from faulty fundamental research.
Eisen says the more important flaw in the publication model is that the drive to land a paper in a top journal — Nature and Science lead the list — encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn't a safeguard. Eisen says the unpaid reviewers seldom have the time or inclination to examine a study enough to unearth errors or flaws.
"The journals want the papers that make the sexiest claims," he says. "And scientists believe that the way you succeed is having splashy papers in Science or Nature — it's not bad for them if a paper turns out to be wrong, if it's gotten a lot of attention."
Eisen is a pioneer in open-access scientific publishing, which aims to overturn the traditional model in which leading journals pay nothing for papers often based on publicly funded research, then charge enormous subscription fees to universities and researchers to read them.
But concern about what is emerging as a crisis in science extends beyond the open-access movement. It's reached the National Institutes of Health, which last week launched a project to remake its researchers' approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience.
PubMed Commons is an effort to counteract the "perverse incentives" in scientific research and publishing, says David J. Lipman, director of NIH's National Center for Biotechnology Information, which is sponsoring the venture.
The Commons is currently in its pilot phase, during which only registered users among the cadre of researchers whose work appears in PubMed — NCBI's clearinghouse for citations from biomedical journals and online sources — can post comments and read them. Once the full system is launched, possibly within weeks, commenters still will have to be members of that select group, but the comments will be public.
Science and Nature both acknowledge that peer review is imperfect. Science's executive editor, Monica Bradford, told me by email that her journal, which is published by the American Assn. for the Advancement of Science, understands that for papers based on large volumes of statistical data — where cherry-picking or flawed interpretation can contribute to erroneous conclusions — "increased vigilance is required." Nature says that it now commissions expert statisticians to examine data in some papers.
But they both defend pre-publication peer review as an essential element in the scientific process — a "reasonable and fair" process, Bradford says.
Yet there's been some push-back by the prestige journals against the idea that they're encouraging flawed work — and that their business model amounts to profiteering. Earlier this month, Science published a piece by journalist John Bohannon about what happened when he sent a spoof paper with flaws that could have been noticed by a high school chemistry student to 304 open-access chemistry journals (those that charge researchers to publish their papers, but make them available for free). It was accepted by more than half of them.
One that didn't bite was PLoS ONE, an online open-access journal sponsored by the Public Library of Science, which Eisen co-founded. In fact, PLoS ONE was among the few journals that identified the fake paper's methodological and ethical flaws.
What was curious, however, was that although Bohannon asserted that his sting showed how the open-access movement was part of "an emerging Wild West in academic publishing," it was the traditionalist Science that published the most dubious recent academic paper of all.
This was a 2010 paper by then-NASA biochemist Felisa Wolfe-Simon and colleagues claiming that they had found bacteria growing in Mono Lake that were uniquely able to subsist on arsenic and even used arsenic to build the backbone of their DNA.
The publication in Science was accompanied by a breathless press release and press conference sponsored by NASA, which had an institutional interest in promoting the idea of alternative life forms. But almost immediately it was debunked by other scientists for spectacularly poor methodology and an invalid conclusion. Wolfe-Simon, who didn't respond to a request for comment last week, has defended her interpretation of her results as "viable." She hasn't withdrawn the paper, nor has Science, which has published numerous critiques of the work. Wolfe-Simon is now associated with the prestigious Lawrence Berkeley National Laboratory.
To Eisen, the Wolfe-Simon affair represents the "perfect storm of scientists obsessed with making a big splash and issuing press releases" — the natural outcome of a system in which there's no career gain in trying to replicate and validate previous work, as important as that process is for the advancement of science.
"A paper that actually shows a previous paper is true would never get published in an important journal," he says, "and it would be almost impossible to get that work funded."
However, the real threat to research and development doesn't come from one-time events like the arsenic study, but from the dissemination of findings that look plausible on the surface but don't stand up to scrutiny, as Begley and his Amgen colleagues found.
The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all are being thrown down a rathole. NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path.
http://www.latimes.com/
Graham
1 comment:
Thanks Graham, very interesting.
Just as many of us suspected, I'm sure, but it's good that it's starting to be addressed.
Kath