Is Corporate Research Better?

While writing that New Yorker article on the decline effect, I talked at length to a prominent biotech executive with extensive experience in academic research. Although I didn't end up using his quotes in the piece - he refused to go on the record, citing his extensive collaborations with various universities - he said some very provocative things about science in the academy:

What most university researchers don't realize is that our data [in biotech] is more rigorous and reliable. That's for an obvious reason: I can't afford to spend years following up on an interesting result that isn't true. That false result is going to cost me millions of dollars...I don't care if an experiment can be published in a good journal. I only care if it can be replicated and succeed in the next stage of development...What we try to do here [at the company] is make sure that all of our incentives are aligned for accuracy and replicability. That's why we insist on a higher threshold for [statistical] significance and mandate that all researchers state, in advance, what they are testing and exactly how they are testing it. Our method sections are ten times as long as what you see in journals. These policies aren't rocket science. They are simply designed to prevent people from manipulating the data to prove themselves right.
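To make the logic of those policies concrete, here is a minimal simulation sketch (my illustration, not the executive's actual protocol; the ten outcomes, thirty samples per group, and 0.005 cutoff are all assumed for the example). Under a true null effect, a lab that is free to report its best-looking outcome at the conventional p < 0.05 will "discover" something roughly 40% of the time; a lab that pre-specifies a single outcome and holds itself to a stricter threshold almost never will:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_projects, n_outcomes, n = 10_000, 10, 30

# Every simulated project studies a true null: both groups are drawn from
# the same distribution, measured on ten different outcomes.
a = rng.normal(size=(n_projects, n_outcomes, n))
b = rng.normal(size=(n_projects, n_outcomes, n))
p = ttest_ind(a, b, axis=-1, equal_var=False).pvalue  # shape: (n_projects, n_outcomes)

# A lab free to report its best-looking outcome at the usual threshold:
flexible = (p.min(axis=1) < 0.05).mean()
# A lab that pre-registered one outcome and demands stronger evidence:
preregistered = (p[:, 0] < 0.005).mean()

print(f"false positives, best of 10 outcomes at p < 0.05:  {flexible:.1%}")       # ~40%
print(f"false positives, pre-specified outcome at p < 0.005: {preregistered:.1%}") # ~0.5%
```

The particular numbers don't matter; the point is that analytic flexibility at the conventional threshold manufactures false positives, and pre-registration plus a stricter cutoff is a cheap way to shut that door.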

Of course, any conversation about the differences between academic and corporate research is going to be full of lazy and imprecise generalizations. While the quote above concerns the rigor of basic corporate research, the profit motive has proven to be a deeply pernicious source of bias. (Some of the sloppiest and most cynical clinical trials have been funded by pharmaceutical and medical technology firms.) Let's also not forget that economic studies show the biotech and pharmaceutical sectors to be deeply dependent (and often parasitic) on the public funding of basic research. The intellectual freedom of a well-funded academic scientist, able to pursue their curiosity for years at a time, is one of the triumphs of civilization.

And yet, there's intriguing new evidence that our nameless executive has a point: research policies typically found in corporate labs can lead to more accurate experimental data. According to an analysis reported in Nature Reviews Drug Discovery, researchers at Bayer failed to replicate the published findings behind nearly two-thirds of their target-validation projects:

An unspoken industry rule alleges that at least 50% of published studies from academic laboratories cannot be repeated in an industrial setting...A first-of-its-kind analysis of Bayer's internal efforts to validate 'new drug target' claims now not only supports this view but suggests that 50% may be an underestimate; the company's in-house experimental data do not match literature claims in 65% of target-validation projects, leading to project discontinuation.

For the non-peer-reviewed analysis, Khusru Asadullah, Head of Target Discovery at Bayer, and his colleagues looked back at 67 target-validation projects, covering the majority of Bayer's work in oncology, women's health and cardiovascular medicine over the past 4 years. Of these, results from internal experiments matched up with the published findings in only 14 projects, but were highly inconsistent in 43 (in a further 10 projects, claims were rated as mostly reproducible, partially reproducible or not applicable). “We came up with some shocking examples of discrepancies between published data and our own data,” says Asadullah. These included inabilities to reproduce the over-expression of certain genes in specific tumour types and decreased cell proliferation via functional inhibition of a target using RNA interference.

Irreproducibility was high both when Bayer scientists applied the same experimental procedures as the original researchers and when they adapted their approaches to internal needs (for example, by using different cell lines). High-impact journals did not seem to publish more robust claims, and, surprisingly, the confirmation of any given finding by another academic group did not improve data reliability. “We didn't see that a target is more likely to be validated if it was reported in ten publications or in two publications,” says Asadullah.
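Two bits of back-of-the-envelope arithmetic help put these numbers in perspective (my own sketch, with invented inputs; nothing here comes from the Bayer analysis itself). First, the headline figure: 43 highly inconsistent projects out of 67 is roughly 64%, presumably the source of the ~65% claim. Second, the "ten publications versus two" result really is surprising: if each published confirmation were an independent test, even modest per-study evidence should compound quickly.

```python
# Recovering the headline figure from the project breakdown quoted above.
matched, inconsistent, other = 14, 43, 10
total = matched + inconsistent + other               # 67 projects
print(f"inconsistent: {inconsistent / total:.0%}")   # 64%, i.e. the ~65% figure

# Why "no gradient from two to ten publications" is surprising. Assume
# (hypothetically) that 20% of candidate targets are real, and that each
# independent confirmation has 80% power and a 30% false-positive rate;
# each paper then multiplies the odds that the target is real by 0.8 / 0.3.
prior_odds = 0.2 / 0.8
likelihood_ratio = 0.8 / 0.3
for k in (2, 10):
    odds = prior_odds * likelihood_ratio ** k
    print(f"{k:>2} confirmations -> P(target is real) = {odds / (1 + odds):.0%}")
# ->  2 confirmations: ~64%; 10 confirmations: ~100%
```

On those invented numbers, ten independent confirmations should be close to ironclad. That Bayer saw no gradient at all suggests the literature's errors are correlated (the same convenient cell line, the same analytic flexibility) rather than independent coin flips.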

Although this is a limited study, it's troubling stuff. At the very least, Bayer's internal data is yet another piece of evidence suggesting the urgent need for scientific reform. We might begin by emulating a few of the policies that are standard in the best corporate/biotech labs, so that publicly funded researchers become more transparent about the details of their experiments before the experiments are done. (Jonathan Schooler of UCSB made a similar proposal earlier this year in Nature. Stanford's John Ioannidis, meanwhile, told Nature Reviews Drug Discovery that the Bayer study demonstrates that public science institutions should "have some kind of bonus — funding or recognition — for people who publicly deposit their samples, data and protocols, and who show that findings are reproducible.") Although science will always be a deeply human process, inseparable from our ordinary flaws, we need to take steps to make these flaws less consequential.