Welcome to Biotechr


Biotechr is written by Dr. Robert Kruse (@RobertLKruse), who holds a PhD and is currently completing his MD. His research work focused on infectious disease and immunology. This blog is focused on analyzing the latest developments in biotechnologies being developed in academia and industry, with a particular focus on biomedical therapeutics. I hope that the posts are interesting and useful, and hope you join in the discussion with guest posts on the site!

Disclaimer: The thoughts on this blog are not intended as any investment advice regarding any companies that might be discussed, and represent my opinion and not the opinions of my employer. This site is not designed to and does not provide medical advice, professional diagnosis, opinion, treatment or services to you or to any other individual.

Saturday, December 24, 2016

Integrating Reproducibility into Academic Incentive Structures

by Robert Kruse


The reproducibility crisis is affecting many areas of science, threatening our progress. Biotech is particularly susceptible to these issues, since most of its technology is transferred in from academic labs rather than generated by internal research. The NIH and venture capitalists have studied the problem as a barrier to effective drug development, and many VCs now have to spend money validating published research rather than advancing it forward.

As a whole, publishing irreproducible science is not a victimless crime: millions of dollars are wasted each year by labs unwittingly trying to build upon faulty science. For the biotech industry, billion-dollar drug development failures can be rooted in flawed science. One estimate put the amount wasted on irreproducible preclinical research at 28 billion dollars every year. Some scientists argue that the literature is already self-policing: if an original study is flawed, it simply fails to attract follow-up citations. Unfortunately, that logic is just one interpretation, and a lack of follow-up can stem from a myriad of other reasons. Given that the literature now lives online forever, we need a formal process for ensuring that results are accurate and robust, and for letting researchers quickly verify which research has been reproduced.

(Figure from http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002165)

In Silicon Valley, the second or third player into a market is often the winner, having taken a validated concept and improved it. Facebook is the big success, not the earlier forebears of social networking. Yet in science, we discourage the second and third comers to ideas and techniques, labeling their work derivative and lacking significant improvement. This mindset needs to change. However, significant hurdles remain in changing current practices. The main hurdle is that academic researchers today have no incentive to actively try to reproduce other studies, whether that means spending the money or the valuable time such experiments require. The prevailing attitude is to let someone else do that work and keep focusing on one's own research, and the inevitable tragedy of the commons follows: no one reproduces the data. This particularly hurts industry, which is tasked with turning science into useful products. For a drug to work, the pathway it is based on must actually matter in the target disease, yet investors and companies waste millions of dollars each year chasing drugs based on junk science. Therein lies another crucial difference in motivations: academics are rewarded by journals for great stories and eye-catching headlines, while companies are rewarded by profits from products that actually work to help people.

How do we change these motivations? While many proposals consist of establishing new schemes and mandates, the easiest route would build seamlessly into the current incentive structures for academicians. Academic researchers are hired based on the impact factor of the journals they publish in. There is relentless competition to get into the best journals, which leads to over-claiming of scientific findings. Indeed, Nature and Science, the two most prestigious journals, also have among the highest retraction rates. To motivate researchers to do reproducibility research, they need to be rewarded with Nature, Science, and other high-impact papers on their resumes: the same papers that will help them get hired into faculty positions and provide the foundation for strong grant funding throughout their careers.

Of course, Nature and Science are not in the business of publishing papers that reproduce previous work. Such papers are not high impact, since the convention today is for the original paper to be the one cited by later works, and a reproducibility paper never generates the headlines on which these journals thrive. To reconcile these two conflicting motivations, I propose that journals adopt a new paper format for reproducibility. These papers would receive the exact same citation credentials as normal publications in the journal, thereby helping the researchers, but they would not be indexed by Thomson Reuters, and thus would not diminish the journal's impact factor.
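To make the impact factor arithmetic concrete, here is a minimal sketch with invented numbers (the article and citation counts below are purely illustrative, not drawn from any real journal). The standard two-year impact factor divides citations received by the number of citable items published, so excluding reproducibility papers from the citable-item count leaves the ratio unchanged no matter how many of them a journal publishes.

```latex
% Two-year impact factor: citations in year Y to items published in the
% previous two years, divided by the citable items published in those years.
\[
  \mathrm{IF}_{Y} = \frac{C_{Y-1} + C_{Y-2}}{N_{Y-1} + N_{Y-2}}
\]
% Hypothetical journal: 1,600 citable articles over two years receiving
% 64,000 citations.
\[
  \mathrm{IF} = \frac{64{,}000}{1{,}600} = 40
\]
% Now add 300 reproducibility papers that are published but not counted
% as citable items: the denominator stays at 1,600, so the impact factor
% is unaffected.
\[
  \mathrm{IF}' = \frac{64{,}000}{1{,}600} = 40
\]
```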

As a test example, consider the case of STAP cells, published in two articles in Nature in 2014, which claimed that simple stress induction could convert mature cells into pluripotent stem cells. Many labs across the world tried to reproduce the findings, to no avail. Under this proposal, a lab that tried to replicate the work, whether it succeeded or not, could write it up and publish it in Nature as a complement to the original article, linked directly from it. Since journals and researchers might still want to maintain the prestige of publishing in Nature, the number of reproducibility papers attached to any single article could be capped at two. If three labs in total can independently verify the same result, that should be a good sign that the rest of the field can trust the findings. These reproducibility papers would be peer reviewed under the same standards of the field, but without the expectation of impact, importance, or fit to the journal, since they are specifically connected to a paper already published there. In many ways, they would resemble the open-access journals proliferating today. Ideally, these reproducibility papers would be freely accessible and not restricted behind journal paywalls, in order to facilitate the dissemination of knowledge about the original finding.

Journals might push back that a proliferation of reproducibility papers would dilute their brand, but that should not be the case. If a study is truly groundbreaking, then it demands other studies to verify its findings, and the successful reproduction of a result makes the original study even more powerful and important. While reproducibility studies would not necessarily be actively advertised to the public, one could imagine the journal Nature celebrating the fact that its content has been found to be reproducible, affording it another opportunity to circulate an article in the press citing both the original publication and the new reproducibility finding.

Content is king in the digital world, from online streaming services like Netflix, to cable companies signing live sports deals, to news sites such as the Huffington Post accumulating blog writers. Journals were born to serve a different era, when paper copies were mailed to academic scientists around the world to inform them of the latest discoveries. Indeed, the first Impact Factor rankings were issued in 1975, long before the advent of the Internet. Journals today should realize that it is in their interest to grow the content on their platforms in order to attract more readers to their websites. Rather than continually rejecting papers for not being important or innovative enough, journals should actively draw in more content by publishing online reproducibility studies within the same journal. So far, journals such as Nature have instead created tiered systems that keep rejected studies within the journal family to secure the payment and content, but this only hurts academic researchers. The system should instead be a win-win for both journals and researchers, accomplished by creating a new reproducibility category.

To implement this plan, the main player that would need to acquiesce is Thomson Reuters, by agreeing that this new category of paper would not count toward the impact factor. This agreement would not upset the current system at all, so it should be an easy concession. The journals would also need to agree to the article format, but since it would not be indexed and therefore would not hurt their impact factors, that should be the easier step. The third step would follow naturally: researchers would flock to the new format, seeking to publish in high-impact journals. Importantly for researchers, these articles would appear the same as others in the journal, so that no stigma would develop around them. Just as "Brief Communications" appear alongside "Research Articles" in PubMed searches today, so would reproducibility articles appear the same in PubMed. Researchers would then benefit from an improved publication record when seeking new grants from the NIH, which are awarded partly on the basis of track record today. This proposal would also complement proposed grant reforms that call for funding levels based on track record alone, as opposed to the current evaluation and scoring of research proposals. The ability to do quality science should be what grants reward, not prior reputation or speculative ideas.

The incentives of this new system would direct the most reproducibility effort toward the highest-impact journals, exactly as it should be, since those articles are regarded as the most important in science. Furthermore, given the competition for the limited reproducibility slots attached to each paper, replication studies would come quickly, rather than years passing before a study is shown to be irreproducible. Science would begin to operate as a team across institutions, moving new discoveries forward together. Changing the norms and incentives of any societal system is challenging, but building reproducibility into the current system is our best chance at reform, precisely because it integrates seamlessly.

Summary

  • There is a lack of incentive for academic scientists to invest money in reproducing other groups' published papers, since the negative results will not benefit their careers.
  • A potential solution would be to create a new article category that allows validation studies to be published in prestigious journals, thereby benefiting academic scientists' careers.
  • Because these validation studies will likely be cited less and lack the "prestige" element, Thomson Reuters could agree not to count them in the traditional impact factor calculation, so they would not hurt the journal's impact factor.
  • Journals would thus keep all content related to their high-profile papers, academic scientists would gain a new route to publish in high-impact journals for their CVs, and the current system of academic/journal/NIH incentive structures would be maintained.