Twenty years ago, Dr. Kenneth Ryan, Chair of the Commission on Research Integrity (CRI), transmitted the Commission’s report to the Secretary of the Department of Health and Human Services (DHHS) and to the relevant House and Senate Committees. Established under Section 162 of Public Law 103-43, the NIH Revitalization Act of 1993, in reaction to continuing research misconduct and retaliation against whistleblowers, the Commission was asked to consider: “a new definition of research misconduct, an assurance process for institutional compliance with DHHS regulation, mechanisms by which to respond to and oversee related administrative functions and investigations, and development of a regulation to protect whistleblowers” (CRI p iii Report 1995).

The Commission’s driving concern was the best interest of the public and of science, in particular the Public Health Service’s (PHS) use of public money to fund research and its interest in the collective health of the citizenry (CRI p 6 Report 1995). The Commission operated with a set of basic beliefs about roles and responsibilities. First, the primary responsibility for preserving research integrity and pursuing research misconduct lay with individual scientists, research institutions and professional societies, with the federal government complementing and enhancing these efforts. Federal intervention should occur only for the most egregious forms of research misconduct or when institutional processes fail (CRI p vii Report 1995). Second, because of Congressional concern, the Commission’s charge included attention to the creation of an institutional climate in which good-faith whistleblowers, as well as those they accuse, are treated fairly and with openness (CRI p iii Report 1995).

The Commission made extraordinary efforts to involve stakeholders through field hearings held in multiple locations across the country. Even so, the CRI recommendations, particularly those related to the definition of research misconduct and to whistleblower protection, were greeted with a firestorm of criticism (Wadman 1996a, b), and most have not, to this day, been enacted. Yet the issues addressed remain relevant and problematic, and one may wonder whether inattention to them has contributed to the current crisis of bias and nonreproducibility in biomedical science.

The purpose of this commentary is to reflect on the work of the Commission, in light of prior and subsequent events, and to suggest conclusions about its legacy.

Prior Work/Events

The history of any movement, including that of research integrity, plays out in phases, reflecting changes in norms and methods. Although work on human subjects protection started in the 1960s and 1970s, attention to research misconduct surfaced only in the 1980s and was largely dismissed in the belief that science remained uncontaminated. The community took little action to control the behavior of members who violated even commonly understood norms. A second phase began to recognize the role of latent, problematic potentials inherent in science itself and in its social organization. The events surrounding the CRI reflect this phase.

A 1989 report from the Institute of Medicine (IOM) made three relevant observations: (1) the absence of formal standards has generated uncertainty about what constitutes a violation and allows the research system to tolerate substandard activities by a small number of investigators; (2) there are questions about the adequacy and effectiveness of the current self-regulatory system in preventing research misconduct; and (3) the absence of a mechanism to enforce standards tolerates careless practice in an excessively permissive research environment (IOM 1989).

Also in 1989, federal regulations governing scientific misconduct in PHS-funded research were put in place; the investigatory Office of Scientific Integrity was established within the NIH, and the Office of Scientific Integrity Review within the Office of the Assistant Secretary for Health. In 1992 these two structures were combined into the Office of Research Integrity (ORI), reporting to the Assistant Secretary. The ORI has since invested significantly in acting as a resource to help institutions voluntarily meet high ethical standards.

A 1992 report from the National Academy of Sciences Panel on Scientific Responsibility and the Conduct of Research noted that serious and considered whistleblowing is an act of courage that should be supported by the entire research community. The Panel also recommended that the scientific community and research institutions create an independent Scientific Integrity Board to provide leadership in addressing ethical issues in research conduct (National Academy of Sciences 1992). The Board would serve as a resource center, assisting educational and research institutions in developing the capacity to provide assurances of good scientific practice in their settings. No institutional host was found to develop this role.

Also during this period, Congressman John Dingell, Jr. (D-MI), Chair of the House Energy and Commerce Committee, increased pressure to develop mechanisms for quality control and oversight of science, and it became clear that there would be greater external social monitoring. The Commission’s charge to consider a new definition of research misconduct was intended to establish the grounds for that monitoring. The scope of behaviors in the new definition would reveal basic assumptions about how to achieve accountability. The Commission’s report also reflected a widespread belief that if self-regulation can work, then universities are best qualified to handle the problem (Francis 1999), a view that undergirded all of its recommendations.

Definition of Research Misconduct

In addressing its first charge, a definition of research misconduct, the CRI focused on those behaviors serious enough to require a federal agency to take action to ensure the integrity of PHS-funded research and its interest in the collective health of the citizenry. It proposed the following to encompass both investigators and reviewers:

“Research misconduct is significant misbehavior that improperly appropriates the intellectual property or contributions of others, that intentionally impedes the progress of research, or that risks corrupting the scientific record or compromising the integrity of scientific practices…. Examples of research misconduct include but are not limited to:

Misappropriation: using the ideas or words of another without giving appropriate credit.

Interference: … the intentional and unauthorized taking, sequestering or damaging of property being used in research, which seriously compromises it.

Misrepresentation: a term which replaced “fabrication and falsification”, and defined as a material or significant false statement or an omission that significantly distorts the truth, and a culpable mental state” (CRI p 15 1995).

This redefinition (MIM) was subsequently rejected in favor of fabrication, falsification and plagiarism (FFP), harmonized across federal agencies and finalized in 2001. But abandoning MIM left at least two ambiguities. First, misappropriation was intended to go beyond plagiarism to include significant collaborative or authorship disputes, previously and currently unaddressed by the ORI. In fact, most of the whistleblowers who appeared before the Commission told stories in which they believed ideas and research findings had been taken from them and appropriated by more powerful others. They believed their complaints of plagiarism went unaddressed, and they themselves were shoved to the sidelines, unprotected by their institutions. While it is difficult to know whether MIM would have been sensitive to the difference between serious misappropriation and the common situation of miscommunication or differing expectations among authors, it is also not helpful that FFP does not address the issue at all.

The second ambiguity was sabotage. In the 2011 case of Vipul Bhrigu, who sabotaged another student’s work, the US ORI found falsification because his sabotage caused false results to be reported in the research record, an unfortunate stretching of FFP (Rasmussen 2014). In another recent case, in which a fellow postdoc poisoned experimental fish, the involved university reportedly did not alert the ORI; those involved believed such sabotage “happens a lot” (Enserink 2014). Resnik et al.’s (2015) review of misconduct policies from 183 US institutions found that 59 % went beyond the federal definition and that 10 % of that group included misappropriation of property. Still, the absence of “interference” from the definition has left a significant wrong in research ethics unaddressed.

But other definitional shortcomings are also relevant. As was pointed out during the Commission’s hearings, other serious infractions that undermine the validity and integrity of research would be covered by neither MIM nor FFP, yet are very damaging to subjects, to the research enterprise and to the use of public money. In an article prescient at the time, Bailar (1995) pointed out that the bigger threat to the integrity of science stemmed from the “day-to-day almost casual erosion of all aspects of the scientific method that are meant to challenge and test scientists’ conclusions”. This includes selective reporting of data and analyses (Bailar 1995), which remains so common and largely unacknowledged (Ioannidis 2005, 2014) that it may not constitute “a significant departure from accepted practices…” (42 CFR 93.104a).

All of these practices seriously detract from the ability of science to meet its public purpose, which is production of valid and reliable knowledge within the bounds of protection of research subjects.

Oversight of Scientific Practices

The CRI assumed that certain important practices did not rise to a level requiring federal intervention and that they would be controlled by institutions and the community of scientists. These practices, thought to be important to research integrity, include record keeping, data management, peer review of data and papers, mentorship and fair designation of authorship. This division of federal and institutional responsibility reflects a historic tension between academic freedom and the proper role of government control. Any federal invasion of the responsibilities assigned in the report to institutions, even in the context of the use of public money, would have been unacceptable.

But today the level of nonreproducibility and of analysis and reporting bias in science is clearly an issue that has reached public attention (Spilling 2015); it should have been self-regulated but was not. It is important to note that the current regulatory definition of research misconduct includes “omitting data or results such that the research is not accurately represented in the research record” as constituting falsification (42 CFR 93.103b). This regulation either has not been enforced or, more accurately, in the absence of whistleblowers willing to report violations or of systematic audit of research quality including evidence of bias, such conduct never rose to the attention of regulators.

While institutions can play an important role in the quality of research produced under the grants they receive, it is naive to believe that they can remain competitive in a system that fails to control, and indeed rewards, selective analysis and reporting in pursuit of publishable positive findings. The lack of a comprehensive framework for monitoring progress in solving a scientific or health problem cannot be blamed on individual researchers (Bailar 1995) or on institutions receiving public research money. Both legal regulation and self-regulation have failed to ameliorate commercial dependencies (i.e., research that always supports products) and misplaced incentives (publish only novel, positive results) (Redman 2015). While quality control and progress toward public goals have suffered under such a system, no institution that fosters science seems to have been able to prevent it.

Protection of Whistleblowers

The Commission proposed a regulation guaranteeing standards for the protection of whistleblowers and enumerated a Whistleblower Bill of Rights and Responsibilities. Building on the legal foundation of the Whistleblower Protection Act, the Bill stressed: whistleblowers’ right to disclose information that supports a reasonable belief of research misconduct; institutions’ duty not to tolerate or engage in retaliation against good-faith whistleblowers, including holding those who retaliate responsible and providing whistleblowers relief; and, where alleged retaliation is not resolved, complainants’ opportunity to defend themselves in a proceeding where they can present witnesses and confront those they charge with retaliation (CRI).

This proposal evoked the response that it would protect whistleblowers better than those accused (respondents) (Wadman 1996b). In truth, neither is as well protected as should be the case. Respondents frequently do not have the funds to hire a lawyer to fully appeal agency research misconduct decisions (Redman and Merz 2008), and complainants rarely have one. And bad-faith whistleblowers, often corporate interests protecting a product, have not been successfully controlled. Solutions to both of these defects are readily apparent but are not being implemented: guarantee counsel for both complainant and respondent, and investigate and sanction bad-faith whistleblowers. But a structural problem is not so easily resolved. Internal institutional proceedings are inherently compromised by a structural conflict of interest: the institution serves as judge and jury of alleged misconduct that, if confirmed, could mean liability or otherwise prejudice its interests.

It is also important to note that in cases where whistleblowers have become known (Klintworth 2014), they are not infrequently students, postdoctoral fellows, or research technicians, who have the opportunity to closely observe what goes on in the lab. These individuals are also extremely vulnerable to retaliation.

In many sectors, whistleblowers remain part of the nation’s law-enforcement strategy, and nonretaliation approaches have been common, as in current PHS research misconduct regulations (42 CFR 93.300e). Experience with the Sarbanes–Oxley (SOX) whistleblower provisions, enacted in 2002 in response to corporate scandals, has demonstrated that legal rights, no matter how strong, will never be sufficient in isolation to protect and encourage whistleblowers. SOX also did not prevent the financial crisis of 2008 (Moberly 2012).

The Dodd–Frank Wall Street Reform and Consumer Protection Act, passed in 2010, amended SOX and took a much more direct approach, allowing whistleblowers who disclose fraud of more than $1 million to retain 10–30 % of the amount recovered, without necessarily first using internal reporting mechanisms. This approach reflects the belief that individuals need encouragement to blow the whistle and to resolve the problem they disclose; the bounty is meant to compensate them for the risks they have taken (Greenberg 2011). But while the Securities and Exchange Commission (SEC) dealt with more than 6000 disclosures between 2011 and 2013, it provided bounties to only six individuals (US Securities and Exchange Commission 2013). The Ethics Resource Center (2012) has reported a sharp increase in retaliation in the business sector. Although some whistleblowers in research misconduct cases have successfully used the False Claims Act and received a relator’s fee, Kohn (2011) warns that the risk is high that the government will choose not to proceed and that the costs of competent legal counsel often exceed the amount of damages. Research misconduct is also inadequately reported, and the extent of the ORI’s oversight of retaliation against whistleblowers is unclear.

In 2000 a notice of proposed rulemaking (NPRM) on whistleblower protection, flowing from the CRI recommendations, was developed and published for public comment but never issued in final form. It would have authorized remedies for whistleblowers. It also suggested sanctions against institutions found to have engaged in retaliation, including probationary status, termination of current and future funding, recovery of PHS funds misspent in connection with whistleblower retaliation, and publication of the determination (Handling 2000). While weak, the NPRM was consistent with the CRI recommendations. Even with the CRI’s teeth removed, it was not seriously considered and quietly died.

It is important to note how distant these responses are from the view that whistleblowers are exercising freedom of dissent in the public interest, that their silencing is part of the secrecy used to shield science from accountability, and that retaliation is a direct threat to scientific integrity (Devine 1995). Such a view thrives best in nonauthoritarian structures that believe they are publicly accountable, and in situations in which potential whistleblowers believe sufficient protections are in place, including independent, competent legal counsel and financial remedies. A political coalition strong enough to carry the Commission’s proposed whistleblower reforms apparently never materialized.

Legacy of the Commission on Research Integrity

Few of the CRI’s recommendations have been implemented, and the issues with which it was charged have seen little further debate, most particularly the definition of research misconduct and the protection of whistleblowers. Except for concern about individual cases of research misconduct, Congressional attention has waned, and the public has not been engaged with the issue. No evidence speaks to the effectiveness, or lack thereof, of federal research misconduct regulations. And while little information is available on the practices for which the Commission would hold institutions responsible, authorship disputes are still legion and their relationship to plagiarism (which is actionable) remains unclear. A push for data sharing is focusing attention on data standards in general; an excellent IOM report drafts such standards but also notes that no entity currently has authority to enforce them (IOM 2015).

In retrospect, there were several significant “blind spots” in the CRI’s charge and discussions. The first was the assumption that institutions receiving PHS money could move beyond their inherent conflicts of interest and the practices of the wider scientific community to be good regulators, that the research climate in which scientists and staff function would be supportive, and that institutions possessed the will to adequately protect whistleblowers. A second mistaken assumption was that researchers knew what their product should be and how to reach it. A third source of unexamined confusion was the lack of a documented standard for what constitutes adequate quality in publicly funded science, including control of bias and level of reproducibility. Since research misconduct is only one source of contamination, a commission charged today should focus on all elements of scientific quality.

The Commission faced two additional limitations. First, legalism dominated the discussion, in large part because the ethics of research conduct was considerably underdeveloped (Whitbeck 1995). Second, since almost no empirical research existed on the issues being addressed, the CRI relied heavily on cases, particularly the experiences of whistleblowers, since those were a central part of its charge. While compelling, these individual accounts could not be placed in an accumulated record of experience. While ethics, including the empirical study of ethical issues, has become better integrated into specific areas of science such as genomics and neuroscience, it remains underdeveloped with regard to research misconduct. For example, it is still not possible to thoroughly document the frequency of research misconduct or its harms to subjects, to subsequent patients, and in wasted resources.

Although the Commission on Research Integrity incorporated assumptions common in its time, it also owed its existence and charge to Congressional pressure to force the scientific community to address significant flaws in its practice. The CRI’s definitional and whistleblower recommendations stretched the research misconduct framework, in the latter case to meet legal standards prevailing in other sectors. The predictable response from the scientific community was indeed shortsighted. As enumerated above, unaddressed issues, a lack of structures to address concerns, and misplaced incentives imported from commercial interests have sustained old problems and left the community ill-equipped to address newly quantified problems of bias and reproducibility.

It is difficult to track a direct legacy of the Commission, as its work is almost never cited. But it is reasonable to suggest that the recommendations, and the reaction to them, marked a boundary of self-regulation by the scientific community and universities that was not to be assailed by stronger whistleblower protection or government intrusion. In a number of ways that boundary has now moved. Legal protection for whistleblowers has advanced significantly in multiple sectors, although it remains primitive for academic scientists, perhaps because the opportunity recommended by the CRI was rejected. Second, a small investment in empirical research has yielded beginning understandings of the environment for misconduct and tools for detecting it, accompanied by an expanding body of normative analysis. Issues of definition and management have been discussed in a series of global meetings. Finally, a recent meeting convened by the US National Academy of Sciences addressed ways to “remove some of the current disincentives to high standards of integrity in science”, including a return to the never-implemented 1992 NAS recommendation to establish an independent Scientific Integrity Board to address ethical issues in research conduct (Alberts et al. 2015, p. 1420).

Twenty years after the CRI report, the NAS is returning to some of the fundamental issues it raised: appropriate roles for government agencies, research institutions and universities in promoting responsible research practices; the definition of research misconduct; and the viability of self-regulation by the scientific community (Committee). The report, due out shortly, should continue conversations begun so long ago.