Introduction: The Problem

More is invested in research than ever before, with about a million new publications listed on PubMed each year. However, industry studies that set out to test reproducibility found that as much as 90 % of the research published by academic laboratories cannot be reproduced (Begley and Ellis 2012; Prinz et al. 2011). Because repeatable experiments and observations are at the heart of the scientific method, this represents an enormous inefficiency. As well as wasting the time and resources of academic researchers, it has led to financial losses by pharmaceutical companies, both through their own actions and through reliance on faulty information. For example, after researchers at Baylor reported that a type of antihistamine called latrepirdine could improve symptoms in Alzheimer’s disease (Doody et al. 2008), Pfizer spent $725 million and carried out a clinical trial involving 600 patients, only to find that the drug did not work.

As John Ioannidis has convincingly argued (Ioannidis 2005), much of the lack of reproducibility might be due to publication bias and inappropriate statistical analysis. This, together with sloppily conducted science, probably accounts for the vast majority of the problem. However, it is clear that some of the errors in the literature, and some of the failures of research to be reproducible, are due to research misconduct, i.e., the deliberate fabrication or falsification of results. In surveys of researchers, about 2 % admitted to having fabricated or falsified results themselves, and a third admitted to other questionable research practices (Fanelli 2009).

For example, in 2003 the physicist Jan Hendrik Schön retracted no fewer than seven papers from Nature for scientific misconduct (“Retractions’ realities” 2003), eight from Science, and a further six from Physical Review journals. In hindsight, there were abundant warning signs, such as his prodigious output. In 2001 alone he was listed as an author on 40 primary papers.

Similarly, it was implausibly high productivity that gave cardiologist John Darsee away. He authored five major studies in his first 15 months at Harvard, in the lab of the renowned cardiologist Eugene Braunwald. Once the true story came out, he had to retract 30 papers and abstracts from his time at Harvard and another 50 from his earlier time at Emory (Knox 1983). But even these numbers pale beside those of Joachim Boldt, who has about 90 retractions, and Yoshitaka Fujii, who has 183 (see http://retractionwatch.com/category/yoshitaka-fujii/ and http://retractionwatch.com/2014/01/16/another-retraction-for-former-record-holder-joachim-boldt/).

Errors in research, whether due to misconduct or not, waste the money of governmental funding agencies; the estimated loss for each retracted paper is about $400,000 (Stern et al. 2014).

The problem of misconduct is not limited to academic laboratories. In 2005, the New England Journal of Medicine belatedly published an expression of concern about a paper from Merck that failed to mention heart attacks in three patients in the trial of Vioxx (Curfman et al. 2005). In 2004, Merck withdrew the drug and later settled legal action with a payment of $4.8 billion (Horton 2004) (http://www.officialvioxxsettlement.com/). In testimony to a Senate investigation, the FDA estimated that as many as 55,000 premature deaths might have been caused by Vioxx.

Two Aspects of Integrity in Research

For research to proceed efficiently, two aspects of scientific integrity need to be fostered. Firstly, there is the integrity of the scientific literature, which can accumulate errors due to inadvertent mistakes as well as deliberate falsification or fabrication of data, i.e., research misconduct. Secondly, there is the integrity of the scientists themselves, who need to act honestly in how they generate and report data, how they adhere to ethical regulations, and how fairly they allocate credit. Plagiarism, for example – the use of another’s words or ideas without attribution – might not cause scientific errors to enter the literature, but it is classed as research misconduct, because it is dishonesty in the conduct of research. Similarly, self-plagiarism, in which authors publish the same work more than once, does not introduce errors into the literature, but it unfairly claims credit for research productivity. Researchers also must act honestly when conducting peer review of papers and grant applications. If research is not perceived to be a fair process, or if cheating is tolerated, confidence in research as a career, and the willingness of people to engage in it and fund it, will be undermined.

Growing Number of Retractions

A bellwether of the problems in academic publishing has been the growing number of retractions. Journals can correct errors in the literature and alert their readers to problems in published papers in three ways: they can publish a correction, they can publish an editorial note of concern, or they can retract the paper, either with or without the authors’ consent. The number of papers that are retracted can give an indication of the amount of misconduct, but it is only a very crude measure, both because some papers are retracted due to innocent mistakes and because authors, journals, and institutions are reluctant to publish retractions, feeling that it damages their reputations.

The Web site Retraction Watch (http://retractionwatch.com/) and the journal Nature (Van Noorden 2011) have both commented on the growing number of retractions. Is this due to increasing incidence, or increased detection, or both? Although a relatively small proportion of retraction statements say that the reason for the retraction was research misconduct, when Fang et al. followed up to determine the reasons, they found that the majority (67 %) were for misconduct (Fang et al. 2012). In a subsequent paper, they attributed the increase in retractions to lower barriers to the publication of flawed papers, increased detection (particularly of plagiarism), and a growing willingness of journals to retract (Steen et al. 2013).

The Difference Between Poor Practices and Misconduct (Intent)

Errors in the scientific literature, and the poor reproducibility of research findings, most likely occur for three reasons. Firstly, a small number of errors are due to chance alone. If 20 laboratories all perform the same experiment, the lab with an anomalous positive result might publish its findings, whereas the 19 other labs that did not make this observation would not even submit theirs. A much greater source of errors is sloppy research with poor controls, lack of blinding, unvalidated reagents, etc. These are the “flags” that Begley refers to in his commentary (Begley 2013). Lastly, there are the errors that arise from deliberate falsification or fabrication of data. These, together with plagiarism, are usually used to define “research misconduct,” and the critical element is intent, i.e., the work was done in order to deceive.

Although all research misconduct shares the common features of being deliberate and dishonest, its seriousness varies enormously, from the very minor, such as deliberately failing to cite competitors, to the extremely serious, such as falsifying data in ways that endanger the lives of human research subjects.

The Singapore Statement

In 2010, the Second World Conference on Research Integrity produced the Singapore Statement on Research Integrity (http://www.singaporestatement.org/). It provides a concise description of how researchers should behave, based on principles of honesty, accountability, fairness, and good stewardship. Among the 14 responsibilities it lists are reporting findings fully, maintaining records, including as authors all those, and only those, who meet the criteria applicable to the research field, giving credit to those who have contributed but are not authors, and declaring conflicts of interest.

Incentives

The main motivations for misconduct are, at their base, either financial or reputational. As fewer and fewer researchers hold tenured positions, and more and more rely on competitive grants to fund both their salaries and their laboratory costs, scientists know that if they do not keep publishing, their careers will be at an end. This is compounded when funding is based on non-objective measures or on simplified metrics such as the volume of publications, rather than their quality. Similarly, students and postdoctoral researchers know that if their experiments fail, they will not get publications, and their next career step will be jeopardized. Foreign students and postdocs know that a successful experiment published in a prominent journal can lead to residency, citizenship, and perhaps a tenure-track position, whereas experiments that fail to produce the hoped-for result will mean a return to their home country. Thus, the temptation to dishonestly generate experimental results is ultimately financial, but the aim is rarely to gain riches and more frequently simply to keep a job (Kornfeld 2012).

Among more senior researchers, including those with job security, there are strong incentives to build a reputation: to publish consistently in high-profile journals, to be invited to give plenary talks at international meetings, to be elected to academies, and to be awarded prizes.

Such pressures have not only tempted researchers to fabricate papers, they have also led some to corrupt the peer review process by tricking editors into letting them act as referees for their own manuscripts (Ferguson et al. 2014).

The case summaries from the US Office of Research Integrity give some insight into how research misconduct occurs, how it is (sometimes) brought to light, and what sort of penalties are applied. For example, Dr. Jun Fu was a postdoctoral fellow at the University of Texas MD Anderson Cancer Center (https://ori.hhs.gov/content/case-summary-fu-jun). Having admitted to intentionally falsifying a figure in a research publication, he entered into a 2-year voluntary settlement agreement under which his research was to be supervised and certified by his employing institution and he could not sit on grant review committees. Adam Marcus and Ivan Oransky discuss the penalties handed down to those found guilty in an article in the New York Times (http://www.nytimes.com/2014/07/11/opinion/crack-down-on-scientific-fraudsters.html?_r=0).

Fabrication/Falsification

It is important to realize that there is a wide spectrum of severity of research misconduct. At the less severe end of the scale are practices such as intentionally failing to cite the work of competitors and citing your own work more frequently than necessary. Similarly, cropping out cross-reactive bands in Western blots or changing the white threshold of an image to “clean up” the background should not be done, because it alters the original data, but it is a relatively mild sin. At the other end of the scale is the generation of data by simply making up numbers, or the generation of false images by duplicating, altering, and relabeling existing ones.

In determining the severity of the misconduct, or whether it is misconduct at all, it is important to determine the degree of intent, although this is not always easy.

Figures in papers are often composed of many similar-looking parts, whether they be photomicrographs, gels and blots, flow cytometry plots, or traces from a patch-clamp amplifier. It is therefore always possible for someone to inadvertently grab the same image file twice, leading to a duplicated and wrongly labeled part of a figure. On the other hand, if many duplications are found in the figures of a paper, and they also involve rotations, differential cropping, or mirror images, and if similar anomalies are apparent in other works by the same authors, deliberate falsification or fabrication is much more likely. With increased pressure to publish, and the availability of image processing software, the temptation to cut corners and artificially generate the desired result has never been greater (Rossner and Yamada 2004). Hundreds of examples can be found on the post-publication peer review site PubPeer (https://pubpeer.com/). However, although sites such as this can alert readers to concerns about research papers and can provide very strong evidence, they do not provide proof of intent or reveal which of the authors on multiauthor papers bears responsibility. For this, action is required either by the authors themselves or through an inquiry established by their institution.
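
Duplications of this kind can also be screened for semiautomatically. Below is a minimal sketch of one common idea: comparing perceptual “average hashes” of figure panels, including their rotated and mirrored variants. It is purely illustrative; the function names and the bit threshold are our assumptions, not the method of any particular journal or screening service.

```python
# A minimal sketch of screening figure panels for possible duplication
# by comparing "average hashes" (tiny thresholded thumbnails).
# Illustrative only: names and thresholds are assumptions.
from itertools import combinations

import numpy as np
from PIL import Image  # pip install pillow

def average_hash(path, size=16):
    """Hash a panel: a size x size grid of booleans marking pixels
    brighter than the panel's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=float)
    return pixels > pixels.mean()

def hash_variants(h):
    """Rotations and mirror images of a hash, to approximate panels
    that were duplicated and then rotated or flipped."""
    for k in range(4):
        rotated = np.rot90(h, k)
        yield rotated
        yield np.fliplr(rotated)

def suspicious_pairs(paths, max_differing_bits=10):
    """Pairs of panel image files whose hashes nearly coincide."""
    hashes = {p: average_hash(p) for p in paths}
    flagged = []
    for a, b in combinations(paths, 2):
        if any(np.count_nonzero(hashes[a] != v) <= max_differing_bits
               for v in hash_variants(hashes[b])):
            flagged.append((a, b))
    return flagged
```

Panels flagged this way are only leads for inspection by eye; as noted above, establishing intent still requires the original data and an institutional inquiry.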

For the last decade or so, many journals have explicitly stated in their guidelines to authors what kinds of image manipulation are acceptable and what kinds are not. The Journal of Cell Biology (JCB) has shown leadership in this area (http://jcb.rupress.org/site/misc/ifora.xhtml). Currently, however, even those journals that do have clear guidelines vary in how rigorously they enforce compliance or publish corrections when authors breach them.

Stealing Credit

The importance of obtaining credit for work is illustrated by the frequency and vehemence of authorship disputes. Papers are the primary currency of research, and authorship is therefore the main mechanism for determining how credit is allocated. Authorship therefore gives benefits but also carries responsibilities (Strange 2008).

Like other forms of misbehavior, authorship issues can range from the trivial to the serious, with plagiarism – the taking of another’s words or ideas without attribution – being classified as “research misconduct,” along with fabrication and falsification. The reason authorship is so important is that it is the currency that determines not only honors, such as prizes and membership of academies, but also the grants and fellowships that pay the researcher’s salary.

In life science publications from academic institutions, the first author is usually the student or postdoc who did most of the hands-on experimental work. The last author is typically the laboratory head. Usually, authors in between will be closer to the first position if they have contributed experimental data and closer to the last position if they have provided analysis and writing.

Peter Lawrence highlighted the problem of misallocation of credit in a Commentary in Nature in 2002 (Lawrence 2002). He listed many examples in which senior researchers, who were not even present when discoveries were made, nevertheless received the accolades for the breakthrough, often to the exclusion of the more junior colleagues who actually did the work. This phenomenon has been termed “The Matthew Effect” (Merton 1968). Merton likened the flow of credit away from the junior researchers who produce the work, and toward the senior researchers who do not, to the Gospel of Matthew:

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath. Matthew 25:29

Although there have been calls for many years to reform the unfair power structures that determine how credit is allocated in science (“On being a scientist. Committee on the Conduct of Science, National Academy of Sciences of the United States of America” 1989), little has changed.

Ghost and Honorary Authorship

Two of the unethical ways in which authorship is corrupted are known as ghost and honorary authorship. Ghost authorship is when someone who fulfills the usual requirements to be listed as an author – namely, having provided substantial intellectual input to a paper – is not named among the authors. Pharmaceutical companies have used ghost authorship as a way of hiding their role in a publication. For example, Merck was accused of using ghost writers for papers published about its antiarthritic drug Vioxx, which allowed it to avoid disclosing relevant financial relationships (Ross et al. 2008).

Honorary authorship is when an author is listed without having fulfilled the usual requirements to justify their inclusion, i.e., without having made a substantial intellectual contribution to the paper. Sometimes when drug companies write papers, they offer honorary authorships to “opinion leaders” in order to influence clinicians. For example, internal documents released by Wyeth to an inquiry by the US Senate showed that it commissioned papers about its hormone replacement drug Premarin and then recruited prominent clinicians to act as the authors, whereas those from DesignWrite, the company that wrote the papers, were not listed (http://www.grassley.senate.gov).

Honorary inclusion as an author can also be claimed by department or laboratory heads for work that they have not produced themselves, or it can be offered to friends or collaborators to curry favor. The honorary inclusion of a famous person or someone known to the journal’s editors can increase the chances that a paper is sent out for review. Honorary authorship on one paper can be offered by a group leader in exchange for honorary inclusion as an author on another group’s paper.

Whether honorary or ghost authorship is classed as research misconduct varies among nations. Under the Australian Code for the Responsible Conduct of Research, obtaining grant funding, general supervision, or the provision of reagents from third parties does not, in and of itself, justify inclusion as an author, and, moreover, inappropriate authorship is listed as an example of research misconduct. In contrast, in the USA, authorship issues (other than plagiarism) are not considered misconduct by the Office of Research Integrity.

In a positive move, many journals are now adopting the practice of listing the specific contributions of each author. This discourages honorary authorship, makes it easier to know who should receive the most credit, and, if an error is subsequently found in the paper, helps to determine who might be responsible. For example, in the paper by Kapoor et al. (2014), there are 30 authors listed, but the “Author Contributions” section mentions only three, revealing which of the authors actually contributed.

What to Look for (Red Flags)

People can become aware of accidental errors, or possibly deliberate research misconduct, in two ways. Firstly, they might notice the misbehavior of a colleague or a co-author directly. Alternatively, they might see something as a third party: when reading a paper, when reviewing a manuscript for a journal, or when acting as an editor.

Whether it is before a paper is written or after it is submitted and published, the earlier errors are noticed and corrected the better. When criticizing work at lab meetings, during manuscript review, or when reading published papers, there are a number of “red flags” that can signal sloppy science or possible misconduct.

Similar text, which may amount to plagiarism, can be detected by simple Google searches or by commercial software that is available at many institutions (e.g., “iThenticate” (http://www.ithenticate.com/) and “Turnitin” (http://turnitin.com/)).
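
Under the hood, such tools rely on the fact that long runs of identical words rarely occur by chance. The sketch below is a minimal illustration of that idea, not the proprietary algorithm of either product; the n-gram length and the suggested threshold are assumptions for illustration.

```python
# A minimal sketch of word n-gram overlap, the idea underlying
# text-similarity checks. Illustrative only.
import re

def word_ngrams(text, n=5):
    """Set of lower-cased word n-grams in a document."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=5):
    """Jaccard similarity of the two documents' word n-gram sets.
    Identical runs of five words rarely occur by chance, so even a
    modest score (say, above 0.05) merits reading the texts side by side."""
    a, b = word_ngrams(doc_a, n), word_ngrams(doc_b, n)
    return len(a & b) / len(a | b) if (a or b) else 0.0
```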

Sloppy statistics, such as failing to describe the type of error bars shown in figures, or results that look implausibly consistent, can be a giveaway.
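
“Implausibly consistent” can be made quantitative. As a minimal sketch (similar in spirit to chi-square checks that have been used to screen clinical-trial baseline tables, and assuming reported means and standard errors are available), the following asks how often truly independent replicates would agree as closely as the reported ones; the function name is ours.

```python
# A minimal sketch of one "implausible consistency" screen.
# Illustrative only, not an established standard.
import numpy as np
from scipy import stats  # pip install scipy

def too_consistent_pvalue(means, sems):
    """Probability of independent group means agreeing at least this
    closely, given their reported standard errors. A very small value
    flags results that are more consistent than chance allows."""
    means = np.asarray(means, dtype=float)
    sems = np.asarray(sems, dtype=float)
    weights = 1.0 / sems ** 2                       # precision weights
    grand = np.sum(weights * means) / np.sum(weights)
    chi2 = np.sum(((means - grand) / sems) ** 2)    # ~ chi2(k-1) under H0
    return stats.chi2.cdf(chi2, df=len(means) - 1)  # left tail = "too close"

# Five "independent" means that agree to a hundredth of a unit despite
# SEMs of 0.5 yield a p-value around 2e-7: implausibly consistent.
print(too_consistent_pvalue([10.01, 10.02, 10.00, 10.01, 10.02],
                            [0.5, 0.5, 0.5, 0.5, 0.5]))
```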

Images should be looked at on a computer screen, rather than on a printed copy, because the resolution is greater, it is possible to zoom in, and the contrast and brightness can be altered. Things that should raise concern include sudden linear changes in brightness of the background of an image, a washed-out or perfectly uniform background, inadequate resolution, or parts of an image that appear to be duplicated. For more examples, see PubPeer (https://pubpeer.com/) and papers by Vaux and Begley (Begley 2013; Vaux 2008).
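
Some of these checks lend themselves to simple automation. The sketch below, which assumes a grayscale image file and uses purely illustrative tile sizes and thresholds, flags tiles with a perfectly uniform background and abrupt tile-to-tile brightness jumps of the kind described above.

```python
# A minimal sketch of automating two of the red flags above: patches
# of perfectly uniform background (possible erasure) and abrupt jumps
# in background brightness (possible splicing). Thresholds are
# illustrative assumptions.
import numpy as np
from PIL import Image  # pip install pillow

def background_red_flags(path, tile=32, flat_std=1.0, jump=30.0):
    """Scan a grayscale image in tile x tile blocks and report tiles
    with near-zero variance, plus horizontally neighboring tiles whose
    mean brightness differs sharply."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    rows, cols = img.shape[0] // tile, img.shape[1] // tile
    means = np.zeros((rows, cols))
    flags = []
    for r in range(rows):
        for c in range(cols):
            patch = img[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            means[r, c] = patch.mean()
            if patch.std() < flat_std:
                flags.append(("uniform tile", r, c))
    for r, c in zip(*np.where(np.abs(np.diff(means, axis=1)) > jump)):
        flags.append(("brightness jump", int(r), int(c)))
    return flags
```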

Researchers have a duty to take action if they become aware of errors or possible research misconduct. If they notice a mistake in one of their own publications, they should write to the journal and ask them to publish a correction or, if the mistake affects the conclusions of the paper, ask for it to be retracted. If a colleague is suspected of error or misconduct, the action to take would depend on the specific circumstances, such as whether it involves a publication or not, whether the colleague is more senior or junior, and whether the error is thought to be accidental or deliberate. Well-run institutions have mechanisms in place so that researchers can easily obtain advice on what to do.

If an error is found in a publication by a third party, the options are to contact one or more of the authors, a responsible person at the host institution, or the journal editors; to post a post-publication peer review comment on the Web; and/or to contact the national integrity office (if there is one).

Peer Review and the Responsibilities of Journals

In the general journals (e.g., Nature, Science, PNAS), and in most of the life science journals, manuscripts are submitted online (via the Web) and are first seen by a member of the editorial board. In the high-profile general journals, the editors are full-time paid employees of the publisher. At the other journals, the editors are usually part-time, and may be paid or volunteers, but are typically prominent researchers with expertise in the field covered by the journal.

The first decision the editor needs to make is whether to send the manuscript out for review. Although in an ideal world this decision would be made on the basis of the scientific content of the paper, editors are often busy and make the decision without reading the paper, just on the basis of the title and abstract, whether the authors are known to them, and whether they come from an institution they respect. This arises much more frequently in the high-profile general journals, because their editors will seldom have deep expertise in the area the paper addresses. In other words, publication bias can arise because editors often do not base their decisions on the science alone. The influence particular authors can have on the decision to consider a paper for publication, and the biases against papers from authors or institutions that are unknown to the editors, are illustrated by the Korean stem cell case.

In the years prior to publication of the papers that were later found to be fabricated, the Korean stem cell expert Dr. Woo Suk Hwang had trouble publishing his work in high-profile journals; they would usually refuse even to send his manuscripts out for review. When Hwang met Dr. Gerald Schatten, a prominent stem cell researcher from the University of Pittsburgh, Schatten offered to help Hwang get his papers published in journals such as Nature and Science in exchange for being listed as an author. When the story subsequently broke that the two papers in Science had been fabricated, Schatten’s defense was that he had not participated in or overseen any aspect of the work and had not interacted with most of the scientists who did the experiments. He also claimed to have had minimal involvement with another co-authored paper in Nature (Marris and Check 2006). The lesson from this episode is that there is bias in what gets published: acceptance of a paper – especially in the high-profile journals – can be based more on who the authors are and where they come from than on the quality of the scientific content.

In this single-blind process, which operates at most scientific journals, the same problem arises with the reviewers. If an editor does send a manuscript out for review, knowing who the authors are might influence whether the editor chooses reviewers thought to be extra tough or extra lax. When the reviewers receive the manuscript, the first thing they will look at is the names of the authors, and if the authors are known to them, or are collaborators or competitors, it might influence their attitude to the paper.

In the 19 February 2004 edition of Nature, there were ten papers with figures that showed error bars, but only three of the papers described anywhere what the error bars represented (Vaux 2004). This suggests that seven of the ten papers had not been read carefully by the authors, reviewers, or editors. As the decision to publish had been made without the papers being read carefully, it must have been based on something else. The most likely explanation is reviewer and editor bias.

Journals Should Screen the Data in All Accepted Papers Prior to Publication

Journals play several important roles in ensuring the integrity of scientific research. They have the final say in publishing corrections, editorial notes of concern, and retractions. As gatekeepers for what gets published, they can prevent erroneous or falsified papers from appearing, but to do so they must operate a rigorous peer review process. If journals are alerted to potential problems by reviewers or readers, determining the validity of the allegations, and which of the authors is responsible, usually requires the cooperation of the authors’ institutions, but this might not be requested and, if requested, might not be granted.

With leadership from Dr. Mike Rossner, the JCB has led the way in adopting methods to prevent the publication of erroneous figures (Rossner 2006). The JCB routinely screens the images and figures in all manuscripts that have been accepted by the reviewers but not yet published, looking for inadequate resolution, sudden changes in brightness, loss of visibility of the background, over-enhancement of contrast, etc. They find that for 25 % of papers they need to ask the authors for the original data and to remake a figure, and in about 1 % of cases they revoke acceptance. Practices at other journals vary: Nature checks the images in just two of the articles in each issue; Science relies mainly on its reviewers to identify problems; and the Journal of Biological Chemistry has adopted many of the same author and review guidelines as the JCB but does not routinely ensure compliance (Couzin 2006). However, almost all journals have now at least published image guidelines, so authors know up-front what minimum resolution is acceptable, whether and how images can be altered, cropped, and spliced, and how statistics should be described.

COPE

COPE, the Committee on Publication Ethics, has been a great source of advice for journal editors since its establishment in 1997. Although its mandate is limited, and it was established by journal editors to help other editors, its efforts have raised the standards of publication integrity and have also provided benefits that have flowed on to authors, publishers, and institutions. For example, the COPE flowcharts, which give step-by-step recommendations on how to handle a variety of misconduct-related issues, have been helpful to countless editors and have also helped whistle-blowers and authors know what to expect (http://publicationethics.org/resources/flowcharts).

Responsibilities of Institutions

The way the host institution manages allegations of research misconduct is critical, but such cases are often handled suboptimally, not least because of conflicts of interest (such as fear of reputational damage) and a lack of experience and established protocols.

In trying to avoid reputational damage when a case of research misconduct becomes public, an institution can risk even greater damage by engaging in a cover-up. Yet institutions play an essential role, because unlike the publishers and readers of research papers, they have access to the authors’ original data and can interview each of the authors individually to try to determine who was responsible for any mistakes or misconduct.

Institutions can hear of concerns of possible research misconduct from outsiders, such as journal editors or readers of papers or grant applications, or they can be contacted by a whistle-blower, who might also be a member of the same institution or even a close colleague of the person being accused. In many countries, investigations have two phases. First, there is a preliminary investigation, with collection and securing of evidence. The main goal of this stage is to determine whether the allegations lack substance and can be dismissed. If the case cannot be dismissed, the investigation should continue to a more thorough stage.

Unless the case can be summarily dismissed, e.g., because it is apparent the allegations were mistaken, issues that need to be addressed on a case-by-case basis include: Can the matter proceed on the basis of anonymous allegations, or does the name of the accuser need to be revealed? When is it best to inform the person against whom the allegation is made? What sort of supporters and advisors should be appointed to counsel and assist both the complainant and the accused? Which people should be interviewed, and who should be present? Should the investigation be external and independent, or internal? At what stage should other interested parties, such as funding bodies and journals, be informed? Do any expert investigators or witnesses need to be consulted? Who should be indemnified? When and what kind of legal advice should be sought?

Much useful advice on conducting investigations can be found at the ORI site (http://ori.hhs.gov/investigations). As outlined on this website, the key goal of the investigation is to substantiate or refute the allegation. The investigation must be carried out without regard to the motivation or status of the accuser, and the inquiry panel is responsible for gathering and assessing the evidence and conducting the case. The burden of proof lies with the inquiry panel, not the accuser. The terms of reference for the inquiry should not be set narrowly, so that if the panel uncovers additional evidence of misconduct during its investigation, it can extend the inquiry until all related instances of misconduct are uncovered.

Once the inquiry panel has made its findings of fact, the host institution has to determine the best way to make restitution. The institution bears responsibilities to the scientific public and the journals, which can be fulfilled by correcting or retracting publications. It also has responsibilities to funding bodies, which might involve alerting them or returning funds. It has a responsibility to those who have been found to have engaged in misconduct, to apply sanctions that are proportionate and, ideally, to provide a path for reform. If mistreatment of animal or human research subjects was involved, it has a responsibility to determine what went wrong in its governance, so that similar failures will not be repeated.

Unfortunately, when an allegation is forwarded to an institutional official, that official may not have much experience in handling such cases. Where there is a national office or ombudsman for research integrity, officials can seek advice from it, but in countries where there is no such body, institutional officials administering cases of potential misconduct can find themselves alone, which makes mistakes much more likely.

Considering the two aspects of research integrity (integrity of the scientific record and integrity in the practice of science) in cases of research misconduct, the role institutions can play in upholding the former is more straightforward, as it will involve publishing corrections or retractions, although a cooperative relationship with the journals is essential. COPE has published guidelines for cooperation between research institutions and journals on research integrity cases (Wager and Kleinert 2012). They recommend that institutions:

  • Have a research integrity officer (or office) and publish their contact details prominently;

  • Inform journals about cases of proven misconduct that affect the reliability or attribution of work that they have published;

  • Respond to journals if they request information about issues, such as disputed authorship, misleading reporting, competing interests, or other factors, including honest errors, that could affect the reliability of published work;

  • Initiate inquiries into allegations of research misconduct or unacceptable publication practice raised by journals; and

  • Have policies supporting responsible research conduct and systems in place for investigating suspected research misconduct.

The path to upholding integrity in the practice of science is less straightforward. The overarching principle is that research should be conducted honestly, and credit should be awarded fairly (“On being a scientist. Committee on the Conduct of Science, National Academy of Sciences of the United States of America” 1989). For this, education and classes in research integrity principles will have less impact than researchers and administrators leading by example, having procedures in place to handle allegations of misconduct and manage such cases efficiently, and not tolerating those who cheat. Researchers in countries with research integrity offices or ombudsmen have a source of advice on how to make allegations of misconduct and how to conduct investigations. Integrity offices also provide oversight to ensure allegations of misconduct are handled appropriately. In some countries, such as the USA and the UK, the national science academies play an active leadership role in upholding research integrity, for example, by publishing articles on research ethics and mentoring, such as the one mentioned above, or by discussing research ethics online (http://blogs.royalsociety.org/in-verba/author/elizabethb/). Others should follow their example or set even higher standards.

Institutions are wise to have procedures in place that anticipate the occurrence of research misconduct. Relying on education and the promotion of integrity principles alone is unlikely to prevent all occurrences of research misconduct (Kornfeld 2012); measures to ensure compliance are also required. Heavy-handed, restrictive “big brother” approaches are expensive to implement and likely to cause resentment. The “fire alarm” approach to handling misconduct is both cheap and likely to be effective.

In the “fire alarm” model, researchers are not required to know how to investigate and manage cases of misconduct themselves; they are just required to “push the alarm button” to summon help when they see something that causes them concern. The key requirements of this model are that everyone must know how to sound the alarm and that, once the alarm is sounded, the institution must have protocols in place to take action. The “fire alarm” model is relatively cheap to operate (e.g., compared to a surveillance model), empowers whistle-blowers, and is less likely to generate antagonism with administrators than other systems. In addition, as colleagues are the most likely to spot problems, have the knowledge to distinguish what is acceptable in their particular field, and may see things early, the fire alarm model is more likely to minimize the amount of damage that occurs. The fire alarm model could, like all other models, be abused, for example, if opponents of a scientist were to make multiple complaints as a form of harassment; whether action is taken against the accused person, or whether they are even informed of the allegation, would therefore depend on the nature of the allegation and the strength of the evidence provided by the whistle-blower or gathered in a preliminary investigation.

Roles and Responsibilities of Whistle-Blowers/Individuals

As written in the Singapore Statement on Research Integrity (http://www.singaporestatement.org/), researchers have a duty to report to the appropriate authorities any suspected research misconduct and other irresponsible research practices that undermine the trustworthiness of research.

How best to fulfill this duty is a complex question that depends greatly on circumstance. Issues that need to be considered include:

  • Anonymity (whether the whistle-blower’s name needs to be revealed);

  • Who to raise concerns with (journal editors, authors, institutional officials, national research integrity offices, department heads, the individual who is suspected, post-publication peer review (PPPR) sites such as PubPeer, PubMed comments, funding bodies);

  • The position of the whistle-blower in the hierarchy;

  • Whether delay could cause harm to human subjects or experimental animals;

  • The nature of potential conflicts of interest; and

  • The prevailing legal environment and whether it protects free speech.

Just as all researchers have a duty to report concerns of possible research misconduct, all would be wise to seek advice first. A search of the Web provides links to many national whistle-blower organizations.

In Australia, the Code for the Responsible Conduct of Research states that institutions must appoint one or more “advisers in research integrity,” so that those who have concerns can get confidential advice. The advisers inform individuals of the options they have and, for example, how to make a formal allegation. The adviser’s role is one of support; they are not to investigate the case.

The Way Forward

The increasing numbers of retractions indicate a growing awareness of issues of research integrity and new avenues for reporting concerns. The Web has made anonymous post-publication peer review possible, on sites such as PubPeer. Individual scandals have prompted the strengthening of practices that promote research integrity in a number of countries, leading to the establishment of offices for research integrity (ORIs) or research integrity ombudsmen. This has culminated in the series of World Conferences on Research Integrity (http://wcri2015.org/), held every few years since the first in Portugal in 2007.

The Promise of the Web

In recent years, alarm about falling integrity in science has prompted a number of positive responses. The growth of the Internet has made it possible for bloggers to raise concerns anonymously. For example, it was concerns initially raised in a blog, and then publicized in the popular media, that ultimately led to the retraction of Woo Suk Hwang’s stem cell papers (Kennedy 2006). Blogs reporting allegations of research misconduct, such as the Abnormal Science blog (http://ktwop.com/tag/abnormal-science/), 11jigen’s blog (http://katolab-imagefraud.blogspot.com.au/), and Paul Brookes’s science fraud blog (which was closed down following legal threats), have given way to more organized post-publication peer review sites, such as PubPeer (https://pubpeer.com/). PubPeer allows concerns about any published paper to be raised anonymously; it automatically contacts the authors and invites them to respond. PubPeer has itself been threatened with legal action demanding that it release the names of registered commenters, but the strong freedom-of-speech laws in the USA give it more protection than it would enjoy in other countries.

World Conferences and National Offices for Research Integrity

There have been four World Conferences on Research Integrity (http://www.researchintegrity.org/). These not only provide an opportunity for researchers, administrators, editors, and publishers to air their issues and propose possible solutions, they also allow the latest research into scientific integrity to be discussed. The Second World Conference on Research Integrity, in Singapore, produced the Singapore Statement (http://www.singaporestatement.org/), which succinctly describes 14 responsibilities of scientists and shows how they flow from a set of four principles.

Several countries have established national offices for research integrity (ORIs) or ombudsmen for research integrity. The ORI in the USA (http://ori.hhs.gov/) will oversee any allegation of misconduct involving NIH-funded research in the previous 5 years. The NSF has a similar office where concerns can be lodged (http://www.nsf.gov/oig/hotline.jsp). In Germany, the DFG has an Ombudsman for research integrity (http://www.ombudsman-fuer-die-wissenschaft.de/). Denmark has a Committee on Scientific Dishonesty (http://ufm.dk/en/research-and-innovation/councils-and-commissions/the-danish-committees-on-scientific-dishonesty). Those countries that do not have a national office that can handle confidential reports of possible research misconduct leave its management in the hands of the research institutions, where serious conflicts of interest almost inevitably arise.

Improving Scientific Integrity in Publishing

Double-blind peer review (DBR) offers one way of reducing publication bias. In DBR, the authors’ names and affiliations are submitted on a Web page that is shown neither to the editor who decides whether the paper should be sent out for review nor to the reviewers themselves. Both are left to give their opinions on the merits of the science alone, not on whether they know the authors. Like the double-blind clinical trial, DBR is an innovation that attempts to reduce bias and increase objectivity in scientific publications (Vaux 2011). Post-publication peer review, whether on a dedicated site such as PubPeer, as part of PubMed Commons (http://www.ncbi.nlm.nih.gov/pubmedcommons/), or on a site hosted by the publisher, should improve the integrity of the literature and de-emphasize the published paper as the be-all and end-all of career advancement.

Summary

Although it remains true that science is ultimately self-correcting, society as a whole will benefit more, and progress will be more rapid, if research is conducted efficiently. Doing so requires minimizing the number of errors that enter the literature and quickly correcting those that inevitably do. Research will also be performed more efficiently if those who conduct it are fair and honest. However, as a human endeavor, science must be managed actively for its integrity to be upheld. This requires not only a bottom-up, “grass roots” effort based on principles of honesty and fairness, but also some top-down mechanisms to ensure compliance. There must be mechanisms in place so that errors and concerns of possible misconduct can be reported. Publishers should try to minimize the entry of errors into the literature by screening manuscripts and using unbiased peer review, and they should cooperate with institutions when problems arise. Nations and national scientific academies should provide mechanisms to offer advice and oversight for research institutions. Researchers need to have integrity in how they conduct themselves, and, whether through official channels or anonymously via the Web, when they see errors or have concerns about possible misconduct, they should, after seeking advice, speak up.