Robert Oppenheimer once mused that if scientists at the end of the 1950s continued to publish at their current rate, by the end of the century their journals would weigh more than the earth (Sovacool 2005, 3). His statement underscores how “big” science has truly become in modern society. In 2003, for instance, between six and eight million scientists were employed in research and development in the United States. Their activities (roughly 40% of the world’s R&D effort) constituted a $300 billion industry, accounting for roughly 3.2% of the country’s gross domestic product (Resnik 2007, 1–4). Indeed, the practice of science has come to be associated with vast improvements in our standards of living, breathtaking technological advances, and remarkable gains in human knowledge.

Yet perhaps because of its significance in contemporary culture, the issue of scientific misconduct has also come under intense public scrutiny, especially in the past few years. The Los Angeles Times ran a series of stories in 2003 about high-level researchers at the U.S. National Institutes of Health (NIH) and National Science Foundation (NSF) receiving money for patents and maintaining business connections with pharmaceutical companies (Krimsky 2005). Equally troubling, Johnson & Johnson, A. H. Robins, Merrill Dow, and the asbestos, vinyl chloride, and tobacco industries have all recently been convicted in court of concealing information about the negative health effects of their products (Wagner and Michaels 2004).

These examples, along with others, have engendered a debate among scientists, ethicists, lawyers, journal editors, university administrators, and sociologists about the causes behind misconduct and necessary responses to it. Some suggest that misconduct is an individual problem and extremely rare. Others note that some institutions inadvertently promote misconduct by pressuring researchers to publish and over-perform, making the problem institutional. Still others argue the true problem is structural, and that the way that modern science is practiced makes ethically questionable behavior inevitable.

Helga Nowotny (2000) suggests that, in order to maintain their cohesion and appeal, all persuasive narratives, which she calls ‘narratives of expertise,’ must be transgressive, collective, and self-authorizing. They must be transgressive in the sense that they respond to issues and questions that are never purely within their disciplinary domain and that overlap with various areas of social life. They must be collective in the sense that they are told in a voice that extends beyond the competence of the individual expert; such narratives seemingly speak with a carefully orchestrated, concerted voice, and dissenting views are permitted to deviate as long as they do not challenge fundamental values. Finally, such narratives must be self-authorizing, locating their authority neither in one specific site nor among a single group of highly respected researchers, but in the process of accommodating a heterogeneous set of actors.

Narratives concerning scientific misconduct, however, seem to lack the transgressive, collective, and self-authorizing cohesion needed to maintain their allure. Consider the case of Hwang Woo-Suk, a South Korean scientist who recently admitted to fabricating data and embezzling funds related to his research on stem cells. Alexander Bogner and Wolfgang Menz (2006) argue that the Hwang case can really be conceptualized according to three different narratives (or ways of explaining events).

First, Hwang’s case can be understood as a narrow psychological problem concerning a few rotten individuals. This portrayal, perpetuated by newspaper commentaries, suggests that the reason behind misconduct is the base or selfish motives of those involved. The solution is to introduce proper control and evaluation mechanisms that allow scientists to better police themselves.

Second, it can be understood as a failure of a system of quality control in science production. This portrayal suggests that the true problem lies in certain institutions that pressure scientists to increase their output massively to achieve grants and other rewards such as tenure and recognition. Misconduct occurs when the institutions themselves fail. The solution here is to call on universities and journals to improve their safeguards (such as peer review and transparency in research records).

But third—and perhaps most interesting—misconduct can be read as a deeper, structural problem, or the failure of scientific institutions to promote proper values. In biomedicine, for instance, practitioners may be motivated by the desire to heal people, to make a profit, to contribute to overall scientific knowledge, or to a variety of other overlapping incentives. Here, the problem is one of structural values and norms—which ones are promoted in modern science, and do they make misconduct inevitable?

Drawing from research in sociology, science and technology studies, history, economics, political science, and psychology, this article explores two sets of questions. First, are these three narratives of misconduct isolated only to the Hwang case, or can they apply to the entire practice of science? Second, which set of policy mechanisms does each narrative promote as a response to the problem of scientific misconduct? To answer these questions, the paper begins by assessing contemporary definitions and estimates of scientific misconduct. It emphasizes disagreements over such definitions and estimates as a way to tease out tension and controversy over competing visions of scientific research.

Then, the article identifies three distinct narratives concerning scientific misconduct: a narrative of “individual impurity” promoted by those wishing to see science self-regulated; a narrative of “institutional failure” promoted by those seeking greater external control of science; and a narrative of “structural crisis” among those critiquing the entire process of research itself. It concludes by noting that each narrative suggests a different approach for resolving misconduct, and that the incommensurability of these views may help explain the inability of practitioners to reach consensus concerning unethical behavior in the scientific community.

Attempting to decide whether the problem of scientific misconduct is individual, institutional, or structural is extremely controversial. Yet, as John Abraham and Julie Sheppard have noted, focusing on controversy enables the underlying interests and values involved in scientific practice to be revealed and richly documented. This allows us to view “science” as an intersection of enduring structural interests, each maintaining a degree of public credibility and social legitimacy (Abraham and Sheppard 1999). Such an investigation seeks to illuminate what Sheila Jasanoff (1996) calls the “coproduction” of scientific and social order. Jasanoff argues that, in order to produce knowledge, scientists must seamlessly integrate both the “social” and the “scientific” into their work. Such an understanding “reshapes, however subtly or tentatively, the way we come to grips with the enduring problems of truth, power, agency, legitimacy, individual rights and social responsibility.”

Pointing Both Ways at Once: Contemporary Definitions and Estimates of Misconduct

In the U.S., the government has struggled to regulate ethical behavior in scientific research since 1966 (See Table 1).

Table 1 Timeline of important events in the regulation of scientific misconduct in the United States

Currently, the government classifies “scientific misconduct” as an intentional attempt by an investigator or scientist to manipulate data or fashion results. The Department of Health and Human Services defines misconduct as:

Fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. (a) Fabrication is making up data or results and recording or reporting them; (b) Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record; (c) Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit. Research misconduct does not include honest error or differences of opinion. (ORI 2005)

Rather than being simply black or white, instances of misconduct also differ by degree. The more severe types of culpability include purposeful (a direct desire or intention to commit misconduct), knowing (aware of the facts and circumstances concerning misconduct), and reckless (consciously disregarding regulations and norms). The definition is thus simultaneously narrow and broad: it narrows misconduct to only acts of fabrication, falsification, and plagiarism, but broadens misconduct to include those who act purposefully, knowingly, or recklessly.

Jocelyn Kaiser (1999, 391) suggests that defining misconduct has been “the most controversial issue” for the scientific community, one it has “agonized over for years.” A 1995 proposal to expand the definition of misconduct, headed by biologist Kenneth Ryan, drew criticism for being too broad from members of the Department of Health and Human Services, American Association for the Advancement of Science, NSF, NIH, and the National Science and Technology Council.

Because of this discordance, everyone seems to suspect that some misconduct is occurring, but no one can seem to decide how much. Famous albeit alleged instances of scientific misconduct include exemplars such as Claudius Ptolemy (90–168), Galileo Galilei (1564–1642), Isaac Newton (1643–1727), and Gregor Mendel (1822–1884) (Fisher 1936; Feyerabend 1965; Newton 1977; Westfall 1994). In contemporary times, it could be that the majority of respondents who “know” of instances of misconduct are referring to the urban lore of the laboratory or a few highly publicized instances of infamous abuse (See Table 2).

Table 2 Examples of high profile cases of scientific misconduct in the United States, 1971 to 2001

Efforts to estimate scientific misconduct are further complicated by the difficulty in generalizing across disciplinary boundaries where notions of misconduct change. Unfortunately, anecdotal evidence seems to point both ways at once.

On one extreme are studies suggesting that misconduct is extraordinarily rare. Before 1989, the NIH received between 15 and 20 reports of misconduct per year, and the NSF investigated a total of only 12 charges from 1980 to 1987. Between 1993 and 2002, the Office of Research Integrity reported just 136 cases of scientific misconduct (Reynolds 2004). Another study suggested that the scientific literature is 99.9999% pure (i.e., that no more than one paper per million is fraudulent) (Shamoo and Resnik 2003).

On the other extreme are surveys suggesting that misconduct is uncomfortably common. A study at Arizona State University asked students in seven introductory biology and zoology courses whether they manipulated laboratory data to obtain desired results. A large majority (84% to 91%) admitted to manipulating laboratory data “almost always” or “often” (Marshall 2000). A survey of 3,247 principal investigators engaged in scientific research found that one third reported personally engaging in “serious misbehaviors” (Martinson et al. 2005). A study undertaken by the British Medical Journal found that more than half of the respondents knew of uninvestigated cases of misconduct (Farthing 2000), and the New Scientist conducted a survey in which 92% of readers responded that they knew of or suspected misconduct in their institution (Howard 1994). One author even went so far as to project that for every major case of misconduct publicly recorded, hundreds of thousands remain undetected (Shamoo and Resnik 2003). As one recent assessment concluded, “there is a troubling discrepancy between public statements about how ‘rare’ misconduct in research supposedly is and the more private belief on the part of many researchers that it is fairly common” (Marshall 2000, 1662).

Why the discrepancy? David B. Resnik (2003) argues that definitions and estimates of misconduct reflect fundamentally different beliefs about the incidence of misconduct, its causes, and its implications for science and society. On the one hand, there are those who believe misconduct is uncommon and seek to narrow its scope to fraud, fabrication, and plagiarism alone. On the other hand are those wishing to expand the definition of misconduct to include many unethical practices beyond fraud, fabrication, and plagiarism (such as conflicts of interest and exploitation of subordinates). Resnik argues that definitions and estimates of misconduct also serve as a battlefield for conflicting visions of the relationship between science and society. On one side of the debate are those wishing to see little public oversight or regulation of science. On the other are those desiring oversight and regulation, opening up science to greater public scrutiny. Finally, Resnik argues that such discrepancies reflect the competing goals of stakeholders connected with scientific research: some want to see only institutions held liable for fraud and misuse of federal funds, while others want to promote ethical education and improve protections for all human and animal subjects in scientific research.

Ultimately, three distinct narratives about scientific misconduct seem to be emerging: a narrative of “individual impurity” depicting relatively rare instances of abuse; a narrative of “institutional failure” suggesting misconduct will occur among institutions that inadvertently foster it; and a narrative of “structural crisis” pointing to deeper problems within the practice of modern science itself.

A Narrative of Individual Impurity

The first narrative views scientific misconduct predominantly as an individual problem. Science is seen as an institution that promotes a set of norms that guide most of its researchers, instilling a set of distinct values. Robert K. Merton (1973) identified the most famous of these as communalism, universalism, disinterestedness, and organized skepticism. Communalism refers to the common ownership of scientific discovery, and entails the belief that results should be shared among the entirety of the scientific community. Universalism means that all truth claims and research activities ought to be evaluated in terms of impersonal and objective criteria (and not subjective criteria such as gender or ethnicity). Disinterestedness means that researchers should approach projects as neutrally as possible, having no preconceived interests or biases associated with the outcomes of their research. And organized skepticism means that all ideas should be subject to rigorous and structured scientific scrutiny.

Such values are reinforced according to a rewards system within science that utilizes positive and negative sanctions to influence behavior. Positive rewards such as job security and promotion, citation, research grants, and honorific awards are given to those that adhere to the values of science. In contrast, negative sanctions such as dismissal, cessation of research, and suspension of grant writing are wielded against those subverting the common set of scientific values (Bridgstock 1982).

Such a narrative views scientific culture in structural or functionalist terms. Scientists are not merely puppets of a social system; they act within perceived constraints, behaving according to their own goals and values. Anthony Giddens captured this view by stating that “the structural properties of social systems … are like the walls of a room from which an individual cannot escape but inside which he or she is able to move around at will” (quoted in Pickering 1993, 583). Scientists are tacitly socialized over time to acquire the values of the scientific community.

Scientists seem to accept that Merton’s values, while widely endorsed, are not universally practiced. In September 1970, while giving the Presidential Address to the British Association of Science, the eminent physicist John Ziman (1970, 996) noted that:

Scientists will intrigue for political ends like any Jesuit, and can be as lordly as any consultant physician in the control of their juniors. They can deceive their wives, fiddle their tax returns, drive drunkenly, live beyond their means, feed parking meters, beat their children, and otherwise behave as antisocially as anyone else when the occasion demands.

Under this narrative, the problem of scientific misconduct reflects a far more basic problem: not everyone is good, and in every type of social activity “bad apples” will surface. There are scoundrels among scientists, just as among every other profession or part of society, but the key difference between science and other activities is that it instills values in its researchers that serve to make misbehavior a relatively rare problem.

The best solution, according to this narrative, is for scientists to police themselves. Only they possess the requisite knowledge, as research specialization makes it difficult for those outside of highly defined fields to differentiate unintentional error from deliberate deception. Such efforts, the thinking goes, can be enhanced with courses in ethics and responsible codes of conduct to teach young researchers about proper behavior. As Alan T. Lefor (2005, 881) argues, “one of the most important things we can do to combat this problem is to openly teach and discuss ethical issues in medicine and in science … each of us as teachers must also discuss this in the course of formal education.”

A Narrative of Institutional Failure

A second narrative, promoted by those calling for greater external control of science, takes a more moderate approach, suggesting that scientists are influenced by the institutions that support them. These voices suggest that institutions operate according to their own research pedagogies and differ greatly across disciplines and laboratories.

This second narrative suggests that the values of science are not monolithic, but instead reflect the institutional interests connected to a particular group of scientists. As Deena Weinstein (1979, 639) put it:

Fraud, deceit, cheating and mendacity are found within all the institutions that comprise contemporary society. Advertising and politics would be unrecognizable without these phenomena. Parents lie to their children, bankers embezzle funds, “lifetime” guarantees are not honored by either retailers or manufacturers, witnesses perjure themselves on the stand and the atheistic priest celebrates mass.

In this view, misconduct will occur when institutions create an environment that fosters it. The expensive nature of research projects has tended to depersonalize the research process and dilute responsibility among many investigators, forcing individuals to spend less time supervising personnel and interpreting investigative results. The National Academies of Sciences (2004) suggested that the emerging institutional culture that governs scientific research does not properly promote notions of integrity or honesty in the research environment. They concluded that no established measures exist for assessing integrity in the scientific research environment and that existing policies and procedures are not sufficient to ensure the responsible conduct of research. The Academies also noted that education is likely to be of only modest help because it is often delivered uncreatively and implemented inappropriately. An analogous study sponsored jointly by the NIH and ORI found that “certain features of the working environment of science may have unexpected and potentially detrimental effects on the ethical dimensions of a scientist’s work” (Martinson et al. 2005).

Accordingly, conflicting motives have discouraged scientists from adhering to policies of responsible research methods and codes of conduct at certain institutions. In the past two decades the University of Pennsylvania, Duke University, Johns Hopkins University, Purdue University, Cornell University, and Dartmouth College, among others, have all voluntarily or involuntarily suspended part or all of their research activities to halt suspected scientific misconduct (Sovacool 2005; Gibelman and Gelman 2005). Acts of misconduct at these institutions occurred in departments of psychology, nursing, medicine, biology, molecular biology, genetics, physics, chemistry, and psychiatry. Those involved included professors, graduate students, undergraduate students, research assistants, and deans.

The logic of the institutional failure narrative suggests that neither scientists nor institutions can be relied upon to stop misconduct. Conflicts of interest exist at multiple levels: universities often employ both the accuser and the accused; they depend on government funding and so have a disincentive to report findings of misconduct; they rely on the reputation of their active research and training programs to recruit students and achieve high national rankings; and they are subject to rigid internal hierarchies. A study conducted by the Association of American Medical Colleges concluded that the academic rewards system was actually one of the most important barriers to collaborative research: young people were simply discouraged from collaboration because they believed they had to demonstrate their independence (Cohen and Siegel 2005). This pressure to succeed through individual research creates an incentive for acts of misconduct to occur at the same time that it makes it easier for them to go undetected.

The narrative of institutional failure also suggests that ethics courses alone are insufficient to produce positive behavior—they focus on individual miscreants rather than broader change. This could be why a study of 172 University of Texas students enrolled in a “responsible conduct of research” course found “no significant change” in attitudes after training (Marshall 2000). Its results parallel a 1996 study that found that people who had gone through a training course in ethics were actually more willing to grant “honorary authorship” to colleagues that had not performed research than were those who had not been trained (Marshall 2000).

Instead, institutional standards aimed at protecting whistleblowers are sometimes advocated as a way to reduce misconduct. A 1995 ORI report on whistleblowing indicated that 68% of whistleblowers were damaged by making a claim of misconduct (Frankel 2000). A similar survey of 4,000 graduate students and faculty revealed that more than half of the students and 65% of faculty believed they could not report a faculty member for misconduct (Eisen and Berry 2002). Institutions may need to establish formal procedures, the thinking goes, for reporting misconduct so that whistleblowers will come forward.

Stronger fines, mandatory incarceration, revocation of research licenses, and more severe criminal penalties for convictions of misconduct are also promoted as a means to help deter it. Two separate legal studies appearing in the Thomas Cooley Law Review and Michigan Journal of Law Reform have argued that criminal prosecution can have a prophylactic effect, preventing fraud and condemning the harsher forms of scientific misconduct (Goldberg 2003; Kuzma 1992). A parallel note from the Journal of Law, Medicine, and Ethics argues that “criminal sanctions for the most egregious cases [of scientific misconduct] might sufficiently raise the stakes to serve as a deterrent” (Redman and Caplan 2005, 347). Since stronger fines and penalties would threaten institutions with significant financial losses, it is argued that they could convince administrators to reform their research environments now to avoid punitive damages later. Criminalizing misconduct, the argument continues, could also encourage scientists to be more scrutinizing of their own research methods and analysis.

A Narrative of Structural Crisis

A third narrative suggests that instances of scientific misconduct represent a deeper problem that extends beyond the individual or institution to the practice of modern science itself. Under this narrative, no common value system exists for science or even for a particular institution. Instead, Mertonian and institutional norms may represent one aspect of science, but such values are historically situated accommodations to a particular set of circumstances rather than universal principles that transcend time and locale.

Such a narrative holds that there is no such thing as one scientific method or even “science.” In her work on the sociology of scientific knowledge, Karin Knorr-Cetina (1999) elucidates the concept of an “epistemic culture” to suggest that individual scientific laboratories develop distinct cultural, social, and technical stances. She concludes that the production of scientific knowledge is deeply influenced by practices of work, trust, modes of analysis, methods of interpretation, values, and institutional arrangements within each group of scientists.

Instead of one overarching scientific method, many different scientific methods exist, each influenced by an amalgam of local and social factors. Some research managers prioritize autonomy; others emphasize intense control. Some seek to create novel phenomena and the tools for studying them; others address established programs with accepted methods. Some rush to publish and claim priority; others delay and build to advantage. Some share techniques and materials; others keep secrets and risk censure. Some work daily in the laboratory to retain control of techniques, findings, and group members; others retreat to the office to write papers, reviews, and proposals. Some promote tightly focused research agendas; others allow agendas to evolve over time (Hackett 2005).

The narrative of structural crisis highlights several structural changes in the practice of science itself that may promote unethical behavior. First, research no longer matches its image. No theory, no matter how good, ever agrees with all the facts in its domain. Investigators must therefore nudge certain facts out of the picture, defuse them with an ad hoc hypothesis, or just plain ignore them (Broad 1981). Yet depictions of science in the media and scientific literature present a reconstruction of the research process. Ideas that in retrospect proved mistaken, badly designed experiments, and incorrect calculations are all omitted.

Scientists present research as if it had been carefully thought out, planned, and executed according to a neat and rigorous process. The scientific literature makes it virtually impossible to avoid this portrayal, as journals will rarely accept a more realistic account of what actually happened. No scientist publishes all the raw data. Such information must be processed, smoothed, massaged, reorganized, and then filtered before publication (Martin 1992). Stephen Jay Gould fondly referred to this process as “dimly perceived finagling” to make data more persuasive.

Second, universities over-emphasize publication and citation. The pressure to publish and win large grants at universities has increased considerably over the past three decades. Since promotion often hinges on the size of grants and the number of publications (rather than discovery or the quest for knowledge), scientists feel coerced into producing results (Martin 1992). Furthermore, citations to other research are frequently included in such publications not because the cited works have been read, or even influenced the researchers, but because a longer list of citations is believed to improve one’s chances of getting grants (Martin 1992). Two eminent medical researchers even went so far as to state that:

The process of citation … is now frequently nothing more than a kind of game, and a rather dirty game at that. In the most extreme form, the game seeks to hide the foundations of the work amidst a mass of trivial and irrelevant references, and seeks to establish the author’s own laboratory as the sole source of wisdom. More usually, it is just a question of omitting to mention references outside the author’s continent, or opposed to their point of view. (Manwell and Baker 1981)

In other words, researchers have become attuned to greatly increasing their publication output and to strategically using citations to enhance their own arguments while debasing those of their opponents.

Third, subordinates are often alienated during the production process. Much work completed by other people associated with scientific research is not typically acknowledged. It is considered inappropriate to acknowledge spouses, graduate students, typists, secretaries, librarians, laboratory assistants, and others not involved in “real science” (Martin 1992). To the extent that scientific research is done on large teams, individual scientists have come to feel more anonymous. One classic study of graduate education in the United States found that 36% of 2,331 recent Ph.D. recipients, 30% of 1,821 graduate faculty, and 41% of 79 graduate deans agreed with the statement that “major professors often exploit doctoral candidates” (Manwell and Baker 1981).

Fourth, competition has replaced cooperation. Faith in the American marketplace and capitalism has promoted the idea that competition will distribute research and the other rewards of science efficiently and appropriately (Martin 1992; Hackett 1990). Consequently, competitiveness and secrecy shroud many aspects of science: the peer review process, refereeing of manuscripts, evaluation of research grant proposals, and nomination of individuals for prizes and awards are all highly competitive but conducted in secret. Interviews with 180 British high energy physicists found that 60% practiced secrecy about their own research, partly for fear of theft of ideas and partly for fear of looking foolish (Manwell and Baker 1981).

Analogously, a national survey of biotechnology companies found that 82% require scientists to sign confidential disclosure agreements. Of those firms, 88% said that these agreements also applied to students. According to a different survey, 53% of scientists at Carnegie Mellon University had signed contracts that allowed companies to delay publication (Resnik 2007). And a recent survey of 1,849 geneticists found that 47% of respondents said that they had been denied at least one request for data, and 35% said that sharing of data had decreased in the last decade (Resnik 2007).

The result is an arena where the values behind scientific research may have slowly shifted to favor commercialization and profits over knowledge and ethical behavior. Many scientists have significant financial interests in their research, including stock, copyrights, and patents. The transition to a more profit-oriented university means that students now have twin responsibilities: they must research a profitable project and continue their education. Professors must play two roles, teacher and employer. Students have shifted onto research budgets, becoming employees rather than trainees or fellows, and the character of those budgets has also changed. An increasing proportion of research is now underwritten by industry, where principal investigators are held accountable to their sponsors for keeping research within the confines of the proposal and for producing and securing results (Sovacool 2005; Martin 1992).

This problem is particularly acute when scientists are employed by or receive research funds from companies or government bodies. Researchers receiving money from chemical companies do not draw attention to the dangers of pesticides. Physicists working on nuclear weapons do not stray outside their narrow task to talk about disarmament. Engineers working for automobile companies do not propose alternatives to the automobile (Martin 1992). Tobacco scientists do not lament the health effects of smoking, and climatologists sponsored by oil companies do not trumpet warnings about climate change and global warming. Caviar companies have intentionally suppressed scientific results related to artificial caviar to prevent a depression in prices, and electric utility companies have suppressed research on compact fluorescent light bulbs to keep electricity consumption growing (Black 2004; Saunders and Levine 2004). Those who sponsor research have economic and political imperatives that may predefine conclusions, and academic institutions may prioritize potentially lucrative research findings over ethical codes of conduct.

One recent study of 15 metropolitan universities showed that faculty in the natural sciences, basic biomedical sciences, and engineering earn on average between $10,000 and $30,000 more per year than colleagues in the humanities and social sciences (Resnik 2007). This is partly because many researchers supplement their income with financial arrangements related to their research, including ownership of stock, consulting contracts, honoraria, and royalties from patents. A national survey of 2,052 life scientists found that 28% received research support from industry (Resnik 2007). Another found that 34% of lead research authors had financial interests related to their research, 20% served on advisory boards of companies, and 22% were listed as inventors in a related patent or patent application (Resnik 2007). In yet another survey, 19.8% of life scientists reported that they had delayed or stopped publication in order to protect pending patents, negotiate license agreements, or resolve intellectual property disputes (Resnik 2007).

Hackett (2005) argues that these changes in the structure of science have done more than influence different institutions; they have altered the nature of scientific research itself. Hackett suggests that the scientific community is structured by seven different value axes (freedom and autonomy versus accountability; producing research versus educating students; local versus cosmopolitan orientation; quality versus quantity; specialization versus generalization; competition versus cooperation; efficiency versus effectiveness). The tensions among these values, Hackett notes, are not just scholarly concepts, nor are they inconsequential changes in the practice of science. Scientists risk becoming more ambivalent about promoting the public good as the rules guiding social conduct slowly erode. Scientists in marginal positions, for example, often feel only a weak connection between their performance and the meager rewards they receive. Scientists also risk becoming more alienated from their work, as a significant amount of laboratory research is disconnected from those receiving credit in scientific publications.

According to this narrative, scientific misconduct will be inevitable as long as the underlying values of science continue to prioritize publication, exploitation, and competition over discovery, full recognition, and cooperation. The causes of misconduct thus transcend any individual researcher or particular institution.

The solution, according to this narrative, is to make science more transparent and to educate the wider public about the interests and values that drive it. Aristotle is believed to have once said that “it is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible” (Barnes 1984). Aristotle’s admonition suggests that the process of scientific research should not be viewed as a monolithic phenomenon, but instead as a blend or cultural mix of many different interests, values, dynamics, and goals that endure but vary over time. Science is not just a personal calling, with scientists working diligently in the laboratory seeking the personal satisfaction of discovery, nor is it merely an institutionalized profession with career concerns and commercial values. It is both at once, full of contradictory forces tugging on its practitioners (Hackett 2005). The public must be made aware, then, that the actual practice of science is a messy business, one rife with fundamental tensions between opposing values.

Conclusion

The presence of at least three fundamentally different narratives concerning scientific misconduct suggests that it will remain a site of struggle and controversy for years to come. Each narrative depicts a different cause of, and remedy for, scientific misconduct (See Table 3).

Table 3 The three narratives of scientific misconduct

According to the first narrative, virtually nothing needs to be done to stop it—we must accept the inevitability of a small number of abuses that reflect a few bad individuals.

According to the second narrative, reform is warranted but it need only be institutional. We should strive to create more incentives for whistleblowers, and perhaps mandate harsher fines for misconduct, making sure the sanctions against misconduct will outweigh any institutional rewards.

However, according to the third narrative, a gap between the ideal and the practice of science is emerging. Science is no longer simple, and its community no longer subscribes to a common set of values. To blame only a few individual violators or institutions divides the scientific community into the guilty and the innocent, and heaps large amounts of contempt on the few singled out as violators. It thereby creates the illusion of solidarity among the scientific community, reaffirming its central virtue. And by isolating a few behaviors as corrupt, it stamps all others as blameless. In this way the interests of corporate and government patrons of science are less likely to come under attack.

Instead of treating the practice of science as orderly and homogenous, this narrative argues that it should perhaps be treated as a concept perennially in motion, a process continually under reconstruction by different practitioners and institutions. The underlying feature of scientific research might best be described as constructivist rather than essentialist: its meaning is not given but produced. We must recognize rather than obscure such complexity, and highlight that the meaning of science—and thus scientific misconduct—will vary across countries, social groups, and time.

Conceptualizing scientific culture as a set of constantly shifting values and interests avoids the trap of positing an idyllic past characterized by a set of “good” values, followed by the perhaps terrifying prospect of “bad” values. It is more likely that both sets of values have always existed and will continue to exist. Changes in the conditions and structure of science will determine which values are expressed or emphasized at a given time. According to this narrative, a more enduring solution to misconduct, if one is to be found, would be to change the way that science is currently practiced, to alter its fundamental structure. Here, the pattern of government and university support, the way that students and subordinates are treated, and the very way that certain values are emphasized in modern science must all be reformed. The logic of this narrative is that we can either continually lament the symptoms of misconduct or finally treat its underlying pathology.