Introduction

Recent years have seen an explosion of interest in research integrity and misconduct. Fraudulent research (falsification, fabrication and plagiarism, or FFP) and sloppy science (detrimental or questionable research practices, QRP) have been present since the inception of scientific knowledge production, along with concerns about such behaviours (Horbach and Halffman 2017; Mitcham 2003; National Academies of Sciences, Engineering, and Medicine 2017; Resnik 2003). But research integrity has now risen to prominence as a public and policy issue (Marres 2007), an issue that is articulated through media coverage of high-profile fraud (Franzen et al. 2007); policy debate about research integrity and how to maintain it (including the activities of the European Federation of Academies of Sciences and Humanities [ALLEA] in Europe (Footnote 1) and the Office of Research Integrity [ORI] in the US (Footnote 2)); the gradual rise of compulsory graduate training in responsible conduct of research (Todd et al. 2017); and increasing consideration of the topic in academic literature (Meriste et al. 2016). Funding patterns similarly suggest that research integrity is a problem to be solved: the European programme Horizon 2020 now regularly issues calls for projects that will investigate and support research integrity (Footnote 3).

This paper is concerned with how research integrity is being articulated, both within these activities and debates and, in particular, in more informal settings. The approach taken is similar to that of Horbach and Halffman (2017) in understanding discourse around research integrity as an empirical phenomenon in and of itself. While Horbach and Halffman treat the diverse ways that ‘research integrity’ and ‘misconduct’ are framed within different literatures, here the focus is on scientists’ discussions of integrity, good science, and misconduct in order to explore how the policy problem of integrity is negotiated within the working practices of everyday science. The research is grounded within traditions of qualitative social research and critical discourse analysis (Fairclough 2003; Law 2004). It is therefore less concerned with the precise details of practices of research (mis)conduct than with how these practices are talked about by scientists, and thereby builds on work such as that by De Vries et al. (2006) in looking at scientists’ informal conceptions of good and bad science. Because the focus is on discourse and not (necessarily) practice, terms such as ‘scientists’ talk’ and ‘interview talk’ are used to refer to the data (Footnote 4).

As a starting point, it is useful to place concerns about research integrity and misconduct in a wider frame than is generally used. Research that analyses integrity or misconduct often starts by noting the importance of research integrity to science, discussing the prevalence of misconduct and whether this is on the rise, framing particular practices as FFP or QRP, commenting on the difficulties of defining either integrity or misconduct, or noting an increase in training in responsible conduct of research (see, e.g., Fanelli et al. 2018; Godecharle et al. 2017; Olesen et al. 2017; Salwén 2015; Resnik 2003; Shaw and Satalkar 2018). Much of the focus is on the practices and learning of individual scientists and how these affect science as a whole (a focus that has been problematised; e.g. Penders et al. 2009). However, research integrity, as an issue, might also be placed in a much wider frame, one that is concerned with the soft governance of science through codes of conduct, ELSI activities, or ethical norms (Balmer et al. 2015; Kearnes and Wienroth 2011; Pickersgill 2012). Viewed through this lens, the rise of research integrity as a policy issue can be understood as one aspect of broader efforts to ‘responsibilise’ science, academics, and universities—efforts that are also expressed through the promotion of responsible research and innovation (RRI), the ‘impact’ agenda, or an audit culture that seeks to ensure productive and accountable research (Kearnes and Wienroth 2011; Saille 2015; Shore 2008; Strathern 2000).

Such a frame positions research integrity and misconduct as technologies of governance that align research with specific ideological agendas. The concluding discussion returns to this framework and uses the analysis to speak to it.

Scientists’ Characterisations of Research Integrity and Misconduct

How do scientists characterise or understand research integrity and misconduct? First, it seems clear that terms such as ‘research integrity’ or ‘misconduct’ are used loosely, and that actor terms—scientists’ ‘bottom-up’ notions of research integrity (Glerup et al. 2017)—are neither fixed nor precise. Characterisations of research integrity or good science are highly heterogeneous, varying between disciplines (Penders et al. 2009), national contexts (Horbach and Halffman 2017), industry and university settings (Godecharle et al. 2017), or simply individuals (Olesen et al. 2017; Shaw and Satalkar 2018). As Horbach and Halffman have argued, the “universalizing, essentialist language of ‘integrity’” (2017, 1462) glosses over the existence of profound differences in mundane understandings of robust science.

We also know that there are important, and apparently increasing, divergences between policy discussion of misconduct and scientists’ concerns and characterisations. De Vries et al. (2006), reporting on what “researchers [rather than policy actors such as the ORI] see as behaviors that hamper the production of trustworthy science” (p. 44), argue that what they call ‘normal misbehaviour’ was viewed as a far more significant threat than the more newsworthy practices of FFP. Mundane ethical infringements, such as treating postdocs poorly or ‘cleaning’ one’s data rather too enthusiastically, were represented both as vastly more prevalent and as a more insidious threat to science’s integrity than cases of fraud, which were presented as highly unusual. As a result, “[w]hen policymakers limit their concern to the prevention of infrequently occurring cases of FFP”, De Vries and colleagues write, “they overlook the many ways scientists compromise their work” (2006, 48). Horbach and Halffman (2017) also trace increasing divergences between scientific discussion of misconduct (as expressed in academic publications such as Nature) and policy literature. For the latter, ‘integrity’ is approached “from a repressive and norm-based perspective” (p. 1481)—largely in terms of how to sanction misbehaviours—while scientific texts tend to focus on authorship issues in particular and on a broadly construed, virtue-based notion of integrity in general. Looking at one particular codification effort, the Swedish Research Council’s definition of ‘misconduct’, Salwén (2015) similarly argued for the necessity of policy efforts that are aligned with the ‘ordinary language’ of scientists.

Other qualitative work has explored how scientists characterise either research integrity or research misconduct. One study of researchers based in Malaysia (Olesen et al. 2017) found an understanding of misconduct as dishonesty, and that plagiarism and authorship disputes were depicted as common forms of misbehaviour; many scholars did not, however, bother reporting such infringements because of the time, effort, and possible repercussions involved (see also McIntosh et al. 2017). Conversely, research integrity is often tied to qualities of honesty and objectivity (Godecharle et al. 2017; Meriste et al. 2016; Shaw and Satalkar 2018). As noted above, there is much less agreement when it comes to the details of what integrity looks like in practice. A study involving 22 research managers and senior scientists from universities, spin-off companies, and multinational drug companies found that “[i]dentical actions were judged heterogeneously between universities, spin-offs, and international companies, and sometimes even within one company” (Godecharle et al. 2017, 5). There is also evidence that research integrity is regarded not only as honesty (and therefore as the ‘opposite’ of misconduct), but as a wider set of qualities or virtues. Integrity can, then, be in question even without overt misconduct (Horbach and Halffman 2017; Shaw and Satalkar 2018).

Many of these studies begin to point towards how the causes or triggers of misconduct are imagined. Pressure to publish, competition for research grants, hostile working environments, or other structural pressures are frequently mentioned, as well as references to the character of deviant scientists (De Vries et al. 2006; Olesen et al. 2017; Shaw and Satalkar 2018). Here there is an important point of contrast with policy discourse. As Penders notes (in discussing a recent volume analysing research practice), key work on research integrity makes it “a narrative of risky behaviour and mental pathologies present in individuals” (2017, 62). Though increasing attention is being directed at organisational cultures and how these enable or disable practices of (mis)conduct (Martinson et al. 2010; Wessels et al. 2015), much public discussion of research integrity reinforces what Resnik has characterised as the ‘bad apple theory’ of misconduct (2010), in which individual pressure or pathology is the key trigger for malpractice. A final, related finding is that researchers seem largely unaware of policy moves to manage research integrity and misconduct. Godecharle et al. (2017) found a profound ignorance of documents and procedures around research integrity in an interview study with Belgian researchers based in business, spin-outs, and universities. “Despite the stated importance [by interviewees] of awareness [of misconduct guidelines]”, they write, “all of the interviewees but one were unfamiliar with the national Belgian guideline on research integrity” (p. 8). Research exploring RRI and Codes of Conduct for nanotechnology has identified similarly low levels of knowledge about such initiatives (Glerup et al. 2017; Kjølberg and Strand 2011).

These studies begin to suggest how scientists understand research integrity, and how they imagine the causes and nature of misconduct. One limitation is that they tend to be explicitly framed around topics of integrity and misconduct, often directly asking, for instance, how informants would define integrity or what aspects of misconduct they are most concerned about (Olesen et al. 2017; Shaw and Satalkar 2018). The research is thus framed by language and terms of reference that are of concern to policy makers and common within academic debate, but which may or may not be native to (particular) scientific cultures (Glerup et al. 2017). Given an apparent gap between the meanings attributed to research integrity in policy, on the one hand, and science, on the other, there is a need for research that takes a more ethnographically-oriented approach, and which can explore the ‘folk theories’ of scientists (Rip 2006) concerning the nature of robust and ethical scientific practice.

Research Approach

This was the approach taken in this study. The research sought to explore scientists’ understandings of research integrity (broadly construed) and the relation of these understandings to policy debate about integrity and misconduct. It was carried out in Denmark, and one aim was to investigate whether scientists were familiar with the Danish Code of Conduct for Research Integrity (UFM 2014) and to explore their views on this and similar codes or guidelines. A total of 31 semi-structured interviews were carried out with natural scientists working as post-doctoral researchers, assistant professors, or associate professors in three Danish universities, with interviews (conducted in English) lasting between 1 and 2.5 hours. Eighteen interviewees were male and 13 were female. Informants were recruited through convenience and snowball sampling; as described below, one criterion was that they had experience of working in different national contexts. They worked across a range of natural science disciplines, including biology, physics, chemistry, and medical science (Footnote 5). Though there are differences in imaginations of good research practice even across natural sciences (Penders et al. 2009), these become more profound when humanities and social research disciplines are included; the decision was made, then, to focus on a cohort of natural scientists.

The interviews were designed to enable scientists to discuss their experiences of and views about good and bad scientific practice through the lens of international mobility. All had worked in multiple national contexts, either having trained (i.e., completed a PhD) in Denmark, worked abroad, and returned to Denmark, or having trained abroad and moved to Denmark at a postdoctoral or later phase. Many had worked in multiple national and regional contexts, and therefore had extensive experience of research in different locations. The interviews were structured around their experiences of working in different places, and interviewees were encouraged to talk about what mobility had meant for them both personally and scientifically. In discussing how mobility had affected their scientific work, the interview guide covered differences they had experienced in research practice in different places, differing assumptions (if any) about what comprised ‘good’ or ‘bad’ research, and whether they had encountered anything they viewed as misconduct or bad practice. The interviews closed by raising the topic of research integrity specifically. Interviewees were asked if they had heard of or used the Danish Code of Conduct, whether they knew of anything similar (for instance in other national contexts), and what their views of such codes were. The interviews were recorded and transcribed. Anonymised transcripts were coded using the program MaxQDA with a combination of theory-guided (related to the questions guiding the research) and in vivo (emergent from the data) codes. These codes were then collated and used to develop key analytical themes (Silverman 2001).

This article does not report on how interviewees talked about international mobility. Rather, the focus here is on aspects of the interviews that touch upon integrity, misconduct, and good and bad research practice—what will be called ‘integrity talk’. As noted, the approach taken is interpretative and critical (Fairclough 2003); the aim, therefore, is to understand actor terms and meanings related to robust, ethical science and its inverse, rather than the meanings attributed to phrases such as ‘research integrity’ specifically. Analysis of integrity talk indicated a set of themes that were present across the interviews: there were no key differences between researchers who had trained in Denmark or abroad, between men and women, or between disciplines. The data set can therefore be regarded as providing some insight into the discourses of research integrity and good scientific practice present within an internationalised culture of natural science.

The analytical sections that follow begin with a summary of how scientists responded to the Danish Code of Conduct and related activities. These responses—broadly, ignorance, indifference, and occasional hostility—require, as we will see, some explanation. The following sections therefore turn to look at integrity talk more generally in order to understand why these scientists might see codes of conduct and other policy initiatives as largely irrelevant to their work. These rationales are explicated in two sections, one concerning the nuanced nature of good science and the second exploring scientists’ talk about systemic and structural effects. A closing discussion section returns to looking at research integrity as an aspect of soft governance of science more generally. Throughout, anonymised quotes are used to represent themes that were widely prevalent; extracts should therefore be understood as illustrative rather than comprehensive. Given the complexity of the narratives interviewees gave, explicating their accounts is prioritised over the inclusion of multiple quotes demonstrating the same theme.

Knowledge and Views on Codes of Conduct

Of the 31 interviewees, three said that they knew about and had read the Danish Code of Conduct for Research Integrity. This document, released in 2014, aims to “support a common understanding and common culture of research integrity in Denmark” (UFM 2014, 4). Its principles and standards are based on international activities (including the code developed by ALLEA), and all eight Danish universities and the principal public and private research funders have subscribed to it. The major funding council for basic research, Independent Research Fund Denmark, explicitly asks funded projects “to live up to the principles of the Code of Conduct” (Footnote 6).

It was not the case, however, that the remaining 28 interviewees were totally unaware of policy activities to promote research integrity. While 11 did give a straightforward no in response to being asked whether they knew about the Code, others gave answers that might be characterised as ‘no, but’. No, but they had heard about the compulsory training PhD students had to take in responsible conduct of research. No, but they thought that perhaps they had checked a box about something like that when filling out a funding application. No, but they weren’t surprised that such a document existed. No, but maybe they had received an email about it and just not clicked on the link. In general, then, there was a sense that “you just feel there is somewhere this information”, to quote one interviewee, and that this information could be accessed if necessary. There was awareness of the generalities of moves to enhance research integrity, but little engagement with the specifics.

If most interviewees were unaware of the existence and principles of the Danish Code of Conduct, what were their responses to it and to similar codification projects? As the high number of ‘no, but’-type responses suggests, scientists were by and large indifferent to the existence of such codes, and were certainly not concerned that they weren’t familiar with the Danish Code in particular. They could be positive about the idea of codification in principle, as with this comment from Ulrik (Footnote 7):

I have no clue what this stuff [the Code] is, but I would say that it’s probably good that you have some kind of manual to state this is what you’re allowed to do. (Ulrik)

Similarly, some respondents said that it could be helpful to have a resource one could refer to if necessary, or to use in training students. More cynically, several interviewees viewed the use of codes as “ass covering” and “tokenistic”: they were seen as a concrete means of disciplining individuals in cases of dubious behaviour, or of signalling the university’s willingness to align with best practice (Footnote 8). (Indeed, some interviewees had stories of codes being used to justify poor behaviour—individuals had used them to ‘prove’ that their actions were within the letter of the law.) Others framed codes as unnecessary or even unhelpful, and at times reacted with outright hostility. In these cases they were at best something that “most people would roll their eyes at” (Iris) or that is “not going to change anything” (Lidia); at worst, they were seen as an insult to the character of scientists and to the ability of science to regulate itself. “I don’t need to be taught about scientific integrity”, said Jaap:

honestly, if people need to be told about scientific integrity there’s something wrong with them to begin with. We all know that you should, you know, work honestly. It’s not new, I don’t know what’s in here [the Code] but I, I can’t imagine that they would tell me anything new that’s- that’s important. (Jaap)

For Jaap, the use of external regulation or guidelines was a slur on scientific professionalism: being ‘honest’ was something “we all know”. Similarly, Caspar’s view was that “if you get to the point where this is necessary, then you’ve already lost”. Both saw codes as a (further) encroachment of policy and management into science. For Caspar, in particular, the Danish Code was just one example of his university’s passion for “counting” and attempting to quantify good scientific practice. Forcing people to read it, he said, could actually have negative consequences, by giving them ideas of how to cheat.

Few interviewees were as hostile as Caspar and Jaap to codifications of research integrity. But, to reiterate, very few were actively interested in them, or especially concerned about their own lack of knowledge. The Danish Code and its like were framed as largely irrelevant to scientific practice. And yet these researchers were not irresponsible, or cheerful cheats: indeed, they often had stories of research practices that they disapproved of or were concerned about. They cared about the production of robust knowledge, whether by themselves or others. The next two sections explore the integrity talk of the interviews further to outline why such researchers, who presented themselves as committed to responsible science, might see codifications of research integrity as irrelevant or unimportant.

The Nuances of Good Science

The Danish Code of Conduct for Research Integrity is, like many other codes (e.g. ALLEA 2017; European Commission 2008; see discussion in Meriste et al. 2016), principle-based. It begins by stating three key principles—honesty, transparency, and accountability—and then discusses how these are applied within six areas: research planning and conduct, data management, publication and communication, authorship, collaborative research, and conflicts of interest (UFM 2014). Inevitably, its advice is rather general. Indeed, the code itself notes in its preamble that “the applicability of the standards for responsible conduct of research may differ between various fields of research” (p. 5).

In contrast to the use of clear-cut and widely applicable principles, the scientists in this study continually emphasised how nuanced good scientific practice is. They spoke of the messiness and contingencies of research, and the demand for flexibility that this entailed. They were, generally, deeply reflexive and at times even anxious about their own practices, emphasising that it could be difficult to know how to behave with integrity. In this respect there are similarities with findings from De Vries et al. (2006), who argued that “work on the frontiers of knowledge” (p. 48) is subject to uncertainties that are made manifest in questionable practices and debate about these. In the research described here, scientists articulated an awareness of the nuances and ambiguities of scientific practice in a number of ways. Arguments tended to focus on practices that were specific to particular disciplines or areas of research. In the following sections, some illustrative cases of scientists talking about the nuances of scientific practice and their uncertainties as to what was good or bad behaviour are discussed; these are not exhaustive even within this data, but they start to demonstrate the complex ways in which scientists talked about their efforts to carry out robust and ethical science.

First, interviewees at times struggled with the fine lines between cheating, laziness (how far should you dig into your data?), and the learning that naturally occurs as your work in a field develops. At stake here are questions about, for instance, what it really means to apply principles such as honesty or transparency, or how one should behave with colleagues:

…it’s not like he took other ideas you know he didn’t steal ideas they were open. But you know, he would see what another lab was doing and rather than approaching it from a collaborative point of view he would approach it from a competitive point of view. So rather than saying hey, that looks great let’s work on this he would say hey, that looks great. (Quentin)

It’s not that things that were unethical were going on, and I don’t think there is necessarily a right balance between these things, so it’s also… I mean, how many control experiments, do you want to do? Because you can end up spending all your time making one little point, very, very well documented, then miss the bigger picture. (Niklas)

But maybe I need a code to tell me if I enhance this contrast [on a gel plot] in Adobe Photoshop, is that okay and how much contrast I can use? (Lisardo)

These behaviours—using ideas in a “competitive” rather than collaborative manner, trying to decide how many control experiments are appropriate, and choosing how to depict and enhance data visualisations—are not clearly defined as good or bad. In each case the speaker is ambivalent about what the correct practice should be. Quentin, for instance, is explicit that the colleague he was speaking about was definitely not ‘stealing’ ideas, but he still frowned on the attitude that this colleague had in seeking to compete rather than collaborate. Niklas similarly says that there is not “necessarily a right balance” in deciding how many controls ensure that your work is robust, while for Lisardo the only place a code of conduct might be useful is within fine grained decisions of precisely how to depict and analyse his data. That, he says, is where he needs to understand “is that okay?”.

Relatedly, interviewees often talked about the challenges of analysing and reporting research. Carsten notes that:

…whenever you present data you are also sort of interpreting a little bit, it’s very hard to present raw data. […] So you have to do a bit of interpretation. (Carsten)

As several interviewees said, science relies on ‘telling good stories’: it is always necessary to interpret data and to use it to construct an argument. How best to do this was frequently a topic of concern, to the extent that one researcher, Cecilie, talked about being “repulsed” by this aspect of science. She had been tempted to leave science:

because I felt it was very difficult to walk on that knife edge of having to present your stuff in a clear way and not start manufacturing wrong stories, because you have to decide on how you want to tell your story. If you don’t tell a story, you don’t get anywhere with the data. I’m not sure I’m right about this, but this is something I sometimes say, I’m in the entertainment industry or something like that, right. I hate that but we are to some degree because we’re paid to make interesting stories. (Cecilie)

Cecilie is clear that making “interesting stories” is simply how science works. You need to tell a story in order to “get anywhere with the data” and therefore produce meaningful results; indeed, this is what scientists are paid for. But she also feels that she walks on a “knife edge” as she decides how to present her work clearly without “manufacturing wrong stories”. Her discomfort is related to her sense that she is in the “entertainment industry”, always having to tell stories, even when she is not sure which are the right ones to tell. Again, it is important to note that this is something she is uncertain about. As she says, “I’m not sure I’m right about this”. As with the quotes above, then, her account of good practice in science is not black and white, but tentative and exploratory. The nature of responsible conduct is an open question—one that, in this case, is related to one’s own emotional responses and preferences (“I hate that”, says Cecilie, describing her response to the need to tell good stories).

A number of interviewees reflected even further on their own subjectivity in navigating between good and bad scientific practice, voicing anxieties about their own conduct. This was not related to intentional misconduct; rather, they expressed fears of unconscious bias or of becoming paralysed with uncertainty about whether their stories were ‘true’. Here Camilla talks about needing a senior colleague to be clear that they “believe this”:

I think the person that’s most critical of that data is often the person sitting with it. So, you also need your PI [principal investigator] to at some point say, I believe this, we can move forward with this, this is good stuff, because you always tend to see the weaknesses in your own experiments all the time. So again, it’s one of these… Like there’s always different aspects because there is not ultimate truth because biology is crazy complicated. (Camilla)

Others similarly spoke about becoming almost too thorough in their checks against publishing something that didn’t hold up, interrogating their experiments and arguments so carefully that, left to themselves, they would never publish anything at all. Good research conduct was related to being able to “move forward” even in situations where “there is not ultimate truth”. Camilla presents her research—she works with biochemical systems—as so complex and contingent that one can never wait to find a final or complete explanation. Later, she argued that part of the skill of research was being able to tell when something was “true enough”.

Again, these examples are not comprehensive but indicative of the ways that scientists in this study talked about good conduct of research. The overarching point is not the details of their accounts but the ways in which they represent the research process as complex and nuanced, the lines between good and bad practice finely drawn and at times impossible to distinguish, and the process of making these distinctions something that is personal and contingent. These researchers were often unsure as to the right course of action in a particular situation. Their everyday work involved a multitude of small decisions, any of which might, in retrospect, be questioned. It is perhaps not surprising, then, that codes of conduct, with their generalising principles, were seen as largely irrelevant to the practice of robust research. Often, responsible conduct was depicted as intuitive, personal, or tentative—the opposite of clearly defined principle-based codes. As one researcher, Niklas, said, the “real challenges” of carrying out robust research were “always much more specific” than could be captured by principles such as honesty or transparency.

Incidentally—and as an aside from the main argument—this emphasis on nuance and complexity also explains why scientists in this study made occasional recourse to the character of the researcher in discussing research integrity. David Resnik frames the ‘bad apple’ view of misconduct as one in which it is perpetrated by scholars who are “morally corrupt, economically desperate, or psychologically disturbed” (2010). In keeping with this view, interviewees talked about individuals’ “general honesty” or their “moral system” affecting whether they would cheat or engage in QRP or not. “If you’re an asshole”, said Quentin, “you’ll be an asshole in science and in society generally”. There was thus a sense that the complexities of science were such that one required a strong internal compass or personal attitude of integrity; science is protected, in some way, by scientists being good people and having good intentions. Knowing where to draw the line between good and bad practice was framed as so nuanced that it was impossible to legislate for through generalising codes or external regulations. Rather, as Lidia said, “it just depends on your personal integrity … you either have it or you don’t”.

An Ethics of the System

Interview talk is not necessarily consistent. Multiple and at times conflicting narratives are to be expected (Davies 2011; Fairclough 2003; Silverman 2001). In this data, at the same time as interviewees made mention of personal character, honesty, and personality as shaping how researchers would behave and where they would draw lines between good and bad practice, they also referred to the system of science as key to the prevalence of misconduct. Wider structural and contextual factors—‘publish or perish’, intense competition for research funds and positions, insecurity of employment—were viewed as triggers for bad practice. Beyond this, however, researchers also suggested that these structures were ethically problematic in and of themselves. Misconduct, bad science, and unethical practice were thus located not only at the individual level, in the form of ‘asshole’ scientists (to use Quentin’s term, quoted above) with poor integrity, but within the system of science itself. In important ways that system, as it is currently articulated, was viewed as fundamentally unjust.

There are therefore two dimensions to integrity talk relating to the wider context in which research is carried out. The first relates to the pressures scientists work under, and the way in which these can promote an environment where misconduct seems a viable or even inevitable option. This is Mattie, for instance, talking about what it’s like to be an assistant professor on soft money:

Yes, we’re under-funded, we’re time stressed. So when you’re having to set up a course for the first time, and supervise students and do experiments yourself and have a family life, and there’s just not enough hours in the day to do it all, and it’s just always this pressure of funding. Even if you have a permanent position, unless you pull in a grant, you’re not going to be able to support your postdoc, or get a PhD student. If you don’t get that funding then you don’t do the research, and then you won’t get the next funding, so there’s this vicious cycle of pressure. […] so in terms of having to produce results, if you can cut corners, and get the result and get the paper out faster- (Mattie)

To be clear, Mattie wasn’t suggesting that she had “cut corners” herself, though she had experienced the pressures she is describing. In fact she breaks off in order to tell the story of a current collaborator who desperately needed more publications if she was to keep her lab open, and who was putting pressure on Mattie to write up their joint work in a quicker and less thorough way than she, Mattie, was comfortable with. That collaborator was currently “pretty pissed off” with Mattie due to what she saw as unnecessary delays. Mattie had drawn different lines in terms of what she felt was an acceptable way of doing research, but she was also clear that she was deeply sympathetic to the scientist she was working with, and saw the pressure that scientist was under as almost unendurable (and certainly unfair). Mattie is explicit, in the quote above, that there is a connection between being “under-funded” and existing within a “vicious cycle of pressure”, on the one hand, and ‘cutting corners’ in order to “get the result and get the paper out faster”, on the other. She hasn’t been driven to that point herself, but she can understand it. Others similarly pointed out how much there is at stake, and how easy it could be to end up, perhaps even without noticing it, cutting corners or becoming biased. “It’s not only your career that’s at stake”, said Carsten, “you know, you have to bring money home to your family and all that stuff”. His view was that living under the pressure of trying to maintain a career in science could lead to questionable research even without your being aware of it:

It’s not because people want to cheat, right? I don’t think so at least. I think people are doing, er, you know, best as they can and presenting stuff as best as they can. It’s just again you have an incentive to publish, right? […] So you have this, you know, this bias, even though it’s probably not conscious it’s just a bias. (Carsten)

In this vision—an extension of the distrust of one’s own practices as discussed by Camilla above—the precarity of science may affect even the most well-intentioned researcher. If one’s livelihood is at stake, how can one be sure that uncertainty and the need to get results are not unconsciously shaping one’s interpretations of the research? Not everyone, however, thought that these pressures were experienced in such subtle ways. Others placed blame on the competitive nature of science, and the need for ‘splashy’ results, but told stories of out-and-out cheating as a response to this. “Cheating pays off”, said Ulrik, recounting how two former colleagues who he thought had fabricated data had ended up with tenure-track positions.

A second dimension of talk about the system of science more explicitly locates ethical problems within the structures of science as a whole. That is to say, if the quotes above explain misconduct and unethical behaviour as being triggered by the pressure scientists feel themselves under, at times interviewees framed those pressures as unethical in and of themselves. Ethics (and responsibility) moves from the level of the individual to that of the system. For Selena, for instance, the way in which scientific work is rewarded is unfair:

The system that we have that is all based on publications and publishing as fast you can, is not actually, you know, kind of fair. It’s not rewarding research integrity. It’s rewarding results that come as fast as peop- as you can produce them. So in that sense, I mean, research integrity is not the priority for the whole system. (Selena)

In an environment in which scientific merit is judged by “publications and publishing as fast as you can” there is no incentive, Selena notes, to value research integrity. The primary valuation device is “results that come as fast … as you can produce them”. This, Selena argues, is ‘not fair’. The system is compromised in that it rewards those who are lucky, or sloppy, or outright cheats, rather than those who prioritise the integrity of their results and arguments and thereby may not publish so much. Quentin makes this point even more bluntly:

[O]ne of the things that I’ve learnt is that I think the kind of the structure of research in modern times is really somewhere between broken and immoral. And there are far too many PhDs and not enough kind of long term stable postdoc positions […] when you’re a postdoc you get a position like six months, one year, two years at a time. And it’s a terrible way to establish a life. (Quentin)

Quentin has been talking about the competitive nature of modern research. The conclusion he has come to is that the system is “somewhere between broken and immoral”: broken because dysfunctional, immoral because of what it does to individuals’ lives. Having to take short-term contracts, potentially in multiple different countries or cities (he had spent postdoc time in three different national contexts), is disruptive exactly at the moment that junior researchers may be wanting to settle or start a family. Georg makes a related point when he discusses the need to be internationally mobile if one wants an academic career:

I think anyone that prioritises family earlier will be hindered by the mobility requirements […] Because of going through these long periods of uncertainty, years and years of uncertainty, before there’s a more long-term job waiting for you. I think that’s a significant problem with the current system. (Georg)

Again, the ethical problem is located at the level of the structures of science—its employment patterns, reward systems, use of human resources—rather than in the behaviours of individuals in response to these structures.

It is important to note that interviewees talked more readily about these systemic and structural issues than about misconduct at an individual level. Practices that they frowned on—whether or not they were explicitly framed as misconduct—were rarely discussed outside of the context of the pressures of making a career in science. This ethics of the system thus seems to be a key way in which scientists think about integrity problems in science. Their concerns are less about what individual scientists do—to repeat Carsten, quoted above, there was a sense that misconduct rarely happens “because people want to cheat”—than about the injustice, pressure, and unfairness that are depicted as central aspects of how science operates today. Policy activities that do not address these structural injustices, including codes of conduct that primarily seek to advise on or regulate the practices of individuals, would therefore seem to have little traction on researchers’ most immediate concerns about integrity and the ethics of research.

Governing Science Through Integrity, Ethics, and Responsibility

In discussing integrity talk from interviews with scientists the article thus far has made three points. First, researchers were almost entirely unfamiliar with the relevant national code of conduct for research integrity, and they were largely indifferent towards it and similar codes. Second, this is in part because interviewees represented science, and the challenges of good scientific practice, as nuanced and specific to individual situations (indeed, at times as specific to particular people), while codes of conduct rely on abstract principles and generalised guidelines. Third, these researchers were primarily concerned with what has been called an ethics of the system of science, rather than the activities of individuals. Their concerns about rights and wrongs within research focused on injustices and pressures at the level of the system; misconduct by individuals was seen as triggered by such pressure, or as a lesser ethical problem than the ‘unfairness’ or ‘immorality’ of the system as a whole. There therefore seems to be a fundamental disconnect between the content and emphasis of codes of conduct and related policy activities that seek to enhance research integrity, and the ways in which scientists understand integrity problems in science. In the former case the focus is on individual practices (and, to some extent, how organisations can support or alter such individual practices; Wessels et al. 2015); in the latter, injustice at the level of scientific recruitment, reward, and employment is the central issue at stake.

It is important to emphasise, again, that this analysis is of integrity talk. It cannot say how these narratives relate to behaviour: it is entirely possible, for instance, that researchers tell an idealised story of scientific practice that they do not themselves live up to. It is similarly not surprising to find multiple explanations of misconduct (personal character, unconscious bias, systemic pressures); diverse explanations are likely to be mobilised depending on the immediate context of talk (Davies 2011). This study reveals ways of talking about research integrity and good science current within natural science, offering insight into the ‘ordinary language’ (Salwén 2015) of scientists about (mis)conduct, and thereby a language or set of ideas through which policy documents might come to have greater traction on scientific cultures.

This closing discussion returns to the notion that the rise of policy concern with misconduct and research integrity is not unique but is related to, for instance, the use of (bio)ethics to informally regulate scientific practice (Pickersgill 2012), ‘responsibility’ as a framework for research and innovation (Kearnes and Rip 2009), or the ELSI agenda (in which social or humanistic research on ‘ethical, legal and social implications’ is integrated into natural science; Balmer et al. 2015). All of these efforts utilise codes, guidelines, or social or humanistic expertise as a means of attempting to direct science. Most pertinently, the use of responsible research and innovation (RRI) as a central framework for European research funding can be seen as an effort to modulate the process and outcomes of science by calling on scientists to alter how they carry out research (Saille 2015; von Schomberg 2013). Significantly, RRI has also been criticised as focusing on individual agency in a context in which agency is, in practice, often limited (Spruit et al. 2016). The argument here is that these developments should be viewed as interconnected. Research integrity and misconduct have risen to prominence not just because of a number of high-profile fraud cases, or because misconduct is on the rise, but because they are somehow indicative of a policy moment in which intervention into research practice is central to imaginations of how science is funded and justified (Hartley et al. 2018; Saille 2015) (Footnote 9).

How can we understand the nature of this policy moment? There is now a large literature on the changes that universities and research are undergoing and in particular on how the language of ‘responsibility’ is becoming integral to academic practice (see, e.g., Amsler and Shore 2017; Ball 2012; Glerup et al. 2017). A central argument is that researchers have over the last two decades been “re-formed as … neoliberal academic subject[s]” (Ball 2012, 17). As such they are subject to a process of responsibilisation, in which risks that were once shared have become personal and individualised. Put crudely, individuals:

become responsibilised when they are internally persuaded that social risks such as illness, unemployment, poverty, and lack of education or job training or career progression are problems whose solutions are the personal responsibility of the individual subject, not something the state is responsible for remedying by creating better conditions or support. (Amsler & Shore 2017, 125)

The dynamic is thus one of a shift of responsibility from collective systems to individuals. In the context of academia, it is increasingly researchers themselves, not universities or research communities, who are responsible for career progression and the production of excellent science: they are ‘entrepreneurial selves’ within ‘entrepreneurial universities’ (Hakala 2009; Müller 2014). Many of the initiatives described above—RRI, bioethical frameworks, ELSI, codes of conduct—can be understood as technologies of this responsibilisation. They direct agency to scientists, asking them to take on new forms of responsibility for the process and outcomes of their work. Hence, with regard to research integrity, the use of soft law devices such as codes of conduct which ask researchers to monitor, evaluate, and improve their practices (Kearnes and Rip 2009). Integrity and responsibility for ensuring robust research are, within such devices, framed as situated at the level of individuals.

The concern (here) is not with whether these developments in scientific governance are good or bad. Rather, they are relevant because the empirical findings described can be understood as a form of resistance to or rejection of this insistence on responsibility for the ethical conduct of science as being primarily located within individuals. In drawing attention to the ethics of the system of science, interviewees tacitly reject a model of research integrity that depicts it first and foremost as about the behaviour of scientists. They thus resist trends of responsibilisation and suggest, through their critiques of the current scientific system, a need for more dispersed ways of ensuring justice, ethical conduct, and good science. In this view, responsibility for research integrity is not only (or even primarily) located in individual researchers, but in the system of science as a whole. Such an argument is the reverse of the responsibilisation Amsler and Shore (2017, quoted above) describe: it directs responsibility, accountability, and blame away from individual scientists who engage in FFP or QRP, and towards the structures and systems of science as a whole. It implies, contra much discussion of misconduct, that integrity is properly located within a system rather than in specific, individual practices.

Conclusion

Exploring the talk of scientists about integrity, misconduct, and ethical science has thus led to the question of where these qualities are to be found within science. Are they primarily about micro-practices, in the sense of decisions about authorship, how data is treated by a lab group, or how many controls one should run (for example)? This is the approach taken by policy initiatives such as the Danish Code of Conduct for Research Integrity, which seek to guide the behaviours of scientists and to encourage institutions to teach and monitor those behaviours. Or are they distributed, relating to norms within the system of science such as employment conditions, the kinds of scientific work that are valued, or funding that enables a taken-for-granted oversupply of qualified PhD graduates? Though interviewees in this study did not ignore the significance of individual behaviour, such behaviour was viewed in the light of this latter vision, in which ethics and (in)justice are articulated within the system as a whole.

One response to this argument might be that these researchers are confusing two separate issues. Precarity in scientific careers, for instance, has received some policy attention (DNRF 2015; Nature 2017), but is framed in very different terms to discussion of research integrity and misconduct, being understood not as an ethical issue but as relating to labour market dynamics, the training PhD students receive, or career expectations (Nature 2017). But to say that scientists are missing the point is in itself to miss the point. It is important that policy initiatives—whether about research integrity or wider systemic issues—are couched in terms that are meaningful to their users, and that they treat the issues that are viewed as of most concern (Horbach and Halffman 2017; Salwén 2015). Discussion of research integrity that locates it solely in the behaviours of individuals, and makes no effort to incorporate or reflect on wider injustices in the system of science, runs the risk of being ignored by the very researchers it is directed at.