Old ideas give way slowly; for they are more than abstract logical forms and categories. They are habits, predispositions, deeply engrained attitudes… – Dewey, 1910

In a changing world, old habits… need modification, no matter how good they have been… Efficiency in following a beaten path has then to be converted into breaking a new road through strange lands – Dewey, 1922

The task, however, is enormous enough, for it involves not simply breaking down passive barriers such as those of distance in space and time and vernacular, but those fixed attitudes of custom and status in which our selves are embedded. Any self is a social self, but it is restricted to the group whose roles it assumes – Mead, 1925

In a recent article, we described and advocated for new ways to understand the ethical dimensions of quantitative research practices (Zyphur and Pierides 2017). One of our positions was that in order to take ethics seriously, researchers need to understand scientific practices as value-laden rather than value-free. We were challenging the idea that quantitative researchers should only discuss ethics in the narrow terms given by normative ethics (e.g., utilitarianism, virtue ethics). Our aim was to recast the discussion about quantitative research and ethics to include the variety of ways in which these are interrelated, by offering a more expansive view of ethics and the ethical dilemmas associated with quantitative research methods. In particular, we wanted researchers to think about everything they do as ethics-laden by considering the purpose and consequences of their actions. This includes how variables are defined, decisions about which analytic strategy to use, what counts as observation or measurement, and, importantly, the kind of people who do quantitative research and the purposes it serves. On this basis, we focused on how researchers could understand and deal with the inherently normative dimensions of their work. In response, Cortina (2019), Edwards (2019), and Powell (2019) offered a range of views on our article.

Powell (2019) supports our effort and agrees with our philosophical orientation but wants us to say more about the value of the pragmatist philosophy that underpins our article (see Zyphur et al. 2015), fearing that a failure to clarify this could lead to our pragmatism being mistaken for something else (e.g., social constructionism). Cortina (2019) and Edwards (2019), quantitative researchers who write in defense of positivism, misread our views as unempirical or subjectivist and consequently fear that our proposal may lead to an ‘anything goes’ approach to research. We respond by providing all quantitative researchers with a justification for abandoning such a positivist position, and we advance a pragmatist alternative that they can use to address the relationship between values and action more coherently and more meaningfully. We continue to maintain that when researchers use quantitative methods ethically, any foundations they deploy should stand up to empirical scrutiny in the form of an inquiry into their uses and practical effects.

Two insights guide us. First, quantitative researchers have an empirical orientation that values the testing of beliefs or logical propositions against experience. Empirical evidence is thus valued in a way that requires positivists to take empirical critique seriously, even critique that rebuts positivism itself. This insight leads us to consider scientific practices on empirical terms, with which quantitative researchers must engage if they want to remain faithful to their own values. Our account shows that the positivism which Cortina and Edwards espouse is based on weak or unstated theories that are rebutted by substantial empirical research (e.g., see Latour and Woolgar 1986; Shapin and Schaffer 1985; Shapin 1995; also Hackett et al. 2008), including from organization science (e.g., Farjoun et al. 2015). As we point out, their beliefs and propositions are never subjected to the standards of theoretical rigor and empirical verification that they openly advocate for others; indeed, their positivist doctrines cannot withstand even weak standards of empirical evidence.

As their positivism is not based on empirical observation, it is a kind of ‘positivist dogma’ committed to an abstract reality that never exists in practice. Quantitative researchers who continue to adhere to this dogma become unscientific by failing to acknowledge, let alone see, how they are active participants in the production of what they abstractly call ‘reality.’ Second, researchers who adhere to this dogma will only be able to understand how values are embedded in scientific practices if they can break their habits of conceptually separating facts from values, and logic from ethics. These habits have developed over hundreds of years, the product of a rich history that leads adherents of the positivist dogma to unknowingly separate ethics from, and subordinate it to, a logic of statistics and probability (for an historical analysis from the 1600s to the present day, see Zyphur and Pierides 2019). This logic-then-ethics priority pervades papers and editorials in business and management journals. If researchers continue to unwittingly reproduce these habits, then an inquiry into ethics is not possible, and positivists will remain stuck basing their practices on a set of propositions or ideas that are unscientific on their own terms.

Rigorous theory and empirical findings that profoundly rebut the positivist dogma do exist (e.g., Kuhn 1961, 1970a, b), but these seem to be misunderstood or dismissed as ‘constructivist’ by positivists—perhaps because they are still unfamiliar with an established agenda for the study of business, ethics, and society that is critical of such dichotomies (Freeman and Gilbert 1992). Similarly, within management and organization theory, existing views critical of positivism often come from outside the quantitative research community and get little traction in it (e.g., Adler et al. 2007; Alvesson 2003; Alvesson and Sandberg 2011; Calás and Smircich 1999; Newton et al. 2011), perhaps because positivists think that these views are inadequately scientific (e.g., Donaldson 2005; Hunt 2005; McKelvey 1999).

Whatever the case, as we show in this paper, the positivist dogma is unscientific on its own terms and prevents meaningful inquiry into ethics because its adherents fail to see how they help to actively produce what they abstractly, unscientifically, and uncritically call ‘reality’ (as if this simplistic utterance were a reasonable way to finish a debate rather than an indication of the need for scientific inquiry and discussion about what people are doing when they speak this way). Thus, our difficult task is to critique positivism from inside its own logic, while offering an alternative that will lead to scientifically rigorous ways of understanding research and enable a new focus on ethics. This is difficult because positivism—like dogmatism, but unlike an experimental science that actively inquires into its own production and the effects of its implementations—is self-reinforcing: it sets the terms for evaluating empirical evidence rather than allowing itself to be subjected to empirical critique.

The positivism we examine is therefore deeply troubling because: (1) it fails to provide adequate terms for quantitative researchers to engage with ethics, other than to treat it as an afterthought of logic (for details, see Zyphur and Pierides 2019); and (2) it inhibits quantitative researchers from inquiring about the conditions that produce this failure. To illustrate the problems with this type of positivism, we begin by describing the foundations for the science advocated by Cortina and Edwards, which allows us to then link the commitments they expect of quantitative researchers to various impasses that these foundations cause—by an ‘impasse’ we mean problems caused by positivist foundations that cannot be understood or addressed on the terms of the foundations themselves. After our critique, and to provide terms that will allow quantitative researchers to engage with ethics, we begin with a simple and hopefully obvious assertion that quantitative research is a kind of work done by people.

Although some quantitative researchers may not endorse all of our conclusions, our notion of research as work should be an intuitive place for all organization scholars to start. If work is an important topic (Barley 1996, 2001; Okhuysen et al. 2013), then the work that researchers do is by definition within the purview of that scholarship. By treating research as work, we hope positivists can view their thoughts, discourse, and practices as only some of many possible ways of working in the world, and abandon feelings of obligation to mimic or draw contrast with the ways they think scientific work is done elsewhere—especially caricatures of physics or other ‘hard sciences.’

To encourage a rigorous yet practical science that takes ethics more seriously, we offer a classical pragmatist approach to inquiry that underpins our thinking—most influenced by John Dewey and George Herbert Mead. Our approach is interested in collectively and scientifically addressing matters of concern (as in Koffman 2018; Latour 2004). To be crystal clear, it is incorrect to read a realist–subjectivist dualism into this approach, as Edwards (2019) and Cortina (2019) did with our initial article (i.e., Zyphur and Pierides 2017). Classical pragmatism is staunchly scientific, emphasizing that scientific methods should serve as tools to guide practical action, with ethical ends in view. Indeed, pragmatism is more rigorous than positivism in this sense because it emphasizes that foundational commitments should be more than simply matters of belief; they should be empirically evaluated through inquiry (Dewey 1922, 1929; James 1898, 1907; Mead 1899, 1929).

Building on pragmatist organizational scholarship (e.g., Elkjaer and Simpson 2011; Kelemen and Rumens 2013; Lorino 2018; Freeman 2004; Simpson 2009; Wicks and Freeman 1998), our pragmatist approach fosters an interest in ethics, which cannot be separated from action (Ezzamel and Willmott 2014). The approach we are developing emphasizes situated practical action, which engages with ethics via an interest in doing and studying research as a kind of work. We conclude by calling for broader inquiry into quantitative research in order to better understand how it is done, who does it, and how to defend it—but without the unscientific positivist baggage that cannot be defended without becoming dogmatic. This, we hope, will lead to a variety of beneficial outcomes, including more relevant research and researchers.

Problems with Positivism

As paradigmatic cases of the type of positivism with which we take issue, we start by discussing papers by Edwards (2011) and Cortina and Landis (2011), which are exemplars of the kind of positivist dogma that causes various problems yet is uncritically voiced in high-impact journals such as Organizational Research Methods. We show how their work is based on beliefs and commitments that are not justified by the kind of rigorous scientific theory and empirical observations the authors themselves endorse (e.g., Edwards and Berry 2010). In so doing, we continue our dialogue with these authors while concluding with an encouraging note for any researchers who might want to use our work as a reference point for abandoning the brand of positivism that these authors espouse—to replace it with the more rigorous and ethics-oriented pragmatist quantitative research that we further develop in this paper.

To begin, Edwards (2011) critiques ‘formative’ measurement and statistical models by appealing to a critical realism that favors their ‘reflective’ analogues. For him, “reflective measurement[s] are consistent with a critical realist ontology,” but “formative measurement [such as measures of socio-economic status] signifies an ontology… that could be characterized as constructivist, operationalist, or instrumentalist rather than realist” (p. 13). Edwards’s basic point is that psychometric concepts mapped to his ontology are consistent with reflective measures, and that these are therefore superior to formative measures, which seem constructivist. However, in arguing that his ideas and methods are superior, Edwards offers no testable theory or empirical findings beyond his own and his community’s insistence on positivist doctrines. If science requires testable theories and empirical evidence, his position is unscientific. Furthermore, if his positivism is not accepted by the reader, his arguments are unpersuasive and, instead, merely seek to constrain research practices to specific types of ‘measurement’ in order to satisfy his type of positivism.
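To make the distinction concrete, consider a minimal sketch in standard latent-variable notation (our illustration of the textbook distinction, not Edwards’s own formulation). In a reflective model each observed indicator is an effect of the construct, whereas in a formative model the construct is a composite of its indicators:

  reflective: x_j = λ_j η + ε_j, for indicators j = 1, …, J (the latent construct η is modeled as causing its indicators)

  formative: η = γ_1 x_1 + ⋯ + γ_K x_K + ζ (the construct is a weighted composite of its indicators, as when socio-economic status is built from income, education, and occupation)

Nothing in these equations settles any ontological dispute; they simply make visible that the choice between them is a modeling decision made by researchers at work.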

Edwards’s difficulties are illustrated when he notes that “[t]he entities that constructs describe are real in the sense that they have the capacity to influence one another, as explained by theoretical models… That is, constructs refer to entities that exist in the real world, independent of attempts by the researcher to measure them” (p. 11; see also Edwards 2019). This is neither a testable theory nor a claim that relies on empirical evidence. Instead, Edwards is merely defining the brand of positivism that he espouses, so in response to the question, “[w]hich perspective is more defensible?,” Edwards can only say, “[a]lthough opinions on this matter might differ… [an appeal to positivism] seems eminently reasonable.”

In these and other passages, Edwards implies that those who reject his positivism are unreasonable, but he never provides any rigorous theory or empirical evidence to substantiate this. Indeed, if Edwards’s view is reasonable, then this prompts a range of scientific questions that his type of positivism will have trouble addressing. For example, by what social and testable process did his beliefs become reasonable, and what does such a social process say about his positivism? If all of reality is at stake, why is ‘reasonableness’ a useful criterion for adjudicating such matters? Where do notions of reasonableness come from? Edwards answers none of these questions and therefore offers no compelling arguments, only recapitulations of his beliefs and the statistical practices that purportedly follow from them. The implication is that Edwards offers no scientifically valid reasons for his approach, and thus his critiques are not even grounded in the very science that he espouses.

Furthermore, Edwards offers no inquiry into the ethical implications of his proposals. For example, a paradigmatic case of a formative measure is socio-economic status (SES), which is relevant to research and social action bearing on questions of social justice. If Edwards’s recommendations are adopted, what becomes of this approach to understanding social class and inequality? Edwards says nothing on this point or on related issues that have a direct bearing on the ethics of the research practices he is recommending.

Next, consider the article by Cortina and Landis (2011), who describe quantitative research as an act of translation, leading them to recommend null hypothesis significance testing (NHST) over effect sizes, in part, because NHST “satisfies objectivity requirements of science” by being based on “externally determined criteria” (pp. 333–336). In brief, the authors note that both approaches involve arbitrary cut-offs, but that the former is better able to manage research practice by leading researchers to engage in a similar activity, with a key purpose being “[ruling] out chance as a viable explanation” for observed results (p. 336). Although the authors never define objectivity or chance, they imply that both can be controlled by using specific habitualized practices of a research community (i.e., NHST), which is perhaps why they want everyone to follow similar procedures (see also Landis and Cortina 2015).
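The communal character of these ‘externally determined criteria’ is easy to make concrete. A minimal sketch follows (our own illustration, not drawn from Cortina and Landis, with data invented so that the p value lands between two conventional thresholds); it shows that the verdict about ‘chance’ flips depending on which cut-off a community has sanctioned:

```python
# A minimal illustration (not from Cortina and Landis): the same data
# yield opposite verdicts about 'chance' under different community-
# sanctioned alpha cut-offs. The data are invented for this demo.
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.5]
group_b = [4.8, 5.0, 4.6, 5.2, 4.9, 4.7, 5.3, 5.0]

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-sample t test

for alpha in (0.05, 0.01):  # two conventional thresholds
    verdict = "rule out chance" if p_value < alpha else "retain chance"
    print(f"alpha = {alpha}: p = {p_value:.3f} -> {verdict}")
```

The cut-off does the adjudicating, and the cut-off is a communal convention; nothing in the data selects 0.05 over 0.01.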

However, the authors never offer a testable scientific theory or empirical support for how reliance on a community, and its habits, accomplishes the goal of objectivity and the elimination of chance, much less why keeping everyone engaged in common procedures is the best way for science to proceed. These points are crucial because if “we as a science choose our epistemology” (Cortina and Landis 2011, p. 345), then how can objectivity or chance be understood as anything other than reproductions of a community’s own concepts, adopted for its own uses? It is unsurprising that objectivity and chance can be controlled if a community develops its own understanding of these in relation to its own practices. A social scientific question is how this occurs, and an empirical question is the extent to which particular notions of objectivity and chance—and the practices associated with them—lead to useful ways of doing social scientific research and conceptualizing ethics. And what happens when unexpected events or ‘surprises’ occur—events that were previously unknowable from within a chosen epistemology?

Like Edwards (2011), Cortina and Landis (2011) offer no social scientific support for their recommended approach and, instead, merely recapitulate their core values. Along the way, they rely on the notion of a community to justify their position, but they do not seem to take the implications of this reliance seriously when recommending how to practice what is supposed to be an objective social science. In the end, the reader is left with a manifesto: “[e]nforce the law… [with] gatekeepers” (p. 347). What the authors fail to clarify is how this kind of policing relates to objectivity and chance other than as community-defined values, and why any one practice or notion of objectivity or chance should be chosen outside of its status as a kind of communal habit. The result is that the authors’ claims are not critically interrogated or subjected to the same requirements of rigorous social scientific theory and empirical inquiry that they themselves note should be the pillars of science. Furthermore, how their proposals might relate to ethical issues is nowhere to be found.

In sum, positivists who employ this type of reasoning avoid subjecting their own core beliefs and commitments to the rigorous theoretical and empirical interrogation they demand of others—indeed, we will show that many of their views are untestable under their own logic. For a community of scientists, the consequence is that many papers on quantitative research methods explicitly or implicitly espouse beliefs and practices that are unscientific on their own terms, making their work seem more like dogma than the kind of organization science they desire. In the next section, we unpack what purportedly grounds such positivism so that its foundations can be understood in relation to empirical findings.

Foundations and Impasses in Positivist Quantitative Research

Quantitative researchers typically emphasize the importance of concepts such as objectivity, validity, or bias that are defined in relation to true inferences—the Cortina (2019) and Edwards (2019) critiques of our prior work are replete with such ideas. The foundations for this logic involve a theory of knowledge (i.e., an epistemology) that has two main parts: (1) a theory of meaning whereby substantive theories, hypotheses, models, and/or data represent worldly states of affairs; and (2) a theory of truth or knowledge whereby these emerge if substantive theories, hypotheses, models, and/or data correspond to those states of affairs. This epistemology relies on a theory of ‘being’ or ‘reality’ (i.e., an ontology) that stipulates the existence of two fundamentally different kinds of things: (1) a singular reality that is the object of researchers’ study; and (2) researchers with minds and language that can represent this reality and test correspondence between it and its representations (see Hacking 1983; Rorty 1979). With this logic, practices such as hypothesis testing are meant to assess how well representations correspond to data that also serve to represent a singular reality.

These foundations help organize the practice of research, but they create impasses that have no clear solution for the positivists who uphold them. In general, the problem is that these foundations say nothing about their origins or how they relate to research practices and ethics in the communities that use them. For example, with the same data, a psychologist may purport to represent personality; in sociology, social structure; in economics, choice (as in Parmigiani and Howard-Grenville 2011). A ‘theory of measurement’ can describe differences here, but this only means that different researchers think the world is made of very different things, with no guidance on which approach to use. Also, to test correspondence of data and a theory, different and incommensurable tools can be used (Kuhn 1961, 1970b), such as regression or qualitative comparative analysis with different standards for correspondence (Fiss 2007). On the ethics and consequences of such decisions and versions of reality, the foundations are mute because they require that the meaning of data vis-à-vis conceptions of reality and the determination of correspondence are already settled and go unquestioned.

The impasse here is that a representation or a tool to test correspondence is only comparable to preexisting representations with preexisting tools—rather than any kind of abstract ‘reality’—illustrating a “stumbling block of empiricists in trying to account for science on an empirical basis” (Dewey 1929, p. 140). Powell (2019) is thus correct to draw our attention to an examination of “ourselves, asking if we are the right people for the job, working in the right places, carrying the right tools.” The foundations on which Cortina (2019) and Edwards (2019) rely are ahistorical and they ignore the ethical consequences of persisting with them; tautologically speaking, they say that things are this way because this is how they are, and nothing else. No insight is given into the practical implications of choosing among different, incommensurable representations or tools, and the vast ethical implications of these choices seem willfully ignored.

For example, statistical tools are often justified using ‘Monte Carlo’ procedures that simulate data to test how the tools work (e.g., Aguinis et al. 2009). Such procedures specify a fictitious world as a parameterized model that is used to generate data that are then used to assess the tools. This forms a closed loop in which researchers literally invent everything to recommend tools for representation and correspondence. In turn, outside of simulated worlds, how can researchers know whether they have “true” representations and accurate tests of correspondence, much less useful and ethical versions of them? Their methods dictate that their representations are evaluated against each other with methods they invent for themselves, so nowhere along the way is an abstract ‘reality’ to be found, just more representations and self-made methods. If statistical tools are used, the world appears to be statistical ‘in nature,’ if other tools are used then it appears in their image, and such method-made-images are all the positivists ever have. This is partly why it is unscientific for positivists to argue for the singular ‘reality’ of their constructs and methods, because if they believe in these then it is a foregone conclusion that the images produced will be correct in their eyes—this is a self-reinforcing belief and cannot be disconfirmed, because their practices produce images of reality, not the reverse.
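The closed loop is easy to see in outline. The following minimal sketch of the generic Monte Carlo procedure (our illustration, not any particular study’s code) shows how the researcher stipulates the ‘world’ as a parameterized model, generates data from it, and then declares the tool to work when it recovers the stipulated parameter:

```python
# A minimal sketch of the generic Monte Carlo loop (our illustration,
# not any particular study's code): the researcher invents the world,
# the data, and the success criterion, so the check never leaves the
# simulated world it began with.
import numpy as np

rng = np.random.default_rng(seed=1)
true_slope = 0.5            # the 'world' stipulated by the researcher
n_obs, n_reps = 100, 1000

estimates = []
for _ in range(n_reps):
    x = rng.normal(size=n_obs)                    # invented predictor
    y = true_slope * x + rng.normal(size=n_obs)   # invented outcome
    slope_hat = np.polyfit(x, y, deg=1)[0]        # the tool under test
    estimates.append(slope_hat)

# The tool 'works' because it recovers the parameter written down at
# the start; no reality outside the simulation is ever consulted.
print(f"mean estimate = {np.mean(estimates):.3f} (stipulated: {true_slope})")
```

At no point does anything outside the simulated world enter the evaluation, which is precisely the self-referential character described above.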

A key problem here, or perhaps a key solution, is that any observation can be made to fit a positivist’s way of representing and testing correspondence, because these are practices rather than something ‘foundational’ about ‘nature’ (Kuhn 1961, 2012). In the philosophy of science, this issue has been treated in relation to ‘auxiliary hypotheses,’ ‘webs of belief,’ ‘meaning variance,’ and observations being ‘theory laden’ (e.g., Feyerabend 1962, 1975; Hanson 1958; Knorr-Cetina 1981; Lakatos 1970; Quine 1969). However, the point is that, to represent and test correspondence, there is “no wholesale constraint derived from the nature of [scientific] objects” (Rorty 1982, p. 165). The foundations for positivist research struggle with such ambivalence because “the experimental method can only be applied where a reality which is not called into question sets the conditions to which any hypothetical solution must conform. The scientist puts a question to nature, and so far as the answer to that question is concerned, nature [itself] cannot be problematic” (Mead 1929, p. 78).

Unfortunately, this problem is evident in all three comments on our earlier article. By claiming that “reality [is] out there for us to study,” Cortina (2019) renders this reality abstract and disconnected from the process of scientific inquiry. Edwards (2019) argues that “although the methods used in QR arguably impact representations of reality, they do not create that reality itself, which exists independently of researchers,” whereas Powell (2019) argues that “removing or reforming traditional QR will not solve the problems of the human condition because these problems are not caused by a research method.” Though we agree with many of Powell’s points, here we think he joins Cortina and Edwards in failing to recognize the performativity of quantitative research and its relational character. As an example, many argue that the quantitative practices of economics provide justification for vast inequalities and the financialization of social institutions (see Chambost et al. 2018), a value-laden ‘reality’ that would take a different form under different approaches.

Even organizational research shows that nature is pluralistic in this way, with different researchers ascribing different realities to different kinds of things (Hassard and Wolfram Cox 2013; Morgan 2006; Morgan and Smircich 1980). Such plurality adds flexibility to research practices, but it also means that any notion of reality is inseparable from the activities that produce representations of it—in the example above, using the same data, psychologists can represent psyches, sociologists structures, and economists choices. The result is a pluralistic world, with inexhaustible ways of describing and using it. This is because representations are always descriptions produced by people at work, and researchers invoke these as practices for their own purposes. In turn, reality is “ontologically multiple” (see Law 2008, p. 637; Mol 2002), with the observed qualities of scientific objects (including subjects) tied to the productive acts of researchers at work—acts which change incommensurably over time and vary incommensurably across communities of researchers (Kuhn 1970b).

This observation should be taken seriously by anyone claiming to be a scientist, not least because it is supported by an avalanche of empirical research. Organization researchers note that “frameworks for interpreting experience in organizations are generally resistant to experience… disagreements over the meaning of history are possible, and different groups develop alternative stories that interpret the same experience quite differently” (Levitt and March 1988, p. 234). This leads to the inference that “[i]ndividuals are continuously committed to recreating the world in accordance with their own” (Nonaka 1994, p. 17). As in other forms of work, different researchers encounter different realities, because “[r]oles tell organization members how to reason about the problems and decisions that face them: where to look for appropriate and legitimate informational premises and goal (evaluative) premises, and what techniques to use in processing these premises” (Simon 1991, pp. 126–127).

Statistical practices are tools that bring different images of companies, markets, and societies into being (see work in Klein and Morgan 2001; see also MacKenzie 2006; Miller and O’Leary 1987; Morgan 1988; Poovey 1995; Porter 1986, 1991, 1992a, b, 1993, 1997; Power 2004). Adopting metrics and methods of standardization changes how these objects—always social and material—are experienced, creating copies of the realities that are adopted (see work in Howlett and Morgan 2011; Porter 2007). New types of people and objects come into being when new measures, classifications, and expertise emerge to make sense of people and objects in ways consistent with the same tools that are purported to merely represent them (Eyal 2013; Hacking 2002; Miller and Rose 2008; Rose 1989, 1998). The rather obvious, albeit ironic, implication is that the type of positivism we critique directs its researchers to avoid comparing their representations to the impartial reality they seek to uphold; they are comparing their representations to other representations using tools that they have created for themselves (Hacking 1992a, b). Embracing this type of positivism, researchers become unscientific by failing to see how they are active participants in the production of what they abstractly and unscientifically call ‘reality.’ Indeed, it is their commitment to their own abstractions that leads them to misunderstand that they are doing this.

Broadly speaking, practicing research with any approach creates the kinds of images and objects that researchers were looking for from the outset (Morgan 2006), even if the researchers engaging in the practice overlook this. Therefore, instead of Simon’s (1991) idea that a “change in representation implies change… in organizational knowledge and skills” (p. 133), we advocate being open to Shapin’s (1995) idea that “[s]hifting judgments are possibly best read as reliable reflections of shifting realities” (p. 291). This way of understanding science means that the foundations of positivist research only apply when a representation and tool for testing correspondence are already in place, and therefore they will often not be useful for scientific understandings of their own implementation. In turn, this underscores our focus on how quantitative research can be understood in light of the need for rigorous scientific theory and empirical research that is ethically oriented.

Our discussion now begins to address this question in a manner that is consistent with Powell’s (2019) discussion of Jamesian sub-worlds: different research communities have different practices for representing and assessing correspondence, and these practices have no necessary or singular link with their own purported foundations (Davidson 1973; Hacking 2002; Kuhn 1961). Although activities of representation and correspondence are debated and always in flux, the positivist doctrine we are critiquing fails to offer insight into this process or into ethics, because these things exist in the context of a research community’s activity rather than in abstractions such as ‘objectivity,’ ‘validity,’ ‘reality,’ and the like. Indeed, “the question of what ‘X’ refers to is a sociological matter, a question of how best to make sense of a community’s linguistic behavior,” as well as its other practices, all of which are functions of historical, social, and material contingency rather than of some singular reality underlying these foundations (Rorty 1982, p. xxiv). As such, quantitative research and positivist discourse seem less related to facts and more related to habits and values.

In sum, conceptually separating researchers from their notions of reality and its ethics creates impasses, the most important of which is a block on the possibility of scientifically inquiring into the separation itself because positivism is partly defined by it (even though this is empirically unjustified). Researchers are expected to accept a sort of tautological and sophomoric ‘it is what it is,’ while institutionally powerful actors rely on enforcing their own values via the control of academic knowledge production—consider that the editors of Organizational Research Methods, Cortina included, have all been trained in quantitative micro-level psychology (Aguinis et al. 2019), where positivists rule the roost. Is this what a pluralist social scientific community of organizational researchers should look like? Not only do the foundations of positivism fail to overcome this self-created and self-imposed impasse, they also fail to offer tools to help understand this problem. What might motivate positivists or ground practical action, if not the dogmatic assertion of the circular logic that we exposed above? Why do they maintain a deep commitment to abstractions rather than empirically looking into their own logic and ethics?

Our argument thus far suggests how these issues hinge on features of science that cannot be simplified with foundations. Instead, a discussion is needed of what is involved in research, and how research is directed—its purpose. Science has never been about lone scientists representing a singular reality—the simplistic, unhelpful figure offered by Cortina (2019) and Edwards (2019). Science is people engaged in social and material or, better, socio-material activities that emerge and carry meanings in relation to a community and its values—whether organization scientists, physicists, or others. We next articulate new opportunities for quantitative researchers who want to leave behind the impasses sustained by Edwards, Cortina, and other positivists.

From Positivist Dogma to Actual Social Scientific Inquiry

Quantitative research is work that is done by people (Shapin 2008). Like anyone else at work, researchers speak to each other, write, think, and physically act in ways that must be learned in material environments (Dewey 1922, 1929). This learning and the environments, discourse, and thoughts associated with it have evolved over time (Poovey 1998). This evolution and its results are profound, but they are also mundane because research is merely a part of the ongoing activity of people at work, with material environments and ways of working that are made by and for researchers themselves (Hacking 1992a, b, 2002; Shapin and Schaffer 1985). Such productions include abstract concepts such as ‘objectivity,’ ‘validity,’ ‘bias,’ ‘chance,’ or ‘reality,’ as well as physical objects such as academic buildings, faculty clubs, coffee shops, surveys, questionnaires, computers, statistical software, and the data and findings that are the products of this work. The result is that no part of the research process escapes the situated embodiment of the researcher at work, meaning there is no free-standing objectivity, reality, or chance, only different ways of speaking, writing, and otherwise working among other people and material things (Shapin 2010). There is never an abstract ‘reality’ that resembles this concept when it is used. There are always only particular acts of talking, thinking, and collaboratively producing quantitative research and its results.

Therefore, we challenge the positivist researchers we have cited (and those we have not) to explain things like objectivity, validity, bias, chance, or reality in ways that transcend the practices of researchers at work and the communal ways in which these concepts are defined and deployed in actual material practice. The empirical concern here is that positivists cannot do this without abstraction, because embodied practices in material environments are all that humans ever do—including thinking or ‘perception,’ which are active practices that must be learned (Dewey 1922, 1929). In turn, when it comes to generating representations or notions of objectivity, it does not seem that the metaphorical emperor has no clothes; it seems more accurate to say that the clothes have no emperor.

Therefore, although our conception of research as work should not be too contentious (see also Kilduff et al. 2011; Tsoukas and Knudsen 2003; Van Maanen 2011), it presents difficulties for positivists because it demands that they situate themselves as part of the ongoing production of their own concepts and representations as work—the irony, of course, being that positivists who claim to be empirical social scientists seem to lack both the social and the empirical. Crucially, understanding research as work inhibits attempts to reach beyond it via abstraction. If positivists are going to be empirical scientists—going so far as to belittle others who do not follow their ideology—then it follows that they would not want to avoid this process of inquiry or its consequences, lest they be forced to critique themselves.

In turn, it is unsurprising that reactions by positivists to observations and statements like ours often embody dismissiveness, if not outright hostility or retrogression (Cortina 2019; Edwards 2019). Generally, their reactions derive from a concern about the prospect of a science without the foundations we critiqued previously, as if theirs were the only way to organize research in a principled fashion (e.g., Edwards 2011). This reaction is understandable for researchers who have been trained to uphold their doctrine rather than engage in theorizing and empirical inquiry that includes themselves as active participants in the process. However, this reaction and its link to positivist foundations are out of touch with empirical observations, because replacing positivist foundations with a science of practical action and ethics will enhance scientific rigor and relevance rather than hinder them.

In what follows, we argue for this proposition by way of a profound reconstruction of quantitative inquiry. Taking a classical pragmatist approach to science (e.g., James 1898, 1907; Dewey 1922, 1929; Mead 1899, 1938), we eschew the idea that any foundations are indispensable except for scientific inquiry itself. Working with quantitative methods, any foundations researchers deploy should stand up to empirical scrutiny in the form of an investigation of their use. This approach emphasizes situated practical action, which engages with ethics via an interest in doing and studying research as work.

A Pragmatist Approach to Empirical Inquiry

In order to conduct scientific inquiry, we propose that researchers can follow a pragmatic method for testing theories and otherwise conducting research. This inquiry involves examining the practical effects of anything—theories, methods, hypotheses, or philosophical foundations—when put into practice (James 1898, 1907). The idea is that “the chief function of [research] is not to find out what difference ready-made formulae make, if true, but to arrive at and to clarify their meaning as programs of behavior” (Dewey 1916, pp. 312–313). The point is to continuously investigate the practical effects of different ways of talking, writing, and organizing human activity, so that the most contingently practical ways can be derived and then deployed where appropriate. Importantly, what is practical “may be aesthetic, or moral, or political, or religious in quality—anything you please. All that the theory requires is that they be in some way… acted upon” (Dewey 1916, p. 330).

With this approach, researchers can move beyond traditional tests of theory, on the terms of representation and correspondence, in order to test entire sets of foundationalist doctrines. This can be done by looking at what happens practically when researchers embody any logical scheme or other way of working as researchers. In this process, quantitative tools could be used, but a pragmatist approach can provide the necessary liberty for researchers who want to use any tools available to make a difference in the world but would otherwise have been constrained by positivist logic. Indeed, by focusing on the practical, a pragmatist way of doing science connects research to what matters to specific people doing specific things (Dewey 1916; James 1907), which should be central to any applied science.

To allay the fears of positivists, we propose here that researchers committed to empirical inquiry can appreciate a pragmatist approach because practical action requires rigorous empirical observation. Indeed, a pragmatist mantra is that foundations or theories that are contradicted by experience disrupt practical action (Dewey 1920, 1922; James 1907). As with any work, the theories of researchers can only be practical when they achieve desired ends, and this is difficult when theories cannot describe and predict experience.

For example, a quantitative researcher evaluating a theory will put it to work in a process that involves objects of study, measures, data, statistics software, results, and interpretations that are coupled with all of these, authoring papers that attempt to clean up and package the mess that is research work (Collins 1985; Shapin 1989; Star 1989; Star and Strauss 1999). In this context, research is practical when substantive theories can be grafted onto the complex web of activities and objects involved in research work—aptly described by Pickering as ‘the mangle’ (1995). From an empirical perspective, key components are the data and the objects of study that were inquired about in the course of research. When data and results fit neatly into a researcher’s other activities, the researcher solves the problem of asking and answering a question on terms that will be understood by colleagues and lead to publication. When viewed this way, it seems that the point of research is always practical in some limited respects, meant to predict and produce specific experiences for researchers. Yet, of course, this process works not because the methods represent reality in a singularly true way, but instead because the complex collection, assemblage, or perhaps network of actions, discourses, environments, tools, objects, and the like fit together—yet always temporarily.

Such a pragmatist view of research jibes with a philosophy of science that describes how theories endure when they are useful for predicting and organizing experimental data, allowing the manipulation of phenomena (Hanson 1958; Kuhn 1961, 1970a, b). When theories fail at this task, researchers adjust their theories or tools to fit the practical demands of their research activities. Indeed, it seems that empirical researchers have been pragmatists of a sort all along: researchers attempt to predict, explain, and control various aspects of their experience, and when they fail, they adapt their theories and methods, never being inexorably wedded to any specific one (Kuhn 1970b), perhaps apart from unnecessary ‘foundations.’

By understanding what is practical as what works in research and other contexts, researchers can be empiricists while easily abandoning the positivist foundations we have critiqued. Indeed, organization researchers can focus on what has proved to be empirical research’s most useful component: organizing research work in ways that are practical. Specifically, for a pragmatist form of research that tests hypotheses, “[t]he highest criterion that we shall present is that the hypothesis shall work into the complex of forces into which we introduce it” (Mead 1899, p. 369). This brings the entire enterprise of research from the lofty heights of abstraction down to the humble sites where actual research takes place.

To be clear, the approach we are describing does not abstractly specify the objects that different disciplines will inquire into, nor does it define a normative agenda for ethics through which practitioners of a discipline make decisions. In the absence of a disciplinary object (see du Gay and Vikkelsø 2017) and a pragmatist ethics (e.g., Dewey 1927, 1991; Dewey and Tufts 1932), a focus on ‘the practical’ alone can become the kind of ‘anything goes’ relativism that Cortina (2019) and Edwards (2019) fear, and that Powell (2019) warns against. This leaves us with two questions that quantitative researchers must answer when doing their work, lest they be methods fetishists or an ‘anything goes’ community. First, what is their object of inquiry? Second, what work can organization research do? There are no ready-made answers to these questions precisely because, if research is thought of as work, the work of answering them still needs to be done, and the answers will always need to be collectively agreed, ongoing, and contingent.

If positivists want nothing other than for their work to uphold a doctrine while speaking mostly to each other, then the positivism we have critiqued coupled with quantitative methods do seem like a practical way to proceed. However, countless empirical observations and a range of debates about relevance show that this approach is not practical for a wide variety of other purposes, including engaging with worldly problems in relevant ways that take ethics seriously. Indeed, the positivism we identify too often produces an overly technical and insular way of doing quantitative work, leading many researchers to decry how an “emphasis on technical rigor has shifted our focus away from the soul of relevance and the applied nature of our field” (George 2014, p. 1; among many others, see Amabile et al. 2001; Rynes et al. 2001).

A key part of this problem is that by focusing on foundational concepts such as objectivity or validity and technical logics that are supposed to achieve these (or guard against others such as chance or bias), researchers overlook the entire point of an applied science: to facilitate practical action that addresses problems of concern in ethical ways. A pragmatist approach makes this latter goal central to the practice of science, and therefore we propose that not only is pragmatism more defensible both theoretically and empirically, it is also better able to organize researchers around goals that should be central for an applied science. With a focus on practical action, a pragmatist approach engages with ethics in the manner which we introduced at the start of this paper.

Implications for Future Research

With our initial article (Zyphur and Pierides 2017), we argued that commonly employed formulaic scripts for doing quantitative research—such as ‘best practices’ or ‘rules of thumb’—divorce research from ethics. We then encouraged researchers to bring a concern with ethics into the core of scientific practices. As a means for achieving this, we proposed a new tool that allows quantitative researchers to connect the purposes of their research with a researcher’s orientation and practices via ethics. We called this ‘relational validity.’ The responses by Powell (2019), Edwards (2019), and Cortina (2019) to our initial article have allowed us to (1) further clarify how the logic of statistics and probability—on which positivists such as Edwards and Cortina rely—came to be historically divorced from ethics (see companion article, Zyphur and Pierides 2019); and (2) show how the positivist separation of facts and values—which leads Cortina and Edwards to misunderstand our work—fails to provide adequate empirical evidence to substantiate its own claims, let alone critique ours.

We had previously introduced relational validity as a tool but, as Powell (2019) suggests, we were not explicit enough about our philosophical background—what we meant by ‘purpose’ and how our pragmatism is an alternative to positivism, not merely the source of our critique. By making this explicit, we have been able to (3) provide further clarification about how we draw on classical pragmatism, thus ensuring that our discussion does not regress into the kinds of misunderstandings that are evident in Edwards (2019) and Cortina (2019). We have also (4) proposed that quantitative researchers can start to think about their research practices as a kind of work. As a result, our present paper allows all organizational scholars to join us in shifting the discussion about quantitative research away from the empty cries of positivism about a supposed abstract ‘reality’ (which may instead be seen as a set of ethics-laden practices, such as ‘measurement’ or statistical ‘estimation,’ with specific products) and towards empirically grounded scientific inquiry that focuses on the ethics of actual people and such practices. This conclusion has much broader implications for reconstructing quantitative research than the specific pragmatist approach that we have recommended above. Therefore, we conclude by pointing to avenues for future research that may, or may not, follow our specific approach.

Quantitative Research as Work

With the potential to ethically confront problems that matter for the human condition and our planet, the ‘black box’ of science (and philosophy) can be opened by investigating organizational research practices as a kind of work, as jobs done in relation to organizations (Latour 1998), and as a way of life (cf. du Gay 2015). Because any notion of reality enacted by researchers is tied to their activities and material environments, quantitative researchers can take the doing of work as a place to begin such inquiry. This move can also help organization researchers to study physics, philosophy, psychology, management, or any other activity as a form of organized work rather than feeling obligated to emulate the ways any of this work is done.

In this pursuit, quantitative organization researchers can also investigate scientific ideals such as truth or objectivity, which always exist as words that organize socio-material activity in specific environments. Known in this way, the abstract reality often proposed by the kind of positivism that we have critiqued quickly condenses into a sea of heterogeneous techniques, actions, values, discourses, relationships, and disorganizations, one that evaporates only when workers reach temporary agreements about what they will call true, objective, or factual in specific circumstances for specific purposes (Latour and Woolgar 1986).

To guide inquiry, Hacking (2009) notes that studying research can involve “on the one hand, the study of mental capacities, and on the other, the history of civilizations and of their institutions” (p. 36). Conveniently, many organizational researchers are already prepared to investigate such heterogeneity by extending their existing work on decision-making, creativity, and agreement versus conflict in teams (as in Shadish and Fuller 1994). Alternatively, for inspiration organizational researchers might turn to sociologists who have already pioneered the investigation of research work (e.g., Garfinkel et al. 1981; Gieryn 1983; Meyer et al. 1994; Meyer and Jepperson 2000), including the use of numbers at work (e.g., Michaud 2014; see also Desrosières 1998; Poovey 1998; Porter 1995; Shapin and Schaffer 1985). In all cases, it will be crucial for quantitative organization researchers to remain aware of how their own work activities serve as ways of enacting the kinds of realities that they study (Willmott 2011).

To assist in this process, it will be useful to develop stronger links with the interdisciplinary field of science studies, which is “wholly compatible with pragmatism” (Shapin 1995, p. 303) and has taken the lead in studying the work of researchers (e.g., Bloor 1991; Latour and Woolgar 1986; Shapin 2008; Star 1983, 1985). Among other things, science studies offers many theoretical and empirical treatments of the links between science and society. By seeing such links, we expect that quantitative organizational researchers will be better placed to understand what they do as a form of work, in turn allowing them to draw on knowledge from their own field to understand themselves (see Casler and du Gay 2019) and address worldly problems.

The Activities of Quantitative Research

Positivists whose work and logic we have scrutinized in this paper may be aided by recognizing how their goals are often oriented around seeking universals and timeless absolutes such as ‘objectivity’ while guarding against others such as ‘chance’ (as in Cortina and Landis 2011; Edwards 2011). Such concerns often limit quantitative researchers to studying phenomena in irrelevant ways by relying on simple mathematical models and requiring large sample sizes (McKelvey 2006). To tackle problems worth caring about, these researchers should end their battle with ‘chance’ and the obsession with abstractions. Instead, they may be better served by moving the focus from statistical estimation of something they think is abstractly ‘true,’ to a description or a statistical estimate that will be practical for motivating effective action in specific contexts. The implication is that, for researchers, the fight is not a skeptical one against chance or errors in inference regarding universal or abstract truths; the fight is a fallibilistic one against failures of action in specific material environments (Bernstein 2010; Martela 2015).

Quantitative organizational researchers can address this problem of action by looking at what kind of work needs to be done in a specific context and figuring out how quantitative tools may be recruited to help (Coghlan 2011; Van de Ven 2007). For this, researchers will need to embrace complexity, perhaps by treating all statistics as descriptive rather than inferential. This will require skill in the craft of understanding the context associated with acts of quantitative description in order to predict the results of putting the descriptions into action to address specific and local problems (see Cartwright and Hardie 2012; Reiss 2009). This empirical pursuit is important because “[c]onsequences reveal unexpected potentialities in our habits whenever these habits are exercised in a different environment from that in which they were formed. The assumption of a stably uniform environment (even the hankering for one) expresses a fiction due to attachment to old habits” (Dewey 1922, p. 51). For quantitative researchers interested in universals or absolutes, or estimating a ‘true’ parameter, a key message is that “[there] is no such thing as an environment in general. There are specific changing objects and events,” and these can be acted on in more or less practical ways to create outcomes about which researchers may care (Dewey 1922, p. 154).
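To make the descriptive stance tangible, here is a minimal sketch (entirely our own illustration, with invented numbers) of reporting a local result in locally meaningful units, without an inferential verdict about ‘chance’:

```python
# A minimal sketch of the descriptive stance (invented numbers): report
# what was observed, in units that matter locally, without attaching an
# inferential verdict about 'chance'.
import statistics

days_before = [12, 15, 11, 14, 13, 16, 12, 15]  # e.g., days to fill a vacancy
days_after = [10, 12, 9, 13, 11, 10, 12, 11]    # after a local process change

improvement = statistics.mean(days_before) - statistics.mean(days_after)
print(f"median before: {statistics.median(days_before)} days; "
      f"median after: {statistics.median(days_after)} days")
print(f"mean improvement observed here: {improvement:.1f} days")
# Whether a 2.5-day change matters, and for whom, is a practical and
# ethical judgment to be made in context, not a statistical verdict.
```

Whether such a description motivates effective action depends on the specific context in which it is put to work, which is precisely the point.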

There is no magic bullet for bringing an imputed future into the present, and any supposed certainty generated by statistics or a dogmatic adherence to foundations cannot substitute for local knowledge of what may result from actions and their ethics, which can only be undertaken collectively in the here-and-now. What quantitative researchers can do is avoid the unnecessary mandate to start with positivist foundations of the type we have critiqued. Instead, “in pragmatism the path… leads in the opposite direction, to reflection upon the methods of science, in order to elucidate [their] practical character” (Joas 1993, p. 256). This will help orient the quantitative research community towards what we propose is its central task:

the task of stating definitely to itself what the ends are for which [its scientific] means shall be used… The wealth of means to accomplish our ends is compelling us to ask ourselves the embarrassing question what those ends are. The old formulas are no longer adequate… Self-control of the whole community can only be attained by the intelligent comprehension of the issues before it, and the wealth of means… is setting that goal concretely before us. We are coming nearer than ever before to understanding what is involved in providing the community with the goods it needs for its life. In a word, science is enabling us to restate our ends by freeing us from slavery to the means and to traditional formulations of our ends. (Mead, 1938, p. 474).

In order to make this happen, a new educational program will need to be introduced into the core of technical training for quantitative research. The exact details will need to be developed and researched with an eye towards what is practical and ethical, but central to the curriculum will be an understanding of social scientific inquiry, empirical history, ethics, and the formation of individual and collective character. For this, we have been clear that we would encourage a return to classical pragmatist texts that embody this pursuit (Dewey 1927, 1991; Dewey and Tufts 1932), but we would also strongly encourage our readers to explore other approaches that focus on ethics (e.g., du Gay 2015; Ezzamel and Willmott 2014). As quantitative researchers begin to experiment through genuine inquiry into their own practices and the effects of those practices, individually and collectively, they will need to make many intelligent judgments. The kinds of people who are able to do this must be educated, not just technically trained, so that they can take their task more seriously than those who suppose their job is merely one of representing a singular abstract ‘reality’ or getting singularly ‘true’ statistical estimates by using procedures they have developed for themselves. Overcoming such simple-minded, historically ignorant, and quite frankly unscientific pursuits is a goal about which we can be hopeful for the future of science and the forms of expertise it allows. Indeed, we audaciously foresee a future wherein the researchers who bang the drum of positivism are outnumbered by those who have been waiting for the opportunity to do things better from the standpoint of ethics.