1 Introduction

Computer simulations have conventionally been understood either as extensions of formal methods such as mathematical models or as special cases of empirical practices such as experiments. Here, I argue that computer simulations are neither and are best understood as instruments. As instruments, computer simulations belong to a third—and often neglected—essential element of scientific inquiry: technical artifacts. Understanding them as such can better elucidate their actual role as well as their potential epistemic standing in relation to science in general and to particular scientific methods, practices or other devices. That is, by understanding computer simulations as instruments, we can better assess whether, when and how we can trust them as scientific instruments.Footnote 1 We can also more easily compare them and their sanctioning process to the normative trajectories that other instruments have had to undergo in order to belong to such an exclusive set of technically demanding artifacts as scientific instruments. Providing the initial groundwork for this conceptual framework on both the nature and epistemology of computer simulations is the main aim of this paper.

2 Technical Artifacts in Science

Before understanding computer simulations as instruments, it is important to understand that instruments in general, including scientific instruments, are not identical to the formal methods they manipulate. They are also not identical to, nor do they have identical properties to, the formal methods that constitute and underlie their construction. This also means that the epistemic warrants that underlie trust and reliance in the theoretical principles that enable the construction of an instrument are not identical to the epistemic warrants that underlie our reliance and trust in that instrument itself (Symons & Alvarado, 2019). One could trust and rely on the theoretical principles related to optical phenomena, for example, without necessarily trusting that a telescope built with reference to these theoretical principles is automatically a reliable one. This is evidence that the epistemic standing of one thing is not the same as, and is not always indicative of, the epistemic standing of another, in this case, instruments and the formal characteristics that underlie their construction or that constitute the content these instruments manipulate. The same can be said about the relationship between instruments and the experimental settings they are deployed in: an instrument’s properties and an instrument’s epistemic standing are not necessarily identical to the properties or epistemic status of the experiment in which the instrument is used. Scientific instruments belong to an extremely exclusive class of sophisticated man-made objects called technical artifacts, whose design, sanctioning process and deployment require distinct—i.e. superior—epistemic norms and practices to those of ordinary epistemic practices and methods.

Hence, instruments are distinct from both the formal and the experimental in their overall properties and in their epistemic standing. Scientific instruments even more so. If this is the case, then instruments need to be understood as constituting a distinct element of scientific inquiry with their own properties and their own epistemic standing (Baird, 2004). Our understanding of computer simulations, as members of this class of technical artifacts, needs to be modified accordingly. In this introductory section I offer an overview of the characteristic epistemic and ontological distinctiveness of instruments vis-à-vis other branches of scientific inquiry and lay out a preliminary working definition of the kinds of artifacts, namely devices, that qualify as such.

What is an instrument? While this is an important and relevant question, it is beyond the scope and aim of this paper to develop an ontology of objects that would properly capture instruments as a general category of objects that humans use.Footnote 2 However, for the purposes of this paper we can begin with a working definition (see footnote 1 above):

Instruments are technical artifacts—a materially instantiated object/process whose design is intentionally teleological (a means to an end or a means to a further means).Footnote 3, Footnote 4

While this is a maximally tolerant definition and may warrant future investigation, for now, we can use it to identify the kinds of devices that are used as aids in inquiry. This will become clearer in the following paragraphs.

An intuitive way to individuate an artifact is by its relation to some agential intentionality (Kroes, 2003; Symons, 2010).Footnote 5 That is, in order to know what an artifact is we can ask what the artifact was designed, developed and deployed for. While some objects are artifacts in virtue of having a desired property when encountered by an agent with a convenient purpose (e.g. a rock in the form of a soup bowl), other objects are artifacts because they are synthesized from resources and materials to have a desired property in virtue of their material affordances (hardness, softness, flexibility, etc.). The latter are the more definite artifacts for they are explicitly designed and constructed to have a sought-after property.Footnote 6 The former are ready-found useful objects: pseudo-artifacts (Kroes, 2003). While a rock can be identified in virtue of some of its material properties alone, the identification of artifacts requires us to include an intentional dimension in their description. Else, we risk not capturing what they are. There is a sense in which describing a corkscrew to a child as a pointy and twirly piece of metal without mentioning its intended use will leave the child clueless as to what the artifact really is. Hence, artifacts in general, but technical artifacts in particular, can only be individuated by appealing to their two ontologically distinct sets of properties: the functions for which they were intentionally designed and the physical elements that can instantiate these functions (Simon, 1996; Kroes, 2003, p. 69; Symons, 2010).

Instruments are technical artifacts and as such cannot be understood merely in terms similar to those with which we understand abstract objects such as theories, which are constituted by propositions and inferential principles. This is because instruments are not constituted of the same things as theories or principles (Baird, 2004, p. 4). Theories are a systematic and coherent collection of propositional knowledge. Instruments, on the other hand, are also functional instantiations of designs and processes. In virtue of not being only propositions or collections of propositions, instruments cannot be understood solely through the same articulations with which we understand theories. Hence, the life of an instrument, as such, requires its own epistemological account independent from the one for theory (Baird, 2004).Footnote 7

Furthermore, while all artifacts have an inherently designed function (i.e. they are made for something), some artifacts are more highly sophisticated and are more meticulously constructed than others. Some instruments in science, for example, are highly sophisticated technical artifacts. When they are well made, they are also constituted by highly theoretical and functional specifications (Symons & Alvarado, 2019) as well as by very specific material which is optimal for the execution of the functional specifications in light of theoretical requirements (Golinski, 1994). This constitutes a sort of conceptual independence of technical artifacts such as scientific instruments: they are highly sophisticated technical artifacts that are the product of sophisticated functional specifications and sophisticated material properties to carry out these functions. It is uncontroversial that theories are not constituted by the same things or in the same way and very much the same can be said about empirical practices such as experiments. Experiments require specifications and material conditions for their performance but they are not constituted of the same things as theories. A theory can be a theory without experiments and experiments can be conducted without a theory, even if we tend to (rightly) downgrade the epistemic status of such experiments. So, they are conceptually independent from each other. As we will see in sections below, however, experiments are also conceptually independent from the technical artifacts that enable their implementation. As stated in the introduction to this section, these technical artifacts are not made of the same things nor do they have to have the same functions as the experiments in which they are deployed.

Once they are made, these highly sophisticated technical artifacts are deployed into the laboratory setting where they are used to perform a role in the acquisition of knowledge. It is there that technical artifacts are yet again placed somewhere between theory and experiment: they may have been made with theoretical principles embedded in their construction and they may have been placed in the service of an experimental setting, but as they are made, they are not strictly members of theory or experimental practice; as they are deployed, they are not so either. Rather, they are a third kind of thing: the things with which principles are enacted and with which experiments can be carried out. As such, they are a third and essential branch of scientific inquiry and deserve to be investigated and accounted for as ontologically and epistemically independent from investigations into theory and experiment.Footnote 8

As we will see in detail below, instruments such as computer simulations also do different things in scientific inquiry from the things that a theory and an experiment do. The conceptual independence of instruments referred to above, together with the functions that these instruments carry out beyond those of the theoretical and experimental functions, is evidence that they are also epistemically independent. That is, the reasons and means by which we justify their reliability and our reliance on them are distinct and independent from the reasons and means by which we justify the reliability of and our reliance on the theoretical principles from which these instruments are derived or from the experimental settings in which they are deployed. More strongly put: instruments are a third—separate, distinct, and independent—source of knowledge in scientific inquiry (Baird, 2004). Because of this, the knowledge that we gather from them, and the fact that we can gather knowledge from them at all, must include epistemic warrants that do not arise from the warrants derived from theoretical principles alone or from otherwise sound empirical practice (Symons & Alvarado, 2019). We can have a very well-established theory and still deploy a flawed apparatus to test it. We can have a very well-designed experiment and still deploy a subpar device in its service. Similarly, we can have a very good instrument and deploy it in a misguided experimental setting.

The discussion above points to the ontological and epistemic distinctiveness of instruments. If this is so, when we understand computer simulations as such, we may find that their epistemic status must also be reassessed within new parameters, which differ from the parameters established by the theoretical underpinnings that inform their functionality and from the empirical warrants of the settings in which they are deployed. Furthermore, a few more things regarding the development and understanding of the role of computer simulations in science become clearer. For example, when we understand computer simulations as instruments, we can explain why they have often been understood by philosophers and practitioners as being in between theory and experiment (Humphreys, 2004) and why they are seen by some as special cases of either category. The relative neglect of the role of instruments in scientific inquiry (Van Helden, 1994; Baird, 2004) in the philosophy of science is in part to blame for this unfortunate false dichotomy and the ensuing limitations inherent to contemporary philosophical debates on computer simulations. As we will see in sections below, that computer simulations seem to be in between theory and experiment may have more to do with the fact that instruments in general are in between and not with computer simulations having a special status all on their own.

It is important to note at this point that being a technical artifact and a precision instrument itself may not qualify an instrument to be a scientific instrument. Many non-trivial considerations lie in between the design, development and deployment of a technical artifact and its sanctioning as a scientific instrument. Even paradigmatic examples of scientific instrumentation, such as the telescope, had to undergo extensive and heterogeneous trials before they were accepted by the scientific community. However, the first step in recognizing a scientific instrument as such and having it undergo the comparative and epistemological assessment that would qualify it implies recognizing it as an instrument to begin with. In order to know whether computer simulations can indeed qualify to be used in scientific inquiry and what their use and contribution is to scientific inquiry we must first establish their nature: what they are, what they do and how they do it. In this section I sought to clarify the distinctive nature and epistemic status of instruments in general. In the next section I aim to establish the groundwork to begin understanding computer simulations as belonging to this class of objects. Whether or not they are indeed scientific instruments and what kind of scientific instrument they may be is a question I address in the section after the next.

3 Computer Simulation as Distinct

In the previous section I established the distinct nature of instruments as technical artifacts, and of scientific instruments as a more exclusive subset of these, in relation to formal and theoretical constructs. I also differentiated scientific instruments from the empirical practices in which they are deployed. In this section, I offer a detailed account of computer simulations as also distinct from both the theoretical formalities that underlie their construction and the content they manipulate in their functioning, as well as from the experimental settings in which and for which they are used.Footnote 9 This distinction is then used as the basis to further suggest that their nature and their epistemic import can be best understood as those of an instrument.

A theoretical or abstract model of the kind conventionally used in science is a conceptual construct (Durán, 2018, 2019, 2020) that stipulates the relationships and the dynamic transformations of a system and the relationships of the entities therein (Morrison, 2015; Pincock, 2011). These models abstract and describe the scientifically salient features of a system. As such, they offer a formal representation of a target system (Morrison, 2015; Weisberg, 2019). This is often called a simulation model (Durán, 2018; Resch et al., 2017). In contrast, as we will see later in this section, computer simulations (of dynamic target systems) are technical artifacts—physical implementations of abstract specificationsFootnote 10—that implement/execute the computational processes required to follow the progression of these specifications and descriptions. That is, computer simulations are the complex devices with which the formal dynamic descriptors, the simulation models, are carried out. While computer simulations may be the product of, or contain within them the specifications of, a conceptual model, computer simulations are something other than the model itself: they are the implementation (through hardware architecture and software specifications) of said models. Hence, at the most basic level, a distinction can be drawn between the model and a computer simulation of that model by noting that the model is not the simulation and vice versa. They are two distinct things. At the very least, even the fact that a computer simulation requires a model to implement and that the model requires a computer simulation to be carried out signals a conceptual distinction between the two. It is also evidence that at a very basic level not only are they different things but also that they have different functions.
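To make the model/implementation distinction concrete, consider the following minimal Python sketch of my own (the names SimulationModel and simulate are hypothetical and not drawn from any of the cited authors). The data object stands in for the abstract specification of a simple growth model, while the function that steps it forward stands in for the artifact that actually carries the specification out:

```python
from dataclasses import dataclass

@dataclass
class SimulationModel:
    """Abstract specification: parameters and a stipulated rule; nothing runs here."""
    growth_rate: float          # stipulated relation between successive states
    initial_population: float   # stipulated starting condition

def simulate(model: SimulationModel, steps: int) -> list:
    """The simulating proper: enacting the specification forward through time."""
    population = model.initial_population
    trajectory = [population]
    for _ in range(steps):
        population += model.growth_rate * population  # enact the stipulated rule
        trajectory.append(population)
    return trajectory

# The model object alone computes nothing; only calling simulate() produces
# the dynamic progression that the model merely describes.
spec = SimulationModel(growth_rate=0.05, initial_population=100.0)
print(simulate(spec, steps=3))
```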

The distinction between a model’s description and a model’s implementation can be understood by analogy to many other representations (Weisberg, 2019), particularly those that include instructions for performance. It is obvious, for example, that a recipe for a cake is not the cake itself. But, furthermore, the recipe of the cake is also not identical to the carrying out of the steps required for the cake to be instantiated. Something or someone must implement the instructions in the recipe. Carrying out these steps is crucial for the instantiation of the cake. Similarly, carrying out the specifications of a simulation model is crucial for the instantiation of a computer simulation. This analogy is not completely without problems, of course. The computer simulation is not like a cake. The computer simulation is not only an end result. Results of a simulation are not the simulation. Rather, as we will see in sections below, the computer simulation is more like a performative measurement instrument: it is what it is while it is doing what it does. In order to see this, consider exercising. Neither the instructions to exercise in a particular manner nor the result of the exercise are exercise. Rather, performing the exercise is the exercise. Similarly, computer simulations are the simulating of whatever target model one is simulating (Imbert, 2017; Morrison, 2015). However, with these simple examples one can envision the many different conceptual dimensions that separate the instantiation of a process from a description of that process. When it comes to computer simulations we must understand that a simulation model, particularly as described by Durán (2020), is not the implementation of the simulation model and therefore is not the simulation itself either. We will see this more in detail in Sect. 2.2.

In the sections below I elaborate on the ways in which computer simulations are distinct from their constituent elements, how they are distinct from the abstract theoretical principles which guide their functioning, as well as how they are distinct from the settings in which they are deployed.

3.1 Computer Simulations as Conceptually Distinct from their Formal Constituents

Let us begin by looking at the many ways in which computer simulations are not identical to any of the preliminary stages required for their development. According to some philosophers and practitioners (see Winsberg, 2010, p. 11; Resch, 2017) a computer simulation can either be the product of the sum of the complete set of stages represented in a simulation pipelineFootnote 11 or it can be only one of the stages: a stage between the actual implementation of functional specifications and its results. Whichever position we take between these two approaches, the computer simulation is still something other than any of the formal elements that it manipulates or that constitute it. Hence, we can say that computer simulations are neither the complete set of stages preliminary to their yielding of results, nor are they their end results. Whatever computer simulations are,Footnote 12 the point here is that however many different ways one chooses to flesh out the process of creating them, computer simulations are a distinct thing from those processes in and of themselves.

It is well documented, for example, that the mathematical operations that form the basis of computer simulations of dynamic systems are quite different from those found in the theoretical stages of inquiry (Durán, 2018; Morrison, 2015; Winsberg, 2010). Consider a computer simulation that is developed in a context in which well-established theoretical principles and thorough mathematical equations exist for a target phenomenon. It is well known to anybody dealing with coding mathematical models into computer languages that the equations in such well-established theoretical models are seldom, if ever, directly part of the computer simulation itself. The continuous equations used to formalize theoretical principles in fields such as physics or other natural sciences have to be translated into discrete and approximate solutions that computers can process. The way a computer simulation solves an equation is by providing approximate values to discretized parameters that roughly correspond to the results one gets from a continuous equation. While the results can be similar to within an almost negligible degree of difference, the fact remains that both the results and the methods by which they are arrived at are distinct from those of a continuous equation. Furthermore, while these translations, from one kind of mathematical model to another, have substantial sophistication and research to support our reliance on them as sound scientific practice, the translation of the latter into computer language (programs) is not as well established. Often the translation of mathematical models into code for a computer to run includes many idiosyncratic engineering practices that are far removed from the sound theoretical principles in virtue of which the initial model was constructed.
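As a rough, illustrative example of this translation (a toy case of my own, not one discussed by the cited authors), take the continuous decay equation dx/dt = -k*x, whose exact solution is x(t) = x0 * exp(-k*t). The machine never integrates the continuous equation; it steps through a discretized surrogate (here, a forward Euler scheme) whose output only approximates the continuous result:

```python
import math

# Continuous model: dx/dt = -k * x, with exact solution x(t) = x0 * exp(-k * t).
k, x0, t_end = 0.5, 1.0, 2.0
exact = x0 * math.exp(-k * t_end)

def euler_decay(x0: float, k: float, t_end: float, n_steps: int) -> float:
    """Discretized counterpart: repeated approximate updates a machine can execute."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (-k * x)   # discrete step standing in for the continuous derivative
    return x

for n in (10, 100, 1000):
    approx = euler_decay(x0, k, t_end, n)
    print(f"{n:5d} steps: approx = {approx:.6f}, exact = {exact:.6f}, "
          f"difference = {abs(approx - exact):.2e}")
```

The two sets of values converge as the step size shrinks, but the procedure by which the discrete values are produced is nothing like solving the continuous equation.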

This is an important departure between scientific models and the computer simulations with which they are explored. Scientific models often require theoretically principled mathematics that have specific properties that tie them back to the phenomena that they are meant to capture.Footnote 13 Numerical methods of the kind used for computer simulations are more often than not guided by the need to reproduce approximate values that only tie them to the original continuous formalities of scientific models but not to the phenomena in question (Parker, 2003; Winsberg, 2010; Morrison, 2015; Symons & Alvarado, 2019). That is, while conventional uses of mathematical abstractions in scientific models require a theoretical justification that ties them to the subject of inquiry, their discrete counterparts respond only to the adequacy of approximations and not to whether or not the phenomena to be simulated may in fact elicit some or any of the processes by which such values are arrived at. One can, for example, arrive at similar values through many motley processes (Winsberg, 2010) that respond to engineering constraints, formal language choice, computational ingenuity, etc., without regard to whether or not the methods by which said values are arrived at have anything to do with reality. This point already signals, at the very least, a departure—a gap—between the originating formal aspects of inquiry and their machine-implemented counterpart in computer simulations. The departure consists in the different target, the subject of interest: a scientific model’s target is a phenomenon in the world, while a computer simulation’s target is the scientific model’s output values (Morrison, 2015).

Furthermore, besides the distinct mathematics at play, there is also a difference in formal methods between discrete mathematics and the code used to implement it. A machine must understand what to do in order to carry out the discrete operations, and this is specified through a set of logical commands that deviate from the principles of mathematics that guide either the discrete or the continuous mathematical operations (Parker, 2009; Winsberg, 2010). In fact, the procedures by which the machines execute these logical commands are often the result of engineering ingenuity that has little to no formal basis (Gransche, 2017, p. 38). These two stages of the design, development and deployment of computer simulations alone already constitute grounds for distinguishing not only between computer simulations and mathematical models, whether these models are computer-based or not, but also between the coding and the machine execution of such code.
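The further gap between the discrete mathematics and its machine execution can be sketched with another small, hypothetical example of mine: the same discrete sum can be issued to the machine as different sequences of commands, chosen on engineering rather than mathematical grounds, and the resulting executions are not interchangeable at the level of what the machine actually does (nor, typically, in the last digits of the floating-point value returned):

```python
# Two machine-level renderings of the same discrete sum: 1/i**2 for i = 1..N.
# Mathematically they denote one and the same quantity; as executed code they
# differ in the commands issued and, typically, in the rounding they accumulate.
N = 1_000_000

forward = 0.0
for i in range(1, N + 1):       # ascending loop: one engineering choice
    forward += 1.0 / (i * i)

backward = 0.0
for i in range(N, 0, -1):       # descending loop: a different, equally workable choice
    backward += 1.0 / (i * i)

# The values agree to many digits, yet the executions are distinct procedures,
# and their floating-point results need not be bit-identical.
print(forward, backward, forward - backward)
```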

At this point we can say without much controversy that computer simulations are not identical to the mathematical models they implement, the equations in such models, or any of the formal aspects that constitute them. Below, I detail how they are also distinct from the empirical practices in which they are used.

3.2 Computer Simulations and the Experiments they are Deployed in

Very similar points to the ones made above can be made about the relationship between computer simulations and experimental settings. As suggested by Barberousse (2019), a computer simulation can be the software/hardware implementation of experimental specifications. Barberousse argues that full experimental settings that include procedures, computations, controls and data manipulations can be encoded in the programs we use in simulations as well as in the architecture used to run them. That is, full specifications and descriptions of an experimental procedure can be encoded for a machine to execute—or more accurately, as we will see below, to simulate. However, that this fact warrants thinking of the computer simulations as identical or similar in nature and function to the experiments themselves is not immediately obvious. In fact, it is this functionality of computer simulations—the capacity and inherent design to simulate—that precisely separates computer simulations from what an experiment is,Footnote 14 from what an experimental setting comprises and from what an experiment’s description and specifications are and do.

For example, a description of an experiment does not simulate the experiment. I take this point to be fairly straightforward: if I write a detailed description and specifications for an experiment on a napkin, neither the napkin nor my markings on it constitute the experiment. Similarly, a description of an experiment to be simulated on a computer is not a computer simulation. The description alone does not simulate. That is, it does not and cannot implement the necessary processes that carry out the operations required to simulate anything. Furthermore, although both are representations, they are not of the same kind. This is evident when we think of the difference between a written equation that one has to solve and an equation that is solved by a machine. The written equation does not solve itself. It is but a blueprint, descriptive specifications of a process that still requires an implementation, namely something or someone to instantiate the necessary operations in the order specified to transform input into an output and represent the changes accordingly.

In other words, computer simulations are conceptually distinct from the processes and components that constitute them. They are also distinct from the purposes for which, and the settings in which, they are deployed. For example, a computer simulation can be used in an experiment without constituting the experiment itself. Computer simulations can also simulate an experiment, but the fact that they can “run” the simulation of the experiment is already evidence that they are something other than the experiment itself. A simple way to visualize this is to think of a laboratory. A laboratory is a place that has properties such that one can carry out an experiment. A conceptual distinction can almost always be drawn between the place that has the properties for the experiment to be carried out and the properties of the experiment itself. Even more precisely, we can think of the difference between an experiment and the instruments that enable a scientist to conduct it. In short, a computer simulation does things and is deployed to do things other than what its constituent components do. When they are deployed to do similar things to the things that models do—such as modeling, solving, etc.—computer simulations do them differently. This is in large part why we use them in the first place: they allow us to do things that neither of the conventional elements of scientific practice allows us to do, or they allow us to do some of those things in a preferable way (considering tradeoffs) (Simon, 1996; Parker, 2003; Humphreys, 2004; Winsberg, 2010; Weisberg, 2012; Morrison, 2015). If this is so, then we also have a functional way to distinguish computer simulations from their components, the stages of their construction, the content they manipulate and the settings in which they are deployed.

3.3 Computer Simulations as Functionally Distinct

In the paragraphs above I showed that at a very basic level, computer simulations can be conceptually distinguished from their constitutive formal aspects and from the experimental practices for which they are deployed. Whatever they are, computer simulations are not the formal methods and they are not the experiments with which they are often equated or under which they are subsumed in the philosophical literature. Drawing from the points in the last paragraph above, in this section I will show that computer simulations are also distinct from scientific models and from experiments in that they carry out different tasks than those of models and those of experimental practices. Furthermore, when the functions of computer simulations do overlap with the functions that can be carried out by other methods, these functions are carried out in a different way (faster, discretely, approximately, etc.). As mentioned above, early successful computer simulations were deployed predominantly in order to “bypass the mathematical intractability of equations conventionally used to describe” highly non-linear phenomena (Fox-Keller, 2003). This is evidence that these technical artifacts were conceived of, designed, developed and deployed in the service of a task other than the tasks that could be carried out by any other tools at researchers’ disposal at the time (hand-written calculations, human computers, abstract and physical models, etc.). I expand on these points below.

At a fundamental level, the function of the computer simulation, to simulate, is something that is not performed by either the formal elements that constitute it—such as a theoretical model—or by any of the components that comprise it (simulation model, description, computational architecture, etc.). This function is also not performed by the experimental specifications encapsulated (Baird, 2004) in its procedures. I can, for example, design a whole experimental setting and then, rather than hire a team of researchers, conduct field studies, build a lab, etc., I can put together a computer simulation that can simulate this experiment. The simulation’s task is to simulate the experiment encapsulated in its specifications. But the computer simulation is not necessarily the experiment and the experiment is not necessarily the computer simulation: the simulation does something other than what these things do.

More importantly, a computer simulation is designed to do something else. As we saw at the start of this paper, it is this intentional design that is the core of what makes artifacts distinct from one another and from other things, e.g. organisms (Symons, 2010) and other natural objects with functional properties such as pseudo-artifacts (Kroes, 2006).

The functions for which artifacts are designed are particularly important to technical artifacts, and more so when these technical artifacts are deployed in scientific contexts where epistemic requirements are stricter than those in ordinary epistemic experience (Symons & Alvarado, 2019). A computer simulation is a technical artifact, specifically a physical construct with a specified function at the design level.Footnote 15 In a simple way, it is a technical artifact designed to run the model(s) or the comparative processes specified by the experimental procedures encoded in it. It is designed to represent in a performative manner the dynamic progressions of a system specified in a model or an experimental setting. While a computer simulation can be used for many broadly construed scientific purposes—explanation, experimentation, etc.—that may overlap with the purposes of other elements of inquiry like those of models or experiments, computer simulations are, at their core, designed to simulate. This function is different from that of the model, from that of the experimental procedures it is encoded to follow, and from that of the experimental settings in which it is itself embedded as an instrument. While it is true that both a simulation and a model share a few common functions and properties, namely those associated with representing, a non-trivial difference lies in their performative status. A simulation can only represent by performing stipulated operations. A model does nothing of the sort. When faced with a static model, for example, or a description of a model, it is the epistemic agent that performs the operations therein. Just like a drawing on a napkin does not simulate, neither does a model insofar as it is not implemented by something or someone.

Similarly, a computer simulation may share some functions and properties with those of some experimental practices (Morrison, 2015). It is true that both the simulation and a controlled experiment allow a researcher to test parameters, manipulate values, etc. However, the computer simulation is to the experimental setup what the laboratory is to the experiment. That is, an experiment can be differentiated from that which enables us to carry it out. They do different things. An experiment can be designed to test, manipulate and explore a given hypothesis; a computer simulation of an experiment is designed to simulate the experiment that tests, manipulates and explores that hypothesis. There is an added functionality to the simulation, namely to simulate, that makes it functionally—and even metaphysically—distinct. This is so even when the simulation becomes the subject of an experiment. Insofar as there is a function or a property of the experiment or the simulation that falsifies an identity relation between the two, they are evidently distinct.Footnote 16

Here again, we can see a fundamental departure that allows us to distinguish computer simulations from both the formal elements that constitute their functioning and from the experimental settings in which they are deployed. Consider that a computer simulation is not the end result of dynamic calculations, nor is it the final state of the processes of the target system being simulated. Rather, a computer simulation is the execution of the processes meant to represent the dynamic development of the target phenomenon. To have a clearer view of this point consider the following: a still image of a specific point in time in future galactic formations is not a simulation. This is the case even if the image was produced by following formal specifications, models, equations, etc. It is still just an image and not a simulation. The image may even be the product of a simulation, but it is not the simulation itself. Furthermore, consider a numeric representation of the future position of a specific star within this galactic formation: this result is also not the simulation, though it may have been acquired through it. The simulation happens somewhere after the specifications are implemented and before the results are produced. The simulation happens as it is performed. The computer simulation is that which carries out such performance.

Besides implementing the specifications of conceptual models and other formal specifications, a computer simulation can also be designed to compute and represent the progression of the transformations of values in an experimental setting. And just as with the example above regarding the extra functional aspect of computer simulations in relation to conceptual models, the distinction is evident here once more: computer simulations can simulate an experimental setting (Barberousse, 2019). Once again, in this sense, a computer simulation has a different function from that of the experimental setting: it is an artifact designed to simulate it. A model of a system can be, and often is, constructed with the dynamic provisos of a target system so that it can, when implemented on a separate artifact, provide the necessary specifications for this separate artifact—the simulation—to mimic the behavior (dynamic development) of said system. Yet the model does not itself constitute a simulation. The model does not run the model, the experiment does not run the experiment, nor do they simulate themselves—that is, they do not mimic/represent themselves: the computer simulation does.

In previous sections I sought to establish that there is a conceptual independence between formal abstractions, experimental settings and computer simulations. The point of this section is to show that even when these formal abstractions or the experimental settings are an integral part of the simulations such that the computer simulation manipulates the formal abstractions or such that it simulates the experimental settings, it is still a distinct thing in virtue of what it is doing to them or with them. The function of the simulation is to simulate and this is a function not found either in the mathematical abstractions or in the experimental settings that it simulates.

In short, whatever it is they do, what computer simulations do is not done, or is not done in the same way, by theoretical elements, or models. Similarly, whatever computer simulations do is also not what experiments do. Rather, they are the thing with which the experimental values are entertained, the thing with which the entities and transformations of a target in the real world are mimicked, or the thing with which the experimental procedures are automated, etc. They are simply a distinct thing and their epistemic status reflects this. The epistemic warrants, for example, that underlie our reliance on the formal methods that underlie their functioning are not the same warrants that could justify our reliance on computer simulations (Alvarado, 2020; Symons & Alvarado, 2019). The same reasons that lead us to trust a simulation model (accuracy, reliability, etc.) cannot simply be transferred to the computer simulation that uses that simulation model. Rather, the simulation as mediator artifact, as an instrument, must be calibrated and validated on its own terms. Yet again, this shows an epistemic independence that points to the fact that computer simulations are not identical to the formal elements that underlie their functioning.

4 Computer Simulations as Instruments

Given the discussion in the sections above, we can say that the following is true: computer simulations are simply not identical to either of the things they have been compared to or subsumed under and—following Lenhard (2007, 2019), Morrison (2015) and others (Morgan & Morrison, 1999)—computer simulations do not fit neatly into either category of the conventional dichotomy between formal abstractions and experiments. Some philosophers have proposed that this is because computer simulations are a special kind of formal method, such as a special kind of model (Weisberg, 2012). Others (Morgan & Morrison, 1999; Barberousse, 2019) have proposed that this is due to the fact that computer simulations are a special kind of experiment, more akin to measurement than to empirical practices with causal interventions. Still others, such as Lenhard (2019) and other practitioners (see Humphreys, 2004; Rohrlich, 1990), have proposed that this is due to the fact that computer simulations are a special and novel way of doing science.Footnote 17

The previous sections offered a series of distinctions that allow us to differentiate computer simulations from both the formal and the empirical elements of scientific inquiry that they are conventionally compared to or subsumed under. Simply put, simulations are something else. Following Davis Baird (2004) and others before him (see Van Helden, 1993) who suggest that there is already a well-established—if often neglected by the philosophy of science—branch of scientific inquiry related to instrumentation, in this section I argue the following two things:

(a) Computer simulations are best understood as belonging to this latter category: i.e., they are instruments. Yet,

(b) They are a hybrid instrument.

Understanding (a) provides us with a broad framework to elucidate why computer simulations simply do not fit neatly into the conventional categories that they have been erroneously subsumed under. As we saw in the discussion above, views that attempt to reduce computer simulations to either theoretical elements or empirical elements of inquiry cannot deny the distinctiveness and therefore individuality of the artifact in question: the computer simulation. Philosophers have known for a while that computer simulations did not immediately fit within either category of the conventional dichotomy under which they were subsumed.Footnote 18 To make sense of this and of their epistemic role in scientific inquiry, philosophers and practitioners suggested that computer simulations are, epistemologically speaking, somewhere in between experiment and theory (Rohrlich, 1990; Morgan & Morrison, 1999; Humphreys, 2004). These views are somewhat correct. Computer simulations do show a dual nature of sorts, and they do not belong on either side of such a dichotomy. However, this is not, as I will argue below, because of their novelty (Frigg & Reiss, 2009; Humphreys, 2009) vis-à-vis scientific methodology as a whole. They do not represent, for example—and strictly speaking—a paradigm shift regarding foundational theoretical principles or values in scientific inquiry. Computer simulations are not epistemically in-between because of their novel methodological characteristics—as Lenhard (2007, 2019) and others suggest—though they may have some of these novel characteristics. They are also not in between solely because they can function as other measurement practices do—a suggestion made by Margaret Morrison as a means to accommodate the seemingly ambiguous epistemic nature of computer simulations. Both of these views—Lenhard’s and Morrison’s—can easily be accommodated when we locate computer simulations in the realm of novel technologies: a novel device with which to do old things better and with which to do some new things. This is because technical artifacts, such as instruments, in general exhibit a dual nature between the abstract specifications of their design—which in the case of scientific instruments often include theoretical formalities—and the materiality of their implementation. Yet, as a device, computer simulations do not represent a novel branch of scientific inquiry but rather a novel addition to an often-neglected branch of inquiry: instrumentation.

As Morrison (2015) suggests, computer simulations can be understood as a novel measuring device with which to conduct a fundamental element of experimentation—measurement. As such they provide a way not only to run experiments or run models but also to investigate the adequacy, or the lack thereof, of models and experimental settings.Footnote 19 This bidirectional feature—that they can at the same time function within an experimental setting and also test the adequacy of elements in the experimental setting—also makes computer simulations a hybrid of sorts in that they allow us to experiment on the things that inform them—to experiment on the experiments, if you will.Footnote 20 But it is precisely here that we find what we are looking for: computer simulations are more aptly described as a novel device with which we can do novel or distinct things from what we could do with other available devices and methods in scientific inquiry. Hence, a better explanation of why computer simulations are somewhere in between experiment and theory is that they are in fact neither. Just like a stethoscope, which is neither a purely theoretical construct in medicine nor an experimental practice in and of itself, computer simulations are also neither. Stethoscopes can be used properly when their use is theoretically informed. That is, when their design, development and deployment emerge from a deeper conceptual understanding of the phenomenon that they are designed to detect and a conceptual understanding of the instrument’s relation to the kind of inquiry in which the stethoscope is deployed. Similarly, stethoscopes can also be an adequate device in an experimental setting within medicine: you can test hypotheses with the help of the stethoscope. Yet, stethoscopes belong neither to the class of things that constitute theoretical knowledge, nor to the class of things that are the subject of inquiry in the practice of medicine itself. Rather, they are instruments designed, developed and deployed in clinical practices of medicine. Similarly, computer simulations belong to a third—and equally important (Heilbron, 1993; Van Helden, 1993; Baird, 2004, p. 89)—element of scientific inquiry: namely, instruments.

Understanding (b) requires that we see what kinds of roles are played by instruments in scientific inquiry and which kinds of instruments fit those roles. As we will see below, computer simulations continue to elicit a certain in-betweenness even within the category of instrumentation. This is because, like many other complex instruments, in order to function, computer simulations must do many and varied things. As I will show below, they are hybrid instruments. In the following sections I go through different taxonomies of instrumentation and show how computer simulations, as technical artifacts, as members of scientific instrumentalia (Durán, 2018), are also epistemically diverse and therefore hybrid: they enhance our understanding in many different ways. This further solidifies the intuitions discussed above regarding their in-betweenness and their novelty, but this time within the category to which instruments belong.

4.1 Hybrid Instruments

First and foremost, computer simulations, as technical artifacts go, belong to the class of technical artifacts whose main function is to serve as epistemic enhancers (Humphreys, 2004). This rough characterization is sufficient to provide an intuitive framework in which we can differentiate them from the kinds of artifacts that enhance other limitations of human agency, such as physical strength or perceptual abilities. The kind of enhancement that a calculator provides, for example, is different from that of a bulldozer, which is in turn distinct from the kind of enhancement provided by a microscope or a hearing aid. While in a scientific setting any instrument can be said to contribute to the general aim of knowledge acquisition, we can still differentiate between the artifacts that augment our physical capacities and those that augment our epistemic ones.Footnote 21 If computer simulations enhance anything, they enhance our ability to acquire knowledge and not our ability to push harder or dig deeper.Footnote 22

According to Humphreys (2004), there are three ways an epistemic enhancer can extend the reach of our understanding. The first one is extrapolation, which is the capacity of an instrument to expand “the domain of our existing abilities” (p. 4). Then there is conversion, which happens when “phenomena that are accessible to one sensory modality […] are converted into a form accessible to another” (2004, p. 4). And finally, there is augmentation. This last kind of enhancement occurs when, mainly through one of the other sorts of enhancements—particularly conversion (p. 4)—we are “given access to features of the world that we are not naturally equipped to detect in their original form”.

At first sight, it is easy to take computer simulations to do all three, and often at the same time. A computer simulation can, for example, allow us to gain insights into the evolution of galaxies which would take millions of years to examine in real time. At the same time they can convert intractable numerical values into immediately intelligible visualizations of complex dynamics. Furthermore, as with the example of the evolution of galaxies, they can provide access to “features of the world that we are not naturally equipped to detect in their original form.” A careful reading of what Humphreys has in mind, however, reveals that there are some challenges in this characterization. In order to better understand the epistemic role and position of computer simulations in scientific inquiry it is worth going through these three types of enhancement in detail, as they provide a picture of the kinds of epistemic endeavors that computer simulations undertake as well as a glimpse into why they are hybrid instruments across many domains and dimensions of inquiry.

We can understand each one of the distinct kinds of enhancements proposed by Humphreys with the help of some examples. Humphreys (2004) begins by pointing to the perceptual enhancement characteristic of optical instruments to exemplify extrapolation. Telescopes and microscopes, for example, expand the domain of the visible things for us. They also expand the level of detail of a perceptive ability that most of us are already acquainted with, namely vision. Similarly, other kinds of telescopes expand the range of the spectrum of electromagnetic radiation available to us without them. When it comes to computer simulations, we can see that, at the very least—particularly if we share Humphreys’ understanding of them as mathematical machines—they enhance our existing ability of analysis. That is, if we consider that we as epistemic agents have an analytical ability, say, to manipulate and entertain the relationships between values, as well as between the symbols that represent these values, and to infer from their transformations, then we can see that computer simulations indeed expand on this existing modality. Therefore, computer simulations expand the domain of our existing epistemic abilities.

That computer simulations can enhance our epistemic capacities via conversion is a lot more straightforward. Consider the necessary conversion that musical notation undergoes when implemented on a musical instrument: the information on the sheet of music is of a different kind, namely visual or logical, and is converted into sound. In the scientific context of computer simulations one can immediately see this type of enhancement occurring when computed numerical values are converted into pixelated gradients on a grid and the transformations of such values are displayed as spatial changes on a screen. In this example, mathematical, or merely numerical, information is transformed into visual information. So, computer simulations also convert.
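A toy sketch of this kind of conversion (my own illustration, with invented values): a grid of computed numbers is re-expressed as a crude visual field, which is, in miniature, what a simulation's display pipeline does when it turns numerical output into pixelated gradients on a screen:

```python
import math

# Compute a grid of purely numerical values (a toy two-dimensional wave pattern).
rows, cols = 12, 40
values = [[math.sin(x / 4.0) * math.cos(y / 3.0) for x in range(cols)]
          for y in range(rows)]

# Convert the numbers into visual information: each value in [-1, 1] is mapped
# to a character of increasing "brightness" and printed as part of an image.
palette = " .:-=+*#%@"
for row in values:
    print("".join(palette[int((v + 1) / 2 * (len(palette) - 1))] for v in row))
```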

Whether or not computer simulations allow us to augment our epistemic capacities in the sense specified by Humphreys is an interesting question. Augmentation, as you may recall, occurs when we are “given access to features of the world that we are not naturally equipped to detect in their original form.” Humphreys himself notes that this is not immediately obvious from looking at what computer simulations do. According to him, simulations are the kind of thing that we use solely for mathematical tasks. In his view, computational methods have not yet proven to have given us access to mathematical features that we are not naturally equipped to detect in their original form. This is a contentious issue that exceeds the scope of this paper.Footnote 23 For now, it suffices to say that the ability of computer simulations to both extrapolate and convert is evidence of their hybridity as epistemic enhancers. What this shows, at least, is that they are not just one kind of epistemic enhancer but rather that they can have multiple functions and function as multiple kinds of instruments at once.

Importantly, others, particularly Symons and Boschetti (2013), believe that the function of a computer modelFootnote 24 is simply to predict and that this alone should constitute the basis upon which we judge its merit. However, they also admit that computer models can do other things, particularly that they can be used in exploratory tasks (2013, p. 813).

Computer simulations also prove to be hybrids of some other sort. They can be more than one kind of instrument at the same time. According to Baird (2003, p. 45), there are three kinds of instruments: models, which represent; devices that create a phenomenon; and measuring instruments, which can either detect the instance of a property or compare theoretical values against a phenomenon. Conventional measurement instruments, such as thermometers, as we will see, are, according to Baird, hybrids between the kinds of instruments that represent and those that create or recreate a phenomenon. This is because they must create/recreate a set of procedural steps in order to obtain their reading. Models, according to Baird, are not merely representative in that they ‘stand in’ place of actual phenomena of interest. Rather, they are representative in that they integrate knowledge and are constituted by knowledge of the target itself in an epistemically independent way. That is, in their own way and not necessarily in the same way that theory or experiment do. He explains this epistemic independence of models as instruments via Watson and Crick’s double helix DNA model. In this case, Baird says, they “did not use the model as a pedagogic device. They did not simply extract information from it. The model was not part of some intervention in nature and it was also not a part of an experiment.” (p. 36) Hence the model was not theoretical and was not part of an interventionist empirical practice such as a conventional experiment.Footnote 25 And yet, the model had the standard theoretical virtues since “it can be used to make explanations and predictions. It was confirmed by X-ray and other evidence, and it could have been refuted by evidence.” (p. 36) Computer simulations can also function like this when they are used as a device that is independent from both theory and empirical experimentation to test or inform theory and experiment construction. Lenhard (2007), for example, suggests that computer simulations can be used to fine-tune the model specifications, parameters and assumptions of an experiment before having to carry it out. Furthermore, computer simulations are often designed with their representative functions in mind. Some simulations, like those of cellular automata, are paradigmatic of the dynamic Baird is alluding to. They were developed independently of any theoretical framework associated with any particular phenomenon, or even discipline. They were also developed independently of any particular experimental setting associated with an inquiry into a target phenomenon. While they were themselves experimental, they were not part of a premeditated, focused inquiry besides that of investigating the features of the machines that produced them. It was only later that they came to be used as a tool that could provide both theoretical and experimental insight regarding the formation and development of natural systems deemed to be similar enough to them.

Measuring instruments, on the other hand, according to Baird, work by generating a signal from an interaction with a given target “which, suitably transformed, can then be understood as information about” that target (Baird, 2004). According to Baird, measurement requires that we can “produce, in laboratory conditions, a stable numerical phenomenon over which one has remarkable control.” (Hacking, 1983, as cited in Baird, 2004) Measuring instruments are instrumentally “encapsulated knowledge” (Baird, 2004, p. 68) because they are constituted by the integration of a material object and the kind of knowledge provided by a model, theoretical values and principles. Hence, measuring instruments are hybrids in that they must reproduce and perform a set of specified procedures in order to represent their reading. A key insight in this description comes from Baird’s use of Hacking’s definition of measurement, in which the main function of a measurement is to produce a “stable numerical phenomenon” in a setting of rigorous control. Computer simulations, in fact, are the kinds of technical artifact that can and do produce numerical phenomena. In fact, even if we take only the narrow definition of computer simulations (Durán, 2018) as equation solvers, this is what computer simulations strictly do. Furthermore, as far as controlled situations go, it simply does not get any better than the abstract realm in which some philosophers take computer simulations to operate. If computer simulations are, for example, anything like implemented models as Herbert Simon (1996) suggests—machine-automations of mathematical relations—then they are the kinds of instruments that Baird alludes to.

While a lot of work is being done by the first part of the description regarding the generation of a signal by a measuring device, this can easily be interpreted as exactly what the display in some computer simulations is doing. We can think of an instrument which, upon detecting a certain signal, reacts accordingly. We can also think of an instrument which only produces such a reaction when other indirect values are computed, such as the ones that Morrison describes in particle physics. These two kinds of instruments are different in one sense: they do not both interact with the phenomenon in an equally direct way. However, they are also similar in that a computation must take place, whether analog or digital, in order for the detection to occur. If so, the difference is one of degree and not of kind, and computer simulations can indeed qualify as a version of the latter kind (Morrison, 2015). Computer simulations also have to carry out, that is, reproduce, a set of procedural specifications every time they are meant to represent whatever they are simulating. In this more physical sense, computers are reproducing a certain state of affairs as they implement the specifications of their simulation model. As we saw above, one of the things that simulations do is to encapsulate, through their procedure, the testing of models (Lenhard, 2007). Once a procedural hierarchy has been established to run the dynamic equations of a system, the computer simulation can in some way test whether these dynamics correspond to the phenomena that researchers are investigating. But computer simulations do not only encapsulate knowledge regarding the principled theoretical values and the direct experimental data; they also encapsulate the procedure by which that content is transformed and manipulated. That is, they encapsulate experimental settings too (Barberousse, 2019). As such, they are hybrid instruments in Baird’s terms, and we can characterize their in-betweenness within the realm of instrumentation without appealing to a sui generis branch of science altogether. Thus, understanding computer simulations as instruments best explains the in-betweenness that so many philosophers of science have pointed to.
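
The following sketch illustrates, under invented assumptions, how a simulation can encapsulate both a model’s dynamics and the procedure for testing them against observations, in the spirit of the fine-tuning described above: a simple growth model is run over a handful of candidate parameter values and compared against placeholder ‘observations’ before any real experiment would be committed to. Neither the model nor the numbers correspond to any actual study.

    # Sketch of a simulation encapsulating both a model's dynamics and the
    # procedure for testing them.  The 'observations' and candidate parameter
    # values are invented placeholders, not data from any actual study.
    def simulate_growth(rate, n_steps=5, x0=1.0):
        """Run a simple discrete growth model x[t+1] = x[t] * (1 + rate)."""
        xs = [x0]
        for _ in range(n_steps):
            xs.append(xs[-1] * (1 + rate))
        return xs

    observed = [1.0, 1.21, 1.44, 1.73, 2.07, 2.49]   # placeholder observations

    def misfit(rate):
        """Sum of squared differences between simulated and observed values."""
        return sum((s - o) ** 2 for s, o in zip(simulate_growth(rate), observed))

    # Sweep candidate parameter values before committing to a costly experiment.
    candidates = [0.10, 0.15, 0.20, 0.25]
    best = min(candidates, key=misfit)
    print(best, misfit(best))

The comparison step is itself part of what the simulation carries out, which is one way of making concrete the claim that such artifacts encapsulate experimental procedure as well as theoretical content.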

There are, of course, other ways of cataloguing the kinds of artifacts found in laboratories, and computer simulations also fail to fall neatly under one single category or another in these other taxonomies. Heidelberger (2003), for example, distinguishes between four distinct functions of instruments in scientific experimentation. Two of these are the productive and the constructive functions. A scientific instrument is productive when it produces a phenomenon that does not normally appear in everyday epistemic experience. A constructive instrument, on the other hand, is the kind that can intervene in the target of interest in order to modify its behavior (2003, p. 146). If we consider that computer simulations are capable of elucidating properties of a system that are not easily found in the world, and of manipulating data in ways that are not usually available in the world, we can construe this as meeting the first condition. Much of astrophysics and particle physics would be unavailable to us otherwise. So, we can agree that computer simulations have the capacity to manipulate data in such a way as to mimic behaviors of a system that are not easily found in the world. Granted, as we have seen in previous sections, whether this constitutes ‘doing experiments’ or not is a contentious matter. However, we can always fall back on Morrison’s conception that, at least in some cases, the way we conduct experiments in physics is not so far removed from the way experiments are characterized by simulation processes. If so, then we can say that the manipulation of data in fact constitutes both the production of phenomena not easily found in the world and an intervention that modifies their behavior.

Heidelberger’s view also includes two more categories that are important for our discussion. The performative aspect of computer simulation is indeed important, but this performance is often deployed with a further epistemic purpose, namely to render intractable processes intelligible, often through visualization. Heidelberger calls such instruments imitative, in that they “produce effects in the same way as they appear in nature without human intervention.” (2003, p. 147) He also posits instruments as acting in a representative role, where the “goal is to represent symbolically in an instrument the relations between natural phenomena and thus better understand how phenomena are ordered and relate to each other” (p. 147). Without going into too much detail, we can see that computer simulations straightforwardly carry out more than one of these tasks: they represent, they reproduce in an imitative manner, they are constructive in their control of variables, and so on.

These other taxonomies provide yet another explanation for the recurring intuition that computer simulations are always neither here nor there, but rather in between our efforts to categorize them. The difference here, however, is that this in-betweenness is no longer characterized as occurring at the level of the meta-methodical aspects of scientific inquiry. That is, the in-betweenness of computer simulations is not between the formal and experimental practices of science, but rather between conventional categories of instruments and artifacts found within scientific inquiry. While the details in each of these cases can be vastly expanded, what this point is meant to show is that, at the very least, computer simulations are the kind of instrument that does not fit easily into conventional categorizations of instruments in scientific inquiry. It is also to say that computer simulations are the kind of instrument that can do these and other things, that incorporates the functions of many instruments, and that is therefore a hybrid instrument. Computer simulations may indeed be a novel kind of instrument that requires its own epistemic assessment with regard to its status in scientific inquiry. If this is so, it is not because they are a sui generis kind of method, or a third branch of inquiry all on their own. Rather, it is because, as instruments, they may have genuinely novel properties that therefore pose genuinely novel epistemic challenges. This, by the way, is a common trajectory for all novel instruments introduced into scientific inquiry. Hence even their seemingly novel epistemic character can be explained by the view that understands them as instruments and not something else.

In short, computer simulations are hybrid epistemic enhancers (Humphreys, 2004) in that they help us transform one sort of information into another, they help us enhance existing capabilities, and they allow insight into areas that we would not have access to otherwise. They are hybrid instruments in that they are often capable of simulating both a target phenomenon and the processes by which an experiment on it is conducted (Barberousse, 2019).Footnote 26 Finally, computer simulations are also hybrid in that they are capable of being both productive and constructive instruments in Heidelberger’s terms. In other words, they are able to produce (simulate) a phenomenon in an environment that does not exist in nature as well as to modify the (simulated) behavior of a system through intervention. All of this they can do because they are instruments.

5 Conclusion

In this paper I have argued that computer simulations are best understood as instruments. This is because computer simulations are, at the very least, something separate and distinct from the theoretical and practical aspects of inquiry to which they have been compared and under which they have been subsumed. Furthermore, computer simulations are technical artifacts, and their distinctness as such can be functionally identified as separate from the formal methods and the experimental practices for and in which they are deployed. That the epistemic status of computer simulations is distinct from that of the elements that underlie their functioning, and from that of the experimental settings in which they are deployed, also points towards their categorization in this third, equally essential, branch of scientific inquiry. I also argued that their characteristic in-betweenness is best explained by understanding them as hybrid instruments rather than as a sui generis branch of scientific inquiry. While substantial challenges remain to be discussed, such as whether or not we can position computer simulations as rightfully belonging in the canon of properly sanctioned scientific instruments, the arguments in this paper constitute a preliminary step towards a more robust and adequate understanding of their possible role in scientific inquiry. They also offer a unificatory understanding of computer simulations, one that is compatible with much of the literature and that requires only one ontological commitment: to view them as the instruments that they are.