1 Introduction

This article reviews an impassioned debate that has focused on a ‘what is-question’, namely: What is a (digital computer) simulation? The ensuing discourse centres on the questions of what kind of method simulation is and how scientists obtain knowledge from running simulations. At its root, therefore, this is an epistemic-methodological discourse, led by philosophers of science.

Peter Galison was the first to address the key question of this discourse when he asked: Where should computer simulations be located on the ‘usual methodological map’, which distinguishes experiment from (mathematical) theory? (Galison 1996, 120) His question identifies the universe of discourse for answering the what is-question. The genus proximum is obviously restricted to a very simple methodological map. This implies that the discourse reviewed in this article does not include all discussions concerning the definition of the concept of simulation. For example, neither Humphreys’ definition (see below) nor the definition by Hartmann (1996) refers to this genus proximum. This article, consequently, does not review the discussion on these definitions. Theory and experiment are the recurring reference points of this discourse. The specific question addressed is: do (digital computer) simulations ultimately qualify as experiments or as thought experiments?

Ever since Galison (1996) posed that initial question, a passionate debate has developed, raising many further questions in the epistemology and methodology of computer simulation. Dowling (1999, 271), for instance, argues that the usefulness of a computer simulation depends on the construction and maintenance of its methodological ambiguity. The state of the art, however, has been described and evaluated as a ‘battlefield’ (El Skaf and Imbert 2012, 3453), and a comprehensive overview is missing. This article addresses that omission, illuminates the battlefield, and reviews the discourse. Its purpose is also to evaluate that discourse and to offer an outlook on questions that have not yet been addressed. The article argues that this discourse has provoked many other debates, both epistemological and methodological. With respect to the ongoing discourse, it suggests opening the debate to new positions derived from locating computer simulations on richer methodological maps and from turning from simulations to the activity of simulating.

A short overview of key issues in the philosophy of simulation is presented in Sect. 2. The next sections review the debate concerning the location of simulation on the methodological map. In this debate four major positions have evolved: simulation is a special type of thought experiment (Sect. 3); simulations differ fundamentally from thought experiments (Sect. 4); simulation is a special type of experiment (Sect. 5); simulations differ fundamentally from experiments (Sect. 6). In the discussion we ask how that discourse can be evaluated and how it might proceed (Sect. 7). Section 8 offers some concluding remarks.

We offer Humphreys’ (2004) refined definition of computer simulation as a point of orientation. He introduces the concepts of core simulation and full simulation:

System S provides a core simulation of an object or process B just in case S is a concrete computational device that produces, via a temporal process, solutions to a computational model … that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B. … When both a core simulation of some behaviour and a correct representation of the output from the core simulation are present, we have a full computer simulation of that behaviour. And similarly, mutatis mutandis, for a full simulation of a system. (Humphreys 2004, 110f)

This definition differentiates between the temporal process of the calculations (core simulation)—which takes time in any simulation run—and the importance of the temporal dimension in representing the target system. It implies a dynamic representation for dynamic behaviour and a static representation for static behaviour of the target system.
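To make the distinction concrete, consider a minimal sketch in Python (our illustration, not Humphreys’ example; the model and all names are hypothetical). A concrete computational device produces, via a temporal process, solutions to a computational model—this stepwise production corresponds to the core simulation. Interpreting the numeric output as claims about a target population would be the additional representational step of a full simulation.

```python
# Minimal sketch (illustrative, not from Humphreys): a computational
# model of population growth, solved step by step. The stepwise
# production of solutions over time is the 'core simulation'; mapping
# the numbers back onto claims about a real population would be the
# further representational step of a 'full simulation'.

def core_simulation(p0: float, r: float, steps: int) -> list[float]:
    """Produce, via a temporal process, solutions to the model
    p_{t+1} = p_t + r * p_t * (1 - p_t)  (discrete logistic growth)."""
    trajectory = [p0]
    for _ in range(steps):
        p = trajectory[-1]
        trajectory.append(p + r * p * (1.0 - p))
    return trajectory

if __name__ == "__main__":
    # Dynamic representation: each step stands for one breeding season.
    print(core_simulation(p0=0.1, r=0.5, steps=10))
```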

2 Major Issues of the Philosophy of Simulation

In the last decade the philosophy of simulation has developed as a new branch of the philosophy of science. Major issues of the philosophy of simulation have been the epistemology and methodology of computer simulations. In particular, controversies developed concerning three questions:

  1. Are computer simulations a novel activity sui generis, which means that they raise new philosophical questions?

  2. Where should computer simulations be located on the ‘usual methodological map’ (Galison 1996, 120), which distinguishes experiment from (mathematical) theory? Specifically, do simulations ultimately qualify as experiments or as thought experiments?

  3. (How) can computer simulations produce new knowledge and explain real-world processes?

The philosophical discourse on simulation has seen both advocates (see, e.g., Humphreys 2009) and opponents (e.g. Frigg and Reiss 2009) of the thesis that computer simulation constitutes a new method sui generis. Overall, this discourse has not been intense, while its complement, the debate concerning the location of computer simulations on the methodological map, has been passionate, and an end to it is not in sight. This article reviews the latter discourse because it plays a key role with respect to the other questions and because it connects the philosophy of simulation to the philosophy of the experiment and the philosophy of the thought experiment. Sections 3–6 concentrate on reviewing this passionate debate. In this debate four major positions have evolved: (1) simulation is a special type of thought experiment, (2) simulations differ fundamentally from thought experiments, (3) simulation is a special type of experiment, (4) simulations differ fundamentally from experiments.

3 Simulation as a Special Type of Thought Experiment

Positions which argue that simulation is a special type of thought experiment rely on the concept of the thought experiment. The philosophical discourse on thought experiments has generated three outstanding definitions of this concept; these are discussed before presenting the two distinct positions that qualify computer simulations either as formalized thought experiments or as opaque thought experiments.

3.1 The Concept of Thought Experiment

Cohnitz (2006, 96) has pointed out that the concept of the thought experiment has not yet been sufficiently explicated for a philosophy-of-science analysis. In particular, it is not clear whether the conclusion or some premises—which are not made explicit in advance—are elements of a thought experiment or not. Bearing this in mind, we assemble three outstanding definitions of the thought experiment.

The most extensive discussion of a definition of thought experiments in philosophy and science can be found in Sorensen (1992). He argues that thought experiments are limiting cases of experiments. They purport to deal with their questions by ‘contemplation of their design rather than by execution’ (Sorensen 1992, 6). From a logical point of view, thought experiments can be reduced to paradoxes, a paradox being a small set of individually plausible but jointly inconsistent propositions (Sorensen 1992, 122–131). Sorensen argues that it is rational to minimize inconsistency. Therefore, a paradox has the power of rational persuasion. Each paradox suggests a number of deductions. ‘Since the n members of a paradox are individually plausible but jointly inconsistent, n deductions can be derived by taking the negation of one member as a conclusion and the remaining members as premises. Each of the deductions will be valid and will have compelling premises’ (Sorensen 1992, 130). By revealing an inconsistency between scientific beliefs, thought experiments stimulate a scientific change of mind. Thought experiments also involve a storytelling setting: ‘Thought experiments are stories’ (Sorensen 1992, 6). They tend to be dragged into conventions for storytelling, e.g., the convention whereby aesthetic instincts have to be satisfied—they charm us into conviction—and complexity is avoided. They have many of the limitations typical of short stories. However, compared with stories they are almost always far briefer, sketchier and stranger (Sorensen 1992, 205–208, 264–266), which suggests approaching them as tiny stories (Sorensen 1992, 286).

Recently, two accounts of thought experimentation have been proposed, both based on an intense discussion of well-known earlier accounts. Cooper (2005) has examined the accounts of Kuhn (1964), Norton (1991, 1996), Brown (1991), Sorensen (1992), Gooding (1990) and McAllister (1996) and argues that they are inadequate. For instance, Norton and Sorensen are said to neglect thought experiments that do not have a propositional form. Cooper states that in her account the form of the model is unconstrained—allowing for different epistemological perspectives, like the argument view, Platonic thought experiments, and mental models (see Sect. 3.2 below). According to Cooper (2005, 336), a thought experiment attempts to construct models of possible worlds. Such a model either constructs or represents a possible world. Strictly speaking, the model will not produce a single possible world, but rather a template for an infinite number of possible worlds. During a thought experiment a series of ‘what if’ questions is asked, and the “thought experimenter is committed to rigorously considering all relevant consequences in answering the ‘what if’ questions” (Cooper 2005, 337). This involves a manipulation of the thought experimenter’s worldview. The result of the manipulations is either a consistent model or a contradiction.

Roux (2011, 3) is motivated by the trivialized use of the concept of the thought experiment that has now become the rule. She reconstructs the emergence of the notion of the thought experiment as a succession of misunderstandings and omissions of the works by Ørsted (1811), Mach (1893, 1905) and Einstein (1918–1921; more references are given in Roux 2011), and she elaborates three characteristics of her concept of the thought experiment: Thought experiments are counterfactual because they are carried out in thought; they involve a concrete scenario—vivid specific cases—that may attract our attention in an aesthetic way and engage our imagination, while including particulars that are irrelevant to the outcome; and they have a well-determined cognitive intention, e.g., answering a question, raising a problem, testing a claim by highlighting some paradoxes, or giving proof of a previously unknown result.

3.2 Simulations as Formalized Thought Experiments

Early on, computer simulations were described as formalized thought experiments and valued for their deductive power (Ziegler 1972, 14f., 89; see also Coleman 1964, 529). However, these pioneers neither explicated the concept of the thought experiment nor referred to a distinct methodology of thought experiments.

Meanwhile, the philosophy of simulation has been associated with the philosophical discourse on thought experiments that was established in the early 1990s (see Horowitz and Massey 1991). In that discourse, Norton (1991, 1996) has argued that thought experiments are simply arguments: They ‘posit hypothetical or counterfactual states of affairs, and … invoke particulars irrelevant to the generality of the conclusion’ (Norton 1991, 129). He has put forward the hypothesis that each thought experiment can be reconstructed as an argument such that the epistemic power of the thought experiment is that of the argument. To run a thought experiment is to execute an argument (Norton 1996, 354). Brown (1991, 2004) has objected that not every thought experiment can be reconstructed as an argument. There are what he calls Platonic thought experiments. Any such thought experiment ‘destroys an old or existing theory and simultaneously generates a new one; it is a priori (emphasis J.R.B.) in that it is not based on new empirical evidence nor is it merely logically derived from old data; and it is an advance in that the resulting theory is better than the predecessor theory’ (Brown 1991, 77). Nersessian (1992) objects to both, conceptualizing thought experimenting as mental modelling. She proposes that thought experimenting is a form of ‘simulative model-based reasoning’ (Nersessian 1992, 291f.). When thought experimenters reason, they manipulate mental models of the situation depicted in the thought experimental narrative. However, mental models do not involve a system of propositions: ‘A mental model is a non-propositional form’ (Nersessian 1992, 293).

Beisbart (2012) has established an argument view concerning computer simulations while remaining neutral about Norton’s argument concerning thought experiments. In particular, he has put forward the hypothesis that each computer simulation can be reconstructed as an argument such that the epistemic power of the computer simulation is that of the argument. To run a computer simulation is to execute an argument (Beisbart 2012, 403; please note that limitation theorems show that not all computations can be reduced to arguments). In Beisbart (2012), he reconstructs an example of a deterministic computer simulation model, while in Beisbart and Norton (2012) a probabilistic model and Monte Carlo simulations are considered. To reconstruct a computer simulation as an argument is not a trivial task, as instructions and commands in the programme code do not support each other in the way premises support an argument’s conclusion in the deductive-nomological (D–N) model of explanation. In addition, when calculating, the computer does not follow the implemented algorithm exactly: Any computer produces roundoff errors, and approximation errors will result from solving difference equations instead of differential equations. Given the premises P₁ to Pₙ—implemented as instructions and commands in the programme code—a single run of a computer simulation will produce the result (or conclusion) Co. Beisbart (2012) reconstructs this run of the simulation program as an argument that runs from P₁, …, Pₙ to Co: If P₁ and … and Pₙ hold true, then Co obtains, with output Co taking approximately the values that the algorithm would yield if it were executed accurately. Relying on the extended mind hypothesis (Clark and Chalmers 1998), Beisbart finally argues that ‘running a computer simulation can be seen as a process in which a coupled system [of scientist and computer, N.J.S.] reasons through the reconstructing argument’ (Beisbart 2012, 422).
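What such a reconstruction amounts to can be pictured with a minimal sketch (our example, not Beisbart’s): the premises P₁ to Pₙ appear as the model equation, the parameter values, the initial condition and the step size; a single run derives the conclusion Co, which—because a difference equation is solved in floating-point arithmetic—only approximately matches the value an exactly executed argument would yield.

```python
import math

# Illustrative sketch of the argument view (not Beisbart's own example):
# the premises P1..Pn are encoded as model equation, parameters, initial
# condition and step size; one run derives the conclusion Co. Because
# the computer solves a difference equation in floating-point
# arithmetic, Co only approximately equals the value the exact argument
# (here: y(t) = y0 * exp(-k*t)) would yield.

def run_simulation(y0: float, k: float, t_end: float, dt: float) -> float:
    """Explicit Euler for dy/dt = -k*y; returns the 'conclusion' Co."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-k * y)   # the difference equation replaces the ODE
        t += dt              # floating-point time accumulates roundoff
    return y

co = run_simulation(y0=1.0, k=0.3, t_end=10.0, dt=0.01)
exact = 1.0 * math.exp(-0.3 * 10.0)
print(f"Co = {co:.6f}, exact conclusion = {exact:.6f}, "
      f"approximation error = {abs(co - exact):.2e}")
```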

Though relying heavily on the philosophy of thought experiments, Beisbart has bracketed the ontological question—the individuation of computer simulations. He introduces the argument view for epistemological purposes (Beisbart 2012, 403, 416). He compares computer simulations and thought experiments, assuming that both are arguments, with respect to their content, their form and the use that is made of them (Beisbart 2012, 425–428): Both involve concrete scenarios and both refer to real or imagined target systems (or sometimes classes of target systems). Yet, simulations are much more specific in content and in the conclusions they reach. Both are deductive arguments, and both sometimes depend on hidden premises that have to be discovered. Yet, simulations include much longer and more complicated arguments; they take the form of calculations and they involve approximations. Like some computer-aided proofs, computer simulations are no longer surveyable. While Kuhn (1964) has stressed the potential of thought experiments to initiate conceptual change, Beisbart (2011, 260f.) argues that computer simulations instead make a contribution to normal science.

Recently, the thesis that simulations are formalized thought experiments has been challenged. It has been argued that they are opaque thought experiments.

3.3 Simulations as Opaque Thought Experiments

While Coleman (1964) and Ziegler (1972) valued computer simulations for their deductive power, a new generation of scientists has put forward a hypothesis that is marked by disillusionment.

Di Paolo et al. (2000) argue that simulations are opaque thought experiments in which the consequences follow from the premises, but in a non-obvious manner. Since simulation models are complex, the behaviour of such a model cannot be understood by simple inspection. Instead, suitable tools have to be chosen in order to achieve an adequate understanding. They criticize the practice of considering non-obvious patterns or entities as ‘emergent’. Non-obvious patterns require explanation, just as obvious patterns do. The retreat to the concept of emergence and the renunciation of explanation would amount to an admission of failure.

Humphreys (2004, 147–151) argues that simulations are epistemically opaque because most steps in the process are not open to direct inspection and verification. He presents two sources of opacity: Computational processes are too fast for scientists to follow them in detail, and in some simulation models an explicit algorithm linking the initial input values with the final output values cannot even be given. Computational science, on this view, fundamentally depends on computational speed. Therefore, he ends with a daring conclusion: ‘Because these constraints cannot be circumvented by humans, we must abandon the insistence on epistemic transparency for computational science’ (Humphreys 2004, 150). He argues that the notion that is to replace epistemic transparency still has to be developed.

Beisbart (2012, 427ff.) compares computer simulations with computationally assisted proofs of theorems, where the concept of surveyability has been introduced. A proof counts as surveyable ‘if it can be looked over, reviewed, verified by a rational agent’ (Tymoczko 1979, 59). He concludes that ‘computer simulations are arguments that are not consciously followed and that are not often surveyable. The question is how much and what kind of understanding we can obtain in this way’ (Beisbart 2012, 429).

Referring to the complexity of computer simulation models, Kuorikoski (2012, 168) has raised the provocative question of whether replacing an unintelligible phenomenon with an unintelligible model constitutes epistemic progress. He warns that there is a risk involved in claiming to understand computer simulations: the danger of the illusion of understanding. This illusion is based, among other things, on (1) the modeller’s knowledge of all the assumptions implemented in the simulation model; (2) the ease with which simulation results can be visualized; and (3) the possibility of experimentally varying each initial variable and model parameter. The result is a psychological metacognitive phenomenon—a sense of understanding, the feeling of having grasped something, which is only a fallible indicator of true understanding. This sense of understanding must not be conflated with understanding proper, in the same way as the understanding of components must not be conflated with ‘understanding of the process, visualization with insight, manipulability with knowledge of mechanisms and ability to build something with knowledge of principles of its operation’ (Kuorikoski 2012, 183).

4 Difference in Principle Between Simulation and Thought Experiment

Opacity is of major importance for Lenhard (2011) as well. However, whereas Di Paolo et al. (2000) argue that simulations are opaque thought experiments, Lenhard (2011) introduces epistemic transparency as a condition of thought experiments. As a result, the epistemic opacity of simulation models constitutes a difference in principle between simulation and thought experiments. In keeping with this definition, simulation cannot be regarded as a special type of thought experiment.

Lenhard (2011) presents an innovative approach to opacity. He argues that both simulation models and thought experiments use iterations. Thought experimenters test repeatedly and in an explorative way which premises may lead to a conclusion, while the context is kept constant until the goal is reached: a continuous, uninterrupted line of argument without contradiction. In simulations—beyond that—model runs are repeated while the context is varied systematically. Thus, Lenhard’s (2011) concept of iteration does not refer to the recursions typical of numeric solutions of equations. In the end, the iterative mode in thought experiments renders any further iteration unnecessary. Departing from an initial state of non-transparency and ambiguity, the process converges through step-by-step confirmation to a single line of argument of the thought experiment (iteration mode ‘convergence to one path’). During this process, high standards of epistemic transparency apply. If the thought experiment is ultimately accepted, the iterations will have removed the non-transparency. ‘Nur als epistemisch transparentes Experiment kann es sedimentieren’ [only as an epistemically transparent experiment can the thought experiment sediment; trans. N.J.S.] (Lenhard 2011, 136). In computer simulations the iterative mode remains structurally necessary, as computer simulations do not remove opacity but only compensate for it. Computer simulations are opaque because such a high number of steps is involved that the overall process is no longer surveyable (Lenhard 2011, 136). Algorithmic transparency thus goes hand in hand with epistemic opacity. In series of simulation experiments, epistemic transparency is replaced by step-by-step exploration. The results are landscapes, systematic collections of individual results (iteration mode ‘exhaust possibilities’). However, no insight into an overall behaviour arises—a thought experiment cannot be reconstructed from the landscapes (Lenhard 2011, 138).
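The iteration mode ‘exhaust possibilities’ can be pictured by a parameter sweep (a hedged sketch; the mapping to code is ours, not Lenhard’s): the same model is run repeatedly while one context parameter is varied systematically, and the collected individual results form a landscape rather than a single line of argument.

```python
# Hedged sketch of the 'exhaust possibilities' mode (our illustration,
# not Lenhard's): the same model is run repeatedly while the context
# parameter r is varied systematically; the collected results form a
# 'landscape' of individual outcomes, not one line of argument.

def model(r: float, steps: int = 50, p0: float = 0.1) -> float:
    """One logistic-map run: returns the final state for growth rate r."""
    p = p0
    for _ in range(steps):
        p = r * p * (1.0 - p)
    return p

# Systematic variation of the context parameter r from 2.0 to 3.9.
landscape = {round(r / 10, 1): model(r / 10) for r in range(20, 40)}
for r, outcome in landscape.items():
    print(f"r = {r}: final state = {outcome:.4f}")
```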

The thesis concerning the difference in principle between simulation and thought experiment emphasizes the advantages of the thought experiment. However, it can also be used as a justification for simulations: simulations advance where thought experiments fail (see also Humphreys 2004, 115). If the step-by-step process does not converge to one path, simulations can nevertheless combine individual results. They are used where the condition of epistemic transparency cannot be fulfilled (Lenhard 2011).

5 Simulation as a Special Type of Experiment

Can simulations be a special type of experiment? The standard textbook by Gilbert and Troitzsch (1999) emphasizes that the methodology of the experiment is similar to that of simulations. Simulation and experiment are, however, not the same: ‘The major difference is that while in an experiment, one is controlling the actual object of interest …, in a simulation one is experimenting with a model rather than the phenomenon itself’ (Gilbert and Troitzsch 1999, 13).

5.1 The Concept of Experiment

Experiments involve manipulating one or more independent variables and observing the effect(s) on a particular outcome—the dependent variable. Basic elements of this definition are intervention and the observation of changes in the behaviour of the system that is interfered with, under at least partially controlled conditions (see, e.g., Zimmermann 1972, 37; Parker 2009, 487). This definition has been characterized as the ‘old image of experiment’ (Morgan 2003, 217) since Hacking (1983, 230) pointed out that ‘to experiment is to create, produce, refine and stabilize phenomena’ and recognized that phenomena are hard to produce in any stable way—which also became the basic insight of New Experimentalism.

There are several moderate positions which elaborate commonalities and differences between simulations and experiments. According to Norton and Suppe (2001, 92), simulation models are ‘just another form of experimentation’. Barberousse et al. (2009) state that computer simulations are often used as experiments and yield data about target systems. They argue from an epistemological point of view and provide a step-by-step analysis of the semantic levels of simulation models. They conclude that the physicality or materiality of the computer is not decisive in accounting for the explanatory power of simulations. Winsberg (2009) emphasizes that simulations produce results that resemble the data generated in experiments. Simulations use many, if not all, of the common sense techniques that experiments also use to sanction their results (Winsberg 2003). Without referring to experiments, Krohs (2008) argues that simulations are a priori measurable worlds. Morrison (2009) has put forward an even bolder hypothesis and placed it in the context of experiments. She argues that the results of some simulations may be characterized as measurements, not simply as calculations, and that simulations can attain an epistemic status comparable to laboratory experimentation. The main reason for this is that models play an important role in both simulations and experiments. Models can function as measuring instruments. In opposition to this hypothesis, Beisbart (2011, 65–72) has objected that the results of computer simulations are over-controlled—they are determined substantially by the computer programme and the input—which is not true in the same way for the results of experiments: ‘There is no space left for an answer by nature’ (Beisbart 2011, 67).

Hacking (1983) has introduced the hypothesis that experiments have a life of their own: ‘I think of experiments as having a life: maturing, evolving, adapting, being not only recycled, but quite literally, being retooled’ (Hacking 1992, 307). Thought experiments do not allow for intervention in the real world; experiments, however, do. Winsberg (2003) and Lenhard (2011) plead for attributing a kind of life of their own to simulations as well. While Lenhard (2011, 132) emphasizes that the complexity of simulation models causes us to treat them as unknown research objects, Winsberg (2003, 121f.) considers the whole process of model building. Activities, practices and assumptions carry with them their own history of prior successes and accomplishments in the development of a certain simulation model. They mature, evolve and are retooled. A tradition of using these activities, practices and assumptions is thus established, one that affects future processes of model building.

Apart from these positions, there have been four approaches to describing simulations as a special type of experiment: as experiments without material intervention, as material experiments, as experiments on theories, and as modelled experiments.

5.2 Simulations as Experiments Without Material Intervention

Morgan (2003) views the depiction of experiments as involving manipulations of elements in the material world under conditions of control as a stereotype and ‘old image’ (Morgan 2003, 217). She is concerned with modern hybrid forms that involve elements of non-materiality either in their objects or in their interventions, and she is prepared to stretch the notion of what counts as material in experiments. One of these hybrid forms is the computer simulation, which she describes as a virtual experiment: a nonmaterial experiment which may involve a kind of mimicking of material objects.

Moreover, Morgan (2003, 220) provides a complementary view on opacity, though she does not refer to the term explicitly. She states that in material experiments ignorance may prevent scientists from explaining why a particular set of surprising results occurs. Instead, scientists may be confounded, which suggests that material experiments have a potentially greater epistemological power than nonmaterial ones. In experiments without material intervention that are based on mathematical models, by contrast, surprising results can be explained, if only later on. This observation does not relativize the opacity hypothesis; it does, however, recognize that opacity is not restricted to computer simulation but is of more general importance.

5.3 Simulations as Material Experiments

Contrary to Guala (2002) and Morgan (2003), Parker (2009) has described computer simulations as material experiments in a straightforward sense. A computer simulation study qualifies as an experiment. The system interfered with is a programmed digital computer. A computer simulation study is thus first and foremost an experiment on a real physical/material system. In such a study, scientists learn mainly about the behaviour of the programmed computer, including all the roundoff errors and approximation errors which the computer may generate. Parker (2009) characterizes an experiment as an ‘investigative activity that involves intervening in a system in order to see how properties of interest of the system change, if at all, in the light of that intervention. An intervention is … an action intended to put a system into a particular state, and that does put the system into a particular state, though perhaps not the one intended’ (Parker 2009, 487). The intervention occurs under at least partially controlled conditions. Parker makes explicit that she proposes a definition of the experiment without mentioning a target system. Following Hartmann (1996), she defines a computer simulation as a time-ordered sequence of states that serves as a representation of some other time-ordered sequence of states. Both definitions imply that there is a fundamental difference between a simulation and an experiment: ‘While a simulation is a type of representation … an experiment is an investigative activity involving intervention’ (Parker 2009, 487; emphasis W.S.P.). Thus, simulations do not qualify as experiments. Computer simulation studies, however, comprise the broader activity and therefore do qualify as experiments.
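Parker’s (and Hartmann’s) definition can be sketched as follows (an illustrative reading on our part; the cooling example and all names are hypothetical): the simulation is a time-ordered sequence of computer states that represents another time-ordered sequence of states, while intervening on the programmed computer means changing parameters and re-running.

```python
from dataclasses import dataclass

# Illustrative sketch (our reading, not Parker's example): the
# simulation *is* the time-ordered sequence of computer states, which
# represents another time-ordered sequence of states—here,
# hypothetically, the temperature of a cooling cup of coffee.

@dataclass
class State:
    t: float            # simulated time (minutes)
    temperature: float  # represented quantity (degrees Celsius)

def simulate_cooling(t0: float, ambient: float, k: float,
                     dt: float, steps: int) -> list[State]:
    """Newton's law of cooling, stepped forward in time: the returned
    list is the time-ordered sequence of states."""
    states = [State(0.0, t0)]
    for i in range(steps):
        prev = states[-1]
        temp = prev.temperature + dt * (-k * (prev.temperature - ambient))
        states.append(State((i + 1) * dt, temp))
    return states

# Intervening on the programmed computer: change k or dt and re-run.
# Observing: inspect the stored sequence of states.
for s in simulate_cooling(t0=90.0, ambient=20.0, k=0.1, dt=1.0, steps=5):
    print(f"t = {s.t:4.1f} min, T = {s.temperature:.2f} °C")
```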

Beisbart (2011, 60–65) has rejected the concept of simulations as material experiments. He argues that a simulation scientist is not normally interested in the hardware and that she does not want to learn about the hardware of the computer. Neither the hypotheses of the researcher nor her observations refer to the hardware.

5.4 Simulations as Experiments on Theories

From an epistemological point of view, Dowling (1999, 261) has put forward the hypothesis that ‘a scientist running a computer simulation performs an experiment upon a theory.’ In contrast to the authors mentioned above, she bases her thesis on empirical research. She has interviewed 35 scientists from various disciplines who use simulations in their research, and she has analysed textbooks and other scientific publications in order to find out how scientists working with simulation reconstruct their research as a theoretical or experimental activity. She finds that simulation is often constructed as both theoretical and experimental. The constructions are context-sensitive and sometimes serve a social function. If, for instance, theoretical work was related to a higher status or was intended to support a grant application, then these scientists linked simulation with theory. If analogies between the output of a simulation and the predicted behaviour of a target system were deemed helpful, they linked simulation with experiment. Most fascinating of all, the same scientists reconstructed simulation as both theoretical and experimental, depending on the context. Dowling’s conclusion is that simulation should be characterized as a hybrid activity: “Simulation is thus presented as a hybrid of traditional scientific practices, facilitating ‘experiments’ on ‘theories’” (Dowling 1999, 265), and she derives the hypothesis presented above. She then corroborates her thesis by reference to several as-if statements: In simulations, researchers interact with theories as if they were entities that could be adjusted, observed, and measured (Dowling 1999, 267). The scientist can interact with the simulation as if it were a real target, drawing on the physical skills of recognition and reaction (Dowling 1999, 269). On the one hand, the simulation programme is presented as a black box, as an ‘opaque, unpredictable entity’ (Dowling 1999, 265). On the other hand, the technology can be characterized as a transparent calculating machine. “By combining an analytical grasp of a mathematical model with the ability to temporarily ‘black box’ the digital manipulation of that model, the technique of simulation allows creative and experimental ‘playing around’ with an otherwise impenetrable set of equations, to notice its quirks or unexpected outcomes” (Dowling 1999, 271). She concludes that simulation constitutes a ‘significantly novel, and highly productive mode of scientific work’ (Dowling 1999, 271).

In opposition to Dowling it has been argued that, strictly speaking, theories (and most models) can neither be observed nor causally interfered with (Parker 2009, 489). Beisbart (2011, 73f.) examines whether the statement might then be interpreted in a metaphorical sense. In an attenuated sense, however, not only simulations but also traditional pencil-and-paper calculations manipulate (‘intervene’ with) equations and observe the results. Not even in this attenuated sense could scientists claim that by doing so they are running an experiment. Thus, even a metaphorical interpretation of the statement would be misleading.

5.5 Simulations as Modelled Experiments

Beisbart (2011, 74–80) has proposed describing simulations as modelled experiments: ‘More precisely, a simulation of target X is a model of a possible experiment on X’ (Beisbart 2011, 74). He claims that intervention and observation are modelled in a simulation. Quasi-intervention and quasi-observation in simulations mirror intervention and observation in experiments. On the one hand, this conceptualization recognizes the similarities between simulation and experiment. On the other hand, differences are also recognized, and dubious statements are avoided, such as the claims that the target of the simulation is being intervened on or that it is being observed. He argues that this proposal is compatible with the view that simulations function in the same way as thought experiments do (see Sect. 3.2 above). Overall, however, on the methodological map, simulations should be attributed not to the experimental pillar but rather to the theoretical pillar of science (Beisbart 2011, 255, 260).

6 Difference in Principle Between Simulation and Experiment

Winsberg (2009, 578) rejects Gilbert and Troitzsch’s (1999) statement that in a simulation one is experimenting with a model rather than with the phenomenon itself. He argues that Gilbert and Troitzsch’s models are abstract entities, and one cannot experiment with them. On this view, there is a difference in principle between simulation and experiment. At present, there are two approaches which support this thesis.

Guala (2002, 67) and Morgan (2003) argue that experiments are characterized by a ‘deep’, material similarity between target and object. In simulations, the similarity is merely abstract and formal. In a genuine experiment the same material causes as those in the target system are at work, while in a simulation these causes are altogether different. If, for instance, participants are manipulated in an experiment, the same material causes are at work as in everyday life, while in a simulation only those causes considered to be relevant can be, and are, modelled. As opposed to Morgan (2003, see above), Guala (2002) introduces material similarity as a condition for experiments. According to Guala, the lack of material similarity in simulations establishes an ontological difference between simulation and experiment.

Winsberg (2009, 580f.) criticizes the concept of material similarity as too weak and the notion of formal similarity as too vague. He argues that any two systems have some material similarities with one another and some differences. Moreover, the relevant similarity might be either a material or a formal one, and investigators may never be sure which of the two has been established successfully. Winsberg (2009) puts forward the hypothesis that what fundamentally distinguishes simulations from experiments is the nature of the background knowledge by which researchers justify their belief that the object can stand in for the target. In an experiment, the background knowledge may be the belief that the object and the target have the same material composition. In a simulation, however, the background knowledge is related to the principles of model building: ‘In simulation, the argument that the object can be used to stand in for the target—that their behaviors can be counted on to be relevantly similar—is supported by, or grounded in, certain aspects of model building practice’ (Winsberg 2009, 586). He illustrates three kinds of background knowledge that scientists working with simulation bring to the construction of models in computational fluid dynamics: theory of fluids, physical intuition, and well-established computational tricks. Winsberg states that such model-building principles can be identified in any discipline one is inclined to study, and that there may be more than three kinds of background knowledge (Winsberg 2009, 587).

7 Discussion

Recently, El Skaf and Imbert (2012) described the state of the art in the philosophy of simulation as a battlefield. In this section, we will discuss answers to two questions: (1) How can the discourse be evaluated? (2) How could the discourse proceed?

7.1 Driving Other Debates

El Skaf and Imbert’s (2012) metaphor of the battlefield has a negative connotation. They tend to evaluate the whole discourse as fruitless, basing this evaluation on the incompatible conceptions of the thought experiment and the experiment found in the overall discourse. In their conclusion they qualify computer simulation as a hybrid form of science instead of committing to statements about the identity of computer simulation. A similar position is taken by Frigg and Reiss (2009). They claim that simulation is “a Sui Generis activity that lies ‘in between’ theorizing and experimentation” (Frigg and Reiss 2009, 595). The opposite position is taken, for example, by Humphreys (2004), Küppers and Lenhard (2005) and Galison (1996): simulation is seen as a ‘tertium quid’ vis-à-vis theory and experiment. This article takes these two opposite positions as a starting point in order to argue that the discourse on the location of simulations on the methodological map has hardly been fruitless. Instead, it has provoked many other debates, epistemological and methodological, in particular:

  • Third mode of science. Are computer simulations a novel activity sui generis, which means that they raise new philosophical questions? This question has been addressed by departing from the analysis of the opposition between (mathematical) theory and experiment (for references, see Sect. 2).

  • Epistemology of simulation. (How) can computer simulations produce new knowledge and explain real-world processes? (e.g. Beisbart 2012).

  • Uses of simulations. Which activities are characteristic of simulation studies? Which of these activities are similar to, and which differ from, activities typical of experimentation and thought experimentation? Philosophers of science (e.g. Tal 2011) and practitioners of simulation (e.g. Elsenbroich and Gilbert 2014, chap. 1.3) reflect on how simulation models are actually used.

  • Feedback to the philosophy of the thought experiment. What are the basic features of the thought experiment? (e.g. Lenhard 2011) Are simulations the end of thought experimenting in science? (e.g. Chandrasekharan et al. 2013).

  • Feedback to the philosophy of the experiment. How does experimentation change? (e.g., Morrison 2009).

This list includes only the most obvious fields that have felt the impact of that discourse, together with some illustrative examples. Unfortunately, a comprehensive overview of these fields cannot be given in this article (but see, e.g., Grüne-Yanoff and Weirich’s 2010 review of the epistemology of simulation).

How is this discourse likely to proceed? Modern philosophy lacks a firm foundation upon which such controversies could be decided. Consequently, no end to this discourse is in sight. And considering the demonstrated heuristic function of the debate, no end is necessary; the debate should go on. Instead of concentrating on single positions that might be further elaborated, this article focuses on a more general question: which questions have yet to be addressed?

7.2 Considering Richer Methodological Maps

What is striking from the point of view of a social scientist is that the ‘usual methodological map’ that Galison (1996, 120) refers to comes from the natural sciences, particularly from physics. That map, however, looks quite different from the perspective of the social scientist. Unlike in physics, (mathematical) theory and (empirical) experiment are not the cornerstones of the social sciences’ methodological map. This article uses sociology as a discipline to illustrate the argument that simulation in the social sciences is located on a different methodological map than simulation in the natural sciences.

If a sociologist refers to her methodological map, she will refer neither to the experiment nor to (mathematical) theory. On the one hand, with respect to (mathematical) theory, we have to recognize that social theories do not lend themselves to formal descriptions—rational choice theory being the only exception. On the other hand, the experiment is limited in scope and importance for ethical and practical reasons. A crucial concern is that small-scale experiments (e.g. on individuals’ norm adherence), both in the lab and in the field, are not generalizable to other social conditions and thus do not contribute to answering large-scale research questions (such as the stability of social order)—yet sociology is mainly concerned with aggregate states, dynamics, and outcomes. An indicator of the rather marginal role of the experiment in sociology is the amount of attention it receives in textbooks. For example, in the standard international textbook on methods of empirical social research, Bryman (2012) dedicates only four out of 766 pages to the field experiment and the laboratory experiment.

Basically, the methodological map of sociology is more complex and includes methods that are unique to the social sciences. One such method is the interview: researchers can question their objects of research, asking individuals to answer the social scientists’ questions. While economists tend to ignore this method of inquiry, sociologists have developed a large variety of interview types and use them intensively. On a methodological map of sociology, predominantly verbal theories will represent one cornerstone, while the other, the empirical corner, is somewhat ‘overcrowded’ (in comparison to physics; see Fig. 1). That corner is split into a quantitative and a qualitative camp. Each camp hosts diverse methods of empirical social research, such as surveys and qualitative interviews, or standardized and participant observation. On this methodological map, computer simulation may be perceived as being located in the quantitative camp or ‘in between’ verbal theory and the quantitative-empirical methods. With reference to the methodological map of sociology, neither the thesis of simulation as a ‘tertium quid’ vis-à-vis theory and experiment nor the thesis of simulation as a hybrid form makes sense any longer.

Fig. 1 The methodological maps of physics and of sociology

There are many more differing methodological maps; take, for example, the methodological maps of archaeology or of psychology. What do we learn from repositioning computer simulations on these diverse maps? First of all, we are led to recognize that computer simulations should not be overestimated. Only on a Spartan methodological map, such as that of physics, can computer simulations be perceived as a ‘tertium quid’ or hybrid form. With respect to richer methodological maps, different questions have to be asked. This article therefore urges widening the debate to include new positions derived from locating computer simulations on richer methodological maps.

7.3 From Simulation to Simulating

The present discourse theorizes simulation largely without considering the simulating scientist. It can seem as if computer simulations were done by the computer alone. This is true for the calculations, but it ignores the contribution of the simulating scientist to the research. One may object that this statement ignores the extended mind hypothesis (Clark and Chalmers 1998). We argue, on the contrary, that the statement holds even under the extended mind hypothesis: even in the concept of a coupled system of scientist and computer that reasons through the reconstructing argument, human activity is limited to executing arguments—the same activity that is attributed to the computer.

This article recommends opening the epistemic-methodological discourse on simulations to the activity of simulating. What is simulating? It suggests connecting the philosophy of computer simulations to the philosophy of play. Play is an ‘essential element of man’s ontological makeup, a basic existential phenomenon’ (Fink et al. 1968, 19), ‘just as primordial and autonomous as death, love, work and struggle for power’ (Fink et al. 1968, 22). This article suggests investigating the claim that scientists may slide into the mode of play while working on their simulation models. Will simulating ultimately qualify as playful investigating? This approach has the potential to bring together the philosophy of play (MacLean et al. 2015; Ryall et al. 2013), the philosophy of computer simulation, the philosophy of simulation games (Crookall 2011) and the philosophy of computer games (Sageng et al. 2012). The latter are much more interested in the activities of humans than the philosophy of computer simulation is. Such a new direction in the discourse on the epistemology and methodology of simulation will illuminate not only simulating as an activity but also the nature of the simulating scientist’s experience. It will at the same time introduce constructivist thinking into the hitherto positivist philosophy of simulation. Finally, this new approach can be expected to feed back into the philosophy of play, the philosophy of simulation games and the philosophy of computer games.

8 Conclusions

Computer simulation is a young method on the methodological map of both the natural sciences and the social sciences. This article has reviewed the passionate debate on the epistemology and methodology of simulation, a debate that has stimulated several major research fields in the philosophy of simulation as well as in the philosophy of the experiment and the philosophy of the thought experiment. With respect to the ongoing discourse, this article suggests opening the debate to allow for new positions derived from locating computer simulations on richer methodological maps and from turning from simulations to the activity of simulating.