In the last 15 years there has been a good deal of attention focused on the idea of mechanism as a central organizing principle for understanding both ontological and explanatory questions in the sciences. A group of philosophers of science often called the New Mechanists have argued that most of the phenomena that scientists seek to explain are the product of the operation of mechanisms, where mechanisms are understood as collections of objects (the parts of the mechanism) that are organized in such a way as to be productive of the phenomena in question.

Our goal in this paper is to explore how well this neo-mechanistic worldview coheres with the picture of nature that we derive from quantum mechanics. There is prima facie reason to be concerned that the two pictures do not fit well together. The neo-mechanists suppose that mechanisms are composed of objects with definite properties, where these objects are connected to each other via local causal interactions. Quantum mechanics (QM) calls into question whether there are really such things as objects with definite properties and whether causal relations can be understood in terms of local interactions between such objects. Moreover, mechanisms are hierarchical in the sense that the parts of mechanisms may themselves be complex objects composed of subparts which are components of lower level mechanisms. It seems then that even complex macroscopic mechanisms must supervene on a set of “objects” that behave non-classically. This dependence upon a non-classical micro-level might seem to infect the ontological and even explanatory claims of the New Mechanists.

Our judgment is that a more careful description of the relationship between mechanisms and quantum mechanical phenomena will show that these concerns are ill founded. Despite real differences between quantum mechanical and classical ontologies, we shall argue that the phenomenon of quantum decoherence accounts for the emergence of classical objects and properties, and that these objects and properties provide an appropriate ontological grounding for mechanistic explanation. Furthermore, notwithstanding the differences between quantum mechanical and classical ontologies, there are important analogies between some quantum mechanical explanations and classical mechanistic explanations that allow us to legitimately speak of quantum-mechanical mechanisms.

Our paper will proceed as follows: We begin in part I with a characterization of the ontological claims of the New Mechanists. In part II, we summarize important non-classical features of QM and discuss how these are plausibly interpreted as posing a threat to the New Mechanist agenda. In part III, we examine the theory of quantum decoherence, a kind of analysis of quantum mechanical systems which provides insight into why many macroscopic systems behave classically. In part IV, we discuss how decoherence can be invoked to defuse the worries raised in part II—thus providing ontological and explanatory legitimacy to mechanistic explanations of classically behaving systems. Finally, in part V, we consider how to apply mechanistic strategies to the explanation of systems that do not behave classically.

1 The ontological claims of the new mechanists

Beginning in the 1990s, and especially since the publication of the widely cited paper “Thinking about Mechanisms” (Machamer et al. 2000), philosophers of science have paid an increasing amount of attention to the concept of mechanism and its role in scientific inquiry. The New Mechanists have argued that many scientists across a range of disciplines understand their work as involving the discovery and description of mechanisms responsible for the production of the phenomena that they study. It is often pointed out that an account of scientific activity that gives centrality to the concept of mechanism is more descriptively adequate in most sciences than one that focuses on scientific laws; accordingly, the New Mechanists argue that activities such as model and theory construction, testing, explanation and prediction must be recast in a mechanistic light.

It is a challenge to clearly identify the ontological claims of the New Mechanists. There are terminological and sometimes substantive disagreements among the New Mechanists, and their ontological positions have shifted somewhat since their first publications (Bechtel and Richardson 1993; Glennan 1996; Machamer et al. 2000). A detailed discussion of these disagreements and shifts is beyond the scope of this paper. We will limit ourselves to a description of the points about which the New Mechanists (as expressed in their recent work) appear to be in agreement, together with our own elaboration of what we take to be the most plausible ontological picture that is consistent with this consensus position.

One of the tasks that have occupied the New Mechanists is to give a concise but informative characterization of what a mechanism is (see, e.g., Bechtel and Abrahamsen 2005; Glennan 1996, 2002; Machamer et al. 2000). One way to summarize the mechanistic consensus is this: Mechanisms consist of parts (entities, components) that are so organized that the activities and interactions of these entities are productive of a phenomenon.

There are several features of the consensus that are relevant to understanding the ontological presuppositions of the New Mechanists. First, mechanisms consist of discrete parts. As Glennan (2008, 378) writes, “By calling parts ‘entities’ or ‘objects,’ mechanists suggest that parts have properties that are relatively stable over time and that at least theoretically these parts are subject to manipulation and isolation from the rest of the mechanism.” While parts are taken to be real things as opposed to explanatory constructs, the New Mechanists agree that there is an inherent perspectivalism in the process of identifying and individuating parts: mechanisms are always mechanisms for some phenomenon or behavior (Glennan 1996; Darden 2008; Craver 2013). A single system that exhibits a variety of behaviors may be decomposed in different ways depending upon what mechanism-dependent phenomena one begins with. For instance, if the phenomenon in question is the range of motion of primate limbs, the parts would include such things as bones, muscles and ligaments. If, on the other hand, the phenomenon in question concerns coordination and control of limbs, there would be a different division—including, for instance, sensory and motor neurons, parts whose boundaries cross-cut the boundaries of the parts upon which range-of-motion phenomena depend. This is in part an explanatory point, but the New Mechanists accept a broadly realist account of explanation in which these decompositions are explanatory because they refer to real features of the world.

A second feature of the New Mechanist consensus is the idea that phenomena exhibited by the mechanisms are produced by the activities and interactions of parts. The terms ‘activity’, ‘interaction’, and ‘produce’ are all transparently causal. If the activities and interactions are not genuinely causal, then mechanisms can’t produce anything. Mechanistic explanation is a species of causal explanation, and the legitimacy of mechanistic explanation depends upon the interactions between parts being genuinely causal.

It appears at first glance that there are serious substantive disagreements among the mechanists about just what activities and interactions are. Much of this appearance is due to Machamer et al. (2000), which frames its characterization of mechanisms in opposition to the earlier views of Glennan (1996). Machamer, Darden and Craver characterized their ontological position as “dualist” as opposed to the views of Glennan and of Bechtel and Richardson, which they describe as “substantivalist.” Their view is supposedly dualist because they say that mechanisms consist of entities and activities, and that neither category is reducible to the other.

Although Machamer et al. (2000) makes a number of important points about the ontology of mechanisms, we believe the term ‘dualism’ is misleading. It overstates the ontological disagreements with Glennan and Bechtel and suggests an unnecessarily spooky view of activities that is not developed in subsequent work. The sense in which their position is dualistic consists simply in insisting that any proper characterization of a mechanism will make reference not just to the parts of that mechanism, but also to the activities (and interactions) of those parts. But this is something that Glennan and Bechtel certainly agree with. Moreover, the term ‘dualism’ suggests an independence of entities and activities that is implausible. Activities always require actors, and there are no entities which don’t (or at least can’t) engage in activities. Perhaps the most obvious difference between the terms ‘activity’ and ‘interaction’ concerns the arity of the relation (Illari and Williamson 2012). An interaction always involves more than one entity or part, while the term ‘activity’ is inclusive of both interactions and solo activities.

The New Mechanists embrace an approach to causality that falls within the family of approaches that Woodward (2011) has characterized as “geometrical-mechanical” accounts. These accounts, which also include the causal-mechanical approach to causal processes that has been advocated by Wesley Salmon and Phil Dowe, suggest that genuine causality is an intrinsic relation that depends upon singular connections via continuous processes. They are contrasted with what Woodward (and many others) call difference-making accounts, which understand causality as an extrinsic and comparative relation in which counterfactual dependence (or perhaps other sorts of comparative relations) rather than productive connection is essential. The New Mechanist approach diverges from the Salmon/Dowe approach to productivity in its causal pluralism. Rather than seeking a reductive analysis in terms of some physical characteristic (like exchange of conserved quantities), the New Mechanists argue that there are many sorts of causes, corresponding to different sorts of activities and interactions, which produce different kinds of changes in different kinds of entities. For instance, the kinds of activities and interactions that occur in ecological systems (e.g., trophic interactions between plants and animals like predation and grazing) are very different from those that occur in biochemical systems (e.g., unwinding and transcription of DNA or folding of proteins).

A third feature of the New Mechanist consensus is its focus on organization. It is the organization of the entities (and their activities) that allows the mechanism to produce the phenomenon that it does. A pile of lawnmower parts does not a lawnmower make. While mechanists emphasize the importance of spatial and temporal organization, it is ultimately the causal organization upon which the productive capacities of the mechanism depend.

To clarify the role of organization in mechanisms, consider as a brief example the mechanism for starting a lawnmower engine. The engine is started by rapidly pulling a cord while the throttle is set to an appropriate level. The cord is attached to a flywheel, which in turn engages a clutch, which causes the crankshaft to move, which in turn moves the piston, allowing air and fuel into the cylinder. The flywheel is also connected to a magneto—a device which uses the rotation of magnets to generate a voltage. The magneto is attached to the sparkplug, which produces the spark that ignites the fuel-air mixture in the cylinder. The production of the phenomenon (namely the starting of the mower) depends essentially on organization. The parts must be spatially organized so that the same part—the flywheel—may simultaneously engage the clutch and turn the magneto. Timing is also essential here. The parts must be so organized that the spark generated by the spark plug occurs at the correct time in the piston’s cycle. These spatial and temporal arrangements determine the causal organization of the system.

An important consequence of the New Mechanist view of causal organization is a certain kind of anti-holism. Typically it is not the case that every part of a mechanism is causally connected to every other part. And in many cases of causal dependence, the connecting process will be indirect. For instance, in the lawn mower, pulling the cord may produce a movement of the piston—but not directly. It operates via a chain of more direct interactions among the intervening parts.

A fourth feature of the New Mechanist approach is its focus on the hierarchical organization of mechanisms. Mechanisms and the phenomena they produce may in turn be embedded in larger mechanisms, and the parts of mechanisms and their activities and interactions may be explained in terms of the operations of lower-level mechanisms. This idea is schematically represented in diagrams such as the one found in Fig. 1.

Fig. 1 Hierarchical arrangement of mechanisms (redrawn from Glennan 2011)

Glennan emphasizes that the activities and interactions of the parts of mechanisms are “mechanically explicable”—meaning that what is, at one level of the hierarchy, a direct interaction between parts will be explained as the operation of a complex mechanism at a lower level. In this diagram, the upper-level circles and arrows represent higher-level entities and interactions, while the lower-level circles represent lower-level ones.

An obvious question raised by this hierarchical organization is whether, when and how the hierarchy bottoms out. Machamer et al. (2000) emphasized that mechanistic explanations start from “bottom-out” entities and activities, and they note that “bottoming out is relative” (ibid, 13). Within different fields or research groups, different activities will be taken as unproblematic and basic. This is an important methodological observation, but it does not answer the metaphysical question of whether or not there are entities and activities that form an absolute bottom.

Metaphysically we may identify a couple of possibilities. One is a metaphysical atomism in which there is a set of basic objects—the atoms—that interact with each other, perhaps in a manner that is law-governed, but which is at any rate not explicable by further mechanisms. This position, often called microphysicalism (Pettit 1993), has its defenders in the metaphysics community, but we take it that our current best physics raises serious doubts about microphysicalism (Schaffer 2003; Ladyman and Ross 2007). Another possibility is that there are mechanisms all the way down; there is no fundamental level and every interaction is mechanically explicable.

While we concur with Schaffer that there are not compelling scientific or metaphysical arguments for a fundamental level, the mechanisms-all-the-way-down approach does pose challenges for the ontological views of the New Mechanists. The problem is not that the New Mechanists are committed to microphysicalism; it is rather that one cannot assume that all levels will have the ontological features required to provide mechanistic explanations of higher level entities and activities. Even if there is no absolute bottom, there may be some level below which there are no classical mechanisms. We will call this level the fundamental classical level.

After a review of some important features of quantum mechanics, we will argue for two claims about the relationship between the New Mechanicism and the ontological consequences of quantum mechanics. First, we shall show how the theory of quantum decoherence can explain the emergence of a fundamental classical level. Second, we will show that, by relaxing certain assumptions, we can find within the quantum realm non-classical phenomena that can be explained mechanistically.

2 Potential clashes between the new mechanicism and quantum mechanics

Let us draw together the key features of the mechanist consensus in order to identify the ontological suppositions of the New Mechanists and the potential conflicts that these ontological commitments have with ontological commitments that may be forced upon us by quantum mechanics:

1. The New Mechanists believe that the world is composed of a variety of objects; many objects are compounded from smaller objects; there may or may not be some basic objects of which all other objects are made. Either way, in asserting that these objects (both the basic and the compound) are really objects, the New Mechanists suppose that these objects can be characterized by some set of properties that exist in the objects regardless of whether they are observed or measured.

2. The New Mechanists believe that there are causal relations that obtain between these objects, basic or compound. Higher level causal relations obtain in virtue of causal relations between parts of intervening mechanisms. If there are any basic objects, there must be fundamental (non-mechanism-dependent) causal relations that obtain between them.

3. The New Mechanists believe that causal interactions between the parts of a mechanism are intrinsic and local. It is not the case that every part of the mechanism is connected to every other part. Given a decomposition of mechanisms into parts, one can distinguish some causal influences that involve direct connection between parts, while other causal influences obtain via intermediate entities and activities.

There are at least three non-classical features in quantum mechanics that seem to clash with the ontological commitments of the New Mechanists:

(A) Indeterminacy of properties

(B) Non-localizability of quantum objects

(C) Non-separability of quantum states due to entanglement (“quantum holism”)

In this section we will explain briefly how these features of quantum mechanical systems arise, and where they appear to conflict with the ontological approach of the New Mechanists.

From an ontological point of view, the most initially disconcerting feature of quantum mechanics is arguably (A), the indeterminacy of properties. In general, QM only predicts probabilities for finding certain values of observable quantities upon measurement (e.g. position, momentum, spin). The issue is not merely that properties are fuzzy or unsharp: in the most extreme case, nothing at all can be said, i.e. there is not even an increased probability for finding a value in any given finite interval. A classic expression of this indeterminacy is Heisenberg’s uncertainty principle. According to this principle, upon precise measurement of, say, an electron’s position, its momentum becomes completely uncertain, and conversely a precise measurement of the electron’s momentum makes the position uncertain. Thus Heisenberg’s uncertainty principle tells us that it is impossible to simultaneously ascribe a sharp position and a sharp momentum to an electron, even though we can measure either its position or its momentum precisely, or both successively. More generally, it is impossible to simultaneously ascribe sharp values to all measurable properties of quantum objects.
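
In standard notation, with Δx and Δp the standard deviations of position and momentum, the uncertainty relation reads

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},$$

so a perfectly sharp position (Δx → 0) forces a completely indeterminate momentum (Δp → ∞), and vice versa.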

The apparent conflict between the mechanist program and the indeterminacy of properties arises from the fact that the phenomena exhibited by mechanisms are thought to be produced by the interaction of parts in virtue of their dynamically relevant properties. If it is impossible even to ascribe these properties to the mechanism’s parts, then the mechanistic program comes to a grinding halt at its very first step, namely the decomposition into interacting parts.

Problem (B), the non-localizability of quantum objects, is in one sense just a particular instance of the indeterminacy of properties, in this case concerning the position observable. However, non-localizability has features that merit additional attention. If at a given time t₀ the wave function of some quantum object is localized in a finite region, it will develop, due to the dynamical law of quantum mechanics, infinite tails immediately after t₀. In other words, the wave function will instantaneously spread over the entire space (Hegerfeldt 1998). This happens already in non-relativistic quantum mechanics, but there it is not a problem because nothing forbids infinitely high velocities. At least it makes sense to assume that a quantum object is localized in a finite interval at some given time. However, if one respects the requirements of special relativity theory, localizability gets lost in a more drastic way: the very concept of a localized object no longer fits into the resulting theory.

Specifically concerning mechanisms, two problems arise. The first is that non-localizability implies that multiple quantum objects “occupy” the same infinite space-time region. This obviously and unsurprisingly applies to light quanta. However, it also applies to quantum objects with (rest) mass, such as electrons. Thus apparently the most basic components are no longer spatiotemporally distinct objects, as initially intended by the mechanistic program. The second is that if quantum objects are spread out over the entire universe, then they do not seem to qualify as the kind of entities that interact locally in mechanisms.

The third and final problem (C) concerns the non-separability of quantum states due to entanglement, also called quantum holism (Healey 2009). While the first two problems apply already to single quantum objects, non-separability applies only to composite systems. In general, quantum objects are entangled with each other. This means that complete possible knowledge about the states of the subsystems does not imply complete knowledge about the compound state. The reason for the entanglement of quantum objects lies in the radically non-classical laws for their composition: the features of the composition are intrinsic properties of the compound system, which are not captured by specifying all spatiotemporal properties of the separate subsystems. Thus non-separability—or quantum holism—may undermine the very conception of separate parts, which is indispensable for the mechanistic program. But there is also another point where quantum entanglement seems to clash with the new mechanicism: even the most complete information about the spatiotemporal organization of the system’s parts does not determine the behavior of the whole system, thereby calling the key idea of mechanisms into question.
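
A standard illustration is the singlet state of two spin-1/2 particles A and B:

$$|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A |{\downarrow}\rangle_B \;-\; |{\downarrow}\rangle_A |{\uparrow}\rangle_B\bigr).$$

No choice of single-particle states |a⟩_A and |b⟩_B yields |ψ⟩ = |a⟩_A ⊗ |b⟩_B: the compound state is not separable, no matter how completely the two subsystems and their spatiotemporal relations are specified.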

Note that the issue of the non-determinateness of properties differs from the non-separability of states. In a sense, the latter is worse than the former. While the non-determinateness of properties may be dealt with in terms of probabilistic dispositions or ‘propensities’ (Suárez 2007), non-separability of states poses a still more serious threat to the applicability of the mechanistic conception in the quantum realm because it seems to prohibit the separate ascription of properties—be these determinate or only dispositional—to different parts of a compound system. Due to ‘quantum holism’ one cannot say everything relevant about one given quantum object without having to say something about other quantum objects, too, and this applies not just to their mutual spatiotemporal relation. One may claim (Hüttemann 2005) that we are here dealing with a strong form of emergence, because the fact that a given compound system is in a certain superposition of entangled subsystems cannot be explained in terms of the states of its subsystems: the entangled parts of a compound system that is in a determinate state, namely a superposition, can no longer themselves be in determinate states (they are in so-called “mixed states”). Apparently this undermines the idea of explaining the behavior of a system mechanistically, because there don’t seem to be any separately describable parts below the level of the whole system.
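
The singlet state above makes this concrete: the compound system is in a perfectly determinate pure state, yet each part, obtained by tracing out its partner, is left in the maximally mixed state

$$\rho_A \;=\; \mathrm{Tr}_B\, |\psi\rangle\langle\psi| \;=\; \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$

which ascribes no determinate spin direction whatsoever to subsystem A (and likewise for B).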

3 Decoherence and the emergence of classical phenomena

In the famous Copenhagen interpretation of quantum mechanics, commonly seen to be primarily built on the ideas of Niels Bohr, there is a sharp divide between (microscopic) quantum objects and macroscopic measurement apparatuses: In order to describe the quantum world it is indispensable to refer to classical measurement apparatuses. Since measurement apparatuses thus differ fundamentally from quantum objects, it is impossible to understand what happens in a measurement. In loose connection with the Copenhagen interpretation it is then often said that measurement apparatuses can record determinate measurement outcomes because they are macroscopic. However, when we want to know a bit more about what is going on, this view turns out to be very unsatisfactory: First, where does the microscopic world end and our macroscopic world begin? In principle, quantum mechanics seems to be universally valid and govern our everyday world just as much as it governs the micro world. Second, if we make the natural assumption that macroscopic objects are themselves made up of microscopic quantum constituents, it is completely unclear why there should be a fundamental physical divide between the quantum and the classical world in the first place. The postulate that measurement apparatuses must be described in a classical way even seems to forbid asking for the physical processes that take place when a quantum system interacts with a measurement apparatus to produce the determinate measurement outcomes we observe.

Von Neumann (1932) broke sharply with this non-intelligibility dictum by offering a detailed account of the quantum measurement process. His account treats all relevant parts, including the measurement apparatus, as quantum systems. Virtually every modern treatment follows von Neumann’s account in many respects. However, while von Neumann’s descriptive analysis, terminology and general approach were a great breakthrough, in the end he primarily achieved a very lucid formulation of the basic problems, but not their solution. So it remained unclear whether it is appropriate to treat macroscopic objects as quantum systems, in particular given that they very often appear classical.

In recent decades many experiments and industrial techniques have shown that quantum phenomena are not restricted to the microscopic level. And even in cosmology quantum physics is indispensable. Nevertheless, in our midsized everyday world and also in most scientific contexts, the peculiarities of quantum mechanics are surprisingly absent. These facts make it ever harder to explicate the quantum/classical boundary using a distinction between the microscopic and the macroscopic.

The theory of decoherence provides a precise and explicit analysis of this boundary, following von Neumann in spirit. Its goal is to explain how classical behavior emerges in a world that is completely governed by the laws of quantum mechanics. In other words, decoherence attempts to explain why objects that are ultimately made up of quantum mechanical constituents very often behave classically. As we will see, decoherence shows which processes contribute to the suppression of quantum effects in macroscopic systems. However, in all of the following one has to be aware that decoherence only supplies partial and approximate answers. They are partial because they have to be combined with further considerations or particular interpretations of quantum mechanics. And the answers of decoherence are approximate in the sense that quantum effects never disappear completely.

3.1 Non-classical phenomena

In Section 2, we introduced those non-classical features of quantum phenomena that appear to pose problems for the New Mechanist. To show how decoherence theory might dissolve these problems, it will be helpful to focus on a famous QM experiment that manifests these phenomena—the double-slit experiment.

Since the fundamental dynamical law of QM, the Schrödinger equation, is linear, the sum of any two solutions is also a solution and thus represents a possible state of affairs. The superposition principle in itself is well-known from classical physics: water, sound or electromagnetic waves are also described by linear wave equations, and can therefore be added together, sometimes producing interference effects. In the double-slit experiment, we appear to observe the same phenomenon. Light passing through two parallel slits projects onto a screen, and we see an interference pattern that seems to arise from the classical superposition of two waves, one originating from the upper slit and one from the lower slit.
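
In symbols: since the Schrödinger equation

$$i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \hat{H}\,\psi$$

is linear in ψ, any combination a ψ₁ + b ψ₂ of two solutions ψ₁ and ψ₂ (with arbitrary complex coefficients a and b) is again a solution—this is all the superposition principle says.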

On closer scrutiny something odd happens, which is most clearly apparent when we perform the double-slit experiment with an electron beam. It is possible to lower the rate of electrons in the beam so much that only one electron at a time passes through the double-slit. We know this because we detect dot-like hits of single electrons on the screen. At first these dots seem to be random, but after a while we see a pattern forming, namely the same kind of interference pattern of multiple bright and dark bands that we get for classical waves passing through a double-slit—the notorious wave-particle duality.

How do these individual electrons collectively form the interference pattern? One could imagine that it is different electrons, some passing through the upper and some passing through the lower slit, whose effects are superposed and lead to the interference. However, this cannot be the case, because we know that we get the interference pattern even if we make sure that only one electron passes through the double-slit at a time. Hence the superposition of whatever goes through the upper and the lower slit must refer to a single electron already. Thus after an electron has passed through the double-slit we have a so-called “coherent superposition” of two states that classically exclude each other, namely one for the electron having gone through the upper slit and one for its having gone through the lower slit (Schlosshauer 2007, sec. 2.2).
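
Writing ψ_up and ψ_low for the two components, the single-electron state after the slits is the coherent superposition

$$\psi \;=\; \tfrac{1}{\sqrt{2}}\,(\psi_{\mathrm{up}} + \psi_{\mathrm{low}}),$$

and the detection probability on the screen,

$$|\psi(x)|^2 \;=\; \tfrac{1}{2}|\psi_{\mathrm{up}}(x)|^2 \;+\; \tfrac{1}{2}|\psi_{\mathrm{low}}(x)|^2 \;+\; \mathrm{Re}\bigl[\psi_{\mathrm{up}}^{*}(x)\,\psi_{\mathrm{low}}(x)\bigr],$$

contains the cross term responsible for the bright and dark bands. A classical mixture of “went through the upper slit” and “went through the lower slit” would lack exactly this term.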

The peculiarities of wave-particle duality might be tolerated for small quantum objects like electrons, but the same effects can occur on larger scales. Wave-particle duality has been experimentally demonstrated in the behavior of buckyballs—large, cage-like carbon C60 molecules (see Fig. 2).

Fig. 2 Interference pattern produced by buckyball molecules (adapted from Nairz et al. 2003, with permission from American Journal of Physics)

It seems that something as large as a C60 molecule must be well localized in any context. And in fact, it is hard to realize experimental arrangements in which we see wave-like effects for large objects. But it is possible, and nothing prevents interference effects from occurring not only with electrons and buckyballs, but also with macroscopic objects. The superposition principle of quantum mechanics allows for highly non-classical states on any scale (e.g. for Schrödinger’s cat).

Mathematically speaking, the quantum mechanical superposition principle says that any linear combination of states, i.e. essentially any sum of states, is also a state. Ontologically speaking, this means that a superposition is on a par with its “component states” (the summands in the linear combination), i.e. the superposed states. Superpositions are strikingly non-classical because mutually exclusive classical states are ascribed to one and the same “object”, and interfere with each other as if there were different interacting objects. Saying that a superposition is coherent then means that the component states, which are added up, all refer to one object and not to an ensemble of objects.

The presence of superpositions is at the core of the most severe conceptual problem of quantum mechanics, the quantum measurement problem: On the one hand, quantum mechanics tells us that a superposition can arise when a quantum object interacts with a measurement apparatus and remains a superposition for all times—according to the one and only dynamical law of quantum mechanics, the Schrödinger equation. On the other hand, the measurement results we actually find (and which quantum mechanics predicts to occur with certain probabilities) are particular pointer positions, which are determinate outcomes. This is a formidable conflict because the Schrödinger equation never brings us from superpositions to determinate measurement outcomes. There appears to be an acute need for a second kind of dynamics, the so-called collapse of the wave function.
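
Schematically, in von Neumann’s treatment (our notation; |a_0⟩ is the apparatus’ “ready” state and the |a_i⟩ are its pointer states), a measurement interaction evolves as

$$\Bigl(\sum_i c_i\,|s_i\rangle\Bigr)\otimes|a_0\rangle \;\longrightarrow\; \sum_i c_i\,|s_i\rangle\otimes|a_i\rangle.$$

The right-hand side is a superposition of different pointer positions; the Schrödinger dynamics alone never turns it into one determinate outcome |s_k⟩ ⊗ |a_k⟩.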

In order to appreciate the paramount significance of the measurement problem, it is important to realize that this is not a problem concerning measurements per se. It is about the ubiquitous manifestation of determinate properties, which the basic dynamical law of quantum mechanics, the Schrödinger equation, doesn’t seem to allow. While the problem can be described in terms of laboratory measurements, wave collapse (via “quantum measurements”) happens everywhere and all the time. Only a tiny fraction of these events take place in a lab. According to the Schrödinger equation, any object—be it microscopic or macroscopic—that interacts in a suitable way with a quantum object in some superposition should itself be in a non-classical superposition for all future times. However, this is not what we observe.

3.2 Decoherence as an explanation of the emergence of classical phenomena

The fact that superpositions and the resulting interference effects are so hard to realize and detect even for objects the size of C60 molecules, which are still much smaller than, say, cats, indicates that something seems to be going on that is ever more difficult to avert as objects get larger. This something is called decoherence. It is almost ubiquitous in the macro-realm, because the larger an object is, the more it tends to interact with its environment. When it does, it gets entangled with so many other close and remote things that there is no longer any way to locally detect a quantum superposition and the resulting interference effects. The crucial point is that the superposition gets delocalized into the environment so that the coherence of the superposition decreases and eventually almost disappears. Accordingly, the characteristic quantum mechanical interference effects are suppressed due to interactions between the respective system and its macroscopic environment. Thus coherence is a measure of the “quantumness” of a physical system, and decoherence is the process by which this quantumness effectively disappears.

Let us describe more closely the phenomenon of decoherence. If electrons travelling through the double slit were classical objects, then they would produce two peaks on the screen behind the slits; but they instead produce the non-classical interference pattern. Moreover, this pattern cannot be explained as representing our ignorance about the electron trajectories, since the classical alternatives (going through one or the other slit) interfere with each other for one and the same object. As we have seen in Section 3.1, such a non-classical superposition of classical alternatives can be amplified up to any scale.

It is helpful to describe a superposition mathematically in terms of a so-called density matrix. Figure 3 contains a graphical representation of the density matrix for a “Schrödinger cat state” (Zurek 2007), i.e. a non-classical superposition of classical alternatives. The two peaks on the left and right side graphically represent the off-diagonal elements of the density matrix, which are responsible for the non-classical interference effects. Usually we don’t observe any such effects on macroscopic scales. But why is that so? The standard formalism of quantum mechanics seems to give us no reason to expect that the off-diagonal elements, and thereby the non-classical interference effects, should disappear on any scale. In particular, it seems that measurement devices should inherit the interference behavior from the quantum objects they are designed to measure. Quantum mechanics tells us that they become entangled with the quantum objects to be measured and just form bigger superpositions, so that we get the same problem again on a higher scale.
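
For a cat-state superposition |ψ⟩ = (|A⟩ + |B⟩)/√2 of two classical alternatives |A⟩ and |B⟩, the density matrix is

$$\rho \;=\; |\psi\rangle\langle\psi| \;=\; \tfrac{1}{2}\bigl(|A\rangle\langle A| + |B\rangle\langle B|\bigr) \;+\; \tfrac{1}{2}\bigl(|A\rangle\langle B| + |B\rangle\langle A|\bigr),$$

where the second bracket collects the off-diagonal elements that carry the non-classical interference.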

Fig. 3 Evolution of the density matrix for the Schrödinger cat state (reprinted from Zurek 2007). The smaller the melted peaks on the right side (representing the off-diagonal elements), the better the system decoheres

Decoherence theory provides a plausible explanation of why we don’t see such superpositions. In essence, decoherence theory suggests that in most situations, quantum entanglement is spread into the environment, making non-classical superpositions locally unobservable. Formally, the idea is to extend the so-called von Neumann chain S + A of system S and measurement apparatus A to include environment E, so that we get the bigger system S + A + E, and to then abstract from the environment again. This is no formal trick. Rather, the point is that our puzzlement over the rarity of macroscopically entangled states arises from an insufficient application of the quantum formalism. The extension of the von Neumann chain is motivated by the realization that the system S + A, like most physical systems in nature, is hardly ever isolated, but an open system, which interacts more or less intensely with its environment E.

The second step, abstracting from the environment again, is likewise a formal step with a solid physical interpretation. Here, too, the crucial point consists in appreciating the consequences of an obvious fact: what we actually deal with, as scientists as well as in our everyday lives, is not the universe as a whole with all of the interconnections of its constituents; what we actually deal with is a small number of features of local states of affairs. Thus, although in principle quantum systems interact with the whole universe and build up infinitely many entanglements, what we observe and measure is only a tiny fraction of this system. Nonetheless, it is due to this interaction that the interference/entanglement/coherence present in a local system S + A gets irreversibly delocalized into the environment E. The Schrödinger dynamics, according to which a superposition always stays a superposition, thus appears to be broken—provided we restrict our perspective to S + A. The system S + A appears disentangled although, and in fact because, it is entangled with its entire environment. Thus this second step consists in deliberately dispensing with information that is effectively no longer available to us. Ideally, the establishment of an entanglement with the environment has the effect that the density matrix of S + A has no off-diagonal terms, which represent the troublesome interference between classical alternatives. In practice the interference terms don’t vanish completely but just become very small (see Fig. 3). The degree to which the interference terms disappear is a measure of how well the system “decoheres”.
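
The formal core of these two steps can be stated compactly. Suppose the local superposition has become correlated with environment states |E_A⟩ and |E_B⟩, so that the total state is |Ψ⟩ = (|A⟩|E_A⟩ + |B⟩|E_B⟩)/√2. Tracing out the environment gives

$$\rho_{S+A} \;=\; \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi| \;=\; \tfrac{1}{2}\bigl(|A\rangle\langle A| + |B\rangle\langle B|\bigr) \;+\; \tfrac{1}{2}\bigl(\langle E_B|E_A\rangle\,|A\rangle\langle B| + \langle E_A|E_B\rangle\,|B\rangle\langle A|\bigr),$$

so the off-diagonal terms are multiplied by the overlap of the environment states, and they are suppressed as those states become effectively orthogonal. The following minimal numerical sketch (a toy two-qubit model; all names are purely illustrative, not drawn from the decoherence literature) makes the same point:

```python
import numpy as np

# Toy model: a system in the superposition (|A> + |B>)/sqrt(2), entangled
# with a one-qubit "environment" whose states |E_A>, |E_B> have overlap eps.

def reduced_density_matrix(eps: float) -> np.ndarray:
    """Return rho_S = Tr_E |Psi><Psi| for environment overlap <E_A|E_B> = eps."""
    e_a = np.array([1.0, 0.0])                    # |E_A>
    e_b = np.array([eps, np.sqrt(1.0 - eps**2)])  # |E_B>, real overlap eps
    a = np.array([1.0, 0.0])                      # system state |A>
    b = np.array([0.0, 1.0])                      # system state |B>
    # Total state |Psi> = (|A>|E_A> + |B>|E_B>)/sqrt(2) in C^2 (x) C^2
    psi = (np.kron(a, e_a) + np.kron(b, e_b)) / np.sqrt(2.0)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    # Partial trace over the environment index
    return np.einsum('ikjk->ij', rho)

for eps in (1.0, 0.5, 0.0):
    print(f"<E_A|E_B> = {eps}:\n{reduced_density_matrix(eps).round(3)}\n")
```

For overlap 1 the off-diagonal entries are maximal (full local coherence); for overlap 0 the reduced density matrix is diagonal, i.e. locally indistinguishable from a classical mixture—exactly the behavior depicted in Fig. 3.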

4 Decoherence and the grounding of the new mechanicism

In Section 2 of this paper we identified three apparent threats posed by the non-classical features of quantum mechanics, namely (A) indeterminacy of properties, (B) non-localizability of quantum objects, and (C) non-separability of quantum states due to entanglement. In this section, we will show how, in light of our discussion of decoherence and of the local character of mechanistic explanation, these features do not in fact threaten the mechanistic program in the ways we might have supposed.

Threat (A) says that since quantum objects—and thereby all objects—fail to possess definite values for dynamically relevant properties (in particular position and momentum), mechanistic decomposition into locally interacting parts is not possible. As we saw in the last section, the basic reason for this quantum mechanical indeterminacy of properties is that, with respect to most observables, quantum objects are in superpositions of different values, and these superpositions never disappear through any interaction that is described by the Schrödinger equation. But we also learned that due to decoherence these superpositions are effectively invisible for most real systems that interact with their environment. Thus decoherence shows that threat (A) is much less acute in most cases where we want to give mechanistic explanations.

In one sense, as we noted in Section 2, non-localizability (B) is just the problem of the indeterminacy (A) of one particularly important property, spatial position. However, given the special importance of position in many mechanistic explanations, and in particular the inconsistency between the quantum-mechanical indeterminacy of position and the special relativistic requirement of causality, more needs to be said about (B). Fortunately, it turns out that beyond the general reasons regarding indeterminacy of properties, the theory of decoherence indicates that there are special reasons why position behaves more classically than other observables. The relevant problem is most intuitively visible when we formulate it in terms of the many-worlds interpretation of quantum mechanics: in its strongest version it says that for each measurement-like interaction we have a branching into a multitude of coexisting worlds—thus no “collapse of the wave function”. However, the formalism of QM allows decomposing a given quantum state in many different incompatible ways. So the standard quantum measurement theory doesn’t tell us with respect to which observable the branching into parallel worlds occurs—a formidable problem. Decoherence yields the solution: it is not the case that all observables are on a par. The interaction with the environment is often such that one observable is effectively selected. In most cases the so-called “preferred pointer basis” happens to be that of the position observable. There is no general law dictating that it should be this way; rather, this is the result of many concrete calculations for particular examples of decoherence processes. That the preferred pointer basis refers to position means that it is this property in particular that behaves classically—again only from a local perspective, of course. This means that, locally, macroscopic objects are typically not in superpositions of different positions.

Finally, decoherence also substantially attenuates threat (C)—the non-separability of quantum states due to entanglement. In principle entanglement is omnipresent, and it can in fact be detected even in large macroscopic objects such as the particle accelerator at CERN, which has a circumference of almost 27 km. However, because of environment-induced decoherence, quantum entanglement is hardly ever visible on such macroscopic scales. One can only sustain detectable large-scale quantum entanglements if one has a system as thoroughly “clean” and shielded from environmental interactions as one finds in a particle accelerator in a tunnel 175 m beneath the ground. Thus, because of decoherence, quantum entanglement usually plays no role on the scales where mechanistic explanations start.

In our exposition of the decoherence program, we have emphasized that the theory of decoherence explains why the world we observe appears to be approximately classical in local contexts and at macroscopic scales. We should couple this observation with an explicit reminder that decoherence theory does not (by itself at least) solve the measurement problem or other conceptual problems in quantum mechanics. But these are not the problems we seek to solve. For our purposes, the main significance of decoherence lies not in its contribution to the foundations of quantum mechanics, but in the explanation it gives of why and to what extent it is legitimate to treat the many things in the world around us in a classical way.

The value of the decoherence program depends upon whether one is concerned with local or global questions. While decoherence alone is insufficient for global interpretive matters, it is very helpful for questions viewed from a local perspective. By a local perspective, we mean any physical situation where a given system can be distinguished from its larger environment. A local perspective can be taken towards systems of any size. We can ask local questions even about very large systems like galaxies, since these can still be distinguished from their global environment within the universe. In contexts that are local in this sense, decoherence helps to explain why systems whose constituents are ultimately subject to the laws of quantum mechanics behave approximately classically. This legitimates classical mechanistic explanations of the behavior of such systems. Moreover, it does so in a way that is largely independent of any particular interpretation of quantum mechanics.

One way of understanding what is required for the mechanistic approach is to say that there must be some fundamental classical level, a bottom-out level of classically behaving entities that can be used to ground mechanistic explanations of higher-level phenomena. The decoherence account provides us with an account of how this fundamental classical level emerges, but an important consequence of this account is that it shows that, and why, this level is not a uniform one. The fundamental classical level is not absolute but depends on the specific circumstances. We can have quasi-classical objects on the level of molecules, as in nano-technology, and we can also have detectable quantum phenomena on macroscopic scales (e.g., superconductivity, laser light, EPR correlations over large distances at CERN). What decoherence tells us is that it is not size per se that matters for the emergence of apparent classicality, but the kinds and quantities of interactions between a system and its environment.

The non-uniformity of the classical boundary fits well with the ontological presuppositions of the neo-mechanistic approach. The mechanistic approach starts from the assumption that mechanisms are local (Glennan 2011; McKay Illari and Williamson 2011), which is to say that the causal powers and behaviors of mechanisms arise from the arrangement and interactions of particular parts situated at a particular location in space and time. There is no need to appeal to some single set of universally valid laws to account for these interactions. The mechanistic approach, like the decoherence approach, shows how to explain the behavior of particular systems located within particular environments.

5 Non-classical mechanisms within quantum mechanics

Our strategy in this paper has, to this point, been largely defensive. We have argued that decoherence provides a useful explanation of why, in particular local circumstances, systems behave classically in spite of their being ultimately constituted of entities that obey the principles of quantum mechanics, and that this explanation deflects possible concerns over the ontological and explanatory legitimacy of the mechanistic approach.

However, it is not the case that all systems behave classically, and in particular there are some systems whose macroscopic-scale behavior depends essentially on non-classical features of the parts that constitute them. Familiar examples are superconductors and lasers, but nanotechnological and even some biological systems belong in this group. Are such systems mechanistically explicable?—Yes and no. It is clear that traditional mechanistic explanation depends upon assumptions that the parts and interactions involved in the production of phenomena are classical—and so classical mechanistic strategies cannot be used to explain such phenomena. On the other hand, we think that there are important similarities between classical mechanistic explanations and certain varieties of explanations for the behavior of genuinely quantum mechanical systems. Such explanations describe what we might naturally call non-classical mechanisms. In this section we will briefly consider what such explanations look like, comparing classical and non-classical mechanistic explanations.

In this paper we have identified three non-classical features of quantum mechanics: (A) indeterminacy of properties, (B) non-localizability of objects, and (C) non-separability of states. Our conclusion to this point has been that decoherence explains why we don’t typically see these features in classical mechanistic systems. But now let us consider the sense in which we can offer mechanistic explanations when these features are present. Our view, briefly, is this: While the quantum mechanical indeterminateness of properties (A) is a serious problem for fundamental ontology, it is usually not a concern for scientific explanations. When it comes to potentially mechanistic explanations in the quantum realm, we are mostly dealing with systems that encompass a huge number of components, where only statistical statements matter. Thus the definite probabilistic predictions of quantum mechanics are all we need. Since we are not concerned with individual objects, the problem of indeterminate quantum properties becomes irrelevant.

So how about the second threat for mechanistic reasoning, non-localizability (B)? We shall argue that localizability of parts, while important in many classical mechanistic explanations, is not an indispensable feature of mechanistic explanation. The reason is that the fundamental mode of organization that matters in mechanisms is causal dependence, not spatial location. It is only when spatial location determines causal dependence that spatial location is essential to mechanistic explanation. In some mechanistic systems spatial location is absolutely essential. For instance, in the lawn mower discussed in Section 1, the capacities of the various parts to interact with each other depend upon their being physically situated in exactly the right way. But in other thoroughly classical systems this is not the case. Consider for instance a system consisting of an ensemble of radio transmitters and receivers. Whether a particular receiver is connected to a particular transmitter will not depend upon its specific location, but upon whether it is tuned to receive the transmitted frequency. There are in fact many cases of classical systems where causal organization does not depend upon spatial organization. For instance, in biochemical mechanisms, organization is largely determined by the various molecular properties that make some molecules react with others, rather than by molecules having precise locations within a solution.

If it is indeed causal rather than spatiotemporal organization that matters for mechanisms, we should be able to offer a non-classical but mechanistic explanation of certain kinds of quantum phenomena. We briefly consider here one such example by showing the sense in which the quantum-mechanical explanation of laser light is mechanistic. A laser produces light with a very high monochromaticity and intensity, provided the energy supply exceeds a certain threshold. The quantum theory of laser radiation starts on the most basic level of quantum field theory, where all the relevant parts of the laser mechanism are described in detail, e.g. atoms with internal structure and specific behaviors in isolation and interaction. The most important quantum mechanical aspect of laser light is the occurrence of stimulated emission of radiation. Laser theory explains this observable macro-phenomenon in terms of the interacting subunits, which in this case are the laser-active atoms and the resulting electromagnetic field modes inside the laser. However, the field modes are characterized not spatiotemporally but functionally. And that is enough for a mechanistic explanation to work. The decomposition of a compound system into components is a pragmatic matter that is ultimately justified by its explanatory success. And in the exemplary case of the laser, understanding field modes as parts does the trick: the field modes interact with the laser-active atoms in such a way as to produce the phenomenon of laser light provided certain conditions in the set-up are fulfilled, in particular that the energy supplied exceeds the laser threshold.
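
To indicate in outline what such an explanation looks like, here is a minimal single-mode rate-equation sketch of the threshold behavior (a standard textbook idealization; the notation is ours). With n the photon number in the relevant field mode and N the population inversion of the laser-active atoms,

$$\dot{n} \;=\; G\,N\,n \;-\; \kappa\,n, \qquad \dot{N} \;=\; P \;-\; \gamma\,N \;-\; G\,N\,n,$$

where G is the gain coefficient, κ the cavity loss rate, P the pump rate and γ the atomic relaxation rate. A non-vanishing stationary photon number requires the gain G N to exceed the loss κ, which translates into a threshold condition on the pump P: below threshold the mode stays essentially empty, above it intense, monochromatic laser light builds up.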

One remarkable result of the development of laser theory is how much of the (semi-)classical reasoning carries over to the quantum treatment. Since this continuity concerns in particular the essential interactive processes that produce laser light, it indicates that to the extent that (semi-)classical laser theory is mechanistic, so is quantum laser theory. Remarkably, treating not only the laser-active atoms but also the radiation field quantum mechanically doesn’t seem to compromise the mechanistic nature of the explanation of laser light. One may object that due to the essential role of extended fields, i.e. “objects” that are not localized, (semi-)classical laser theory itself isn’t mechanistic, and thus neither is quantum laser theory. However, this is the same situation as in our above consideration of an ensemble of radio transmitters and receivers, and there we have already pointed out why a mechanistic reading applies.

In some cases, however, non-separability and the resulting quantum holism (C) may present an insuperable barrier to mechanistic explanation, because even the complete specification of parts and their spatio-temporal organization does not determine all properties of the composite system. Hence, mechanistic reasoning doesn’t always seem to work.

Now one may argue that the specification of how the parts are organized in the whole must encompass not only external spatio-temporal relations but also internal relations, which refer to the entanglement correlations between the system’s parts. However, we think that this move would be against the spirit of the mechanistic approach, because entanglement relations are inherently global. For similar reasons Darby (2012) argues that enriching the supervenience basis by entanglement relations leads away from David Lewis’ metaphysical thesis of “Humean supervenience”, i.e. the idea that the world is a mosaic of local particular facts. But do these considerations by the same token undermine mechanistic explanations for genuinely quantum mechanical phenomena? We think they do not.

What if those properties of the composite quantum system that aren’t determined by a complete specification of its parts and their external relations are simply not the ones we need in our explanation? It is not the case that nothing is determined by the parts and their external relations. And in fact, in many scientific contexts we only need to know those properties that are determined by the parts and their external relations: they alone determine exactly what is crucial for mechanisms, namely the dynamics of the compound system. The reason is that the dynamics of a quantum-mechanical compound system is determined by its total energy, represented by the so-called Hamiltonian, which is neatly split up into parts that comprise the behavior of the system’s components in isolation, the interactions between these components (described by interaction terms), and the interactions with any other relevant systems. In our quantum-mechanical laser, for instance, the Hamiltonian for the atoms inside the laser sums over the Hamiltonians of all the single atoms, i.e. each atom has its own Hamiltonian—notwithstanding the indistinguishability of “identical quantum particles.” The electromagnetic field modes, i.e. oscillations with different wavelengths, are also treated as independent parts, which interact with the laser-active atoms. Now, the crucial point is that the dynamics of the compound system is determined by the total Hamiltonian, which is given by simply adding up the Hamiltonians for the subunits. There are no tensor products for Hamiltonians, and thus neither is there an entanglement of Hamiltonians. While we make this argument for lasers, the same argument will apply to many other systems. Even in those systems for which quantum entanglements are locally detectable and not irreversibly spread out into the environment by decoherence, mechanistic explanations (and mechanistic ontology) will still work so long as the specific entanglement correlations are irrelevant to the behavior of the system we want to explain. Hence threat (C) plays no role. Only if entanglement correlations are relevant for the dynamics are mechanistic explanations no longer possible.
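
Schematically (in our notation), the total Hamiltonian of the laser has the additive form

$$H_{\mathrm{total}} \;=\; \sum_i H_{\mathrm{atom},\,i} \;+\; \sum_k H_{\mathrm{mode},\,k} \;+\; H_{\mathrm{int}},$$

where the first two sums describe the laser-active atoms and the field modes in isolation, and H_int collects the atom–field interaction terms. Whatever entanglement the resulting dynamics generates, the generator of that dynamics itself decomposes neatly into contributions from the parts and their interactions—which is just the decomposability that mechanistic explanation requires.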

In conclusion, we think it is appropriate to say that the behavior of a composite quantum system is, under circumstances like those of the laser, due to what we call a “non-classical mechanism”: The mechanistic explanation shows how a stable behavior of a compound system reliably arises purely on the basis of the interaction of its constituents, where it is the causal organization that matters and the spatiotemporal organization remains almost completely unspecified. One important difference between this case and our case of the classical set of transmitters and receivers is that in the classical case, it is possible to attribute locations to the parts while in the non-classical case it is not. But whether classical or non-classical, spatial organization is in both cases irrelevant to the mechanistic explanation.

While we believe that the laser example shows how mechanistic explanation extends into the quantum domain, we do not mean thereby to suggest that all explanation in the quantum domain is mechanistic. We have emphasized that in circumstances in which entanglement plays an essential role in the explanation of some phenomenon, mechanistic explanation is not possible. Moreover, we believe that there are many perfectly good explanations (both classical and non-classical) that are not mechanistic. For instance, many explanations ignore mechanistic causal processes and appeal to abstract features of systems, e.g., by appeal to conservation laws, symmetry considerations and dimensional analysis. Explanations of “non-classical mechanisms” deserve to be called mechanistic because they share a great deal with classical mechanistic explanations, and are quite different from these non-mechanistic explanations, whether classical or non-classical.

6 Conclusion

Let us take stock of what we have learned. We have argued that even though decoherence doesn’t solve the global conceptual problems of quantum mechanics, it helps considerably in allaying worries that local mechanistic explanations may be undermined by the universal validity of quantum physics. Mechanistic explanations are concerned with the local causes of local phenomena that occur within the world. For instance, why do flocks of birds so often form the inverted-V-shaped formation often seen in autumn? A mechanistic explanation explains how this local phenomenon arises through the local interaction of the birds; global entanglements between the birds (and their constituents) and the rest of the universe are (to a high approximation) not causally or explanatorily relevant to the production of this phenomenon.

A central tenet of the mechanistic approach to causation and explanation is that mechanisms are particulars, and this entails that they are local in the sense we have described. The phenomenon that a mechanism produces is a local phenomenon, and the parts and their interactions that produce the phenomenon are local as well. If the explanandum concerns the local behavior of a system that behaves classically, and if there is an explanation of that behavior that refers to entities and activities that themselves behave classically, this explanation is not undermined by locally undetectable global entanglements of these entities and their interactions. Moreover, the sorts of systems that are most clearly amenable to mechanistic explanation (e.g., biological systems) are open systems (i.e., systems that interact with their environment), and are thus systems for which the decoherence approach can be legitimately invoked.

Beyond this, we have argued that even within domains where the behavior of systems must be explained by appeal to non-classical components, some explanations are still mechanistic. At its core, the mechanistic approach involves explaining the behaviors of systems in terms of the properties of their parts. Such explanatory strategies are sometimes available even within the domain of quantum mechanics.