The theme of this volume is content and context, as a perspective on the long-standing debate between reductionism and holism. I will discuss these dichotomies with reference to three areas of science: computer science, mathematics, and foundations of quantum mechanics.

Firstly, though, some general remarks. In my view, from a scientific perspective, reductionism, perhaps in caricatured form, is the basic method of science; whereas holism reaches for a way to protect various forms of belief from the incursions of science, and calls for a return to a pre-scientific viewpoint. Science proceeds by mastering the overwhelming complexity of everything by isolating aspects of the whole: subsystems, degrees of freedom, parts. This enables it to find the hidden simplicities and patterns underlying the richness and specificity of phenomena. In a slogan:

[figure a: the slogan]

On the other hand, this method has to be understood in a suitably nuanced fashion. I shall argue, firstly, that Computer Science offers an excellent arena for discussing these issues, all the more so as it is removed from the emotional undertones which usually color the debate.

1 First Lens: Computer Science

Let us begin with an old chestnut which has often been used in the following kind of reductive argument:

All a computer does is manipulate 1’s and 0’s. Therefore it can’t [...].

The specific conclusion often relates to exhibiting intelligent behaviour of one kind or another. There is a serious discussion to be had about possible limits to AI. But the above argument does not contribute to such a discussion. In fact, while the premise of the argument is, for standard computer architectures, true enough, there are no interesting conclusions which can be drawn from this fact.

The basic issue is this: does the fact that at a low level of descriptionFootnote 1 computers are manipulating finite strings of bits, in any way prevent or falsify the description of computers as manipulating much higher level objects: whether they are our medical records, bank accounts, credit records, games of chess, mathematical proofs, musical scores, visual images, text or speech? Any computer science undergraduate after a year or two of their studies will be well aware that the answer to this question is a resounding No! In fact, a large part of what they will have learnt in their studies will have been precisely how to build high-level structures of diverse kinds based on lower-level primitives. This is actually what programming, and software design and architecture, are all about. Several features are worthy of attention:

  • Firstly, there is an obvious parallel between the way software developers build high-level abstractions from low-level primitives, and work in the foundations of mathematics. Indeed, contemporary work in developing formal mathematics in systems such as Agda, Coq, HoTT (homotopy type theory), etc. makes this explicit, so that the boundaries between code and mathematics become somewhat indistinct. From the heroic age of Principia Mathematica, we have now moved to an era of highly engineered software systems, capable of undertaking large-scale proofs, such as the proof of the Kepler conjecture in the Flyspeck project led by Tom Hales (Hales et al., 2017), and the proofs of the Four-Color Theorem and the Feit–Thompson Theorem in projects led by Georges Gonthier (Gonthier, 2008; Gonthier et al., 2013). In each case, elaborate towers of concepts, definitions and results relating to specialised mathematical theories are built up from simple, logically evident foundations. This has much the same overall structure as the way that elaborate towers of modules and libraries determined by application-level concepts are built from the basic mechanisms provided by a programming language—which itself sits on top of a stack of compilers, editors, tracing and debugging tools, etc. These in turn are ultimately mapped down to the code controlling the “bare metal” of the computer—which is indeed directly manipulating 1’s and 0’s. This last fact, however, is gloriously irrelevant to the software artefact that sits on top of this tower of abstractions. Indeed, by virtue of the decoupling of high-level programming languages from specific machines provided by the now-routine mechanisms of compilation, the same code can be run on many different machines, eliciting different sequences of manipulations of bit-strings, but achieving the same high-level effect.

  • While the architecture of concepts embodied in code is analogous to the architecture of mathematical concepts, there is also mathematics about code: namely, the tools of formal specification and verification. It is by virtue of these tools that we can be sure of the independence of high-level code from low-level implementation details. Another important aspect is directly relevant to the holism-reductionism debate, as reflected in the manifesto of this volume. For each level of the software tower, specifications will be written at the corresponding abstraction level. If we are specifying relationships between geometric objects in a visual feature recognition system, or a hierarchical relationship in an ontology used in a medical database, we would no more refer to details of bitstrings being loaded into the registers of a GPU than we would refer to electrons in describing the biology of elephants. Yet our code will not run without being executed on a physical machine, any more than an elephant can exist without being manifested in physical matter. So as far as the delightful rhetorical flourishes of this volume’s manifesto are concerned, they are exhibited in software in terms completely familiar to computer science undergraduates on a daily basis. Nothing to see here!

  • One feature that perhaps serves to obscure the analogy we are making is that we customarily read these towers of abstractions in opposite directions. In the case of scientific reductionism, we read the tower downwards, in the direction of analysis. That is, we emphasize the reduction of higher-level concepts to lower level ones—even though, in many cases, this reduction is “in principle”, and difficult or impossible to achieve in practice. By contrast, in the case of software, we read the tower upwards, in the direction of synthesis. We are interested in constructing a complex artefact, not in analyzing the complexity of a pre-existing class of systems occurring in nature. But this difference in how these towers arise does not in itself show that they are different in kind—and indeed, the well-understood towers of software abstraction may serve to shed light on the hierarchical structure of scientific theories.

  • One difference that might be urged is that in natural science, we expect nature to force our hand in the development of a tower of theories, whereas in an engineering context we have the luxury of choosing our programming language and tools, and our hardware. But this difference is not as great as it may appear. In mathematics, the same ideas in a given domain may be developed from different choices of foundational concepts. For example, algebraic geometry has been developed within several different foundational frameworks, in a process which is still ongoing (Van der Waerden, 1971; Dieudonné & Grothendieck, 1971; Anel & Catren, 2021). In physical science, physical systems can be studied in a classical, semi-classical, or fully quantum framework according to need.

The main overall point we wish to make is that Computer Science offers, in the ideas of the tower of software concepts, a well-understood, well-formalised, non-mystical paradigm for understanding how systems can be described at different levels of abstraction, and the levels related to each other. Moreover, although the mappings downwards are well-understood, and essential for the design and verification of implementations, there is no temptation to refer to them when reasoning about the higher levels! In fact, the development of higher levels of abstraction is a practical necessity. It is an essential tool in mastering complexity. Indeed, the tower enables us to introduce suitable levels of abstraction, so that we can “think bigger thoughts”.
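
As a small illustration of this point, consider the following sketch of a high-level interface (a hypothetical example, not drawn from any particular system): a store of medical records whose specification speaks only of patients and records. Nothing in it mentions bit-strings, registers, or memory layout, even though any implementation ultimately runs on hardware manipulating exactly those.

```haskell
-- A minimal, hypothetical sketch: a high-level store of medical records.
-- The interface and its intended law are stated entirely at this level of
-- abstraction; no bit-level detail is visible or needed.
module Records (PatientId (..), Records, empty, insert, lookupRecord) where

import qualified Data.Map as Map

newtype PatientId = PatientId Int deriving (Eq, Ord, Show)

newtype Records r = Records (Map.Map PatientId r)

-- The empty store holds no records.
empty :: Records r
empty = Records Map.empty

-- Intended law: lookupRecord p (insert p r s) == Just r
insert :: PatientId -> r -> Records r -> Records r
insert p r (Records m) = Records (Map.insert p r m)

lookupRecord :: PatientId -> Records r -> Maybe r
lookupRecord p (Records m) = Map.lookup p m
```

The same interface could equally well be implemented over hash tables, balanced trees, or a remote database; clients reasoning at this level neither know nor care.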

An essential part of this Computer Science methodology, to which we shall now turn, is compositionality, an idea which is also of great relevance to the context/content and reductionism/holism debates.

1.1 Compositionality

Compositionality is a methodological principle, originating from the work of Frege and others in logic (Janssen, 2001; Janssen & Partee, 1997), which has played a crucial rôle in Computer Science for several decades, but has yet to achieve the recognition in general scientific modelling which it deserves. I believe it is of major potential importance for mathematical modelling throughout the sciences. See e.g. (Werning et al., 2012; Fong & Spivak, 2019) for some recent texts.Footnote 2

Compositionality was originally formulated as a principle for the semantics of natural language: the meaning of an expression should be a function of the meaning of its syntactic constituents, and of how these parts are combined to form the expression. That is, the structure of semantics should follow the grammatical structure of the language—it should be syntax-directed, in computer science parlance.

In mathematical logic, the Tarskian semantics of predicate logic stands as the paradigm of compositional definitions for formal languages (Tarski, 1936; Tarski & Vaught, 1956). It has in turn heavily influenced the development of the formal semantics of programming languages (Scott & Strachey, 1971).
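
To recall the shape of such a definition (standard clauses, included here only for illustration), Tarskian satisfaction is defined by recursion on the syntax of formulas, relative to an assignment s of values to variables; for example:

$$\begin{aligned} M, s \models \varphi \wedge \psi \;&\iff \; M, s \models \varphi \ \text{ and } \ M, s \models \psi , \\ M, s \models \exists x\, \varphi \;&\iff \; M, s[x \mapsto a] \models \varphi \ \text{ for some element } a \text{ of the domain of } M . \end{aligned}$$

The satisfaction of each compound formula is determined by the satisfaction of its immediate constituents.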

More generally, in computer science, compositionality has become a major paradigm for enabling the structured description of complex systems. We can contrast it with the traditional approach in mathematical modelling, namely whole-system (monolithic) analysis of given systems. In the compositional approach, we start with a fixed set of basic (simple) building blocks, together with constructions for building new (in general more complex) systems out of given sub-systems, and build up the required complex system using these.

A little more formally (and somewhat simplistically), compositionality can be expressed algebraically:

$$\begin{aligned} S \; = \; \omega (S_1 , \ldots , S_n ) \end{aligned}$$

The system S is described as being built up from sub-systems \(S_{1}, \ldots , S_{n}\) by the operation \(\omega \). This allows the hierarchical description of systems, e.g.

$$\begin{aligned} S = \omega _1 (\omega _2 (a_1), \omega _1 (a_2, a_3)) . \end{aligned}$$

In graphical form:

[figure b: the syntax tree of \(\omega _1 (\omega _2 (a_1), \omega _1 (a_2, a_3))\), with \(\omega _1\) at the root]

Here \(\omega _1\) is a binary operation, \(\omega _2\) a unary operation, and \(a_1\), \(a_2\), \(a_3\) are sub-expressions, which may themselves be built from components in arbitrarily complex fashion.
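
A minimal sketch in code (with a hypothetical choice of operations and meanings, purely for illustration) of how such a syntax-directed, compositional semantics looks: the meaning of a compound expression is computed from the meanings of its sub-expressions, following the shape of the tree above.

```haskell
-- A hypothetical signature with one binary operation, one unary operation,
-- and atomic building blocks, together with a compositional evaluator.
data Expr
  = Atom Int            -- a basic building block a_i
  | Omega1 Expr Expr    -- the binary construction  omega_1
  | Omega2 Expr         -- the unary construction   omega_2
  deriving Show

-- The meaning of an expression is a function of the meanings of its parts.
-- (The particular meanings chosen here are arbitrary illustrations.)
eval :: Expr -> Int
eval (Atom n)       = n
eval (Omega1 s1 s2) = eval s1 + eval s2
eval (Omega2 s)     = 2 * eval s

-- The example S = omega_1(omega_2(a_1), omega_1(a_2, a_3)) from the text:
example :: Expr
example = Omega1 (Omega2 (Atom 1)) (Omega1 (Atom 2) (Atom 3))

main :: IO ()
main = print (eval example)   -- 2*1 + (2+3) = 7
```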

There is also a logical perspective:

$$ \frac{S_1 \models \phi _1 , \ldots , S_n \models \phi _n}{ \omega (S_1 , \ldots , S_n ) \models \phi } $$

(Read \(S \models \phi \) as “system S satisfies the property \(\phi \)”.) Here a property \(\phi \) of the compound system S can be inferred by verifying properties \(\phi _{1}, \ldots , \phi _{n}\) for the simpler sub-systems \(S_{1}, \ldots , S_{n}\). This allows the properties of the sub-systems to be tracked all the way up (or down) the tree of syntax.
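
A familiar instance of this scheme (not discussed further in the text) is the Hoare-logic rule for sequential composition, where the operation \(\omega \) is sequencing and the properties are pre/post-condition pairs:

$$ \frac{\{P\}\, C_1\, \{Q\} \qquad \{Q\}\, C_2\, \{R\}}{\{P\}\, C_1 \, ; \, C_2\, \{R\}} $$

A property of the compound program \(C_1 ; C_2\) is established from properties verified separately for \(C_1\) and \(C_2\).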

In addition to its major role in Computer Science and linguistics, compositionality is increasingly being introduced into other areas, including physics (Abramsky & Coecke, 2009), systems biology (Danos et al., 2007), game theory (Ghani et al., 2018), and more.

Since compositionality systematically relates the meaning of larger systems to the meanings of their parts, it appears as an antithesis to contextualism, which asserts that a part only acquires meaning in relation to the larger context in which it appears. There is a reductio of contextualism which echoes our opening slogan: if we pursue it to its limit, we end up needing to understand the meaning of everything in order to understand the meaning of anything. And how is this “everything” delimited, anyway? Perhaps we can only fully understand the meaning of an English utterance in the context of the entire history, not yet completed, of English speech.Footnote 3

1.2 Challenges to Compositionality

We have emphasized the importance of compositionality as a methodological principle. It is also interesting to consider some challenges to it which have arisen, explicitly or implicitly, in recent developments.

Independence-Friendly logic. A notable challenge to compositionality was made by Jaakko Hintikka in relation to his independence-friendly logic (IF logic), a generalization of branching quantifiers (Hintikka & Sandu, 1989; Hintikka, 1998). Hintikka claimed that this logic could not be given a compositional semantics in a Tarskian style. One had to look at the entire formula, and give its meaning in terms of strategies for a game associated with the formula. While Hintikka was correct in his claim that one could not give a semantics for this logic using assignments of elements of a quantificational domain to the variables in a formula, as is done in the Tarskian semantics of predicate logic, he was taking too limited a view of the possibilities for a compositional semantics. Wilfrid Hodges subsequently showed that a compositional semantics could be given, using sets of assignments rather than single assignments (Hodges, 1997). This semantics in terms of sets of assignments, nowadays called team semantics, has been extensively developed by Jouko Väänänen and his collaborators in his logics of dependence and independence (Väänänen, 2007; Abramsky et al., 2016). It turns out that this yields a very interesting extension of the possibilities for compositional semantics, and for logic in general, with connections to databases, the foundations of probability and statistics, quantum physics, and more. What this illustrates is that overcoming challenges to compositionality can lead to significant advances.
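
To give the flavour of the move from single assignments to sets of assignments, here is a minimal sketch (an illustrative toy, not code from the cited works) that checks Väänänen's dependence atom \(=\!(x, y)\), read as “the value of y is functionally determined by the value of x”, against a team:

```haskell
-- A toy illustration of team semantics: a team is a set of assignments,
-- and the dependence atom =(x, y) holds in a team precisely when any two
-- assignments agreeing on x also agree on y.
import qualified Data.Map as Map

type Var        = String
type Assignment = Map.Map Var Int
type Team       = [Assignment]

dependenceAtom :: Var -> Var -> Team -> Bool
dependenceAtom x y team =
  and [ s Map.! y == t Map.! y
      | s <- team, t <- team, s Map.! x == t Map.! x ]

-- In this team, y is determined by x, but x is not determined by y.
exampleTeam :: Team
exampleTeam = map Map.fromList
  [ [("x", 0), ("y", 1)]
  , [("x", 1), ("y", 1)]
  , [("x", 2), ("y", 0)] ]

main :: IO ()
main = print ( dependenceAtom "x" "y" exampleTeam
             , dependenceAtom "y" "x" exampleTeam )   -- (True,False)
```

Note that no single assignment, taken on its own, can witness or refute such a property; it only makes sense at the level of the team, which is precisely Hodges' point.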

Emergence. A key concept in the reductionism/holism debate is that of emergence: the idea that salient concepts or features of systems can only appear at higher structural levels, and cannot be accounted for at the lower levels. Referring this to the setting of the software tower, we can recognise that, at the level of feasibility and intelligibility, this is clearly true in an unproblematic way. To take the analogous mathematical situation: defining the curvature of a Riemannian manifold in the bare language of set theory or type theory, without an intervening tower of definitions and intermediate results, would be hopelessly long and unwieldy. The question is whether there are truly higher-level emergent properties which fundamentally cannot be expressed at all in terms of lower levels. I am not aware of precise results to this effect.

AI. One place where we might look for such results is in Artificial Intelligence, in particular in its dominant modern form based on machine learning. The history of AI can be argued to have gone against the compositional grain. Much of early AI was logic- and rule-based, but there has been a major shift towards statistical machine learning, where the wisdom is in the data. This has led to systems with highly impressive performance in carrying out a wide range of specialised tasks in natural language processing, vision, robotics and autonomous devices, game playing, medical diagnosis, financial analysis, protein folding and many more. Many of these encroach well into areas previously considered as requiring distinctively human intelligence, although the challenge of integrative intelligence, encompassing the full range of intelligent behaviour, remains. These systems have been highly resistant to compositional description and analysis. However, there is a major push in current research to achieve this, in order to have explainable, verifiable and accountable AI (Adadi & Berrada, 2018; Huang et al., 2017). This is a fundamental issue in the current research agenda in AI. Indeed, can one be said to have a “theory of intelligence”, or to have achieved scientific understanding of it, if one has a system which produces intelligent behaviour, but no explanation of how this behaviour is produced? And such an explanation would surely have to refer to an underlying system structure. It can plausibly be argued that structure “cashes out” into compositionality.

Cheap tricks? A more subtle challenge to compositionality is that it is too easily achieved. This is argued, for example, in Hodges (1998, 2001). Indeed, by introducing additional variables, which in effect encode the relevant contextual information, one can, in considerable generality, make any semantic definition compositional. Does this trivialise compositionality? Arguably not; rather, it highlights the importance of having additional criteria, over and above compositionality, for the acceptability of a formal semantics. In the setting of programming language semantics, such criteria are provided by adequacy and full abstraction (Plotkin, 1977; Milner, 1977). Similar criteria can be applied to team semantics, which we discussed above (Abramsky & Väänänen, 2009). As we shall see, there are analogous issues in the foundations of quantum mechanics.

2 Second Lens: Quantum Mechanics

Two issues which arise from the foundations of quantum mechanics can be related to our discussion. Interestingly, they pull in opposite directions:

  • On the one hand, the quantum phenomenon of entanglement has been argued to imply a form of “quantum holism” (Healey, 1991).

  • On the other hand, quantum contextuality, a key non-classical feature of quantum mechanics, is problematic for holism, since it calls into question whether there is a whole.

2.1 Entanglement and Quantum Holism

A fundamental aspect of quantum mechanics is entanglement, a phenomenon where the quantum state of a group of particles cannot be described solely in terms of the states of each particle separately. This behaviour may be exhibited even when the particles are spatially separated.

Intuitively, entanglement violates common-sense principles such as the “Principle of Local Action”, by which an object is directly influenced only by its immediate surroundings. This counter-intuitive character led Einstein to describe entanglement as “spooky action at a distance”, and to the Einstein–Podolsky–Rosen (EPR) paradox (Einstein et al., 1935).

Bell’s seminal idea in (Bell, 1964) was that entanglement has observable implications, which separate the predictions of quantum theory from any attempt at classical explanation by a local, realistic theory. If the particles in an entangled system are spatially separated, and each particle is measured independently, the presence of entanglement implies correlations between the outcomes of the measurements that provably exceed what can be achieved classically. This clear separation of the predictions of quantum theory from any classical theory has been verified experimentally (Freedman & Clauser, 1972; Aspect et al., 1982; Hensen et al., 2015; Giustina et al., 2015; Shalm et al., 2015), and forms the basis of the currently emerging technologies of quantum information and communication.
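
As a concrete instance (the CHSH form of Bell's argument, stated here for orientation rather than taken from the text): with two measurement settings \(a, a'\) on one particle and \(b, b'\) on the other, each with outcomes \(\pm 1\), consider the combination of correlations

$$\begin{aligned} S_{\mathrm{CHSH}} \; = \; E(a,b) + E(a,b') + E(a',b) - E(a',b') . \end{aligned}$$

Any local realistic theory satisfies \(|S_{\mathrm{CHSH}}| \le 2\), whereas quantum mechanics, with suitable measurements on an entangled pair, attains \(|S_{\mathrm{CHSH}}| = 2\sqrt{2}\), a separation confirmed by the experiments cited above.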

One point to mention is that Bell’s result bounding the possible correlations achievable by “local realistic theories”, and showing that quantum mechanics exceeds these bounds, can be stated as an impossibility result for hidden-variable theories. There is a striking analogy with our discussion of “cheap tricks” for compositionality in the previous section. Just as we can trivially make definitions compositional by adding extra variables which encode contextual information, so we can construct hidden-variable theories to account for any observable phenomena (Abramsky, 2014). However, if we introduce suitable constraints on such theories, e.g. locality of information flow, then results such as Bell’s can be proved. This parallels the way that compositionality has to be balanced against required properties of a semantics, such as adequacy and full abstraction. Similarly, contextual hidden-variable theories can be constructed for quantum mechanics; Bohmian mechanics can be viewed as such a theory. However, these theories are as non-local as quantum mechanics itself.

Returning to entanglement, it has been argued that the non-separable nature of entangled states exhibits a form of holism, since we cannot recover the entangled state from its components. We find this dubious, mainly because it is not clear what is at stake here. While much has been learnt about how to use entanglement in quantum information, a deeper physical understanding of how and why this phenomenon arises, if there is one to be had, remains elusive.

We content ourselves with the following observations:

  • Non-separability in this sense is a common phenomenon, which arises mathematically wherever we have monoidal categories (Fong & Spivak, 2019).Footnote 4 There are many examples of this in classical computation, and in Linear and other sub-structural logics (Girard, 1987; O’Hearn & Pym, 1999).

  • Monoidal categories, and the mathematics of entanglement, can be handled in a thoroughly compositional fashion (Abramsky & Coecke, 2009).

2.2 Contextuality: Is There a Whole?

Contextuality arises from an even more fundamental non-classical feature of quantum theory: the incompatibility of different measurements, meaning that one cannot observe definite values for all physical quantities at the same time. Again, this is not merely a practical limitation, but a fundamental feature of quantum mechanics, as shown by the seminal results due to Bell (Bell, 1964, 1966) and Kochen–Specker (Kochen & Specker, 1967). This feature is known as contextuality, and recent work has shown that it is a key signature of the non-classicality of quantum mechanics, responsible for many of the known examples where quantum computation offers possibilities that exceed classical bounds (Raussendorf, 2013; Abramsky et al., 2017; Howard et al., 2014; Bermejo-Vega et al., 2017; Bravyi et al., 2018; Aasnæss, 2019). Moreover, contextuality subsumes non-locality as a mathematical feature of a physical theory (Abramsky & Brandenburger, 2011).

2.2.1 Contextual Logic in Quantum Mechanics

Logic traditionally emphasises truth (semantically) and consistency (proof-theoretically). While the debates in the foundations of mathematics have, among other things, led to contrasting classical and constructive views of logic, these share an integrated view, going back at least to Plato and Aristotle, which can be summarised as follows:

a logical system should stand or fall as a whole.

Fig. 1  M. C. Escher, Klimmen en dalen (Ascending and descending), 1960. Lithograph.

Quantum mechanics challenges this integrated perspective in a new way. This was already revealed by the seminal results of John Bell, and of Simon Kochen and Ernst Specker, in the 1960s (Bell, 1964, 1966; Kochen & Specker, 1967), but we are still in the process of understanding these ideas. On the resulting non-integrated view, the logical structure of quantum mechanics is given by a family of overlapping perspectives or contexts. Each context appears classical, and different contexts agree locally on their overlaps. However, there is no way to piece all these local perspectives together into an integrated whole, as shown in many experiments, and proved rigorously using the mathematical formalism of quantum mechanics.

To illustrate this non-integrated feature of quantum mechanics, we may consider the well-known “impossible” drawings by Escher, such as the one shown in Fig. 1.

Clearly, the staircase as a whole in Fig. 1 cannot exist in the real world. Nonetheless, the constituent parts of Fig. 1 make sense locally, as is clear from Fig. 2. Quantum contextuality shows that the logical structure of quantum mechanics exhibits exactly these features of local consistency, but global inconsistency. We note that Escher’s work was inspired by the Penrose stairs from (Penrose & Penrose, 1958).Footnote 5

Fig. 2  Locally consistent parts of Fig. 1.
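
To make the pattern of local consistency combined with global inconsistency concrete, here is a toy example in the spirit of Specker's parable (an illustration of the logical pattern only; this particular set of perfect anti-correlations is not itself realisable quantum-mechanically): three two-valued quantities a, b, c, measurable only two at a time, with each jointly measurable pair observed to be perfectly anti-correlated.

```haskell
-- Each pairwise context {a,b}, {b,c}, {a,c} is satisfiable on its own,
-- but no single global assignment of values to a, b, c satisfies all three.
import Control.Monad (replicateM)

type Assignment = [Int]              -- values of [a, b, c], each 0 or 1

contexts :: [(Int, Int)]
contexts = [(0, 1), (1, 2), (0, 2)]  -- the jointly measurable pairs

antiCorrelated :: (Int, Int) -> Assignment -> Bool
antiCorrelated (i, j) v = v !! i /= v !! j

globalAssignments :: [Assignment]
globalAssignments = replicateM 3 [0, 1]

main :: IO ()
main = do
  -- local consistency: each context can be satisfied by some assignment
  print [ any (antiCorrelated c) globalAssignments | c <- contexts ]
  -- global inconsistency: no assignment satisfies all contexts at once
  print [ v | v <- globalAssignments
            , all (`antiCorrelated` v) contexts ]
-- prints [True,True,True] and then []
```

Like the staircase, each local piece makes perfect sense; only the attempt to glue them into a single global picture fails.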

2.3 Discussion

Conceptually, an intriguing feature of our discussion of quantum contextuality is that, whereas context is customarily aligned with holism, contextuality tends to undermine holism, since it brings into question the existence of an integrated whole. Rather, the picture of reality which it suggests is of an overlapping family of local perspectives, which support local consistency, but cannot be pieced together into a global, context-independent reality. This raises a number of questions, spanning a range of disciplines:

  • Philosophically, how should we understand this lack of global, context-independent truth or consistency? Can contextual logic give a formal foundation for contextualism in contemporary philosophy, such as epistemic contextualism (e.g. DeRose (DeRose, 2009), as a counter to scepticism) and ontic contextualism (e.g. Gabriel, known for Why the world does not exist (Gabriel, 2015))?

  • Logically, we have physically meaningful—and indeed experimentally accessible—systems which, when viewed globally, can validate contradictory propositions. This is, arguably, more radically disturbing than the more familiar fact that some classical tautologies may not be valid constructively.

  • Mathematically, the structures underlying these logical phenomena have a rich geometric and topological content, in which sheaf theory, the mathematics of local-to-global phenomena, and cohomology play a key role, identifying the geometry of the “logical twisting” obstructing a global semantics.

  • Physically, we have the issue of experimentally witnessing these phenomena, and understanding the role they play in a wide range of physical systems. These include many-body systems, e.g. frustration in spin networks (Liang et al., 2011), and quantum simulators (Kirby & Love, 2019, 2020).

  • Computationally, contextuality appears as a key signature of non-classicality, and is at the core of many of the known examples of quantum advantage in information processing tasks. This is both of great practical import and a crucial tool for showing the impact of non-classicality at the macroscopic level.

3 Concluding Remarks

In this brief essay, we have posed some challenges to holism, if it is to be more than a fuzzy feel-good term. What would holistic science, or science done holistically, look like? It is not clear that there are any convincing examples.

We have also argued that quantum contextuality poses a challenge to holism, since it casts doubt on whether there is an integrated whole underlying our perceptions of physical reality.

More positively, we have advocated compositionality and levels of abstraction, central methodologies in Computer Science, as providing a much richer and more nuanced alternative to crude reductionism.

For those readers interested in more technical presentations of related issues, we refer to papers such as (Abramsky, 2015, 2017, 2020).