1 New thinking about scientific realism

1.1 A brief overview of the history of the scientific realism debate

It is generally agreed today that scientific realism has three dimensions (Psillos 1999) or stances (Chakravartty 2007): a metaphysical, a semantic, and an epistemic dimension (see also Kukla 1998). Briefly, the metaphysical dimension implies commitment to a mind-independent reality; the semantic dimension implies commitment to the literal truth of scientific statements and the objective reference of theoretical terms; and the epistemic dimension implies commitment to the view that science conveys knowledge about the mind-independent reality, where this epistemically optimistic commitment is articulated in various ways, including most often the claim that scientific theories are approximately true or truthlike or that they at least have some truthlike constituents. Furthermore, scientific realists believe that science offers knowledge of both the observable and the unobservable aspects of reality. As Chakravartty (2017, p. 1) puts it: scientific realism is “a positive epistemic attitude towards the content of our best scientific theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences”.

The counter view to scientific realism, ‘anti-realism’, is broadly construed as any philosophical position that denies any of the three dimensions of scientific realism. But most typically, anti-realism in science has been tied to instrumentalism, which is the view that scientific theories are not supposed to offer a true description of the unobservable reality behind the phenomena, but rather to save the phenomena, that is, to offer a (mostly mathematical) framework in which the phenomena can be embedded. Theories, then, are seen as useful instruments for the organization, classification and prediction of empirical laws. There are various forms of instrumentalism. Fictionalism is the view that some entities whose existence is implied by the truth of a theory are not real, but useful fictions. Hence, on the fictionalist approach, scientific theories which are prima facie committed to the existence of unobservable entities are false, simply because there are no such entities for the theories to be committed to. On this view, to say that one accepts the proposition that p as if it were true is to say that p is false but that it is useful to accept whatever p asserts as a fiction. Agnostic instrumentalism is a weaker position according to which scientific theories need not be taken to be true or approximately true: one can employ theories for prediction and control while remaining agnostic about the reality of the unobservable entities posited by theories.

Seen in this light, realists want more than anti-realists. Realists take scientific theories to aim to describe the whole of reality accurately, and to typically succeed in doing so, viz., to explain phenomena and events in reality in truthful ways. Anti-realists, however, expect no more from science than to be able to save the phenomena and to make successful empirical predictions. According to a recent influential form of anti-realism, theories should be taken to aim for no more than empirical adequacy, where a theory is ‘empirically adequate’ just in case what the theory says “about the observable things and events in this world is true” (Van Fraassen 1980, p. 12).

Understanding where we stand now in the realism debate requires briefly tracing the historical development of the debate on scientific realism from the reign of the logical empiricists through to contemporary times. In general, logical empiricists are taken to be instrumentalists. This is due to their allegiance to the well-known verification principle, stating roughly that the meaning of a sentence consists in its method of verification, meaning, for the logical empiricists, that the empirical confirmation of the observational content of scientific theories is all that is relevant to ensure scientific progress. Things were more complicated, however, since most logical empiricists wanted to steer a middle course by avoiding both instrumentalism and metaphysics; hence, there was a distinction between empirical realism, which was acceptable, and metaphysical realism, which was anathema.

As Psillos (2017) points out, Feigl (1950) paved the way for semantic realism by suggesting a distinction between the truth conditions (relating to theoretical terms and unobservable entities) and evidence for truth claims (relating in general to observational content). The force of semantic realism was perhaps best illustrated by Wilfrid Sellars’ (1963) ‘scientific image of man’.

However, during the late 1950s instrumentalism received a boost from two accounts of scientific theories (Psillos 2017) in which theoretical terms became seemingly so obsolete that the scientific realist debate ran aground on Hempel’s (1958) ‘theoretician’s dilemma’, according to which theoretical terms are dispensable even when they play a useful role in classification and prediction. These two accounts were based on Craig’s Theorem (1956) and on Carnap sentences (1958). Craig’s Theorem shows scientific claims “expressed by the theoretical vocabulary” of science to be inessential elements of theories, given that the theorem allows an elimination of theoretical terms “en bloc, without loss in the deductive connections among the observable consequences of the theory” (Psillos 2017). In turn, on Carnap’s approach a theory T is reconstructed as the conjunction of its Ramsey sentence \(^{R}T\) (a sentence in which all theoretical terms of T have been replaced by existentially bound variables—see Ramsey 1931) and the Carnap sentence, i.e. the conditional \(^{R}T\rightarrow T\); for all intents and purposes, this too implied that the theoretical content of theories is dispensable without any negative implications for their empirical content.
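
To fix ideas, here is a toy illustration of ours (not one of Carnap’s own examples). Let T contain a single theoretical predicate \(\phi\) connecting two observational predicates \(O_1\) and \(O_2\):

\[
\begin{aligned}
&T: && \forall x\,(O_1 x \rightarrow \phi x)\ \wedge\ \forall x\,(\phi x \rightarrow O_2 x)\\
&{}^{R}T: && \exists X\,\bigl[\forall x\,(O_1 x \rightarrow X x)\ \wedge\ \forall x\,(X x \rightarrow O_2 x)\bigr]\\
&\text{Carnap sentence}: && {}^{R}T \rightarrow T
\end{aligned}
\]

T is logically equivalent to the conjunction \(^{R}T \wedge (^{R}T \rightarrow T)\), and \(^{R}T\) already entails the purely observational claim \(\forall x\,(O_1 x \rightarrow O_2 x)\); this is why, on such reconstructions, the theoretical term \(\phi\) appears dispensable as far as empirical content is concerned.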

Carnap, joined by Nagel (e.g. 1961, p. 139), was very optimistic that this turn of events offered a kind of neutralisation of the scientific realist/anti-realist dichotomy. For instance, Carnap (1966, p. 256) writes:

My own view ... is that the conflict between the two approaches is essentially linguistic. It is a question of which way of speaking is to be preferred under a given set of circumstances. To say a theory is a reliable instrument—that is, that the predictions of observable events that it yields will be confirmed—is essentially the same as saying that the theory is true and that the theoretical, unobservable entities it speaks about exist.

The view that the realism-instrumentalism debate is a verbal dispute was short-lived. In the 1960s there were important developments in the realism debate that led to what Psillos (2017) has called a ‘realist turn’. Most notable here is Hilary Putnam’s work to defend the main tenets of semantic realism (e.g. Putnam 1963, 1965) and his formidable development of Kripke’s causal theory of names into a causal theory of reference of theoretical terms, which made a strong case for continuity of reference through theory change (Putnam 1973, 1974, 1975b). Putnam’s realist crusade continued into the 1970s with his coining of the ‘no miracles’ argument for scientific realism—“The positive argument for realism is that it is the only philosophy that does not make the success of science a miracle” (Putnam 1975a, p. 73). Richard Boyd’s (1971) role in paving the way for and developing this argument must be acknowledged (see e.g. Psillos 1999). Boyd’s arguments emphasise the importance of an historical context to the no-miracles argument, i.e. if the (approximate) truth of science is taken as the best explanation for its success, then there must be some historical understanding of success, and some joining of referential continuity and convergence to truth (see Psillos 2017). Thus, by the 1980s scientific realism implied at least three theses (ibid.): “Theoretical terms refer to unobservable entities; ... theories are (approximately) true; and ... there is referential continuity in theory change”.

All was not peaceful for long in the realist camp, however. Firstly, Van Fraassen (1980) firmly re-focused and resuscitated the anti-realist position by means of his so-called ‘constructive empiricism’, according to which the aim of science is not to present “a literally true story of what the world is like” (ibid., 8), but rather empirically adequate theories. (See e.g. Churchland and Hooker 1985; Rosen 1994; Ladyman 2000; Teller 2001; Kusch 2015 for more discussion.) Secondly, a serious attack on realism was launched by Hesse (1976) and Laudan (1981) culminating in the well-known pessimistic meta-inductive argument (PMI) that “... given the track record of science in terms of successful theories which have subsequently turned out to have been misguided or simply ‘false’, by induction, we can never trust any (successful) theory to be immune against revision or even rejection and thus success cannot after all, as realists typically try to do, be explained in terms of truth” (Ruttkamp-Bloem 2013, p. 203). This argument generated heated debates. (See for instance Psillos 1994, 1999 for a defense of realism against the PMI. See also e.g. Lyons 2002; Newman 2005; Doppelt 2007.)

The key response to the PMI by defenders of scientific realism was the advancement of a tactic that was, in a sense, already inherent in the no-miracles argument for realism, and came to be known as the divide et impera move, viz., to focus on a selected aspect of theories, namely the parts that persevere through theory change (e.g. Psillos 1996). As Ruttkamp-Bloem (2013, p. 203) has put it: “Defenders of this form of realism typically separate theories into components or aspects according to some criterion such as “working posits” (Kitcher 1993), structure or what have you; and argue that only the selected components are eligible for realist claims, while components not thus selected (so-called ‘idle’ components) may be ‘false’ or ‘non-referring’, or simply ‘idle’ ..., without any serious implications for realism”. Thus the basic strategy of ‘selective realists’ is to argue that “... only idle parts of past theories have been rejected [through theory change], while truly success-generating features have been confirmed by further inquiry” (Stanford 2003, p. 913). One of the first forms of selective realism was Worrall’s (1989) structural realism. Other forms of selective realism include entity realism (e.g. Hacking 1983). See also e.g. Laudan (1984), Doppelt (2002), Chang (2003), Stanford (2003), Lyons (2006), Ladyman and Ross (2007) and French (2014) for more discussion of the merits and demerits of selective realist defenses of realism. As a last group of recent defenders of realism against PMI, we find so-called pluralists such as Chang (2011) and Ruetsche (2011) pleading for more nuanced forms of realism.

There is one other important problem that realists have to take into account, namely the issue of the underdetermination of theories by data. This problem is perhaps best interpreted as an attack on the first and third theses of scientific realism, namely that science is about a mind-independent reality that can be known. The roots of this problem lie in the work of Pierre Duhem, and date to before the reign of the logical empiricists. Duhem (1914) gave it its first formulation, which “focuses on the uncertainty around identifying the culprit out of a range of auxiliary and background claims in cases of failed predictions” (Ruttkamp-Bloem 2013, p. 203), and decades later Quine (1951) introduced a “confirmational holistic thesis of underdetermination” (Stanford 2016) in his critique of the distinction between analytic and synthetic truths. The current focus in terms of the realist debate is perhaps more on so-called “contrastive under-determination” (Stanford 2016), which is basically the issue that “more than one (empirically equivalent—see e.g. Van Fraassen 1980, p. 67) theory can be confirmed by the same body of empirical evidence” (Ruttkamp-Bloem 2013, p. 203). For realist reaction to this problem, see for instance Boyd (1973), Newton-Smith (1978), Laudan (1990), Laudan and Leplin (1991), Psillos (1999), Ruttkamp (2005) and Norton (2008).

In light of all this, where are we now? On the one hand, some defenders of scientific realism claim we are in a place where the balance has turned in favour of scientific realism (e.g. Chakravartty 2007, Psillos 2017). On the other hand, there might still be those who, like Fine (1984), claim that the realism debate is dead and that all that possibly remains is a “natural ontological attitude” towards scientific theories. Midway between these stances are philosophers such as McMullin (1984), Stein (1989) and Kukla (1994) arguing from various perspectives that the scientific realist debate is in danger of becoming sterile, but still rooting for it albeit in novel ways. The current richness of variations of realism on the one hand and the seeming stalemate of the scientific realist debate on the other indicated to the organisers of the 2014 conference that new ways forward in the scientific realism debate are needed.

1.2 New thinking about scientific realism 2014

The over-arching aim of the conference was to offer a chance to consider the past 50-70 years or so of work in the scientific realist debate and demonstrate the novel directions open to participants in this debate in the twenty-first century. Therefore, the motivation for the conference was a desire for a re-evaluation of the status quo of the scientific realism debate spurred by a need to investigate, articulate and open up the possibilities for realising promises of new directions of thought about scientific realism. Part of the appeal of the conference was that it was the first of its kind for decades. The last conference of this nature was organised by the Department of Philosophy at the University of North Carolina, Greensboro in March 1982. [The main contributions of this conference are reflected in Leplin’s (1984) Scientific Realism.]

The keynote speaker of the conference was Anjan Chakravartty. There were six sessions in the programme with invited speakers associated with some sessions: Session 1: General Scientific Realism (Michael Devitt as invited speaker); Session 2: Truth, Progress, Success and Scientific Realism (Ilkka Niiniluoto as invited speaker); Session 3: Selective Realisms; Session 4: The Semantic View and Scientific Realism (Steven French as invited speaker); Session 5: Scientific Realism and the Social Sciences (Uskali Mäki as invited speaker) and Session 6: Anti-Realism. There were a total of 34 contributed papers from participants from North America, Asia, Australia, Canada, the UK, South Africa, and various countries in Europe at the conference. A special feature of the programme was a total of 6 symposia with leading international scientists as invited speakers, contributing to the debate from their areas of expertise. Participating scientists were: Quarraisha Abdool Karim (Epidemiology); Jannie Hofmeyr (Biochemistry); Don Ross (Economics); Bruce Rubidge (Paleontology); Mark Solms (Neuropsychology) and Heribert Weigert (Physics). This issue of Synthese contains a small selection of peer-reviewed contributions to the conference.

2 A brief look at the papers of this volume

A key issue pursued by a number of the papers is the status of scientific realism in light of the historical argument leveled against it, known as ‘pessimistic meta-induction’. The focus is twofold. On the one hand, it is how best to understand the historical challenge itself. On the other hand, it is how best to develop the selective realist response to the historical argument, known as the divide et impera move.

In his “Epistemic selectivity, historical threats, and the non-epistemic tenets of scientific realism”, Timothy Lyons builds on his earlier work and defends the view that the best way to understand the historical challenge to realism is to construe it as a deductive argument. In particular, he takes it to be a “(bi-layered) modus tollens”. The argument starts with a realist meta-hypothesis, viz., MH: “those constituents that are genuinely deployed in the derivation of successful novel predictions are approximately true”. It then proceeds as follows:

1. If MH is true, then all of the constituents genuinely involved in a theory’s success (let’s call them s-constituents) are approximately true.

2. But there are s-constituents which are not approximately true.

3. Hence MH is false.
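
Schematically, writing \(Sx\) for ‘x is an s-constituent’ and \(Ax\) for ‘x is approximately true’ (our shorthand, not Lyons’s own notation), the first layer is a straightforward modus tollens:

\[
\begin{aligned}
&\textit{MH} \rightarrow \forall x\,(Sx \rightarrow Ax)\\
&\exists x\,(Sx \wedge \neg Ax)\\
&\therefore\ \neg \textit{MH}
\end{aligned}
\]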

The problem with this kind of argument, as Lyons recognizes, is that it is not historical: it makes the past record of science irrelevant, since one single instance of an s-constituent which is not approximately true would be enough to refute MH. Lyons tries to meet this challenge by noting that there is a second layer in the foregoing argument, which renders it thus:

1. On the No-miracles argument, if there were s-constituents that were not approximately true, “such constituents would constitute “miracles” which no one of us accepts”.

2. But there is a list of miracles (i.e., s-constituents that are not approximately true).

3. Hence, “the no-miracles argument put forward to justify that meta-hypothesis is unacceptable”.

Presumably, this second layer makes obvious the need to resort to history and find examples of s-constituents which were not approximately true. By doing so, Lyons argues, it becomes more obvious that “the core claim of the no-miracles argument is false and that the realist argument as a whole is unacceptable”. Here again, however, it is not generally the case that ‘the more the merrier’, since just one miracle (one s-constituent which is not approximately true) would be enough to refute the NMA, given the modus tollens above. As Lyons notes in Sect. 1, the debate really hinges on presenting particular cases of s-constituents which can be conclusively shown not to be approximately true. Here, matters are complicated by the fact that theories use idealisations and abstractions in the derivation of predictions.

On the positive side, Lyons argues in favour of a non-doxastic version of scientific realism, according to which “science seeks to increase a subclass of true claims, in particular those whose truth is experientially concretized”. This is a rich notion which allows false claims to contribute to the empirical concretization of “high level posits” by connecting them with “statements that describe experiences”. There emerges a new cumulative conception of scientific change according to which, as science grows, there is “an increase in experientially concretized truth” which is achieved in two ways: either by the experiential concretization of already possessed truths, or by the introduction of new experientially concretized truths.

In his “Understanding the selective realist defence against the PMI”, Peter Vickers aims to refine the divide et impera move against the PMI. He argues that the onus of proof lies with the antirealist: the antirealist has to reconstruct the derivation of a prediction, identify the assumptions that merit realist commitments and then show that at least one of them is not approximately true by our current lights. But then, Vickers adds, all the realist needs to show is that the specific assumptions identified by the anti-realist do not merit realist commitments. It should be noted that this is exactly the strategy recommended by Psillos in his (1994), where he aimed to show, using specific cases, that various assumptions, such as the assumption (in the case of the caloric theory of heat) that heat is a material substance, do not merit realist commitment, because there are weaker assumptions that fuel the derivation of successful predictions.

Vickers generalizes this strategy by arguing as follows. Take a hypothesis H that is taken to be employed in the derivation of P and to merit realist commitment. Identify an H* which is entailed by H and show that H* is enough for the derivation of P and does merit realist commitment. However, Vickers adds, strictly speaking, the realist need not show that H* merits commitment. It’s enough to show that H does not. As Vickers puts it: “the posit in question is doing work in the derivation solely in virtue of the fact that it entails some other proposition, which itself is sufficient (when combined with the other assumptions in play) for that specific derivational step”. In order to substantiate this move, Vickers discusses in some detail the case of Bohr’s prediction of the spectral lines of ionized helium.
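
Schematically (our gloss, not Vickers’s own notation): suppose H entails a weaker hypothesis \(H^{*}\) such that

\[
\{H^{*}, A_{1}, \ldots, A_{n}\} \vdash P,
\]

where \(A_{1}, \ldots, A_{n}\) are the other assumptions in play and P is the successful prediction. Then H does its derivational work only via \(H^{*}\), and whatever realist commitment the success of P warrants attaches at most to \(H^{*}\), not to H itself.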

Note that Vickers’s strategy can be pitted against Lyons’s argument presented above. Presumably not all false components that Lyons has identified merit realist commitment if Vickers’s strategy is thoroughly followed. As Vickers puts it: “We are not here in the business of identifying realist commitments; we are in the business of showing that some specific assumption does not merit realist commitment”. And he adds: “(...) that is enough to answer the historical challenge”. Vickers then focuses on dismissing a potential challenge to his strategy, what he calls “the disjunction problem”.

In his “Replacing recipe realism” Juha Saatsi takes issue with the very idea of formulating a selective realist response to the historical challenge, based on an abstract and general pattern of retention in theory-change. He calls “recipe realism” the view that there is an abstract, recipe-like, way to specify the realists’ epistemological commitments, independently of the specific details of each particular case; and he contrasts this with his own favourite, “exemplar realism”. He presents some arguments against recipe realism, the major one coming from the diversity of the sciences. His chief point is that there is not one true recipe for the realist epistemic commitments to the various parts of the theories but “a plurality of them”. Based on this he argues that “The realist idea that we can thus argue for wholesale realism, as an abductively justifiable theory about all of mature science, was a bad one”. Though Saatsi is certainly right in claiming that it is hard to come up with a well-motivated recipe without concrete exemplars, he might overplay his case for exemplar-based realism. His alternative to adopting a realist theory about science is the adoption of a positive attitude about science. More specifically, this positive attitude is underwritten by commitments to resembling exemplars: “in a domain of science like this, with theories or models like that, empirical success in this sense, is (probably) accountable in those terms (even if these theories or models are radically mistaken ‘on the whole’). In order to fill in the underscored placeholders above a realist can consult the (scientific) experts for the fullness of relevant detail, instead of pretending to be able to figure out the answer from the philosophical armchair”. The potential problem with this move is precisely that unless general criteria of likeness are specified, it might be hard to extend any realist commitments beyond the exemplars themselves. So the question might well be: how strong and broad is exemplar realism?

In his “Predictive success, partial truth and Duhemian realism”, Gauvain Leconte revisits the emblematic-for-realism novel prediction of the white spot in the middle of the shadow of an opaque disk by Augustin Fresnel. Leconte goes carefully through the proofs by Poisson and then by Fresnel himself and delineates the various theoretical constituents that played a role in the derivation of the novel prediction. Leconte sees this strategy as a test of Psillos’ divide et impera move. The fact that there are two derivations of the white-spot prediction raises the following questions: “Which derivation is the proper one for the divide et impera move, the one that gives us the “true” constituents of Fresnel’s wave theory of light? Do the two derivations have something in common?”. According to Leconte, the derivations by Fresnel and Poisson utilise different assumptions. In particular, in Poisson’s derivation there is use of a ‘covering law’ concerning Huygens’s principle, whereas in Fresnel’s equation, there is reliance on the mechanism of destructive interference. Apart from being compatible with each other, these two derivations show, according to Leconte, “why the divide et impera move is an interesting strategy for the realist: it proves that ether does not fuel the success of Fresnel’s theory”. Leconte takes it, however, that “Poisson’s and Fresnel’s proofs use different methods which lead to assumptions incompatible with each other”. His conclusion is that “there is no guarantee that the predictive indispensability of a given part of a theory implies that it is worthy of belief and will be retained in theory-change”.

Drawing on Duhem’s views and the holistic account of confirmation, he arrives at the positive thesis that predictive success is opaque: “it may be impossible to circumscribe which theoretical hypotheses are worthy of belief without the benefit of the advancement of science because of the way our theories are structured and the way predictions are made”. Yet, “we can still be (careful) realists and grant approximate truth to scientific theories”, without being able to predict “which constituents will be eliminated and which will be conserved”. In a certain sense, then, Leconte’s “Duhemian realism” is a version of blind realism: successful theories have truthlike constituents but scientists cannot tell which those are and which are more likely to be retained by future theories.

In an attempt to go beyond blind realism, Gerald Doppelt has argued for Best Theory Realism or Best Current Theory Realism, the key idea of which is that past superseded theories are false, whereas present best theories are true. In his “Resisting the historical objections to realism: Is Doppelt’s a viable solution?”, Mario Alai argues systematically against Doppelt’s position. According to Alai, Doppelt is committed to a radical discontinuity between present theories and past theories and more particularly to “the truth of our best theories, but not to the truth of their successful predecessors, or any components of them”. This radical discontinuity is supposed to undercut the historical challenge to realism: “because of the discontinuity between past and present theories, the objections against the NMA from success to the truth of discarded theories do not apply to current theories”. Now, as Alai points out, this strategy is a dead end. The chief reason is that Doppelt cannot explain the novel predictive success of past theories without arguing that they had truthlike constituents. Besides, as Alai puts it, “current best theories explain the (empirical) success of discarded ones only to the extent that they show that the latter were partly true”.

On Alai’s view, Doppelt is committed to a poor form of realism: “once Doppelt drops his commitment to the complete and final truth of the theories fulfilling the highest standards of today, granting that they too can be discarded like the past ones (which is plausible), by induction he will be forced to grant that also future best theories might be false and liable to rejection (which is plausible again). But then, given his skepticism on the partial truth of rejected theories, he won’t be able to make any commitment to any theoretical truth at any time: which makes for a quite poor form of realism”.

Another key issue in the realism debate is whether we can make good sense of the realist idea that theories, though strictly speaking false, can be approximately true or truthlike. This issue is the crux of Ilkka Niiniluoto’s paper “Optimistic realism about scientific progress”. His starting point is that theories are false, even known to be false, because they contain idealisations and approximations; and yet, a theory can be closer to the truth than another theory. Niiniluoto accepts conceptual pluralism, but unlike Putnam’s internal realism, his critical realism combines “conceptual pluralism with the correspondence theory of truth”. This combination relies on the thought that the world (THE WORLD) is mind-independent, but it has many conceptualisations \(W_{L}\), one for each semantically determinate language L. The truth of sentences of the language L that represent the world according to \(W_{L}\) is well-defined by Tarski’s account of truth. As he puts it: “Each language L has its own truths, but still truth is objective in the sense that we are free to choose the language L (with its vocabulary and interpretation), but THE WORLD decides the extensions of the L-terms and the truth values of L-sentences”. Niiniluoto denies that there is a (privileged) language L which has THE WORLD as its model. Though the motivation for this claim is clear, it seems more reasonable to assert that the existence of such a language is an open question, since it will be THE WORLD in the end which will settle the matter of the extension of the predicates and the truth-values of the sentences of any language.
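
For a simple illustration of the Tarskian ingredient (our example, not Niiniluoto’s), the familiar T-schema is relativised to the chosen language L:

\[
\text{‘Snow is white’ is true in } L \iff \text{snow is white},
\]

where it is up to us which language L (with its vocabulary and interpretation) we adopt, but up to THE WORLD whether the right-hand side obtains.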

Niiniluoto’s fallibilism entails that all parts of theories may be changed and improved, as science grows. Hence, he disagrees with what he takes to be a key commitment of selective realism, viz., that accumulation concerns only some selected true parts of the theory (those that are favoured by selective realism). In his view, this constant change and improvement of all parts of theories, as science grows, is compatible with real progress in science, as well as with increasing verisimilitude. He argues that he thereby has a weapon against the pessimistic induction: “Instead of emphasizing those stable parts of theories which survive over time, our picture of theory-change is dynamic in the sense that all parts of current theories may be improved by increasing their truthlikeness”. He goes as far as to argue for an optimistic induction: “scientists by their method favour empirically successful theories and the increasing success of such theories is best explained by their increasing truthlikeness”.

Ontic structural realism (OSR) is a species of selective realism. In his paper “(Structural) realism and its representational vehicles”, Steven French claims that “the nature of things gets cashed out in entirely structural terms”. He favours Eliminative OSR over Moderate OSR: no objects, and hence “a wholly structural metaphysics” vs thin objects—just the relata in the structure. But if objects are eliminated, how is structure represented? Besides, how can a physicist talk about hadrons, electrons etc.? For French, a structure is (represented as) a set-theoretic entity \(\langle A, R\rangle\). He suggests that \(\langle A, R\rangle\) should be read “ontologically from right to left, taking the A to be entirely characterised—their properties and even their identity—in terms of the R”. This might seem to eliminate objects “from our metaphysics”, but only if relations are taken as not relating objects but instead “as features of the structure (of the world), such as laws and symmetries”. It is these features that are supposed to “yield the properties that are then associated with (...) the A”.

The key issue, as French acknowledges, is this: how can there be representation without the represented? What is represented by A if not objects? French claims: “The point is, we introduce something for representational purposes that we then ontologically reconceive and eliminate altogether”. But what are the representational purposes if not to represent something? And if it is eliminable, what exactly is represented? French’s stance is rather radical: “there are particles” is false, since there are no particles qua objects but “there are particles” is true “in virtue of the fact that there are structures ‘arranged’ such as to yield the features we associate with particles (that is, ‘via the relevant symmetry groups for example)”. This view is metaphysical nihilism. Take the claim ‘X exists’. The truth-maker of this statement is something other than X. So ‘X exists’ is true but without X’s. This move is then applied to theories. French favours eliminativism about theories: “‘there are theories’, asserted in the language of the fundamental level, is false, although ‘there are theories’ uttered by scientists and philosophers of science is true and it is made so by the relevant truth-makers. The question then, of course, is what exactly are the features of the world that ultimately make true our talk of theories?”

Within this eliminative perspective, the question surely is: Can there be realism without theories? That is, what is the content of realism when it comes to scientific theories? For French, T is a theory iff there are “features of practice that we are happy to accept as elements of our ‘fundamental ontology’ that can give an adequate grounding to our talk about the existence of, and properties of T”. But this, if anything, only grounds the existence of the theory T and not its representational features. For T to be true, its representational content must be suitably connected with the world. This issue is different from the issue of how a set of practices relate to the world. The practices per se do not have representational content. French considers this kind of objection but takes the view that “when we, philosophers (or again, scientists or others thinking philosophically), talk (seriously) about theories representing some target system, we have in mind, if perhaps only implicitly, some way of ‘representing’ theories themselves and these systems”. Yet, it seems that if the truth-maker of ‘T exists’ is a set of practices, then according to French, what we ‘represent’ is these practices and not the content on which these practices are based, which (content) after all is the theory.

In his “A pragmatic, existentialist approach to the scientific realism debate”, Curtis Forbes raises the question: has the scientific realism debate reached a stalemate? Taking a cue from van Fraassen’s account of epistemic stances, he takes it that “varieties of scientific realism and antirealist empiricism are seen as the outgrowth of opposed ‘epistemic stances’”. For Forbes, an epistemic stance is located “at a ‘meta-epistemological’ level”, above “the level of epistemological theories and philosophies of science, which in turn sit above the level of more concrete, ground level facts (about our world’s actual ontology and laws of nature, for example)”. The question he then focuses his attention on is: given that no epistemic stance is universally best for everyone, how can we choose an epistemic stance towards science? Forbes aims to offer advice as to how to make well-informed choices of epistemic stances. As he puts his main proposal: there are empirical facts which “can pragmatically determine the preferability of one epistemic stance over its alternatives, relative to a specific set of values (both epistemic and non-epistemic) and a specific context”. This approach is a kind of methodological existentialism. It is “existentialist” because the values (be they epistemic or non-epistemic) we choose arise out of our own will and preferences and they are not dictated from the outside (e.g., by reason). Hence, on Forbes’s view, there is no point in trying to argue someone out of their values or to argue anyone into accepting a set of values. But given this, he notes, “it is rational to choose the stance which best serves one’s values, so as not to be self-defeating by one’s own lights”.

Forbes characterizes his view as “pragmatic” because it is “not concerned with determining which position is true or most rational per se, but rather with determining which position is best given some antecedently, idiosyncratically, and unquestioningly held set of values, in a given practical context”. A key question here seems to be this: what kind of rationality is presupposed by Forbes’s view? In reply, he elaborates on what he calls “the menu model”: choosing stances is like choosing food from a menu. As “responsible diners” should seek out information about which meal will satisfy most of their desiderata concerning food, so responsible epistemic practitioners should determine which “epistemic option” best satisfies their “specific wants, needs, and values”. He then tries to substantiate all this by looking into how various philosophical theories of science played out in late nineteenth-century electrodynamics. His key thought is that by using historical methods, it can be shown that “certain epistemic stances facilitate success in certain forms of scientific inquiry more readily than other stances”.

In his “Physicalism as an empirical hypothesis”, David Spurrett engages with van Fraassen’s critique of materialism. According to van Fraassen, materialism involves “false consciousness” since it fails to exclude any kind of theories. On van Fraassen’s view, materialism is not a cognitive position with empirical content, but instead it is “a stance”, “an attitude” which is characterized by “deference to the current content of science (whatever that content is) in matters of Ontology”. As Spurrett reads van Fraassen, “The materialist (...) believes whatever science currently says about what there is, and counts whatever that is, as material”. The problem with this view, van Fraassen notes, is that materialism is immune to repudiation: if there are phenomena which point to its falsity, materialism is reformulated so that the recalcitrant phenomena end up material. It is precisely this point that Spurrett intends to rebut: “key physicalist commitments can be, and have been, formulated in ways that have sufficient empirical content for their purposes”. The key materialist commitment is the causal closure of physics. Spurrett argues that this claim “has empirical content, and (...) evidence could defeat it”. If there are fundamental mental, or vital, etc., entities or processes irreducibly contributing to fixing the chances of physical occurrences, then the core materialist claim is defeated.

Going briefly over some episodes in the history of science, Spurrett stresses that there is no evidence from the history of science for the claim that materialism has accommodated changes in the physical description of the world “that undermine the minimal completeness of physics”. Hence, Spurrett concludes, van Fraassen has not shown that materialism is just a stance not amenable to empirical undermining or support.

Nora Berenstain, in her “The applicability of mathematics to physical modality”, discusses the metaphysical implications of the indispensability argument for modality and necessity in the world. If the indispensability argument for mathematical realism and the no-miracles argument for scientific realism are put together, Berenstain argues, it follows that there are non-trivial relations of metaphysical dependence of the physical structure of the world on the mathematical structures in science: “mathematical structures cannot be explanatory unless they bear some determination relation to the observable structures they are taken to explain”. She offers four cases of modal physical structures, where a modal physical structure is “a web of relations of nomological necessity that hold among the various entities and properties that form a physical system or phenomenon”. These physical structures are represented by mathematical structures in such a way that either novel predictions follow or mathematical explanations of empirical phenomena are obtained. The key claim then is that in all four cases “mathematical structures and relations are indispensable to our scientific theories and explanations [of] robustly modal features of the physical world”. In this sense, it can be argued, physical modality is grounded in mathematical modality. Actually, Berenstain makes the stronger point that “the modal structure of the empirical system metaphysically depends on that mathematical structure”, and that this dependence explains why “we are able to make inferences from features about a certain mathematical structure to consequences about an empirical system”.

What exactly is this relation of dependence of modal facts about physical systems on underlying mathematical structures? Berenstain tries various ideas (grounding, supervenience, instantiation and identity). She favours instantiation (modal physical structures instantiate mathematical structures) because she argues that the modal features of mathematical structures instantiated by physical structures limit the possible features of the physical systems.

Finally, in “Last thoughts on new thinking about scientific realism”, which is the concluding essay of the volume, Anjan Chakravartty weaves the various papers together, discusses critically the various arguments and perspectives and reflects on their novelty and their promise for the future study of scientific realism.

Undoubtedly, the papers in this special issue show that the scientific realism debate has taken new twists and turns and has become richer and more nuanced.