1 Introduction

Logic isn’t special. Its theories are continuous with science; its method continuous with scientific method. Logic isn’t a priori, nor are its truths analytic truths. Logical theories are revisable, and if they are revised, they are revised on the same grounds as scientific theories.

These are the tenets of anti-exceptionalism about logical theories, a position that has its most famous proponent in Quine (1951). In recent years, broadly anti-exceptionalist positions have been defended by Maddy (2002), Russell (2014, 2015), and Williamson (2007, 2013a, b, 2015).Footnote 1 Both Maddy and Williamson are classical anti-exceptionalists: they follow Quine in arguing that anti-exceptionalism provides a justification for classical logic. Their claim is that although logical theories are in principle revisable, the relevant evidence supports retaining classical logic. However, the connection between anti-exceptionalism and classical logic has not gone undisputed. Priest (2006a, 2014), for example, argues for nonclassical logic on anti-exceptionalist grounds. He shares the anti-exceptionalist tenets, but insists that classical logic ought to be revised. Priest is, in other words, a nonclassical anti-exceptionalist.

Priest and Williamson are not only both anti-exceptionalists, they have also defended accounts of theory selection in logic that are remarkably similar, at least at the surface level. The short version is that theories of logic, not unlike scientific theories in general, are chosen on the basis of abductive arguments, that is, inference to the best explanation. They even agree to a large extent on the specific selection criteria, and nonetheless they reach incompatible conclusions.

In what follows I side with Priest: abductivism about logic does not lead to classical logic. It does not follow, however, that abductivism supports a specific nonclassical logic. Instead, the anti-exceptionalist should endorse a form of logical pluralism. The contention is that the pluralism I defend better accommodates the evidence Priest and Williamson offer for their respective theories. Priest himself has argued against certain types of logical pluralism (cf. Priest 2006a), but does not think that pluralism is in general incompatible with his abductivism (Priest 2016, 9). However, the logical pluralism I advocate differs not only from that considered by Priest; it also differs in important ways from other forms of pluralism in the literature (e.g. Beall and Restall 2006; Hjortland 2012; or Shapiro 2014).

To assess the abductive arguments we first need to get straight on some of the anti-exceptionalist assumptions. What is a logical theory for Priest and Williamson, what are logical theories theories of, and what constitutes evidence for such theories? How should the selection criteria for logical theories be articulated and weighted? In Sect. 2 I discuss the abductive methodology of anti-exceptionalism in more detail. In Sect. 3 I present a number of objections against Williamson’s deflationary account of logical theories, and subsequently, in Sect. 4, I introduce an alternative non-deflationary account. I then discuss some of the proposed selection criteria for logical theories in Sect. 5. In Sect. 6, I argue that Williamson’s abductive argument for classical logic fails, and, finally, in Sect. 7, logical pluralism as an alternative is developed and supported.

2 Priest and Williamson on abductivism about logic

For the exceptionalist, logic is special. There are a number of ways in which logic can be special, but for our purposes the central exceptionalist claim is that the justification of logical theories is a priori.Footnote 2 Since anti-exceptionalists reject apriorism, they need an alternative story about how logical theories are supported. This is precisely the challenge that Williamson (2015) attempts to answer.

[W]e can use normal scientific standards of theory comparison in comparing the theories generated by rival consequence relations. Thus the evaluation of logics is continuous with the evaluation of scientific theories, just as Quine suggested [...]. (Williamson 2015, 14)

If the anti-exceptionalist is right, the methods of logic are continuous with the methods of science. Anti-exceptionalism defers to the standards of scientific method, but that raises an immediate question: what is the scientific method in question? Williamson offers the following answer:

[S]cientific theory choice follows a broadly abductive methodology. [...] Scientific theories are compared with respect to how well they fit the evidence, of course, but also with respect to virtues such as strength, simplicity, elegance, and unifying power. We may speak loosely of inference to the best explanation, although in the case of logical theorems we do not mean specifically causal explanation, but rather a wider process of bringing our miscellaneous information under generalizations that unify it in illuminating ways. (ibid.)

With his list of criteria for theory selection in logic, Williamson finds an unlikely ally in Priest (2006a, 2014).Footnote 3 They hold opposing positions in many philosophy of logic debates, e.g. on semantic paradoxes and vagueness. Priest is a paraconsistentist—he rejects the law of explosion and with it classical logic. He is fully committed to the revisability of logic, in general, and to the revision of classical logic in particular. It is all the more surprising, therefore, that their views on logical methodology overlap substantially. Priest, like Williamson, is an anti-exceptionalist, although one less impressed with the heritage from Quine:

[I]t might be thought that there is something special about logic which makes it different. [...] I will argue that there is not. Logic is revisable in just the same way that any other theory is. (Priest 2006a, 156)

For Priest, rational revision of a theory proceeds according to the same criteria, regardless of what the theory is a theory of. The criteria he has in mind are precisely those touted by Williamson as characteristic of the scientific method.

Given any theory, in science, metaphysics, ethics, logic, or anything else, we choose the theory which best meets those criteria which determine a good theory. Principal amongst these is adequacy to the data for which the theory is meant to account. In the present case, these are those particular inferences that strike us as correct or incorrect. This does not mean that a theory which is good in other respects cannot overturn aberrant data. As is well recognised in the philosophy of science, all things are fallible: both theory and data. Adequacy to the data is only one criterion, however. Others that are frequently invoked are: simplicity, non-(ad hocness), unifying power, fruitfulness. (Priest 2014, 217)

Priest (2016) elaborates by offering a formal model of theory selection.Footnote 4 Suppose we have a list of criteria for theory selection \(c_1,\ldots , c_n\). For each criterion \(c_i\), we can score a theory T on a scale between \(-10\) and \(+10\) with a measurement function \(\mu _{c_{i}}\). A logical theory T might for instance score well on simplicity c (\(\mu _{c}(T) = +7.5\)), but poorly on unifying power \(c'\) (\(\mu _{c'}(T) = -6\)). The criteria might also differ in importance for the selection task. This is reflected by assigning a weight \(w_{c}\) to each criterion c (on the same scale). Priest then defines a rationality index \(\rho (T)\) for a theory T as the weighted sum of its criteria scores:

$$\begin{aligned} \rho (T) = w_1\mu _{c_{1}}(T) +\cdots + w_n\mu _{c_{n}}(T) \end{aligned}$$

In a disagreement over logical theories, then, we ought to prefer the theory with the highest rationality index. If two parties agree on the criteria and the weights, that might be a straightforward decision. But Priest also warns that even under such favourable circumstances there may be no single theory that scores better than every other. If two or more theories receive the same score, we have an undesirable but familiar outcome: it is indeterminate which theory is the better one. None of this will deter the anti-exceptionalist. It is merely another way in which logical theories have no epistemological privilege.
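To fix ideas, here is a minimal sketch of the index in Python; the criteria, weights, and scores are invented purely for illustration and are not drawn from Priest.

```python
# A toy implementation of Priest's rationality index:
#   rho(T) = w_1 * mu_1(T) + ... + w_n * mu_n(T)
# Scores and weights are on a scale from -10 to +10.

def rationality_index(scores, weights):
    """Weighted sum of a theory's scores on each criterion."""
    return sum(weights[c] * scores[c] for c in weights)

# Invented weights and scores for two hypothetical theories.
weights = {"adequacy": 9, "simplicity": 5, "unifying_power": 6}
theories = {
    "T1": {"adequacy": 6, "simplicity": 8, "unifying_power": 7},
    "T2": {"adequacy": 8, "simplicity": 4, "unifying_power": 6},
}

for name, scores in theories.items():
    print(name, rationality_index(scores, weights))
# Nothing in the model guarantees a unique winner: two theories
# can receive the same index, leaving the choice indeterminate.
```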

The details of the model won’t concern us here. As Priest points out, such a model can be devised in any number of ways. It should be clear, however, that even if we agreed on the general outline of a model for theory selection, we need not agree on the criteria or their weights. Indeed, this is where the agreement between Priest and Williamson ends. The similarity between their methods is potentially misleading. All the proposed criteria require a great deal of unpacking in order to be applied to logical theories, and neither of them has given an exhaustive list of criteria.

It is also an overarching problem that we cannot articulate and weight the criteria before we know the answer to two more basic questions: First, what is a logical theory a theory of? That is, what is the thing we are trying to explain? Second, given an answer to the first question, what counts as evidence for a logical theory? Only when we have answered these questions, are we in a position to discuss the result of abductivism.

3 A deflationary account of logical theories

We might suspect Priest and Williamson of using the abductive method for different, and irreconcilable, purposes. Imagine for example two logicians, one who wants to give a descriptive theory of how people actually reason deductively, and another who is giving a normative theory of how people ought to reason deductively. These two logicians are giving theories explaining different things, and to the extent that the abductive method applies in both cases, it is no surprise that they end up with incompatible results.

Are Priest and Williamson theorizing about different phenomena? Priest (2006a) is concerned with the normative project described above, but it is not clear that Williamson’s aim is the same. Williamson (2015) develops a deflationary anti-exceptionalist account of logical theories.Footnote 5 The theory has no overt normative component, nor does it purport to describe the psychology of deductive reasoning. Instead Williamson thinks of a logical theory as a theory of unrestricted generalizations. These generalizations are not specifically about properties of arguments, sentences, or propositions; they are generalizations about absolutely all things in the world. As such, a logical theory is more akin to a scientific theory. It describes some aspects of the world; it just happens to describe the most universal aspects of the world.

There are three important features of Williamson’s logical theories that we should keep in mind:

  1. Unrestricted generalization: A logical theory consists of sentences that are unrestricted universal generalizations.

  2. Universal closure: The unrestricted universal generalizations in question are universal closures of valid arguments.

  3. Non-metalinguistic: Truths of logical theories are not about language or about concepts. They are about the world.Footnote 6

Williamson stresses that ‘[s]uch an investigation is not semantic or epistemological in any distinctive sense. It is more like an investigation in mathematics or physics, an attempt to determine which relevant principles hold’ (Williamson 2015, 7–8). What sets the truths of a logical theory apart is their scope. Unlike truths of chemistry or truths of economics, truths of logic are about absolutely everything.Footnote 7 These unrestricted truths do correspond to valid forms of argument, but it is the truths that are the members of a logical theory. We can identify truths of logic by producing appropriate universal closures of truth-preserving arguments.

Williamson wants to achieve this in a background language, enhanced with higher-order quantifiers and a detachable conditional:

Investigating which sentences of [a language] L are logically true is tantamount to trying to decide universal generalizations of \(L^{+}\) not containing non-logical constants. (ibid.)

Let us consider the proposal in some more detail. Take for instance the law of double negation elimination, i.e. \(\lnot \lnot A\,\models\,A\). The classical logician wants to include it in her theory. Williamson suggests a process that consists of replacing all non-logical constants with variables and universally binding them (potentially with higher-order quantifiers). In order to define the universal closure, the argument first has to be given in a corresponding theorem form with an appropriate conditional:

$$\begin{array}{lll} \lnot \lnot A\,\models\,A & \leadsto & \models\,\lnot \lnot A \rightarrow A\\ & \leadsto & \forall \phi (\lnot \lnot \phi \rightarrow \phi ) \end{array}$$

The result is a higher-order sentence that is a counterpart of the original argument, albeit in the stronger background language.

Here is another example—disjunctive syllogism—where premises are combined conjunctively:Footnote 8

$$\begin{array}{lll} A \vee B, \lnot A \models B & \leadsto & (A \vee B) \wedge \lnot A\,\models\,B\\ & \leadsto & \models ((A \vee B) \wedge \lnot A) \rightarrow B\\ & \leadsto & \forall \phi \forall \psi (((\phi \vee \psi ) \wedge \lnot \phi ) \rightarrow \psi )\\ \end{array}$$

According to Williamson, an unrestricted generalization like the one above stands out because of its generality, but it has no privileged epistemological or normative status. Truths in a logical theory are as hard-won as any scientific truth, and as subject to revision. They are not in any interesting sense metalinguistic. They are not specifically about sentences and arguments, or the properties of sentences and arguments (e.g. truth and validity).

Consider the original argument \(\lnot \lnot A\,\models\,A\). We could spell this out informally as the claim that A follows from \(\lnot \lnot A\), or that the argument from \(\lnot \lnot A\) to A is valid. More formally: for every model M, whenever \(\lnot \lnot A\) is true in M, A is true in M. These are also logical claims, but they don’t have the format prescribed by Williamson. Claims about validity are claims about arguments, and therefore metalinguistic. The same goes for claims about truth-in-M for sentences.

The deflationary account of logical theories has several problems, the first of which is raised by Williamson himself. The move from a consequence relation \(\models\) to a class of theorems is not always available. In particular, the move isn’t available if the set of premises is infinite: there may be no logical truth that is the counterpart of the valid argument. If that is the case, the theory consisting of unrestricted generalizations will undershoot. There will be valid arguments the theory does not account for.

If the logic in question is compact, there will nevertheless be an argument with a finite subset of the premises that is valid. Since Williamson does not want to rule out non-compact logics, however, he opts for strengthening the background language with infinitary conjunction. That is controversial, but perhaps not as bad as one might first suspect. After all, the language of the logical theory need not be the language we reason in. Granted, there could still be other reasons to object to theories in an infinitary language, but we won’t pursue that discussion here. It should be clear, though, that the introduction of infinitary conjunctions is symptomatic of a limitation of the deflationary approach.

A second problem—also mentioned by Williamson—is that the higher-order universal closure of arguments requires certain assumptions to be in place. There has to be an operational distinction between logical constants and non-logical constants for us to know what the correct closure is. Williamson has a reasonable answer to this. What counts as a logical constant will depend on the language in question, and the purpose of the language. In some contexts identity will be a logical constant, in other contexts the truth predicate is logical. In short, logical constanthood will be as much up for grabs in logical theories as the question about validity.

[F]or present purposes a once-and-for-all criterion [of logicality] is not wanted. Rather, the choice of logical constants is pragmatic. Varying the extension of ‘logical constant’ amounts to varying what one is investigating the general structural features of. (Williamson 2015, 3)

For Williamson, the choice of logical constants is part of the abductive package.

Another assumption that the higher-order quantification requires is that the logic is closed under uniform substitution.Footnote 9 Some would perhaps conclude that systems that do not satisfy uniform substitution are not logics at all, but we must at least acknowledge that there are interesting systems for which it fails, such as dynamic epistemic logics (cf. Ditmarsch et al. 2007). These modal logics are powerful systems with important applications to reasoning about announcements, common knowledge, soft information, and other epistemic phenomena. Any condition that excludes these systems is ill-advised. The general approach should be to find an anti-exceptionalist framework that does not rule out logical theories prior to an abductive argument.

Finally, Williamson’s proposal requires a suitably strong conditional. His procedure starts with a consequence relation, and subsequently yields unrestricted generalizations. In the process, the consequence relation is captured by the conditional of the theory’s background language. However, the conditional in question may not be acceptable to the proponents of the consequence relation.

Consider for example conditionals for which conditional proof (\(\rightarrow\) introduction) or modus ponens (\(\rightarrow\) elimination) fails. The conditional in the Strong Kleene logic fails to satisfy the former, while Priest’s Logic of Paradox fails to satisfy the latter.Footnote 10 These logics are rivals of classical logic in various debates (e.g. about semantic paradoxes), but do not have a class of theorems that corresponds to their respective consequence relation. For Strong Kleene this is obvious. It has no theorems, but a non-empty consequence relation. In contrast, the Logic of Paradox has all the theorems of classical logic, but a weaker consequence relation. In the former case, the logical theory will again undershoot: some valid arguments won’t be captured by the theory. In the latter case, the truths of the logic will not allow us to detach with modus ponens when we know the antecedent for some instance. Either way that is a major restriction on the theory.

Maybe Williamson should simply say: so much the worse for nonclassical theories. But despite his preference for classical logic, Williamson takes the challenge of rival logical theories seriously. Nonclassical theories, he maintains, cannot be discarded wholesale, nor should they be ruled out by a narrow conception of what counts as a logical theory. Their merits and demerits must be assessed in each individual case. That is already an important concession to the nonclassical logician, one that sets Williamson apart from other classicists with less sympathy for classical logic’s rivals.Footnote 11 A telling example is his own argument for epistemicism about vagueness, a theory that is staunchly classical, but that is scrutinized in competition with its nonclassical rivals (e.g. continuum logic, supervaluationist logic). Nonclassical logics are not ruled out of the debate for any antecedent reasons. They participate, if not on equal footing, then as upstarts.

In order to ensure a more neutral anti-exceptionalist framework for logical theories, Williamson (2015, 13–14) therefore outlines a second approach to logical theories. Instead of insisting on theoremizing a consequence relation, he suggests that the anti-exceptionalist should compare logical systems in the form of consequence operators. A consequence operator Cn, for some consequence relation \(\models\), is defined as:

$$\begin{aligned} Cn(\Gamma ) = \{A \mid \Gamma\,\models\,A\} \end{aligned}$$

Williamson claims that the comparison between consequence operators can be carried out in a deflationary manner. The reason is that the resulting theory, \(Cn(\Gamma )\), is not modal or metalinguistic in any interesting sense. If the antecedent theory \(\Gamma\) is non-metalinguistic, \(Cn(\Gamma )\) will by and large also be non-metalinguistic.

Williamson imagines using an independently well-confirmed theory as \(\Gamma\) (e.g. from physics), and subsequently comparing rival consequence operators with respect to the outputs \(Cn_{1}(\Gamma ), Cn_{2}(\Gamma ), Cn_{3}(\Gamma )\), etc.Footnote 12 We can then compare the rival consequence operators by comparing the non-metalinguistic truths that they output. If a Cn delivers a sentence A that we have independent reason for thinking is false, from premises we think true, that counts as (defeasible) evidence against the consequence operator.
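As a toy illustration of how rival consequence operators can come apart on the same input, here is a sketch in Python comparing a classical operator with one based on Priest’s Logic of Paradox (LP), over a miniature propositional language. The premise set is chosen to make the divergence vivid; it is not meant to model a well-confirmed scientific theory.

```python
from itertools import product

# Formulas as nested tuples: ('atom', 'p'), ('not', f),
# ('and', f, g), ('or', f, g).
# Truth values: 0 (false), 1 (true); LP adds 0.5 ('both').

def val(f, v):
    """Strong Kleene / LP truth tables (classical on {0, 1})."""
    if f[0] == 'atom':
        return v[f[1]]
    if f[0] == 'not':
        return 1 - val(f[1], v)
    if f[0] == 'and':
        return min(val(f[1], v), val(f[2], v))
    return max(val(f[1], v), val(f[2], v))  # 'or'

def cn(gamma, candidates, atoms, values, designated):
    """Cn(gamma), restricted to a finite list of candidates."""
    out = []
    for c in candidates:
        if all(val(c, dict(zip(atoms, vs))) in designated
               for vs in product(values, repeat=len(atoms))
               if all(val(g, dict(zip(atoms, vs))) in designated
                      for g in gamma)):
            out.append(c)
    return out

p, q = ('atom', 'p'), ('atom', 'q')
gamma = [p, ('not', p)]  # an inconsistent premise set

# Classical consequence: explosion, so q is in Cn(gamma).
print(cn(gamma, [q], ['p', 'q'], (0, 1), {1}))
# LP consequence: q is not in Cn(gamma).
print(cn(gamma, [q], ['p', 'q'], (0, 0.5, 1), {0.5, 1}))
```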

But Williamson’s modified account of logical theories still contains a serious bias. To see this, just consider supervaluationist logic. For classical consequence \(\models _{CL}\) and supervaluationist consequence \(\models _{SV}\) we have \(Cn_{CL}(\Gamma ) = Cn_{SV}(\Gamma )\) for every \(\Gamma\). A fortiori, there will be no difference between their logical predictions for sets of well-confirmed sentences. Nor will it be an option to simply reject supervaluationist logic outright. After all, it is precisely one of the nonclassical logics that Williamson (1994) himself considers a serious, if ultimately flawed, candidate for a logic of vagueness.

The point is that supervaluationist logic does differ from classical logic, but the difference is not captured by the consequence relation between a set of premises and a conclusion. If we move to a multiple conclusion consequence relation, however, the two logics are no longer equivalent. Where \(\Gamma , \Delta\) are two sets of sentences, the supervaluationist consequence relation \(\Gamma \models _{SV} \Delta\) can fail even when its classical counterpart \(\Gamma \models _{CL} \Delta\) holds. If we are willing to accept multiple conclusion consequence, we can define corresponding consequence operators:

$$\begin{aligned} Cn'(\Gamma ) = \{\Delta \mid \Gamma \models \Delta \}. \end{aligned}$$
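To see the two relations come apart, note that \(\models _{CL} \{p, \lnot p\}\) holds even with an empty premise set, since every classical valuation makes at least one of the two conclusions true. On the usual global understanding of supervaluationist consequence, by contrast, \(\not \models _{SV} \{p, \lnot p\}\): in a model where p is borderline, p is true on some precisifications and \(\lnot p\) on others, so neither conclusion is supertrue.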

Alternatively, the difference can be captured in another generalization of the consequence relation. We not only consider the logic as a class of valid arguments, but as a class of valid meta-arguments of the form:

$$\begin{aligned} \frac{\Gamma _1 \models A_1 \quad \cdots \quad \Gamma _n \models A_n}{\Pi \models B} \end{aligned}$$

A number of important classical principles are meta-arguments in this sense, for example standard natural deduction rules such as reductio ad absurdum or conditional proof. Proof-theoretically we say that they are hypothetical rather than categorical inference rules. That is, they rely on an assumption that is discharged when the conclusion is introduced:

$$\begin{aligned} \frac{\Gamma , \lnot A\,\models\,\bot }{\Gamma\,\models\,A} \qquad \frac{\Gamma , A\,\models\,B}{\Gamma\,\models\,A \rightarrow B} \end{aligned}$$

As it happens, both these meta-arguments are invalid in supervaluationist logic.Footnote 13 (The standard counterexamples assume the language contains a ‘definitely’ operator D: globally, \(p \models _{SV} Dp\), yet \(\not \models _{SV} p \rightarrow Dp\), so conditional proof fails; and \(\lnot (p \wedge \lnot Dp)\) is not supertrue in a model where p is borderline, even though \(p \wedge \lnot Dp \models _{SV} \bot\), so reductio fails.) Hence, although the class of valid arguments is co-extensive with that of classical logic, the same does not hold for meta-arguments.Footnote 14 Simply comparing the single-conclusion consequence relation won’t do. Now the Cn operator will have to be further generalized. Let \(Cn^{*}\) be an operator that takes arguments as inputs and yields a set of arguments as output:Footnote 15

  • \(Cn^{*}(\langle \Gamma , A\rangle ) = \{\langle \Pi , B\rangle \mid \Gamma\,\models\,A \Rightarrow \Pi\,\models\,B \}\).

The question is whether the generalization \(Cn^{*}\) of the consequence operator still provides a comparison of logical theories that is non-metalinguistic. One immediate problem is that the operator \(Cn^{*}\) does not output the right form of generalizations. It is not sufficient to assume some theory \(\Gamma\)—well-confirmed or not—and see what follows. We have to assume that a number of arguments are valid, and then see what other arguments follow. But comparing classes of arguments with respect to validity is a metalinguistic affair. The upshot is that Williamson’s model for comparison of logical theories has to be extended, and it has to be extended in ways that put pressure on the claim that the comparison can be non-metalinguistic.

4 A non-deflationary alternative

Let us take a step back. The overarching problem is that Williamson’s deflationary account of logical theories is too narrow. Fortunately, the anti-exceptionalist is not committed to a deflationary account of logical theories. A logical theory should not be a theory of what holds of absolutely everything. What should it be about, then? Here is Priest on the content of logical theories:

The central notion of logic is validity, and its behaviour is the main concern of logical theories. Giving an account of validity requires giving accounts of other notions, such as negation and conditionals. Moreover, a decent logical theory is no mere laundry list of which inferences are valid/invalid, but also provides an explanation of these facts. An explanation is liable to bring in other concepts, such as truth and meaning. A fully-fledged logical theory is therefore an ambitious project. (Priest 2016, 8–9)

Priest defends what I will call the non-deflationary account of logical theories. According to this view, a logical theory should be about validity, consistency, formality, truth preservation, provability, among other things. After all, these are the properties debated by philosophers of logic. Claims about which arguments are valid, what follows from a contradiction, what is provable and refutable, and so on, form the central content of logical theories. Indeed, the disagreements between Priest and Williamson are testimony to this.

Claims of unrestricted generality are of course not unrelated to claims about validity, but it is not at all obvious that validity can be exhaustively captured in terms of unrestricted generalizations. That is a contentious view on the metaphysics of logic. Maybe the position has merit, but it should not be an assumption required for the anti-exceptionalist’s abductive method. If unrestricted universal generalizations cannot account for important disagreements in the philosophy of logic, so much the worse for the deflationary account of logical theories.

So far we only have a rough idea of what a non-deflationary theory looks like. In what follows I want to develop a more precise account that can be exploited to discuss the consequences of abductivism. Let us start with a simple example. The law of double negation elimination (DNE) is controversial in the philosophy of logic—classicists like Williamson accept it, intuitionists and other paracompletists reject it. We should expect, therefore, that logical theories differ on the following claim:

  (1) The law of double negation is valid.

Of course, what is at stake is not whether DNE is classically valid or intuitionistically valid. There is no disagreement about that. Rather, the parties disagree about whether DNE is genuinely valid.Footnote 16 A further complication is that it isn’t clear that the parties agree on what genuine validity is, nor that they agree on the content of the logical expressions occurring in DNE.

But this is not a problem particular to logical theories. The same problem occurs in other sciences, for run-of-the-mill scientific terms, without a paralyzing worry that the disagreement is verbal or otherwise insubstantial. We should not conclude that disagreements about validity, negation, or truth preservation are mere verbal disputes.

The classicist and the intuitionist disagree about whether (1) is true, and, as a consequence, about whether it should be included in the best logical theory. In order to mirror Williamson’s deflationary account of logical theories, we can express the sentence (1) in a formal metalanguage—the language of the logical theory. For example:

  (2) \(\forall x (Sent(x) \rightarrow Val(\dot{\lnot }\dot{\lnot } x, x))\)

The predicate Sent is interpreted as ‘is a sentence’, and Val(x, y) as ‘the argument from x to y is valid’. The function \(\dot{\lnot }\) takes a Gödel code for a formula to the code for its negation. An important difference between the language of the deflationary and the non-deflationary theory is that the latter now distinguishes between validity and the conditional. Moreover, if needed it can distinguish between the conditional \(\rightarrow\) of the background language and a theorized object-language conditional COND(x, y).

Priest and Williamson happen to agree that DNE is valid, but disagree about other things. For example, Priest thinks that the law of explosion is invalid; Williamson disagrees. That is, they disagree about the status of claims of the following form:

  (3) \(\forall x (Sent(x) \rightarrow Val(\ulcorner {A \wedge \lnot A}\urcorner ,x))\)

where \(\ulcorner {A \wedge \lnot A}\urcorner\) is the Gödel code for a contradiction \(A \wedge \lnot A\).

Similarly, a number of other claims about validity and related properties are up for grabs in logical theories. One key connection that a logical theory ought to pronounce on is that between validity and truth. Although truth preservation is often seen as a definitional part of validity, the connection has been forcefully contested by Field (2008, 2015). In other words, a logical theory may or may not contain claims of the following form:

  (4) \(\forall x \forall y (Sent(x) \wedge Sent(y) \rightarrow (Val(x,y) \rightarrow (True(x) \rightarrow True(y))))\)

One might furthermore disagree about the connection between truth and negation:

  (5) \(\forall x (Sent(x) \rightarrow (True (\dot{\lnot } x) \rightarrow \lnot True(x)))\)

In these examples the claims are overtly about validity, truth, negation and so forth. A logical theory consisting of claims of this sort is non-deflationary, and thus at odds with Williamson’s line.

There are several key differences. First, the claims of a non-deflationary theory are not unrestricted—they are restricted generalizations. They are claims, say, about all sentences, all negations, or all contradictions. That should not worry the anti-exceptionalist, however, since she need not be committed to the unrestricted nature of logical claims. Other sciences deal in restricted generalizations, so why should logic be any different? One might reply that the truths of logic are supposed to be universal, but it is not clear why an anti-exceptionalist should agree to this. And even if some sort of universality is a desideratum for a logical theory, there are other ways of spelling out the universality than in terms of unrestricted generalization. More on that below.

Second, the claims above are non-deflationary in the sense that they are metalinguistic. They are specifically about sentences and properties of sentences (albeit mediated by the Gödel coding). Why is this at cross-purposes with anti-exceptionalism? Presumably the thought is that an anti-exceptionalist logical theory should be about the world, not about linguistic or conceptual properties. That is in my opinion too restrictive. An anti-exceptionalist can—and should—accept that the content of a logical theory is in part linguistic or conceptual. What the anti-exceptionalist denies is that the linguistic or conceptual content provides a priori access to logical knowledge, for instance because the claims are analytic. But it doesn’t follow from the content of a logical theory being linguistic or conceptual that we come to know it a priori. A logical theory with restricted generalizations like (2) and (4) must ultimately be justified by inference to the best explanation, regardless of the metalinguistic content.Footnote 17

An important lesson from the objections to Williamson’s deflated logical theories is that logic neutrality is hard to achieve, if not chimerical. And in this respect the non-deflationary theories do no better.Footnote 18 Different logical theories will say different things about validity, consistency, and negation. But, equally important, they all have to say these things in a metalanguage with an associated logic. A non-deflationary logical theory is a far cry from neutral ground. The quantifiers and the connectives of claims (2) and (4) cannot be given a logic that is acceptable to everyone in the debate, and yet strong enough to serve the theoretical purpose. So, whatever the language of the theory is, it will be biased.

That is unfortunate for the purposes of theory selection in logic, but it does not mean that logical theories cannot be revised. The anti-exceptionalist should give up on logic neutrality, and concentrate on how revision of logical theories can happen despite initial bias.

Here the non-deflationary theories have an advantage over the deflated theories. Since the non-deflationary theories distinguish between the validity predicate and the conditional ‘\(\rightarrow\)’ of the theory’s language, a theory that revises the validity predicate does not thereby also revise the conditional. That is not to say that the two are unconnected, but revision of a theory can start by rejecting old claims about validity (or accepting new ones), and only later investigate the consequences for the conditional or other expressions of the theory, or the other way around. In general, the revision happens stepwise, as one gradually realizes the consequences of previous changes.

5 The abductive criteria

5.1 Fit with the evidence

Following Priest, I hold that logical theories are, first and foremost, theories about validity. I have outlined one way such non-deflationary theories might be formalized.Footnote 19 I have also argued that on my non-deflationary view, logical theories differ in a number of ways from Williamson’s vision. First, the non-deflationary theories are explicitly about logical properties. They include an account of which arguments count as valid, but not only that. They include claims about consistency, truth preservation, provability, and other things. Second, they are restricted, not unrestricted. And, finally, the non-deflationary theories involve metalinguistic claims, but without enabling an a priori epistemology.

This brings us to the next question. How are the theories justified? Following the abductivism discussed above, it is natural to think that logical theories are justified, in part, by the available evidence. But what constitutes evidence for the claims of a logical theory? And is a classical theory a better fit with the evidence than a nonclassical theory?

For most scientific theories the observational data forms the bulk of the evidence.Footnote 20 As we saw in the case of quantum logic, observational data might provide reason to revise a logical theory. But that is only part of the story. The evidence for a logical theory can come from a number of sources: from intuitions about validity or alethic modality, from mathematical theories and practice, from the psychology of reasoning, from epistemic norms of rationality, and so on. Priest (2016, 9–10), for example, stresses the importance of our intuitions about what follows from what in natural language arguments. Of course, these intuitions only serve as highly defeasible evidence, and are often overridden by theoretical considerations. Consider, for instance, the case of arguments with vague expressions. Both Priest and Williamson think that theorizing about vagueness provides evidence for a logical theory. Since what they claim to know about vagueness differs in important ways, these considerations point them in different directions.

I will not attempt to assign weight to the various types of evidence. For our purposes, I just want to mention one major source of evidence that will be important in what follows: theories of truth. There are several reasons why truth is especially important for a logical theory. One is the connection between truth preservation and validity, expressed, for example, in (4) above.Footnote 21 Another is the fact that theories of a paradox-free truth predicate have provided arguments for revising classical logic, for example in favour of a paracomplete or paraconsistent logic. Priest and Williamson agree that the debate about truth is a decisive theatre for the rivalry between classical and nonclassical logicians. Williamson even concedes that ‘the case against classical logic from the semantic paradoxes is better than most cases against classical logic’ (Williamson 2015, 21). In other words, evidence from our best theory of truth—whatever it is—will be of paramount importance to logical theories. It will be no surprise, then, that the relationship between truth and logic plays a significant role in Williamson’s abductive argument for classical logic (see Sect. 6).

Even if we can figure out what the relevant evidence for a logical theory is, another problem remains. Fit with the evidence is one important condition for a successful abductive argument. But in the case of logical theories, it is not sufficiently clear what fit with the evidence amounts to. It would be bad for a theory to be inconsistent with the evidence, but consistency itself is precisely one of the things that is up for grabs in a logical theory. It cannot merely be a matter of avoiding contradictions, as some of the candidate theories embrace selective contradictions. Williamson’s (2015) solution is to insist on a weaker condition—non-triviality. Classically, non-triviality is equivalent to non-contradiction, but that is not the case in paraconsistent theories, where the law of explosion fails.
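To put the weaker condition formally, in the Cn notation from Sect. 3 (a convenient gloss, not Williamson’s own formulation): a theory \(\Gamma\) is trivial just in case \(Cn(\Gamma )\) contains every sentence of the language, whereas it is inconsistent just in case \(Cn(\Gamma )\) contains some A together with \(\lnot A\). Given explosion, \(A, \lnot A\,\models\,B\), the two conditions coincide; without explosion, a theory can be inconsistent yet non-trivial.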

However, consistency is not the only logical property that matters in an account of evidential confirmation. Hypothetical-deductive theories of confirmation, for example, are directly sensitive to an underlying consequence relation. So are Bayesian theories of confirmation, where classical probability theory presupposes classical logic, but nonclassical generalizations of the probabilistic axioms build on nonclassical logics (Paris 2001; Williams 2011; Field 2015). The trend is that the notion of confirmational coherence can be generalized relative to a notion of logical consequence.
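To give this one concrete shape (roughly following Paris 2001; exact formulations vary across the literature), a probability function P relative to a consequence relation \(\models\) can be required to satisfy:

$$\begin{aligned}&\text {(i) if } \models A \text { then } P(A) = 1\text {; if } A \text { is unsatisfiable, then } P(A) = 0\\&\text {(ii) if } A\,\models\,B \text { then } P(A) \le P(B)\\&\text {(iii) } P(A) + P(B) = P(A \vee B) + P(A \wedge B)\end{aligned}$$

With classical \(\models\) these constraints recover the standard probability axioms; plugging in a nonclassical \(\models\) yields a correspondingly nonclassical standard of credal coherence.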

What does that mean for the abductive strategies? It means, unsurprisingly, that the abductive criterion of fit with the evidence is not logic neutral. As Priest puts it for his model of theory choice: ‘In a choice situation, we already have a logic/arithmetic, and we use it to determine the best theory—even when the theory under choice is logic (or arithmetic) itself’ (Priest 2016, 17). As a result, theory selection is always done against the background of a prior logic—justified or not. An abductive argument for a logical theory might therefore have an underlying theory of evidential confirmation that is biased. The anti-exceptionalists will just have to live with that. An abductive argument for a logical theory will inevitably presuppose some laws of logic, but that is not incompatible with revision of logic. Not all laws of logic can be up for revision simultaneously, nor need they be. The anti-exceptionalist only needs to hold that no law of logic is beyond revision.

5.2 Strength

Scientific theories are scored on other dimensions besides fit with the evidence. One that is often mentioned in the context of logical theories is strength. But what is strength in the context of a logical theory? There can be no doubt that when logicians talk about the strength of a logic, the standard meaning is deductive strength. It is problematic, however, to translate talk about deductive strength into talk about the strength of a theory. What, for instance, is the connection between deductive strength and explanatory power? Deductive strength is important, but it is not the only type of strength that matters.

Williamson recognizes that the issue of strength is complex, but he nonetheless thinks that deductive strength is a key measure of a logical theory.

In discussion of alternative logics, it is not always recognized that strength is a strength, in logical theories as in others. One often encounters various forms of exceptionalism about logic, according to which weakness is a strength in logic, because weak logics leave open more possibilities, prejudge fewer issues, and achieve higher levels of neutrality. (Williamson 2015, 18)

There is plenty of room to disagree with Williamson on anti-exceptionalist grounds. The anti-exceptionalist should accept part of his claim: weaker consequence relations do not prejudge fewer issues or achieve higher levels of neutrality. Classical or nonclassical—logic isn’t neutral.

But let us consider why the nonclassical logician opts for a weaker consequence relation. More often than not it is because a deductively weaker logic is required to consistently accommodate a new expression. The unrestricted T-schema is one example, set theoretic abstraction is another. Nonclassical consequence relations are not weak because they are neutral, they are weak because they can be consistently conjoined with theories that trivialize classical logic. At least in principle, one of these theories (e.g. about the truth predicate) can be independently well-confirmed, and therefore serve as evidence against classical logic.

That is just a way of saying, pace Williamson, that nonclassical logics do indeed leave open more possibilities. In fact, the possibility of further consistent theories is the very raison d’être for nonclassical logics, be it for vague expressions or set theory. Only when the nonclassical logic is extended in this sense do we have the basis for a theory that can be sensibly compared with classical alternatives. Importantly, the extended theories are not simply deductively weaker than classical theories: they axiomatize expressions in ways incompatible with them. So, even if deductive strength matters, nonclassical logics are not unambiguously weaker than classical logic.

Nor is deductive strength the only formal strength measure we should care about. Another important feature of a logical system is its expressive power or discriminatory power. Logics differ in what they can talk about. Some logics can characterize structures that other logics cannot. One example is the difference between first-order and second-order logic. Although expressive strength can come with a cost (e.g. deductive limitations), it can clearly also be an advantage, not least because expressive strength may improve explanatory strength. A language capable of finer discriminations will prove superior in explaining finer-grained phenomena.

Following Humberstone (2005), we can think of discriminatory power as a logic’s class of synonymous formulae. Two formulae A, B are synonymous over a consequence relation \(\models\) (\(A \equiv _{\models } B\)) iff, for every formula context \(C_{i}(\cdot )\):

$$\begin{aligned} C_{1}(A),\ldots , C_{n}(A) \models C_{n+1}(A) \text { just in case } C_{1}(B),\ldots , C_{n}(B) \models C_{n+1}(B) \end{aligned}$$

That is, the formulae A and B are interchangeable without change in validity across all formula contexts. It follows that, for example, intuitionistic propositional logic has more discriminatory power than classical propositional logic.
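For illustration: \(p \rightarrow p\) and \(p \vee \lnot p\) are classically synonymous, since classically equivalent formulae are interchangeable in all truth-functional contexts. They are not intuitionistically synonymous: taking the empty premise set and the trivial context \(C(x) = x\), we have \(\models _{IPC} p \rightarrow p\) but \(\not \models _{IPC} p \vee \lnot p\). Intuitionistic logic thus discriminates formulae that classical logic conflates.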

More generally, Humberstone shows that for a range of logics (although not all), if \(\models _{1}\) is stronger than \(\models _0\), then \(\equiv _{\models _0}\;\subseteq \;\equiv _{\models _1}\). In other words, logics that are deductively weaker have more discriminatory power. Granted, this does not always hold, but it holds in a number of interesting cases. What it tells us is that not only are there other dimensions of strength than deductive strength; sometimes deductive strength comes at the expense of other types of strength.

Increased expressive power can come with another advantage. Deductively stronger logics can often be translated into deductively weaker logics. The best-known example is that classical propositional logic can be translated into intuitionistic propositional logic, but not the other way around.Footnote 22 There is a sense in which classical propositional logic can be expressed within its intuitionistic counterpart (e.g. by the Gödel–Gentzen translation), whereas the opposite does not hold.
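For the propositional case, one standard version of the negative translation N runs as follows; the key fact is that \(\Gamma \models _{CL} A\) just in case \(N(\Gamma ) \models _{IPC} N(A)\):

$$\begin{aligned} N(p)&= \lnot \lnot p \quad (p \text { atomic})\\ N(\lnot A)&= \lnot N(A)\\ N(A \wedge B)&= N(A) \wedge N(B)\\ N(A \rightarrow B)&= N(A) \rightarrow N(B)\\ N(A \vee B)&= \lnot (\lnot N(A) \wedge \lnot N(B)) \end{aligned}$$

The converse direction has no comparable translation, which is the asymmetry at issue here.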

Williamson does not think we should be impressed by translations between logics, however. They exist, and they are interesting for mathematical reasons, but they should play no role in how we abductively assess a logical theory. The reason he gives for this is that a logical theory, in his sense, is given in an interpreted language (Williamson 2015, 16–17).

But we can agree with Williamson that the consequence relations and the logical theories in general are expressed in an interpreted language, and nonetheless think that translations matter. Suppose that we are assessing a logical theory based on intuitionistic logic with an appropriately interpreted language. Such a theory cannot simultaneously make claims about classically interpreted expressions, but that is not the point. The intuitionistic theory has the resources to express classical reasoning, albeit now couched in a nonclassical interpretation.

In sum, deductive strength is no unqualified advantage for a logical theory. For some purposes deductive strength is great, but it can come at the expense of other important properties. If classical logic is a key component in the best logical theory, that has to be for reasons over and above its deductive strength. And as a matter of fact, Williamson’s abductive argument for classical logic does not rely on deductive strength alone.Footnote 23

6 An abductive argument for classical logic

The main lesson that Williamson draws from his brand of anti-exceptionalism is that classical logic is justified on abductive grounds. He is at pains to stress that his classicism is not based on an assumption of conservatism, i.e. that classical logic has an advantage by being the theory presently entrenched in our best science. Rather, Williamson thinks that classical logic simply scores better in a head-to-head competition with rival logical theories.Footnote 24

Once we assess logics abductively, it is obvious that classical logic has a head start on its rivals, none of which can match its combination of simplicity and strength. [...] The case may indeed be strengthened by reference to the track record of classical logic: it has been tested far more severely than any other logic in the history of science, most notably in the history of mathematics, and has withstood the test remarkably well. Nevertheless, the initial case for classical logic would be quite powerful, even if we had only stumbled across that logic a few weeks ago. (ibid., 19)

Part of Williamson’s strategy is to identify the best reasons for rejecting classical logic, and then to show that classical logic nonetheless ought to be retained. As we anticipated above, Williamson thinks that the most promising case for nonclassical logic is provided by the semantic paradoxes. The dialectic is as follows: classical logic is desirable, but so is the unrestricted truth predicate, i.e. a truth predicate axiomatized by the full T-schema. Given some minimal background assumptions, the T-schema and classical logic are inconsistent.Footnote 25 So, even if we think that both have some pro tanto plausibility, we have to restrict either classical logic or the T-schema.
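The familiar derivation, in outline: by diagonalization there is a ‘liar’ sentence \(\lambda\) such that \(\lambda \leftrightarrow \lnot True(\ulcorner \lambda \urcorner )\). The unrestricted T-schema yields \(True(\ulcorner \lambda \urcorner ) \leftrightarrow \lambda\), and chaining the two biconditionals gives \(True(\ulcorner \lambda \urcorner ) \leftrightarrow \lnot True(\ulcorner \lambda \urcorner )\), which is classically inconsistent.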

Williamson sums it up as follows:

[A]lthough the restriction of classical logic involves a loss of both simplicity and strength, it compensates us by saving the simplicity and strength of unrestricted disquotation. Saving the simplicity and strength of unrestricted classical logic forces us to sacrifice the simplicity and strength of unrestricted disquotation. Which is the better deal? (ibid., 21)

We might interject that there are plenty of other reasons for defending a nonclassical theory. Presumably, these will have to be dealt with independently. In other words, the intuitionist, the relevantist, the supervaluationist, and others who endorse nonclassical logics for other purposes, will not be affected by Williamson’s argument. To some extent he already addresses these other positions elsewhere (e.g. Williamson 1994), although arguably not decisively.

But suppose we grant that the semantic paradoxes are at least one important battleground for nonclassical logic. If it turned out that there is a decisive reason to stay classical, that would be a blow against many nonclassical projects (e.g. Field’s 2008 and Priest’s 2006b theories of truth). Williamson’s argument is supposed to convince us that making changes to classical logic to preserve a principle of truth is, by and large, always a bad idea.

What is distinctively anti-exceptionalist about the argument? Williamson calls his argument an abductive argument. It is not an argument that defends or criticizes individual laws of logic, say the law of excluded middle. Instead, it relies on a cost-benefit analysis of a classical logical theory as a whole. If we want, we can imagine that the argument is an application of a Priest-style formal model of theory selection. Williamson certainly refers to several of the criteria that we have discussed above.

A crucial component of Williamson’s analysis is a ranking of theories according to how fundamental they are. His point is not that some logical theories are more fundamental than others, but rather that whereas a logical theory is fundamental, a theory of truth is not:

[T]he comparison between classical logic and disquotation looks analogous to the contrast between a successful theory in fundamental physics and a successful theory in one of the special sciences, such as economics. Suppose that the economic theory is found to be inconsistent with the fundamental physical theory. Faced with the choice as to which theory to restrict in order to preserve the other unrestricted, which would you choose? It would normally be better to [...] restrict the economic theory in order to preserve the fundamental physical theory unrestricted. By analogy, then, on general methodological grounds it would normally be better to restrict disquotation in order to preserve classical logic unrestricted, and perverse to do the opposite. (ibid., 21–2)

Here we have the kernel of the argument for classical logic. On ‘general methodological grounds’ we ought to revise the less fundamental theory. The methodological principle is defeasible—it can be overruled by other considerations—but Williamson does not find any in this case. On the contrary, he argues that classical logic is fundamental, whereas a theory of truth, and a fortiori the T-schema, is less fundamental.

But why should we agree with the claim that a disquotational theory of truth is non-fundamental? The reason is familiar: disquotational principles of truth are metalinguistic.

[T]he constants at issue in the disquotational principle—the truth predicate, quotation marks—seem to express much less fundamental matters, specific to the phenomenon of language. Thus the comparison between classical logic and disquotation looks analogous to the contrast between a successful theory in fundamental physics and a successful theory in one of the special sciences, such as economics. (Williamson 2015, 21)

In contrast, classical logic is fundamental. Its logical expressions are, according to Williamson, integral to mathematics, and therefore integral to our best scientific theories. Since it has a privileged role in mathematics, revising classical logic has major ramifications for theories in all sciences, and ‘will impose widespread restrictions on its explanatory power’ (ibid., 22). The conclusion is unsurprising: ‘[T]he classical strategy does significantly better, because its abductive costs are restricted to metalinguistic discourse.’ If we revise the T-schema instead of classical logic, we do less damage to our best scientific theories.

The nonclassicist should not be deterred. There are several problems with the argument. In order to see where the nonclassicists can push back, let us first identify the key steps in Williamson’s abductive argument. Roughly, the argument has the following structure:

  P1. It’s better to revise a less fundamental than a more fundamental theory.

  P2. Classical logic is integral to mathematics.

  P3. If classical logic is integral to mathematics, it’s fundamental.

  P4. So, classical logic is fundamental.

  P5. The T-schema is metalinguistic.

  P6. If the T-schema is metalinguistic, it’s not fundamental.

  P7. So, the T-schema is not fundamental.

  C. Therefore, it’s better to revise the T-schema than classical logic.

There are a number of premises worth discussing, but some of them I will simply set aside for now. The notion of fundamentality in play is difficult, and some nonclassicists—and classicists—will simply reject the methodological principle Williamson is relying on. That is, they will reject Premise 1. For the same reason, we need not be convinced by Premise 3. Even if we grant Premise 2, it is not clear that being integral to mathematics entails fundamentality. But I will bracket these issues in what follows. The focus will be on Premise 2, Premise 5, and Premise 6. I will start with Premises 5 and 6, and return to Premise 2 in the next section.

Premise 5 is based on a simple observation. Disquotational theories of truth are normally deflationary. The main idea is that all there is to truth is its role as an expressive device. Williamson concludes that a disquotational theory of truth is essentially metalinguistic: the truth predicate expresses a property of sentences. In contrast, the unrestricted generalizations of a deflationary logical theory express truths about the world. That suggests a hierarchy: theories that are ‘specific to the phenomenon of language’ are less fundamental than theories that are about the non-linguistic world.

Suppose we accept Premise 5 for the sake of argument. Should the nonclassicist accept Premise 6 and the subsequent conclusion? That depends on what one thinks about logical theories. I reject the deflationary account of logical theories, and on the non-deflationary alternative there are explicit connections between truth and validity in the theory itself, e.g. claims (4) and (5) above. In fact, classical logic has particularly tight connections to truth, both because its connectives are truth-functional and because its consequence relation is truth preserving. Indeed, the connection to truth is supposed to be part of the attractiveness of classical logic. But that connection threatens to undercut Williamson’s argument: if the theory of truth is integral to the classical theory, it cannot be less fundamental.

So how integral is truth to a classical theory of logic? Note that the classical logician cannot simply reject the connection by insisting that the logical theory only requires the technical notion of truth-in-a-model. Although the formal semantics do indeed only invoke a set-theoretic surrogate of truth simpliciter, the appeal of classical logic would be severely reduced if truth simpliciter and truth-in-a-model were not appropriately connected. The classical logical theory must connect validity to truth, not only to truth-in-a-model.

If we cannot sufficiently disentangle classical validity and truth, Williamson’s argument falters. We should reject Premise 6. In fact, the connection sheds new light on the very rationale behind the argument—the dilemma between revising truth and revising logic. If validity is genuinely truth preserving, then any theory that purports to express this property must face the semantic paradoxes. Artificial restrictions, such as banning self-reference, will be no more desirable here than in the theory of truth.

And it gets worse. Let us suppose, for the sake of argument, that the classical logician can successfully compartmentalize the theory of truth and the logical theory. There is a more damaging objection left. I have argued that a logical theory is, first and foremost, a theory of validity. The conditional in an unrestricted generalization is not a good substitute for explicit talk of validity as a property. A non-deflationary logical theory will include a validity predicate together with some appropriate axiomatization. The predicate can subsequently be connected to other expressions such as truth, consistency, etc. We have already seen truth preservation as an example above (4). The validity expression also allows us to quantify into predicate position in order to express laws such as explosion (3).

The problem is that validity, not unlike truth, leads to semantic paradoxes when it is expressed as an object language predicate. As argued in Shapiro (2010) and Beall and Murzi (2013), a validity predicate cannot be axiomatized in the most straightforward way without re-introducing Curry-like paradoxes. Let \(Val(x, y)\) be the validity predicate in a sufficiently strong theory. Then the following two natural-looking rules are inconsistent with classical logic:Footnote 26

(VP) If \(A \vdash B\), then \(\vdash Val(\ulcorner A \urcorner , \ulcorner B \urcorner )\).

(VD) \(A, Val(\ulcorner A \urcorner , \ulcorner B \urcorner ) \vdash B\).
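To see how the inconsistency arises, here is a minimal sketch of the v-Curry derivation, reconstructing the argument of Beall and Murzi (2013); the details are my paraphrase, not a quotation. By diagonalization, let \(\pi \) be a sentence interderivable with \(Val(\ulcorner \pi \urcorner , \ulcorner \bot \urcorner )\). Then:

1. \(\pi , \pi \vdash \bot \): one occurrence of \(\pi \) yields \(Val(\ulcorner \pi \urcorner , \ulcorner \bot \urcorner )\) by diagonalization, which together with the other occurrence yields \(\bot \) by (VD).

2. \(\pi \vdash \bot \): from 1 by structural contraction.

3. \(\vdash Val(\ulcorner \pi \urcorner , \ulcorner \bot \urcorner )\): from 2 by (VP).

4. \(\vdash \pi \): from 3 by diagonalization.

5. \(\vdash \bot \): from 2 and 4 by cut.

Notice that beyond (VP) and (VD) the derivation uses only the structural rules of contraction and cut, which is why some substructural theorists locate the blame there rather than in any particular connective.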

Hence, if a logical theory is to express properties of validity, it runs into problems of the very same sort as the theory of truth. The problem of semantic paradox strikes at the very heart of logical theories. As always, where there is paradox, the nonclassical logics follow in its wake. The original dilemma that all parties accepted was whether we revise the theory of truth or the logical theory in the face of paradox. The new dilemma is no different. We either have to reject the simple axiomatization of the validity predicate, or opt for a nonclassical logic.

The argumentative route to classical logic is thus made more difficult. For let us now reconsider the abductive argument. The issue is no longer truth and the T-schema, but rather the axiomatization of the validity predicate. Since it is equally susceptible to paradox, the same dialectic remains: either we restrict classical logic or we restrict the validity predicate. Both options have disadvantages. It won’t do to argue that the theory of validity is less fundamental than classical logic. Granted, the validity predicate is metalinguistic, but it expresses the very concept that logical theories are about.

Let us sum up. Even if we accept that the T-schema is metalinguistic, we should not conclude that the resulting theory of truth is less fundamental than classical logic. Whatever principles of truth we endorse, they are decisive for how we understand the foundation of classical logic. Furthermore, even if the theory of truth could be detached from the logical theory, the same cannot be said of validity. A logical theory, classical or not, will face semantic paradoxes.

7 Logical pluralism and anti-exceptionalism

Even if there are several suspect premises in the abductive argument, the classical logician could revive the effort with a more direct argument from Premise 2. Classical logic’s privileged role in mathematics makes for a strong anti-exceptionalist case for a classical logical theory. Both Quine and Maddy consider it a staple of their anti-exceptionalist programmes. No one doubts that mathematics has a crucial role in the sciences, so if classical logic is integral to mathematics, that indisputably counts in favour of a classical theory.

It can sound like a truism that classical logic is integral to mathematics. However, it is only true if appropriately qualified. Mathematics can be done—indeed, is done—with nonclassical logic as well. There is constructive mathematics, paraconsistent mathematics, and substructural mathematics, to mention a few nonclassical efforts. Shapiro (2014) has recently argued that it would be a mistake to discount the mathematical theories that result from work in nonclassical frameworks. I agree. But for our purposes I still want to grant that classical logic does play an unparalleled role in mathematics. The problem with Premise 2, however, is that it rests on an equivocation. In what sense is classical logic integral to mathematics? There is an important distinction between a classical theory being integral to mathematical theories, and classical reasoning being integral to mathematical practice. Classical logic as a formal theory does play a role in, for example, classical Peano Arithmetic, but it does not follow that the classical theory is integral to mathematics as such. Formalizations of mathematical theories and proofs are rarely essential to mathematical work, and typically not integral to our best scientific theories. Only in exceptional cases is mathematics done in a formal logical language, with a rigid axiomatization.

Mathematics was done, and done successfully, prior to the formalization of classical logic. So if classical logic is integral to mathematics, it is more likely because whatever the classical formalism is capturing (e.g. the forms of reasoning) is integral to mathematics. That is the more plausible claim.

Mathematical proofs do contain an abundance of instances of classical principles: applications of classical reductio ad absurdum, conditional proof, disjunctive syllogism, the law of absorption, etc. The emphasis, however, should be on the fact that these are instances of classical principles. The mathematical proofs do not rely on any of these principles being unrestricted generalizations of the form that Williamson defends. They at most rely on the principles holding restrictedly for mathematical discourse, which does not entail that the principles of reasoning hold universally. Put differently, mathematical practice is consistent with these reasoning steps being instances of mathematical principles of reasoning, not generalizable to all other discourses. In particular, they may very well be principles of reasoning that are permissible for mathematics, but not for theorizing about truth.
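To illustrate, consider a stock example of an essentially classical step in mathematics (a standard textbook case, supplied here for illustration rather than drawn from the surrounding text): the proof that there are irrational numbers \(a\) and \(b\) such that \(a^{b}\) is rational. Either \(\sqrt{2}^{\sqrt{2}}\) is rational, in which case let \(a = b = \sqrt{2}\); or it is irrational, in which case let \(a = \sqrt{2}^{\sqrt{2}}\) and \(b = \sqrt{2}\), so that \(a^{b} = \sqrt{2}^{2} = 2\). The proof applies excluded middle to one specific arithmetical sentence; nothing in it calls on the unrestricted generalization \(\forall \phi (\phi \vee \lnot \phi )\). An instance restricted to mathematical discourse suffices.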

Classical logic in mathematics does not, therefore, provide evidence for Williamson’s unrestricted universal generalizations. Once we reject Williamson’s deflationary account of logical theories, we can let restricted generalizations account for the role of classical logic in mathematical reasoning.

Consider, for instance, the question whether DNE is valid. In the deflationary account of logical theories, DNE is captured by an unrestricted generalization:

(6) \(\forall \phi (\lnot \lnot \phi \rightarrow \phi )\)

Suppose, when the evidence is in, (6) turns out to be false. We can reject (6) and still accept a number of interesting restricted generalizations. Recall the non-deflationary formulation of DNE:

(2) \(\forall x (Sent(x) \rightarrow Val(\dot{\lnot }\dot{\lnot } x, x))\)

The formula (2) expresses that every sentence obeys DNE. But there is nothing stopping the anti-exceptionalist from adopting a logical theory with a weaker restricted generalization:

(7) \(\forall x (Sent^{PA}(x) \rightarrow Val(\dot{\lnot }\dot{\lnot } x, x))\)

where \(Sent^{PA}(x)\) says that x is a sentence in the language of Peano Arithmetic. Note that (7) is consistent with the corresponding claim that DNE fails in the extended language of Peano Arithmetic with a truth predicate (\(Sent^{PAT}(x)\)):

(8) \(\lnot \forall x (Sent^{PAT}(x) \rightarrow Val(\dot{\lnot }\dot{\lnot } x, x))\)

There is no antecedent reason to think that the abductive method will rule out a theory where restricted generalizations of this form are included. Perhaps restricted claims are as good as it gets with respect to validity.

In fact, there is good evidence for both (7) and (8). On the one hand, we know that mathematical proofs rely heavily on arguments that are instances of DNE. Since mathematics is independently successful by anti-exceptionalist standards, we have abductive reason to endorse (7). On the other hand, we also know that classical logic leads to inconsistency in the presence of an unrestricted truth predicate. Since the unrestricted truth predicate is prima facie desirable—even Williamson concedes this—we have abductive reason to reject (2), and maybe even accept (8).Footnote 27 In the face of this evidence, the classicist would have to argue that there is some independent merit in accepting the unrestricted generalization, not only the restricted one. That argument, I maintain, cannot simply be that classical logic is integral to mathematics.

In contrast, the nonclassical logician is free to embrace both (7) and (8), but need not. If she insists on rejecting even the restricted generalization (7), there still remains the problem of accounting for the success of classical reasoning in mathematics. Here, again, the theory with both restricted generalizations fares better.

If we accept a theory with both (7) and (8), the upshot is a form of logical pluralism, but one that remains faithful to the spirit of anti-exceptionalism. It is a pluralism where what counts as a valid argument is relative to a language. The position is reminiscent of Carnap’s (1937) principle of tolerance: one is free to devise any language appropriate for one’s purpose, and the choice of language determines the logic.

It is useful to distinguish this kind of logical pluralism from other pluralist positions in the literature. Priest does not consider himself a pluralist, but he still thinks that his abductivist model is at least compatible with some form of logical pluralism. Even though his model is supposed to identify the best logical theory, a pluralist can argue that we need different logical theories for different applications or different domains. That is certainly in the spirit of what I have suggested above, but with a crucial difference. Priest’s pluralism is what I have elsewhere called inter-theoretical pluralism (Hjortland 2012). As he puts it, ‘the debate between logical monists and logical pluralists is, in fact, a meta-debate’ (Priest 2016, 9).Footnote 28 An inter-theoretical pluralist claims that there are at least two correct logical theories—perhaps because they apply to different domains. In contrast, the logical pluralism I am recommending here is an intra-theoretical pluralism. We only need one logical theory, but the theory itself recommends restricted logical principles for different parts of the language. In fact, there is no obvious reason why Priest’s abductivist model could not offer up a theory of this sort as the best logical theory.

Beall and Restall (2006) defend yet another type of logical pluralism, one based on the claim that the concept of validity allows for a number of precisifications. An argument can be truth preserving for one class of models, but fail to be truth preserving for another. Their contention is that there are several classes of models that are, in some sense, equally good. They all lead to a class of arguments that has important features of logicality: necessity, normativity, and formality.Footnote 29 But Beall and Restall’s pluralism is different from the one proposed here in at least one important way: Beall and Restall think of each permissible validity relation as language-independent. The classical, intuitionistic, and relevant validity relations apply to arguments in the same language, but they have distinct extensions.

However, Beall and Restall’s pluralism can also be captured in the non-deflationary logical theories. Their type of pluralism constitutes a more dramatic departure from the standard theories. It does not result from restrictions on the language (say, between the language of Peano Arithmetic and the language of truth), but rather from a genuine proliferation of validity properties expressed by, say, \(Val_{1}\) and \(Val_{2}\). The two could come apart, for instance with respect to whether the law of double negation is valid. We would then have two different restricted generalizations:

(9) \(\forall x (Sent(x) \rightarrow Val_{1}(\dot{\lnot }\dot{\lnot } x, x))\)

(10) \(\lnot \forall x (Sent(x) \rightarrow Val_{2}(\dot{\lnot }\dot{\lnot } x, x))\)
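For a concrete illustration of how (9) and (10) can come apart, suppose, purely by way of example, that \(Val_{1}\) expresses classical validity and \(Val_{2}\) intuitionistic validity. Classically, \(P \vee \lnot P\) follows from \(\lnot \lnot (P \vee \lnot P)\), so \(Val_{1}(\ulcorner \lnot \lnot (P \vee \lnot P) \urcorner , \ulcorner P \vee \lnot P \urcorner )\) holds. Intuitionistically the inference fails: \(\lnot \lnot (P \vee \lnot P)\) is an intuitionistic theorem while \(P \vee \lnot P\) is not, so if the inference were intuitionistically valid, excluded middle would be an intuitionistic theorem after all. The corresponding \(Val_{2}\) claim is therefore false, witnessing (10).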

A pluralist theory of this sort has the unexpected consequence that validity is not a monolithic property. Ordinary talk of validity equivocates between different properties of arguments. The anti-exceptionalist should not balk at this either. It is entirely plausible that what we originally thought was one property turns out to be several properties that we need to keep theoretically distinct.

Beall and Restall’s pluralism has been met with a number of objections.Footnote 30 Priest (2006a) and Read (2006) have developed a line of criticism that is often repeated. Suppose a theory does indeed posit two distinct validity relations, \(Val_{1}\) and \(Val_{2}\). Suppose furthermore that neither is strictly stronger than the other.Footnote 31 In fact, suppose that there are sentences A and B such that \(Val_{1}(\ulcorner {A}\urcorner ,\ulcorner {B}\urcorner )\) and \(Val_{2}(\ulcorner {A}\urcorner ,\ulcorner {\lnot B}\urcorner )\). This leaves the pluralist with a problem. If both \(Val_{1}\) and \(Val_{2}\) are truth preserving, and A is true, it follows that both B and \(\lnot B\) are true. That might—in certain cases—be palatable to a dialetheist such as Priest, but most logicians would reject the theory. Furthermore, if a non-dialetheist agent who accepts the logical theory already believes A, what ought she infer? The theory will leave us perplexed.

It is unlikely, however, that a pluralist theory would include two validity relations of this sort. And even if it did, it is possible to be a pluralist without thinking that each permissible validity relation is truth preserving. In fact, because of the threat of semantic paradoxes, we already know that combining validity and truth is a delicate matter. Field (2008), for example, recommends restricting the connection between truth preservation and validity. Validity in either sense could be a virtue of an argument, but for different reasons or under different circumstances. The theory does not have to assign the same normative force to both validity relations. If an agent believes A, there could be a definite answer to which conclusion she ought to infer, if any at all.

Fortunately, the language-dependent pluralism I defend avoids this problem altogether. In (7) and (8) there is only one validity relation, and so no possibility of the conflicting situation arising. A purist might insist that only the generalizations that hold throughout all languages are genuinely logical, but that is ill-advised. Such a logic will likely turn out to be exceedingly weak, maybe even empty. We can call it the One True Logic if we want, but it will have a significant shortcoming. Unlike the pluralist theory, it cannot explain the success of classical logic in mathematics.

The advantage of logical pluralism is supposed to be that it retains classical logic for mathematics, but simultaneously allows nonclassical logic for, say, the truth predicate. By giving up unrestricted generalizations it can account for two central pieces of evidence. Yet the classical logician has another resource to resist pluralism. In introducing the language-relativity, the pluralist trades away another abductive virtue—unity.Footnote 32 The classical logician can argue that it is a theoretical virtue to have a language-independent theory of validity, perhaps because unrestricted generalizations have greater explanatory power. Maybe, but it is far from clear that the virtue of a unified account outweighs the fact that the restricted generalizations can simultaneously accommodate the evidence from mathematics and the evidence from our theory of truth.

In fact, even if we grant that unity is an important criterion in the abductive argument, the pluralist has a reply. Granted, the language-relativity is a cost for a pluralist theory. But its disunity is not as uncoordinated as one might think. The reason is that nonclassical logics are typically generalizations of classical logic.Footnote 33 Many-valued logics generalize the classical truth values; intuitionistic logic generalizes Boolean algebras to Heyting algebras; substructural logics generalize Tarski consequence relations. These generalizations preserve classical logic in special cases, even though their consequence relations are nonclassical. Why do these special cases matter? The anti-exceptionalist should look to other sciences. Consider, for example, Newtonian mechanics. Although the theory is strictly speaking false, its equations have useful applications in special cases, in part because of their simplicity. What is more, scientific laws that are now recognized to be strictly speaking false may live on as limit cases in generalized theories. Such limit cases are not merely accidental properties of more sophisticated theories. They can provide useful simplifications that apply in a range of cases. The anti-exceptionalist should think of classical logic in a similar fashion. Although useful in special cases, it is not a tool suited for a general theory of valid argument.
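The many-valued case makes the limit-case picture concrete; the following is a standard textbook illustration rather than part of the original argument. In the Strong Kleene and LP frameworks, valuations take values in \(\{0, \frac{1}{2}, 1\}\), with

\(v(\lnot A) = 1 - v(A)\), \(v(A \wedge B) = \min (v(A), v(B))\), \(v(A \vee B) = \max (v(A), v(B))\).

Restricted to the classical values \(\{0, 1\}\), these clauses coincide exactly with the Boolean truth tables. Every classical valuation is thus a limiting case of a many-valued valuation, much as Newtonian equations re-emerge as limiting cases of relativistic ones at low velocities.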

8 Conclusion

Anti-exceptionalism does not provide support for classical logic. The abductive argument has proved unconvincing: Neither fit with the evidence nor deductive strength unambiguously favours classical logic. Once we reject the deflationary account of logical theories, it becomes clear that Williamson’s new argument from mathematics also fails. A non-deflationary approach, on the other hand, adequately reflects the tight connection between validity and semantic paradox. A theory that accommodates this connection is nudged towards logical pluralism. The anti-exceptionalist ought to promote an ecumenical position.

Logic isn’t special—nor is classical logic.