
1 Chapter Overview

[Figure a]

Up to now, we have looked at pure verificationism and pure falsificationism. However, both pure theories have left us unsatisfied when it came to giving an account of the connectives. Therefore, we are moving up to the second level of the pyramid, in which we will be able to employ both verifications and falsifications in the ingredient sense.

Our first item on the “expanded” level, and the topic of this chapter, is the expanded verificationism of Stage II. To make a correct assertion is once again to say something verifiable. But to know how a statement will behave in complex statements will involve knowing both its verification conditions and its falsification conditions.

As I showed in Chap. 5, we are still following in Dummett’s footsteps at this stage. However, we found no indication that he thought this move to an expanded verificationism would have any effect on the logic the constructivist should endorse, i.e., that anything but intuitionistic logic should supply the correct rules of inference. Only when verifications were thrown out completely did he consider changes in the constructive logic, as we saw in the last chapter.

As there is little in Dummett’s texts on how to set up the logic for Stage II, I will simply give the most plausible picture I can come up with. The resulting logic will not be intuitionistic, but rather a form of Nelson logic.

I will begin the chapter by going straight into the BHK clauses of this logic. It is actually quite natural to formulate verification and falsification conditions for complex statements, especially as we have the experience of the pure theories behind us and are able to draw on the ideas that worked at those stages. Even some of the ideas that failed us there will now come in useful, because we now have the necessary resources in the ingredient sense to make them work.

Then, we will look at the Kripke semantics of the logic. Some thought has to go into the question whether we want to allow for gluts in the semantics, but in the end, I will only allow gaps. That is to say, there will be worlds in which some statements are neither verifiable nor falsifiable, but no worlds in which a statement is both verifiable and falsifiable.

The end of the chapter will bring a bit more discussion of the connectives that we have often found to be in need of comment before: the conditional and the negation.

2 BHK Interpretation

Although this chapter is very similar to the last one in structure, I will change the order of things a bit. I will start out by giving the BHK clauses for a Stage II logic, and then, I will present a matching Kripke semantics.

The task of the BHK interpretation is to give both verification and falsification conditions for all of the connectives, so our list of clauses will be twice as long as those we have seen in the lower, pure stages.

Luckily, the task is already almost accomplished, as we have seen all the important ideas in the chapters before. We only need to collect them in one place. Here it goes:

Both conjunction and disjunction are as well behaved as they have always been. To verify a conjunction, verify both conjuncts; for a disjunction, verify at least one of them. As we know from the last chapter, to falsify a conjunction, we have to falsify only one of the conjuncts and to falsify a disjunction, we have to falsify both disjuncts.

Also from the last chapter (Sect. 7.6.2), we know what we really want to say about the falsification condition of a conditional: We will need a verification of the antecedent and a falsification of the consequent. To falsify my “If it rains, I’ll write my report,” you will have to show that it is raining and that I am not writing my report.

The difference from what we found in the last chapter is that we can now actually use this definition, because both verifications and falsifications are available to us in the ingredient sense.

But what of the verification condition for the conditional? Nothing quite as perfect as this falsification condition seems on offer, but until now we were doing all right with the intuitionistic understanding: To verify a conditional is to show how any verification of the antecedent can be turned into a verification of the consequent.

Lastly, the negation I want to propose is, as I had mentioned earlier (Sect. 5.3.1), a toggle negationFootnote 1 that brings us from verification to falsification and vice versa. To me, this sounds intuitively right, and it exhibits a pleasing symmetry. Moreover, if we have to countenance verifications and falsifications in our semantics, it would seem plausible to suppose that language has a means of going back and forth between the two notions and that this device should be negation.

I will show you the characteristics of the logic that is based on this understanding of the negation and then, at the end of the chapter, come back to the question why we should prefer the toggle negation over the intuitionistic one.

Here, then, are the BHK clauses corresponding to the ideas above:

  • \(c\) is a verification of \({{A}}\wedge {{B}}\) iff \(c\) is a pair \((c_{1},c_{2})\) such that \(c_{1}\) is a verification of \({{A}}\) and \(c_{2}\) is a verification of \({{B}}\)

  • \(c\) is a falsification of \({{A}}\wedge {{B}}\) iff \(c\) is a pair \((i,c_{1})\) such that either \(i=0\) and \(c_{1}\) is a falsification of \({{A}}\), or \(i=1\) and \(c_{1}\) is a falsification of \({{B}}\)

  • \(c\) is a verification of \(A\vee B\) iff \(c\) is a pair \((i,c_{1})\) such that either \(i=0\) and \(c_{1}\) is a verification of \({{A}}\), or \(i=1\) and \(c_{1}\) is a verification of \({{B}}\)

  • \(c\) is a falsification of \(A\vee B\) iff \(c\) is a pair \((c_{1},c_{2})\) such that \(c_{1}\) is a falsification of \({{A}}\) and \(c_{2}\) is a falsification of \({{B}}\)

  • \(c\) is a verification of \({{A}}\supset {{B}}\) iff \(c\) is a procedure that converts each verification \(d\) of \({{A}}\) into a verification \(c(d)\) of \({{B}}\)

  • \(c\) is a falsification of \({{A}}\supset {{B}}\) iff \(c\) is a pair \((c_{1},c_{2})\) such that \(c_{1}\) is a verification of \({{A}}\) and \(c_{2}\) is a falsification of \({{B}}\)

  • \(c\) is a verification of \(-{{A}}\) iff \(c\) is a falsification of \({{A}}\)

  • \(c\) is a falsification of \(-{{A}}\) iff \(c\) is a verification of \({{A}}.\)
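To make these clauses a little more tangible, here is a minimal Python sketch (my own illustration, not part of the original text; the names `verifies`/`falsifies` and the tuple encoding of proof objects are stipulated for this example). Atomic verifications and falsifications are simply given as data; the clauses for \(\wedge \), \(\vee \), the falsification clause for \(\supset \), and the toggle clauses for \(-\) are then direct recursions. The verification clause for \(\supset \) asks for a procedure, which a finite check cannot certify, so it is left out here.

```python
# A hedged sketch: proof objects for the BHK clauses above, encoded as Python
# data.  Atomic verifications/falsifications are stipulated by the user;
# pairs encode the conjunction/disjunction/conditional clauses, and '-'
# toggles between the two notions, exactly as in the clauses.

def verifies(c, f, av, af):
    """Is c a verification of f?  f is an atom (str) or a tuple
    ('and'|'or'|'imp'|'neg', ...).  av/af: dicts mapping atoms to lists of
    stipulated atomic verifications/falsifications."""
    if isinstance(f, str):
        return c in av.get(f, [])
    op = f[0]
    if op == 'and':   # (c1, c2): c1 verifies A, c2 verifies B
        return verifies(c[0], f[1], av, af) and verifies(c[1], f[2], av, af)
    if op == 'or':    # (i, c1): c1 verifies the i-th disjunct
        return verifies(c[1], f[1 + c[0]], av, af)
    if op == 'neg':   # toggle: a verification of -A is a falsification of A
        return falsifies(c, f[1], av, af)
    raise NotImplementedError("the verification clause for '⊃' asks for a procedure")

def falsifies(c, f, av, af):
    if isinstance(f, str):
        return c in af.get(f, [])
    op = f[0]
    if op == 'and':   # (i, c1): c1 falsifies the i-th conjunct
        return falsifies(c[1], f[1 + c[0]], av, af)
    if op == 'or':    # (c1, c2): both disjuncts falsified
        return falsifies(c[0], f[1], av, af) and falsifies(c[1], f[2], av, af)
    if op == 'imp':   # (c1, c2): c1 verifies the antecedent, c2 falsifies the consequent
        return verifies(c[0], f[1], av, af) and falsifies(c[1], f[2], av, af)
    if op == 'neg':   # toggle
        return verifies(c, f[1], av, af)

# Example: with a stipulated verification of p and falsification of q,
# ('vp', 'fq') falsifies p ⊃ q, and hence verifies -(p ⊃ q).
assert falsifies(('vp', 'fq'), ('imp', 'p', 'q'), {'p': ['vp']}, {'q': ['fq']})
assert verifies(('vp', 'fq'), ('neg', ('imp', 'p', 'q')), {'p': ['vp']}, {'q': ['fq']})
```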

3 Kripke Semantics

Lopez-Escobar and Wansing Footnote 2 have argued that the above BHK clauses correspond not to intuitionistic logic, but rather to one of the Nelson logics, named after D. Nelson (1949).Footnote 3 There are two main variants of these logics, called N\(_{3}\) and N\(_{4}\). N\(_{3}\) allows for gaps in the semantics, and N\(_{4}\) features both gaps and gluts. I will introduce Kripke semantics for both of these logics, although eventually I will endorse N\(_{3}\). I will justify this choice in Sect. 8.4.

Now then: A model for N\(_{3}\) is once again a structure \(\left[ W,\le ,{ {v}}\right] \), with \(W\) a non-empty set of worlds partially ordered by \(\le \) and \({ {v}}\) a valuation function assigning the values \(1\) and \(0\) to atomic statements at worlds. Worlds are again intuitively to be understood as stages of investigation, and the accessibility relation marks that one stage is an epistemically possible development of another.

This time, though, we give both of the values \(1\) and \(0\) a substantive reading: \(1\) stands for “verifiable,” \(0\) for “falsifiable.” This is in contrast to the semantics of intuitionistic and dual intuitionistic logic, in which one of the values marked the constructive notion and the other the mere absence of that notion.

Moreover, for N\(_{3}\), we allow \({ {v}}\) to be a partial function, so that statements might not receive either value at a given world. This reflects the fact that at a stage of investigation, a statement might be neither verifiable nor falsifiable. Note that \({ {w}}\Vdash _{0}p\) is not equivalent to \({ {w}}\nVdash _{1}p\) any more and that the same of course goes for \({ {w}}\Vdash _{1}p\) and \({ {w}}\nVdash _{0}p\).

The logic N\(_{4}\) gives even more options: It allows \({ {v}}\) to assign \(1\), \(0\), neither, or both values to a statement at a world. That is, we are not dealing with a valuation function any more, but with a valuation relation. This is the only difference between the two logics, and everything else that follows in this section applies to both of them.

We assume that verifications and falsifications are conclusive, and therefore, we will have hereditary constraints for both \(1\) and \(0\):

For all \(p\) and all worlds \({ {w}}\) and \({ {w}}'\), if \({ {w}}\le { {w}}'\) and \({ {w}}\Vdash _{1}p\), then \({ {w}}'\Vdash _{1}p\), and

for all \(p\) and all worlds \({ {w}}\) and \({ {w}}'\), if \({ {w}}\le { {w}}'\) and \({ {w}}\Vdash _{0}p\), then \({ {w}}'\Vdash _{0}p\).

To illustrate, here is the good old example of a Kripke model again:

[Figure b: the example Kripke frame with worlds \(w_{1}\)–\(w_{5}\)]

Here is a valuation that suits the requirements of N\(_{3}\):

 

      | \(w_{1}\) | \(w_{2}\) | \(w_{3}\) | \(w_{4}\) | \(w_{5}\)
\(p\) | -         | -         | 1         | 1         | 1
\(q\) | -         | 0         | 0         | 0         | 0
\(r\) | 1         | 1         | 1         | 1         | 1
\(s\) | -         | -         | -         | 1         | 0

Both 1 and 0 project forward, because both of them record a constructive achievement (verification and falsification, respectively) that is taken to be permanent. There is a third option, here represented by “-”: a gap, a mere absence of either verification or falsification. Other than that, everything behaves quite as you would expect.

Now, a different valuation will show the peculiarities of N\(_{4}\): As we said, this logic allows for gluts as well as gaps. The valuation below reflects this by assigning both values, 1 and 0, to some of the statements.

 

      | \(w_{1}\) | \(w_{2}\) | \(w_{3}\) | \(w_{4}\) | \(w_{5}\)
\(p\) | -         | -         | 1         | 1         | 1
\(q\) | -         | 0         | 0         | 0         | 1,0
\(r\) | 1         | 1         | 1,0       | 1,0       | 1,0
\(s\) | -         | -         | -         | 1         | 0

Once again, both 1 and 0 project forward, so that a statement that receives both 1 and 0 will never change in status. On the other hand, a statement that is only verified (falsified) may always become both verified and falsified later on.

As we are concerned with a species of verificationistic logic, we choose the definition of logical consequence we had given for intuitionistic logic:

\(\Gamma \vDash {{A}}\) iff in every model and for every \({ {w}}\in W\), if \({ {w}}\Vdash _{1}{{B}}\) for every \({{B}}\in \Gamma \), then \({ {w}}\Vdash _{1}{{A}}\).

3.1 The Connectives

We now have to give separate clauses for \(\Vdash _{1}\) and \(\Vdash _{0}\) when defining the connectives. Guided by the above BHK-style clauses, we get:

\({ {w}}\Vdash _{1}{{A}}\wedge B \;{\text {iff}}\; { {w}}\Vdash _{1}{{A}} \;{\text {and}}\; { {w}}\Vdash _{1}{{B}}\)

\({ {w}}\Vdash _{0}{{A}}\wedge {{B}} \;{\text {iff}}\; { {w}}\Vdash _{0}{{A}} \;{\text {or}}\; { {w}}\Vdash _{0}{{B}}\)

\({ {w}}\Vdash _{1}{{A}}\vee {{B}} \;{\text {iff}}\; { {w}}\Vdash _{1}{{A}} \;{\text {or}}\; { {w}}\Vdash _{1}{{B}}\)

\({ {w}}\Vdash _{0}{{A}}\vee {{B}} \;{\text {iff}}\; { {w}}\Vdash _{0}{{A}} \;{\text {and}}\; { {w}}\Vdash _{0}{{B}}\)

\({ {w}}\Vdash _{1}-{{A}}\) iff \({ {w}}\Vdash _{0}{{A}}\)

\({ {w}}\Vdash _{0}-{{A}}\) iff \({ {w}}\Vdash _{1}{{A}}\)

\({ {w}}\Vdash _{1}{{A}}\supset {{B}}\) iff for all \(x\ge { {w}}\), \(x\nVdash _{1}{{A}}\) or \(x\Vdash _{1}{{B}}\) Footnote 4

\({ {w}}\Vdash _{0}{{A}}\supset {{B}}\) iff \({ {w}}\Vdash _{1}A\) and \({ {w}}\Vdash _{0}B\)
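As a companion to these clauses, the following Python sketch (again mine, not the author's; it assumes a model given as a finite set of worlds, a reflexive and transitive order, and atomic valuations that already respect the heredity constraints) evaluates \(\Vdash _{1}\) and \(\Vdash _{0}\) recursively and checks the verifiability-preserving consequence relation defined above on a single finite model.

```python
# A hypothetical encoding of an N3/N4 Kripke model.  forces(m, w, f, 1)
# plays the role of w ⊩₁ f, forces(m, w, f, 0) that of w ⊩₀ f.

class Model:
    def __init__(self, worlds, order, v1, v0):
        self.worlds = worlds      # e.g. {'w1', 'w2'}
        self.order = order        # set of pairs (w, x) with w <= x, reflexive
        self.v1 = v1              # dict: world -> set of atoms verified there
        self.v0 = v0              # dict: world -> set of atoms falsified there
                                  # (heredity for atoms is assumed, not checked)
    def up(self, w):              # all x with w <= x
        return {x for x in self.worlds if (w, x) in self.order}

def forces(m, w, f, val):
    """val = 1 for ⊩₁ (verifiable), val = 0 for ⊩₀ (falsifiable)."""
    if isinstance(f, str):                          # atomic case
        return f in (m.v1 if val == 1 else m.v0)[w]
    op = f[0]
    if op == 'neg':                                 # toggle negation
        return forces(m, w, f[1], 1 - val)
    if op == 'and':
        sub = [forces(m, w, f[1], val), forces(m, w, f[2], val)]
        return all(sub) if val == 1 else any(sub)
    if op == 'or':
        sub = [forces(m, w, f[1], val), forces(m, w, f[2], val)]
        return any(sub) if val == 1 else all(sub)
    if op == 'imp':
        if val == 1:                                # intuitionistic-style clause
            return all(not forces(m, x, f[1], 1) or forces(m, x, f[2], 1)
                       for x in m.up(w))
        return forces(m, w, f[1], 1) and forces(m, w, f[2], 0)

def consequence(m, premises, conclusion):
    """Verifiability-preserving consequence, checked on one finite model."""
    return all(forces(m, w, conclusion, 1)
               for w in m.worlds
               if all(forces(m, w, p, 1) for p in premises))

# One world, p neither verified nor falsified: p ∨ -p is not verified there.
m = Model({'w'}, {('w', 'w')}, {'w': set()}, {'w': set()})
print(forces(m, 'w', ('or', 'p', ('neg', 'p')), 1))   # False: LEM fails
```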

Conjunction, disjunction, and negation look pleasantly unspectacular. Negation is, unlike intuitionistic negation, an extensional connective that only concerns itself with the world at hand.

Indeed, note that the only clause that makes reference to other worlds at all is the positive clause for the conditional. The first-degree (conditional-free) fragments of N\(_{3}\) and N\(_{4}\) are actually equivalent to two systems we have met before: K3 and FDE. In the Dunn-style semantics we met in Sect. 4.3.2, the connection is quite clear:

[Figure c: Dunn-style truth tables for the first-degree connectives]

\(1\) obviously corresponds to \(t\) and \(0\) to \(f\); the only real difference lies in the interpretation of these values. Whereas before, we heard talk of “told-truth values,” we now want to read these values as “verifiable” and “falsifiable.”

Given all this, it is not surprising that one thing that clearly marks a difference between N\(_{3}\) and N\(_{4}\) is that N\(_{4}\) is a paraconsistent logic. N\(_{3}\), on the other hand, will never give us a situation in which both \({ {w}}\Vdash _{1}A\) and \({ {w}}\Vdash _{1}-A\) hold, and thus, the consequence relation becomes explosive.

I will talk more of characteristic features of Nelson logic, and in particular about the conditional, below. But first, it is time to make a choice between N\(_{3}\) and N\(_{4}\) for our project.

4 Do We Want Gluts?

So, should we want to use models of the N\(_{3}\) variety, in which no glutty valuations are allowed, or move to the more liberal N\(_{4}\) models, in which gluts are available? The answer to this question hinges on whether we want to allow for statements that are both verifiable and falsifiable.

We have an uncharacteristically clear notion of what Dummett’s own answer to this question is: He explicitly rejects gluts, as we have seen in the quote on p. 91. Nothing that is verified can ever be falsified, nothing that is falsified can be verified. That is, to reconstruct his vision of a Stage II theory, we should certainly stay away from glutty models.

There are other voices around, though. To see their point, we have to come back to the question we have bracketed up to now: Are we really well advised to model our semantics on “conclusive” verifications and falsifications?

The case for gluts is based on the fact that, in the empirical realm, we must always be prepared to find our best-corroborated hypotheses fail and to be forced to accept what we had formerly thought was ruled out by the evidence. Verifications and falsifications are based on evidence, and evidence is usually taken to be defeasible.

This line of argument is pursued by Cogburn (2004). His considerations are directly aimed at Dummett’s project. He argues that verificationists of Dummett’s ilk should embrace not only gluts in the semantics, but also an outright dialetheism.

The central claim of the paper can be summarized thus: Once we move out of the pristine realm of mathematics and step into the rough and dirty empirical world, there is no such thing as absolutely certain and conclusive proof any more. The most we may hope for is very good but still defeasible evidence. But then, it is hard to deny that there might be situations where we are confronted with very good evidence for a statement \({{A}}\) and equally good evidence for its negation \(\lnot A\).

Then, if verifiability is spelled out in terms of very good evidence, we must face the possibility of verified contradictory statements. If, furthermore, negation is the kind of verification–falsification switch that we have taken it to be in this chapter,Footnote 5 we end up with a glutty semantics. And, lastly (and optionally), if we want to make the final leap from verification to truth, we end up with genuine dialetheism.

Now, Cogburn is well aware of the fact that there is a rather obvious response to his line of argument: If the evidence we are dealing with here is supposed to be defeasible, why not say that, if we have good evidence for both a statement \({{A}}\) and its negation \(\lnot A\) (or, equivalently for our purposes, verifying and falsifying evidence for the same statement \({{A}}\)), these bodies of evidence cancel each other out, so that neither \({{A}}\) nor \(\lnot A\) is verified?Footnote 6

There are several answers Cogburn gives to this, not all of which bear directly on our topic. For our purposes, the most interesting of these answers is the last one:

As a final note, the claim that a warrant for \(P\) automatically undermines warrants for \(\lnot P\) involves an ugly equivocation between not having enough evidence for a claim and having too much evidence for the claim and its negation. For the dialetheist, this is obviously not so. When there is not enough evidence, we can either optimistically wait and hope, or we can decide the claim is neither true nor false. When there is too much evidence both ways, the claim is both true and false.

Indeed, I believe that he is right: The distinction that is in danger of being smeared over is one that we should like to keep making. It is the distinction between a just-so story and a carefully researched report that acknowledges contrary evidence.

However, I also believe that the intuition that contrary evidence can disqualify a verification is worth holding on to. In principle, the Nelson models give us enough fine structure to accommodate both of these demands.

The proposal would be as follows: A verification is not just a body of very good evidence, but rather good evidence plus the absence of good counterevidence.

At this point, you should be reminded of the exactly true logic ETL that I introduced in Sect. 4.4. The logic that is beckoning, call it ETL\(_{\supset }\), is related to ETL in the same way as N\(_{4}\) is to FDE. The difference is effected by an obvious modification of the consequence relation of N\(_{4}\):

Consequence ETL\(_{\supset }\):

\(\Gamma \vDash A\) iff in every model and for every \({ {w}}\in W\), if \({ {w}}\Vdash _{1}B\) and \({ {w}}\nVdash _{0}B\) for every \(B\in \Gamma \), then \({ {w}}\Vdash _{1}A\) and \({ {w}}\nVdash _{0}A\).
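In terms of the evaluation sketch from Sect. 8.3.1 (with the same caveat that the encoding is mine, not the author's), the only change is in the consequence check: premises and conclusion alike must be verified and not falsified.

```python
# Continuing the Model/forces sketch from Sect. 8.3.1: the ETL_⊃ consequence
# relation, checked on one finite model.
def etl_consequence(m, premises, conclusion):
    def exactly_true(w, f):
        return forces(m, w, f, 1) and not forces(m, w, f, 0)
    return all(exactly_true(w, conclusion)
               for w in m.worlds
               if all(exactly_true(w, p) for p in premises))
```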

As one would expect, the rather strange syntactic features of ETL are not alleviated by adding a conditional to it. I will not delve into the resulting system, for I believe that this proposal, while conceptually an improvement on N\(_{4}\) for this application, is actually not going far enough.

Consider the case of a statement for which we have evidence that would be good enough to verify it, were it not for the additional good evidence we have for its negation. In the semantics for ETL\(_{\supset }\), such a statement could never become assertible, due to the heredity constraint for the two semantic values. However, it seems that the idea of defeasible evidence entails not only that a statement taken to be verified might become unverified due to counterevidence, but also that this counterevidence might become so overwhelmingly strong that the original evidence becomes negligible and the negation of the statement will be considered verified.

If this kind of dynamic is to be modeled, then we need to turn our attention to constructive non-monotonic systems. Incidentally, I believe this is one of the most interesting ways in which the ideas in this book could be further developed, but the topic is too large for me to attempt even a start of an investigation. For the rest of these pages, I will keep assuming the conclusiveness of verifications and falsifications, and if this assumption turns out to be unacceptable, then at least I think some groundwork has been laid on which new non-monotonic systems can be based.Footnote 7

5 Features of the Logic N\(_{3}\)

If we assume that verifications and falsifications are conclusive, then I can see no reason to hold on to N\(_{4}\), so I will concentrate on N\(_{3}\) and its characteristic features from now on.

I will contrast these features with intuitionistic logic, the logic that Dummett seemed to think apt for a Stage II theory. As has been hinted at before, all intuitionistic inferences can be mimicked in N\(_{3}\), because intuitionistic negation is actually definable in the following way:

$$ \sim A=_{\text {def}}\, A\supset -A $$

With the aid of the Kripke clauses, it is easy to check that the formula \(A\supset -A\) will be verified at a world iff \({{A}}\) is verified at no world accessible from it (the current world included). It is also clear that \(\vDash -A\supset \sim A\), but not \(\vDash \sim A\supset -A\). For this reason, the Nelson negation is also known as strong negation.Footnote 8
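To make the first check explicit (the following unpacking is mine, using the clauses of Sect. 8.3.1 together with the ban on gluts in N\(_{3}\)):

$$
\begin{aligned}
{ {w}}\Vdash _{1}A\supset -A\;&\text {iff for all } x\ge { {w}},\; x\nVdash _{1}A \text { or } x\Vdash _{1}-A\\
&\text {iff for all } x\ge { {w}},\; x\nVdash _{1}A \text { or } x\Vdash _{0}A\\
&\text {iff for all } x\ge { {w}},\; x\nVdash _{1}A \qquad (\text {since no world gives } A \text { both values in N}_{3}).
\end{aligned}
$$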

With all that in mind, here now is a list of similarities and differences we find with regard to intuitionistic logic:

[Figure d: validities and invalidities of N\(_{3}\) compared with intuitionistic logic]

We find, once again, that the toggle definition of negation gives us both double negation laws. LEM fails, as in intuitionistic logic: Not every statement is either verifiable or falsifiable. However, \(-(A\vee -A)\) can never be verified, so it implies any arbitrary statement, again just as it does in intuitionistic logic.

But in Nelson logic, as is clear from the validity of Double Negation Elimination, the Law of Excluded Third, \(--(A\vee -A)\), must fail to be valid as well. This marks a difference from intuitionistic logic, as we had seen in Chap. 3.

Semantically, these failures seem to have the correlates that Dummett predicted. Bivalence clearly fails, but tertium non datur must go as well: We are unashamedly endorsing gaps in the semantics of N\(_{3}\).

Even absolutely undecidable statements, statements which we know will never be decided, are a possibility in Nelson logic. The formula \((A\supset -A)\wedge (-A\supset A)\) Footnote 9 is satisfiable, but only in a model in which \({{A}}\) fails to receive a value at all worlds. This allows the constructivist employing N\(_{3}\) to say something that the intuitionist could not, for to know that \({{A}}\) would never be verified is for the intuitionist already enough to prove \(\sim A\).

The failure of \(\vDash -(A\wedge -A)\) corresponds to the failure of LEM, and it is instructive to see how close the connection is. \(-(A\wedge -A)\) will be verified if there is a falsification of \((A\wedge -A)\). Such a falsification, under the present proposal, will consist in either a falsification of \({{A}}\) or a falsification of \(-A.\) It would be just as preposterous to claim to be able to supply a falsification of either every statement or its negation as it was preposterous to claim to always be able to verify one of them. Therefore, both \(\vDash (A\vee -A)\) and \(\vDash -(A\wedge -A)\) will have to go.

We saw in Sect. 3.3 that there was a more general meta-logical property of intuitionistic logic connected to the failure of LEM, the disjunctive property: A disjunction is a theorem of a theory closed under intuitionistic logic iff at least one of the disjuncts is a theorem as well.

N\(_{3}\) has this property as well, but in addition, it has the constructible falsity property Footnote 10: A negated conjunction, \(-(A\wedge B)\), is a theorem of a theory closed under N\(_{3}\) iff either \(-{{A}}\) or \(-B\) is a theorem.

But of course, all this does not mean that \((A\wedge -A)\) can be verified in a semantics without gluts. Indeed, it obviously cannot be verified, and as a consequence, Explosion, \((A\wedge -A)\vDash B\), is valid. As I said above, this marks a principal difference between N\(_{3}\) and N\(_{4}\).

\(-(A\wedge B)\vDash (-A\vee -B)\), along with all of the other de Morgan laws, is valid. Spelling out the verification conditions of premise and conclusion makes it quite clear that both are verifiable in the same circumstances: when at least one of \({{A}}\) and \({{B}}\) is falsifiable. There is nothing non-constructive inherent in this inference.

So far, none of the features of N\(_{3}\) seem objectionable. The same goes for the next line in the list above: Modus ponens holds in N\(_{3}\), as we surely should have hoped.

On the other hand, the next two items seem less appealing. Both contraposition (\({{A}}\supset {{B}}\vDash -B\supset -A\)) and modus tollens (\(-B,{{A}}\supset {{B}}\models -A\)) fail in N\(_{3}\). The countermodel to both of these inferences is extremely simple: a model with only one world, at which \({{B}}\) is falsifiable but \({{A}}\) is neither verifiable nor falsifiable.
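In terms of the evaluation sketch from Sect. 8.3.1 (the encoding, once more, is my own illustration), this countermodel can be checked directly:

```python
# One world, B falsified, A valueless.
m = Model({'w'}, {('w', 'w')},
          v1={'w': set()},        # nothing verified
          v0={'w': {'B'}})        # B falsified, A gets no value
print(forces(m, 'w', ('imp', 'A', 'B'), 1))                    # True:  A ⊃ B is verified (A is never verified)
print(forces(m, 'w', ('imp', ('neg', 'B'), ('neg', 'A')), 1))  # False: -B ⊃ -A is not verified
print(consequence(m, [('neg', 'B'), ('imp', 'A', 'B')], ('neg', 'A')))  # False: modus tollens fails here
```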

Informally, the reason why the two principles fail is that not much is assumed about the relation between verifications and falsifications (other than that no statement can be both verified and falsified). If you know how to turn any verification of \({{A}}\) into a verification of \({{B}}\), the logic does not want to commit you to being able to turn a falsification of \({{B}}\) into a falsification of \({{A}}\).

Interestingly, given how distinctive a feature this failure is, I have seen little by way of examples showing that verifications and falsifications really are so unrelated. A simple-minded example would be this:

If you can verify that the test slip turns blue after you put it into the test tube, you can thereby verify the presence of substance XYZ in the tube. According to Nelson logic, this does not imply that, if you have some unrelated means of falsifying the presence of XYZ in the tube, you can construct from it a method of falsifying the claim that the slip would turn blue if you put it into the tube. But, at least in this example, it seems clear that you can come up with a conclusive method of falsifying that claim: Just put the test slip into the tube!

Not that there might not be more sophisticated counterexamples, but intuitively, one might well expect contraposition and modus tollens to hold for a conditional that is explained in terms of verifications and falsifications.

5.1 Some Attempts to Get Contraposition Back

We can of course ask whether we can find a different positive clause for the conditional, one that gives us contraposition and modus tollens back. In Sect. 7.6.2, we had met \(\supset _\text {TOL}\), the conditional that was based on the following BHK clause:

  • \(c\) is a verification of \(A \supset B\) iff \(c\) is a procedure that converts each falsification of \({{B}}\) into a falsification of \(A\)

Back then, we had little use for the idea, because we were not allowing ourselves talk of verifications. Here, of course, we feel no such inhibition, and we proceed to consider the Kripke clause:

\({ {w}}\Vdash _{1}A \supset B\) iff for all \(x\ge { {w}}\), \(x\nVdash _{0}B\) or \(x\Vdash _{0}A\)

I will adopt the name N\(_{3TOL}\) for the logic that we get by using this clause instead of the standard one. It would be rather surprising if this clause did not bring us modus tollens back, and indeed, it does. However, contraposition still fails, and in exchange for modus tollens, we have to give up modus ponens.Footnote 11 Not a great improvement, then.

Seeing that neither the intuitionistic idea of a transformation of verifications nor the alternative idea of transforming falsifications works too well, we might also consider using both of these conditions simultaneously.

Two alternatives present themselves here: First, we might say that a conditional is verified if one or the other of the conditions holds:

\({ {w}}\Vdash _{1}A\supset _{\text {OR}}B\) iff for all \(x\ge { {w}}\), \(x\nVdash _{0}B\) or \(x\Vdash _{0}A\) OR for all \(x\ge { {w}}\), \(x\nVdash _{1}A\) or \(x\Vdash _{1}B\)

Call the logic that results from adopting this condition N\(_{\text {OR}}\).

Second, we can be more demanding and ask that both conditions should have to hold:

\({ {w}}\Vdash _{1}A\supset _{\text {AND}}B\) iff for all \(x\ge { {w}}\), \(x\nVdash _{0}B\) or \(x\Vdash _{0}A\) AND for all \(x\ge { {w}}\), \(x\nVdash _{1}A\) or \(x\Vdash _{1}B\)

I will call the Nelson logic that features this conditional N\(_{\text {AND}}\).Footnote 12

Here is a list of the salient differences between these options:

[Figure e: salient differences between N\(_{3}\), N\(_{3TOL}\), N\(_{\text {OR}}\), and N\(_{\text {AND}}\)]

Of the four alternatives, we seem to get by far the most pleasing results from N\(_{\text {AND}}\). Not only do contraposition, modus ponens, and modus tollens all hold up, we also get rid of \(-A\models A\supset _{\text {AND}}B\) and \(B\models A\supset _{\text {AND}}B\), two inferences that were never much in favor anyway.

However, there is a price to pay: The deduction theorem will fail us. There will be cases in which we have \(A\vDash B\), but not \(\vDash A\supset _{\text {AND}}B\). The reason for this is that the failure of contraposition is built into the turnstile: We may have \(A\vDash B\), but not \(-B\vDash -A\).Footnote 13 A logic in which the conditional contraposes, but in which the deduction theorem fails and the turnstile itself does not contrapose, seems a lackluster affair.

Now, we could try to make everything right by tweaking the definition of logical consequence we gave. Instead of only verifiability preservation left to right, we might additionally require falsifiability transmission right to left:

Contraposable Consequence:

\(\Gamma \vDash _{\text {Contrap}}A\) iff in every model and for every \({ {w}}\in W\), if \({ {w}}\Vdash _{1}B\) for every \(B\in \Gamma \), then \({ {w}}\Vdash _{1}A\), and if \({ {w}}\Vdash _{0}A\), then \({ {w}}\Vdash _{0}B\) for some \(B\in \Gamma \).

This consequence relation, together with \(\supset _{\text {AND}}\), indeed gives us all the contraposition we could hope for. But the loss is, once again, grave: Neither modus tollens nor modus ponens holds any more.Footnote 14

It is from the frying pan into the fire, and then from there into something else that is at least equally unpleasant.Footnote 15

5.2 Embracing Contraposition Failure

As none of these options seem much better than the standard clause for the Nelson conditional, maybe we should simply make our peace with the idea that we have to do without contraposition and modus tollens. And maybe we should look more carefully for counterexamples.

In ending up with a logic in which contraposition fails, we are actually in good company. Both R. Stalnaker and D. Lewis developed so-called conditional logics. In these logics, contraposition fails, but modus tollens (unlike in N\(_{3}\)) holds. Both authors bite the bullet and argue that contraposition is invalid for natural language conditionals.

Here is the counterexample Stalnaker gives:

For an example in support of this conclusion [i.e., that contraposition is invalid], we take another item from the political opinion survey: “If the U.S. halts the bombing, then North Vietnam will not agree to negotiate.” A person would believe that this statement is true if he thought that the North Vietnamese were determined to press for a complete withdrawal of U.S. troops. But he would surely deny the contrapositive, “If North Vietnam agrees to negotiate, then the U.S. will not have halted the bombing.” He would believe that a halt in the bombing, and much more, is required to bring the North Vietnamese to the negotiating table (Stalnaker 1968, p. 107).

If you are like me, then you will have to read this twice before you fully understand what is supposed to be going on here. It is much easier to understand if you add a small word at the beginning of the original conditional:

Even if the U.S. halts the bombing, then North Vietnam will not agree to negotiate.

Lewis gives a different example that is likewise helped along by a small word, in this case “still.” He asks us to consider the inference:

If Boris had gone to the party, Olga would still have gone.

\(\therefore \) If Olga had not gone, Boris would still not have gone (Lewis 1973, p. 34).

He goes on to argue that this is actually an invalid inference: “Suppose that Boris wanted to go, but stayed away solely in order to avoid Olga, so the conclusion is false; but Olga would have gone all the more willingly if Boris had been there, so the premise is true.”

This is easier to follow than Stalnaker’s original example, but cross out the “still,” and it becomes much harder to deny that the inference seems quite compelling. I have to say that I have some doubts that the conditionals in these examples really are normal conditionals. It seems that an “even if\(\ldots \) then\(\ldots \)” conditional calls for a different analysis than a simple “if\(\ldots \) then\(\ldots \)” conditional.Footnote 16

In a later work, Stalnaker addresses this worry and suggests that the “even” has a purely pragmatic effect on conditionals.Footnote 17 It might be pragmatically unacceptable for someone with the political view above to say “If the U.S. halts the bombing, then North Vietnam will not agree to negotiate.” However, if asked to judge the truth of it, she will agree to it.

If these conditionals really are ordinary conditionals that show the invalidity of contraposition in natural language, then the friend of N\(_{3}\) has reason to smirk. For these examples motivate her logic even better than they motivate the logics of Lewis and Stalnaker, because they invalidate modus tollens as well.

Assume that “If the U.S. halts the bombing, then North Vietnam will not agree to negotiate” is true for the reason above, and that North Vietnam in fact agrees to negotiate. You had better not be forced to infer from this that the U.S. did not halt the bombing.

But this inference should go through in Stalnaker’s and Lewis’s systems.Footnote 18 N\(_{3}\), on the other hand, has an easy time telling you why the inference should not be drawn: It is invalid.

So far, so good. However, the reasons why contraposition fails in the Lewis–Stalnaker examples, on the one hand, and in N\(_{3}\) as interpreted by the BHK clauses, on the other, do not seem to match up too well. The conditional “If the U.S. halts the bombing, then North Vietnam will not agree to negotiate” is, if it is true, not true because every verification of the antecedent can be transformed into a verification of the consequent. To verify that the U.S. halts the bombing and withdraws all its troops is, in part, to verify the antecedent, but in this case, the consequent will not be verifiable.

It would seem that to do full justice to these examples, the Nelson conditional would have to be supplied with some mechanism to capture ceteris paribus (or “all other things being equal”) clauses. This is yet another interesting task that I will not attempt to go into here.

Here is an example that is closer to home. One of the consequences of our setup is that

If \({{A}}\), then it is verifiable that \({{A}}\).

is always verifiable. For, if \({{A}}\) is verifiable, then surely it is verifiable that \({{A}}\) is verifiable. There is just no good reason to deny this. However, the contrapositive

If it is not verifiable that \({{A}}\), then not \({{A}}\).

need not be verifiable. The antecedent must be strengthened to “It is falsifiable that \({{A}}\)” for this conditional to go through. I trust this assessment of the second conditional is relatively uncontroversial. What might be less uncontroversial is that we should want to endorse each and every instance of the original conditional. In particular, if verifiability is identified with truth, some constructivist might feel a bit uncomfortable with it. Is what is true at this moment really only that which is verifiable at this moment?

For my part, I think they should not feel embarrassed by this consequence. It might violate some intuitions, but this is better than to patch up the view in some ad hoc way to cater to those intuitions.

If they feel this consequence is really unacceptable, then maybe they should consider an eclectic or pluralistic view on logic, such as sketched earlier on in Sect. 2.11. I will come back to this topic in the last chapter.

However, a different way to avoid commitment to this conditional is to interpret it as \(\supset _{\text {AND}}\). In that case, it will fail to hold precisely because \(\supset _{\text {AND}}\) contraposes and, as we saw, its contrapositive is not always verifiable. But then, one will be denying that every instance of “If \({{A}}\), then it is verifiable that \({{A}}\)” is verifiable, while conceding that “It is verifiable that \({{A}}\)” follows logically from “\({{A}}\)”.Footnote 19 Again, I would feel somewhat uncomfortable with this result, but maybe it can be made plausible after all.

To sum up these considerations about the conditional, I will (somewhat tentatively) keep endorsing the original account we find in N\(_{3}\), viz., the positive clause of the intuitionistic conditional, and with it the failure of contraposition and modus tollens. Of the alternatives I considered, N\(_{\text {AND}}\) seems by far the most tempting logic, and I will be keeping an eye on it in what follows. To endorse N\(_{\text {AND}}\), one would have to argue (1) that the deduction theorem is not as essential as it is usually taken to be and (2) that “even if” conditionals are not really normal conditionals with some pragmatic swirls added, but rather a whole different type of conditional.

6 Toggle Negation Versus Intuitionistic Negation

Let me end this chapter by picking up the question that was raised back in Sect. 5.3.1. Given the choice between intuitionistic negation and toggle negation, which should we go for? As I said repeatedly, I find the toggle negation in Nelson logic much more natural. Of course, an intuitionist may just shrug his shoulders at this and claim that his intuitions point elsewhere. Let me then try to give some more substantial argument for my preference.

One tempting line of argument would be this: As we had seen, the intuitionistic negation depends on our acceptance of empty promise conversions (cf. p. 39). The toggle negation, on the other hand, is not dependent on this idea, so in order to attack intuitionistic logic, one might argue that such conversions are inadmissible.

However, we find similar empty promise conversions in N\(_{3}\) as well. Just consider the valid inference

$$-A\models {{A}}\supset {{B}}$$

To make a strong argument against empty promise conversions while accepting such an inference seems to make for an unstable position.

Of course, if there in fact is a problem with empty promise conversions and \(-A\models {{A}}\supset {{B}}\), then it should be pointed out that it is not really a problem with the toggle negation in Nelson logic. The problem lies in the positive clause for the conditional (which of course is the same as in intuitionistic logic), just as the problem with intuitionistic negation can be seen as a problem of the conditional if we think of \(\sim A\) as defined as \(A\supset \bot \).

Still, if the contest is not between toggle negation and intuitionistic negation in isolation, but between N\(_{3}\) and intuitionistic logic at large, then both seem committed to empty promise transformations. If these are unacceptable from the constructive point of view, then the Nelson logician should look for a new verification clause for his conditional; again, the conditional \(\supset _{\text {AND}}\) seems like an interesting alternative here.

If we stick to N\(_{3}\), however, then we can find nothing to object to in intuitionistic negation. Indeed, as we have seen, it would be strange if we could, because intuitionistic negation, \(\sim A\), is definable in N\(_{3}\) Footnote 20 as \(A\supset -A\)!

And this is exactly, I believe, the strong point of Nelson logic. It has no reason to deny intuitionistic negation’s legitimacy as a constructive notion, nor does it have to show that it is unnecessary.

On the other hand, the intuitionist who rejects toggle negation has to show one of two things:

Claim 1: Toggle negation is objectionable from the constructive viewpoint. Its definition is somehow flawed.

Claim 2: We don’t need toggle negation, and it should fall victim to Occam’s Razor.

To seriously make Claim 1 seems absurd. If we do allow verifications and falsifications in the ingredient sense, then the definition of toggle negation seems quite as innocent as the definitions of conjunction and disjunction. As we have just seen, one can envisage doubts about the legitimacy of the conditional, but surely not about the simple verification/falsification switch that is \(-\). The only position the intuitionist could take that would support some resistance would be to claim that all ways to show that a verification of \({{A}}\) is impossible must suffice to falsify \({{A}}\). In this case, ‘It is falsifiable that \({{A}}\) is falsifiable’ does not imply ‘\({{A}}\) is verifiable’, but this is an implication that has to be accepted along with the toggle negation account. However, I have rejected “\({{A}}\) is impossible to verify” as an explication of “\({{A}}\) is falsifiable” back in Sect. 6.4.

It would have to be Claim 2, then: We do not need toggle negation, so we should not have it in our logic.

Now, it seems that the plausibility of this claim depends on the exact project we are engaged in. Are we, for example, trying to give a constructive interpretation of mathematics only? Then, maybe, intuitionistic negation might suffice to express all that we need. If we are, on the other hand, going for a full-scale verificationistic revision of the meaning of negation in all areas of discourse, then the availability of toggle negation is clearly an asset.

If we revise logic, we are in effect claiming that normal people have a somewhat incorrect grasp of the meaning of the logical constants they use. One would suppose that we should be as charitable as possible here. In other words, we should employ a principle of minimum mutilation: The less revision our interpretation makes necessary, the better.

It seems clear that speakers often see no problem in canceling a double negation. But to argue that they always do so would involve some very detailed investigation into negation items in natural language. Is, say,

She is not unhappy, therefore she is happy

a case of Double Negation Elimination? If so, the intuitionist may have a point in rejecting DNE. On the other hand, the negation items here seem quite different in kind, which might incline us to look for a logical system with more than one negation. The task of finding the right representation of natural language negations is formidable, fascinating, and beyond the scope of this book.Footnote 21

However, there is at least one item in English that surely marks negation: “Not.” And there are cases in which an intuitionistic negation would be verifiable, but a natural language statement featuring “not” would not be judged verifiable. Moreover, toggle negation agrees with natural language intuitions in these cases.

An example is this statement:

The largest Brachiosaurus alive on September 1st 154888328 B.C. was female.

Suppose that we know that the only way to verify this claim would be time travel and that the idea of time travel is absurd. Then, any verification of this statement would lead to the absurd conclusion that time travel is possible. So, if the “not” in the following statement is to be analyzed as intuitionistic negation, then that statement is verifiable:

The largest Brachiosaurus alive on September 1st 154888328 B.C. was not female.

But the intuitions of most speakers are surely that neither of these examples is verified, and toggle negation explains why in a most straightforward manner: We can neither verify nor falsify these claims.

If there are examples in which a constructive account of the language use of ordinary speakers is only possible if we employ toggle negation as a tool of analysis, then I think we should do so. The alternative would be to argue that these speakers are at fault. But, if I am right about the constructive admissibility of toggle negation, it is quite unclear on what this criticism of actual practice should be based.

7 Chapter Summary

We have met the constructive Nelson logic N\(_{3}\), a logic that is able to deal with verifications and falsifications side by side. In the semantics, we allow for gaps, but not for gluts. The logical consequence relation transmits, just as in intuitionistic logic, the property of verifiability.

Concerning the connectives, we made the following choices: The device that takes us from verification to falsification and back again is negation, a simple toggle device that needs no information from other worlds in the Kripke semantics. A direct consequence of our adopting this kind of negation was that all the double negation laws are valid. Together with the account of conjunction and disjunction, we came to see that the de Morgan laws are likewise valid, while both LEM and LET failed to be valid.

A peculiarity of the account is that the conditional does not support contraposition, nor is modus tollens a valid rule. However, I gave some arguments for accepting these failures. Admittedly, the matter was not resolved beyond doubt, and a new conditional, \(\supset _{\text {AND}}\), appeared to be a close contender for the job.

Lastly, I compared N\(_{3}\)’s toggle negation and the intuitionistic negation. The upshot of that discussion was this: Even if there might be cases in which we should want to model natural language negations by intuitionistic negations, we have that resource available in N\(_{3}\), as intuitionistic negation is definable.

Therefore, a supporter of N\(_{3}\) neither can nor needs to complain much about intuitionistic negation. The only thing he has to show is that his negation can play a useful role in analyzing the meaning of logical constants.

On the other hand, I argued that the concept that toggle negation captures is clearly one of negation, clearly constructive, and clearly useful, and that we should accordingly adopt it.

This summarizes what I have to say about the expanded verificationism of Stage II. In the next chapter, which is on the expanded falsificationism of Stage IV, we will once again ban verifications from the assertoric sense and adopt the falsificationistic stance: As at Stage V, a statement will be correctly assertible iff it is not falsifiable.

However, we will keep employing verifications and falsifications alongside each other in the ingredient sense. The nice consequence of this is that we can just employ the definitions of the connectives that we have worked out in this chapter. The only task is to tweak the definition of logical consequence and to see where that leads us. Unfortunately, some problems lie ahead as well.