
1 Classifying Conditionals: Where Does the Demarcation Lie?

1.1 Entailments, Implications, Defeasible Conditionals

Although propositional logic is about the analysis of all logical connectives, we must undoubtedly recognise a primus inter pares in this class: the conditional connective “if…then”. Since ancient times, reams of paper have been depleted, and rivers of ink have been spilt, in discussing the logical properties of conditionals—even crows on the roofs once did so, according to an oft-quoted passage by Callimachus. Here I’ll beg those birds to move over and let me join them in croaking about which conditionals are sound and which are not.

Given the massive proportions of such a debate, it is to some extent surprising that there is comparatively little agreement among the specialists on how to classify conditional sentences in natural languages like English. For the purpose of the present discussion, let us focus on what is in my opinion the most accurate taxonomy of conditional sentences from a logical viewpoint. This taxonomy, or something closely resembling it, is to be found in several places in the literature (e.g. Routley et al. 1982; Mares 2004); conditionals are ranked in decreasing order according to the logical cogency of the connection between their antecedents and their consequents.

  • At the top of the ladder we find entailments, where the degree of logical cogency is maximal: necessarily, if the antecedent holds true, then so does the consequent. For example,

    1. (1)

      If it rains and it is hot, then it rains.

    Using a dichotomy suggested by Meyer (1986), entailment is the kind of notion that logical systems like S4 or the relevant system E from Anderson and Belnap (1975) mean to express, while systems like classical propositional logic or the relevant system R only content themselves with indicating: in the theorems of R or of CPC one is invited to read only asserted (principal) occurrences of the conditional connective as entailments, while in the theorems of S4 or of E we are justified in interpreting any occurrence of that connective as formalising an entailment.

  • The next sentence is an example of an implication, or of a sufficiency conditional:

    1. (2)

      If I am in Melbourne, then I am in Australia.

    This is not an entailment, at least if we are persuaded that there may well be some possible world where Melbourne fails to be in Australia (which could be doubted by partisans of rigid designation). In any case, the obtaining of the antecedent is a sufficient condition for the obtaining of the consequent. Classical logicians and relevant logicians who adhere to R usually invite us to read non-principal occurrences of the conditional connective in the theorems of their own favourite systems as implications, not entailments.

  • Finally, we have defeasible conditionals like

    1. (3)

      If this match is struck, it will light.

    Here the connection between antecedent and consequent is still looser. The obtaining of the former is not even a sufficient condition for the obtaining of the latter, in general: it is such only under “normal” conditions (for example, if the match at issue does not happen to be wet). According to Priest (2001), the real logical form of conditionals like (3) is “If A and C A , then B”, where the ceteris paribus clause C A can be read as “other things being equal”, or “if everything else relevant remains unchanged”. More precisely, C A captures an open-ended set of conditions and depends strongly on A, a feature which is notationally represented through the use of the subscript.

Now, it looks like most scholars have inadvertently followed a “division of labour” plan that led them to focus on one rung or another of the previously described ladder, losing sight of the whole. For example, one can find a copious literature about paradoxes of entailment or implication where defeasible conditionals hardly ever get a mention. On the other hand, conditional logicians (e.g., Nute 1984; Bennett 2003) traditionally disregard most of such debate in their analysis of “if…then” sentences in natural language. Only a few authors (e.g., Sanford 1989; Mares 2004; Humberstone 2011) seem to have undertaken the praiseworthy task of giving a unified account of the phenomenon. Before trying to join them and offering my view of the problem, however, I must dwell a little longer on the internal subdivision of the category of ceteris paribus conditionals.

1.2 Defeasible Conditionals: The Main Competing Theories

The classification of defeasible conditionals in English is one of the most controversial issues in the whole area of philosophical logic. Do English sentences having the grammatical form “If A, then B” share the same logical form as well, or else may such hypothetical clauses express different connectives according to circumstances? If the latter alternative is correct, where should the dividing lines be drawn?

The former option (i.e. the claim that the meaning of “if…then” is basically uniform) has enjoyed some popularity from time to time (Bryant 1981; Lowe 1995). The difference between such counterfactual sentences as the famous

  1. (4)

    If kangaroos had no tails, they would topple over.

and non-counterfactual conditionals has been explained, e.g., in epistemic terms, by claiming that it depends not on an ambiguity of “if…then”, but merely on the speaker’s subjective opinion about the truth value of the antecedent. However, it is well known that a heavy burden of proof lies upon the supporters of such uniform (or monist, as they are also labelled) theories of conditionals. In fact, they owe us a plausible account of the contrast between (5) and (6) below, due to Adams (1970):

  1. (5)

    If Oswald didn’t kill Kennedy, someone else did.

  2. (6)

    If Oswald hadn’t killed Kennedy, someone else would have.

(5) and (6) seem to have different truth conditions: if we take on trust the Warren report—and its claim that Oswald killed Kennedy unassisted by any accomplice—(6) is false, while (5) is trivially true given only that Kennedy has been murdered by someone.

The traditional dualist view (Adams 1970; Lewis 1973; Jackson 1987), therefore, has it that conditional sentences whose antecedents are in the indicative mood—in plain words, indicative conditionals—actually express a different connective from subjunctive conditionals, whose protases are in the subjunctive mood. In particular, sentences like (6) can be rephrased by forming new conditionals whose verbs are indicative, and therefore fully susceptible of being assigned a truth value:

  1. (7)

    If it had been the case that Oswald didn’t kill Kennedy, it would have been the case that someone else did.

Hence, it is argued that on the level of “deep structure” (5) and (6) have exactly the same antecedent and the same consequent, and that the moods of the verbs in (6) are parts not of the component sentences, but rather of the conditional construction, viz. of a “subjunctive conditional” connective which is different from its indicative counterpart (Nute 1984).

Although supporters of the standard dualist view agree that indicative and subjunctive conditionals have distinct truth conditions, it is a matter of dispute what these conditions really amount to. According to Lewis (1973, 1976), for example, subjunctive conditionals have an intensional nature, while indicative conditionals are truth-functional; Stalnaker (1968, 1975), on the other side, believes that although both kinds of conditionals can be modelled by means of possible worlds semantics, in the case of indicative conditionals a decisive role is played by appropriate contextual presuppositions.

The strongest competitor of the previous approach is surely the classification in Dudman (1983, 1984, 1989), which gained increasing support during the 1980s and beyond. Roughly put, Dudman claims that the difference between “hadn’t-would” (HW) conditionals like (6) and “doesn’t-will” (DW) conditionals like

  1. (8)

    If Oswald doesn’t kill Kennedy, someone else will.

is one of tense, not of mood: sentences of the former type express at time t what sentences of the latter would have expressed at a particular time t′ < t. As Bennett (1988) once put it, “Every hadn’t-would was once a doesn’t-will”. Actually, Dudman contends that HW conditionals are only seemingly subjunctive: careful linguistic analysis reveals that the verbs contained therein are indicative—more precisely, the antecedent is in the past perfect tense, the consequent in the simple past tense. Both kinds of conditionals, in Dudman’s opinion, correspond to imaginative projections highlighted, on the linguistic level, by a “forward tense shift”: “The imagined course of events is appended to the course of previous actual history, and the use of ‘Vs’, ‘Vd’, or ‘had Vd’ locates at present, past or ‘past past’ the point at which history gives way to imagination. And since the satisfaction of the antecedent is always part of what is imagined, it is always later than this ‘changeover point’” (Smiley 1984, p. 249). On the other side, “didn’t-did” (DD) conditionals like (5) express condensed arguments whose antecedents “signal the entertainment of a hypothesis from which a conclusion is deduced” (Smiley 1984, p. 248).

A similar distinction is drawn by Gibbard (1981): epistemic conditionals, whose assertion is guided by a subjective connection in the utterer’s belief system and whose paradigmatic examples are DD conditionals, must be kept separate from factual conditionals, whose assertion is guided by an objective connection between states of affairs and whose paradigmatic examples are HW conditionals. The main difference between the accounts by Gibbard and Dudman is that for the former DW hypothetical clauses may fall into either category according to circumstances, while the latter (supported by Bennett 1988), as already remarked, claims that DD conditionals stay on one side of the fence and DW and HW on the other.

The traditional account, disparagingly labelled in Bennett (1988) the “phlogiston theory of conditionals”, experienced a resurgence over the last 15 years. Edgington (1995), Weatherson (2001),Footnote 1 and Bennett (1995) have advocated it against Dudman’s attacks. In particular, examples have been provided both of epistemic DW conditionals and of factual DD conditionals, showing that the “objectivity point” (as Bennett calls it) cannot be used to defend Dudman’s view of the matter. Later we shall examine in greater detail some of Bennett’s allegations.

2 Two Conditionals or Three?

2.1 The Ambiguity of Disjunction

As we have just seen, according to the received view—defended e.g. by Stalnaker and Lewis—all indicative conditionals must be assigned to the same class, whereas Dudman and Gibbard claim that some of them belong together with the subjunctives. I agree partly with the former and partly with the latter. Indicative conditionals possess some degree of semantical uniformity,Footnote 2 in that they can be rephrased with no essential alteration of meaning as “Either not-A or B”. Yet, they do not all belong to the same class, in so far as the previous disjunction is inherently ambiguous. I will contend that there are at least three different kinds of indicative conditionals in English, and that what distinguishes them from one another are the operational properties of the disjunctions underlying each conditional—i.e. of the disjunctions in terms of which each conditional can be rephrased.

Let us examine disjunction first. Consider the following three sentences:

  1. (9)

    Either \(2 + 2 = 4\), or London is in Alaska.

  2. (10)

    Either the butler did it, or the gardener did it.

  3. (11)

    Either it will rain, or the match will be played.

Does the “either…or” construction have the same meaning in each of these sentences? Since none of (9)–(11) expresses an exclusive disjunction, it could be believed that it does. However, in the tradition of relevant and substructural logics (see e.g. Read 1988; Restall 2000; Paoli 2002, 2005; Allo 2011), it has been argued at length that inclusive disjunction is, in turn, ambiguous. The arguments are altogether well-known, and I will not try to recapitulate them. Put quite roughly: in sentences like (9), no special relationship is presumed to hold between the disjuncts; such disjunctions are asserted simply on the ground of the acceptance of at least one of the disjuncts themselves. This kind of disjunction has been labelled lattice-theoretical, or additive (especially by linear logicians), or also extensional (especially by relevant logicians). Here, it will be denoted by means of the symbol ⊔. It is an associative, commutative and idempotent disjunction: as to the last property, it is easily realised that A ⊔ A is accepted in virtue of the acceptance of one of its disjuncts if and only if A itself is accepted.

On the other hand, suppose that (10) is uttered in a context where it is not known who committed a given crime, but the only suspects are the butler and the gardener (who, possibly, may have acted by common consent). (10) presupposes then a connection between the disjuncts: it is such a connection that produces the acceptance of the disjunction, not the previous acceptance of one or of the other disjunct. In substructural logics, this kind of disjunction has been termed group-theoretical, or multiplicative (especially by linear logicians), or also intensional (especially by relevant logicians). Here, it will be referred to by means of the symbol ⊕ . In terms of its operational properties, it is an associative and commutative connective, but it is not idempotent. For example, I can accept the first disjunct of (10) without accepting

  1. (12)

    Either the butler did it or the butler did it.

(with an intensional “or”), if I entertain as a genuine alternative the possibility that the culprit was the gardener.

So far, nothing new under the sun. Yet, I want to go a step farther and claim that even (10) and (11) cannot belong in the same lot—a difference which seems to have passed unnoticed in the debates on the ambiguity of logical constants. Like (10), (11) requires a connection between the disjuncts, but of a somewhat different kind. While the meaning of (10) is not disturbed by the inversion of the disjuncts—both alternatives are entertained together, and enjoy so to speak equal rights—this is not the case for (11), where a tacit ceteris paribus clause attached to the first disjunct seems to award it a privileged role in the sentence. In other words, (11) appears to mean something like: either it will rain, or something wholly unexpected other than the rain will prevent the match from being played, or the match will be played. Permutation of disjuncts would render the clause idle, thus affecting the meaning of the whole sentence. The disjunction in (11) (hereafter called superintensional and referred to by the symbol ⋎), therefore, lacks not only the property of idempotency, but also that of commutativity.Footnote 3 And, just to anticipate a bit, the noncommutativity of ⋎ is tightly related to the failure of contraposition for the corresponding conditional.
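
To spell out the anticipated connection, read the corner conditional disjunctively, as will be done officially in Sect. 4 (A > B as ¬A ⋎ B). Then

$$A > B \;=\; \neg A \curlyvee B, \qquad \neg B > \neg A \;=\; \neg \neg B \curlyvee \neg A,$$

so that, identifying ¬¬B with B, contraposing a corner conditional amounts to nothing more than permuting the disjuncts of the underlying superintensional disjunction; and this is precisely the move that ⋎, being noncommutative, does not in general license.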

2.2 From Multiple Disjunctions to Multiple Conditionals

Virtually all authors who denied the equivalence between the (indicative) conditional and material implication have also denied the equivalence between “If A then B” and “Either not-A or B”. There is something more in an English indicative conditional, so goes the received view, than there is in its disjunctive paraphrase: the latter, unlike the former, is a truth-functional sentence where the meaning connection between the antecedent and the consequent of the corresponding conditional is irremediably lost. As anticipated above, I disagree with this analysis. Once we acknowledge the ambiguity of disjunction, we can vindicate the correctness of the disjunctive paraphrase of conditionals without being committed to accepting the equivalence between indicative conditionals and material implications. To clarify this point, let us now rewrite (9)–(11) in conditional form:

  1. (13)

    If 2 + 2≠4, then London is in Alaska.

  2. (14)

    If the butler didn’t do it, then the gardener did.

  3. (15)

    If it doesn’t rain, then the match will be played.

Apparently, no gross change in their meanings has been produced. This should come as no surprise, since the equivalence of “Either A or B” and “If not-A, then B” for indicative conditionals has long been taken for granted before being challenged by some non-classical logics. In my opinion, indeed, such darts were not aimed at the proper target: it is not such an equivalence that must be dropped, it is the ambiguity of disjunction that ought to be duly acknowledged. Once this is done, it becomes reasonable to suppose that the features which distinguish the above kinds of disjunctions are mirrored by different features of the corresponding conditionals. In fact, we will now see that such typologies of sentences abide by different logical laws.

For a start, consider (13). Just as there is a sense in which (9) was assertible simply on the ground of the acceptance of its first disjunct, so there is a sense in which (13) is acceptable simply on the ground of the rejection of its antecedent, i.e. simply in virtue of the fact that its subordinate clause is ruled out: if 2 + 2 ≠ 4, then anything whatsoever can be the case.Footnote 4 In particular, therefore, it is implied by the negation of its antecedent.Footnote 5 As the debate about the paradoxes of the material conditional has demonstrated, this is the case neither for (14) nor for (15): there is another sense of “if…then” in which the fact that if it wasn’t the butler who did it then it was the gardener does not arise as a consequence of the butler’s having done it, while from the fact that it rains it does not follow that if it doesn’t, the match will be played.Footnote 6 Thus, (13) cannot belong to the same category as (14) or (15), for the ex absurdo quodlibet holds for the former but not for the latter.
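
The asymmetry can be made explicit through the disjunctive paraphrase (to be adopted officially in Sect. 4, where A ↝ B is defined as ¬A ⊔ B): accepting the negation of the antecedent licenses, by extensional disjunction introduction alone,

$$\neg A \;\vdash\; \neg A \sqcup B \;=\; A \rightsquigarrow B$$

for arbitrary B, which is just the ex absurdo quodlibet in disjunctive dress (compare the first level axioms of Sect. 4); no analogous move is available under the readings of “if…then” at work in (14) and (15).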

Next, consider the following variants of (15):

  1. (16)

    If the match isn’t played, it will rain.

  2. (17)

    If it doesn’t rain and no player turns up, the match will be played.

  3. (18)

    If it doesn’t rain, the match will be played. If it snows, it won’t rain. Therefore, if it snows, the match will be played.

Failures of contraposition, transitivity, and monotonicity for some kinds of conditional sentences are, as a matter of fact, widely recognized in the literature (see e.g. Sainsbury 1991), and so is the need for a distinction between ordinary conditionals and conditionals for which these principles may fail. For Lewis, Jackson and others (see Lewis 1976 or Jackson 1979), validation of such laws marks a watershed between indicative conditionals, which are truth-functional, and subjunctive conditionals, which lead us into the realms of intensionality and are captured by possible worlds semantics. However, the above examples show that a conditional need not be in the subjunctive mood to exhibit intensional features and fail to abide by the laws of transitivity, contraposition, and monotonicity: it is easy to imagine situations where (15) is true while (16) and (17) are false and (18) has true premisses and a false conclusion. On the other hand, with (14) at least contraposition does not seem to cause trouble: its contrapositive sounds like a fair paraphrase of the original sentence, whose meaning it does not seem to affect. Thus, (15) cannot belong to the same category as (14), for contraposition (and, possibly, monotonicity and transitivity) holds for the latter but not for the former.

If failure of the above principles is not a distinctive property of subjunctive conditionals but is shared also by some indicatives, when exactly does it arise? An interesting suggestion, which we already hinted at when introducing our noncommutative disjunction, has been advanced for example by Priest (2001), who maintains that these logical principles break down when a hypothetical clause expresses a ceteris paribus enthymeme: what we are assenting to when we endorse (15), for example, is not the conditional itself, but rather something like

  1. (19)

    If it doesn’t rain then, other things being equal, the match will be played.

This duly accounts e.g. for the failure of contraposition: while the original conditional meant “If A and nothing relevant about A changes, then B”, the meaning of the contraposed sentence is “If ¬B, and nothing relevant about ¬B changes, then ¬A”. It is evident how this phenomenon is linked to the noncommutativity of the corresponding disjunction.Footnote 7

The previous observations lead one to surmise that there are (at least) three kinds of indicative conditionals in English. In short, some conditionals—like (13) above—satisfy the ex absurdo quodlibet: we call them extensional or squiggle conditionals and use for them the symbol ↝. Among those which do not, some—like our (14)—satisfy contraposition, while others—for example, (15)—provide patent violations of this schema and can be assimilated, at least in this respect, to subjunctive conditionals. We call the former intensional or arrow conditionals and the latter superintensional or corner conditionals, hereafter denoting them, respectively, by the symbols → and >.
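
For definiteness, and anticipating the formal apparatus of Sect. 4, the three kinds of conditionals can be glossed in terms of the corresponding disjunctions as

$$A \rightsquigarrow B \;=\; \neg A \sqcup B, \qquad A \rightarrow B \;=\; \neg A \oplus B, \qquad A > B \;=\; \neg A \curlyvee B,$$

where the first and the third equations are the official definitions given there, while the second holds up to provable equivalence, the arrow being taken as primitive.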

It seems appropriate, now, to try and answer some questions which may have occurred to the reader:

  1. 1.

    In the preceding section we surveyed a number of taxonomies of conditionals. How does our classification relate to them?

  2. 2.

    What is the relationship between the above connectives and the material conditional (here referred to by the symbol ⊃ ) of classical logic?

  3. 3.

    Does the relevant conditional of Anderson and Belnap (1975) coincide with any of the previous conditionals?

Let us face such issues one by one.

  1. 1.

    According to my proposal, some indicative conditionals share with subjunctive hypotheticals strongly intensional (I used, in fact, the term “superintensional”) features, which determine the failure of such logical principles as transitivity, contraposition, monotonicity. This placement of the cut-off point signals an irreconcilable rift with the traditional theory, and brings my suggestion closer to the approaches by Priest, Dudman and Gibbard. I guess that Priest’s distinction between ordinary and ceteris paribus conditionals, as well as Dudman’s distinction between “condensed arguments” and “imaginative projections”, or Gibbard’s one between epistemically and factually based conditionals, are all approximately correct and hinge at least in part on similar intuitions. I only think that marking the boundaries of each class of conditionals by means of an appeal to the operational properties of the respective underlying disjunctions allows one to remain on a firm logical ground, while Dudman’s grammatical criterion (DD on the one side, DW and HW on the other) or Gibbard’s epistemic criterion do not.

    Furthermore, it seems to me that my approach to the issue yields an extra bonus. I believe that the appeal to grammatical or epistemic criteria in the classification of conditionals has befuddled the debate to the extent that it has prevented some authors from recognising that DD (or epistemic) conditionals obey exactly the same logical laws as sufficiency conditionals. If this observation is correct, there is no need to create a separate category for implications: they belong together with epistemic conditionals to the class of arrow conditionals. In this way, we can finally hope to bridge the theories of entailment, of implication and of natural language conditionals, which have been kept artificially separate for such a long time, by aiming at a logical theory which expresses entailment and indicates both sufficiency and defeasible conditionals. In the final section of this paper I will try to flesh out in mathematical terms this basic informal intuition.

  2. 2.

    Although I am joining here the majority of relevant logicians (e.g. Anderson and Belnap 1975 or Read 1988) in drawing a sharp distinction between extensional and intensional connectives, mainstream relevant logicians also claim that the material conditional is no conditional connective, while I maintain that it is rather two conditional connectives in one: as argued more thoroughly in Paoli (2007), it is an ambiguous concept which has been paralogistically assigned the properties of both the squiggle and the arrow. Such a difference, to some extent, could be disregarded as merely verbal; more importantly, however, relevant logicians do not seem to distinguish sharply enough between the material conditional and the squiggle. In fact, even though modus ponens for the squiggle is not relevantly valid, if you replace ⊃ by ↝ all the classical implicational tautologiesFootnote 8 become theorems of the logic which is at the forefront of all relevant logical systems—Anderson’s and Belnap’s R (an illustration is given at the end of this list). Hence, the horseshoe and the squiggle come to obey exactly the same laws, though not the same inference rules. This consequence of the presence of suitably strong contraction principles in R is, in my opinion, a further reason not to favour the adoption of such principles. If A is neither accepted nor rejected, in fact, we have no ground for accepting A ↝ A, because we can neither reject its antecedent nor accept its consequent; nonetheless, A ↝ A should hold true if the squiggle obeyed the same laws as the material conditional. Hence, the material conditional cannot coincide with the squiggle; but it cannot coincide with the arrow either, because → does not satisfy the paradoxes of material implication while ⊃ does. Indeed, there is no connective fulfilling both the laws characterizing ↝ (such as the law of a fortiori, or the ex absurdo quodlibet) and those holding of → (such as the principles of identity, assertion and transitivity, or the rule of modus ponens). An endless series of paralogisms, of which C.I. Lewis’s “independent proof” is the prototypical example, originated from this equivocation.Footnote 9

  3. 3.

    It is instructive to notice that Anderson and Belnap, while insisting that the intensional disjunction means nothing else than “if not-A, then B”, where “if…then” stands for the relevant conditional, are not completely clear about whether such a conditional should be read as an indicative or a subjunctive conditional. Our impression is that the relevant conditional is seen by Anderson and Belnap as an all-purpose logical concept which can be used to model both the arrow and the corner:

    The truth of A-or-B, with truth functional “or”, is not a sufficient condition for the truth of “If it were not the case that A, then it would be the case that B”. […] On the other hand the intensional varieties of “or” which do support the disjunctive syllogism are such as to support corresponding (possibly counterfactual) subjunctive conditionals. When one says “That is either Drosophila melanogaster or Drosophila virilis, I’m not sure which” and on finding that it wasn’t Drosophila melanogaster concludes that it was Drosophila virilis, no fallacy is being committed. But this is precisely because “or” in this context means “if it isn’t the one, then it is the other”. […] But it should be equally clear that it is not simply the truth functional “or” either, from the fact that a speaker would naturally feel that if what he said was true, then if it hadn’t been Drosophila virilis, it would have been Drosophila melanogaster (Anderson and Belnap 1975, p. 176).

    The remark by Anderson and Belnap can be disputed (cp. Paoli 2007). Let us return to our earlier example of Oswald and Kennedy. When one says “Either Oswald or someone else killed Kennedy, I don’t know who”, and on finding that Oswald didn’t do it concludes that someone else did, no fallacy is being committed. But, as we already noticed, a speaker would not naturally feel that if what he said was true, then if Oswald hadn’t done it, someone else would have! What Anderson and Belnap seem to do, here, is to blend into their concept of the relevant conditional the properties of two different connectives (the arrow and the corner), both of an intensional nature, although to a different degree.Footnote 10
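
As promised in point 2 above, here is a standard illustration of the contrast between the squiggle and the arrow in R. Take Peirce’s law, a classical implicational tautology which is not a theorem of R when written with the arrow. Replacing ⊃ by ↝ and unfolding the squiggle disjunctively (as in Sect. 4) yields

$$((A \rightsquigarrow B) \rightsquigarrow A) \rightsquigarrow A \;=\; \neg (\neg (\neg A \sqcup B) \sqcup A) \sqcup A,$$

a purely extensional formula which is a two-valued tautology and hence a theorem of R, whose theorems in the extensional vocabulary are exactly the classical ones; the squiggle thus validates, in R, a law that the arrow does not.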

Summing up: my suggested taxonomy provides for at least three kinds of indicative conditionals: squiggles (↝), arrows (→) and corners (>), respectively definable in terms of an associative, commutative and idempotent disjunction (⊔), an associative and commutative, but non-idempotent disjunction (⊕), and a possibly non-associative and surely non-commutative and non-idempotent disjunction (⋎). Subjunctive conditionals can be assimilated to corner conditionals, except for the fact that—for grammatical reasons—it is unclear how to obtain therefrom syntactically adequate disjunctive paraphrases.Footnote 11 The next table summarises the information just given.

As far as I can see, even though such a picture is new on the whole, each single aspect of it has been anticipated in the debate over conditionals:

  •  Lewis (1973, 1976) and Jackson (1979), among others, realised the need for a distinction between corner conditionals (even though they mistakenly equated them with subjunctive conditionals) and some other kind of conditional, but neglected the divide between arrow and squiggle conditionals, identifying them with the hybrid concept of material conditional.

  • Dually, Anderson and Belnap (1975) correctly acknowledged the need for a distinction between squiggle conditionals (even though they mistakenly had them obey the same laws as material conditionals) and some other kind of conditional, but overlooked the divide between arrow and corner conditionals, identifying them with the hybrid concept of a relevant conditional.

Symbol | Name     | Disj. and Conj. | Properties of such    | Type of connection
↝      | squiggle | ⊔, ⊓            | assoc., comm., idemp. | no connection
→      | arrow    | ⊕, ⊗            | assoc., comm.         | subjective
>      | corner   | ⋎, ⋏            |                       | objective

It seems to me that only by amending the faults in the partly correct intuitions of both sides can one get the right demarcation lines.

To the best of my knowledge, the sole author who advocated a three-level logic with three different kinds of conjunctions, disjunctions and conditionals, characterised by distinct operational properties—even though in view of different applications—was Casari (1997). His research was a major source of inspiration for the present paper, in ways that will become more and more evident in the subsequent pages.

2.3 An Evaluation of Some Arguments by Bennett

In a paper on the classification of conditionals (Bennett 1995), Jonathan Bennett discusses some features of conditional sentences in order to corroborate the traditional dualist view, to which he reverted on that occasion after having taken sides with Dudman’s reforming proposal for a number of years. In this subsection I will examine some of his arguments, trying to assess them in the light of my own suggestion.

Bennett mainly discusses the placement of indicative DW conditionals; his chosen example is the sentence

  1. (20)

    If Booth doesn’t kill Lincoln, someone else will.

Bennett imagines the following situation:

Suppose that a bit before the fatal time, one conspirator is sure that plans are in place for Booth to make the attempt and for someone else to take over in the event that he fails. This conspirator, Oscar, has objectively connecting grounds for accepting something which he expresses in the words “If Booth doesn’t kill Lincoln, then someone else will”. Another conspirator, Sam, has subjectively connecting grounds for accepting something that he expresses in the very same sentence […e.g.] he hears someone being ordered to kill Lincoln; he thinks he was Booth, but he isn’t sure; he is sure that whoever gets the order will carry it out (Bennett 1995, pp. 334–336).

The main point of disagreement between Bennett and me arises precisely over the following issue: Bennett takes the sentence believed by Sam and the sentence believed by Oscar to be tokens of the very same proposition. The grounds for asserting it may be different in each case, but this is thought to have little to do with the meaning of the sentence itself:

Edgington issues this challenge: why not say simply that [it] expresses a single proposition—or means just one thing—which Oscar and Sam accept for different reasons? […] Suppose that Oscar has his objectively connecting reasons and doesn’t know Sam’s reason. Sam says “I think that if Booth doesn’t kill Lincoln, someone else will—don’t you agree?”. It would be excessively odd for Oscar to reply “It depends on what you mean” (Bennett 1995, p. 336).

The reason why the sentences uttered by Oscar and Sam might not express a single proposition was discussed above: epistemic conditionals imply their own contrapositives, while factual conditionals need not. Failure of contraposition may not affect (20) in particular; however, that something might go wrong with the ceteris paribus clause after the contraposition move is emphasised by the fact that, if someone says

  1. (21)

    If no one else kills Lincoln, Booth will.

probably Oscar will not assent; rather, he might correct his interlocutor by bringing into play an appropriate backtracking version of the conditional: “No, but perhaps what you mean is that if no one else kills Lincoln, it is because Booth already did it”. Vice versa, Sam is more likely to agree with (21) (“If it’s no one else, it means that I got it right, it’s going to be Booth”). This asymmetry should at least cast some doubts on Bennett’s claim.Footnote 12

The second argument is connected to Lewis’s celebrated triviality result according to which no non-trivial conditional proposition is such that a person’s confidence in it is proportional to the confidence that she accords to the consequent on the supposition that the antecedent holds true (i.e. no non-trivial conditional has what Bennett calls the confidence property). Bennett argues as follows: suppose that DW conditionals express factually based propositions (corner conditionals) on some occasions and subjectively based propositions (non-corner conditionals) on other occasions. In both cases, such conditionals patently have the confidence property. While non-corner conditionals, however, are amenable to being treated as either material conditionals or conditional assertions, and so do not contradict the theorem by Lewis (for material conditionals can after all be denied the confidence property, while conditional assertions do not express conditional propositions), such an escape is not available when corner conditionals are at issue. Thus, the triviality result generates a contradiction: the indicated instances of DW should both have and lack the confidence property.
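
In the form in which Lewis’s triviality results are usually presented, the confidence property amounts to the requirement that, writing “⇒” as a neutral placeholder for whatever conditional connective is at issue,

$$\Pr(A \Rightarrow B) \;=\; \Pr(B \mid A) \qquad \text{whenever } \Pr(A) > 0,$$

for every probability function Pr in the relevant class; Lewis’s theorem says that no non-trivial connective can satisfy this equation across such a class.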

I believe that the argument rests on a dubious premise—namely, that DW conditionals have the confidence property. In my opinion, on the contrary, none of ↝ , → , > really has the property. That squiggles lack it should be fairly obvious. But the same can be repeated also for arrows and corners, because they require a connection (of subjective or objective kind) between the antecedent and the consequent. My confidence in the conditional

  1. (22)

    If England beat France today, then the sun will rise tomorrow.

is virtually null whether the “if…then” is interpreted as an arrow or as a corner, but the probability I am willing to accord to “The sun will rise tomorrow” on the supposition that England beat France is as high as it can be. Thus, there is no contradiction in supposing that a DW conditional can express, according to circumstances, a squiggle, an arrow or a corner. At the very least, no such contradiction is entailed by Lewis’ theorem.

Let us now examine a couple of the remaining arguments advanced by Bennett in defence of the traditional taxonomy.

(A) The opt-out property. A subjunctive conditional, according to Bennett, has the opt-out property: “It can properly be accepted by someone who would, if he became sure of its antecedent’s truth, simply drop it, opt out, say that his conditional had presupposed something false and was therefore inoperative” (Bennett 1995, p. 341). On the other side, Bennett claims that indicative conditionals lack the property.

However, some counterexamples can be devised. Suppose that Oscar, who wants the death of Lincoln, hears that Booth is probably going to shoot him on that same day. Such a course of events would obviously suit Oscar, who could achieve his own treacherous end without exposing himself. Therefore, he will simply stand by and wait—if Booth has the nerve to kill the president, all the better; otherwise, he will do it personally. After a few months, Booth is convicted for the murder; Sam, who had come to know about Oscar’s plot, remarks: “If Booth hadn’t killed Lincoln, Oscar would have”. Finally, suppose that Booth is fully discharged on appeal. Now that he knows that the antecedent of his previous conditional was true, does Sam have any reason to opt out? Not quite: the most reasonable thing to do would seem to presume that, after all, Oscar actually carried out his plan. The above HW conditional does not seem to have the opt-out property.

Now, let us switch to our conditional (13) above, which is indicative and therefore, according to Bennett, should not allow the opt-out move. It is evident, however, that it does: squiggle conditionals are the prototypical examples of hypothetical sentences that are not (to speak the jargon of Jackson 1979) robust with respect to their antecedents, which means that it is possible to opt out once the truth of the protasis has been ascertained.

If the opt-out property induces any demarcation at all, then, it cannot be the one which Bennett points to. Rather, such a divide seems to cut the field across, setting squiggles and some corner conditionals in the subjunctive mood apart from arrows and other corner conditionals, both in the indicative and in the subjunctive mood.

(B) The zero property. Bennett says that a conditional “has the zero property if it is a conditional for which nobody could have any serious use while giving (its antecedent) a probability of zero”. He maintains that the zero property characterises indicative conditionals in opposition to subjunctive ones. However, if indicative conditionals of the (13) sort are of any serious use at all, it is precisely when their antecedents are given probability zero.

3 Simplification of Disjunctive Antecedents

3.1 Background

Let us now take stock and examine more closely the three logical levels previously introduced. As the word “level” itself suggests, I assume that the progressive decrease in the number of operational properties observed in passing from the extensional disjunction to the intensional and then to the superintensional one corresponds to an underlying ordering: the first level, where idempotency is retained, should be the most basic or fundamental one, while the remaining levels should constitute subsequent steps towards greater generality and should therefore be characterised by the rejection of some basic properties of disjunction. I will also award the intermediate level a distinguished status, appointing the arrow conditional as the linguistic analogue of a metalinguistic derivability relation. In more precise terms, this means that when we set up an axiom system for our logic, it will be the intensional level that will provide a set of equivalence formulae (namely, the symmetrisation of the arrow conditional) for the resulting deductive system. Remark that such a role is played by the material conditional in most conditional logics, including the systems by Stalnaker and Lewis, and by the relevant conditional in most relevant logics, including R.

A question now arises quite naturally: how should these levels be linked to one another? An appealing prima facie thought would lead us to rank our disjunctions and conditionals in order of inferential strength, with connectives of upper levels ranking higher than their lower level counterparts. However, I think we should resist this easy temptation, which would commit us to accepting both (23) and (24) below:

  1. (23)

    (A → B) → (A ↝ B).

  2. (24)

    (A > B) → (A → B).

(Remark that the principal connectives of both formulae, which intuitively express derivability claims, are arrow connectives! This is a consequence of the distinguished status I awarded to the intensional level.) I already discussed some reasons for disliking (23): given some modest assumptions in the underlying logic, it leads to an undesirable confusion between the squiggle and the horseshoe. (24), on the other side, closely resembles the principle known in the literature on conditionals as MP, or conditional modus ponens—indeed, it would be just MP if we had a horseshoe in place of the arrow. Although MP is a thesis of most conditional logics, it fails in some well-known basic conditional systems such as CK or V (Chellas 1975; Nute 1980), and it can be plausibly argued that it is an unwelcome principle for ceteris paribus conditionals.Footnote 13 However, the main reasons for my distrust in both (23) and (24) are the fact that, as argued above, the identity principle should hold for → but not for ↝ , thus contradicting (23); and the fact that there are principles, of which more below, which hold for > and not for → , thus contradicting (24).

Taking up a suggestion advanced in Casari (1997), I prefer to choose a different way to connect together the levels of our construction. I already underlined the fact that in substructural logics there are two families of conjunction and disjunction connectives—the intensional and the extensional. Generally speaking, distribution of conjunction over disjunction, and of disjunction over conjunction, fails within the same family, but holds if the distributing connective is intensional and the connective which is distributed over is extensional. In other words: neither extensional disjunction (conjunction) distributes over extensional conjunction (disjunction), nor does intensional disjunction (conjunction) distribute over intensional conjunction (disjunction), but intensional disjunction (conjunction) does distribute over extensional conjunction (disjunction).Footnote 14

My conditional logic simply adds one more superintensional level on top of the building: now we also have a noncommutative disjunction which is assumed to distribute from both sides over the conjunction of the level immediately below—i.e. the intensional level. Summing up, I assume that for n ∈ { 1, 2}, the disjunction (conjunction) of level n + 1 distributes over the conjunction (disjunction) of level n. This ensures the required connection among the different levels.
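
Spelled out schematically, the pattern comprises two groups of principles (with ↔ abbreviating mutual arrow implication, as in Sect. 4). The first line below records distribution laws which, as the next paragraph recalls, are already commonplace in substructural logics; the second records the newly assumed laws linking levels 2 and 3, which will reappear in conditional guise as axioms 3.1 and 3.2 of Sect. 4:

$$\begin{array}{l} A \oplus (B \sqcap C) \leftrightarrow (A \oplus B) \sqcap (A \oplus C), \qquad A \otimes (B \sqcup C) \leftrightarrow (A \otimes B) \sqcup (A \otimes C); \\ A \curlyvee (B \otimes C) \leftrightarrow (A \curlyvee B) \otimes (A \curlyvee C), \qquad (B \otimes C) \curlyvee A \leftrightarrow (B \curlyvee A) \otimes (C \curlyvee A), \end{array}$$

together with the dual schemata for ⋏ over ⊕.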

Distribution of intensional connectives over extensional connectives of different name is commonplace in substructural logics, and is well-motivated in the light of the inferential content of such constants (in fact, you do not need either weakening or contraction to derive such principles in the context of the sequent calculus for classical logic).Footnote 15 But why should we assume distributivity of superintensional connectives over intensional ones? Is such a move triggered merely by an aesthetic desire for symmetry, or can it be justified by deeper philosophical and logical reasons? Well, it turns out that upholding such distribution patterns amounts to taking up an especially plausible form of the well-known and controversial principle of simplification of disjunctive antecedents (SDA; Nute 1980, 1984). If we denote by ∧ (respectively, by ∨) the classical, truth-functional conjunction (disjunction), the standard form of this principle reads as follows:

  1. (25)

    (A ∨ B > C) ⊃ (A > C) ∧ (B > C).

In many cases, this law appears to encode an intuitively plausible inference. For example, the following conditional with disjunctive antecedent from Nute (1984) seems to imply the conditionals that retain the same consequent and have as respective antecedents the members of the disjunction:

  1. (26)

    If the world’s population were smaller or agricultural productivity were greater, fewer people would starve.

Nonetheless, the validity of SDA has been the centre of a heated debate in conditional logic. On the “pro” side, it has been remarked that inferences like the one we just considered appear to be valid not in virtue of the meanings of the terms occurring therein, but in virtue of their logical form: and, if SDA happens to be invalid, what other valid argument schema could they possibly instantiate?

On the “con” side, however, it has been observed that SDA yields all the Undesirables (transitivity, monotonicity, contraposition) if paired with the seemingly innocent principle of substitution of provable equivalents (SPE) in the framework of even very weak conditional logics. By way of example, let us show that SDA together with SPE entails monotonicity. Let \(\chi = A \wedge C\) and \(\psi = A \wedge \neg C\):

$$\begin{array}{lll} 1. & A \equiv \chi \vee \psi & \text{CPC thesis} \\ 2. & (A > B) \supset (\chi \vee \psi > B) & 1,\ \text{SPE} \\ 3. & (\chi \vee \psi > B) \supset (\chi > B) \wedge (\psi > B) & \text{SDA} \\ 4. & (A > B) \supset (\chi > B) \wedge (\psi > B) & 2, 3,\ \text{transitivity of } \supset \\ 5. & (\chi > B) \wedge (\psi > B) \supset (\chi > B) & \text{conj. simplification} \\ 6. & (A > B) \supset (\chi > B) & 4, 5,\ \text{transitivity of } \supset \end{array}$$

Furthermore, it has been contended that, even though most of its instances look unexceptionable, there are cases which are not that self-evident. For example (27) below does not seem to imply (28) (Nute 1984):

  1. (27)

    If the US devoted more than half of its national budget to defence or to education, it would devote more than half of its national budget to defence.

  2. (28)

    If the US devoted more than half of its national budget to defence, it would devote more than half of its national budget to defence; and if the US devoted more than half of its national budget to education, it would devote more than half of its national budget to defence.

A way out of the puzzle has been suggested by Loewer (1976), who claims that SDA, per se, never expresses a reliable mode of inference. Even its seemingly correct instances do not confirm its soundness, because the real logical form of their antecedents is (A > C) ∧ (B > C), rather than A ∨ B > C: hence such instances are actually instances of the identity principle. This solution, however, besides having a slight ad hoc flavour, borders on circularity: if asked when it is the case that a conditional with disjunctive antecedents should be formalised as a conjunction of conditionals, the supporter of this “translation lore” account cannot help but reply that this happens exactly when SDA fails.

A more convincing variant of this approach has been put forward by Humberstone (1978), who introduces a unary connective in the form of an antecedent forming operator (“If A”): thus, the binary conditional connective connects an antecedent whose logical form is “If A” and a standardly formed consequent. The difference between conditionals of the (26) and of the (27) type is that in (26) the antecedent distributes over the disjunction, while in (27) it does not; put differently, it has wide scope in the latter and narrow scope in the former. Although one cannot repeat here the same charges that had been levelled against the Loewer account, I observe that also in this proposal sentences having the same surface grammatical form are treated as having different logical forms. This, at the very least, shifts upon its propounder the burden of providing independent reasons—namely, reasons independent of its accounting for the failure of SDA—for such a move.

Other writers prefer to retain SDA and to drop SPE. Nute (1980), for example, sets up systems of conditional logic where substitution of provable equivalents is not unconditionally valid; but “these systems are extremely cumbersome and there still is the extra-formal problem of justifying the particular choice of substitutions which are to be allowed in the logic” (Nute 1984, p. 416). As the last quotation shows, Nute later changed his mind and came to distrust SDA, attributing its prima facie appeal to the action of pragmatic conversational rules.

Finally, some linguists have recently suggested theories based on nonstandard natural language semantics accounts of disjunction or of the conditional (Alonso-Ovalle 2008; Klinedinst 2007). The merits of such proposals remain to be carefully assessed.

In sum, there is still no universally accepted account either of the problem of simplification of disjunctive antecedents or of the tension between SDA and SPE.

3.2 A New Proposal

My approach proceeds along the following lines. Both SPE and SDA, if understood classically, are partly faulty. SPE is a principle of substitution of provably material equivalents; but material equivalence is no less ambiguous than the material conditional is. That allowing substitution of provably material equivalents is indeed wrong is borne out by line 1 of the above proof of monotonicity, which is a notorious paradox of material implication, used by C.I. Lewis in his proof that A entails B ∨  ¬B for A, B whatsoever. Therefore, since the conditional connective which mirrors at the language level the metalinguistic derivability relation in my logic is the arrow, I only endorse a substitution principle for provably arrow equivalents. The previous proof, as a result, breaks down at its very beginning. In the formal theory below, moreover, I will show that it is not only this specific proof of the Undesirables which fails—these principles, in fact, are demonstrably independent of the axiom system I shall set up.
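
Concretely, the only substitution principle endorsed here is the one that will appear as rules R3 and R4 of the formal system in Sect. 4:

$$A \rightarrow B,\; B \rightarrow A \;\vdash\; (A > C) \rightarrow (B > C), \qquad A \rightarrow B,\; B \rightarrow A \;\vdash\; (C > A) \rightarrow (C > B).$$

Since line 1 of the derivation above records only a classically (indeed, extensionally) valid equivalence between A and χ ∨ ψ, and not a pair of provable arrow conditionals, the substitution performed at line 2 is no longer licensed.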

So much for SPE. But even SDA needs to be rendered more precise, since it contains ambiguous classical connectives by the score. Once this has been done, the observed tension between cases which seem to disconfirm the principle and cases where no trouble is caused immediately disappears. To see why this is so, return for a while to (27) and (28) above. What kind of “if…then” and “either…or” are at issue here? Well, the conditional is a subjunctive one, hence necessarily a corner conditional; as to the disjunction, it is readily acknowledged that if the antecedent of (27) were to be asserted, it would not be possible to do so because of a connection between the disjuncts, but only on the ground that a single disjunct is accepted. We have therefore an extensional disjunction. The logical forms of (27) and (28) are thus, respectively,

  1. (29)

    A ⊔ B > C.

  2. (30)

    (A > C) ⊓(B > C).

which are in turn respectively equivalent, via the interdefinability of > and ⋎ and the De Morgan laws, to

  1. (31)

    (¬A ⊓ ¬B) ⋎ C.

  2. (32)

    (¬A ⋎ C) ⊓ (¬B ⋎ C).

(32) would then follow from (31) if distribution of superintensional disjunction over extensional conjunction were permissible. But our discussion above suggests that it is not: there is no reason why a disjunction of level n + 2 should distribute over a conjunction of level n.

Another case where SDA seems to fail can be accounted for along similar lines. (33) below does not seem to imply (34):

  1. (33)

    If the butler or the gardener did it, then if it wasn’t the butler it was the gardener.

  2. (34)

    If the butler did it, then if it wasn’t the butler it was the gardener, and if the gardener did it, then if it wasn’t the butler it was the gardener.

Here, both the disjunction and the conditional are obviously intensional. Arguing as above, it is soon realized that (34) would follow from (33) if distribution of intensional disjunction over intensional conjunction were permissible. But our discussion above suggests that it is not: there is no reason why a disjunction of level n should distribute over a conjunction of the same level.

A careful examination of the intuitively plausible instances of SDA—like (26)—reveals that the conditionals occurring therein are corner conditionals, while the disjunction is an intensional one. Hence, such instances are sound precisely because they call into play the distribution principle whose assumption we advocated at the beginning of this section. Here are other relevant examples drawn from the literature:

  1. (35)

    If Thorpe or Wilson were to win the next general election, Britain would prosper (Fine 1975).

  2. (36)

    If New Zealand had either not sent a rugby team to South Africa or had withdrawn from the Montreal games, then Tanzania would have competed (Ellis et al. 1977).
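
It may help to see the licensed distribution at work in such examples. Reading the “either…or” intensionally and the conditional as a corner, and using the interdefinability of > and ⋎ together with De Morgan (this time for the intensional connectives), a conditional of the form A ⊕ B > C unfolds, modulo double negation, as

$$\neg (A \oplus B) \curlyvee C \;=\; (\neg A \otimes \neg B) \curlyvee C,$$

which the distribution of the level 3 disjunction over the level 2 conjunction turns into

$$(\neg A \curlyvee C) \otimes (\neg B \curlyvee C),$$

i.e. into (A > C) ⊗ (B > C); this is exactly the principle that will be adopted as axiom 3.2 in the next section.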

4 The Formal Theory

4.1 Syntax

In this subsection, I provide the informal intuitions of the preceding sections with more formal clothing. I will extend the system HL of Paoli (2002), corresponding to a Hilbert-style axiomatisation of subexponential linear logic without lattice bounds, by adding superintensional connectives to it. The deductive system thus obtained will be dubbed CHL. It corresponds to a very weak conditional logic, a sort of substructural (and paraconsistent) version of the system CE of Nute (1980).

Definition(The language of CHL). 

CHL is formulated in a propositional language £ containing a denumerable set Var(£) of variables and the connectives 0, 1 (nullary) and ⊓, ⊔, ⊗, →, ⋏ (binary). Defined connectives are:

$$\begin{array}{ll} \neg p & = p \rightarrow 0 \\ p \rightsquigarrow q& = \neg p \sqcup q \\ p \oplus q & = \neg (\neg p \otimes \neg q) \\ p \curlyvee q & = \neg (\neg p \curlywedge \neg q) \\ p > q & = \neg p \curlyvee q \end{array}$$

A ↔ B will sometimes be used as a metalinguistic abbreviation for the set {A → B, B → A}. We follow the convention according to which unary connectives bind stronger than binary ones, and ⊓, ⊔, ⊗, ⊕, ⋏, ⋎ bind stronger than ↝, →, >. The class of formulae of £ (Fm(£)) is defined as usual. The ⋏-free fragment of such a language will be denoted by £ .

Definition(Postulates of CHL). 

Here are the postulates of CHL:

Axioms for nullary and unary connectives:

\(\begin{array}{ll} 0.1&\neg \neg A \rightarrow A \\ 0.2&(\neg A \rightarrow \neg B) \rightarrow (B \rightarrow A) \\ 0.3&1 \\ 0.4&1 \rightarrow (A \rightarrow A) \\ 0.5&0 \leftrightarrow \neg 1\end{array}\)

First level axioms:

\(\begin{array}{ll} 1.1&A \rightarrow (\neg A \rightsquigarrow B) \\ 1.2&B \rightarrow (A \rightsquigarrow B) \\ 1.3&\neg ((A \rightarrow C) \rightsquigarrow \neg (B \rightarrow C)) \rightarrow ((\neg A \rightsquigarrow B) \rightarrow C)\end{array}\)

Second level axioms:

\(\begin{array}{ll} 2.1&A \rightarrow A \\ 2.2&(A \rightarrow B) \rightarrow ((B \rightarrow C) \rightarrow (A \rightarrow C)) \\ 2.3&(A \rightarrow (B \rightarrow C)) \rightarrow (B \rightarrow (A \rightarrow C)) \\ 2.4&A \rightarrow (B \rightarrow A \otimes B) \\ 2.5&(A \otimes B \rightarrow C) \leftrightarrow (A \rightarrow (B \rightarrow C))\end{array}\)

Third level axioms

\(\begin{array}{ll} 3.1&(A > B \otimes C) \leftrightarrow (A > B) \otimes (A > C) \\ 3.2&(A \oplus B > C) \leftrightarrow (A > C) \otimes (B > C)\end{array}\)

Rules

\(\begin{array}{ll} R1&A,A \rightarrow B \vdash B \\ R2&A,B \vdash A \sqcap B \\ R3&A \rightarrow B,B \rightarrow A \vdash (A > C) \rightarrow (B > C) \\ R4&A \rightarrow B,B \rightarrow A \vdash (C > A) \rightarrow (C > B)\end{array}\)

Observe that 3.2 encodes the nonproblematic version of SDA, while R3 and R4 formalise SPE—or, rather, substitution of provably arrow equivalents. The notions of proof (from assumptions) and derivability in CHL are defined as usual; by Γ ⊢  CHL A I will mean that the formula A is derivable in the calculus CHL from the assumptions in Γ. The relation ⊢  CHL is a finitary and substitution-invariant consequence relation. Therefore, we can abstractly identify CHL with the deductive system ⟨Fm(£), ⊢  CHL ⟩, something I will feel free to do hereafter.

Here is a list of additional postulates, named after the traditional labels they receive in the literature, from which one could draw to extend the third level of CHL:

RCK ⊓: A₁ ⊓ … ⊓ Aₙ → B ⊢ (C > A₁) ⊓ … ⊓ (C > Aₙ) → (C > B) (n ≥ 0)

RCK ⊗: A₁ ⊗ … ⊗ Aₙ → B ⊢ (C > A₁) ⊗ … ⊗ (C > Aₙ) → (C > B) (n ≥ 0)

ID: A > A

CA: (A > B ⋏ C) ↔ (A > B) ⊓ (A > C)

MOD: A⋎A → (B > A)

CSO: (A > B) ⊗ (B > A) → ((A > C) → (B > C))

CV: (A > B) ⊓ (A ⋏ B) → (A ⋏ C > B)

CS: A ⊓ B → (A > B)

CEM: (A >  ¬B) ⊕ (A > B)

MP: (A > B) → (A → B)

TR: (A > B) ⊗ (B > C) → (A > C)

CONTR: (A >  ¬B) → (B >  ¬A)

MON: (A > B) → (A ⊓ C > B)
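For instance, the n = 1 case of RCK⊗ (which coincides with the n = 1 case of RCK⊓) already strengthens R4, since it requires only one of R4's two premisses:

$$A \rightarrow B \;\vdash\; (C > A) \rightarrow (C > B).$$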

4.2 Algebra

The aim of this subsection is to identify a class of algebras which functions as an equivalent algebraic semantics for CHL. Let me remark that this semantics is meant to be, in the terminology of Copeland (1983), a typically applied semantics—as opposed to a pure, explicative semantics yielding deep insights into the logic being interpreted. Its only aim is to show that the above logic is consistent and that SDA can happily live therein together with SPE, while keeping the Undesirables from the door. To begin with, let us recall a fundamental concept from the algebraic semantics of substructural logics (see e.g. Galatos et al. 2007):

Definition

An FL e -algebra (also called a pointed commutative residuated lattice) is an algebra

$$\mathbf{L} =\langle L,\otimes,\rightarrow,\sqcap,\sqcup,0,1\rangle,$$

of type £⁻,Footnote 16 such that:

  • ⟨L, ⊓, ⊔⟩ is a lattice;

  • ⟨L, ⊗, 1⟩ is an Abelian monoid;

  • For every a, b, c ∈ L, we have that a ⊗ b ≤ c iff a ≤ b → c, where ≤ denotes the induced order of the lattice reduct ⟨L,  ⊓, ⊔ ⟩.

An FL e -algebra is called involutive iff it satisfies the identity

$$(p \rightarrow 0) \rightarrow 0 \approx p$$

The class of (involutive) FL e -algebras is a variety in its type: the residuation quasi-equations can be dispensed with in favour of a finite set of equations (Galatos et al. 2007). We now want to define an expansion of involutive FL e -algebras in the language £, in order to provide a suitable interpretation for the superintensional connectives.
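Two standard facts are worth recording before moving on. First, the real unit interval with the usual order, with the Łukasiewicz operations x ⊗ y = max(0, x + y − 1) and x → y = min(1, 1 − x + y), and with 0 = 0 and 1 = 1, is an involutive FL e -algebra; its three-element subalgebra on {0, 1/2, 1} is the MV chain mentioned below. Second, since 1 is the unit of ⊗, residuation immediately gives, for all a, b,

$$a \leq b\quad \Longleftrightarrow \quad 1 \otimes a \leq b\quad \Longleftrightarrow \quad 1 \leq a \rightarrow b,$$

an equivalence that will be appealed to in the algebraisability proof below.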

Definition

A ringoidal involutive FL e -algebra is an algebra

$$\mathbf{L} =\langle L,\otimes,\rightarrow,\sqcap,\sqcup,\curlywedge,0,1\rangle,$$

of type £, such that:

  • ⟨L, ⊗, →, ⊓, ⊔, 0, 1⟩ is an involutive FL e -algebra;

  • The term reduct ⟨L, ⋏, ⊕⟩ is a ringoid, i.e., for every a, b, c ∈ L,

    $$\begin{array}{rcl} a \curlywedge (b \oplus c)& =& (a \curlywedge b) \oplus (a \curlywedge c); \\ (b \oplus c) \curlywedge a& =& (b \curlywedge a) \oplus (c \curlywedge a)\end{array}$$

While the variety of involutive FL e -algebras has been investigated in great detail (see e.g. Galatos et al. 2007 or Paoli 2002, where it is actually a term equivalent variant which is under scrutiny), the variety of ringoidal involutive FL e -algebras (hereafter denoted by \(\mathbb{R}\)) is new. Our first duty, therefore, is to show that it is not empty by providing appropriate examples. Here are some.

Example

Any lattice-ordered ring

$$\mathbf{R} =\langle R,+,\cdot,\sqcap,\sqcup,-,\mathbf{0}\rangle$$

gives rise to a ringoidal involutive FL e -algebra by taking, for any a, b ∈ R,

$$\begin{array}{rcl} a \otimes b& =& a \oplus b = a + b \\ a \rightarrow b& =& b - a \\ a \curlywedge b& =& a \curlyvee b = a \cdot b \\ 0& =& 1 = \mathbf{0}\end{array}$$
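The verification that these assignments satisfy the required conditions is routine. Purely as a sanity check (and not as part of the original construction), here is a small Python sketch that spot-checks residuation, involutivity and one of the ringoid equations over the integers, reading ⋏ as ring multiplication as in the displayed clauses; all function names are of course ad hoc.

```python
# Spot-check, over a finite sample of integers, the clauses of the
# lattice-ordered ring construction: fusion = +, residual a -> b = b - a,
# negation = additive inverse, lattice = usual order on Z,
# curly wedge = ring multiplication.
from itertools import product

def otimes(a, b): return a + b                 # multiplicative conjunction
def arrow(a, b):  return b - a                 # its residual
def neg(a):       return arrow(a, 0)           # ¬a = a → 0 = -a
def cwedge(a, b): return a * b                 # ⋏ read as ring product
def oplus(a, b):  return neg(otimes(neg(a), neg(b)))   # defined ⊕

sample = range(-3, 4)
for a, b, c in product(sample, repeat=3):
    # residuation: a ⊗ b ≤ c  iff  a ≤ b → c
    assert (otimes(a, b) <= c) == (a <= arrow(b, c))
    # ringoid law: a ⋏ (b ⊕ c) = (a ⋏ b) ⊕ (a ⋏ c)
    assert cwedge(a, oplus(b, c)) == oplus(cwedge(a, b), cwedge(a, c))
for a in sample:
    assert neg(neg(a)) == a                    # involutivity
print("all spot checks passed")
```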

The previous example does not yield, of course, any nontrivial finite algebra. The next one, due to Pierluigi Minari (and cited in Casari 1997), does. Remark that in this case the FL e -algebra reduct is actually an FL ew -algebra.

Example

It is possible to extend the three-element MV chain by ring-theoretical operations in such a way as to get the algebra MV 3 r with the following tables:

4.3 Semantics

We now establish the required bridge between CHL and ringoidal involutive FL e -algebras:

Theorem

The deductive system CHL is strongly and finitely algebraisable with equivalence formulas {p → q,q → p} and defining equation {1 ⊓ p ≈ 1}, and its equivalent algebraic semantics is the variety \(\mathbb{R}\) of ringoidal involutive FL e -algebras.

Proof.

We must show that:

  1.

     ⊢  CHL can be faithfully interpreted into the equational consequence relation \({\vDash }_{\mathbb{R}}\) of \(\mathbb{R}\), i.e. for any Γ ⊆ Fm(£) and any A ∈ Fm(£),

    $$\Gamma {\vdash }_{\mathbf{CHL}}A\text{ iff }\{1 \sqcap B \approx 1 : B \in \Gamma \} {\vDash }_{\mathbb{R}}1 \sqcap A \approx 1;$$
  2.

    \({\vDash }_{\mathbb{R}}\) can be faithfully interpreted into ⊢  CHL , i.e. for any set of £-equations Γ ≈ Δ and any £-equation A ≈ B,

    $$\Gamma \approx \Delta \;{\vDash }_{\mathbb{R}}\; A \approx B\ \text{ iff }\ \left \{C \rightarrow D,\,D \rightarrow C : C \in \Gamma,\,D \in \Delta \right \}\;{\vdash }_{\mathbf{CHL}}\;\{A \rightarrow B,B \rightarrow A\};$$
  3.

    The two interpretations are mutually inverse, i.e.

    $$\begin{array}{rcl} & & p {\vdash }_{\mathbf{CHL}}\{1 \sqcap p \rightarrow 1,1 \rightarrow 1 \sqcap p\} \\ & & \{1 \sqcap p \rightarrow 1,1 \rightarrow 1 \sqcap p\} {\vdash }_{\mathbf{CHL}}p \\ & & p \approx q {\vDash }_{\mathbb{R}}\{1 \sqcap (p \rightarrow q) \approx 1,1 \sqcap (q \rightarrow p) \approx 1\} \\ & & \{1 \sqcap (p \rightarrow q) \approx 1,1 \sqcap (q \rightarrow p) \approx 1\} {\vDash }_{\mathbb{R}}p \approx q \\ \end{array}$$

By Proposition 7.2 in Jansana (201+), it is enough to establish item 1 and the last two lines of item 3. As to the first item, this is the content of a standard strong soundness and completeness theorem, and so it can be established as usual—via an inductive argument on the length of derivations for the soundness part, and a Lindenbaum algebra argument for the completeness part. The second half of item 3 can be proved as follows. Suppose that A ∈ \(\mathbb{R}\), that a, b ∈ A and that a = b. Then a ≤ b, whence 1 ≤ a → b, and b ≤ a, whence 1 ≤ b → a. Conversely, if 1 ≤ a → b, b → a, then a ≤ b and b ≤ a, whereby a = b. □ 
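In view of the defining equation, a formula A holds in an algebra of \(\mathbb{R}\) under a valuation v exactly when

$$1 \sqcap v(A) = 1\quad \Longleftrightarrow \quad 1 \leq v(A);$$

so, to show that a postulate is not derivable in CHL, it suffices to exhibit an algebra in \(\mathbb{R}\) and a valuation under which the value of (an instance of) that postulate fails to dominate the unit. This is how the falsifying values below are to be read.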

The main point of the previous semantics was to show the independence of the principles of transitivity, monotonicity and contraposition, in order to prove that SPE and SDA, if appropriately disambiguated, can live together without forcing us to take the Undesirables aboard. The next proposition does the trick.

Proposition

The principles ID, MOD, CS, CEM, MP, TR, CONTR, MON are all independent of CHL.

Proof.

We provide falsifying models for some instances of the mentioned principles, whence the result follows by Theorem 11.1. Consider the ringoidal involutive FL e -algebra MV 3 r of Example 11.2. A counterexample to ID is given by

$${(p > p)}^{{\mathbf{MV}}_{3}^{r} }\left (\frac{1} {2}\right ) = \frac{1} {2}$$

Counterexamples to MOD, CS, CEM are respectively given by

$$\begin{array}{rcl} {(p \curlyvee p \rightarrow (q > p))}^{{\mathbf{MV}}_{3}^{r} }\left (\frac{1} {2},1\right )& =& \frac{1} {2} \\ {(p \sqcap q \rightarrow (p > q))}^{{\mathbf{MV}}_{3}^{r} }\left (1, \frac{1} {2}\right )& =& \frac{1} {2} \\ {((p > \neg q) \oplus (p > q))}^{{\mathbf{MV}}_{3}^{r} }\left (1, \frac{1} {2}\right )& =& 0\end{array}$$

CONTR is falsified in any noncommutative lattice-ordered ring, as (p > ¬q) → (q > ¬p) is equivalent to (¬p ⋎ ¬q) → (¬q ⋎ ¬p). Finally, consider the lattice-ordered ring Z of the integers. Counterexamples to MP, TR, MON are respectively given by

$$\begin{array}{rcl} {((p > q) \rightarrow (p \rightarrow q))}^{\mathbf{Z}}(+3,-1)& =& -1 \\ {((p > q) \otimes (q > r) \rightarrow (p > r))}^{\mathbf{Z}}(+3,-1,+2)& =& -11 \\ {((p > q) \rightarrow (p \sqcap r > q))}^{\mathbf{Z}}(+3,-1,+2)& =& -1\end{array}$$