
1 On Repugnancies and Contradictions

In his well-known invective, Bishop Berkeley tried to reveal contradictions in the infinitesimal calculus, perplexed as he was by the “evanescent increments” that are neither finite nor infinitely small quantities (and “nor yet nothing”, but merely “ghosts of departed quantities”). In Section L, ‘Occasion of this Address. Conclusion. Queries.’ of The Analyst; or, a Discourse Addressed to an Infidel Mathematician (cf. Berkeley 1734), Berkeley asks:

Whether the Object of Geometry be not the Proportions of assignable Extensions? And whether, there be any need of considering Quantities either infinitely great or infinitely small?

and

Whether [mathematicians] do not submit to Authority, take things upon Trust, and believe Points inconceivable? Whether they have not their Mysteries, and what is more, their Repugnancies and Contradictions?

In an indirect sense, Berkeley’s The Analyst was very influential in the development of mathematics. The piece was a direct attack on the foundations and principles of calculus and, in particular, on Newton and Leibniz’s notion of infinitesimal change (or fluxions). It is not to be denied that the resulting controversy gave impetus to the later reworking of the foundations of calculus in a much more formal and rigorous form, through the use of limits.

However, Berkeley’s criticisms did not have an effect on everyone. Leonhard Euler, for instance, paid little attention to the invectives against the use of infinite series, and found an astonishing new proof of the fact, originally proved by Euclid, that there are infinitely many prime numbers. Departing from Euclid’s purely combinatorial proof, Euler realized (cf. Sandifer 2006) that the distinction between divergent and convergent series could be used to establish this infinitude. Indeed, by comparing infinite sums and products:

$$\frac{2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdots }{1 \cdot 2 \cdot 4 \cdot 6 \cdot 10 \cdots } = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots $$

or, in contemporary notation:

$$\prod _{p\ \mathrm{prime}} \frac{p}{p - 1} = \sum _{n \geq 1} \frac{1}{n}$$

it is easy to see that the right-hand harmonic series is divergent; hence there must be infinitely many primes, for otherwise the left-hand side would be a finite product and a divergent series would equal a finite quantity. Had Euler been afraid of the label “mathematician” (with its connotations of magician and astrologer, as opposed to “geometer”), he would never have dared to mix the continuous with the discrete. Euler’s idea turned out to be extremely fruitful: bringing the “forbidden” analysis into the investigation of prime numbers allows for a much more powerful technique than mere combinatorial counting, as further results by Dirichlet and others have revealed. What was at stake was not so much how many numbers there are, but how they are distributed. Analytic number theory was thus born, owing its existence to free-thinkers like Euler and ultimately paving the way for the Riemann Hypothesis (see Derbyshire 2003).
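Euler’s argument can be sketched numerically (an illustration added here, not part of the historical text): pretend that 2, 3 and 5 were the only primes; then every integer n ≥ 1 would be built from them, and the sum of all 1/n would be bounded by the finite product ∏ p/(p−1) = 3.75, which the harmonic series visibly outgrows.

```python
# If {2, 3, 5} were ALL the primes, every n >= 1 would be "5-smooth", and
#     sum(1/n) = (2/1)*(3/2)*(5/4) = prod p/(p-1) = 3.75,
# a finite bound -- but the true harmonic series exceeds any bound.

def smooth_numbers(primes, limit):
    """All n <= limit whose prime factors all lie in `primes`."""
    found, frontier = {1}, [1]
    while frontier:
        n = frontier.pop()
        for p in primes:
            m = n * p
            if m <= limit and m not in found:
                found.add(m)
                frontier.append(m)
    return sorted(found)

primes = [2, 3, 5]
bound = 1.0
for p in primes:
    bound *= p / (p - 1)          # 3.75: the would-be value of the full sum

smooth_sum = sum(1 / n for n in smooth_numbers(primes, 10**6))
harmonic_100 = sum(1 / n for n in range(1, 101))

print(bound)          # 3.75
print(smooth_sum)     # just under 3.75: the smooth series converges to the bound
print(harmonic_100)   # ~5.19, already past the bound: more primes must exist
```

Since the partial harmonic sums pass 3.75 (and indeed any bound), the hypothetical finite set of primes cannot account for all integers.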

If we accept that repetitive patterns are to be found in science, it may not be a coincidence that paraconsistent negations, that is, negations such that a contradiction does not imply everything, raise perplexities of an analogous sort. This is certainly the case, in particular, with the kind of negation supported by the Logics of Formal Inconsistency (LFIs) introduced in Carnielli and Marcos (2002) and further developed in Carnielli et al. (2007). Some criticisms of negations in LFIs are based on the idea that paraconsistent negations are not negations (in the same way that infinitesimal numbers would be considered not to be numbers). Other criticisms, specifically those concerning the core of LFIs, fail to see a central point: how is it possible that A and ¬A can be simultaneously held as true (without explosion), while the “consistency” of A is also held as true? How can both something and its negation be true and consistent? Is it not inherent in the nature of consistency to require that anything and its negation necessarily have different truth status?

In fact, we wish to argue that this is not only plainly possible, but quite usual: logicians, infidel or not, perform this type of reasoning very often, and it is precisely the fact that A and ¬A are true, while it is at the same time not the case that A is consistent (see Sect. 3.3) which blocks the deductive explosion. On the other hand, with respect to the claim that paraconsistent negations are not negations, a possible answer is that paraconsistent negations can be seen as generalized negations in the same way that infinitesimal numbers are generalized (non-standard) real numbers.

But let us first examine an example of an argument that greatly baffled Berkeley. The example is a contemporary rephrasing of a more geometric one, given by Berkeley himself and found in Lemma 2 of Sect. 1 of the Philosophiae Naturalis Principia Mathematica (Newton 1726). It is a typical argument using infinitesimal quantities: in order to find the derivative f′(x) of the function \(f(x) = x^{2}\), let dx be an infinitesimal; then

$$\begin{array}{ll} f{^\prime}(x)& = \frac{f(x + dx) - f(x)}{dx} \\ & = \frac{{(x + dx)}^{2} - {x}^{2}}{dx} \\ & = \frac{{x}^{2} + 2x \cdot dx + d{x}^{2} - {x}^{2}}{dx} \\ & = \frac{2x \cdot dx + d{x}^{2}}{dx} \\ & = 2x + dx \\ & = 2x \end{array}$$

since dx is infinitely small.

The fundamental problem pointed out by Berkeley is that dx is first treated as non-zero (because we divide by it), but later discarded as if it were zero:

These Expressions [dx, ddx, etc] indeed are clear and distinct, and the Mind finds no difficulty in conceiving them to be continued beyond any assignable Bounds. But if we remove the Veil and look underneath, if laying aside the Expressions we set ourselves attentively to consider the things themselves, which are supposed to be expressed or marked thereby, we shall discover much Emptiness, Darkness, and Confusion; nay, if I mistake not, direct Impossibilities and Contradictions. Whether this be the case or no, every thinking Reader is entreated to examine and judge for himself.

In a sense, Berkeley was right: the idea of infinitesimal was at that time a naive concept, namely, “a number whose absolute value is less than any non-zero positive number”. Thus, using the order properties of real numbers, it can be easily proved that there are no non-zero real infinitesimals! From Berkeley’s perspective, infinitesimals are not numbers but aberrant intruders, and so should be removed from mathematical reality. But how can Berkeley be completely right, and infinitesimal calculus be what it is today?

Karl Weierstrass and others, by using the rigorous notion of limit, were able to give a formal mathematical foundation for calculus in the second half of the nineteenth century: the epsilon-delta interpretation. This mathematical formulation, without using “abnormalities” such as infinitesimals, removed all concerns about the illegitimacy of calculus. This was, however, a sort of foundational compromise, with concessions to both sides: the notion of infinitesimal had to be replaced by a process (the limit), and in this Berkeley’s criticisms of the eighteenth-century mathematicians were in principle correct; but the task was not impossible, and in this he was wrong. Two centuries later, in any case, the fluxions vindicated their reputation at the hands of logicians.

2 Infidelities and Perplexities: Are Paraconsistent Negations Genuine Negations?

What Bishop Berkeley couldn’t see is that the feasibility of infinitesimals depends on the breadth of the mathematical context: they are not mathematically “impossible” or “wrong”; they just find a place in a wider mathematical scenario. As is well known, the original appealing idea of Leibniz and Newton of describing differential calculus by using infinitesimal quantities can be recovered in rigorous mathematical terms by means of the non-standard models of contemporary model theory. What this achievement shows is that infinitesimals are numbers of a new kind, introduced conservatively by extending the previous field of numbers.

Such new species of numbers are nothing other than infinitesimals made possible: Abraham Robinson’s nonstandard analysis of the 1960s (cf. Robinson 1966) extends the set of real numbers to the hyperreals, which contain numbers smaller (in absolute value) than any positive real number. In this formulation an infinitesimal is a non-standard number whose absolute value is less than any non-zero positive standard number. Other paradigms were introduced to deal with infinitesimals, such as John Conway’s surreal numbers (which are algebraically equivalent to hyperreals; cf. Conway 1976 and also Knuth 1974), Edward Nelson’s internal set theory (cf. Nelson 1977), and synthetic differential geometry (also known as smooth infinitesimal analysis; cf. Lawvere 1998), which is based on category (topos) theory and provides an approach to infinitesimals alternative to Robinson’s nonstandard analysis.

The latter makes use of nilsquare or nilpotent infinitesimals, that is, numbers x such that \(x^{2} = 0\) holds but x = 0 is not necessarily true. This allows for rigorous algebraic proofs using infinitesimals, like the one given above which so irritated Bishop Berkeley.
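The nilsquare computation can even be carried out mechanically with dual numbers, a minimal sketch (our illustration, not a construction taken from the synthetic differential geometry literature) in which a formal ε with ε² = 0 is adjoined to the reals:

```python
class Dual:
    """A dual number a + b*eps, where eps is a nilsquare infinitesimal:
    eps**2 = 0 by definition, although eps itself is not 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps + bd*eps**2
        #                        = ac + (ad + bc)*eps,  since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

eps = Dual(0.0, 1.0)
f = lambda x: x * x                  # f(x) = x**2

y = f(Dual(3.0) + eps)               # f(x + eps) = x**2 + 2x*eps, exactly
print(y.a, y.b)                      # 9.0 6.0 -> f(3) = 9, f'(3) = 6
```

The dx²-term vanishes identically here, so no step of the derivation requires discarding a non-zero quantity, which was precisely Berkeley’s complaint about the original argument.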

Aside from infinitesimals, there are several other examples of generalized mathematical structures that lose “classical” features. Non-Euclidean geometry is one of them. Concerning numbers and their operations, the following are obvious examples (assuming a “classical” perspective in each case):

  • Integer numbers are not numbers from the point of view of natural numbers: there cannot be any x such that \(x + 2 = 1\)!

  • Rational numbers are not numbers from the point of view of integer arithmetic: there cannot be any x such that \(2x = 1\)!

  • Real numbers are not numbers from the point of view of rational numbers: there cannot be any x such that \(x^{2} = 2\)!

  • Complex numbers are not numbers from the point of view of real numbers: there cannot be any x such that \(x^{2} = -1\)!

There are certain subtle similarities between Berkeley’s criticism of infinitesimals and some criticisms of paraconsistent negations found in the literature. For instance, Slater’s well-known criticism in Slater (1995) is based on the contention that negations of paraconsistent logics are not proper negation operators given that they are not a ‘contradictory-forming functor’, but just a ‘subcontrary-forming one’.

But arguing along these lines requires strong presuppositions about what a negation should be. As, supposedly, natural language is our basic source of inspiration for understanding negation, it is advisable to pay close attention to it before sermonizing. Is there really only one negation, and does it necessarily have the role of suppressing whatever comes after it? R. Giora argues against the view that negation is unique in natural language, and against the purported functional asymmetry of affirmation and negation—a view that supports the “suppression hypothesis” which assumes that negation necessarily suppresses what is inside its scope:

Indeed, many discourse functions assumed to uniquely distinguish negatives from affirmatives, such as denying, rejecting, disagreeing, repairing (both linguistically and metalinguistically), eliminating from memory, communicating the opposite, attenuating or reducing the accessibility of concepts and replacing them with alternative opposites, are equally enabled by affirmatives. Similarly, discourse roles assumed to uniquely distinguish affirmatives from negatives, such as representing events, conveying agreement, confirmation, or affective support, highlighting and intensifying information, introducing new topics, conveying an unmarked interpretation, establishing comparisons, effecting discourse coherence and discourse resonance, are equally enabled by negatives. Such evidence attesting to some functional affinity between negative and affirmative interpretations can only be explained by processing mechanisms that do not operate obligatorily but are instead sensitive to global discourse considerations. (Giora 2006, pp. 1009–1010)

By analogy with the case of infinitesimals, paraconsistent negations can be seen as extending the classical one by generalizing some of its features. And this makes sense in terms of the above landscape of many negations: the more properties a negation operator has, the more restricted and specific the operator is. Thus, paraconsistent negations are neither logically “wrong” nor “impossible”, but they are part of an enhanced and more general logical scenario, in the same way that infinitesimals are legitimate numbers in a wider sense.

It will be worthwhile to examine for a few moments the model theory of propositional quantified logic, and again to compare this model theory with the underlying model theory of the algebraic extension of fields (from the real to the complex numbers).

Recall that if \(\mathfrak{A}\) and \(\mathfrak{B}\) are two first-order structures for the same language, such that \(\mathfrak{A}\) is a substructure of \(\mathfrak{B}\), and φ(x) is a formula of that language with just x as a free variable, in which only the logical operators ∃, ∧ and ∨ occur, then

$$\mathfrak{A}\models \exists x\varphi \ \ \textrm{ implies}\ \ \mathfrak{B}\models \exists x\varphi .$$

In particular, taking the language of fields consisting of the symbols +, ⋅, 0, 1, and \(\mathfrak{A}\) and \(\mathfrak{B}\) as being the structures of real numbers and complex numbers over that language, respectively, then

$$\mathbb{R}\models \exists x\varphi \ \ \textrm{ implies}\ \ \mathbb{C}\models \exists x\varphi $$

for any φ(x) as defined above. As is well-known, the converse implication is not valid because \(\mathbb{C}\) is an algebraic extension of \(\mathbb{R}\), and so it satisfies an existential sentence that the latter does not satisfy. For instance,

$$\mathbb{C}\models \exists x(x.x + 1 = 0)\ \ \textrm{ but}\ \ \mathbb{R}\nvDash \exists x(x.x + 1 = 0)$$

This means that \(\mathbb{R}\) is not an elementary substructure of \(\mathbb{C}\).
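Both halves of this example can be checked mechanically; in the tiny sketch below (an illustration only), the complex witness x = i is written as Python’s 1j:

```python
# The existential sentence  ∃x (x·x + 1 = 0)  holds in C but not in R.

# In C the witness is x = i (Python's 1j):
assert 1j * 1j + 1 == 0

# In R there is no witness: x**2 + 1 >= 1 > 0 for every real x,
# checked here on a grid of sample points.
samples = [k / 100 for k in range(-500, 501)]
assert all(x * x + 1 >= 1 for x in samples)

print("C satisfies the sentence; no real sample comes close")
```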

Consider now PCL and C\(_1\), propositional classical logic and da Costa’s paraconsistent logic over the signature ∧, ∨, →, ¬, respectively. Consider a fixed set \(\mathcal{V}\) of propositional letters, and let \(\mathit{For}\) be the algebra of formulas generated over that signature from \(\mathcal{V}\). Let \(V_{\mathit{PCL}}\) and \(V_{C_{1}}\) be the sets of bivaluations characterizing PCL and C\(_1\), respectively. Then \(\mathit{PCL} =\langle \mathit{For},V_{\mathit{PCL}}\rangle\) can be conceived as a semantic structure interpreting quantified propositional logic in an obvious way, representing PCL. Similarly, \({C}_{1} =\langle \mathit{For},V_{C_{1}}\rangle\) can be seen as a structure for that language representing C\(_1\). Since \(V_{\mathit{PCL}} \subseteq V_{C_{1}}\), then

$${\it { PCL}}\models \exists {p}_{1}\ldots \exists {p}_{n}\,\varphi \ \ \textrm{ implies}\ \ {C}_{1}\models \exists {p}_{1}\ldots \exists {p}_{n}\,\varphi $$

for any \(\varphi ({p}_{1},\ldots ,{p}_{n})\) depending on the propositional letters \({p}_{1},\ldots ,{p}_{n}\) and containing no quantifiers. In a sense, PCL is a “substructure” of C\(_1\). But the converse implication does not hold, and so PCL is not an “elementary substructure” of C\(_1\). In fact,

$${C}_{1}\models \exists p(p \wedge \neg p)\ \ \textrm{ but}\ \ {\it { PCL}}\nvDash \exists p(p \wedge \neg p).$$

By analogy with the extension \(\mathbb{C}\) of \(\mathbb{R}\), which adds new objects outside the classical (real) scope satisfying unexpected (or “absurd”) properties, the paraconsistent model-theoretic extension of classical logic by C\(_1\) adds new valuations v (or new formulas p) satisfying “exotic” or “absurd” properties such as \(v(p) = v(\neg p) = 1\). Recalling that the inconsistency operator ∙ of LFIs allows or guarantees such “exotic” properties (see next section), then, in some sense, the inconsistency operator ∙ or, more precisely, inconsistent formulas of the form ∙φ, play a very similar role to that of non-real complex numbers, that is, complex numbers \(a + bi\) with \(b\neq 0\).

C. Dutilh-Novaes convincingly argues that “there is no real negation” and that paraconsistent negation is as “real” as any other (Dutilh-Novaes 2008, p. 470). So how much is one really saying with “real negation”? The situation is analogous to the joke about naive tourists who keep asking the price of this and that in “real money”: the problem only arises if you insist that your money, or your negation, is unique and is the real one.

3 Can One Sustain a Consistent Contradiction?

But more pointed criticisms of paraconsistency explicitly concern the LFIs of Carnielli and Marcos (2002) and Carnielli et al. (2007). The main feature of the approach in these works is that sets ○(p) of formulas (depending only on p) are used to convey the idea that ○(α) expresses the fact (or the information, or even the hypothesis) that “α is consistent”. Thus Γ ⊢ α and Γ ⊢ ¬α do not ensure that Γ is trivial. Instead, Γ is logically explosive iff

$$\Gamma \vdash \alpha \ \ \textrm{ and}\ \ \Gamma \vdash \neg \alpha \ \ \textrm{ and}\ \ \Gamma \vdash \bigcirc (\alpha ).$$

In most LFIs the set ○(p) is a singleton, defining a consistency connective (primitive or not) denoted by ∘. Thus ∘α means “α is consistent”, and the usual “Classical Explosion Principle”:

$$\textrm{ (exp)}\;\;\alpha \Rightarrow (\neg \alpha \Rightarrow \beta )$$

is replaced by a weaker version, the “Gentle Explosion Principle”:

$$\quad \quad \quad \textrm{ (bc)}\;\;\circ \alpha \Rightarrow (\alpha \Rightarrow (\neg \alpha \Rightarrow \beta )).$$

Therefore, a contradiction (involving α) plus the information that α is consistent produces a trivial set. The inconsistency of a sentence α can be expressed by a sentence of the form ∙ α, where ∙ is an inconsistency operator. In most LFIs, both operators are related as follows: \(\bullet \alpha \equiv \neg \circ \alpha \) and \(\circ \alpha \equiv \neg \bullet \alpha \). Let us go a bit further in order to appreciate the importance of this approach to logic and reasoning.
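The contrast between (exp) and (bc) can be verified by brute force in a simple three-valued semantics in the style of LFI1, with a third value b (“both”) that is designated, a negation that fixes b, and ∘ holding exactly of the two classical values. The particular truth tables below are an illustrative assumption, one semantics among many for LFIs, not the official semantics of the systems discussed:

```python
# Three truth values: 't' (true), 'b' (both true and false), 'f' (false).
# Designated (accepted as true): 't' and 'b'.
DESIGNATED = {'t', 'b'}
NEG  = {'t': 'f', 'b': 'b', 'f': 't'}   # paraconsistent negation fixes 'b'
CONS = {'t': 't', 'b': 'f', 'f': 't'}   # consistency fails exactly at 'b'

def entails(premises, conclusion):
    """True iff every valuation of the atoms p, q that designates
    all premises also designates the conclusion."""
    for vp in 'tbf':
        for vq in 'tbf':
            v = {'p': vp, 'q': vq}
            if all(prem(v) in DESIGNATED for prem in premises):
                if conclusion(v) not in DESIGNATED:
                    return False
    return True

p      = lambda v: v['p']
q      = lambda v: v['q']
not_p  = lambda v: NEG[v['p']]
cons_p = lambda v: CONS[v['p']]

print(entails([p, not_p], q))           # False: a bare contradiction does not explode
print(entails([cons_p, p, not_p], q))   # True: a *consistent* contradiction does
```

The second entailment holds vacuously: no valuation designates ∘α, α and ¬α simultaneously, which is exactly the sense in which a “consistent contradiction” explodes.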

A well-known (by now) twelfth-century example of a derivation by Petrus Abelardus in his Dialectica is recalled by W. Kneale and M. Kneale (Kneale and Kneale 1985, p. 217): the conclusion si Socrates est lapis, est asinus (“if Socrates is a stone, he is an ass”) as a consequence of the validity of the Disjunctive Syllogism α ∨ β, ¬α ⊢ β. Indeed, from the hypothesis Socrates est lapis one derives Socrates est lapis or Socrates est asinus. But surely Socrates non est lapis, ergo Socrates est asinus.

W. Kneale and M. Kneale recognize this “very interesting contention” of Abelard as the beginning of the long Medieval debate on paradoxes of implication:

On the other hand, he thinks that we have departed from the highest standard of rigour as soon as we put forward a consequentia which involves the assumption of two distinct substances ⋯ For he says that in such a case the sense of the consequent is not contained in the sense of the antecedent and that the truth of the whole can be established only by special knowledge of nature (‘posterius ex naturae discretione et proprietatis naturae cognitione’).

They recognize that “it is difficult to find any satisfactory interpretation for this passage”. But the reason, as Abelard himself explains (cf. Petrus 1970, p. 284), is that the nature of man and stone are incomparable:

quod nouimus natura ita hominem et lapidem esse disparata (“for we know that by nature man and stone are thus disparate”).

Now, it is an immediate theorem of LFIs that the Disjunctive Syllogism is not to be held unrestrictedly, but only for situations in which additional assumptions (or proofs) of certainty or consistency are present, i.e.:

$$\alpha \vee \beta ,\neg \alpha \nvdash \beta $$

Here, the Disjunctive Syllogism does not hold in general, but does hold in a controlled form:

$$\circ \alpha ,\alpha \vee \beta ,\neg \alpha \vdash \beta $$

is a valid rule, where ∘ is the consistency connective (in the LFIs).

This property can be easily proved in almost all systems developed in Carnielli et al. (2007), and in fact, aside from its conceptual interest, it is essential for the development of effective and natural “paraconsistent logic programming”, because it is precisely the appropriate generalization of the famous resolution rule.
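Under the same kind of three-valued reading (again an illustrative assumption rather than the official semantics of these systems), the failure of the unrestricted Disjunctive Syllogism and the validity of its controlled form can both be checked by brute force:

```python
# Disjunctive Syllogism in a three-valued LFI-style semantics (one
# illustrative choice of tables): 't' true, 'b' both, 'f' false.
DESIGNATED = {'t', 'b'}
NEG  = {'t': 'f', 'b': 'b', 'f': 't'}   # paraconsistent negation fixes 'b'
CONS = {'t': 't', 'b': 'f', 'f': 't'}   # consistency fails exactly at 'b'
RANK = {'f': 0, 'b': 1, 't': 2}

def disj(x, y):
    """Disjunction as the maximum in the order f < b < t."""
    return x if RANK[x] >= RANK[y] else y

def entails(premises, conclusion):
    vals = [{'p': vp, 'q': vq} for vp in 'tbf' for vq in 'tbf']
    return all(conclusion(v) in DESIGNATED for v in vals
               if all(prem(v) in DESIGNATED for prem in premises))

p_or_q = lambda v: disj(v['p'], v['q'])
not_p  = lambda v: NEG[v['p']]
cons_p = lambda v: CONS[v['p']]
q      = lambda v: v['q']

print(entails([p_or_q, not_p], q))           # False: unrestricted DS fails
print(entails([cons_p, p_or_q, not_p], q))   # True: the controlled form holds
```

The counterexample to the unrestricted rule is the valuation assigning b to p and f to q; adding ∘p rules it out, and the conclusion then follows in every remaining case.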

As regards Abelard’s allegation that the truth involved in a piece of reasoning such as the one proving that Socrates est asinus can only be established by special knowledge of nature, our additional hypothesis ∘α in the controlled form of the Disjunctive Syllogism above is perfectly able to express the proviso that the natures of man and stone must not be disparate in order to perform a reasoning of this sort. Indeed, just take ∘α to be true, in this case, if you are prepared to defend some reasoned connection between the nature of Socrates and the nature of a stone. If so, you may derive Socrates est asinus; if not, you know where your mistake is. Thus the LFIs, basic and simple as they are, not only are coherent with Abelard’s advice but may also help to express this restriction.

Let us examine the criticisms of this approach. There seem to be, to start with, several cases of misunderstanding, misapprehension or pure disregard of what paraconsistent logic is, or rather what paraconsistent logics are: for instance, R. Sorensen manifests his firm (and erroneous) belief that paraconsistent logics (in the plural!) must reject weakening (the inference rule from p to p ∨ q):

Paraconsistent logics are designed to safely confine the explosion. For instance, they reject the inference rule ‘p, therefore, p or q’ on the grounds that a valid argument must have premises that are relevant to the conclusion. They extend this relevance requirement to conditionals in an effort to head off the paradoxes of implication. (Sorensen 2003, p. 114)

He blames dialetheistic logics for this inability, and in a hasty generalization continues on the same page:

Dialetheists portray themselves as friends of contradiction. They remind me of ranchers who present themselves as friends of the horses they castrate. A gelding is not just a tamer sort of stallion; it is not a stallion at all. The dialetheist’s ‘contradiction’ may look like contradictions and sound like contradictions, but they cannot perform a role essential to being a contradiction; they cannot serve as the decisive endpoint of a reductio ad absurdum. At best they can be the q in a modus tollens argument: If p then q; not q, therefore, not p. So in the end, I think Priest falls into Antisthenes’ skepticism about contradictions. (Sorensen 2003, p. 114)

The quotation refers to Antisthenes of Athens (445–360 B.C.), a student of Socrates previously trained under the Sophists. Antisthenes thought there are no contradictions, and Sorensen puts Antisthenes and the dialetheists (and, by his hasty generalization, all paraconsistentists) in the same class: their contradictions are not contradictions as such. We cannot answer for dialetheists, but perhaps Sorensen would be happy to learn that LFIs neither reject the weakening rule nor pose as friends of geldings or stallions: they merely separate geldings from stallions, as we argued above. Consistent contradictions do indeed serve as the decisive endpoints of reductio ad absurdum reasoning: it is proved in Carnielli et al. (2007) that the following reductio rule holds in most LFIs:

$$\textrm{ If}\;\Gamma \vdash \circ \alpha ,\;\;\;\Delta ,\beta \vdash \alpha \textrm{ and}\;\;\Lambda ,\beta \vdash \neg \alpha \;\;\textrm{ then}\;\;\Gamma ,\Delta ,\Lambda \vdash \neg \beta $$

This illustrates an instance of a more general phenomenon: any classical rule can be recovered within a class of LFIs known as C-systems, provided a sufficient number of ‘consistency assumptions’ are added (see the “Derivability Adjustment Theorem” in Carnielli et al. 2007, Remark 21).

Some authors consider, however, that this perspective has inherent problems. Specifically, the following passage on p. 27 of Carnielli and Marcos (2002) has been criticized:

So one may conjecture that consistency is exactly what a contradiction might be lacking to become explosive—if it was not explosive from the start. Roughly speaking, we are going to suppose that a ‘consistent contradiction’ is likely to explode, even if a ‘regular’ contradiction is not.

One of the main criticisms comes from F. Berto, who asks:

How could we have α, ¬α and keep claiming that α is consistent? (Berto 2007, p. 162)

In fact you cannot, and that is precisely what is meant! In the realm of classical logic, the corresponding objection is: How could we have α and ¬α in our theory? Of course, a classical logician simply cannot, and this is exactly what the “Classical Explosion Principle” says:

$$\alpha ,\neg \alpha \vdash \beta $$

for any β; that is, if you have a contradiction, you are in trouble.

This principle, as explained above, is generalized in the LFIs by the “Gentle Explosion Principle”:

$$\alpha ,\neg \alpha ,\circ \alpha \vdash \beta $$

for any β; that is, if you have a consistent contradiction, then you have trouble!

The situation is rather similar to chess: no piece can capture the King, but the King can be under the threat of being captured, that is, in check. In this contradictory situation (the King cannot be captured and is being threatened with capture) the King has to be protected, and there may be several moves to protect him. However, if there is no move which can put the King out of check (that is, if the threat is fatal), this is checkmate and the game is over. This point seems to have been overlooked by Berto, however, who abandons the game:

These difficulties seem to speak against the philosophical import of the Brazilian approach to paraconsistency (which is why it has been dealt quickly in this Chapter). (Berto 2007, p. 162)

It is perhaps M. Bremer, however, who persuades Berto to resign so quickly:

introducing ‘consistent contradictions’ […] awaits epistemic elucidation: If we have A and ¬A, then we should take ∘ A as false, shouldn’t we? And how can we take A to be consistent and have A and ¬A at the same time? (Bremer 2005, p. 117)

Indeed, Bremer had previously stated his intention to abandon what he called “da Costa systems” from a certain point on, due to what seemed to him insurmountable philosophical difficulties:

Da Costa-Systeme werden deshalb hier nicht weiter betrachtet (“Da Costa systems are therefore not considered further here”). (Bremer 1998, p. 53)

Nevertheless, as observed above, this criticism can be similarly posed to classical logicians as well: “If we have A, then we should take ¬A as false, shouldn’t we? And how can we take A, and have A and ¬A at the same time?”

Obviously you cannot, and that is precisely what the “Classical Explosion Principle” says!

Bremer also claims:

Die da Costa-Negation ist überhaupt nicht rekursiv! (“The da Costa negation is not recursive at all!”) (Bremer 1998, p. 50)

The reason, he says, is that the truth-value of ¬A cannot be computed from the truth-value of A. But this is just non-functionality, and if this were a valid philosophical criticism, it could be posed to several other logics, including intuitionistic logic and almost all modal logics. It is hard to see the point behind his criticism. The fact is that da Costa (and LFI) negations are bounded non-deterministic functions, and there is no purpose in disqualifying them for such reasons. It happens that all LFIs (including the da Costa calculi in the hierarchy \(C_{n}\)) are decidable (and thus recursive). This is accomplished by the original valuation semantics in the case of \(C_{n}\), and by the possible-translations semantics (cf. Carnielli et al. 2007) in the general case of LFIs.

This is amply confirmed by the non-deterministic semantics of A. Avron and his collaborators (cf. e.g. Avron and Lev 2005). But specifically for the da Costa calculi \(C_{n}\), decidability results by means of the procedure of quasi-matrices have been known for three decades (cf. Alves 1976 and also da Costa and Alves 1977, Sect. 2.2.2, with some errors corrected in Marcos 1999). Disqualifying non-determinism in logical matters is thus not so easy. Non-deterministic Turing machines are equivalent in computational power to deterministic Turing machines: complexity issues aside, they compute the same functions. Even modal logics of non-deterministic partial recursive functions, which are extensions of classical propositional logic, are studied in Naumov (2005), and moreover proved to be decidable. Confusions of this sort just add to the difficulty of appraising the real ideas behind LFIs. There is no better epistemic elucidation of something, and no better way of assessing its “philosophical import”, than examining it from a fair perspective.

4 From Contradictoriness to Buddhism, and Back

It was apparently not a paraconsistent logician who first noted that some contradictions in Buddhist reasoning would possibly have led Nāgārjuna, a Buddhist thinker of the second century, to endorse paraconsistent logic; Garfield and Priest (2002, 2003) credit this observation (with which they also concur) to Tillemans (1999).

Garfield and Priest take a decisive step, however, by studying the possibility that these contradictions may be seen as structurally analogous to those arising in the Western tradition. By a penetrating analysis of catuskoti or tetralemma, the four-cornered negations of Nāgārjuna, they aptly conclude that Nāgārjuna is not

⋯ an irrationalist, a simple mystic, or crazy; on the contrary: he is prepared to go exactly where reason takes him: to the transconsistent.

There is no doubt that Nāgārjuna’s reasoning involves contradictions, and J. Garfield and G. Priest claim to share what they call the “dialetheist’s comfort” (that of admitting the possibility of true contradictions) when they state that Nāgārjuna is “indeed a highly rational thinker”. A similar argument that contradictions in Buddhism are essentially dialetheist is found in Deguchi et al. (2008).

The reasoning involved in a tetralemma is shown in the following example. An interlocutor poses four questions to the Buddha about the rebirth of an arhat (an enlightened person):

  • Is the arhat reborn?

  • Is the arhat not reborn?

  • Is the arhat both reborn and not reborn?

  • Is the arhat neither reborn nor not reborn?

The Buddha replies to each question saying that one cannot say that this is so.

Now, looking at the first three questions, we might conclude that the Buddha would be forced to accept true contradictions. But is that really the case? According to M. Siderits:

[…] the third possibility involves equivocation on ‘existent’: that the arhat does exist when ‘existent’ is taken in one sense, but does not exist when it is taken in some other sense. For when the Buddha rejects both of the first two lemmas, this generates an apparent contradiction. And one way of seeking to resolve this contradiction is to suppose that there is equivocation at work. (Siderits 2008)

Siderits then concludes:

To consider this possibility is not to envision that there might be true contradictions. It is a way of trying to avoid attributing to the speaker the view that a contradiction holds. (Siderits 2008)

Our point is this: if indeed contradictions in Buddhist reasoning are structurally analogous to those we refer to in the Western tradition, and can be put under the scope of a paraconsistent formalism, the task would be to specify which kind of paraconsistent logic could best reflect the logic of Nāgārjuna’s reasoning.

A conducive strategy would be to draw some inspiration from how Buddhist thinkers see negation. Once more, in accord with what has been defended for natural language, we find that negation in this tradition is not restricted to a single interpretation. B. Galloway argues in Galloway (1989) that the prasajya negation of the Madhyamaka school of Buddhist philosophy (the school founded by Nāgārjuna) is not the same as that of the other schools. Would it appease the Madhyamikas to conceive of them as endorsing dialetheism and swallowing contradictions as real? This does not seem to be so; again quoting Siderits:

Madhyamikas say that only mad people accept contradictions. (Siderits 2008, p. 132)

This seems to exclude the possibility that members of Nāgārjuna’s school would embrace dialetheism, the view that there are dialetheias (i.e., sentences that are both true and false). This difficulty is in line with another, pointed out by P. Gottlieb (see also Carnielli and Coniglio 2008):

While Aristotle is clearly not a dialetheist, it is not clear where he stands on the issue of paraconsistency. Although Aristotle does argue that if his opponent rejects PNC across the board, she is committed to a world in which anything goes, he never argues that if (per impossibile) his opponent is committed to one contradiction, she is committed to anything, and he even considers that the opponent’s view might apply to some statements but not to others (Metaph IV 4 1008a10-12). (Gottlieb 2007)

The Shōbōgenzō, a masterpiece that records the teachings of the thirteenth-century Japanese Soto Zen Master Eihei Dogen, contains several Zen Koan stories that are sometimes disconcertingly contradictory. The text is full of contradictions at several levels: contradictions between chapters, between paragraphs, between sentences, and even within a single sentence (see Nearman 2007 for a careful translation from Japanese). For instance, the discourse by Master Dogen “On Buddha Nature” (Nearman 2007, p. 244) contains two ostensibly contradictory statements, namely, that all sentient beings have a Buddha Nature and that all sentient beings lack a Buddha Nature.

Such a contradiction, however, is not anything like a dialetheia. Buddha Nature is not the existence of something: on the one hand, Buddha Nature is encountered everywhere; on the other hand, sentient beings do not readily find an easy or pleasant way to encounter Buddha Nature (see Nearman 2007, Chap. 21, for a long discussion).

Gudo Wafu Nishijima, a Japanese Zen Buddhist priest and teacher, in trying to explain why this intricate weave of contradictions makes the Shōbōgenzō so difficult to understand, makes the following point clear in Nishijima (1992):

At this point I want to make a very fundamental point about the nature of contradiction itself. We feel in the intellectual area that something called contradiction exists; that something can be illogical. But in reality, there is no such thing as a contradiction. It is just a characteristic of the real state of things. It is only with our intellect that we can detect the existence of something called contradiction.

It thus seems that both the Buddhist and Aristotelian traditions see perfect coherence in the distinction between reasoning in the presence of contradictions and accepting them; the former position expands the latter. Any appeal to the principle of rational accommodation would then support the following conclusion: if by ‘accepting a contradiction’ we mean ‘considering it as consistent’, then both traditions would agree with our vision of paraconsistentism. Infidelity, but only at a bare minimum.

5 Acknowledgements

Both authors acknowledge support from FAPESP—Fundação de Amparo à Pesquisa do Estado de São Paulo, Brazil, Thematic Research Project grant LogCons 2010/51038-0, and from CNPq Brazil Research Grants. The first author has been additionally supported by the Fonds National de la Recherche, Luxembourg.