Keywords

1 Introduction

Henkin’s argument for the completeness theorem consists of the proof that a consistent set of sentences in first-order logic is satisfiable, from which, by a classical reductio ad absurdum, one concludes that if F is a semantic consequence of a set of sentences Γ, then there must be a derivation in classical first-order logic of F from Γ. Given the nonconstructive character of Henkin’s proof, it is perhaps somewhat surprising that it offers a way to obtain results concerning the relationship between intuitionistic logic and classical logic, such as Glivenko’s theorem for propositional logic, and a characterization of the weakest extension of first-order intuitionistic logic for which that theorem holds with respect to the full first-order language. It seems also remarkable that the proof under consideration is the one that Henkin pointed out to Smullyan and can be found in Smullyan’s celebrated book First Order Logic [6, p. 96], with the comment that “It is hard to imagine a more direct completeness proof!” The crucial observation is that the proof directly entails the following facts: (1) for a set of propositional formulae to be satisfiable, its consistency with respect to intuitionistic logic suffices; (2) for a set of first-order sentences to be satisfiable, its consistency suffices with respect to the extension of intuitionistic logic obtained by allowing the inference of ¬¬∀xF from ∀x¬¬F; (3) the latter strengthening is not required if only ∀-free sentences are involved; (4) if, furthermore, they are also →-free, even the ex-falso rule can be dispensed with. The latter two observations directly yield refined forms of the completeness theorem.

2 Henkin’s Proof and Glivenko’s Theorem

In the following, we will adopt the natural deduction system with both ¬ and ⊥ taken as primitive and denote by N the system that has only the introduction and elimination rules for each logical constant, namely the system for intuitionistic logic, and by NC the system for classical logic, obtained by adding to N the classical reductio ad absurdum rule ⊥_c. For Γ a set of formulae, ℒ(Γ) denotes the set of all formulae built over the symbols that occur in Γ, including ⊥ and ¬, even if they do not occur in Γ. Given a set of formulae Σ and a formula F, Σ ▷ F denotes the derivability relation of N; more precisely, Σ ▷ F means that there is a deduction 𝒟 in N such that all the formulae that occur in 𝒟 belong to ℒ(Γ), the active assumptions of 𝒟 belong to Σ, and the conclusion of 𝒟 is F. Similarly, Σ ▷_c F denotes the derivability relation of NC. N_p and NC_p denote the systems that are obtained from N and NC by excluding the rules concerning quantifiers, with corresponding derivability relations Σ ▷_p F and Σ ▷_cp F. A purely propositional formula is a formula that contains only propositional parameters, ⊥, and propositional connectives. Since, assuming that every set can be well ordered, it requires no additional effort, we formulate Henkin’s argument with respect to an arbitrary set of formulae, rather than to countable ones as in [6].

Proposition 1

If Γ is a set of purely propositional formulae, and \(\varGamma\not \rhd_{p} \bot\), then Γ is satisfiable in classical propositional logic.

Proof

Let < be a well-ordering of ℒ(Γ) whose order type is a cardinal. By <-recursion, for every F∈ℒ(Γ), we define a set of formulae Γ F of ℒ(Γ) as follows:

$$\varGamma_F = \begin{cases} \varGamma\cup\bigcup_{G < F} \varGamma_G \cup\{ F \}&\mbox{if}\ \varGamma\cup \bigcup_{G < F} \varGamma_G \cup\{ F \} \not\rhd_p \bot\\ \varGamma\cup\bigcup_{G < F} \varGamma_G & \mbox{otherwise} \end{cases} $$

Clearly, Γ_F is defined for every F in ℒ(Γ), and if G < F, then Γ_G ⊆ Γ_F. Since the property \(\varSigma\not\rhd_{p} \bot\) is preserved under unions of increasing (under inclusion) chains of sets, by <-induction it follows that for every F∈ℒ(Γ), \(\varGamma_{F} \not \rhd_{p} \bot\). Hence, letting \(\bar{\varGamma} = \bigcup_{F \in \mathcal{L}(\varGamma)} \varGamma_{F}\), for the same reason, we have that \({\bar{\varGamma}}\) is consistent with respect to ▷_p, namely:

  1. (i)

    \(\bar{\varGamma} \not\rhd_{p} \bot\)

Since \(\varSigma\not\rhd_{p} \bot\) is preserved under subsets, \({\bar{\varGamma}}\) is maximal with respect to the property \(\varSigma \not \rhd_{p} \bot\), namely, letting \({\bar{\varGamma}}, F\) denote \({\bar{\varGamma}}\cup\{F\}\), we have:

  1. (ii)

    if \(\bar{\varGamma}, F\not\rhd_{p} \bot\), then \(F\in{\bar{\varGamma}}\)

For, if \({\bar{\varGamma}}, F\not\rhd_{p} \bot\), then, by the definition of Γ_F, we have that F ∈ Γ_F and hence that \(F\in {\bar{\varGamma}}\) by the definition of \({\bar{\varGamma}}\). As a consequence, we also have:

  1. (iii)

    if \({\bar{\varGamma}} \rhd_{p} H\), then \(H \in{\bar{\varGamma}}\)

In fact, from \({\bar{\varGamma}} \rhd_{p} H\) it follows that \({\bar{\varGamma}}, H \not\rhd_{p} \bot\); otherwise, by the cut rule, namely the possibility of composing deductions, we would get \({\bar{\varGamma}} \rhd_{p}\bot\), against (i). Hence, by (ii), \(H\in{\bar{\varGamma}}\). Notice that so far only the structural properties of weakening and cut of ▷_p, satisfied by classical as well as by minimal propositional logic, have been used. The logical rules are needed to verify the following:

  1. (1)

    \(\neg G \in{\bar{\varGamma}}\) if and only if \(G\notin{\bar{\varGamma}}\)

  2. (2)

    \(G\wedge\ H \in{\bar{\varGamma}}\) if and only if \(G\in {\bar{\varGamma}}\) and \(H\in {\bar{\varGamma}}\)

  3. (3)

    \(G\vee H\in{\bar{\varGamma}}\) if and only if \(G\in{\bar{\varGamma}}\) or \(H\in{\bar{\varGamma}}\)

  4. (4)

    \(G\rightarrow H\in{\bar{\varGamma}}\) if and only if \(G\notin {\bar{\varGamma}}\) or \(H\in{\bar{\varGamma}}\)

More precisely: (1) is a consequence of (i) and the ¬-elimination rule in the only if direction and of (ii), the ¬-introduction rule, and (iii) in the other. (2) is a consequence of the ∧-elimination rules and (iii) in the only if direction and of the ∧-introduction rules and (iii) in the other. As for (3), assuming that \(G\vee H\in{\bar{\varGamma}} \), we cannot have both \(G\notin{\bar{\varGamma}} \) and \(H \notin{\bar{\varGamma}} \) since, otherwise, by (1), \(\neg G \in{\bar{\varGamma}} \) and \(\neg H\in {\bar{\varGamma}} \), and by the ¬ and ∨-elimination rules we would have \({\bar{\varGamma}} \rhd_{p} \bot\) against (i). Conversely, if \(G\in {\bar{\varGamma}} \) or \(H\in{\bar{\varGamma}} \), then, by the appropriate ∨-introduction rule and (iii), we have that \(G\vee H \in{\bar{\varGamma}} \). As for (4), if \(G\rightarrow H \in{\bar{\varGamma}} \) and \(G \in{\bar{\varGamma}}\), then by the →-elimination rule and (iii) it follows that \(H\in{\bar{\varGamma}} \), so that either \(G\notin {\bar{\varGamma}} \) or \(H\in{\bar{\varGamma}} \). Conversely, if \(G\notin {\bar{\varGamma}} \), then by (1), \(\neg G \in{\bar{\varGamma}} \). Since, by the ¬-elimination, ex-falso, and the →-introduction rules, G→H is deducible from ¬G, by (ii) we have that \(G\rightarrow H \in{\bar{\varGamma}} \). The same conclusion is reached if \(H\in{\bar{\varGamma}} \) since, by the →-introduction rule, G→H follows from H. Thus, in both cases, \(G\rightarrow H \in{\bar{\varGamma}} \). On the grounds of (1)–(4), it is immediate to verify, by induction on the height of formulae, that the assignment v that gives the value t to a propositional parameter P if and only if \(P\in{\bar{\varGamma}}\) assigns the value t to a (purely propositional) formula F if and only if \(F\in{\bar{\varGamma}}\). In particular, v satisfies all the formulae in Γ, showing that Γ is indeed satisfiable. □
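The construction in the proof lends itself to a small executable illustration. The sketch below uses our own names and encoding (not from the paper): a finite enumeration of formulae stands in for the well-ordering <, and classical truth-table satisfiability stands in for the underivability relation \(\varSigma\not\rhd_{p}\bot\), which the paper takes intuitionistically; the stand-in is adequate only as a classical illustration of the greedy Γ_F recursion.

```python
from itertools import product

# Formulae encoded as nested tuples: ('var', 'P'), ('bot',), ('not', F),
# ('and', F, G), ('or', F, G), ('imp', F, G).

def eval_formula(f, v):
    """Classical truth value of f under assignment v (dict: parameter -> bool)."""
    op = f[0]
    if op == 'var':
        return v[f[1]]
    if op == 'bot':
        return False
    if op == 'not':
        return not eval_formula(f[1], v)
    if op == 'and':
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    if op == 'or':
        return eval_formula(f[1], v) or eval_formula(f[2], v)
    if op == 'imp':
        return (not eval_formula(f[1], v)) or eval_formula(f[2], v)

def params(f):
    """Set of propositional parameters occurring in f."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(set(), *(params(g) for g in f[1:] if isinstance(g, tuple)))

def consistent(fs):
    """Brute-force classical satisfiability: our decidable stand-in for
    'fs does not derive bot'."""
    ps = sorted(set().union(set(), *(params(f) for f in fs)))
    return any(all(eval_formula(f, dict(zip(ps, bits))) for f in fs)
               for bits in product([False, True], repeat=len(ps)))

def lindenbaum(gamma, enumeration):
    """Mirror the Gamma_F recursion: walk the enumeration (surrogate for the
    well-ordering <) and keep each F whenever adding it preserves consistency."""
    bar = list(gamma)
    for f in enumeration:
        if consistent(bar + [f]):
            bar.append(f)
    return bar
```

By construction, the returned set extends gamma and is maximal along the given enumeration: any enumerated formula left out would make the set inconsistent, mirroring property (ii) of \({\bar{\varGamma}}\).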

Corollary 1

(Glivenko’s Theorem)

If a purely propositional formula F is deducible in NC_p, then ¬¬F is deducible in N_p.

Proof

If F is deducible in NC_p, then ¬F cannot be satisfied in classical propositional semantics. By the previous proposition, ¬F ▷_p ⊥; hence ▷_p ¬¬F by the ¬-introduction rule. □

From the proof of Proposition 1 it is apparent that the availability in N_p of the ex-falso rule is needed only to establish condition (4) concerning →. Therefore, we also have a proof that if a set of propositional formulae is →-free, namely contains no occurrences of →, and is consistent with respect to minimal propositional logic, then it is satisfiable in classical propositional semantics, so that if a purely propositional formula is →-free and deducible in classical propositional logic, then its double negation is deducible in minimal propositional logic.
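As a concrete instance of the →-free case: the classically valid formula P∨¬P is not derivable in N_p, yet its double negation is derivable using only the ∨-introduction and the ¬-introduction and ¬-elimination rules, hence already in minimal propositional logic. A sketch of the derivation (our layout):

```latex
\dfrac{\dfrac{[\neg(P\vee\neg P)]^{(2)}
       \qquad
       \dfrac{\dfrac{\dfrac{[\neg(P\vee\neg P)]^{(2)}
                            \qquad
                            \dfrac{[P]^{(1)}}{P\vee\neg P}\;{\scriptstyle \vee\text{-I}}}
                           {\bot}\;{\scriptstyle \neg\text{-E}}}
                    {\neg P}\;{\scriptstyle \neg\text{-I}\,(1)}}
             {P\vee\neg P}\;{\scriptstyle \vee\text{-I}}}
      {\bot}\;{\scriptstyle \neg\text{-E}}}
     {\neg\neg(P\vee\neg P)}\;{\scriptstyle \neg\text{-I}\,(2)}
```

Neither ex-falso nor ⊥_c occurs in the derivation, in accordance with the remark above.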

3 Extension to First-Order Logic

In extending the previous proof to the first-order language, we will strictly follow the one suggested by Henkin and reported in [6, p. 96]. Let us recall that a canonical interpretation of ℒ(Γ) is an interpretation whose domain is the set of all closed terms of ℒ(Γ) and in which function and constant symbols are given their canonical interpretation. Thus, a canonical interpretation is completely determined by which atomic sentences of ℒ(Γ) are considered to be true. For C a set of constants, let ℒ(Γ,C) be defined as ℒ(Γ) except that also constants in C are allowed.

Proposition 2

If Γ is a set of first-order sentences and \(\varGamma\not \rhd_{c} \bot\), then Γ is satisfiable in a canonical interpretation of ℒ(Γ,C) for any set C of constants not occurring in Γ of cardinality not less than |ℒ(Γ)|.

Proof

Let C be any set of constants not occurring in Γ of cardinality not less than |ℒ(Γ)|, and let < be a well-ordering of the set of all sentences of ℒ(Γ,C) whose order type is a cardinal. Let Γ_F be defined by <-recursion as follows:

$$\varGamma_F = \begin{cases} \varGamma\cup\bigcup_{G < F} \varGamma_G \cup\{ F \}&\mbox{if}\ \varGamma\cup \bigcup_{G < F} \varGamma_G \cup\{ F \} \not\rhd_c \bot{} \\ &\mbox{and}\ F \mbox{is neither of the form}\ \exists xH\\ &\mbox{nor of the form}\ \neg \forall x H\\ \varGamma\cup\bigcup_{G < F} \varGamma_G \cup\{ F \} \cup\{ H \{ x/c\} \}&\mbox{if}\ \varGamma \cup \bigcup_{G < F} \varGamma_G \cup\{ F \}\not\rhd_c \bot{}\\ & \mbox{and}\ F\ \mbox{is}\ \exists x H\\ \varGamma\cup\bigcup_{G < F} \varGamma_G \cup\{ F \} \cup\{ \neg H \{x/c\} \}&\mbox{if } \varGamma\cup\bigcup_{G < F} \varGamma_G \cup\{ F \}\not\rhd_c \bot{}\\ &\mbox{and}\ F\ \mbox{is}\ \neg\forall x H \\ \varGamma\cup\bigcup_{G < F} \varGamma_G & \mbox{otherwise} \end{cases} $$

where c is one of the constants in C not occurring in Γ∪⋃ G<F Γ G ∪{F}.

Γ_F is defined for every F since in the second and third cases of the definition of Γ_F, which we name C_∃ and C_¬∀, there are always constants in C that do not occur in Γ∪⋃_{G<F} Γ_G, for cardinality reasons. The rest of the proof proceeds as for Proposition 1, except that we have to:

  1. (a)

    check that also in cases C and C ¬∀, the underivability of ⊥ is preserved,

  2. (b)

    complete the proof that the canonical interpretation of ℒ(Γ,C) in which an atomic sentence is considered to be true if and only if it belongs to \({\bar{\varGamma}}\) satisfies a sentence F if and only if \(F\in{\bar{\varGamma}}\), by taking care also of existential and universal sentences.

As for (a), the underivability of ⊥ is not lost in the C_∃ case since the ∃-elimination rule ensures that if c does not occur in Σ, ∃xH, then

\((C'_{\exists})\) :

if Σ, ∃xH, H{x/c} ▷_c ⊥, then Σ, ∃xH ▷_c ⊥.

For, given a deduction 𝒟 of ⊥ from Σ, ∃xH, H{x/c}, it suffices to replace c by a variable y not occurring in 𝒟 to obtain a deduction 𝒟{c/y} of ⊥ from Σ, ∃xH, H{x/y}, from which an application of the ∃-elimination rule yields a deduction of ⊥ from Σ, ∃xH. Dually, the ∀-introduction rule is needed in dealing with the C_¬∀ case, but N is not strong enough. NC certainly suffices since, assuming that c does not occur in Σ, ¬∀xH, the following holds:

\((C'_{\neg\forall})\) :

if Σ, ¬∀xH, ¬H{x/c} ▷_c ⊥, then Σ, ¬∀xH ▷_c ⊥.

In fact, if 𝒟 is a deduction of ⊥ from Σ, ¬∀xH, ¬H{x/c} and y is a variable not occurring in 𝒟, then the following is a deduction of ⊥ from Σ, ¬∀xH:
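In outline (a sketch of such a deduction, our layout, writing 𝒟{c/y} for the result of replacing c by y in 𝒟; the open assumptions of 𝒟{c/y} besides the discharged ¬H{x/y} are among Σ, ¬∀xH, and the ∀-introduction is legitimate since y does not occur in them):

```latex
\dfrac{\neg\forall x H
       \qquad
       \dfrac{\dfrac{\begin{array}{c}
                [\neg H\{x/y\}]^{(1)} \quad \varSigma \quad \neg\forall x H\\
                \mathcal{D}\{c/y\}\\
                \bot
              \end{array}}
              {H\{x/y\}}\;{\scriptstyle \bot_c\,(1)}}
             {\forall x H}\;{\scriptstyle \forall\text{-I}}}
      {\bot}\;{\scriptstyle \neg\text{-E}}
```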

As for (b), it suffices to check that:

(∃′):

\(\exists x H \in{\bar{\varGamma}}\) if and only if for some closed term t in ℒ(Γ,C), \(H \{ x/t \} \in{\bar{\varGamma}}\);

(∀′):

\(\forall x H \in{\bar{\varGamma}}\) if and only if for all closed terms t in ℒ(Γ,C), \(H \{ x/t \} \in{\bar{\varGamma}}\).

By (iii) and (1) in the proof of Proposition 1, the ∃-introduction rule and the ∀-elimination rule ensure the if direction of (∃′) and the only if direction of (∀′), respectively. The only if direction of (∃′) is a direct consequence of C_∃. Finally, for the if direction of (∀′), we note that if \(\forall x H \notin{\bar{\varGamma}}\), then, by (1) in the proof of Proposition 1, \(\neg\forall x H \in{\bar{\varGamma}}\) and hence, by C_¬∀, \(\neg H\{x/c\}\in{\bar{\varGamma}}\) for some constant c, contradicting, by (i) in the proof of Proposition 1, the assumption that for every closed term t, \(H\{x/t\} \in\bar{\varGamma}\). □
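The witnessing clauses C_∃ and C_¬∀ can likewise be illustrated in code. In the sketch below (our own encoding, not the paper's), the consistency oracle is abstracted as a parameter, since derivability is not decidable in general, and a global counter supplies constants that have never been used, playing the role of the fresh c chosen from C:

```python
from itertools import count

# Sentences as tuples: ('atom', name, args), ('bot',), ('not', F),
# ('and', F, G), ('or', F, G), ('imp', F, G), ('exists', 'x', F), ('forall', 'x', F).
# Terms are variable names (str) or constants ('const', i).

def subst(f, x, t):
    """Replace free occurrences of variable x in f by term t; no capture can
    arise here because t is always a fresh constant."""
    op = f[0]
    if op == 'atom':
        return ('atom', f[1], tuple(t if a == x else a for a in f[2]))
    if op == 'not':
        return ('not', subst(f[1], x, t))
    if op in ('and', 'or', 'imp'):
        return (op, subst(f[1], x, t), subst(f[2], x, t))
    if op in ('exists', 'forall'):
        if f[1] == x:          # x is rebound below this quantifier; stop
            return f
        return (op, f[1], subst(f[2], x, t))
    return f                   # ('bot',)

def henkin_extend(gamma, sentences, consistent):
    """Mirror the Gamma_F recursion of Proposition 2: add F when the oracle
    allows it, and in the C_exists / C_not-forall cases also add a witness
    built with a fresh constant."""
    fresh = count()
    bar = list(gamma)
    for f in sentences:        # enumeration order plays the role of <
        if not consistent(bar + [f]):
            continue
        bar.append(f)
        if f[0] == 'exists':
            bar.append(subst(f[2], f[1], ('const', next(fresh))))
        elif f[0] == 'not' and f[1][0] == 'forall':
            g = f[1]
            bar.append(('not', subst(g[2], g[1], ('const', next(fresh)))))
    return bar
```

As in the proof, the witnesses are added together with F itself, and the properties \((C'_{\exists})\) and \((C'_{\neg\forall})\) are what guarantees, for a genuine derivability oracle, that doing so preserves the underivability of ⊥.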

Since ⊥_c is needed only to ensure that \((C'_{\neg\forall})\) holds, taking into account the remark concerning minimal propositional logic at the end of the previous section, the following corollary is immediate.

Corollary 2

  1. (a)

    If Γ is a set of ∀-free sentences and \(\varGamma\not\rhd\bot\) in ℒ(Γ), then Γ is satisfiable in a canonical interpretation of ℒ(Γ,C) for any set C of constants not occurring in Γ of cardinality not less than |ℒ(Γ)|.

  2. (b)

If a ∀-free sentence is deducible in classical first-order logic, then its double negation is deducible in intuitionistic first-order logic.

  3. (c)

    If Γ is a set of {∀,→}-free sentences, consistent with respect to minimal logic, then Γ is satisfiable in a canonical interpretation of ℒ(Γ,C) for any set C of constants not occurring in Γ of cardinality not less than |ℒ(Γ)|.

  4. (d)

    If a {∀,→}-free sentence is deducible in classical first-order logic, then its double negation is deducible in first-order minimal logic.

While ⊥_c suffices to establish \(C'_{\neg\forall}\), it is more than is needed for that purpose. In fact, to establish \(C'_{\neg\forall}\), we could use the following deduction:
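In outline (a sketch, our layout, with 𝒟{c/y} and the side assumptions Σ, ¬∀xH as before; the ⊥_c step of the previous deduction is replaced by a ¬-introduction followed by a ∀-introduction and an application of the rule inferring ¬¬∀xH from ∀x¬¬H):

```latex
\dfrac{\neg\forall x H
       \qquad
       \dfrac{\dfrac{\dfrac{\begin{array}{c}
                [\neg H\{x/y\}]^{(1)} \quad \varSigma \quad \neg\forall x H\\
                \mathcal{D}\{c/y\}\\
                \bot
              \end{array}}
              {\neg\neg H\{x/y\}}\;{\scriptstyle \neg\text{-I}\,(1)}}
             {\forall x\,\neg\neg H}\;{\scriptstyle \forall\text{-I}}}
            {\neg\neg\forall x H}\;{\scriptstyle \mathrm{DNS}}}
      {\bot}\;{\scriptstyle \neg\text{-E}}
```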

based on the rule that allows the inference of ¬¬∀xH from ∀x¬¬H. Thus, if, in accordance with [2] and [1], we denote by MH the system obtained by adding to N such a rule, to be called the double negation shift (DNS) rule, and let ▷_mh stand for the corresponding derivability relation, we have the following:

Corollary 3

If Γ is a set of sentences such that \(\varGamma\not\rhd_{mh} \bot\), then, for any set C of constants not occurring in Γ of cardinality not less than |ℒ(Γ)|, Γ is satisfiable in a canonical interpretation over ℒ(Γ,C).

Corollary 4

(Glivenko’s Theorem for MH)

If a sentence F is deducible in NC, then ¬¬F is deducible in MH.

Furthermore, we have the following corollary which refines the completeness theorem.

Corollary 5

(Completeness Theorem)

If, for a set C of constants not occurring in Γ, F, of cardinality not less than |ℒ(Γ)|, there is no canonical interpretation of ℒ(Γ,F,C) in which Γ is satisfied while F is not (a fortiori, if F is a semantic consequence of Γ), then Γ ▷_c F. Furthermore, if the use of the DNS rule is allowed or Γ, F is ∀-free, then there is a deduction of F from Γ that contains only one final application of ⊥_c, and if Γ, F is both ∀- and →-free, such a derivation, except for its final application of the ⊥_c rule, can be taken to belong to minimal logic.

That MH is indeed an optimal answer to the question of which logical strength is needed to ensure that the consistency of a set of sentences implies the existence of a classical structure in which it is satisfied can be seen as follows. Obviously, the DNS rule is derivable from the corresponding DNS principle, namely the schema ∀x¬¬F→¬¬∀xF. If L is strong enough to ensure that consistency implies existence, in the sense just explained, then Glivenko’s theorem holds for L, and the double negations of formulae that are classically valid are derivable in L. In particular, the double negations of the formulae in the DNS principle are derivable in L. As is easily seen, the formulae in the DNS principle are equivalent in N to their double negations. Therefore, provided that L includes intuitionistic logic, the DNS principle and hence also the DNS rule are derivable in L, so that L must be at least as strong as MH.
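The equivalence claim rests on the fact that each instance ∀x¬¬F→¬¬∀xF has the form A→¬B (with A = ∀x¬¬F and B = ¬∀xF), and any formula of that form is derivable in N from its double negation. A sketch of the general fact (our presentation):

```latex
\begin{aligned}
&\text{From } \neg\neg(A\rightarrow\neg B)\text{, derive } A\rightarrow\neg B \text{ in } \mathbf{N}:\\
&\quad \text{assume } A \text{ and } B;\ \text{assume } A\rightarrow\neg B;\\
&\quad \text{then } \neg B \text{ by } \rightarrow\text{-E with } A\text{, and } \bot \text{ by } \neg\text{-E with } B;\\
&\quad \text{discharging } A\rightarrow\neg B \text{ yields } \neg(A\rightarrow\neg B),\\
&\quad \text{and } \neg\text{-E with } \neg\neg(A\rightarrow\neg B) \text{ yields } \bot;\\
&\quad \text{discharging } B \text{ yields } \neg B;\ \text{discharging } A \text{ yields } A\rightarrow\neg B.
\end{aligned}
```

Since the converse direction, from a formula to its double negation, holds in N for every formula, each instance of the DNS principle is indeed equivalent in N to its double negation.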

Remark

Other ways of introducing MH are considered in [2, 4, 7], and [5].

Remark

The replacement of the constant c with a new variable in the proof of Proposition 2 can be avoided, provided that we adopt first-order languages endowed with individual parameters together with truth-value semantics and the ∃-elimination and ∀-introduction rules are extended by allowing individual parameters to occur in place of their proper variables (see [3] for a brief outline). In that case, Proposition 2 extends to formulae that do not contain free variables but may contain individual parameters, and for the proof, the set C of new constants is replaced by a set of new individual parameters.