1 Main Ideas Behind the Conjecture

The question that the conjecture was intended to answer is roughly whether the elimination rules of Gentzen’s system of natural deduction for intuitionistic logic are the strongest possible ones. This question already arises against the background of Gentzen’s own understanding of his intuitionistic system of natural deduction, in particular the significance of his classification of the inference rules into introduction and elimination rules. Gentzen’s idea was that the introduction rule for a logical constant gives the meaning of the constant while the corresponding elimination rule becomes justified by this very meaning. The rules for conjunction constitute a simple illustration of this idea:

$$ \begin{aligned} \frac{A \quad B}{A \& B}\,\, \& I \qquad \frac{A \& B}{A} \& E \qquad \frac{A \& B}{B} \& E \end{aligned}$$

Understanding the introduction rule for conjunction (&\(I\)) as giving the meaning of conjunction by telling how a conjunction is proved, we see that the elimination rule (&\(E\)) in its two forms becomes justified: according to the meaning of conjunction, a proof of the premiss of an application of the rule &E already contains a proof of the conclusion.
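This justification is witnessed by the reduction step familiar from normalization: an application of &\(E\) whose premiss has been derived by &\(I\) can be contracted to the relevant immediate subdeduction. Schematically, with \(D_{1}\) and \(D_{2}\) the given deductions of \(A\) and \(B\) (the case of the second form of &\(E\) is symmetric):

$$\begin{aligned} \frac{\dfrac{\begin{array}{c@{\quad }c} D_{1} &{} D_{2} \\ A &{} B \end{array}}{A \& B}\, \& I}{A}\, \& E \qquad \leadsto \qquad \begin{array}{c} D_{1} \\ A \end{array} \end{aligned}$$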

It seems obvious that there can be no stronger elimination rule for conjunction that can be justified in this way in terms of the introduction rule. A similar remark can be made concerning all the other logical constants of predicate logic. To illustrate the idea with a slightly more complicated example, consider the introduction and elimination rules for disjunction:

$$\begin{aligned} \frac{A}{A \vee B}\quad \vee I \qquad \frac{B}{A \vee B} \vee I \qquad \frac{\begin{array}{cc@{\quad }c} &{} [A] &{} [B]\\ &{} \vdots &{} \vdots \\ A \vee B &{} C &{} C\end{array}}{C} \vee E \end{aligned}$$

The elimination rule for disjunction (\(\vee E\)) is justified by the meaning given to disjunction by the introduction rule (\(\vee I\)) in view of the following consideration. According to the meaning of disjunction, a proof of the major premiss \(A \vee B\) of an application of \(\vee E\) must contain either a proof of \(A\) or a proof of \(B\). In the first case we may substitute this proof of \(A\) for the hypothesis \(A\) in the proof of the first occurrence of the minor premiss \(C\), and in the second case we may instead substitute the proof of \(B\) for the hypothesis \(B\) in the proof of the second occurrence of the minor premiss \(C\). In either case the result is a proof of \(C\) that does not depend on the hypotheses discharged by the application of \(\vee E\) and that makes no use of this instance of \(\vee E\). In other words, the proofs of the premisses of an inference by \(\vee E\) are seen to contain already elements that, combined appropriately, yield a proof of the conclusion of the inference.
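Rendered as a reduction step, the first case is as follows, where \(D\) is the deduction of \(A\) contained in the proof of the major premiss and \(D_{1}\), \(D_{2}\) are the deductions of the two occurrences of the minor premiss \(C\) (the second case is symmetric):

$$\begin{aligned} \frac{\begin{array}{c@{\quad }c@{\quad }c} \dfrac{\begin{array}{c} D \\ A \end{array}}{A \vee B}\, \vee I &{} \begin{array}{c} [A] \\ D_{1} \\ C \end{array} &{} \begin{array}{c} [B] \\ D_{2} \\ C \end{array} \end{array}}{C}\, \vee E \qquad \leadsto \qquad \begin{array}{c} D \\ A \\ D_{1} \\ C \end{array} \end{aligned}$$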

Can we imagine a stronger elimination rule for disjunction that is possible to justify in terms of the introduction rule in this way? Martin-Löf’s type theory for intuitionistic logic does contain such a stronger rule, but its formulation requires the richer language of type theory. The question should therefore be put a little more carefully: Is there an elimination rule for disjunction that can be formulated in the language of predicate logic and is stronger than \(\vee E\) but can nevertheless be justified in terms of \(\vee I\)? My conjecture is that the answer is no and that the corresponding thing holds for all the other logical constants in the language of predicate logic.

It seems obvious that the elimination rules of Gentzen’s system are the strongest elimination rules that can be justified in this way in terms of his introduction rules. Or, again to put it more carefully: although there are of course weaker elimination rules, and even elimination rules that are deductively equivalent to the ones formulated by Gentzen, there are no stronger rules that can be formulated in the language of predicate logic and are justifiable in terms of the introduction rules. The problem is to formulate this obvious idea more precisely in the form of a conjecture and then to prove it.

2 The Notion of Validity

The ideas sketched above that are in need of a more precise formulation are in particular the two ideas that an introduction rule gives the meaning of the logical constant in question and that this meaning justifies the corresponding elimination rule. As for the second idea concerning the justification of elimination rules, anyone familiar with the reduction steps in the normalization of natural deductions sees immediately that they are the main elements in these justifications. When I introduced the normalization procedure for natural deductions, I saw it as something that brought out Gentzen’s idea concerning how introduction and elimination rules are related to each other.\(^{1}\) But, admittedly, it does not bring out his idea that the introduction rules are meaning constitutive and that the meanings that they assign to the logical constants justify the elimination rules. To bring out these ideas explicitly requires a semantics in which one can speak about the validity of inferences. Analogously to how meaning is explained in classical semantics by reference to truth conditions, we should now explain the meaning of a sentence by telling what is required of a valid proof of the sentence. The notion of convertibility introduced by [11] for terms and adapted to natural deductions under the name of computability by [3] seemed to me to offer a suitable means for such a semantics. Modifying Martin-Löf’s notion, I used it to prove strong normalization for various systems of natural deductions. Furthermore, I argued in an appendix to the chapter “Ideas and results in proof theory” [6] that when slightly modified in another direction, the notion could be used to give a semantic explication of the justification of inference rules. Because of this use, I called it validity.

I shall now give a condensed account of that notion. To do so it is convenient to introduce a few other notions. I shall say that an inference such as \(\rightarrow \)-introduction or \(\vee \)-elimination, by which occurrences of a hypothesis become discharged, binds the occurrences in question and that an inference such as \(\forall \)-introduction or \(\exists \)-elimination, which puts a restriction on a variable (a so-called ‘Eigenvariable’) and which would lose its correctness if the variable were replaced by a constant, binds the variable in question. An occurrence of a hypothesis in a deduction \(D\) or of a variable in a formula in \(D\) is said to be free in \(D\) if it is not bound, and a deduction that contains free hypotheses or free variables is said to be open. A deduction that is not open is closed. A deduction \(D\) is said to reduce to the deduction \(D^*\), when \(D^*\) is obtained from \(D\) by successively replacing subdeductions by their reductions (as defined in the context of normalization).

A triple consisting of a set \(\mathcal {T}\) of closed individual terms, a set \(\mathcal {R}\) of relational symbols, and a set \(\mathcal {P}\) of proofs of atomic sentences built up from \(\mathcal {R}\) and \(\mathcal {T}\) (thus not containing any proof of the constant for falsehood, \(\bot \)) will be called a base. We shall be concerned with formulas in a first-order language determined by such a base; the set \(\mathcal {T}\) is supposed to fix the range of the individual variables (and hence the domain of the quantifiers), and the set \(\mathcal {P}\) the meanings of the atomic sentences. A deduction \(D\) of a sentence \(A\) is said to be in canonical form relative to such a base \(\mathcal {B} = (\mathcal {T},\mathcal {R},\mathcal {P})\), if \(D\) is closed and either \(A\) is an atomic sentence and \(D\) is in \(\mathcal {P}\) or \(A\) is compound and the last inference of \(D\) is an introduction.

We can now formulate two principles of validity (relative to a base \(\mathcal {B}=(\mathcal {T},\mathcal {R},\mathcal {P})\)) for closed and open deductions respectively:

(I):

A closed deduction \(D\) of a sentence \(A\) is valid (relative to a base \(\mathcal {B}\)) if and only if either (a) \(D\) is in canonical form (relative to \(\mathcal {B}\)) and, in case \(A\) is compound, the immediate subdeductions of \(D\) are valid (relative to \(\mathcal {B}\)), or (b) \(D\) is not in canonical form but reduces to a deduction that is valid according to clause (a).

(II):

An open deduction \(D\) is valid (relative to a base \(\mathcal {B}\)) if and only if each closed instance \(D'\) of \(D\) is valid (relative to \(\mathcal {B}\)) when \(D'\) is obtained by first replacing all free individual variables in \(D\) by individual terms (in \(\mathcal {T}\)) and then replacing all free occurrences of each hypothesis in \(D\) by a valid closed deduction (relative to \(\mathcal {B}\)) of that hypothesis.

 

Principle (I) can be seen as a condensed semantic expression of Gentzen’s two basic ideas concerning introduction and elimination rules. His first idea, that an introduction rule for a logical constant gives the meaning of the constant, is here understood as implying that the rule is self-justifying in the sense that inferences conforming to the rule are valid simply in virtue of what the constant in question means. This is expressed in (Ia) by saying that applications of introduction rules preserve the validity of a deduction. However, this states only a sufficient condition for validity, half of the meaning of the logical constant so to speak: a deduction is valid if it ends with an introduction and its immediate subdeductions are valid. To take this to be also a necessary condition would result in an obviously too restrictive notion of validity. But we could demand that a deduction is valid only if it can be reduced to a deduction that ends with an introduction. This requirement is stated in (Ib), which also expresses Gentzen’s second idea about justifications of elimination inferences: inferring a conclusion \(A\) by such an inference yields a valid deduction if and only if it reduces to a deduction of \(A\) that does not use the inference in question.

Principle (II) in turn is an expression of the simple idea that an open deduction is to be seen as a schema in which free individual variables are blanks for (closed) individual terms and free hypotheses \(A\) are blanks for closed proofs of \(A\). It is hence valid if and only if all results of appropriately filling in the blanks are valid.

Since the premisses of an introduction and the hypotheses that it binds are of lower complexity than the conclusion, principles (I) and (II) can be seen as a recursive definition (over the complexity of the end formulas) of the notion of validity of natural deductions.
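The shape of this recursion can be displayed schematically in program form. The following Haskell sketch is only an illustration of the structure of the definition, under simplifying assumptions: every identifier (Base, Deduction, and the helper functions) is a hypothetical stand-in left abstract here, and the set of closed instances in principle (II) is in general infinite, so the sketch is a specification of the recursion rather than a decision procedure.

```haskell
-- Schematic rendering of principles (I) and (II); every identifier is a
-- hypothetical stand-in, not part of the paper's formalism.
data Base        -- a base B = (T, R, P), left abstract
data Deduction   -- natural deductions over the language of the base

isClosed :: Deduction -> Bool              -- no free hypotheses or variables
isClosed = undefined

isCanonical :: Base -> Deduction -> Bool   -- canonical form relative to B
isCanonical = undefined

immediateSubs :: Deduction -> [Deduction]  -- immediate subdeductions
immediateSubs = undefined

reduceToCanonical :: Base -> Deduction -> Maybe Deduction
reduceToCanonical = undefined

-- closed instances of an open deduction: substitute terms of T for free
-- variables, then deductions satisfying the given predicate for free
-- hypotheses (in general an infinite family)
closedInstances :: Base -> (Deduction -> Bool) -> Deduction -> [Deduction]
closedInstances = undefined

valid :: Base -> Deduction -> Bool
valid b d
  | isClosed d =
      if isCanonical b d
        then all (valid b) (immediateSubs d)               -- clause (Ia)
        else case reduceToCanonical b d of                 -- clause (Ib)
               Just d' -> all (valid b) (immediateSubs d')
               Nothing -> False
  | otherwise = all (valid b) (closedInstances b (valid b) d)  -- principle (II)
```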

An inference rule \(R\) can now be defined as valid when it preserves validity relative to an arbitrary base \(\mathcal {B}\), that is, when a deduction that ends with an application of \(R\) is always valid relative to \(\mathcal {B}\) given that its immediate subdeductions are valid relative to \(\mathcal {B}\). To prove the validity of the inference rules of Gentzen’s system of natural deduction for intuitionistic logic is now an easy exercise.

3 Generalizing the Notion of Validity

A limitation of the notion of validity as described above is that it is defined only for deductions in a given formal system. In contrast, a notion like truth is defined for sentences in general but singles out a subset of them. Similarly, one would like to have a notion of validity defined for a broader range of reasoning, singling out a subdomain of correct reasoning that could properly be called proofs. Such a generalization is especially important in the context of the conjecture discussed here, since in that context we want to consider not only inference rules in a given system but other possible elimination rules that no one has proposed so far.

The goal of the chapter in which the conjecture was formulated was precisely to extend the notion of validity to a broader class of reasoning, what I called arguments. The simple idea is to consider not only deductions that proceed by applications of a given set of rules of inference but trees of formulas built up of arbitrary inferences. As in a natural deduction, the top-formulas of such a tree are to represent axioms or hypotheses, and the inferences are allowed to bind hypotheses and variables. It is supposed to be determined for each inference which occurrences of hypotheses and variables become bound by the inference. Such an arbitrary tree of formulas (in the language of first-order logic) with indications of how variables and hypotheses are bound was called an argument skeleton. The notion of being an open or closed deduction can thus be carried over to argument skeletons.

Of course, most argument skeletons do not represent valid forms of reasoning. But some of them do and can be justified by procedures similar to the ones used when justifying the elimination rules in Gentzen’s system. To generalize the idea of such justifying procedures, I define a justification of an inference rule \(R\) different from the introduction rules to be an operation that is defined on some argument skeletons whose last inference is an application of \(R\) and that yields as value, when applied to an argument skeleton \(S\), another argument skeleton with the same end formula and with no more free hypotheses or variables than \(S\). I consider sets \(J\) of such justifications of inference rules. They are assumed to be consistent in the sense that they are not to contain two operations \(j\) and \(j'\) defined on the same skeleton and yielding different values. An argument skeleton \(S\) is said to reduce to another argument skeleton \(S^*\) relative to such a set \(J\) of justifications, if there is a sequence \(S_{1},S_2,\ldots ,S_{n}\) (\(n > 0\)) where \(S_{1}=S\), \(S_{n}=S^*\), and, for each \(i\) (\(i=1, 2,\dots ,n-1\)), there is a \(j\) in \(J\) such that \(S_{i+1}\) is obtained from \(S_{i}\) by replacing some subskeleton \(S'\) of \(S_{i}\) by \(j(S')\).
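For concreteness, this reduction relation can also be rendered programmatically. The following Haskell fragment is a deliberately crude sketch under hypothetical simplifications: skeletons are bare trees with the binding information omitted, justifications are partial operations on them, and the search is a specification rather than a terminating algorithm, since reduction sequences need not terminate.

```haskell
import Data.Maybe (mapMaybe)

-- hypothetical simplification: a skeleton as a bare tree of formulas,
-- a justification as a partial operation on skeletons
data Skeleton = Node String [Skeleton] deriving (Eq, Show)
type Justification = Skeleton -> Maybe Skeleton

-- all skeletons obtainable from s by replacing one subskeleton s' by j(s')
oneStep :: [Justification] -> Skeleton -> [Skeleton]
oneStep js s@(Node f ss) =
  mapMaybe ($ s) js
    ++ [ Node f (pre ++ t' : post)
       | (pre, t : post) <- splits ss
       , t' <- oneStep js t ]
  where splits xs = [ splitAt i xs | i <- [0 .. length xs - 1] ]

-- S reduces to S* iff some chain of single replacements links them
reducesTo :: [Justification] -> Skeleton -> Skeleton -> Bool
reducesTo js s t = s == t || any (\s' -> reducesTo js s' t) (oneStep js s)

-- e.g. the standard justifying operation for the first form of &E:
-- defined exactly on skeletons where &E follows an application of &I
jAndE1 :: Justification
jAndE1 (Node "&E1" [Node "&I" [p, _]]) = Just p
jAndE1 _                               = Nothing
```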

An argument skeleton together with a set of justifications is called an argument. It is still the case that most arguments do not represent valid forms of reasoning. What are called justifications above are in fact only alleged justifications. It is the notion of validity generalized to arguments that is now to lay down the conditions that the alleged justifications have to satisfy in order to be real justifications. This generalization of validity is easily obtained by simply carrying over the two principles of validity from deductions to arguments. The notions of a base \(\mathcal {B} = (\mathcal {T}, \mathcal {R}, \mathcal {P})\) and of a canonical form are to be kept as before. Principle (I) now becomes

  • (I’) A closed argument \((S, J)\) for a sentence \(A\) is valid relative to a base \(\mathcal {B}\) if and only if either (a) \(S\) is in canonical form relative to \(\mathcal {B}\) and, in case \(A\) is compound, for each immediate subskeleton \(S'\) of \(S\) it holds that \((S', J)\) is valid relative to \(\mathcal {B}\), or (b) \(S\) reduces relative to \(J\) to a skeleton \(S^*\) in canonical form such that \((S^*,J)\) is valid relative to \(\mathcal {B}\) according to clause (a).

To generalize principle (II) we need to speak about consistent extensions of a set of justifications, where again we require that there is no conflicting overlap between the different justifying operations. Principle (II) now becomes

  • (II’) An open argument \((S, J)\) is valid relative to a base \(\mathcal {B} = (\mathcal {T}, \mathcal {R}, \mathcal {P})\) if and only if for each consistent extension \(J'\) of \(J\) and for each instance \(S^*\) of \(S\) it holds that \((S^*, J')\) is valid relative to \(\mathcal {B}\), when \(S^*\) is obtained from \(S\) by first replacing all free occurrences of individual variables by terms in \(\mathcal {T}\) and then replacing all free occurrences of each hypothesis \(A\) in \(S\) by closed skeletons \(S'\) ending with \(A\) such that \((S', J')\) is valid relative to \(\mathcal {B}\).

An argument that is valid in the sense defined may proceed by quite different inferences from those obtained by applying the inference rules of Gentzen’s system, but it can claim to represent a proof with the same right as the deductions in Gentzen’s system.

That an inference rule is valid is to mean as before that it preserves validity, now validity of arguments, relative to an arbitrary base. In case \(R\) is an inference rule different from the introduction rules, this is to mean that there is a justification \(j\) of \(R\) such that for all bases \(\mathcal {B}\) and all consistent extensions \(J\) of \(\{j\}\), if \(S\) is an argument skeleton whose last inference is an application of \(R\) and for each immediate subskeleton \(S'\) of \(S\) it holds that \((S', J)\) is valid relative to \(\mathcal {B}\), then \((S, J)\) is valid relative to \(\mathcal {B}\), too.

Having arrived at this notion of valid inference rule, the conjecture that there are no stronger justifiable elimination rules within first-order logic than the ones formulated by Gentzen can now naturally be formulated as follows:

Every valid inference rule that can be formulated within first-order languages holds as a derivable inference rule within the system of natural deduction for intuitionistic logic.

Does this notion of validity rightly explicate a constructive reading of the logical constants? Besides my own discussions of this question in several papers dealing with constructive approaches to the notion of logical consequence (for a relatively recent paper, see [8]), there is a particularly comprehensive presentation and discussion of essentially my notion of validity in Michael Dummett’s book The Logical Basis of Metaphysics [1] (for a comparison of my and Dummett’s notions of validity, see [9]). (Added in proof: After this paper was composed, Peter Schroeder-Heister has presented in [10] a very detailed and thoughtful study of the notion of validity, proposing among other things some changes in my notions of justification and validity. If I had written my paper today, I would have adopted some of them, and would have written a different paper also in many other respects.) In the rest of this paper, I shall take up some doubts that one may have about the notion of validity presented here, and shall consider a quite radical revision of the notion.

4 A Modified Approach

A shortcoming of the notion of argument as described above is that the justifications are operations defined on argument skeletons rather than on arguments, i.e. skeletons together with justifications, and furthermore that the values of the justifying operations consist of just argument skeletons instead of skeletons with justifications. It is the skeletons with justifications that represent arguments, valid or invalid ones, and when an argument step is to be justified it is conceivable that one wants the justification to depend on the entire arguments for the premisses and not only on their skeletons. As for the value of a justifying operation, it is a shortcoming that it consists of just a skeleton if one wants it to contain new inferences that were not present in the skeleton to which the operation is applied.\(^{2}\) This last limitation may be taken care of without changing the framework very much. However, in order to achieve that the justifications operate not just on argument skeletons but, as it were, on skeletons with justifications, we must make a more radical change in the approach. We then need to conceive of the valid arguments, i.e. proofs, as built up of operations defined on proofs and yielding proofs as values.

To outline an approach of that kind, let us as before start with a base \(\mathcal {B}\), now determined by a set \(\mathcal {T}\) of closed individual terms, a set \(\mathcal {R}\) of relational symbols, and a set \(\mathcal {C}\) of constants standing for proofs of atomic formulas with relational symbols in \(\mathcal {R}\) and terms in \(\mathcal {T}\). Each constant in \(\mathcal {C}\) is to be typed by the atomic formula \(A\) that it is a proof of; we may write such a constant \( {c}^{A}\). The base determines as before a first-order language \(\mathcal {L_{B}}\), whose formulas are built up as usual with the symbols given by \(\mathcal {R}\) and \(\mathcal {T}\), individual variables, and the logical constants \(\bot \), &, \(\vee \), \(\rightarrow \), \(\forall \), and \(\exists \).

For each formula \(A\) in the language \(\mathcal {L_{B}}\), we introduce (proof) variables of type \(A\), written \(\alpha ^{A}\). The variables of type \(A\) are thought of as ranging over proofs of \(A\). From the constants in \(\mathcal {C}\) and the proof variables, we build up what I shall call proof terms, using operators that are to be thought of as standing for operations on proofs that yield proofs as values. The proof terms are to be typed by the formulas of \(\mathcal {L_{B}}\); a proof term of type \(A\) is to be thought of as standing for a proof of \(A\). Constants \( {c}^{A}\) in \(\mathcal {C}\) and variables \(\alpha ^{A}\) are thus proof terms of type \(A\). For all the logical constants except \(\bot \), we introduce primitive operators, which we may call &\(I\), \(\vee I_{1}\), \(\vee I_{2}\), \(\rightarrow I\), \(\forall I\), and \(\exists I\) (for disjunction there are thus two operators \(\vee I_{1}\) and \(\vee I_{2}\)), naming them in the same way as the introduction rules. Some of the operators are variable binding, and the variables that they bind are as usual listed after an occurrence of the operator. The rules for forming proof terms with the help of the primitive operators are as follows:

  (1) &\( I(P_{1}, P_{2}; A_{1}, A_{2}/A_{1} \& A_{2})\) is a proof term of type \( A_{1} \& A_{2}\), if \(P_{i}\) is a proof term of type \(A_{i}\) (\(i = 1, 2\))—the formulas written after the semi-colon and before the slash indicate the types of the terms to which the operator is applied, and the one written after the slash indicates the type of the resulting term,

  (2) \(\vee I_{i}(P; A_{i}/A_{1} \vee A_{2})\) is a proof term of type \(A_{1} \vee A_{2}\), if \(P\) is a proof term of type \(A_{i}\) (\(i = 1, 2\)),

  (3) \(\rightarrow \!\!{I}\alpha ^{A}(P; B/A \rightarrow {B})\) is a proof term of type \(A \rightarrow B\) and binds free occurrences of \(\alpha ^{A}\) in \(P\), if \(P\) is a proof term of type \(B\),

  (4) \(\forall Ix(P; A(x)/\forall xA(x))\) is a proof term of type \(\forall xA(x)\), if \(P\) is a proof term of type \(A(x)\)—it binds free occurrences of the variable \(x\) that occur (in type indices) in \(P\), and there is the restriction that \(x\) is not to occur free in the indices of free variables \(\alpha ^{A}\) in \(P\),

  (5) \(\exists I(P, t; A(t)/\exists xA(x))\) is a proof term of type \(\exists xA(x)\), if \(P\) is a proof term of type \(A(t)\) (obtained from \(A(x)\) in \(\exists xA(x)\) by replacing free occurrences of \(x\) by the individual term \(t\)).
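As an illustration of these formation rules, a canonical proof term of type \(A \rightarrow (B \rightarrow (A \& B))\) is obtained by one application of rule (1) followed by two applications of rule (3):

$$\begin{aligned} \rightarrow \!\!{I}\alpha ^{A}(\rightarrow \!\!{I}\beta ^{B}( \& I(\alpha ^{A}, \beta ^{B}; A, B/A \& B); A \& B/B \rightarrow (A \& B)); B \rightarrow (A \& B)/A \rightarrow (B \rightarrow (A \& B))) \end{aligned}$$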

In addition to these primitive operators, we introduce operational parameters for all possible elimination rules. For instance, corresponding to a rule of the form

$$\begin{aligned} \frac{A_1 \quad A_2 \ldots A_n}{B} C,x \end{aligned}$$

that binds occurrences of the hypothesis \(C\) and the individual variable \(x\), we introduce a parameter \(\Phi \) with the formation rule:

\(\Phi \alpha ^{C}x(P_{1}, P_{2},\dots ,P_{n}; A_{1}, A_{2},\dots , A_{n}/B)\) is a proof term of type \(B\), if \(P_{i}\) is a proof term of type \(A_{i} (i= 1, 2, \dots , n)\)—it binds free occurrences of \(\alpha ^{C}\) and of \(x\) in \(P_{i}\) with the restriction that \(x\) is not to occur free in type indices of free variables \(\alpha ^{C'}\) in \(P_{i}\) except when \(C'\) is \(C\).

Particular such operational parameters are those that correspond to the usual elimination rules, which we may write &\(E_{1}\), &\(E_{2}\), \(\vee E\), \(\rightarrow E\), \(\forall E\), and \(\exists E\). For instance,

  (1) &\(E_{i}(P; A_{1} \& A_{2}/A_{i})\) is a proof term of type \(A_{i}\), if \(P\) is a proof term of type \( A_{1} \& A_{2}\) (\(i = 1, 2\));

  (2) \(\vee E \alpha ^{A}\beta ^{B} (P, Q, R; A \vee B, C, C/C)\) is a proof term of type \(C\), if \(P\), \(Q\), and \(R\) are proof terms of type \(A \vee B\), \(C\), and \(C\), respectively—it binds free occurrences of the variables \(\alpha ^{A}\) and \(\beta ^{B}\) in \(Q\) and \(R\); and

  (3) \(\exists E \alpha ^{A(x)}x (P, Q; \exists x A(x), B/B)\) is a proof term of type \(B\), if \(P\) is a proof term of type \(\exists x A(x)\) and \(Q\) is a proof term of type \(B\)—it binds free occurrences of the variables \(\alpha ^{A(x)}\) and \(x\) in \(Q\) with the usual variable restrictions.
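For instance, combining these operational parameters with the primitive operators, a proof term of type \((A \& B) \rightarrow (B \& A)\) is

$$\begin{aligned} \rightarrow \!\!{I}\alpha ^{A \& B}( \& I( \& E_{2}(\alpha ^{A \& B}; A \& B/B), \& E_{1}(\alpha ^{A \& B}; A \& B/A); B, A/B \& A); B \& A/(A \& B) \rightarrow (B \& A)) \end{aligned}$$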

To the operational parameters we assign definitions that tell how the parameters are to be interpreted as standing for operations on proofs that yield proofs as values. Examples of such definitions are:

$$ \begin{aligned}{} \& E_{2}( \& I(P_{1}, P_{2}; A_{1}, A_{2}/A_{1} \& A_{2}); A_{1} \& A_{2}/A_{2})&= P_{2} \\ \vee E \alpha ^{A}\beta ^{B}(\vee I_{1}(P; A/A \vee B), Q(\alpha ^{A}), R(\beta ^{B}); A \vee B, C, C/C)&= Q(P) \\ \rightarrow \!\!{E}(\rightarrow \!\!{I}\alpha ^{A}(P(\alpha ^{A}); B/A \rightarrow B), Q; A \rightarrow {B}, A/B)&= P(Q) \\ \exists E\alpha ^{A(x)} x(\exists I(P, t; A(t)/ \exists x A(x)), Q(\alpha ^{A(x)},x); \exists xA(x), B/B)&= Q(P,t) \end{aligned}$$

A proof term \(P\) (of type \(A\)) together with a set \(\Delta \) of definitions of this kind, I shall call an interpreted proof term (of type \(A\)). A proof term \(P\) is said to reduce to another proof term \(P^*\) relative to a set of definitions \(\Delta \), if \(P^*\) is obtained from \(P\) by successively replacing subterms of \(P\) that appear as definienda of definitions in \(\Delta \) by their definientia.
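The displayed definitions behave as rewrite rules, and the propositional ones admit a small executable rendering. The following Haskell sketch is only an illustration under simplifying assumptions: the types are dropped, only the &, \(\vee \), and \(\rightarrow \) cases are covered (the quantifier cases involve individual terms as well), substitution is the naive one (adequate when the substituted term is closed), and all identifiers are ours rather than the paper's.

```haskell
-- untyped proof terms for the &, v, -> fragment (hypothetical encoding)
data Term
  = Var String                               -- proof variable alpha
  | AndI Term Term                           -- &I(P1, P2)
  | OrI1 Term | OrI2 Term                    -- vI1(P), vI2(P)
  | ImpI String Term                         -- ->I alpha (P), binds alpha in P
  | AndE1 Term | AndE2 Term                  -- &E1(P), &E2(P)
  | OrE Term (String, Term) (String, Term)   -- vE(P; alpha.Q; beta.R)
  | ImpE Term Term                           -- ->E(P, Q)
  deriving (Eq, Show)

-- naive substitution of t for free occurrences of Var v
subst :: String -> Term -> Term -> Term
subst v t u = case u of
  Var w               -> if v == w then t else u
  AndI p q            -> AndI (go p) (go q)
  OrI1 p              -> OrI1 (go p)
  OrI2 p              -> OrI2 (go p)
  ImpI w p            -> if v == w then u else ImpI w (go p)
  AndE1 p             -> AndE1 (go p)
  AndE2 p             -> AndE2 (go p)
  OrE p (a, q) (b, r) -> OrE (go p)
                             (a, if v == a then q else go q)
                             (b, if v == b then r else go r)
  ImpE p q            -> ImpE (go p) (go q)
  where go = subst v t

-- one application of a definitional equation at the outermost operator
-- example: step (ImpE (ImpI "a" (Var "a")) (OrI1 (Var "x")))
--            == Just (OrI1 (Var "x"))   (the third displayed equation)
step :: Term -> Maybe Term
step (AndE1 (AndI p _))      = Just p
step (AndE2 (AndI _ q))      = Just q
step (OrE (OrI1 p) (a, q) _) = Just (subst a p q)
step (OrE (OrI2 p) _ (b, r)) = Just (subst b p r)
step (ImpE (ImpI a p) q)     = Just (subst a q p)
step _                       = Nothing
```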

We may now define a notion of validity for interpreted proof terms in essentially the same way as for arguments. We have thus the two principles:

(I”):

A closed interpreted proof term \((P,\Delta )\) of type \(A\) is valid relative to a base \(\mathcal {B} = (\mathcal {T}, \mathcal {R}, \mathcal {C})\) if and only if either (a) \(A\) is atomic and \(P\) is a constant \(c^{A}\) in \(\mathcal {C}\), or \(A\) is compound with \(\chi \) as its principal constant and \(P\) is a proof term with the primitive operator \(\chi I\) as its outer operator such that for each immediate subterm \(Q\) of \(P\) it holds that \((Q,\Delta )\) is valid, or (b) \(P\) reduces to a proof term \(P^*\) relative to \(\Delta \) such that \((P^*,\Delta )\) is valid according to (a).

(II”):

An open interpreted proof term \((P,\Delta )\) is valid relative to a base \(\mathcal {B} = (\mathcal {T}, \mathcal {R}, \mathcal {C})\) if and only if each \((P^*,\Delta ^*)\) is valid relative to \(\mathcal {B}\) when \(\Delta ^*\) is an extension of \(\Delta \) and \(P^*\) is obtained from \(P\) by first replacing all occurrences of free individual variables by terms in \(\mathcal {T}\) and then, in the result of this substitution, replacing all free occurrences of each variable \(\alpha ^{A}\) by closed proof terms \(Q\) of type \(A\) such that \((Q, \Delta ^*)\) is valid relative to \(\mathcal {B}\).

 

Valid closed interpreted proof terms accord well with proofs as usually described in intuitionism. If \((P, \Delta )\) is such an interpreted proof term, then either \(P\) has introductory form, i.e. its outer operator is one of the primitive operators, or it is definitionally equal to such a proof term, i.e. reduces effectively to such a term by replacing definienda of definitions in \(\Delta \) by their definientia. For instance, if \(P\) is of type \(A \rightarrow B\), then \(P\) has the form \(\rightarrow \!\!{I} \alpha ^{A}(P(\alpha ^{A}); B/A \rightarrow B)\) or reduces relative to \(\Delta \) to such a form. Furthermore, the validity of \((\rightarrow \!\!{I} \alpha ^{A}(P(\alpha ^{A}); B/A \rightarrow B), \Delta )\) implies that \((P(\alpha ^{A}), \Delta )\) is valid, which according to principle (II”) means that for any set \(\Delta '\) of definitions extending \(\Delta \) and any closed proof term \(Q\) of type \(A\) such that \((Q, \Delta ')\) is valid, it holds that \((P(Q), \Delta \cup \Delta ')\) is valid. Valid closed interpreted proof terms may thus be thought of as representing intuitionistic proofs.

Comparing an interpreted proof term \((P, \Delta )\) with arguments as defined earlier, we find that one obtains what was called an argument skeleton when one attends to just the types in the subterms of \(P\), and that the operators occurring in \(P\), when interpreted by the definitions in \(\Delta \), can be seen as names of operations that appear as justifications of the inference steps in the argument skeleton. The difference between arguments and interpreted proof terms is the one discussed at the beginning of this section, namely that the justifying operations in an argument are applied to argument skeletons, while the operations in an interpreted proof term are defined for objects built up from operations of this very kind.

The system of interpreted proof terms may be seen as an extension of a typed lambda calculus or a fragment of Martin-Löf’s intuitionistic type theory. Of course, \(\rightarrow I\) corresponds to \(\lambda \)-abstraction, and \(\rightarrow \!\!{E}\), when defined as above, to application of a lambda term, while the definition of \(\rightarrow \!\!{E}\) corresponds to the rule of \(\beta \)-conversion. Also, \(\forall I\) is a kind of \(\lambda \)-abstraction, while &\(I\) corresponds to pairing. To these well-known operations, \(\vee I\) and \(\vee E\) add other specific ones. Haskell Curry seems to have been the first to note the relevance of lambda calculus to the semantic interpretation of a fragment of intuitionistic sentential logic. Extensions of Curry’s observation to deal with full intuitionistic predicate logic occur in [2] (and in [5]; what is there called a construction term corresponds to what is now called a proof term).
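The propositional part of this correspondence can be displayed directly in Haskell (a sketch in our own notation: conjunction as the pair type, disjunction as Either, implication as the function type):

```haskell
-- &I is pairing; &E1 and &E2 are the projections
andI :: a -> b -> (a, b)
andI x y = (x, y)

andE1 :: (a, b) -> a
andE1 = fst

-- ->I is lambda abstraction; ->E is application
impE :: (a -> b) -> a -> b
impE f x = f x

-- vI1 and vI2 are the injections; vE is case analysis
orE :: Either a b -> (a -> c) -> (b -> c) -> c
orE (Left x)  f _ = f x
orE (Right y) _ g = g y
```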

In the approach outlined above, we may now define the validity of an inference rule in essentially the same way as before. For instance, if \(R\) is an inference rule

$$\begin{aligned} \frac{A_1 \quad A_2}{B} C, x \end{aligned}$$

that binds a hypothesis \(C\) and an individual variable \(x\), we say that \(R\) is valid if there is a definition \(d\) of the operational parameter \(\Phi \alpha ^{C} x(\pi _{1}, \pi _{2}; A_{1},A_{2}/B)\) such that for any base \(\mathcal {B}\), if \((P,\Delta )\) and \((Q,\Delta )\) are two interpreted proof terms valid relative to \(\mathcal {B}\), where \(P\) and \(Q\) are of type \(A_{1}\) and \(A_{2}\) respectively, then applying \(\Phi \) to \(P\) and \(Q\) we get a proof term \(\Phi \alpha ^{C} x(P, Q; A_{1},A_{2}/B)\) such that \((\Phi \alpha ^{C} x(P, Q; A_{1},A_{2}/B),\Delta )\) is valid relative to \(\mathcal {B}\).

All the elimination rules of Gentzen’s system of natural deduction for intuitionistic logic are easily seen to be valid in the sense now defined. The conjecture is that conversely all inference rules that are valid in this sense hold as derived rules in Gentzen’s system of natural deduction for intuitionistic logic.