1 Introduction: Carnap’s Problem

The aim of Carnap’s book Formalization of Logic (1943) was to point out the existence of what he called non-normal interpretations of classical logic: non-standard interpretations of logical constants which nevertheless validate all the classical laws. He also indicated ways to strengthen the usual proof systems so that such interpretations would not arise. In his review, Church recognized the problem, but was sceptical of Carnap’s remedy, arguing that the proposed proof-theoretic revisions were “a concealed use of semantics” (Church 1944, p. 496). Instead, he suggested, the use of semantic notions should be made explicit and vindicated, since no purely syntactic solution would work.

Later logicians and philosophers discussing the issue have not followed Church’s advice, however, but mostly engaged in the strengthening of proof rules. In this paper we do exactly what Church recommended. More precisely, we show that assuming a few largely uncontroversial principles of semantic interpretation—the most important of which is the principle of compositionality—Carnap’s Problem can be solved. Moreover, in contrast with earlier attempts, which in effect deal only with propositional logic, we show that the result holds for first-order logic as well. Indeed, eliminating non-normal interpretations of the quantifiers is the hard case, in the light of which any tentative solution to Carnap’s Problem should be evaluated.

But do we really need to worry about Carnap’s Problem? Carnap himself took the issue quite seriously, and so do those who have pursued it later. At least we can safely say the following. Logical constants are instrumental to inference, and their usage is captured by means of syntactic rules in a formal system. When the language is given a definition of truth, logical constants receive interpretations representing their contributions to the semantic values of sentences in which they occur. So it certainly is a legitimate question to ask if the proof rules for logical constants determine these interpretations.

One may go further and consider the presence of non-normal interpretations unsatisfactory in principle, at least for a basic system like first-order logic. Carnap seems to have taken the absence of such interpretations to be desirable in its own right, expressing the ability of a syntactic system to adequately reflect semantics, on a par with more familiar properties such as soundness and completeness. Also, from the perspective of a theory of meaning, failure of proof rules to determine semantic interpretation appears to spell trouble for an account of meaning as use applied to the logical constants; witness the renewed interest in Carnap’s Problem in connection with debates about inferentialism (e.g. Murzi and Hjortland 2009; Garson 2013). At the very least, the presence of non-normal interpretations would make it hard to learn the (classical) meaning of logical constants for someone with access only to their rules of proof.

2 The Space of Solutions

We shall first further explore the space of possible solutions, in order to compare our explicitly semantic approach to existing solutions. Carnap’s Problem is the underdetermination of semantics (the interpretation of logical constants) by syntax (a syntactically defined notion of consequence). Therefore, since the problem concerns the match between syntax and semantics, three different kinds of strategies naturally emerge:

  (a) Syntactic strategy: One may target the syntax, and strengthen the proof system, so that it imposes additional burdens on the semantics.

  (b) Semantic strategy: One may target the semantics, and a priori constrain the class of possible interpretations, so that making the semantics determinate is easier.

  (c) Strong pairing strategy: One may target the relationship between syntax and semantics and require more than correctness of provable sequents, so that the same proof system places heavier constraints on the semantics which is to match it.

Carnap himself followed a syntactic strategy and outlined different ways to strengthen proof systems that would eliminate non-normal interpretations. One was to formalize logical contradiction on a par with logical consequence, the other to allow for multiple conclusions. This second option has become popular among proof theorists (see Shoesmith and Smiley 1978; the idea goes back to Gentzen’s notion of a sequent). Alternatively, one may add a primitive notion of rejection, construed as a pragmatic force complementary to assertion (Smiley 1996; Rumfitt 2000). Technically, these systems all succeed in eliminating non-normal interpretations in classical propositional logic. Moreover, Hjortland (2014) shows that categorical characterizations of connectives can even be given for many-valued propositional logics, using multi-sided sequents. But it is easy to share Church’s scepticism about the philosophical significance of these results for Carnap’s Problem. Where does the extra expressive power embodied in these proof formats come from? Precisely because they outreach the familiar notion of inference (according to which some proposition follows from some other propositions), such formats may be suspected to rely upon semantic notions which are not part and parcel of our inferential practice. For example, multiple conclusions appear to presuppose a primitive understanding of disjunctive contents, and building the duality of assertion and rejection into the proof system could be argued to presuppose a grasp of negation (see Steinberger 2011 for a detailed rebuttal of multiple conclusions for the inferentialist).

Even more importantly, success in the propositional case for strategy (a) does not easily carry over to the first-order case. The latter is only cursorily dealt with in the literature, and existing treatments either assume a non-standard interpretation of quantifiers (Carnap 1943; Hacking 1979), or relax the standards for what it means to fix the interpretation of quantifiers (Smiley 1996). Non-standard treatments construe universal and existential quantification as infinitary conjunction and disjunction (under the assumption that every element in the domain has a name, and that the proof system contains an \(\omega \)-rule). This Procrustean strategy shows at best that if quantifiers are reduced to connectives, what works for connectives works for quantifiers as well. But at least since Mostowski (1957), \(\forall \) and \(\exists \) have been recognized as instances of generalized quantifiers, and the real question is whether the proof rules allow any generalized quantifiers other than these.

The strong pairing strategy, strategy (c) above, has recently been advocated in Garson (2013). Standardly, all that is required of a semantics matching a syntactically given consequence relation is that the consequence relation be correct with respect to it: whenever a sequent is derivable, it must be valid, that is, any interpretation which makes its antecedents true is to make its consequent true as well. When rules are given which generate the consequence relation, stronger ties may be demanded. Inference rules are not only a way to produce derivable sequents; they say that if some consequences hold, some other consequence holds. Hence, as Garson suggests, one may ask that the semantics be such that inference rules preserve validity: whenever a sequent can be obtained by means of an inference rule from sequents which are valid with respect to the semantics, that sequent should also be valid. Garson shows that such a strengthening of the ties between syntax and semantics resolves the underdetermination and yields intuitionistic semantics. The cost of this solution is a rather complex grasp of logical consequence, involving not just the knowledge of valid consequences, but the understanding of validity preserving mechanisms. After all, valid logical reasoning is often from premisses that are not themselves valid.

In the present paper, we wish to make a case for strategy (b). We will show that correctness with respect to classical logical consequence, together with a few principles of semantic interpretation, suffices to fix the classical interpretation of the logical constants. Thus a potential learner does not need to grasp more than the idea of valid consequence (truth preservation, rather than the stronger notion of validity preservation as in (c)) to get to the classical meanings. Compared with solutions obtained along the lines of strategy (a), nothing more than standard inferential practice will be needed, and the classical interpretation of quantifiers will be recovered, by the same tactics which crack the propositional case. Thus, strategy (b), which, despite Church’s recommendation, has never been put to work, would seem to provide the most satisfactory solution to Carnap’s Problem.Footnote 1

Comparison with strategy (c) as implemented in Garson’s work draws attention to the importance of the choice of the underlying semantic framework. In Garson (2013), the models with respect to which logical connectives get an interpretation are sets of valuations. By contrast, Carnap’s Problem is usually phrased in a simpler extensional framework, where models are just valuations (Carnap 1943; Hjortland 2014; Shoesmith and Smiley 1978). This matters. First, for the question to be well-defined, one needs not only to pick out a certain relation of logical consequence, but also to fix the syntax and semantics of the language in which it is formulated. For logical languages, syntax is unproblematic, but one must choose the semantic values of expressions of various categories. Second, the richer the semantic values are, the more difficult Carnap’s Problem becomes. In keeping with Carnap’s original framing of the issue, we first adopt a standard extensional setting, solving Carnap’s Problem for propositional logic in Sect. 4 and for first-order logic in Sect. 5. But maybe we made our task too easy by sticking to an extensional semantics? In Sect. 6, we ask—and answer—Carnap’s question in a different, but also standard, framework for propositional logic, namely, that of possible worlds semantics.

In the next section, we lay down the semantic principles which will guide us. Finding restrictions on possible interpretations which do the job is not difficult. As noted by Garson, it would suffice, for propositional logic, to only consider valuations in which at least one connective among \(\lnot \), \(\vee \), \(\rightarrow \), and \(\leftrightarrow \) receives its standard interpretation: the interpretation of all the others is thereby forced to be standard (Garson 2013, p. 32). But such a restriction is clearly ad hoc: why assume that the standard interpretation of one connective is known in advance? The difficulty thus lies in finding a principled restriction on possible interpretations which does the job. We shall argue that general and independently motivated semantic principles provide what we need.

3 Three Semantic Principles

Our semantic strategy is completely standard from the perspective of contemporary formal (model-theoretic) semantics, in which compositionality is a corner-stone. It can also be supported by a learnability argument. Suppose the only empirical evidence available to a learner of the meaning of the logical constants is their behaviour in inferences. Carnap’s observation seems to indicate that this is not enough. But if there are semantic principles one can assume to hold for any language, these might sufficiently constrain the range of possible interpretations.

The argument rests on the hypothesis that a competent speaker needs to know the classical meaning of logical constants. This follows from the further assumption that semantic competence encompasses mastery of (classical) truth-conditions (even if meaning is not equated with truth conditions). Logical constants carve out truth-conditional content, and the fact that principles governing their use may suffice to fix their interpretation in advance is a distinctive property of logical constants. The extension of empirical predicates such as “red” or “blue” is not fixed by the functional role of colour concepts alone; it essentially depends on the way the world is. By contrast, we do not expect the world to help us determine which truth-function interprets “and” and which interprets “or”, or which more elaborate function is the interpretation of “all” or “some”. If this is to be knowable at all, it is knowable by any speaker who masters the appropriate rules of use.

The following three principles, which may be regarded as semantic universals, will suffice:

  • Non-triviality: The language contains at least one false sentence.

  • Compositionality: The semantic value of a compound expression is determined by the semantic values of its immediate constituents and the mode of composition.

  • Topic-neutrality (needed only for the first-order case): Logical constants are permutation invariant.

Non-triviality is a very weak requirement, hardly in need of motivation. The learnability argument for compositionality is well known: If a language can express indefinitely many distinct propositions, compositionality is our currently best explanation of its learnability (see e.g. Pagin and Westerståhl 2010 for discussion). And topic-neutrality, in the precise form of invariance under permutations of the universe, is almost universally agreed to be a necessary condition for logicality.Footnote 2 It guarantees that the logical core of a language is general enough to carve out content in any conceivable situation of language use, irrespective of what objects are being talked about.

Once again, note that one must choose the semantic values of expressions belonging to a given syntactic category. Only then does compositionality make a definite contribution. In the next two sections, we take for granted a standard extensional framework for propositional and first-order logic, and in the last section we look at propositional logic in an intensional framework where the semantic values of sentences are sets of worlds.

Thus, the learnability argument presupposes that our hypothetical language learner already knows, or guesses, what kind of language is to be learnt: what the syntactic categories are, and what kinds of things expressions of these categories stand for. We are not claiming that this framework may itself be derived from a learnability argument, nor that it should be. It could well be adopted just on the basis of its simplicity: if the learner succeeds in making sense of the data available to her using extensional semantic values, she has no incentive to consider richer semantic values. Of course, as linguistic data flows in, she may have to adjust and go for richer values.

Now suppose I know, or assume, that a given expression is of a certain category, though I don’t know what it means. Then its interpretation must belong to a certain class of semantic objects, but this class may well be infinite, even uncountable. Can I find out exactly which interpretation it has, merely by studying the given relation of logical consequence? This is (our version of) Carnap’s question.Footnote 3

4 Non-normal Interpretations in Propositional Logic

We start with classical propositional logic, where non-triviality and compositionality suffice to solve Carnap’s Problem. Let PL be a standard language for propositional logic, usually with connectives \(\lnot ,\wedge ,\vee ,\rightarrow \), but others may be added. A valuation is a bivalent assignment of truth values 1 or 0 to all sentences.Footnote 4 A consequence relation is a relation \(\vdash \) between sets of sentences and sentences. A valuation v is consistent with a consequence relation \(\vdash \) if and only if, whenever \(\Gamma \vdash \varphi \), if \(v(\psi ) = 1\) for all \(\psi \in \Gamma \), then \(v(\varphi ) = 1\). Carnap’s question, expressed in the current terminology, is about the valuations which are consistent with \(\vDash _{PL}\), where \(\vDash _{PL}\) is classical logical consequence in PL.

Following Carnap, call a valuation v normal if it interprets the connectives as the intended truth functions, that is, if v is \(\#\)-boolean for each connective \(\#\), where

  • v is \(\lnot \)-boolean if for every \(\varphi \), \(v(\lnot \varphi ) = 1\) iff \(v(\varphi ) = 0\)

  • v is \(\wedge \)-boolean if for every \(\varphi \) and \(\psi \), \(v(\varphi \wedge \psi ) = 1\) iff \(v(\varphi ) = v(\psi ) = 1\)

  • etc.

Carnap showed that there are exactly two kinds of non-normal valuations consistent with \(\vDash _{PL}\). The first consists just of the valuation \(v_{\mathrm{T}}\) which gives every sentence the value 1. It is trivially consistent with \(\vDash _{PL}\), but since \(v_{\mathrm{T}}(p) = v_{\mathrm{T}}(\lnot p) = 1\), it is not \(\lnot \)-boolean (the other connectives can be interpreted normally). The second kind has at least one false sentence and fails to be boolean for at least one binary connective. A typical example is the valuation \(v^*\) given by

$$ v^*(\varphi ) = 1\,\,{\hbox{iff}}\,\,\varphi \,\,{\hbox{is a tautology}}$$

Since logical consequences of tautologies are themselves tautologies, \(v^*\) is consistent with \(\vDash _{PL}\), but we have \(v^*(p) = v^*(\lnot p) = 0\) and \(v^*(p\vee \lnot p) = 1\), so \(v^*\) is neither \(\lnot \)-boolean nor \(\vee \)-boolean. In fact, Carnap showed that all non-normal valuations of the second kind classify each unary and binary connective as boolean or not in the same way. And all valuations consistent with \(\vDash _{PL}\) are \(\wedge \)-boolean, which points to an asymmetry between \(\vee \) and \(\wedge \) that Carnap found alarming.
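Both kinds are easy to exhibit concretely. The following minimal sketch (ours, not Carnap’s, with formulas encoded as nested tuples) constructs \(v_{\mathrm{T}}\) and \(v^*\) and checks, on a small sample of classically valid sequents, the behaviour just described:

```python
# A minimal sketch (ours) of Carnap's two kinds of non-normal valuations.
from itertools import product

def classical_value(phi, row):
    """Classical truth value of phi under an atom assignment row."""
    if isinstance(phi, str):
        return row[phi]
    op, *args = phi
    vals = [classical_value(x, row) for x in args]
    return {'neg': lambda x: 1 - x,
            'and': lambda x, y: x & y,
            'or': lambda x, y: x | y}[op](*vals)

def atoms(phi):
    return {phi} if isinstance(phi, str) else set().union(*(atoms(x) for x in phi[1:]))

def tautology(phi):
    ats = sorted(atoms(phi))
    return all(classical_value(phi, dict(zip(ats, r)))
               for r in product((0, 1), repeat=len(ats)))

v_T = lambda phi: 1                              # first kind: everything true
v_star = lambda phi: int(tautology(phi))         # second kind: truth = tautologyhood

p, not_p = 'p', ('neg', 'p')
assert v_star(p) == v_star(not_p) == 0           # v* is not neg-boolean ...
assert v_star(('or', p, not_p)) == 1             # ... nor or-boolean

# v* respects a sample of classically valid sequents (premises, conclusion):
for gamma, phi in [([('and', p, 'q')], p),       # conjunction elimination
                   ([p], ('or', p, 'q')),        # disjunction introduction
                   ([p, not_p], 'q'),            # ex falso quodlibet
                   ([], ('or', p, not_p))]:      # excluded middle
    assert not (all(v_star(g) for g in gamma) and not v_star(phi))
```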

With a consequence relation allowing multiple conclusions, the \(\vee \)-elimination rule

$$ \varphi \vee \psi \vdash \varphi ,\psi $$

would restore symmetry, and in fact such rules eliminate non-normal valuations. Our point, however, is semantic: non-normal valuations of the second kind are not compositional. To repeat, the compositionality principle says that the semantic value of a compound expression is determined by the semantic values of its immediate constituents and the mode of composition. In formal languages, where the syntactic rules are clear and we don’t have to worry about ambiguous expressions, this means that an assignment \(\mu \) of semantic values to expressions is compositional iff the following holds:

(PC): For every n-ary syntactic rule \(\#\) there is a semantic composition function \(F_\#\) such that for every well-formed expression \(\#(e_1,\ldots ,e_n)\) we have \(\mu (\#(e_1,\ldots ,e_n)) = F_\#(\mu (e_1),\ldots ,\mu (e_n))\).

For the language PL, where \(\mu \) is a valuation and the semantic values of sentences are truth values, (PC) says that for every n-ary connective \(\#\) there is a function \(F_\#\) such that for all sentences \(\varphi _1,\ldots ,\varphi _n\),

$$ v(\#(\varphi _1,\ldots ,\varphi _n)) = F_\#(v(\varphi _1),\ldots ,v(\varphi _n))\quad (\#{\hbox{-compositionality}}) $$

Thus, it follows from compositionality that, in this case, \(F_\#\) is an n-ary truth function. When v is compositional we can moreover take it to interpret \(\#\) as \(F_\#\), and thus write

$$ v(\#(\varphi _1,\ldots ,\varphi _n)) = v(\#)(v(\varphi _1),\ldots ,v(\varphi _n)) $$
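In computational terms, \(\#\)-compositionality is nothing more than recursive evaluation. The following sketch (ours, with formulas again encoded as nested tuples) is a direct transcription of the displayed equation:

```python
# A direct transcription (ours) of #-compositionality for PL: a valuation
# is fixed by its atom values plus one truth function v(#) per connective.
def evaluate(phi, atom_values, connective_functions):
    """Compute v(phi) recursively, as in the displayed equation."""
    if isinstance(phi, str):                     # atomic sentence
        return atom_values[phi]
    op, *args = phi                              # connective + immediate parts
    return connective_functions[op](*(evaluate(x, atom_values, connective_functions)
                                      for x in args))

booleans = {'neg': lambda x: 1 - x,
            'and': lambda x, y: x & y,
            'or': lambda x, y: x | y}
assert evaluate(('or', 'p', ('neg', 'p')), {'p': 0}, booleans) == 1
```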

The valuation \(v^*\) above is not \(\lnot \)-compositional: \(v^*(p)\) and \(v^*(p\wedge \lnot p)\) have the same value (0), but \(v^*(\lnot p)\) and \(v^*(\lnot (p\wedge \lnot p))\) have different values. Now, restricting attention to compositional valuations also restores the symmetry between \(\vee \) and \(\wedge \). Here is the general situation, for an n-ary connective \(\#\):

  (1) A compositional and \(\vDash _{PL}\)-consistent valuation v is \(\#\)-boolean if and only if \(v(\#)(1,\ldots ,1) = \#(1,\ldots ,1)\), where \(\#\) on the right-hand side denotes the intended boolean truth function.

So \(\wedge \), \(\vee \), and \(\rightarrow \) get their normal interpretations under any such valuation (for these connectives, tautologies such as \((p\vee \lnot p)\wedge (q\vee \lnot q)\) force \(v(\#)(1,\ldots ,1)=1\)), but not \(\lnot \) or, say, the Sheffer stroke. In fact, the usual introduction and elimination rules for the first three connectives fix their normal meaning. For example, consider disjunction. We saw that there are valuations v consistent with \(\vDash _{PL}\) and sentences \(\varphi ,\psi \) such that \(v(\varphi ) = v(\psi ) = 0\) but \(v(\varphi \vee \psi ) = 1\). But this cannot be if v is compositional, for then

$$ v(\varphi \vee \psi ) = v(\vee )(v(\varphi ),v(\psi )) = v(\vee )(v(\varphi ),v(\varphi )) = v(\varphi \vee \varphi ) = 0 $$

since \(\varphi \vee \varphi\,\vDash _{PL}\,\varphi \), which is in fact an instance of the usual \(\vee \)-elimination rule.

The problem with negation (or the Sheffer stroke) comes from the trivial valuation \(v_{\mathrm{T}}\), which is compositional. However, \(v_{\mathrm{T}}\) is the only problem:

  (2) All compositional and \(\vDash _{PL}\)-consistent valuations are normal except \(v_{\mathrm{T}}\). In other words, the classical laws of propositional logic, together with the semantic universals of non-triviality and compositionality, eliminate non-normal interpretations of propositional connectives.
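Claims (1) and (2) can be illustrated by brute force. The following sketch (ours, and only an approximation, since genuine \(\vDash _{PL}\)-consistency concerns infinitely many sequents) enumerates all candidate truth functions for \(\lnot \), \(\wedge \), and \(\vee \), together with all value assignments to two atoms, tests each resulting compositional valuation against every classically valid sequent with at most two premises over a finite pool of formulas, and classifies the survivors:

```python
# A brute-force sketch (ours, purely illustrative) of claims (1)-(2).
from itertools import combinations, product

def ev(phi, a, fn, fa, fo):
    """Compositional evaluation from atom values a and tables fn, fa, fo."""
    if isinstance(phi, str):
        return a[phi]
    op, *args = phi
    vals = [ev(x, a, fn, fa, fo) for x in args]
    return fn[vals[0]] if op == 'neg' else (fa if op == 'and' else fo)[vals[0], vals[1]]

NEG = {0: 1, 1: 0}
AND = {(x, y): x & y for x in (0, 1) for y in (0, 1)}
OR = {(x, y): x | y for x in (0, 1) for y in (0, 1)}

lits = ['p', 'q', ('neg', 'p'), ('neg', 'q')]
pool = lits + [('neg', x) for x in lits[2:]] \
     + [(c, f, g) for c in ('and', 'or') for f in lits for g in lits]

rows = [{'p': x, 'q': y} for x in (0, 1) for y in (0, 1)]
cv = {phi: [ev(phi, r, NEG, AND, OR) for r in rows] for phi in pool}

# All classically valid sequents with at most two premises from the pool.
gammas = [()] + [(f,) for f in pool] + list(combinations(pool, 2))
valid = [(g, c) for g in gammas for c in pool
         if all(any(cv[prem][i] == 0 for prem in g) or cv[c][i] == 1
                for i in range(len(rows)))]

unaries = [{0: x, 1: y} for x in (0, 1) for y in (0, 1)]
tables = [dict(zip(sorted(AND), t)) for t in product((0, 1), repeat=4)]

counts = {'boolean': 0, 'trivial (v_T)': 0, 'other': 0}
for fn, fa, fo, a in product(unaries, tables, tables, rows):
    val = {phi: ev(phi, a, fn, fa, fo) for phi in pool}
    if all(any(val[prem] == 0 for prem in g) or val[c] == 1 for g, c in valid):
        if (fn, fa, fo) == (NEG, AND, OR):
            counts['boolean'] += 1
        elif all(x == 1 for x in val.values()):
            counts['trivial (v_T)'] += 1
        else:
            counts['other'] += 1
print(counts)  # expected: {'boolean': 4, 'trivial (v_T)': 128, 'other': 0}
```

On this sample the only survivors are the boolean valuations (one per atom assignment) and valuations that coincide with \(v_{\mathrm{T}}\), in agreement with (1) and (2).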

It is quite remarkable that proofs of essentially both (1) and (2) can be found, if one looks carefully, already in Carnap (1943).Footnote 5 Compositionality in the setting of classical propositional logic amounts to truth-functionality, or extensionality as Carnap called it. He duly noted which interpretations are extensional and which are not, but he didn’t assign any special role to this property. Church doesn’t mention the property in his review. Carnap’s idea of semantics in 1943 was very much inspired by Tarski, but one should bear in mind that the modern notion of model-theoretic semantics is of a significantly later date.

5 Non-normal Interpretations in First-Order Logic

Let us now see whether our semantic strategy also cracks the first-order case. To begin, what kind of non-normal interpretations are we to consider for first-order quantifiers? The situation is parallel to the propositional case, where non-standard compositional interpretations for connectives consist in alternative truth-functions. Guided by the syntactic category of \(\forall \) and \(\exists \), and more generally by the standard interpretation of noun phrases in formal semantics, we take symbols of this category to denote unary generalized quantifiers, that is, sets of subsets of the domain. The standard interpretation for the existential quantifier is the set of all non-empty subsets; for the universal quantifier, it is the singleton of the domain. Accordingly, a non-normal interpretation for the existential or universal quantifier is any set of subsets different from these.

More precisely, consider a first-order language L interpreted over a domain M and the corresponding classical relation of logical consequence \(\vDash _L\) in first-order logic. Since we assume compositionality, our interpretations amount to giving syntactically adequate semantic values to the logical and non-logical vocabulary. Since we furthermore assume non-triviality, we need not worry about the interpretation of connectives, which has to be standard, by (2). Hence, our interpretations can be taken to be pairs of the form \({\mathcal {M}},Q\) where \({\mathcal {M}}\) is a standard L-structure based on M interpreting the non-logical vocabulary of L, and Q is a set of subsets of M, interpreting \(\forall \) (we take \(\exists \) to be defined as \(\lnot \forall \lnot \)). Given \({\mathcal {M}}, Q\), every sentence of L receives a truth-value by means of a recursive definition of satisfaction. Clauses for atomic formulas and connectives are the standard ones; the clause for \(\forall \) now reads:

$$ {\mathcal {M}},Q\,\vDash\,\forall x \varphi \,\sigma \,\,{\hbox{if and only if}}\,\,\{a \in M | \,{\mathcal {M}}, Q\,\vDash\,\varphi \,\sigma [x:=a] \} \in Q$$

where \(\sigma \) is an assignment over M and \(\sigma [x:=a]\) is the assignment which is just like \(\sigma \) except that it assigns a to x. In keeping with the propositional case, we say that a pair \({\mathcal {M}},Q\) is a normal interpretation if and only if \(Q=\{M\}\). When \({\mathcal {M}}, Q\) is normal, the previous satisfaction clause reduces to the more familiar

$$ {\mathcal {M}},Q\,\vDash\,\forall x \varphi \,\sigma \,\,{\hbox{if and only if for all}}\,a \in M, {\mathcal {M}},Q\,\vDash\,\varphi \,\sigma [x:=a] $$

Under our current working hypotheses, Carnap’s question is whether all pairs \({\mathcal {M}},Q\) consistent with \(\vDash _L\) are normal. As we shall shortly see, the answer is negative: non-triviality and compositionality do not suffice to eliminate non-normal interpretations. The problem is thus indeed harder for quantifiers than it was for connectives. But, again, an independently motivated universal semantic constraint, in this case topic-neutrality, makes it possible to zero in on normal interpretations, so that Carnap’s Problem is solved after all.

We shall now characterize the interpretations of \(\forall \) which are consistent with \(\vDash _L\), first in general and then when topic-neutrality is assumed. For the sake of simplicity, we will assume that our language contains predicate variables, and not just predicate symbols—without this simplifying assumption, similar results still hold but one must restrict attention to definable sets.Footnote 6 Given an L-structure \({\mathcal {M}}\), we say that a principal filter Q generated by a set A is closed under the interpretation of terms in \({\mathcal {M}}\) iff, for every term t with n free variables and every sequence \(a_1,\ldots ,a_n\) of elements of A, \(||t||^{\mathcal {M}}(a_1,\ldots ,a_n) \in A\), where \(||t||^{\mathcal {M}}: M^n \rightarrow M\) is the function interpreting t in \({\mathcal {M}}\). As a particular case, when t is a term with no free variables, the condition is meant to require that \(||t||^{\mathcal {M}}\in A\). We then get the following characterization of possible interpretations for \(\forall \) (the proof is given in the “Appendix”):

  (3) An interpretation \({\mathcal {M}}, Q\) is consistent with \(\vDash _L\) if and only if Q is a principal filter closed under the interpretation of terms in \({\mathcal {M}}\).

As it should be, the standard interpretation \(\{ M \}\) for \(\forall \) is among the consistent ones, but, in general, there are many principal filters which are different from the trivial filter \(\{ M \}\), so there will be many non-normal interpretations for \(\forall \). In view of (3), how wild are these non-normal interpretations? When Q is a principal filter, there is a subset A of M such that a set \(B \subseteq M\) is in Q if and only if A is included in B. The satisfaction clause for \(\forall \) then simplifies to

$$ {\mathcal {M}}, Q\,\vDash\,\forall x \varphi \,\sigma \,\,{\hbox{if and only if for all}}\,a \in A, {\mathcal {M}},Q\,\vDash\,\varphi \,\sigma [x:=a] $$

Thus, the quantifiers inhabiting the jungle of non-normal interpretations are still quite well-tamed. “All” means “all A” for some non-empty set of objects A included in the full domain M. Dually, “some” means “some A” for the same set A. The rules of logic do not determine which objects the quantifiers range over, except that objects with a name are in their range, which is moreover closed under the functions named in the language. This is exactly as far as non-normality goes. Objects in the set A generating the filter are the real objects, for which existential import is valid: \( \varphi (x) \,\vDash\,\exists x\,\varphi (x)\) is satisfied by any assignment \(\sigma [x:=a]\) with \(a \in A\), for all formulas \(\varphi (x)\). The objects outside A are dummy objects which happen to be in the domain but do not have existential import in the sense just stated. Indeed, the satisfaction clause we get for \(\forall \) is nothing but the clause used in some semantics for free logic: A is the so-called inner domain of real objects and its complement in M is the outer domain of non-existing things (Bencivenga 1986). Since our interpretations are consistent with the rules of classical logic, all objects which can be named are to be interpreted in the inner domain, which is guaranteed by the fact that A is closed under the interpretations of terms.
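The simplified clause is easy to experiment with. Here is a small computational illustration (ours, not from the paper): a three-element domain with inner domain \(A = \{0,1\}\), where the sample predicate extension P is a hypothetical choice. It confirms that “all” means “all A” and “some” means “some A”:

```python
# A small illustration (ours) of the filter clause on a three-element
# domain, with inner domain A = {0, 1} generating the principal filter Q.
from itertools import combinations

M = [0, 1, 2]
A = {0, 1}

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = [B for B in subsets(M) if A <= B]            # principal filter generated by A

def forall(pred):                                # the clause: extension of phi in Q?
    return {a for a in M if pred(a)} in Q

def exists(pred):                                # derived as not-forall-not
    return not forall(lambda a: not pred(a))

P = {0, 1}                                       # a hypothetical predicate extension
assert forall(lambda a: a in P)                  # true: P covers all of A,
assert 2 not in P                                # although 2 in M is a counterexample

for ext in subsets(M):                           # "all" = "all A", "some" = "some A"
    assert forall(lambda a, e=ext: a in e) == (A <= ext)
    assert exists(lambda a, e=ext: a in e) == bool(A & ext)
print("forall/exists range over the inner domain A only")
```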

By (3), whenever there is at least one constant symbol in the language, there is a weakest interpretation of \(\forall \) consistent with \(\vDash _L\): the principal filter generated by the smallest admissible set, namely the closure of the set of objects interpreting constant symbols under the interpretations of function symbols. Identifying terms with the objects they denote, this amounts to a substitutional interpretation of the quantifiers. Thus, (3) says that the substitutional interpretation has a specific position among all possible interpretations of \(\forall \): it is the weakest interpretation consistent with the rules for \(\forall \) (weakest in the sense that the smaller the set generating the principal filter, the easier it is to satisfy \(\forall x\varphi \)).
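As a tiny illustration (ours, with a hypothetical constant symbol c and function symbol f): on a four-element domain, the generator of this weakest interpretation is computed by closing the named elements under the named functions:

```python
# A tiny illustration (ours): the generator of the weakest consistent
# interpretation is the closure of the named elements under named functions.
M = [0, 1, 2, 3]
constants = {'c': 0}                             # hypothetical constant symbol
functions = {'f': lambda x: min(x + 1, 2)}       # hypothetical function symbol

A = set(constants.values())                      # start from the named elements
while True:                                      # close A under the functions
    new = {f(a) for f in functions.values() for a in A} - A
    if not new:
        break
    A |= new

print(sorted(A))  # [0, 1, 2]: quantifying over A is the substitutional
                  # reading; 3 is a "dummy" object no term reaches
```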

In non-normal interpretations, quantifiers make a difference between two kinds of objects, depending on whether they belong to the set generating the filter. This difference disappears only in the limiting case of normal interpretations, where this set is the entire domain and no object is left aside. Accordingly, the supplementary assumption that quantifiers treat all objects on a par, formally rendered as invariance under permutation, forces the interpretation of quantifiers to be normal:

  (4) A principal filter Q on M is invariant under permutation if and only if \(Q=\{M\}\).
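For the record, the argument behind (4) is short. Suppose Q is the principal filter generated by A and \(\pi (Q) = Q\) for every permutation \(\pi \) of M. If \(A \ne M\), pick \(a \in A\) and \(b \in M \setminus A\), and let \(\pi \) swap a and b. Then \(\pi (A) \in Q\), that is, \(A \subseteq \pi (A)\); but \(a \notin \pi (A)\), a contradiction. Hence \(A = M\) and \(Q = \{M\}\). Conversely, \(\{M\}\) is invariant, since \(\pi (M) = M\) for every permutation \(\pi \).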

Note that this also forces the equality symbol to be interpreted by real identity. The interpretation of the universal quantifier and the connectives being standard, axioms for identity guarantee that the equality symbol is interpreted by a congruence relation. Since the language contains predicate variables, this congruence needs to be the finest.

Permutation invariance for logical constants is thus our last semantic universal, labelled topic-neutrality. Together with non-triviality and compositionality, it ensures that connectives and quantifiers receive their normal interpretations in all interpretations consistent with the standard relation of logical consequence. Remarkably, permutation invariance, which is the traditional hallmark of quantifiers qua logical constants, was shown along the way not to follow from the quantifier rules. As far as rules or logical consequence are concerned, quantifiers could well fail to be invariant under permutations; invariance is a supplementary semantic feature which cannot be guessed on the basis of inferential practice.

6 Non-normal Interpretations in Intensional Propositional Logic

Carnap’s question about the determination of the meaning of the logical symbols can be asked for any logic. We end by showing that in an intensional context, where sentences denote sets of possible worlds, the usual propositional connectives are still determined.

Let an intensional language be one built from propositional letters and the connectives \(\lnot ,\wedge ,\vee ,\rightarrow \), plus possibly intensional propositional operators such as \(\Box \). Now \(\lnot \) and \(\Box \) plausibly have the same syntactic category, so they should receive the same kind of semantic values. Clearly, truth values no longer suffice. So let a set W of ‘possible worlds’ or ‘states’ be given. We take, as in standard possible world semantics, the semantic values of sentences to be subsets of W. Compositionality (principle (PC) in Sect. 4) then dictates that the operators must be interpreted as operations on \({\mathcal {P}}(W)\) (of the appropriate arity). Thus, an interpretation I assigns such an operation \(I(\#)\) to each operator \(\#\).Footnote 7 For simplicity, we now treat propositional letters not as symbols to be interpreted, but as variables to be assigned values. So an assignment f is a function from propositional letters to \({\mathcal {P}}(W)\). This has the advantage that there is no trivially true interpretation (i.e. one under which every sentence is true), so we actually don’t need the non-triviality assumption any more.Footnote 8 Let \(\llbracket \varphi \rrbracket ^I_f\) be the value of \(\varphi \) under interpretation I and assignment f. The truth definition, relative to I, becomes:

  • \(\llbracket p\rrbracket ^I_f = f(p)\)

  • \(\llbracket \lnot \varphi \rrbracket ^I_f = I(\lnot )(\llbracket \varphi \rrbracket ^I_f)\)

  • \(\llbracket \varphi \wedge \psi \rrbracket ^I_f = I(\wedge )(\llbracket \varphi \rrbracket ^I_f,\llbracket \psi \rrbracket ^I_f)\)

and similarly for all other operator symbols in the language.

Continuing to use Carnap’s terminology, the normal interpretation, \(I_{\mathrm{n}}\), interprets \(\lnot \) as complement, \(\wedge \) as intersection, etc.Footnote 9 But if W is infinite, there are in principle uncountably many possible interpretations of the connectives, and Carnap’s question in the current setting is whether the laws of classical propositional logic single out \(I_{\mathrm{n}}\) as the only one.

In the intensional setting, an interpretation I is consistent with a consequence relation \(\vdash \) if \(\Gamma \vdash \varphi \) implies that for all assignments f,

$$ {\bigcap }_{\psi \in \Gamma }\llbracket \psi \rrbracket ^{I}_f \,\subseteq \,\llbracket \varphi \rrbracket ^{I}_f $$

(with the understanding that \({\bigcap }_{\psi \in \emptyset }\llbracket \psi \rrbracket ^{I}_f = W\)). Using fairly standard terminology, let us say that an intensional logic is a consequence relation, in some intensional language as above, which contains all tautological consequences. Then we can prove the following:

  (5) If I is an interpretation consistent with an intensional logic, then I is normal on the connectives \(\lnot ,\wedge ,\vee ,\rightarrow \).
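The mechanics behind (5) can be seen in miniature. In the following brute-force sketch (ours), over a two-world set W, we fix \(I(\vee )\) as union for compactness (an interpretation which (5) independently forces) and check that just two classical laws, ex falso and excluded middle, already force \(I(\lnot )\) to be complement; note that ex falso needs no connective on the premise side, since premises combine by intersection in the definition of consistency:

```python
# A brute-force sketch (ours) over W = {0,1}: with I(or) = union fixed,
# two classical laws force I(not) to be set complement on P(W).
from itertools import product

W = frozenset({0, 1})
PW = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]  # the powerset of W

survivors = []
for images in product(PW, repeat=len(PW)):       # all 256 unary operations on P(W)
    I_not = dict(zip(PW, images))
    if all(I_not[X] & X == frozenset()           # p, not-p |= q  (ex falso)
           and I_not[X] | X == W                 # |= p or not-p  (excluded middle)
           for X in PW):
        survivors.append(I_not)

assert survivors == [{X: W - X for X in PW}]     # only complement survives
print("I(not) must be complement on P(W)")
```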

Note that, modulo the assumption about the propositional variables, the earlier result (2) about \(\vDash _{PL}\) is a special case of (5), namely, when W is a unit set. The compositionality of I is the only assumption required for (5). We give a proof in the “Appendix”. Interestingly, this result requires more of \(\vDash _{PL}\) than the proof of (2): it is easy to see that in the truth-functional case, already the intuitionistic part of \(\vDash _{PL}\) is enough to fix the classical (!) meaning of the propositional connectives, whereas our proof of (5) requires double negation elimination and other non-intuitionistically valid laws of propositional logic.

In classical possible worlds semantics, all assignments to propositional variables are allowed. This is essential for the proof of (5). To see this, consider the so-called possibility semantics of Humberstone (1981); Holliday (2015) gives a comprehensive modern treatment. In the language with \(\lnot \), \(\wedge \), and \(\rightarrow \) as primitives, but \(\varphi \vee \psi \) defined as \(\lnot (\lnot \varphi \wedge \lnot \psi )\), and with the standard truth definition clause for \(\varphi \wedge \psi \), but the clauses from Kripke semantics for intuitionistic logic for \(\lnot \varphi \) and \(\varphi \rightarrow \psi \), logical consequence turns out to be exactly \(\vDash _{PL}\).Footnote 10 It is instructive to see why this is not a counter-example to (5). The reason is that possibility semantics, just as ordinary Kripke semantics for intuitionistic logic, places constraints on the allowed assignments, i.e. on the allowed models. Every assignment f must be persistent: if \(w \in f(p)\) and \(wRw^{\prime }\), then \(w^{\prime }\in f(p)\); possibility semantics adds a further constraint, called refinability. Clearly, imposing such constraints can in principle make room for more interpretations of the connectives being consistent with a given consequence relation. Classical possible worlds semantics, on the other hand, has no such constraints.

7 Conclusion

Our take on Carnap’s Problem is that it is made artificially difficult by considering all possible interpretations, no matter how bizarre. As speakers, we know that our language is going to be compositional, that it will have some true and some false sentences, and that its logical constituents will be topic-neutral. Therefore attention may be restricted to interpretations which satisfy these principles. Following Church’s advice, this amounts to explicitly factoring out the role of semantic principles and the role of inference rules in fixing the interpretation of logical constants, rather than covertly using semantic notions to make sense of extended inference rules. This strategy proves successful both for propositional connectives and for quantifiers. In the case of classical propositional logic, the mere change of perspective to that of compositional formal semantics shows that the technical solution is in fact given already in Carnap (1943). Moreover, in a possible worlds setting, where the connectives are interpreted as operations on sets of worlds, it turns out that compositionality still suffices for the solution. Interestingly, it does not quite suffice to get the standard interpretation of quantifiers from first-order logical consequence. With compositionality as the only semantic assumption, quantifier rules essentially pick up the semantics for free logic. Still, it is rather surprising that nothing more than compositionality is required for the laws of classical logical consequence to fix the interpretation of the logical symbols in these ways. Classical logicians will happily acknowledge topic-neutrality as the extra assumption needed to get to the standard interpretation of the quantifiers.

In addition to the solution of Carnap’s original problem, however, we hope to have at least indicated why Carnap’s question can be a reasonable and interesting one to ask about any consequence relation in any logical language. In fact, this opens up an area of logical investigation that seems quite promising to us. The most immediate further issue, from the perspective of the present paper, is for which intensional logics other logical symbols, such as \(\Box \), are also determined, but we shall leave this for another occasion.