1 Introduction

This paper deals with the ontological-epistemological foundations of two-dimensional (2D) semantics. Its aim is to argue that the two dimensions refer to different kinds of possible worlds and to different notions of truth, and to explain how the two dimensions are related. Thus it runs counter to the common presentations of 2D semantics, which refer to just one kind of world, usually without much thought about how to conceive of possible worlds, and to just one notion of truth in a world, which is, often tacitly, conceived as correspondence. A notable exception is the work of Chalmers (2006), who also realized that things are not so simple. However, my general scheme will radically diverge from his.

My scheme is, in a nutshell, that there is a pervasive ontological-epistemological dualism at the bottom of 2D semantics. This is no surprise, in a way; 2D semantics is made for first separating and then combining the ontological and the epistemological dimension of linguistic meaning. What has not been fully realized, however, is how profoundly this dualism already affects all the basic concepts.

In order to explain this, I give a very succinct summary of 2D semantics in Sect. 2. Essentialism will be a crucial premise of my paper; it is introduced in Sect. 3 with a very brief discussion of Leibniz’ principle of the identity of indiscernibles. On this basis, Sect. 4 argues that there are in fact three kinds of possible worlds (properly understood and not as ersatz worlds). Section 5 then asks what truth in a world could mean and argues that it indeed means different things for different kinds of worlds. Section 6 gives a sketch of how the dualism I have set up is to be bridged again. Section 7 concludes with a few comparative remarks, in particular regarding Chalmers (2006).

It is clear that this paper addresses very deep and broad issues. Thus it is bound to be of a rather programmatic nature, and its arguments are far from complete and conclusive. However, the issues addressed must not be avoided. The paper will have served its purpose if the suggested scheme provokes the insight that at least some such scheme must apply and starts a discussion about which scheme that is, if not mine.

2 A summary of 2D semantics

2D semantics ultimately originates from the path-breaking work of Kripke (1972) and Putnam (1975). It may be seen to result from the observation that many simple and complex expressions are context-dependent; they take on a specific meaning, i.e., an intension, only in a given context. According to the canonical description since Carnap (1947), intensions are functions mapping possible worlds to categorically appropriate extensions (i.e., objects for names and designators, sets of n-tuples of objects for n-ary relations, and truth-values for sentences). So, this observation entails that in general the linguistic meaning of a simple or complex expression should be described as a character, i.e., a two-dimensional function from contexts (of utterance) and indices to categorically appropriate extensions. This is how Kaplan (1977), who called indices points of evaluation, has paradigmatically laid out the matter; the terminology of contexts and indices is taken from Lewis (1980). I shall refer to such a function as a 2D intension. It may also be conceived as providing an expression, for each possible context, with an intension as explained above—which has later been called a secondary intension (by Chalmers 1996, Sect. 2.4) or C-intension (by Jackson 1998), in order to distinguish it from other kinds of intensions to be introduced.

Of course, we have to say what possible contexts and indices are. The standard answer is this: Indices just are possible worlds, and contexts are centered worlds, i.e., triples 〈w, s, t〉 consisting of a world w and an object s and a time t existing in w. The center 〈s, t〉 is required for accounting for the basic indexicals “I” and “now”, which hopefully suffice for fixing all the other indexicals. And the shortest way to explain why an entire world is needed as a feature of a context is the 2D account of the attributive and the referential reading of definite descriptions. If we read “the first man on the moon” attributively, its reference is the first man on the moon in the index world, independent of the context. If we read it referentially, its reference is to be taken rigidly, i.e., as the same for each index world, namely as the first man on the moon in the context world, who is Neil Armstrong in the actual context world. Since definite descriptions may refer to anything in a world, we hence need an entire world as a feature of a context.

Sentences are true in a context and at an index. This presupposes a notion of truth in an index world. The crucial point about the entire 2D business, however, is that we also have a notion of truth in a context. A sentence is true in a context if it is true in that context and at the index identical with, or determined by, the context. This presupposes that contexts are somehow at least as rich as indices, so that the former indeed determine the latter. If contexts are centered worlds and indices are worlds, this crucial presupposition is satisfied; we can simply take the index world to be the context world itself.

This point generalizes to all kinds of referential expressions. The 2D intension of such an expression defines its diagonal intension—a term introduced by Stalnaker (1978)—which directly assigns to it a categorically appropriate extension for each context, namely its extension in the context and at the index determined by that context. For instance, the diagonal intension of the referential reading of “the first man on the moon” assigns to each context the first man on the moon in the context world. It is thus of a descriptive nature. Or, as we might say, diagonalization is derigidification. Diagonal intensions are also called primary intensions (Chalmers 1996, Sect. 2.4) or A-intensions (Jackson 1998).
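The formal skeleton of this machinery is small enough to be sketched in a few lines of code. The following Python toy model is only an illustration under invented assumptions: the world names, the context center, and the fact table of who was first on the moon are hypothetical stand-ins, not part of the paper's apparatus.

```python
# Toy model of 2D intensions and diagonalization (illustrative only).
# Worlds are named by strings; a context is a centered world (w, s, t).
# The fact table below is an invented assumption for the example.
first_on_moon = {
    "w_actual": "Armstrong",
    "w_alt": "Aldrin",   # a counterfactual world where Aldrin went first
}

# Attributive reading: reference depends only on the index world.
def attributive(context, index):
    return first_on_moon[index]

# Referential reading: rigid; reference is fixed by the context world.
def referential(context, index):
    world, speaker, time = context
    return first_on_moon[world]

# Diagonalization: evaluate each context at the index it itself determines.
def diagonal(two_d_intension):
    def diag(context):
        world, speaker, time = context
        return two_d_intension(context, world)
    return diag

c = ("w_alt", "s", "t")
# The referential reading stays with the context world at every index ...
assert referential(c, "w_actual") == "Aldrin"
assert attributive(c, "w_actual") == "Armstrong"
# ... but its diagonal is descriptive again, agreeing with the attributive
# reading on every context:
assert diagonal(referential)(c) == diagonal(attributive)(c) == "Aldrin"
```

The last assertion makes the slogan concrete: taking the diagonal of the rigid (referential) 2D intension recovers the descriptive behavior of the attributive reading, which is the sense in which diagonalization is derigidification.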

Now, the next fundamental point is that these diagonal intensions are also supposed to serve epistemological purposes; they are to represent the epistemic meaning or the cognitive significance of expressions. That is, the diagonal intension of a sentence ϕ is to represent the belief a subject would express by uttering ϕ. This claim is already foreshadowed by Kaplan (1979, pp. 84f.), who called a sentence analytic (and meant what is called a priori in the present 2D picture) if it is true in all contexts. However, it is fully endorsed only in the so-called epistemological reinterpretation of 2D semantics originating from Stalnaker (1978). We know well enough how philosophically problematic belief and its expression are. However, I think that this claim concerning 2D semantics is basically correct (cf. Haas-Spohn 1995, in particular Sects. 1.3 and 3.9); so let us simply accept it here. (In fact, I think that de Finetti’s philosophy of probability has anticipated the epistemological use of diagonalization; see Spohn 2008, p. 19.)

This claim has a further crucial consequence, namely that diagonal intensions of sentences are, or represent, belief contents. A belief content in turn is usually represented as a set of doxastic possibilities, or as its characteristic function. This means that possible contexts and doxastic possibilities are the same or correspond to each other; otherwise those epistemological purposes cannot be served. In the standard account developed by Lewis (1979) this consequence holds. There, doxastic possibilities are represented precisely by centered worlds or triples 〈w, s, t〉, in order to account for propositional belief as well as for belief de se and de nunc, and hence in the same way as contexts.
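Why worlds alone cannot play the role of doxastic possibilities can also be made concrete in a small sketch. The following Python toy model, with invented world names and inhabitants, merely illustrates Lewis's point: two doxastic possibilities sharing the same world coordinate agree on all propositional contents but may still differ on contents de se.

```python
# Toy model of belief contents as sets of centered worlds (illustrative
# only; world names and inhabitants are invented assumptions).
# A doxastic possibility is a triple (world, subject, time).

# A propositional content depends only on the world coordinate ...
def it_is_raining(possibility):
    world, subject, time = possibility
    return world == "w1"          # suppose it rains (only) in w1

# ... whereas a de se content also depends on the center.
def i_am_armstrong(possibility):
    world, subject, time = possibility
    return subject == "Armstrong"

# Two centers within the same world agree on every propositional content:
assert it_is_raining(("w1", "Armstrong", "t")) == it_is_raining(("w1", "Aldrin", "t"))
# Yet they differ on the de se content, which no set of mere worlds
# could capture; hence the need for centered worlds:
assert i_am_armstrong(("w1", "Armstrong", "t")) != i_am_armstrong(("w1", "Aldrin", "t"))
```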

Let me summarize so far: At the heart of 2D semantics lies what I call the congruence principle:

$$\begin{aligned} &\text{contexts } \left\langle w, s, t \right\rangle \;\Rightarrow\; \text{indices } w \\ &\qquad\qquad\Updownarrow \\ &\text{doxastic possibilities } \left\langle w, s, t \right\rangle \end{aligned}$$

In this way, 2D semantics promises to capture two basic modalities in one scheme, namely ontic or metaphysical and epistemic modality, and to explain their connection via diagonalization. Or in Chalmers’ (2006) grand, but apt terms: 2D semantics promises to establish the golden triangle of reason, meaning, and modality. For this reason, I henceforth prefer to speak of 2D intensions (=meaning) that incorporate epistemic or E-intensions (=primary or diagonal intensions) and ontic or O-intensions (=horizontal or secondary intensions). In my view, these matters are still most clearly and systematically laid out in Haas-Spohn (1995).

One should recall the tremendous clarificatory potential of 2D semantics. Before Kripke (1972), the entire literature on intensions (=meanings) was systematically ambiguous between C- or O-intensions and A- or E-intensions, though the latter reading often fits better. Kripke determinately interpreted intensions as C-intensions (“names are rigid designators”). This was unfair to the previous literature, but crucial, since it initiated the way to disambiguation, which was then completed by the conception of diagonal or A-intensions.

I should add, though, that I find this standard account still unsatisfactory. I think that all three items, indices, contexts, and doxastic possibilities, must be amended by a possibly infinite sequence of objects: the indices by a variable assignment (as Tarski has shown us), the contexts by a sequence of demonstrata or demonstrated objects (as Montague 1970 has insinuated and contrary to Kaplan 1977 who proposed to enrich the context by demonstrations), and the doxastic possibilities by sequences of intentional objects. I have argued for this view in Spohn (1997a; 2008, Ch. 16); in particular I have given there an argument for the need of intentional objects that is of a purely epistemological and not of a linguistically admixed nature, in a manner similar to the arguments of Perry (1980) and Kamp (1981). However, in the sequel I shall ignore this complication.

We could leave it at that. For all practical purposes we can perfectly develop 2D semantics on this non-committal basis. However, as philosophers we must not be silent about how to conceive of the entities referred to in the congruence principle and of possible worlds in particular, and we must say more about how to conceive of truth in those entities. This will be the topic of the paper.

In fact, I will argue later on that there are different kinds of worlds. It is evident that this may have dramatic consequences for the congruence principle and for 2D semantics in general. Thereby, it becomes an open question whether the w’s in the congruence principle always stand for the same kind of world. In order to leave that open, we should depict the congruence principle more cautiously as:

$$\begin{aligned} &\text{contexts } \left\langle w, s, t \right\rangle \;\Rightarrow\; \text{indices } w^{o} \\ &\qquad\qquad\Updownarrow \\ &\text{doxastic possibilities } \left\langle w^{e}, s^{e}, t^{e} \right\rangle \end{aligned}$$

where $w^e$ stands for epistemologically possible worlds, or epistemic worlds for short, which are clearly required in the doxastic possibilities, and $w^o$ stands for metaphysically possible worlds, or ontic worlds for short, which are clearly required in the indices. The question is not whether the epistemic worlds form only a proper subset of the ontic worlds; this would be widely agreed. The question rather is whether epistemic and ontic worlds are entirely different entities. If the answer is yes, as I will argue in Sect. 4, that is, if $w^e \neq w^o$, the arrows in the congruence principle become problematic. We may understand contexts epistemically; but how, then, do they determine indices? Or we may understand contexts ontically, so that this determination is obvious; but how, then, do they correspond to doxastic possibilities? My picture leaves it open how we should conceive of contexts. Either way, though, we have to do something to fix 2D semantics.

Moreover, if $w^e \neq w^o$, i.e., if “possible world” becomes ambiguous, truth in a possible world becomes ambiguous as well. This forces us to take a stance towards this possible ambiguity of truth. I will do so in Sect. 5. Section 6, then, will try to bridge the gaps we have found. These issues set our agenda, which will help to eventually elucidate the philosophical foundations of 2D semantics.

3 Leibniz’ principle

Before continuing with my topic proper, I have to introduce my essentialistic premises, which are basic for my further considerations. We will have to talk about worlds and other objects, and then the first question is: how can we do so? What is an object? Or, what is perhaps the same, what makes for the identity of an object? Here, Leibniz’ principle provides an answer. The one direction, the indiscernibility of identicals (for which the label “Leibniz’ law” is often reserved), is clearly to be accepted; intensional constructions that may be true of an object under one description, but not under another, do not state properties of that object. Its other direction, the identity of indiscernibles (which I also call Leibniz’ principle and take to be its interesting part), has provoked many discussions; depending on which properties we allow for discernment, it seems trivially true or trivially false. For instance, if we allow identity with something to be among those properties, it is trivially true, and if we allow only intrinsic qualitative properties, it seems trivially false.

I think that there must be a substantial true version of Leibniz’ principle. The basic reason is that the assumption of primitive haecceities is entirely obscure to me. I fully accept the critical conclusions of Adams (1979) and take myself to be a metaphysical anti-haecceitist in the sense of Fine (2005, pp. 30f.), though without the actualist inclinations of Adams and Fine. For, either haecceities are unintelligible non-properties, as it were, that are nevertheless capable of individuating objects, or they are identity properties, rendering the insights promised by Leibniz’ principle outright circular (see also Fine 2005, pp. 180f.).

Hence, this assumption can only be avoided by some version of the identity of indiscernibles, which must not refer to identity properties. It may thus refer only to proper properties or relations, as one might call them, which are explained without overt or hidden reference to identity. However, indiscernibility cannot be restricted to intrinsic or qualitative properties, since we can easily imagine, and indeed find, many qualitatively indiscernible objects. This is generally agreed. So, relations are crucial for individuation. Indeed, the most common method of discernment is via spatiotemporal relations; objects occupying different (or not entirely the same) spatiotemporal locations must obviously be numerically different.

However, spatiotemporal relations are mostly contingent, and contingent relations in general are not good enough for distinguishing actual or possible objects. That is, they are good enough for most practical purposes, but not in principle. The general point is that contingent discernment in this world is sufficient for non-identity (this is why we can rely on it for all usual purposes), but it is not necessary. In terms of possible worlds, non-identity may show up only in other possible worlds. However, discernment in other possible worlds presupposes so-called trans-world identity, which cannot rely on contingent (relational) properties, but needs to be based on essential properties. This is how we are driven to an essentialistic version of Leibniz’ principle.

Let me give a familiar kind of example (a modal version of Tibbles, the cat, and Tib, the same (?) cat without a tail): There is me and two-legged me. Two-legged me has essentially two legs and ceases to exist as soon as I lose a leg, whereas I cease to exist only at my death (unless we believe in reincarnation and other fancy things). So, clearly, I and two-legged me are two different objects with possibly diverging temporal extensions. Hopefully, I will never lose a leg and will die with two legs. Then, I and two-legged me actually occupy exactly the same spatiotemporal region and have exactly the same contingent properties and relations. Two-legged me is a philosophical invention and not an ordinary object, but a perfect object nonetheless. However, it can be discerned from me only via its essential properties, which differ from mine, but make only a potential and not an actual difference.

I conclude that Leibniz’ principle must be stated only in terms of an object’s essential proper properties, including its relational ones. That is, the substantial version sought after is this: if x and y are (numerically) different, they differ in their essence, i.e., in at least one of their essential, possibly relational, genuine (non-modal) properties; and this holds for all possible objects x and y. In short: each object is individuated by its essence (the requirement that the essence consists of non-modal properties is to exclude properties like necessarily being F from the essence; I agree with Fine (1994) that we first have to introduce essences and can understand modal properties only on their basis). Forbes (1985) has powerfully argued for this conclusion; see also Mackie (2006, ch. 2). We need not, however, engage in the sophistications of Fine (1994) and distinguish between constitutive and consequential essence.

This version at least fits paradigmatic examples. For instance, I am essentially human and essentially procreated from this egg of my mother and this sperm of my father, indeed uniquely procreated; I essentially have no monozygotic twin. And since my parents are individuated in a similar way, my entire genealogy belongs to my essence. This relational essence distinguishes me from all other possible objects. Natural numbers are another example. They are essentially related to all the other natural numbers as fixed in second order arithmetic, the structure of natural numbers, in which each number plays its distinguished relational role. This is how structuralism in mathematics conceives of natural numbers (cf., e.g., Shapiro 1997).

So, I take this version of Leibniz’ principle and the essentialism entailed by it to be at least plausible and defensible. Of course, I am aware that I am thereby stirring up a hornets’ nest. I have given a somewhat more extensive defense in Spohn (2007). Leitgeb and Ladyman (2008) have criticized it at least with respect to mathematical objects. Sophisticated discussions are going on concerning the identity of elementary particles as described in quantum theory, in which the essentialistic Leibniz’ principle ramifies in surprising ways; cf., e.g., Linnebo and Muller (2013). So, maybe my blunt version of Leibniz’ principle requires sophistication. Mackie (2006) strengthens the present version, only in order to emphasize its centrality—and to demolish it afterwards. Clearly, it has its costs, but the alternatives are not cheap, either. In fact, I take primitive haecceities to be a high cost, and I cannot see how the various criticisms can avoid this cost. However, this is not the place to try to reach a balanced judgment. Let me rather accept the essentialistic Leibniz’ principle as an explicit background for the issues I am going to develop.

Possible worlds will be one central issue, and in discussing them we will obviously have to discuss David Lewis’ ontological picture, as most extensively presented in Lewis (1986). The essentialism just introduced is fundamentally at cross-purposes with this picture. Let me explain this important point right away.

Lewis (1986) unfolds a thoroughly mereological picture. At its base is mereological identity: two objects are identical iff the one is a proper or improper (mereological) part of the other and vice versa. Thus, I and two-legged me are identical objects (if I will die with two legs). Upon this mereological base Lewis builds a kind of ersatz essentialism with the help of his counterpart theory developed already in Lewis (1968): the essential properties of an object are those which it shares with all its counterparts. The counterpart relation across worlds entails mereological disjointness and is formally a bit weaker than ordinary identity (it is neither symmetric nor transitive, for instance). This helps Lewis in dealing with various puzzles of identity.

I do not want to doubt that this is a coherent picture. Within it, it is not so urgent to make up one’s mind about Leibniz’ principle, because it builds on a different standard of identity, which is perfectly clear without referring to Leibniz’ principle. However, when we look at this picture with an essentialistic eye, as I will do here, it becomes incoherent. In this perspective, the mereological properties of an object must be taken as its essential ones, since they define its identity. But this applies to none of our ordinary objects. To some extent my shape may be essential to me. However, the entire point of all ordinary objects, as we conceive of them, is that their location and motion are contingent. This does not hold, though, for mereological objects essentialistically construed; they occupy their spatiotemporal region essentially.

Thereby, Lewis establishes a double standard of objecthood and identity. On the object language level he can simulate, as explained, our talk of ordinary objects, their identity, and their essential properties. On the metalinguistic level we have a different, mereological ontology. Here, one ordinary object is represented by many mereological objects (namely by its counterparts in different worlds), and many objects become one mereological object (namely all those only possibly, but not actually, differing in their spatiotemporal extension, like me and two-legged me). Either the metalinguistic level is supposed to deal with ordinary objects as well, by identifying them with mereological objects. Then we indeed have the double standard of identity, and ordinary objects like me are treated differently on the object and the meta-level. Or it is granted that the metalinguistic level in fact assumes a revolutionary mereological ontology in which we find none of the ordinary objects. But why should we explain talk of ordinary objects with the help of a revolutionary ontology? To be sure, the revolutionary ontology is contained in an essentialistic ontology, but it does not play a distinguished role there; there is no essentialistic reason for reinterpreting ordinary objects in that revolutionary part.

So, either position seems awkward. I will return to these points. Of course, these remarks do not decide between a mereological and an essentialistic picture. In order to do so we would have to engage in a broad ontological argument, e.g., about the various puzzles of identity. As explained above, though, essentialism is not an untenable presupposition. And from that perspective Lewis’ picture looks as incoherent as displayed. Let me now proceed to the issues announced in the title.

4 Three kinds of worlds

My first claim is that there are three kinds of possible worlds. That there should be three kinds may make you curious. However, that there is more than one kind is pretty obvious from the literature. I am not referring to possible worlds as used in modal logic or linguistic semantics. There they just form an exhaustive set of mutually exclusive don’t-know-what’s, of reference points; so, exactly one must be the actual one. It is not important to be more specific; for those purposes, the non-committal picture first presented here is good enough. I am also not referring to the many kinds of ersatz worlds, as paradigmatically discussed by Lewis (1986); I fully accept his criticism. I am concerned here only with genuine worlds (whatever “genuine” is to precisely mean here).

There we have, first, Lewisian worlds: huge, in fact maximal, spatiotemporal extensions that are somehow filled, that is, possible objects that are maximal regarding their space–time. Indeed, with Lewis we should speak here of space–time-like extensions, because it is not so clear what possible space–times are; that much we have learned in the 200 years since Kant. This is a perfectly acceptable sense of “possible world”. There is a harmless sense in which there are not only real objects, but also possible objects (like the book I intended to write, but didn’t), and if we grant this, the size of the objects does not matter at all. Let us call Lewisian worlds universes, in order to distinguish them from the other kinds to be introduced.

However, the above version of Leibniz’ principle requires us to conceive of these universes, just as of other possible objects, under the essentialistic perspective. That is, they must be individuated by their essential properties. What are these? Universes have all their parts and all their intrinsic properties essentially. They have no space of contingency at all. As soon as the tiniest bit of such a universe is changed, we have another universe. And the precise way in which a universe is filled already determines which universe it is; recall Lewis’ peculiar characterization in (1986, p. 2) that “every way that a world could possibly be is a way that some world is”. I cannot think of any other possible object that is individuated simply by its intrinsic properties.

There is a different, older conception of possible worlds. Recall Wittgenstein’s dictum: “die Welt ist alles, was der Fall ist” (“the world is everything that is the case”; 1922, Proposition 1). And it is states of affairs that do or do not obtain. Here, an atomic state of affairs is a combination of a property or relation and an appropriate number of categorically suitable objects, i.e., a Russellian proposition; and complex states of affairs are finite or infinite Boolean combinations of atomic ones. Hence, a possible world in the Wittgensteinian sense is a collection or conjunction of states of affairs that is consistent or somehow coherent and somehow maximal or complete. What all this means is not easy to say, and I am not sure that it has been said in a satisfying way. For instance, such a world must contain only compossible objects, and with each object it must contain all objects on which that object ontologically depends; each world that contains me must contain my parents. What completeness might mean here is vastly more difficult to say; my hunch is that Wittgensteinian worlds are much richer than is usually thought (see also Sect. 6 below). In any case, in his continuing efforts (cf., e.g., Armstrong 1997) Armstrong has paradigmatically explored those Wittgensteinian worlds. However, I do not share his actualist inclinations, which is why I do not completely side with him, either.

Be this as it may, it is clear that Wittgensteinian worlds belong to an entirely different ontological category than Lewisian worlds or universes. They are of the ontological category of states of affairs, which is clearly distinct from the ontological category of concrete objects to which universes belong. In order to have a distinctive label, let me call them totalities.

Lewis (1986, Sect. 3.2) rejects this conception, though in an apparently odd way: He subsumes the conception under linguistic ersatzism by turning such a totality into a linguistic description by what he calls the Lagadonian method. Within his premises he must do this. Beyond those premises, however, we do not seem forced to this move. This is no substitute, though, for a critical examination of Lewis’ arguments, for which there is no place here.

To sum up so far: We seem to have to accept two kinds of possible worlds, universes and totalities. As announced, this opens the issue of which kind of worlds to use in 2D semantics. And it raises the further issue of how the two kinds relate. 2D semantics owes us a response to those issues. The common attitude, including that of David Lewis, seems to be that these issues are fairly trivial. A universe simply determines a totality, namely the set or conjunction of all facts obtaining in that universe. Therefore, universes are the basic entities, totalities are derived ones, and 2D semantics need not be concerned. The crux, however, is: How does this determination work? I will argue below that this is far from trivial. In any case, as long as this is an open question, we should accept both universes and totalities, and consider those issues vis-à-vis 2D semantics.

Before continuing, a side remark: Of course, states of affairs are usually taken to be objects in turn. However, if so, they are a different kind of objects: not concrete objects in space and time, but somehow abstract objects, which may obtain in a totality and simply exist, but which do not exist in a totality (or in a universe). (Similarly, I have always found it, strictly speaking, nonsense to say that numbers exist in a possible world, indeed necessarily exist in all possible worlds. In an essentialistic perspective, necessary existence is not existence in all possible worlds.) Moreover, turning states of affairs into (abstract) objects does not come for free; it is additional business, and a delicate one insofar as it is threatened by paradox, just as is the identification of classes with sets.

I announced a third kind of world. What might it be? Let us look again at the universes. They are very unusual objects insofar as each and every detail is essential to them; as I said, they have no space of contingency. Usual objects like me, by contrast, have an essence, but also many contingent properties and relations, such as presently writing a paper. Indeed, most things in the world have a large space of contingency and even exist contingently. But which sense of “world” is involved here? I think there simply is, and we speak of, the world, not in the sense of the actual universe (taken rigidly), nor in the sense of the actual totality, both of which leave no room for contingency. Rather, the world is the object with the largest space of contingency that harbors any contingent object. Thus it has a minimal essence, which consists only in having a maximal space–time-like extension that is somehow filled (and which may also comprise further a priori features of the world such as being sensible and intelligible—more on this below in Sect. 6). Let us put the precise content of that essence to one side; this would entangle us in difficult ontological and epistemological arguments. Here, the point is that each specific filling of its maximal space–time-like extension is contingent to the world. That is, each Lewisian world or universe is one most specific realization or unfolding of the world having the minimal essence. Or, as we might say, varying the peculiar Lewis quote above: a Lewisian possible world is a way the world might be. Let me henceforth refer to it as the World.

My model here is the familiar example of the lump of clay. This lump has some essential properties that distinguish it from other lumps and other objects. Still, this lump has a large space of contingency; it may be turned into a statue or a vase or whatever. The statue, the vase, etc., have a richer essence. They could not have been made from a different lump of clay; but they have more essential properties than the lump they are made of. For instance, their shape is essential to them (within limits), but not to the lump, and they usually start existing after and cease existing before the lump. I and two-legged me provide another example. I have some essential properties (distinguishing me according to the essentialistic version of Leibniz’ principle), and two-legged me has one more, namely having two legs. Take these examples to the extreme, and you understand the relation between the World and the many universes.

In making up the World I seem to have adopted a very doubtful principle of individuation: If each object is individuated by its essential (relational) properties, then it seems that one may conversely take any set of properties, declare them to be essential of an object, and thus individuate that object. Can I really constitute the golden mountain in this way, the object that is essentially golden and a mountain and essentially nothing else? In constituting the World I seem to have done just this: take very few properties and declare them to be essential of the World. However, this principle of individuation is at least implausible, and it is not my intention to endorse it. It is a most interesting issue, though, to what extent it holds. And it certainly does hold to some extent. For instance, one move we can take is this: whenever we are given an object, we can take any set of contingent properties of that object and declare them to be essential of a new object thereby constituted, as I did in the case of me and two-legged me. I call this move essentialization. (I became convinced of the idea of essentialization through Benkewitz (2011, pp. 87ff.), or rather through its first version, written 20 years earlier.)

This is the move that obviously carries us from the World to all the universes, but not the other way around. So, when introducing the World above, I appealed to the essentialistic version of Leibniz’ principle, but not to any specific principle of individuation. I simply appealed to your intuition that there is some concrete object with a maximal space of contingency and hence a minimal essence.

The World is my third kind of world. Well, it is not really a kind, because there is only one of it. In fact, in an essentialistic perspective it is of the same kind as Lewisian universes; it is also a concrete object essentially of maximal space–time-like extension. Calling it a third kind is rather an emphatic device justified by the fact that the World does not come into view within the Lewisian picture, which indeed contains only universes.

Even the last claim is perhaps not quite accurate. We might take any universe and ask what its counterparts in Lewis’ sense might be. We certainly do not have a good intuitive grip on this question, but one sensible answer seems to be: all universes are counterparts of any given universe. However, this amounts to there being only one universe, the World—on the object language level; that is, “necessarily, there exists exactly one universe” is a true sentence on that level. Still, there are uncountably many universes on the meta-language level. What I above called the Lewisian double standard of identity is thereby driven to the extreme. No such ambiguity affects the essentialistic picture.

We need not take the maximal move of essentialization from the World to all the universes; we could essentialize only some contingent aspects of the World. This would correspond to giving the counterpart relation among universes less than maximal extension. There are all possibilities in between. This makes clear that there is an entire continuum of worlds between the World and all the universes and hence that there is not really a third kind of world in the essentialistic perspective. However, I shall not further refer to that continuum in between.

So, to resume: I have argued that there are two categorically different kinds of possible worlds: totalities and universes. And within an essentialistic perspective, the latter category is indeed much richer and contains not only universes with a maximal essence, but also the World with a minimal essence (and a continuum in between, which we may ignore). Sometimes I will still speak of worlds or a world, though only in a generic sense.

5 Two kinds of truth

So far, so good. As long as we are only doing pure ontology, we might accept many things. We might as well accept the various kinds of possible worlds. Claims of pure ontology become contestable only when they have consequences. What may the consequences be? This is why I have introduced the background of 2D semantics, which makes essential use of possible worlds. We may thus think about which kind of worlds plays which role there. The role they play is that reference and truth are explained relative to contexts and to indices and thus twice relative to worlds—if we neglect the other components of contexts, as I shall do here. So, we have to think about two things at once: what truth might mean relative to the different kinds of worlds, and whether and in which way this fills the role of ontic and epistemic truth in 2D semantics, i.e., of truth in ontic and in epistemic worlds.
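The double relativization just mentioned can be displayed compactly. The following sketch and its notation (\(C\), \(I\), \(\chi_{s}\)) are mine, not part of the text, and serve only as an illustration of the 2D apparatus appealed to here:

```latex
% Let C be the set of contexts and I the set of indices (here, both worlds).
% The character of a sentence s may be taken as a function
\chi_{s} : C \times I \to \{\mathrm{T}, \mathrm{F}\}.
% Fixing a context c yields the ontic O-intension of s in c:
i \mapsto \chi_{s}(c, i),
% while diagonalizing yields the epistemic or diagonal E-intension:
c \mapsto \chi_{s}(c, c).
% The diagonal presupposes that each context can also serve as an index;
% this is the congruence of contexts and indices that is at issue.
```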

Let’s start in a trivial way. We may say that an utterance, i.e., a sentence in a context, expresses a certain ontic proposition, which is simply its ontic O-intension. We may represent that ontic proposition as a set \(A^{o}\) of ontic worlds, and then this proposition \(A^{o}\) is true in the ontic world \(w^{o}\) if and only if \(w^{o} \in A^{o}\). Likewise, we may say that a sentence expresses a certain belief or belief content, i.e., an epistemic proposition, which is simply its epistemic or diagonal E-intension. Again, we may represent that epistemic proposition as a set \(A^{e}\) of epistemic worlds, which is true in the epistemic world \(w^{e}\) if and only if \(w^{e} \in A^{e}\). However, these are trivial formal explanations; we must ponder the substance behind them.

The ontic side seems to be the simpler one. Ontic worlds are to capture metaphysical or ontic necessity and possibility. What is the origin of these necessities? Analyticity certainly is one source; analytic truths a fortiori are metaphysically necessary. However, the specific character of metaphysical or ontic necessity flows from the identity and the essence of objects and of properties and relations as well; as mentioned, I entirely agree here with Fine (1994) concerning the direction of analysis between essence and metaphysical modality; “a metaphysical necessity has its source in the identity of objects” (Fine 2005, p. 7). And this character is directly and explicitly reflected in totalities or Wittgensteinian worlds. They consist of states of affairs, which in turn are built from objects, properties, and relations and thus from entities that are the sole genuine source of metaphysical necessity. Therefore, the most straightforward thing we can do is to identify ontic worlds as they are used in 2D semantics with these totalities.

If we do, ontic propositions, i.e., ontic intensions of utterances, are just states of affairs in the sense explained. And an utterance is true in an ontic world, i.e., in a totality \(w^{o}\), if it corresponds to the facts, i.e., if the state of affairs it describes obtains in the world, i.e., is contained in the totality \(w^{o}\) (either as an element of a set of facts or as a conjunct of an infinite conjunction of facts).

This is nothing but the traditional correspondence theory of truth, and it sounds fairly trivial. When we have to be specific about the sentences or utterances and the states of affairs described, it sounds even more trivial, because we have to use the very same words for grasping the linguistic as well as the ontic side, as we do in Tarski’s convention T. The (context-independent) sentence “snow is white” is true iff it is a fact that snow is white, i.e., iff snow is white. This is why we presently have a bunch of deflationary truth theories, which differ in subtle details, but which I take to be variants of the correspondence theory of truth (the variants are most thoroughly displayed, e.g., in Künne 2003; Halbach 2011). I think the correspondence theory can be rejected only by those who ontologically burden facts so heavily that there are too few facts for correspondence, or by those who epistemologically burden the correspondence with the unsatisfiable demand that it should serve as a vehicle for recognizing the truth.

Even if trivial, my remarks already have an uncommon consequence: ontic worlds are not universes. The idea that they are basically universes instead of totalities can be motivated only by the allegedly trivial determination of totalities by universes. Somehow, Lewisian worlds have too much dominated our notion of a metaphysically possible world.

Indeed, Lewisian universes are not well suited as ontic worlds as used in 2D semantics, because they do not well represent the character of ontic necessity, despite Lewis’ efforts to the contrary. The reason lies in Lewis’ mereological picture, which, as argued at the end of Sect. 3, only provides a kind of ersatz essentialism, but becomes problematic if essentialistically construed. With the help of the counterpart relation Lewis attempts to retroactively reconstruct ontic modalities within universes. This cannot hide the fact, though, that these modalities are originally foreign to them. From an essentialistic perspective we had better stick to our conclusion: ontic worlds are totalities.

Of course, my arguments against Lewis’ set-up are not conclusive; there are no conclusive arguments in this area. Also, I had emphasized that Lewis’ picture is coherent by itself; it becomes peculiar only by essentialistic lights. However, my arguments should sufficiently motivate why I want to shift the picture. I should also hasten to add that Lewisian possible worlds remain metaphysically possible objects, indeed world-like objects; I have introduced them here as such. It is only that they are not well suited for playing the role of ontic worlds in 2D semantics.

Let us turn to epistemic worlds or doxastic possibilities and to the truth of epistemic propositions in them. First, we should note that totalities, though apt as ontic worlds, are not well suited as epistemic worlds. At least, it is a familiar point that states of affairs or Russellian propositions do not represent belief contents, or do so only within an externalistic conception of mental contents. However, it was and is the ambition of 2D semantics to provide an internalistic account of mental contents. (Recall, e.g., how vigorously Fodor 1987, Ch. 2, defends individualism or narrow contents with the help of 2D semantics. See also Haas-Spohn 1995, Sect. 3.8–9 and 4.4.) And then that familiar point holds, simply because we are usually not acquainted, in the strict Russellian sense, with objects (and properties and relations); we have no rigid epistemic access to them, even if we rigidly refer to them. This is so in turn because we usually do not know the essential properties of objects (and properties and relations); we do not need to know those properties for rigidly referring to the objects. This is not to say that belief contents cannot be represented at all by sets of totalities; but it prevents all straightforward ways of doing so.

This point at least motivates the search for an alternative conception of epistemic worlds or doxastic possibilities. In any case, they are supposed to provide something like the space of possibilities in which we gather all our experiences and about which we form our beliefs. By acquiring, revising, and strengthening our beliefs in response to our experiences we attempt to reduce this space as far as possible and to exclude as many possibilities in this space as possible, thus arriving at ever more certain and determinate beliefs. A belief in the epistemic proposition \(A^{e}\) precisely excludes all the possibilities outside \(A^{e}\). But what precisely is this space of possible experiences? It seems perfectly natural to say that we experience the world and nothing else, whatever it is. And it is a further natural step to say that the object of our experience, the world, is precisely the World as introduced above with its minimal essence and its maximal space of contingency. In other words, this space is the set of Lewisian universes into which the World might unfold.

If we thus conceive of the space of contingency spread by the World as the space of epistemic possibilities, we thereby load the World with our a priori knowledge that holds in all epistemic possibilities. A priori we know that we are amidst some space–time-like environment that must be maximal in a space–time-like sense, i.e., that must be a universe, but the nature of which is completely unknown and to be explored by us. Maybe the World should be bestowed with further a priori features besides space–time-like maximality. As I have indicated, sensibility and intelligibility are plausible candidates for this, and I will return to this idea in the next section. However, for the present purposes it is not necessary to develop an account of apriority.

So far, I have suggested, again contrary to widespread opinion, that Lewisian universes precisely serve as epistemic worlds. Next, we have to inquire into the pertinent notion of truth. I want to argue that this is not the correspondence notion, but another one, namely the pragmatic notion of truth. So, let us ask what truth in or relative to a universe might mean.

We thus speak of truth in or relative to a concrete object. That’s strange talk. It cannot be understood in a correspondence way. What should it mean that sentences or propositions correspond to a single concrete object, say, my dog Fido? Are many sentences or propositions here supposed to correspond to one object? One would have thought that the intended correspondence is one–one, at least at the propositional level. I intend thereby to raise a serious problem for the somewhat careless talk of truth in universes.

A serious problem? Hardly. There seems to be a simple answer. To speak of truths in or relative to Fido is simply bad English; what is meant are the truths about Fido. Those are many even if Fido is only one, and thus my cheap cardinality argument has no force. However, what are the truths about Fido, i.e., about Fido as such? Most of Fido’s intrinsic or relational properties are contingent. He does not have them as such; he has them in one world, but not in another. In other words, most truths about Fido are world-relative. If there are any truths simpliciter about Fido, they can refer only to Fido’s essential, possibly relational properties, which are few. So, this, too, provides only an insufficient notion of truth in or relative to a concrete object; in this sense most assertions about, say, Fido are neither true nor false.

Has my argument thereby improved? On the contrary. It now seems that I can be refuted by my own standards. Above I said that under an essentialistic perspective Lewisian universes are peculiar concrete objects, insofar as all of their intrinsic properties, all of the events occurring or facts obtaining in them are essential to them. Hence, if I explain truth in or relative to an object via its essential properties, then truth in or relative to a universe is complete. In this special case, it seems, my explanation leaves nothing to be desired.

Have we thereby saved a correspondence notion of truth for universes? I do not think so. Note that universes are realizations or, in my above technical sense, essentializations of the World. So, we should find a notion of truth that applies to the World just as much as to universes (and all the worlds in the continuum in between). With respect to the World my above problem about truth is definitely aggravated. Nothing but a few a priori truths are true of the space of experience, of the World as such, given its very poor essence.

Still, we would like to say that there are lots of contingent truths about the World, which, however, do not apply to the World as such. We have to find out what they are; we have to grasp the World cognitively and epistemically. We have to form concepts that are apt for the various parts and levels of the World; we must acquire beliefs about the World that are composed of these concepts; and we have to extend, confirm, revise, and complete those beliefs. Only if we could drive this exploration to its ideal limit—something we can hardly counterfactually imagine and never actually carry out—only if we could thus acquire complete experience in the World and form a complete judgment about the World, only then would we know all contingent truths about the World, i.e., in which universe we live or into which universe the World has actually unfolded.

The point I want to make is that it is not determinate beforehand, but only after our complete conceptual and epistemic grasp of the given universe, what that universe is into which the World has unfolded, and what all of its essential properties are. Afterwards, we may again apply the correspondence notion of truth; that’s trivial. But doing so presupposes having acquired that complete conceptual and epistemic grasp.

We cannot reverse the order. The picture might be: There it is, the universe with all the objects, properties, and relations it contains, so that the intrinsic properties of the universe consist in all the facts obtaining in it; and now we only have to grasp this inventory of the universe. However, this picture does not work; we could speak in this way only from the fictitious vantage point of the ideal limit of inquiry, when we have perfected our conceptual and epistemic grasp of the given universe.

What then is the notion of truth that is pertinent to the World and its possible unfoldings into universes? I have already referred to it, more or less explicitly. It is, I think, Peirce’s pragmatic notion of truth, or the internal notion vigorously propounded by Putnam in many writings, e.g., in Putnam (1981). The coherentist notion, relatively best captured by Rescher (1973), and the evaluative notion of truth ventured by Ellis (1990) may be said to aim at a cognitively relevant notion of truth as well. Certainly, these attempts may not be equated; the matter is obviously not so determinate and not so well understood.

I prefer Peirce’s and Putnam’s characterization. According to it, a present belief of mine is true if and only if it survives all further rational belief formation, actual as well as counterfactual. In other words: a proposition, whether or not presently believed, is true if and only if it is held to be true in the ideal limit of inquiry after complete experience and full exercise of judgment. And it is true then, simply because there is no experience and no reason left that could prove otherwise. Of course, this ideal limit of inquiry is extremely counterfactual in various ways. Only tiny spatiotemporal sections of the actual world are accessible to us, and we can actually process only tiny amounts of information. We cannot at all inquire into other possible universes; at best we could build midget fakes of some of them in the actual world. Moreover, the limit of inquiry tacitly assumes that we could explore a universe and leave it unchanged at the same time. Still, this does not invalidate the notion of an ideal limit of inquiry.

Where that ideal limit comes to lie is not fixed beforehand. Indeed, my point is that there is no full predetermined truth about a Lewisian world that we only have to discover and fully grasp in the end. Rather, what the truth is turns out in the endless process of rational concept and belief formation. So, this notion of truth is firmly bound up with belief dynamics; the limit of inquiry is the limit of this dynamics.

It is crucial here that epistemic rationality does not simply reduce to truth as the supreme goal of inquiry, so that the ideal limit is reached precisely when the full truth in some predetermined sense, presumably correspondence truth, is reached. If this were so, nothing would be gained by the pragmatic theory of truth. However, epistemic rationality has its own rules and principles governing the structure and the dynamics of epistemic states. And truth and rationality are closely entangled in these principles without one being reducible to the other. These principles and that entanglement are not well understood; this is why the pragmatic or internal notion of truth is still so indeterminate. In my view, though, this does not discredit that notion. We make progress only by working on it, not by discarding it. (My pertinent attempts are found in Spohn 2012, Sect. 17.3; they make my claim of the mutual entanglement of truth and rationality more intelligible.)

So, this section has argued, or at least suggested, that the two different kinds of worlds are indeed accompanied by two different notions of truth. Wittgensteinian totalities play the role of ontic worlds in the 2D picture, and the ontic correspondence notion of truth is appropriate to them. By contrast, the World as well as Lewisian universes play the role of epistemic worlds in the 2D picture, for which the epistemic pragmatic notion of truth is the pertinent one. And this is indeed a different, independent notion of truth.

6 The epistemic-ontic map

So, the situation is indeed as problematic as indicated at the end of Sect. 2. If the 2D picture contains different kinds of worlds, then the congruence principle is violated; either doxastic possibilities can no longer be equated with contexts, or the diagonalization of contexts and indices can no longer be maintained.

The strategy for solving this problem is obvious: If epistemic worlds are not identical with ontic worlds, it should at least be possible to map the former into the latter, and all is well again. The general point is that 2D semantics presupposes such a map in order to work. I call this the epistemic-ontic map. That such a map is required is not surprising; what has been insufficiently recognized is that this map is neither identity nor trivial in some other way. It is most substantial—and fundamental to our entire onto-epistemological business. In this section, I want to briefly pursue two issues: What is this map? And is it at all a map, i.e., a function from the set of universes into the set of totalities? Only then is all really well.

I have characterized the epistemic-ontic map already in the previous section. It is provided precisely by the ideal, though not well-defined, limit of inquiry. In that limit, we know all the facts about the universe in question, which then constitute the complete essence of that universe. This entails having identified objects in that universe and having determined their individual essences. Similarly, we will have determined the essences of all properties and relations. In the process of belief formation we only have guesses about all these essences (see Haas-Spohn and Spohn (2001) for how beliefs about these essences are built into our concepts and how we may learn about the essences). However, in the limit we will know the full truth about them; and then no reason whatsoever can turn up that could change the truth, i.e., our beliefs in the limit. This is to say: in the limit, we will have associated a full Wittgensteinian totality with the Lewisian universe. This establishes the map required by diagonalization, and thus 2D semantics still works properly even within the more sophisticated picture I have sketched. Of course, we cannot possibly carry out a specification of this map. But we can conceive of it in this way, and this is all we need.
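The construction just described can be summarized schematically; the notation (\(U\), \(T\), \(\mu\)) is mine and merely illustrative:

```latex
% Let U be the set of Lewisian universes (epistemic worlds) and T the set of
% Wittgensteinian totalities (ontic worlds). The ideal limit of inquiry is
% taken to provide the epistemic-ontic map
\mu : U \to T,
% where \mu(u) is the totality comprising all the facts about the universe u
% as known in the ideal limit of inquiry. Diagonalization is then restored by
% reading the diagonal intension of a sentence s as the set
\{\, u \in U : s \text{ is true at context } u \text{ and index } \mu(u) \,\}.
```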

However, is this really a map? There are two kinds of doubt. First, the map may not work for all universes, and second, the map may not be unique. I think the doubts can be dissolved. However, I have to restrict myself to some cursory remarks, which cannot settle the deep issues involved here; they would require papers of their own. So, my point will rather be only to identify and explain those issues—and thereby to show that this epistemic-ontic map is indeed a substantial and fundamental one.

The first worry is that there are many possible universes that escape our epistemological bounds, that are not accessible to our senses or not intelligible through our conceptualizations. It seems fairly obvious that such universes may exist. We can even describe some of them. For instance, there might be universes full of neutrinos and nothing else, and we would not be able to detect anything whatsoever in such universes. The epistemic-ontic map cannot be defined for them.

There are then two ways to go. We can leave the essence of the World so poor—just space–time-like maximality—that even universes consisting only of neutrinos are realizations of the World. Then we would have to appropriately restrict the domain of the map to the sensible and at least partially intelligible realizations of the World. Or, as indicated earlier, we might load the World with what we a priori know about it, i.e., that the World is sensible and partially intelligible; and so are all the Lewisian universes into which it might unfold. Either way, whether we should assume insensible or not even partially intelligible universes is epistemologically entirely idle; we lose nothing by denying them. (This may take care of alien universes, the possibility of which is one important reason for Lewis (1986, pp. 159ff.) to think that ersatz worlds described in a Lagadonian language, i.e., our totalities, provide too poor a space of possibilities.)

This is not the end of the worry, though. Sensibility and partial intelligibility are not enough. The epistemic-ontic map appeals to this not well-defined limit of inquiry and hence to complete intelligibility. This seems even more problematic. Our cognitive capacities have brought us very far. Still, complete chaos may erupt at every corner—behind the moon or in other galaxies, below what we presently take to be elementary particles, or in the middle of the earth—a chaos that defies all of our idealized intellectual capacities. This is well imaginable. And again, the epistemic-ontic map could not be defined for such universes.

I think the same defense as before is feasible; we may restrict the domain of the map even to the fully intelligible worlds. Again, this is no genuine restriction. Whenever we start finding our ways within a sensible and partially intelligible universe, it is at least an epistemic possibility that we can complete our conceptual and epistemic business. We might fail, even though we try hard. However, no failure would be accepted as conclusive; all failures would count as still vincible. In fact, we could never distinguish conclusive from preliminary failures. And so again, we lose nothing whatsoever for our epistemological purposes, if we restrict the doxastic possibilities to completely intelligible universes; we might well take the complete intelligibility of the World as given a priori. And then the epistemic-ontic map would be sufficiently completely defined. Even if my arguments appear airy, they show what must be argued for in the interest of 2D semantics.

I should emphasize that this worry concerned only the domain of the epistemic-ontic map and not its range. We can well allow that this range does not exhaust the set of totalities. Indeed, the neutrino world envisaged above would be a totality outside the range of the map, since we have a priori excluded the corresponding universe from its domain. Indeed, this universe could never be grasped as such; only the description of that totality created the semblance of a comprehensible universe. However, I don’t see any problems forthcoming from this surplus of totalities.

Let me turn to the second worry, that the epistemic-ontic map is not a unique function. It seems quite possible that there are two complete conceptualizations of one and the same Lewisian universe, which generate two different, possibly incomparable Wittgensteinian totalities. This is an extreme variant of the problem of the underdetermination of theories, applied to the ideal theory reached at the limit of inquiry. This problem has a venerable history in the literature (cf., e.g., Kukla 1996), and therefore this possibility seems at least plausible.

However, I do not see this as a genuine possibility. If two different descriptions apply to one and the same universe in an equally adequate way and there is no way at all to drive any wedge between them, then it cannot be that the one describes the facts and the other doesn’t. Rather, both describe the facts, and all those facts must be contained in the corresponding totality. In other words: all facts described by all equivalent descriptions belong to one and the same complete totality.

For instance, there are not only rabbits in the actual totality, but also undetached rabbit parts, rabbit stages, and whatever else there is in the Quinean inventory. We can describe the actual world in either vocabulary, and so all the facts described thereby belong to that totality. Or, to take a more realistic example: if particle and field theories really are perfectly equivalent and remain so, then both particles and fields exist and must be contained in the actual totality.

Or, in still other words: Wittgensteinian totalities are not subject to Ockham’s razor. On the contrary, they are ontologically maximally rich, and whenever they contain some objects, they also contain all the other objects definable or constructible from them. It is only a secondary question whether some of the objects should or could be distinguished as basic; and it may well be that this question allows for various answers.

The uniqueness of the epistemic-ontic map seems threatened also by old skeptical worries. Let’s suppose we make all the experiences we could possibly make in a Lewisian universe and subject it to complete conceptualization and judgment. Could it not be that we are still entirely wrong? Might we not have been deceived by an evil demon? Perhaps we are brains in a vat? Or might it not be, as occasionalism originally had it, that each apparent pair of cause and effect actually finds a common cause in God’s activities? It seems that there might always be a reality behind or below the reality. Then it seems to be ambiguous which value the epistemic-ontic map should take for this universe. The totalities with the richer realities seem to fit as well.

Could this happen? This depends on the kind of possibility involved here. It is always metaphysically possible that there is a universe or an epistemic world \(w_{1}^{e}\), in which we potentially make all those experiences which count as complete in the universe \(w_{0}^{e}\) and further experiences beyond. And it is also metaphysically possible that many of the facts that obtain in the totality or ontic world \(w_{0}^{o}\) corresponding to \(w_{0}^{e}\) do not obtain in the totality \(w_{1}^{o}\) corresponding to \(w_{1}^{e}\). That is, the singular facts of \(w_{0}^{o}\) will also obtain in \(w_{1}^{o}\), but many negative and general facts etc. will not. So, ontically, we might well be brains in a vat. But this is not epistemically possible, as Putnam (1981, ch. 1) has insisted. Complete experiences in \(w_{1}^{e}\) are richer than those in \(w_{0}^{e}\). However, for applying the epistemic-ontic map to \(w_{0}^{e}\) we had presupposed the completeness of the experiences in \(w_{0}^{e}\). So, by definition there can’t be a reality behind or below the reality in \(w_{0}^{e}\). From the perspective of \(w_{0}^{e}\) this can only happen in other possible universes, not in \(w_{0}^{e}\) itself.
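The structure of this reply can be made explicit. I write \(m(w^{e})\) for the totality that the epistemic-ontic map assigns to the universe \(w^{e}\); the notation \(m\) is mine, introduced purely as shorthand:

```latex
% Metaphysical level: w_1^e is a universe in which all the experiences that
% count as complete in w_0^e are possible, plus further ones; and many
% negative and general facts of w_0^o = m(w_0^e) fail in w_1^o = m(w_1^e),
% although the singular facts of w_0^o also obtain in w_1^o.
% Epistemic level: m is applied to w_0^e only on the presupposition that
% experience in w_0^e is complete; hence, relative to w_0^e,
m(w_{0}^{e}) = w_{0}^{o} \text{ is unique},
% and a "reality behind the reality" is excluded in w_0^e by definition,
% though not in other possible universes.
```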

At least, this is how metaphysical and internal realism may be made compatible within the present framework. Even though the notion of complete experience is clouded by obscurities, the point required seems clear: namely, that experience complete in one universe can be the very same, but incomplete, in another. Therefore we have to grant that the skeptical problem has a real basis. From our finite point of view we can never distinguish between \(w_{0}^{e}\) and \(w_{1}^{e}\). Even if we had gathered complete experience in \(w_{0}^{e}\), nothing would tell us that this is now the complete experience; even if we had reached the limit of experience, we would not know that we did. In this way it is forever an open question in which universe we live.

So, let me sum up this paper: I have argued that 2D semantics refers both to totalities, i.e., ontic worlds, and to universes, i.e., epistemic worlds, which come along with two notions of truth, the correspondence notion and the pragmatic notion. Thereby a fundamental epistemological-ontological schism opens within 2D semantics. And I have proposed that this schism can be closed again by what I have called the epistemic-ontic map, which is basically provided by the ideal limit of inquiry on which the pragmatic theory of truth builds. This map is indeed a proper map. And so the congruence principle, which is at the heart of 2D semantics, can be maintained. Such are the rich and stable foundations of 2D semantics.

7 A few comparative remarks

Let me conclude this paper with a few remarks about the work of David Chalmers. As far as I know, he is the only one who has at least envisaged the view that epistemic worlds are not to be identified with ontic worlds; and he has developed this view, including its difficulties and prospects, in various writings. However, he sets up matters in entirely different ways. This is not the place for an extensive comparison, all the more as such a comparison would require me to rise to the same level of elaboration as Chalmers. So I will restrict myself to a few strategic remarks that illuminate the differences rather than start a close argument, and that may thus serve as a further contrastive clarification of my own position. I will use Chalmers (2006, 2012) as my references.

In explaining 2D semantics Chalmers often appeals to the phrases coined by Jackson (1998), namely that, if the intension or truth condition of a sentence is given by the set of worlds in which it is true, then its primary, epistemic intension is the set of those worlds considered as actual and its secondary, ontic intension is the set of those worlds considered as counterfactual. In short, epistemic worlds are worlds considered as actual, as they may (turn out to) be, and ontic worlds are worlds considered as counterfactual, as they might have been. On the one hand, this way of speaking is vivid and helpful; on the other hand, it is ontologically unclear. Are x considered as y and x considered as z two different things? And if so, what kind of things? I think they are the same; only the way of considering them varies, and truth in a world also becomes relative to the way of considering it. This is not a way of providing clear foundations.

Chalmers also appeals to conditionals. We get to the primary intension of p by looking at indicative conditionals of the form: if things are such and such, is p the case? The secondary intension, by contrast, is revealed by counterfactual conditionals of the form: if things had been such and such, would p have been the case? Again, this is a vivid explanation. The two explanations are related. One might say that in indicative conditionals the worlds satisfying the antecedent are considered as actual, whereas in counterfactuals the antecedent worlds are considered as counterfactual. Therefore, according to both explanations the epistemic-ontic map seems to reduce to identity (although the issue is clouded by the unclear ontological status of an x considered as y). However, when I look at the deep and wide disagreement in the literature on how to understand indicative and counterfactual conditionals, I cannot find the explanation(s) theoretically fruitful, either (see Spohn (2015) for my view on indicative and counterfactual conditionals). So we had better not rely further on such vivid descriptions.

Let us rather look directly at what Chalmers says about the nature of epistemic and ontic worlds. Concerning the latter, he is quite non-committal; only the purpose of ontic worlds, namely to represent metaphysical modality, is clear. I agree, of course, with the purpose and therefore propose to be explicit here, namely to identify ontic worlds with Wittgensteinian totalities, the only world-like entities which reflect metaphysical necessities directly. Chalmers (2012, p. 240) comes near to this when stating that metaphysically possible worlds resemble or correspond to certain complex Russellian propositions (or states of affairs in my terminology). So, I don’t see any disagreement here on this score.

Disagreement emerges when we get to the epistemic worlds, which Chalmers also calls scenarios. Chalmers (2006, pp. 82f.) admits the possibility of identifying scenarios with centered ontic worlds. Then we are back at the standard picture sketched in Sect. 2 and back at identity as the epistemic-ontic map. However, he much prefers a different conception of scenarios, according to which they are a specific kind of fully determinate and complete conceptual or linguistic constructions. This may seem apt; after all, all our cognitive efforts result in some such thing, though not fully determinate and complete.

Still, already here a basic difference emerges. As explained, I take epistemic worlds to be Lewisian worlds, which Chalmers would classify only as (one version of) ontic worlds. For me, epistemic worlds, or rather the World, are concrete all-embracing objects of our experience, which we must capture cognitively, conceptually, and epistemically. For me, the results of these cognitive efforts are open and not predetermined. However, they are not a scenario; they generate a scenario; they determine what the Lewisian world, the object of experience, is like. And they are not ontic worlds, but they generate, via the epistemic-ontic map, the corresponding ontic world. This is perhaps not much of a difference. Still, I would like to emphasize, as behooves a good realist, that not all is conceptual construction, that the epistemic worlds rather are the objects underlying those constructions.

Above, I was a bit indeterminate. Are scenarios linguistic or conceptual constructions? Chalmers clearly prefers linguistic constructions; they are more tangible, being apparently sequences of items of some alphabet. However, the advantage is spurious; the linguistic constructions must be interpreted. Thus one runs into all the ambiguities of linguistic phrases that 2D semantics attempts to clarify. Chalmers is fully aware of this and takes great pains over disambiguation. Still, I feel uneasy. We were searching for the ontological-epistemological foundations of 2D semantics and hoped not to get involved again in its complexities on the meta-level. I see no reason why this hope should be unsatisfiable; didn’t I satisfy it so far? Chalmers (2012, pp. 239ff.) thinks so as well; what he calls Super-Rigid Scrutability would save him from these complexities. He accepts Super-Rigid Scrutability, but he is aware (2012, Sect. 8.5–6) that it is a very strong version of his so-called scrutability theses.

Therefore I tend to read Chalmers as taking scenarios to be conceptual constructions. Constructions from which concepts? Chalmers (2006, p. 84) takes it to be “likely that actual languages do not have the expressive resources to express an epistemically complete hypothesis”, and thus he assumes that the language on which his linguistic constructions are based “should have terms that express every possible concept, or at least every concept of a certain sort”.

However, this is, I think, a questionable direction of analysis. Chalmers is certainly correct in not assuming that we are endowed with a fixed set of concepts, such that we could only grasp universes that submit to this fixed set. Therefore he appeals to the set of all possible concepts (and a language having terms for all of them). Then we need not fear having too little material for constructing scenarios. However, what are possible concepts? Possible worlds, whether taken as universes or as totalities, already stretch our imagination. But I find possible concepts far more imperspicuous, and my preferred strategy would be to represent possible concepts within the 2D framework, with its various kinds of possible worlds, instead of appealing to possible concepts in order to clarify this framework (see Haas-Spohn and Spohn (2001) for an attempt at explicating concepts within an essentialist perspective).

Chalmers himself feels uneasy with his appeal to all possible concepts. This is not the least of his motives for pursuing, most impressively, a huge program of reducing concepts in terms of his various scrutability theses; the program originates in Chalmers and Jackson (2001) and is in full bloom in Chalmers (2012). The main motives are broader epistemological ones. The leading idea is that there is a compact set of basic concepts from which all other concepts are constructed. Hence, a scenario needs to be characterized only by a complete set (or conjunction) of assertions (propositions) made from those basic concepts; whatever else is true in that scenario is then conceptually or a priori entailed by this set. If this idea worked, the fantasy of possible concepts would be largely reduced and solidly grounded (this is why the above quote refers to “every concept of a certain”, i.e., basic, “sort”).

What might those basic concepts be? Chalmers (2012) discusses surprisingly many alternatives. The guiding idea is summarized in his famous PQTI formula: the basic concepts include fundamental physical concepts (P), fundamental phenomenal concepts (Q), fundamental indexical concepts, i.e., “I” and “now” (I), and a closure principle (T) to the effect that there is nothing beyond what is completely assertible in terms of PQI. One should carefully look at what exactly is contained in PQTI; this would take us too far, however. In any case, the aim of Chalmers (2012) is to minimize its content and thus to make his conceptual foundationalism as strong as possible.
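The scrutability claim behind PQTI can be stated schematically; this is a common rendering of the thesis going back to Chalmers and Jackson (2001), not a quotation:

```latex
% Schematic statement of a priori scrutability from the PQTI base.
\[
  \text{For every truth } S:\qquad
  PQTI \rightarrow S \ \text{ is knowable a priori},
\]
```

where \(P\) comprises the fundamental physical truths, \(Q\) the fundamental phenomenal truths, \(I\) the indexical truths (“I”, “now”), and \(T\) the “that’s all” closure clause.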

This guiding idea seems strongly inspired by the ontological paradigm, too strongly in my view. Ontologically, most of us are convinced that everything metaphysically supervenes on the microphysical distribution of matter. There is variation in what “everything” is supposed to comprise. Ontological dualists think that qualia do not thus supervene and are independent fundamental entities. The case that phenomenal concepts are fundamental concepts is certainly much stronger (although ordinary phenomenal concepts like appearing red are already quite complicated matters in my view; cf. Spohn 1997b). Another contested issue is whether modal facts supervene as well. In any case, all ordinary facts supervene on the microphysical distribution of matter; we need not be more precise.

However, professing supervenience is easy. An entirely different matter is to spell out the supervenience relations in detail. This is very hard empirical work generating a lot of a posteriori necessities, such as that water is H2O. Moreover, it will often take several steps until we arrive at the ultimate microphysical base; several levels of reality may have to be filled in. Our relevant knowledge is by now very rich, and still it remains very insufficient (try to think of the supervenience base of proteins or genes or money).

This is precisely why I don’t see any analogy between metaphysical supervenience and conceptual entailment. We do not acquire new concepts by applying idealized ratiocinative powers to the constructions from the basic PQTI concepts, and we do not verify propositions that are couched in new concepts within complete scenarios that are specified only in PQTI terms, even given idealized ratiocinative powers. This may perhaps work for concepts we already possess and for propositions we already grasp, precisely by the method of cases Chalmers often refers to, e.g., in (2012, Sect. 1.3). But even this seems doubtful to me. We may still miss several levels of bridge laws stated with concepts we also miss, and we may thus be unable to work out the relevant entailment from the PQTI basis to the propositions we grasp. It seems that in such a case further empirical work is required in order to establish the bridge laws needed.

For instance, could we disentangle the terribly complex world of proteins from a complete scenario in terms of quarks, leptons, etc.? Genes seem still worse, because we do not grasp them through their constitution, but begin to grasp them only through their functional patterns, which, however, are certainly not patterns specified in Q-terms. Could the conceptualization of proteins and genes be merely a matter of idealized ratiocinative powers? This sounds distorted. Hard empirical science seems the only way to get a grip on those concepts. With money, matters are definitely worse.

The point seems still clearer to me in the case of concepts which are only possibly, but not actually, possessed by us and which Chalmers wants to cover as well. We would not know which patterns to see in the complete basic PQTI scenario, which concepts to form, and which propositions to infer from it; we would have to explore all this, with our Q-sensitivity, in a real universe built from the ultimate entities referred to by the P-terms. Exploring such a universe is very different from pondering such a scenario. Somehow, Chalmers looks only at the extreme boundary of our rich conceptual body, the phenomenal and the microphysical, and thinks that the body is determined by its boundary. In my view, this does not do justice to the structure of the body. This structure is not given by quasi-deductive entailment relations, but rather by inductive relations, which are flexible, changeable, and open-textured (think, e.g., of the pervasive phenomenon of ceteris paribus conditions), and which are hard to grasp, though certainly not in a deductivist way. So, ultimately, I sense a deductivist prejudice in Chalmers’ conception, at which already the logical empiricists foundered.

True, I have not accounted for that structure, either. I have sketched only my opposing picture of driving the investigation of a sensible and intelligible universe to its ideal limit, in the course of which the conceptual body suitable for that universe will be developed and the truth about this universe will be determined in the pragmatic sense. Thus, again, I have not given any cogent argument. However, opposing the two pictures should have been illuminating and might at least have shifted plausibilities.

A final remark: We have seen that two-dimensional semantics has a built-in tendency to duplicate entities on the epistemic and the ontic side. Now, there already existed a great entity duplicator in the history of philosophy, namely Kant. I have come more and more to see parallels between him and the two-dimensional business. Of course, the parallels are not smooth; there are bumps. Still, I find the parallels helpful in understanding what might be going on in Kant. Here, the parallel runs between the epistemic and ontic worlds I talked about and Kant’s noumenal and phenomenal world. Kant speaks in the singular; he does not indulge in excesses of possibilities. Otherwise, though, the parallel is quite salient. It is, however, the reverse of what the labels might suggest. Lewisian universes, the epistemically possible worlds, or, for that matter, the World, correspond to Kant’s noumenal world, though they are not unknowable; they are only initially unknown, awaiting our grasp and exploration. And Wittgensteinian totalities, the ontically possible worlds, correspond to Kant’s phenomenal world, which is fully conceptualized and harbors things as appearances and their properties.

I am not sure whether an interpretation or a parody, a proof or a spoof of Kant is forthcoming here. However, I would like to recall my remark in Sect. 2 that in my view the three items making up 2D semantics in its epistemological interpretation, namely doxastic possibilities, contexts, and indices, need to be amended by sequences of objects. If so, we have to do it all over again, i.e., to distinguish ontic objects and epistemic objects and to explain their relation. Indeed, we would have to engage in another triplication. And then the comparison with Kant could be carried out in a much more sensible way.