1 Introduction

Mulligan claims in “Two Dogmas of Truthmaking” that in the sentence “the proposition that Sam exemplifies sadness is made true by the obtaining state of affairs that Sam exemplifies sadness”, exemplification is a real relation but “made true” expresses no relation, only a tie of essence. Mulligan distinguishes “is true because” and “is made true by”, as in the sentence “the proposition that p is true because the state of affairs that p obtains”; the second formula can be paraphrased by “is true because of”, as in the sentence “the proposition that Sam exists is true because of Sam”. Mulligan claims that in both cases “because” is not a relation but a connector, relating two sentences in the first case and a sentence and a noun in the second case. A relation has a semantic value, but a connector does not; therefore “making true” is not a relation.

In the ontological square, with universal and particular substances at its left corners and universal and particular accidents (or moments, or properties) at its right corners, exemplification is a diagonal relation: a universal accident (sadness) is exemplified in a particular substance (Sam). It relates ontological entities that belong to different categories but are both basic ontological types. By contrast, “made true” relates here a proposition (the ontology of which is complex) and its truthmaker, not just a state of affairs but the obtaining of that state of affairs. Mulligan is here in conflict with Armstrong, who thinks that there is a real relation between a complex entity, a particular “state of affairs”, and the truth of the proposition that Sam exemplifies sadness. From Armstrong’s perspective, the truth of this proposition depends on the state of affairs. For Mulligan, the tie of essence works the other way around: here the determination of the state of affairs depends on the exemplification expressed by the proposition.
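
For orientation only, the square just described can be laid out as a simple two-by-two array; the entries follow the Sam/sadness case used throughout the text, the cell for universal substance is left schematic, and exemplification runs along the diagonal from the universal accident to the particular substance:

% The ontological square as described above: substances at the left
% corners, accidents (moments, properties) at the right corners.
\[
\begin{array}{c|cc}
 & \text{substances (left corners)} & \text{accidents (right corners)} \\ \hline
\text{universal} & \text{universal substance} & \text{universal accident: \emph{sadness}} \\
\text{particular} & \text{particular substance: \emph{Sam}} & \text{particular accident: \emph{Sam's sadness}} \\
\end{array}
\]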

Let us take for granted that exemplification is a real ontological relation. What about the status of “making true”? In comparison with connectors in the usual sense—a connection between two formulas governed by rules for combining truth values—the “connector” “because” in the first case is more complex: it does not only return a truth value but also anchors the truth of the proposition that p in the ontological fact that the state of affairs that p obtains. If we admit that “propositions” can be considered as ontological correspondents of epistemic combinations between different categories of entities—properties and substances, for example, or tropes and compresence—such a connector relates the epistemic-ontological stance to the purely ontological one. The “connector” in the second case (“because of Sam”) shares this complexity in an even trickier way: it relates a proposition with an even stronger ontological import (the existence of Sam) to the ontological entity itself—via the proper name of this entity. We can then suspect that the specificity of these connectors, the impossibility for “made true” to be a “real” relation, and the necessity for it to be a different kind of link could be related to the relations between an epistemic stance and an ontological one.

The truthmaker doctrine concerns the relation between propositions and more basic ontological entities. Its slogan could have been: do not bother with the infinity of propositions. Sam and his sadness make true a lot of propositions, including “Sam is sad” and “Sam exemplifies sadness”. But we have to pay attention to Sam and his sadness, not to these propositions. We have to go down to the fundamental entities that make these propositions true.

As Mulligan mentioned in the same paper, this move towards the basic entities avoids a lot of problems and allows philosophers to be real realists, so to speak. But it has its own troubles. Truthmaker maximalism has problems with negative and disjunctive facts. Defining the truthmakers of negative facts would require determining the set of all the facts there are and taking its complement, but this requires a closed-world assumption, and facts like the undecidability of some propositions are problems for this assumption. Disjunctive facts would require determining which of the disjuncts is the case while at the same time denying that this determination is the case.

Remember that antirealists do not have these two problems. For them, negation is an epistemic operation with no realist claim, and so can be classical disjunction—leading some philosophers to prefer an “honest disjunction”, asserted only when we are in a position to be sure that one of the disjuncts is true.

If you are an antirealist, “making true” a proposition does not anchor it in reality but strengthens its epistemic accessibility. If you stick to the realism of the truthmaker doctrine, “making true” anchors the proposition in reality. This anchoring itself cannot be a real relation, because in that case the relation would have to be expressed by propositions, and we would go back up into the heaven of propositions without ever reaching the ground of truthmakers. We would be drawn towards the epistemically infinite potential of propositions, and the ontological parsimony of truthmakers would be lost.

This little story makes us suspect that the attachment of the truthmaker doctrine to a stronger and more intimate tie than that of a relation, as well as its difficulties with negative and disjunctive facts, could be a kind of negative trace of the absence of the epistemic side, of the will to dispense with it. Maybe the “tie of essence” related to the alethic problem has to be so specific because truth implies the combination of the epistemic and the ontological sides in a sort of unity.

In what follows, we suggest that instead of putting aside this epistemic side as linked with the antirealist stance, we should rather pay attention to the ontological side of the epistemic side, to the ontological bases of the epistemic processes as such. In this perspective, the difference between real ontological relations and “ties” (particularly when associated with truth) could be a trace, in the ontological way of speaking, of an epistemic way of speaking that the truthmaker trend has tried to put aside.

2 The Ontological Basis of the Epistemic Side

As is now obvious, we do not want to come back to a purely epistemic, antirealist, or constructionist stance: we believe that sound epistemic operations are anchored in ontological bases. Such ontological bases have to be found for the classical epistemic operations—identifying, classifying, and making inferences—and we could consider inferences as transformations between different ways of classifying. The ontological processes at the basis of these operations ensure access to the entities, their distinction from other entities as well as the possibility of putting them together with other entities, and the validity of the transformations from one distinction or collection to another. The constraints that these processes have to satisfy in order to be operational are at the same time ontological and epistemic constraints.

The truthmaker trend puts the focus on truth and the anchorage of true propositions in fundamental entities, instead of focussing on the constraints on operations and the processes (or relations) that are the ontological basis of the determination of propositions. But in order for propositions to be true, their constitution, and consequently the constraints on the processes that carve them up, have to be co-natural, so to speak, with the operations of identification, classification, and inference. If we are allowed to call Mulligan’s version of the truthmaker story a “proposition-entity” version, we could point to this other side of the truthmaker story as an “operation-process” version. Without suitable constraints on these processes, the combinations of propositions cannot be assumed to preserve truth from one proposition to another. Even the very basic compresence of two qualities could not be tracked without satisfying these constraints. As we cannot assume that all propositions are well constituted in this respect, we have to find which constraints ensure that this constitution is sound.

If developing this version were shown to be possible, then the problem of negative and disjunctive facts could vanish. Negation and disjunction can be defined in terms of constraints on the processes of classifying and making inferences. Classical disjunction “A or B” implies that the only operational classification does not determine which of A and B is the case. Negation is related to the constraint that passing from the left side of the turnstile to its right has to be marked by negation (and similarly when passing from the right side to the left). This implies that we cannot make use, at the same time and at the same level, both of the classifications to be transformed together with the processes that transform them, and of these very same processes together with the results of these transformations. Conversely, when we pass from the right to the left side, we either use the processes of transformation again without being sure of recovering the transformed classes, or we recover these classes, but by other processes of transformation. These constraints seem to be not only epistemic constraints but also constraints on ontological processes (think of physical transformations and mixtures).
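
As a purely illustrative gloss (and not as part of the ontological proposal itself), the constraint on the turnstile invoked here can be displayed with the standard Gentzen sequent rules for negation, in which a formula crosses the turnstile only at the price of being negated:

% Standard Gentzen rules: a formula moving from one side of the
% turnstile to the other is marked by negation.
\[
\frac{\Gamma \vdash \Delta, A}{\Gamma, \neg A \vdash \Delta}\ (\neg \mathrm{L})
\qquad\qquad
\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash \Delta, \neg A}\ (\neg \mathrm{R})
\]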

The “operation-process” version seems to be open to an objection. To each epistemic operation, another epistemic operation can be applied. Not only do we have to find ontological bases for the operations of identification, distinction, classification, and inference, but also further ontological bases for examining the validity of these identifications, classifications, and inferences, and so on and so forth.

There are two answers to this objection. First, the ontological bases of epistemic activity are not the particular processes but the constraints on these processes, and these constraints are the same at each level. Second, at the beginning—the identification of entities—basic entities can be assumed to be singularized by themselves: a particular (substance or quality) is singularized by its own being. When we start from substrates and particular properties, for example, we also presuppose the capacity of entities of one type to distinguish themselves from entities of other types. In a tropist ontology, particular qualities are presupposed to have the capacity to distinguish themselves from the relation of compresence. We will see that these last two moves are disputable.

At a further step, for example, classification, we have to add new processes (putting together and separating basic entities), but their ontological constraints, which determine their ontological types, remain the same all the way up. The differences between the classes depend not on new types of processes, but only on their combination with different processes of the same type.

If we assume that a fact can be determined just by identification and classification—corresponding in a sentence to a simple predication—then at this level we need neither negative facts nor disjunctive ones. Negation only acquires its full potential when it is related to inferences—transformations from one classification to another—and when there can be conflicts between two transformations. Negation is then the starting point of the revision of classifications. In the same way, disjunction is related not to conflict but to differences in the granularity of classifications: at one level of the process of classification, two classes cannot be differentiated, and this is not merely an effect of our cognitive limitations but can be a property of the real processes of putting things together.

3 Use and Explicitation

We can suppose that processes and their constraints are the ontological bases of epistemic operations. But the alliance between ontology and epistemology implies the possibility of the presence of the ontological basis without the activation of the upper levels of epistemic processes. For example, a process of classification or gathering could be present while its gathering with other processes of classification (a second-order collection) is absent, not to speak of the possible inferential transformations of some classifications into others. More generally, when a process is active or in use, it does not classify itself or transform itself; it does not make itself explicit. Other processes are needed for that operation, which could be called “explicitation”. Mulligan’s example shows exactly this point: saying that Sam “exemplifies” sadness makes the relation of exemplification explicit, while the state of affairs of Sam’s sadness does not make this relation explicit, but only provides the basic ontological entities that could be made explicit as bases for the relation of exemplification. While Armstrong claims that the basic entities are sufficient, Mulligan claims that the “explicitation” process is the condition of the state of affairs that Sam exemplifies sadness (a state of affairs which stands at a higher level than the state of affairs of Sam’s sadness, even if it is based on it).

We could generalize. Any ontology needs two regimes: a regime of “being in use”—entities are “at work”—in which the capacities needed for the entities to operate are simply presupposed and not made explicit, and a regime in which we begin to make the ontology of these capacities explicit. This difference is in a way analogous to the difference between propositions and the sets of their proofs. Martin-Löf gives the second as a semantics for the first, and this is sound and illuminating but would require a perfect and complete explicitation—an ideal situation. Making explicit the implicit presuppositions is in fact only possible step by step, from the fundamental entities towards the different levels of epistemic operations. In this way, even if making every operation completely explicit is an infinite task, one level does not have to wait for an infinite hierarchy of explicitations in order to begin to work.
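
For readers unfamiliar with the Martin-Löf picture alluded to here, the following schematic clauses (standard constructive type theory, recalled only as background) identify a proposition with the set of its proofs, so that asserting its truth is asserting that this set is inhabited:

% A proposition A is read as the set of its proofs; "A true" means
% that some proof a inhabits A, and complex propositions inherit
% their proofs from their parts.
\[
\begin{aligned}
A \ \text{true} \quad &\Longleftrightarrow\quad \text{there is a proof } a \ \text{with}\ a : A,\\[2pt]
\langle a, b \rangle : A \wedge B \quad &\text{whenever}\ a : A \ \text{and}\ b : B,\\[2pt]
f : A \supset B \quad &\text{whenever}\ f \ \text{takes each proof of } A \ \text{to a proof of } B.
\end{aligned}
\]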

But in order to be reasonably confident that no bad surprise will occur in this progression, a further condition has to be satisfied. Making explicit the ontology of epistemic operations at higher levels should not change the type of the basic ontological entities, and so on at every level of explicitation. To use an analogy, the projection of the successive operations of explicitation onto the level of basic entities should have a null measure. This seems possible if such ontological operations are processes, not in the usual sense of processes (four-dimensional entities extended in space and time), but processes in the sense of entities whose ways of being are their ways of doing. Whatever new ways of doing the processes at higher levels may present, they will still be of the same general type of process.

Another way of putting things is to require what could be called stationarity in the progression of explicitation. The process of making explicit the previous ways of processing does not change the type of the present process relative to the previous one, except of course that we have built upon it a new level of explicitation, so that the previous process is now made explicit. We can be assured that this stationarity will also be satisfied in further steps if the process of explicitation is of a kind that can be reapplied, not really to itself, but to its previous steps of use, as in a recursive process. In this way, stationarity—stability up to the differences of steps in the recursive process—is warranted.

4 Biases of the Top-Down Perspective

Stationarity does not always obtain. For example, paradoxes such as Russell’s paradox of the class of classes that do not belong to themselves cannot satisfy stationarity. Most of the paradoxes are built in a “bottom-up” and “top-down” way, adding new higher levels on top of the level of basic classes, in conjunction with negation. Such paradoxes arise from nonstationary attempts at explicitation.
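
In its standard formulation, recalled here only to make the nonstationarity vivid, the problematic class is defined at a higher level by quantifying over all classes and is then added back among the very classes it quantifies over:

% Russell's class: formed by quantifying over all classes, then
% treated as one of them, which yields the contradiction.
\[
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\Longleftrightarrow\; R \notin R .
\]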

Some apparently non-paradoxical ontological notions seem to be created in the same top-down way: we create them from higher levels of explicitation and then add these new entities to the ones at more basic levels. We have an explicitation problem and solve it by imagining a new entity; then we go backward and assume that this entity works at the level of the more basic ones. We forget that the explicitation, in order to have any hope of being stationary, has to be built on top of the basic entities: it makes their articulations explicit and is not supposed to create them.

For example, the relation of compresence between tropes is introduced in order to give an account, in the pure tropist world, of what appears in our usual world as objects, by linking several tropes together. It could be a kind of retrospective illusion, a retro-projection onto the basic entities of the epistemic operations introduced in order to identify more complex objects. The articulations constitutive of these complex objects have to be made explicit, and then we retrospectively imagine a type of articulation compatible with pure tropes (particular qualities or properties) and project it back onto the basic tropes. In a sense, compresence is introduced as a reminiscence of the problem of the attachment of qualities to substances, since a substance plus a quality can be considered as a proto-object. The problem is that while a substance can be assumed to distinguish itself by itself from another substance, and a property or quality from another quality or property, we are not sure that a substance can distinguish itself by itself from its quality. The articulation of a substance and its quality could be so tight that the two entities could not distinguish themselves from one another. The distinction here is one of those that have to be made explicit, by differentiating the type of substances and the type of qualities. But if this differentiation is necessary for explicitation, this does not imply that the difference that has been made explicit is itself an entity to be added to the fundamental ones.

If this sounds right, “compresence” could be the trace of a collapse of the distinction between the two ontological regimes, that of the functioning of basic entities and that of making their functioning explicit. This distinction works in a bottom-up way. If we try to make it work in a top-down way, we are tempted to transform categories that result from making explicit the articulations of fundamental entities into entities that are supposed to be at the same time the cement and the distinctive boundaries between entities of different types. But these entities are no more than traces of explicitation operations.

If we generalize this way of thinking and want to acknowledge that there could be entities that add something to basic ones, instead of being only explicitations of them, we could introduce a distinction among relations, contrasting relations with a “null projection”, related to the operations of making explicit presupposed ontological processes, with relations that add structure to entities. We would call the first type characterizing relations and the second structural relations. Instantiation is a characterizing relation.

Usual connections—except Bergman’s connection, a non-relational tie, which is a characterizing relation—are structural ones. Of course, in a sense, making explicit the articulation between fundamental entities adds structures (the structures of the processes of explicitation), but in principle these structures have a null projection at the basic level. Exemplification, by contrast, can be said to introduce a new structure: the diagonal relation between universal property and particular substrate, or universal substance and particular accident, is not taken here as a relation between the basic entities as such, a relation that makes explicit the articulation between two basic entities (it would then still be an explicitation relation). It works as a relation between two different types of entities, a relation at the level of types. This is a correct move as long as we do not imagine that such a relation exists at the level of basic entities: a particular substance and its property do not bother to distinguish themselves as types of entities; they are just linked together. This could be an argument against an ontology requiring the four corners of the ontological square. If universal properties—or universal substances—can be taken as playing at the same time the role of basic entities and the role of types (universalization could be a kind of typification), then they force us to misrepresent processes that only make types of entities explicit as processes that add some structure: the structure added at the level of types.

In this sense, Mulligan’s “tie because of essence” seems to be a pure explicitation relation. Note that Mulligan calls it a tie and not a relation precisely so that its introduction does not add something to the ontological picture, something similar to an articulation between different basic entities—as propositions are surely not basic entities.

Is “making true” a characterizing or a structural relation? The problem has to be distinguished from that of “facthood”: how is a fact made, and to what extent is a complex ontological entity like a state of affairs required? If we needed an additional structure for passing from fundamental entities to states of affairs, we would have confused processes that make types explicit with processes that add structure. Surely, most facts need a rich structure. But when we pass from the “fact” to the “state of affairs”, we pass from the structured complex to the explicitation of the articulation of its components, taken only globally, as allowing us to take the state of affairs as the ontological correspondent of a proposition in order to give a sense to the truth or falsity of this proposition. We pass from the constitution of the fact and its structure to the simple explicitation of the possibility for a fact to be isolated and considered as a global entity of a higher level. The articulation is not taken as an additional structure, but just as the explicitation of the “symplokê” (the global articulation) required for truth to be relevant.

Explicitation of the “symplokê” ensures that a state of affairs is a relevant ontological complex for the question of the truth of a proposition to be asked. But for questions of truth, we are not satisfied with mere relevance. We need “real” truth. This difference presupposes structural links between some cognitive processes and the state of affairs. But we can also limit ourselves to making explicit what is needed for truth. This explicitation is a peculiar one: it is an explicitation of the articulation between structuration and explicitation, as truth on the one hand needs a structuration, but on the other hand only makes explicit that the explicitation is coherent with the structuration.

Our hypothesis is that this explicitation of truth requires making explicit (1) the basic ontological entities, (2) the epistemic operations or processes and their ontological types, and finally (3) a kind of coherence between the structure of the bundle of entities or state of affairs (if there is such a structure) and the structure of the epistemic processes. The “tie because of essence” suggested by Mulligan seems to indicate that this coherence constraint is satisfied.

Theories of truth may focus on one of the structural requisites of truth or on its explicitational, characterizing aspect. If we emphasize the characterizing aspect of the notion of “making true” and believe (wrongly) that explicitation relations have to be cancelled out in order to access “real” truth, we are led to a disquotational theory of truth. If we are correspondentists, we focus on the structural aspect of the problem. Truth seems to imply a “tie” or an implicit articulation between a structural constitution and the constitution of an explicitation. Making this articulation explicit is a dangerous manoeuvre as long as we do not understand the role of explicitation and its constraint of having a “null projection”, its requirement of not adding ontological structure to the fundamental entities.