
1 Introduction

Preservationism was first developed by P.K. Schotch and R.E. Jennings in a series of papers going back to the late 1970s (Jennings and Schotch 1984, 1981; Jennings et al. 1981, 1980; Jennings and Johnston 1983; Johnston 1978; MacLeod and Schotch 2000; Schotch and Jennings 1980b; Thorn 2000). The central theme of this program in philosophical logic is the suggestion that the standard semantic understanding of logical consequence as (guaranteed) preservation of truth is too narrow. In particular, if a set of sentences cannot all be true, any sentence is guaranteed to preserve what truth is contained in that set. The closure of such a premise set under a guaranteed truth-preservation consequence relation includes every sentence in the language. Thus the truth preservation view of consequence inevitably trivialises the consequences of unsatisfiable premise sets.

However, if there are other semantic or syntactic properties of such a premise set that are worth preserving, we can constrain the consequences that follow from those premises by insisting that the closure of a set under an acceptable consequence relation retain these valuable properties. This is a rather modest proposal; it allows us to motivate constraints on inference from unsatisfiable sets without proposing any revision in our views of what sets of sentences are satisfiable, and it imposes no particular view of what properties of sets of sentences are worth preserving.

Level was Schotch and Jennings’ first suggestion for a preservable property that some inconsistent sets have; the level of a set of sentences Γ can be defined intuitively as the least cardinal n such that Γ can be divided amongst n classically consistent sets. On this definition, both {p ∨  ¬p} and {p, q} have level 1, while {p, q,  ¬p} has level 2, {p ∧ q, p ∧  ¬q,  ¬p ∧  ¬q} has level 3 and {p ∧  ¬p} (like all sets that include a contradiction) has no level at all, since it cannot be divided into n consistent sets for any cardinal n. Note also that some sets, such as the set of all literals in a language with an infinite collection of atoms, have level ω.
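The intuitive definition of level lends itself to a brute-force sketch. The following Python fragment is a hypothetical illustration with invented helper names (`satisfiable`, `partitions`, `level`); it is not drawn from the papers cited. It represents sentences as strings over Python's `and`/`or`/`not` and searches all partitions of a small premise set for the least number of classically consistent cells:

```python
from itertools import product

def satisfiable(cell, atoms):
    """A cell is consistent iff some truth assignment makes all of its
    formulas true; formulas are strings using Python's and/or/not."""
    return any(all(eval(f, {}, dict(zip(atoms, vals))) for f in cell)
               for vals in product([False, True], repeat=len(atoms)))

def partitions(items):
    """Yield every partition of a list into non-empty cells."""
    if not items:
        yield []
        return
    head, tail = items[0], items[1:]
    for part in partitions(tail):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield part + [[head]]

def level(gamma, atoms):
    """Least n such that gamma divides into n consistent cells; None when no
    division exists (sets containing a contradiction have no level)."""
    sizes = [len(p) for p in partitions(list(gamma))
             if all(satisfiable(cell, atoms) for cell in p)]
    return min(sizes) if sizes else None
```

On the examples above this returns 1 for {p, q}, 2 for {p, q, ¬p}, 3 for {p ∧ q, p ∧ ¬q, ¬p ∧ ¬q}, and None for {p ∧ ¬p}; the brute-force search over partitions is of course only feasible for small finite sets.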

This first, rough definition can be refined in two ways. First, to ensure that every set of sentences has a level, we assign ∞ as the level of sets that cannot be divided into consistent sets. More subtly, in order to recognise the special consistency of the empty set and its consequences (the tautologies)—perhaps most vividly illustrated by the fact that unions of such sets with consistent sets are always consistent—we assign such sets the special level 0, as in Schotch and Jennings (1980a, 1989).

The classical consequence relation preserves levels 0 and 1. But Schotch and Jennings’ forcing relation preserves all levels. The most obvious inferential restriction that forcing imposes (in comparison with the classical consequence relation) is the rejection of conjunction introduction, and its replacement with a sequence of level-dependent rules for aggregation: where n is the level of our premise set, the set is closed under singleton consequence and the rule

$$2/n+1 : \quad \frac{\{{\alpha }_{1},\ldots ,{\alpha }_{n+1}\}} {{\vee }_{1\leq i\neq j\leq n+1}({\alpha }_{i} \wedge {\alpha }_{j})}$$

Footnote 1: It is worth pointing out here that the 2/n+1 rule can be replaced by any operation that forms the disjunction of conjunctions of the edges of an n-uncolourable hypergraph on a collection of input sentences. See the appendix of Brown and Schotch (1999) for details.
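The conclusion of the 2/n+1 rule is mechanical to construct: it is simply the disjunction of all pairwise conjunctions of the n+1 input sentences. A small sketch (the helper name `two_over_n_plus_1` is mine, not from the source), with formulas as strings:

```python
from itertools import combinations

def two_over_n_plus_1(alphas):
    """Form the 2/n+1 conclusion for n+1 input formulas (strings): the
    disjunction of all pairwise conjunctions among them."""
    return " or ".join(f"(({a}) and ({b}))"
                       for a, b in combinations(alphas, 2))
```

For example, from the three inputs p, q, r it builds ((p) and (q)) or ((p) and (r)) or ((q) and (r)), the weakly aggregated conclusion a level-2 premise set licenses.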

Preservationism is a broad tent; there are many other interesting properties worth preserving. The usual syntactic standard of acceptability is consistency; applying this standard, we get a variation on the usual account of the consequence relation:

Γ ⊢ α ⇔ α is a consistent extension of every consistent extension of Γ.

Similarly, the usual semantic standard of acceptability is satisfiability; applying this standard, we get:

Γ⊧α ⇔ α is a satisfiable extension of every satisfiable extension of Γ.

It’s easy to see that, given classical accounts of consistency and satisfiability, these definitions will give us the familiar classical consequence relation—but these definitions focus our attention on the way in which closing a premise set under the logical consequence relation preserves something important: the consistent (satisfiable) extensions of our premises remain consistent (satisfiable) extensions when we extend the set by adding its logical consequences to it. We might say that extending a set in this way begs no questions that aren’t already begged in accepting the set.
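For a finite propositional language the satisfiability-based definition coincides with the usual truth-table test: α is a satisfiable extension of every satisfiable extension of Γ exactly when no valuation satisfies Γ while falsifying α, i.e. when Γ ∪ {¬α} is unsatisfiable. A minimal sketch (the helper name `entails` is mine), with formulas as strings over Python's `and`/`or`/`not`:

```python
from itertools import product

def entails(gamma, alpha, atoms):
    """Classical consequence by truth tables: no valuation satisfies every
    member of gamma while falsifying alpha."""
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(eval(g, {}, env) for g in gamma) and not eval(alpha, {}, env):
            return False
    return True
```

Note that on this test an unsatisfiable premise set entails every sentence, which is exactly the trivialisation the preservationist programme targets.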

In Brown (1999), I proposed a general way of thinking about consequence relations as preserving desirable features of premise sets: given a standard of acceptability for extensions of sets of sentences, we say that:

α follows from Γ iff α is an acceptable extension of every acceptable extension of Γ.Footnote 1

Given this general account of consequence relations, we can define new consequence relations by identifying other properties that can be used as standards of acceptability in the new consequence relation. For example, preserving a measure of the ambiguity required to produce a consistent image of a premise set leads us to a semantics for Priest’s system LP (Priest 1979; Brown 1999), while preserving the level as defined above gives us the set-sentence (Γ[ ⊢ α) version of Schotch and Jennings’ forcing relation (Schotch and Jennings 1980a, 1989).

But approaching things this way is somewhat inelegant. Treating consequence relations as relations between two different types of entities, with premise sets on the left and single sentences on the right, disguises some important symmetries. In particular, we will be interested here in symmetries relating conjunction to disjunction, and assertion to denial.

In one direction, we can see how to reduce the usual set-sentence presentation of the classical consequence relation to a relation between sentences by pointing out that Γ ⊢ α holds if and only if {γ} ⊢ α holds, where γ is a conjunction of members of Γ: in classical logic (and in any other compact, fully aggregative logic that has a conjunction operator) closure under conjunction combines premises into single sentences from one or another of which any conclusion following from Γ must follow. However, bringing a connective into our account of consequence in this way is limiting: the resulting account will be restricted to languages that have such a conjunction, or to which one can be conservatively added. So it’s more general, and more illuminating, to type-raise conclusions from sentences to sets of sentences, and treat the consequence relation as holding between premise sets and conclusion sets.

In classical multiple-conclusion logic we say that Γ ⊢ Δ if the assertion of all members of Γ commits us to the assertion of at least one member of Δ. Assuming that denial and assertion are incompatible, i.e. that we cannot simultaneously deny and assert a sentence, this is equivalent to saying that Γ ⊢ Δ if and only if the denial of all members of Δ commits us to the denial of at least one member of Γ. So the familiar logic of assertion runs from left to right, while the logic of denial runs from right to left.

In the rest of this paper we will consider two paraconsistent set-sentence consequence relations, examine the role of preservation in them, and develop corresponding set–set consequence relations. In the conclusion, reflections on these examples will lead to insights on the role of preservation in consequence relations.

2 Ambiguity, LP and FDE

The central idea of the logics we will discuss in this section is quite simple. Historically, a standard way of responding to apparent inconsistencies has been to say, instead, that there is an ambiguity in the sentences that appear to conflict. For example, when we say both that this wine is dry and that this wine (being wet) is not dry, we seem to be contradicting ourselves. But we know perfectly well that there is an ambiguity here: the use of ‘dry’ in ‘this wine is dry’ means (at least normally) something like ‘not sweet’ or ‘having low sugar content’, while the use of ‘dry’ in ‘this wine is not dry’ means something like ‘is not a liquid’ or ‘contains very little water’. The apparent contradiction is merely apparent.

We can formulate a general approach to dealing with inconsistent premise sets along these lines: any inconsistent set of sentences can be rendered consistent by treating some of the atoms appearing in the set as ambiguous. We can make that ambiguity explicit by replacing instances of each of those atoms with one of two new atoms, creating a disambiguated image of the original set. However, without the informal guidance we have in the example above, we don’t really know which such disambiguations are the right ones (if any are). So we will take all the different ways of disambiguating atoms of the original set into consideration. Some of these will produce consistent images of our premises. In general some will do so by treating more atoms as ambiguous, while others make do with fewer. Our logic will focus on least sets of atoms that allow the projection of a consistent image of our premises, i.e. sets of atoms that are enough to do the job, and for which no proper subset is enough. Here we present the main ideas in simple form; more formal definitions can be found in Appendix A.

We begin by defining how to project a consistent image of a set of sentences Γ using a set of sentence letters A:

Γ′ is a consistent image of Γ based on A iff

  1. A is a set of sentence letters.

  2. Γ′ is consistent.

  3. Γ′ results from the substitution, for each occurrence of each member a of A in Γ, of one of a pair of new sentence letters, a_f and a_t.

Acceptability is the central notion in the general preservationist proposal for consequence described above. The intuitive idea is that a set Δ is acceptable, given a commitment to a premise set Γ, if and only if:

  1. Δ includes Γ.

  2. Extending Γ to Δ does not make things worse, by our standards of acceptability.

In our first ambiguity logic, we will say Δ is ambiguity-acceptable relative to Γ iff there is some way of projecting a consistent image of Δ that does not require a larger base of sentence letters than the minimal projection bases for Γ. This means that Δ may resolve uncertainties about how we will project a consistent image that Γ leaves open, by eliminating some of the minimal projection bases that work for Γ. For example, we can project consistent images of the set Σ = { p, q,  ¬(p ∧ q)} based on either {p} or {q}. So {p, q,  ¬(p ∧ q),  ¬p} will be acceptable relative to Σ (as will {p, q,  ¬(p ∧ q),  ¬q}): consistent images of both these sets can be projected using one or the other of the least sets sufficient for projecting a consistent image of Σ. However {p, q,  ¬(p ∧ q),  ¬p,  ¬q} is not acceptable: both p and q must be treated ambiguously to create a consistent image of this set, so we cannot project a consistent image of this set based on either {p} or {q}, the least sets that were sufficient to project consistent images of Σ. Subtler ways of making things worse retain some of the original ambiguity-set’s members, but require that we extend others. We rule all these undesirable extensions out by requiring the extended set’s least projection bases to be a subset of those of the original set.
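The ambiguity machinery of this section can be prototyped directly. The sketch below is my own naive brute-force construction (all names—`images`, `amb`, `accept`—are invented): it projects disambiguated images by replacing each occurrence of an ambiguous atom with a `_t` or `_f` variant, computes the least projection bases Amb(Γ), and tests acceptability as the requirement that Amb(Γ ∪ Δ) ⊆ Amb(Γ):

```python
import re
from itertools import product, combinations

KEYWORDS = {"and", "or", "not", "True", "False"}

def atoms_of(formulas):
    """Sentence letters occurring in a list of formula strings."""
    return sorted({w for f in formulas
                   for w in re.findall(r"[A-Za-z_]\w*", f)} - KEYWORDS)

def satisfiable(formulas):
    """Brute-force truth-table consistency check."""
    atoms = atoms_of(formulas)
    return any(all(eval(f, {}, dict(zip(atoms, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(atoms)))

def images(formula, amb_atoms):
    """Every disambiguated image of one formula: each occurrence of an atom
    in amb_atoms is independently replaced by its _t or _f variant."""
    if amb_atoms:
        m = re.search(r"\b(" + "|".join(sorted(amb_atoms)) + r")\b", formula)
        if m:
            pre, a, post = formula[:m.start()], m.group(1), formula[m.end():]
            for rest in images(post, amb_atoms):
                yield pre + a + "_t" + rest
                yield pre + a + "_f" + rest
            return
    yield formula

def has_consistent_image(gamma, amb_atoms):
    """Can gamma be projected onto a consistent image based on amb_atoms?"""
    return any(satisfiable(list(pick))
               for pick in product(*[list(images(f, amb_atoms)) for f in gamma]))

def amb(gamma):
    """Amb(Γ): the least sets of atoms basing a consistent image of Γ."""
    atoms = atoms_of(gamma)
    subsets = [frozenset(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    working = [A for A in subsets if has_consistent_image(gamma, A)]
    return {A for A in working if not any(B < A for B in working)}

def accept(delta, gamma):
    """Δ is ambiguity-acceptable relative to Γ: Amb(Γ ∪ Δ) ⊆ Amb(Γ)."""
    return amb(list(gamma) + list(delta)) <= amb(list(gamma))
```

On Σ = {p, q, ¬(p ∧ q)} this yields the least bases {p} and {q}; extending by ¬p is acceptable, while extending by both ¬p and ¬q is not, matching the discussion above.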

To arrive at our new consequence relation we simply declare, following the pattern presented in our introduction, that α follows from Γ in our ambiguity logic (which we write Γ ⊢_Amb α) if and only if α is an ambiguity-acceptable extension of every ambiguity-acceptable extension of Γ. In Brown (1999) I show that this consequence relation is identical to the consequence relation of Priest’s logic of paradox (LP).

However, the LP consequence relation is inelegant from the point of view of this paper. Its set-sentence formulation leads to a number of ugly asymmetries. In particular, it treats absurdities on the left very differently from how it treats their duals, the theorems, on the right: In LP, classical contradictions on the left don’t trivialise, but classical tautologies on the right do, that is, any such tautology follows from every premise set. These asymmetries persist even when we express LP in a multiple-conclusion form: multiple-conclusion LP blocks the trivialisation of inconsistent premise sets (in general, not every conclusion set follows from these) but it still trivialises (from right to left) all conclusion sets that cannot be consistently denied. These right-trivial sets are the sets whose closure under disjunction includes a tautology, the same sets that are right-trivial in classical logic.

First degree entailment (FDE) is a well-known logic closely related to LP that provides a symmetrical treatment of inconsistency on the left and its dual on the right. In Brown (2001) I present an ambiguity-based account capturing the consequence relation of FDE; the presentation here is a modified version of the original, emphasizing the idea of acceptability and the role of extensions of premise and conclusion sets.

Extending this use of ambiguity to provide a preservationist account of FDE requires careful development of the symmetries of the consequence relation. A direct approach to re-imposing the classical left-right symmetries on our ambiguity semantics for LP is to dualise the property preserved from left to right, and require that this dual property be preserved from right to left. Having used ambiguity to project consistent images of the premise set, we now use ambiguity in order to project consistently deniable images of the conclusion set.

In this sentence-set consequence relation, we say the set Δ follows from a sentence γ, or \(\gamma {\vdash }_{{\text{ Amb}}^{{_\ast}}}\Delta \), if and only if γ is an acceptable extension of every acceptable extension of Δ, considered as a set we are committed to denying. But acceptability of an extension is now defined in terms of preservation of the right-ambiguity set of Δ, the set of least sets of atoms whose ambiguous treatment allows us to project a consistently deniable image of Δ: the right-ambiguity set of an acceptable extension of Δ must be a subset of the right-ambiguity set of Δ.

We can combine these two asymmetrical consequence relations to construct a symmetrical one by treating sets on the left as closed under conjunction and sets on the right as closed under disjunction, and demanding that both these consequence relations apply:

$$\Gamma {\vdash }_{\text{ Sym}}\Delta \Leftrightarrow \exists \,\delta \in \text{ Cl}(\Delta ,\vee ) : \Gamma {\vdash }_{\text{ Amb}}\delta \,\&\,\exists \,\gamma \in \text{ Cl}(\Gamma ,\wedge ) : \gamma {\vdash }_{{\text{ Amb}}^{{_\ast}}}\Delta .$$

Alternatively (linking both to a purely sentential consequence relation, so that the symmetrical set–set relation arises from type-raising a symmetrical sentence-sentence relation), we can put it this way instead:

$$\Gamma {\vdash }_{\text{ Sym}}\Delta \Leftrightarrow \exists \,\delta \in \text{ Cl}(\Delta ,\vee ),\exists \,\gamma \in \text{ Cl}(\Gamma ,\wedge ) : \gamma {\vdash }_{\text{ Sym}}\delta .$$

Where \(\gamma {\vdash }_{\text{ Sym}}\delta \Leftrightarrow \{ \gamma \} {\vdash }_{\text{ Amb}}\delta \,\&\,\gamma {\vdash }_{{\text{ Amb}}^{{_\ast}}}\{\delta \}\).

But the result of this manoeuvre is a logic sometimes called K*,Footnote 2 not the more elegant FDE. FDE and K* agree except when classically trivial sets lie on both the left and the right. In those cases the classical triviality of the set on the other side ensures that the property we’re preserving is indeed preserved. So the logic just described trivialises when classically trivial sets appear on both the left and the right. Since our aim here is to use ambiguity to prevent trivialisation of the consequence relation as far as we can, we need to be a bit subtler about how we arrive at our symmetrical, set–set consequence relation.

The way forward here is to simultaneously consider consistent images of premise sets and consistently deniable images of conclusion sets, while requiring that the sets of sentence letters used to project these images be disjoint Footnote 3:

Γ ⊢_FDE Δ iff every such consistent image of Γ can be consistently extended by some member of each compatible non-trivial image of Δ (i.e. each non-trivial image of Δ based on a disjoint set of sentence letters),

or equivalently:

Γ ⊢_FDE Δ iff every such non-trivial image of Δ can be extended by some element of each compatible non-contradictory image of the premise set while preserving its consistent deniability.

The new point I want to make here, however, is better seen in the light of another way of expressing this relation—one that focuses less on what feature of our premise and conclusion sets is preserved from left to right and right to left, and more on what we want to preserve regarding the consequence relation itself. The point lurking behind the characterisations of Γ ⊢_FDE Δ above is that what we are preserving is the classical consequence relation itself (⊢), under a range of minimally ambiguous, consistent and consistently deniable images of our premises and conclusions, respectively:

Γ ⊢_FDE Δ iff every image of the premise and conclusion sets, I(Γ) and I∗(Δ), obtained by treating disjoint sets of sentence letters drawn from Amb(Γ) and Amb∗(Δ) as ambiguous, is such that I(Γ) ⊢ I∗(Δ).

This suggests another interesting strategy for producing new consequence relations from old. We can say that the new consequence relation holds when and only when the old relation holds in all of a range of cases anchored to (or centered on) the original premise and conclusion sets. This strategy can reduce or eliminate trivialisation by ensuring that the range of cases considered includes some non-trivial ones, even when the instance forming our ‘anchor’ is trivial.

3 Forcing

In this section of the paper, we apply these ideas to Schotch and Jennings’ weakly aggregative forcing relation, to arrive at a set–set version of the relation. Completeness for the resulting consequence relation is proved in an appendix.

In the more sophisticated multiple-conclusion version of the forcing relation in Schotch and Jennings (1989), we block right-to-left as well as left-to-right aggregative trivialisation. This requires giving up full aggregation on the right as well as the left: both members of the dual pair of classical rules,

$$\begin{array}{rcl} & & \frac{\Gamma [\vdash \Delta ,\alpha ,\Gamma [\vdash \Delta ,\beta } {\Gamma [\vdash \Delta ,\alpha \wedge \beta } \\ & & \frac{\Gamma ,\alpha [\vdash \Delta ,\Gamma ,\beta [\vdash \Delta } {\Gamma ,\alpha \vee \beta [\vdash \Delta } \\ \end{array}$$

will fail in our new consequence relation. Following the pattern already applied to our ambiguity logic above, this logic requires a dualisation of level, ℓ′, that applies to conclusion sets.

We begin by defining a generalisation of non-triviality for conclusion sets:

$$\mathrm{NonTriv}(\Delta ,\xi ) \Leftrightarrow \exists \,A =\{ {a}_{1},\ldots ,{a}_{\xi }\} : \forall \delta \in \Delta \,\exists \,i : \delta \vdash {a}_{i}\,\&\,\forall i,\,\oslash \nvdash {a}_{i}.$$

So just as before we will divide a set’s content amongst a family of sets, but this time we require that the sets each follow from some member of our set, and that the sets all be consistently deniable.

Next we define a measure of Δ’s degree of triviality based on this generalisation:

ℓ′(Δ) = the least ξ such that NonTriv(Δ, ξ), else ∞.
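For a small worked illustration (my examples, not drawn from the original text), consider the conclusion sets {p, ¬p} and {p ∨ ¬p}:

$$\mathcal{l}'(\{p,\neg p\}) = 2,\qquad \mathcal{l}'(\{p \vee \neg p\}) = \infty .$$

In the first case the family a₁ = p, a₂ = ¬p witnesses NonTriv({p, ¬p}, 2), and no single sentence can do the job: any a with both p ⊢ a and ¬p ⊢ a is entailed by p ∨ ¬p and so is a theorem, violating ⊘ ⊬ a. In the second case every sentence entailed by the tautology p ∨ ¬p is itself a theorem, so no non-trivial covering of any cardinality exists.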

The intermediate consequence relation corresponding to K* above results when we produce a symmetrical consequence relation by requiring preservation of both ℓ and ℓ′ from left to right and right to left respectively. This consequence relation, like K*, holds trivially whenever classically trivial sets appear on both left and right: the inconsistency of Γ ensures that Δ can be extended by some member of Γ while preserving ℓ′(Δ), and the inconsistency of denying Δ ensures that Γ can be extended by some member of Δ while preserving ℓ(Γ). Once again, focusing just on preserving desirable properties of the premise and conclusion sets when extending them leads us to a nicely symmetrical consequence relation, but one that doesn’t capture the full potential of the logical tools we’re using. And once again, we can improve on this by applying the division of Γ and Δ simultaneously:

Set-Set: Γ[ ⊢ Δ ⇔ For every ℓ(Γ)-covering family of Γ, A, and every ℓ′(Δ)-covering family of Δ, B, there is some pair < a, b > , a ∈ A and b ∈ B, such that a ⊢ b.

Aggregation of the premise and conclusion sets in our rules for the forcing relation is governed by a simple pigeonhole principle: given ℓ(Γ) + 1 sentences contained in Γ, at least two of the sentences must follow from the same member of the ℓ(Γ)-membered covering family of sets. So some conjunction amongst these sentences of Γ will follow from some member of every covering family. So our consequence relation will (as rule 3 below indicates) allow us to infer the disjunction of all the pairwise conjunctions amongst these sentences. In general, this disjunction of pairwise conjunctions will not follow from any of the sentences in Γ, so what we have here is a form of aggregation, though weaker than the familiar rule of conjunction introduction. Similarly, given ℓ′(Δ) + 1 elements of Δ, at least two of the elements must imply the same member of our right-covering family of sets. This implies that the disjunction of those two elements will imply that member of the covering family, and there will be such a pair of elements for every covering family. So the conjunction of such disjunctions will imply some member of every right-covering family for Δ. In general, such a conjunction of disjunctions will not imply any of the original sentences drawn from Δ. So we have a weakened form of aggregation on the right, though not as powerful as closure under disjunction, the form that aggregation on the right takes in the classical case.
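To make the pigeonhole reasoning concrete, here are the instances of this aggregation for level 2 (my worked instances of the 2/n+1 rules with n = 2, not additions to the rules themselves). With ℓ(Γ) = 2, any three left consequences yield the disjunction of their pairwise conjunctions; dually, with ℓ′(Δ) = 2, any three right premises yield the conjunction of their pairwise disjunctions:

$$\frac{\Gamma [\vdash \Delta ,{\alpha }_{1}\quad \Gamma [\vdash \Delta ,{\alpha }_{2}\quad \Gamma [\vdash \Delta ,{\alpha }_{3}} {\Gamma [\vdash \Delta ,({\alpha }_{1} \wedge {\alpha }_{2}) \vee ({\alpha }_{1} \wedge {\alpha }_{3}) \vee ({\alpha }_{2} \wedge {\alpha }_{3})}\qquad \frac{\Gamma ,{\alpha }_{1}[\vdash \Delta \quad \Gamma ,{\alpha }_{2}[\vdash \Delta \quad \Gamma ,{\alpha }_{3}[\vdash \Delta } {\Gamma ,({\alpha }_{1} \vee {\alpha }_{2}) \wedge ({\alpha }_{1} \vee {\alpha }_{3}) \wedge ({\alpha }_{2} \vee {\alpha }_{3})[\vdash \Delta }$$

In each case the pigeonhole guarantee is the same: with only two cells available, two of the three sentences must land in the same cell, and it is their conjunction (disjunction, on the right) that the cell supports.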

The upshot is that our Set–Set condition holds only if the aggregation the level of Γ allows us to apply on the left allows us to produce a formula a, and the aggregation that the right-level of Δ allows us to apply on the right allows us to produce a formula b, such that b follows from a. The equivalence of this characterisation and the official Set–Set condition above is not obvious; in fact, it is the main lemma in a completeness proof for the following rules for multiple-conclusion forcing:

  1. Pres ⊢ : \(\frac{\Gamma [\vdash \alpha ,\alpha \vdash \beta ,\beta [\vdash \Delta } {\Gamma [\vdash \Delta }\)

  2. Ref: \(\frac{\alpha \in \Gamma } {\Gamma [\vdash \alpha },\quad \frac{\beta \in \Delta } {\beta [\vdash \Delta }\)

  3. 2/n+1(L): \(\frac{\Gamma [\vdash \Delta ,{\alpha }_{1}\ldots \Gamma [\vdash \Delta ,{\alpha }_{n+1}} {\Gamma [\vdash \Delta ,{\vee }_{1\leq i\neq j\leq n+1}({\alpha }_{i}\wedge {\alpha }_{j})}\quad \text{ where }n = \mathcal{l}(\Gamma )\)

  4. 2/n+1(R): \(\frac{\Gamma ,{\alpha }_{1}[\vdash \Delta ,\ldots ,\Gamma ,{\alpha }_{n+1}[\vdash \Delta } {\Gamma ,{\wedge }_{1\leq i\neq j\leq n+1}({\alpha }_{i}\vee {\alpha }_{j})[\vdash \Delta }\quad \text{ where }n = \mathcal{l}'(\Delta )\)

  5. Trans: \(\frac{\Gamma ,\alpha [\vdash \Delta ,\Gamma [\vdash \alpha ,\Delta } {\Gamma [\vdash \Delta }\)

The proof of completeness for these rules, which generalises the proof of completeness for single-conclusion forcing, appears in an appendix (see Apostoli and Brown 1995 for a version of the proof for single-conclusion forcing, applied to weakly aggregative modal logics). The new proof shows that the aggregation rules (3 and 4 above) suffice to capture all the aggregation that results from the division of the premise and conclusion set’s contents amongst ℓ(Γ) and ℓ′(Δ) cells, respectively. The rules, it emerges, are enough to provide a pair of single, bridging sentences to which we can apply rule 1 to complete our derivation, whenever Γ[ ⊢ Δ.

The main point that I want to make about this logic here is that, like the version of FDE developed above, it can be described as preserving the classical consequence relation throughout a range of related premise and conclusion sets. Thus far, we say that Γ[ ⊢ Δ holds if and only if the classical consequence relation holds between some pair of cells in every ℓ(Γ), ℓ′(Δ) covering of Γ and Δ’s content. Since these levels are defined as the minimum cardinality required for a family of sets to cover Γ and Δ with only non-trivial members, this requirement will be non-trivial so long as some non-trivial divisions of Γ and Δ’s content exist, i.e. so long as Γ and Δ have no individually trivial members. However, we can capture this preservation of ⊢ in yet another way, which emphasises the parallel between our ambiguity-based treatments of FDE and forcing. Instead of dividing Γ’s content amongst the members of some family of sets indexed to ℓ(Γ), we can achieve the same effect by means of ambiguity, so long as we require that no ambiguity arise within a single sentence.

When ℓ(Γ) = n, we replace the sentence letters of Γ with n sets of new sentence letters, and produce images of Γ that replace the sentence letters in each γ ∈ Γ with sentence letters drawn from one of these sets. Supposing that no numerical subscripts appear in the sentence letters of Γ, we can replace the sentence letters of each sentence in Γ with the same letters combined with subscripts drawn from 1, …, n. Then we can say that Γ[ ⊢ δ if and only if every such image of Γ has some such image of δ (i.e. an image of δ produced by replacing its sentence letters with letters combined with one of our subscripts) as a classical consequence. Similarly, we can say that γ[ ⊢ Δ if and only if every such image of Δ is such that some such image of γ (i.e. an image of γ produced by replacing its sentence letters in the same way) is a premise from which Δ follows classically. Finally, we can invoke the singleton bridge principle, and say that Γ[ ⊢ Δ if and only if for some sentences < α, β > , Γ[ ⊢ α, β[ ⊢ Δ, and α ⊢ β.

4 Final Reflections on Ambiguity and Aggregation

The existence of this ambiguity-based approach to forcing, together with the broader, strategic parallel between FDE and forcing as ways to ameliorate the trivialisation of classical logic suggests these logics are more closely related than their proponents have thought: ambiguity and aggregation are closely linked. Allowing for ambiguity weakens aggregation by dividing up the content of what is said, whether in a single sentence or between one sentence and another. FDE results from limiting the amount of ambiguity allowed according to the ambiguity sets of Γ and Δ, while insisting that when the ambiguity of a sentence letter is invoked to produce consistent images of Γ, we don’t simultaneously use ambiguity of the same sentence letter to produce a consistently deniable image of Δ. Forcing results from limiting ambiguity in two different ways, first requiring that no ambiguity arise within a sentence, and second, allowing no more ambiguity than is then required to divide the content of Γ to the point of consistency, and to divide the content of Δ to the point of consistent deniability.

Weakening consequence relations when they give us undesirable results is an important job in philosophical logic. Preserving a base consequence relation under a range of images of the premise and conclusion sets is one general route to such weakenings, a route I suspect has more interesting results in store.

5 Appendix A: Formal Definitions for Ambiguity Logic

First, we define Amb(Γ) as the set of least sets each of which is the base of some projection of a consistent image of Γ:

$$\text{ Amb}(\Gamma ) =\{ A\vert \exists \,\Gamma ' : \text{ ConIm}(\Gamma ',\Gamma ,A) \wedge \forall A',A' \subset A,\neg \exists \,\Gamma '' : \text{ ConIm}(\Gamma '',\Gamma ,A')\}$$

With this in hand, we can give a formal definition of the preservation relation:

$$\Delta \ \mathrm{is\ an\ }\text{ Amb}(\Gamma )\mbox{ -}\mathrm{preserving\ extension\ of}\ \Gamma \Leftrightarrow \text{ Amb}(\Gamma \cup \Delta ) \subseteq \text{ Amb}(\Gamma ).$$

To indicate that this preservation relation determines the acceptability predicate for our logic, we define:

$$\text{ Accept}(\Delta ,\Gamma ) \Leftrightarrow \Delta \ \mathrm{is\ an}\ \text{ Amb}(\Gamma ) -\mathrm{preserving\ extension\ of}\ \Gamma ,$$

That is,

$$\text{ Accept}(\Delta ,\Gamma ) \Leftrightarrow \text{ Amb}(\Gamma \cup \Delta ) \subseteq \text{ Amb}(\Gamma ).$$

Combining this with our general reading of consequence relations as preserving the acceptability of all acceptable extensions, leads us to a new consequence relation:

$$\Gamma {\vdash }_{\text{ Amb}}\alpha \Leftrightarrow \forall \Delta : \text{ Accept}(\Delta ,\Gamma ) \rightarrow \text{ Accept}(\Delta \cup \{ \alpha \},\Gamma )$$

Second, our sentence-set, right-to-left consequence relation preserves an ambiguity measure of consistent deniability. We begin by defining Amb∗(Δ):

$$\begin{array}{rcl} & &{ \text{ Amb}}^{{_\ast}}(\Delta ) =\{ A\vert \exists \,\Delta ' : \text{ ConDenIm}(\Delta ',\Delta ,A) \wedge \\ & &\forall A',A' \subset A,\neg \exists \,\Delta '' : \text{ ConDenIm}(\Delta '',\Delta ,A')\} \\ \end{array}$$

where ConDenIm(Δ′, Δ, A) says that, by treating the set of atoms A as ambiguous, we can project Δ′, a consistently deniable image of Δ. Then we define acceptable extensions of Δ:

$$\begin{array}{rcl} & & \Gamma \ \mathrm{is\ an}\ {\text{ Amb}}^{{_\ast}}(\Delta )\mathrm{-preserving\ extension\ of}\ \Delta \Leftrightarrow \\ & & \Delta \subseteq \Gamma \mathrm{and}\ {\text{ Amb}}^{{_\ast}}(\Delta \cup \Gamma ) \subseteq {\text{ Amb}}^{{_\ast}}(\Delta ) \\ \end{array}$$

We write this Accept∗(Γ, Δ), and write the condition for our right-to-left, deniability-preserving consequence relation as:

$$\gamma {\vdash }_{{\text{ Amb}}^{{_\ast}}}\Delta \Leftrightarrow \forall \Gamma :{ \text{ Accept}}^{{_\ast}}(\Gamma ,\Delta ) \rightarrow {\text{ Accept}}^{{_\ast}}(\Gamma \cup \{ \gamma \},\Delta )$$

6 Appendix B: A Proof of Completeness for Set–Set Forcing, with Equivalence of Two Representation Theorems

This appendix presents a completeness proof for set–set forcing, developed from the singleton-bridge approach: a view of forcing that focuses separately on how the limited aggregation allowed by forcing enables us to combine the contents of our premises and of our conclusions. A proof that the singleton-bridge approach is equivalent to a standard characterisation of forcing in terms of partitions of premise and conclusion sets, with a simple sort of preservation of the classical ‘ ⊢ ’ across the partitions, amounts, in the end, to a proof of completeness for our rules.

6.1 Definitions and Motivations

6.1.1 Preliminaries: Formal Definitions of Level and Set-Sentence Forcing

We begin with a more formal account of Schotch and Jennings’ forcing relation, described in the introduction. First, we give a more general definition of the level of a set of sentences: let A be a family of sets that covers Γ’s content in the sense that, for each member γ of Γ, A includes a set a such that a ⊢ γ. Dividing Γ’s content amongst such families of sets can allow us to “cover” all of Γ’s members in this way even when Γ is inconsistent. We first define a generalisation of consistency using the cardinality of such coverings.

$$\text{Con}(\Gamma ,\xi ) \Leftrightarrow \exists \,A : \emptyset ,{a}_{1},\ldots ,{a}_{i},\ 1 \leq i \leq \xi ,\ \forall \gamma \in \Gamma ,\exists \,i : {a}_{i} \vdash \gamma \,\&\,\forall i,\ {a}_{i} \nvdash \emptyset $$

Next we define level, a measure of Γ’s inconsistency based on this generalisation:

$$\ell(\Gamma ) = \mathrm{the\ least}\ \xi \ \mathrm{such\ that}\ \text{Con}(\Gamma ,\xi ),\ \mathrm{else}\ \infty .$$
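As a concrete illustration of this definition (not part of the formal apparatus), here is a brute-force sketch over a toy two-atom language. Sentences are represented by their truth functions, and `level` searches all partitions of a finite set for the smallest one whose cells are all classically consistent; a set with no such partition has no level.

```python
from itertools import product

# Valuations of a two-atom language: each valuation assigns (p, q).
VALS = list(product([True, False], repeat=2))

def models(f):
    """The set of valuations satisfying a sentence, given as a truth function."""
    return frozenset(v for v in VALS if f(*v))

def consistent(cell):
    """A cell is consistent iff some valuation satisfies all its members."""
    common = set(VALS)
    for f in cell:
        common &= models(f)
    return bool(common)

def partitions(items):
    """Generate all partitions of a list into non-empty cells."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def level(gamma):
    """Least n such that gamma divides into n consistent cells; None if no level."""
    sizes = [len(part) for part in partitions(list(gamma))
             if all(consistent(cell) for cell in part)]
    return min(sizes) if sizes else None

p     = lambda p, q: p
q_    = lambda p, q: q
not_p = lambda p, q: not p

print(level([p, q_]))         # 1: {p, q} is consistent
print(level([p, q_, not_p]))  # 2: {p, q} | {not p}
print(level([lambda p, q: p and not p]))  # None: a contradiction has no level
```

This reproduces the examples from the introduction: {p, q} has level 1, {p, q, ¬p} has level 2, and {p ∧ ¬p} has no level at all.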

Finally, we define a consequence relation that preserves level just as the classical ⊢ preserves consistency:

$$\Gamma [\vdash \alpha \ \mathrm{iff,\ for}\ \xi = \ell(\Gamma ),\ \mathrm{for\ all}\ A : {a}_{1}\ldots {a}_{i},\ 1 \leq i \leq \xi ,\ \exists \,{a}_{j} \in A : {a}_{j} \vdash \alpha .$$

For sets of level 2 or more, forcing gives up adjunction (i.e. the inference from Γ[ ⊢ α and Γ[ ⊢ β to Γ[ ⊢ α ∧ β fails). But so long as ℓ(Γ) is finite, we still obtain consequences that are not classical consequences of individual members of Γ. The forcing relation can be straightforwardly captured by means of the following rules:

  1.

    Pres ⊢ : \(\frac{\Gamma [\vdash \alpha ,\alpha \vdash \beta } {\Gamma [\vdash \beta }\)

  2.

    Ref: \(\frac{\alpha \in \Gamma } {\Gamma [\vdash \alpha }\)

  3.

    2/n+1: \(\frac{\Gamma [\vdash {\alpha }_{1},\ldots ,\Gamma [\vdash {\alpha }_{n+1}} {\Gamma [\vdash {\vee }_{1\leq i\neq j\leq n+1}({\alpha }_{i}\wedge {\alpha }_{j})}\qquad n = \ell(\Gamma )\)

This consequence relation preserves level in the same way that classical logic preserves consistency—as a result, it avoids trivialisation for any set that has a well-defined level (i.e. any level other than ∞).

6.1.2 Towards a Set–Set Consequence Relation: Defining Con on the Right

The key here is to require that, when we partition (or cover, in general) according to the level, we get a consistent (consistently deniable) partition (covering), i.e. one that has no trivial elements. The level is the minimum cardinality of such a consistent partition. We define a new consistency predicate for the right side of the turnstile, following our definition of Con(Γ, ξ):

$${\text{Con}}^{*}(\Delta ,\xi ) \Leftrightarrow \exists \,A : \emptyset ,{a}_{1},\ldots ,{a}_{i},\ 1 \leq i \leq \xi ,\ \forall \delta \in \Delta ,\exists \,i : \delta \vdash {a}_{i}\,\&\,\forall i : \emptyset \nvdash {a}_{i}$$

6.1.3 Defining Left- and Right-Levels of Incoherence for a Set–Set Consequence Relation

$$\ell_{L}(\Gamma ) = \left \{\begin{array}{@{}l@{\quad }l@{}} \text{the least }\xi : \text{Con}(\Gamma ,\xi ),\quad &\text{if such a }\xi \text{ exists,}\\ \infty ,\quad &\text{otherwise.}\end{array} \right .$$
$$\ell_{R}(\Delta ) = \left \{\begin{array}{@{}l@{\quad }l@{}} \text{the least }\xi : {\text{Con}}^{*}(\Delta ,\xi ),\quad &\text{if such a }\xi \text{ exists,}\\ \infty ,\quad &\text{otherwise.}\end{array} \right .$$

In what follows we may omit the subscript L or R when the intended reading is clear from the context.

6.1.4 Characterizing [ ⊢ 

Given these definitions, we can consider how best to define a condition for Γ[ ⊢ Δ. The idea is that, in some sense, the existence of a classical consequence relation between Γ and Δ should be preserved in all the ℓ_L(Γ) and ℓ_R(Δ) coverings of our premises and conclusions. For simplicity’s sake, we will focus on partitions of Γ and Δ, which can always meet the conditions Con and Con* if any covering can. We define Π(Γ) as the set of all ℓ_L(Γ)-partitions of Γ (and Π(Δ) as the set of all ℓ_R(Δ)-partitions of Δ); we use π for elements of Π(Γ) and Π(Δ), and p for the cells of these in turn. Then, starting with the singleton cases, we say:

  1.

    Γ[ ⊢ α iff \(\forall \pi \in \Pi (\Gamma ),\exists \,p \in \pi : p \vdash \alpha \)

  2.

    α[ ⊢ Δ iff \(\forall \pi \in \Pi (\Delta ),\exists \,p \in \pi : \alpha \vdash p\)

Comments:

The first says that a consequence “survives” the partitioning of Γ if and only if it appears as a consequence of every partition, i.e. as a consequence of some cell of every partition. The second applies the same treatment to surviving as a “prover”: a prover of some set survives the partitioning of that set if and only if it proves some cell of every partition.

Both fit well with the definition of levels. Since the level is the least number of cells such that a partition with no trivial cells exists, neither our consequences nor our provers can be trivial unless Γ or Δ (respectively) contains a singleton trivial sentence. Outside those cases, our consequences must result from some non-trivial premises, and our provers must prove some non-trivial consequence.
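The singleton characterisation can be checked by brute force on small examples. The following sketch (an illustration under the partition reading above, over a toy two-atom language; the helper names are mine, not part of the formal development) tests Γ[ ⊢ α by quantifying over all minimal-size partitions with consistent cells. Note how it validates Γ[ ⊢ p and Γ[ ⊢ ¬p for Γ = {p, q, ¬p} while refusing their conjunction, as forcing’s failure of adjunction requires.

```python
from itertools import product

VALS = list(product([True, False], repeat=2))  # valuations of atoms (p, q)

def models(f):
    return frozenset(v for v in VALS if f(*v))

def common_models(cell):
    """Valuations satisfying every member of a cell (the cell's conjunction)."""
    m = set(VALS)
    for f in cell:
        m &= models(f)
    return m

def entails(cell, alpha):
    """cell |- alpha: every model of the whole cell is a model of alpha."""
    return common_models(cell) <= models(alpha)

def partitions(items):
    """Generate all partitions of a list into non-empty cells."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def level_partitions(gamma):
    """All minimal-size partitions of gamma whose cells are all consistent."""
    good = [p for p in partitions(list(gamma))
            if all(common_models(c) for c in p)]
    n = min(len(p) for p in good)  # assumes gamma has a level
    return [p for p in good if len(p) == n]

def forces(gamma, alpha):
    """Gamma [|- alpha: every level-partition has a cell proving alpha."""
    return all(any(entails(c, alpha) for c in part)
               for part in level_partitions(gamma))

p, q_, not_p = (lambda p, q: p), (lambda p, q: q), (lambda p, q: not p)
gamma = [p, q_, not_p]  # level 2

print(forces(gamma, p))                         # True
print(forces(gamma, not_p))                     # True
print(forces(gamma, lambda p, q: p and not p))  # False: adjunction fails
```

Since every cell of a level-partition is consistent, no cell can prove a contradiction, which is exactly the non-trivialising behaviour described above.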

6.1.5 Defining [ ⊢ for the General Case

Now we need to consider the general, Γ[ ⊢ Δ case. One way of doing this is to require that whatever aggregation of Γ and of Δ survives all the partitions in Π(Γ) and Π(Δ) guarantees the preservation of any consequences that we will endorse. That is, we require that every pair of partitions of Γ and of Δ (according to their levels), π_Γ, π_Δ, be such that π_Γ ↑⊢ π_Δ, that is, such that for some p ∈ π_Γ and some p′ ∈ π_Δ, p ⊢ p′. Then, given a set of rules for [ ⊢, we can state:

R1: Γ[ ⊢ Δ ⇔  ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′. 

But we can also capture the aggregation of Γ and Δ by means of the singular cases whose completeness we have already settled. That is, we can do the same job (Footnote 4) by invoking our rules for the singular case:

R2: Γ[ ⊢ Δ ⇔  ∃ α : Γ[ ⊢ α & α[ ⊢ Δ. 

Proving that these are equivalent brings us very close to a completeness proof for the first representation theorem, so our strategy will be to establish the equivalence of R1 and R2 as a lemma, and then go on to prove R1.
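Before turning to the proof, the two conditions can be compared by brute force on small examples. In this sketch (illustrative only; `r1` and `r2` are my names for the two conditions, and candidate α’s are enumerated by their model sets, which is feasible because the toy language has just two atoms), left-hand cells are read conjunctively and right-hand cells disjunctively, with “consistently deniable” cells being the non-tautologous ones.

```python
from itertools import product, chain, combinations

VALS = list(product([True, False], repeat=2))  # valuations of atoms (p, q)

def models(f):
    return frozenset(v for v in VALS if f(*v))

def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def conj_models(cell):  # left-hand cells read conjunctively
    m = set(VALS)
    for f in cell:
        m &= models(f)
    return m

def disj_models(cell):  # right-hand cells read disjunctively
    m = set()
    for f in cell:
        m |= models(f)
    return m

def level_parts(xs, cell_models, trivial):
    """Minimal-size partitions with no trivial cells (assumes a level exists)."""
    good = [p for p in partitions(list(xs))
            if not any(trivial(cell_models(c)) for c in p)]
    n = min(len(p) for p in good)
    return [p for p in good if len(p) == n]

left_trivial = lambda m: not m             # inconsistent cell
right_trivial = lambda m: m == set(VALS)   # undeniable (tautologous) cell

def r1(gamma, delta):
    """R1: every pair of level-partitions has a left cell proving a right cell."""
    return all(any(conj_models(c) <= disj_models(d) for c in pg for d in pd)
               for pg in level_parts(gamma, conj_models, left_trivial)
               for pd in level_parts(delta, disj_models, right_trivial))

def r2(gamma, delta):
    """R2: some alpha with gamma [|- alpha and alpha [|- delta."""
    for amod in map(set, chain.from_iterable(
            combinations(VALS, k) for k in range(len(VALS) + 1))):
        if (all(any(conj_models(c) <= amod for c in pg)
                for pg in level_parts(gamma, conj_models, left_trivial))
            and all(any(amod <= disj_models(d) for d in pd)
                    for pd in level_parts(delta, disj_models, right_trivial))):
            return True
    return False

p, q_, not_p = (lambda p, q: p), (lambda p, q: q), (lambda p, q: not p)
gamma = [p, q_, not_p]
print(r1(gamma, [lambda p, q: p or q]), r2(gamma, [lambda p, q: p or q]))    # agree
print(r1(gamma, [lambda p, q: p and q]), r2(gamma, [lambda p, q: p and q]))  # agree
```

On these examples the two conditions agree, as the lemma below establishes in general.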

6.2 The Main Proofs

We begin by noting something important about certain singular cases—there are two rules that are sound for these singular cases but that fail as soon as we allow sets with cardinality 2 or more on both sides:

$$\vee [\vdash : \quad \frac{\Gamma ,\alpha [\vdash \gamma ,\Gamma ,\beta [\vdash \gamma } {\Gamma ,\alpha \vee \beta [\vdash \gamma } \qquad \qquad [\vdash \wedge : \quad \frac{\gamma [\vdash \Delta ,\alpha ,\gamma [\vdash \Delta ,\beta } {\gamma [\vdash \Delta ,\alpha \wedge \beta }$$

These rules play a crucial role in the completeness proof for the forcing relations—I think of them (in the context of set–set forcing) as extensions of Pres ⊢: Pres ⊢ tells us that forcing respects the consequence relations between singletons on both sides of [ ⊢ that classical logic provides. These special, singular versions of ∨[ ⊢ and [ ⊢ ∧ tell us that we will treat aggregation classically from right to left so long as we have a singleton on the right, and we will treat aggregation classically from left to right so long as we have a singleton on the left. And so, of course, we should: we are starting (in both cases) from a state of complete aggregation on the left or right.

Finally, we will need to bring in fixed-level forcing to deal with some variations on our initial premise and conclusion sets. So [ ⊢ here will sometimes be replaced by n[ ⊢ m, where n and m are our initial premise and conclusion set levels. The rules for n[ ⊢ m are precisely those for forcing, with the exception that the level-sensitive rules are set to the fixed numbers n and m, rather than varying in form depending on the levels of the premise and conclusion sets. The key points to keep in mind with respect to fixed level forcing are:

F1: For n = ℓ(Γ) and m = ℓ(Δ), Γ n[ ⊢ m Δ ⇔ Γ[ ⊢ Δ. 

F2: Fixed level forcing is strictly monotonic. (Only the effects of level increases prevent [ ⊢ from being fully monotonic; but fixed level forcing ignores such increases—at the cost of trivializing n[ ⊢ m for sets with levels higher than n, m. This monotonicity comes in handy in the proofs that follow.)
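The trade-off described in F2 can be seen in a small computation. This sketch (illustrative; `forces_fixed` is an assumed helper name, and the partition reading of fixed-level forcing is carried over from the earlier definitions) shows that adding ¬p to {p, q} leaves the fixed-level judgment 1[ ⊢ undisturbed, but only because 1[ ⊢ trivializes once the level rises above 1; at the new level 2, the adjunctive consequence is lost.

```python
from itertools import product

VALS = list(product([True, False], repeat=2))  # valuations of atoms (p, q)

def models(f):
    return frozenset(v for v in VALS if f(*v))

def conj_models(cell):
    """Valuations satisfying every member of a cell."""
    m = set(VALS)
    for f in cell:
        m &= models(f)
    return m

def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def forces_fixed(gamma, alpha, n):
    """Gamma n[|- alpha: every consistent n-cell partition has a cell proving alpha.
    Vacuously true (trivialized) when gamma's level exceeds n."""
    parts = [p for p in partitions(list(gamma))
             if len(p) == n and all(conj_models(c) for c in p)]
    return all(any(conj_models(c) <= models(alpha) for c in p) for p in parts)

p, q_, not_p = (lambda p, q: p), (lambda p, q: q), (lambda p, q: not p)
pq = lambda p, q: p and q

print(forces_fixed([p, q_], pq, 1))          # True: level matches, adjunction holds
print(forces_fixed([p, q_, not_p], pq, 1))   # True, but only vacuously (trivialized)
print(forces_fixed([p, q_, not_p], pq, 2))   # False: adjunction lost at level 2
```

The middle line is the monotonicity F2 describes: the 1[ ⊢ judgment survives the addition of ¬p, at the cost of trivializing once no consistent 1-cell partition exists.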

With this background in hand, the equivalence of R1 and R2 can be proved quite straightforwardly.

Lemma.  ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′ ⇔  ∃ α : Γ[ ⊢ α & α[ ⊢ Δ.

The proof is by induction on the cardinalities of Γ and Δ, paralleling the original completeness proof for Γ[ ⊢ α; see Apostoli and Brown (1995). Given this proof, together with the completeness of Γ[ ⊢ α (and the perfectly dual completeness of α[ ⊢ Δ), we have a proof of completeness for a simple approach to the rules for Γ[ ⊢ Δ.

 ⇒ 

  1.

    Base: Consider the case where the cardinality of Γ is ≤ ℓ(Γ), and the cardinality of Δ is ≤ ℓ(Δ). Here the only way for ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′ to hold is by virtue of a singleton implication linking some elements of Γ, Δ :  ∃ γ ∈ Γ, δ ∈ Δ : γ ⊢ δ. But then by Pres ⊢, it follows that ∃ α : Γ[ ⊢ α and α[ ⊢ Δ.

  2.

    Induction hypothesis: Suppose that this holds for cardinalities of Γ, Δ up to n, m.

  3.

    Assume that Γ has level n′, Δ has level m′, so that Γ[ ⊢ Δ ⇔ Γn′[ ⊢ m′ Δ.

Let Γ’s cardinality be n + 1 and assume that ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′.

We assume we are dealing with a case where Card(Γ) > ℓ(Γ). Select an ℓ(Γ) + 1-tuple from Γ, {γ_1, …, γ_{n+1}}. Now consider the variations on Γ that we obtain by removing two elements from this n + 1-tuple and replacing them with the conjunction of these two elements. In the context of fixed-level forcing, what we are doing is, in effect, simply restricting our attention to the set of partitions of Γ that keep γ_i and γ_j in the same cell. We already know from our induction assumption that every one of these partitions includes a cell proving some cell of each partition of Δ.

Let Γ^{ij} be the variation that does this with the ith and jth elements of our n + 1-tuple. Then by our induction hypothesis and our assumption,

$$\begin{array}{rcl} & & \mathrm{If}\,\forall \pi \in \Pi (\Gamma ),\pi ' \in \Pi (\Delta )\,\pi \uparrow \vdash \pi ',\ \exists \,\alpha : {\Gamma {}^{ij}}^{n'}[{\vdash }^{m'}\alpha \,\&\,\alpha [\vdash \Delta ,\ 1 \leq i\neq j \leq n+1, \\ & & \forall \pi \in \Pi (\Gamma ),\pi ' \in \Pi (\Delta )\,\pi \uparrow \vdash \pi ' \\ & & \mathrm{So}\ \exists \,\alpha : {\Gamma {}^{ij}}^{n'}[{\vdash }^{m'}\alpha \,\&\,\alpha [\vdash \Delta ,\ 1 \leq i\neq j \leq n+1 \end{array}$$
(1)

Now, by the monotonicity of n[ ⊢ m, it follows that Γ, ∧(γ_i, γ_j) n′[ ⊢ m′ α. Since α is singular we can apply a series of ∨[ ⊢ steps:

$$\frac{\Gamma ,\alpha [\vdash \gamma ,\Gamma ,\beta [\vdash \gamma } {\Gamma ,\alpha \vee \beta [\vdash \gamma }$$

to obtain

$$\Gamma ,{\vee }_{1\leq i\neq j\leq n+1}{(\wedge ({\gamma }_{i},{\gamma }_{j}))\,}^{n'}[{\vdash }^{m'}\alpha $$

But Γ n′[ ⊢ m′ ∨_{1 ≤ i ≠ j ≤ n+1}(∧(γ_i, γ_j)) by our rule for aggregation from left to right. By the monotonicity of n′[ ⊢ m′, it follows that Γ n′[ ⊢ m′ ∨_{1 ≤ i ≠ j ≤ n+1}(∧(γ_i, γ_j)), α. And then by Trans:

$$\frac{\Gamma ,\alpha [\vdash \Delta \,\&\,\Gamma [\vdash \alpha ,\Delta } {\Gamma [\vdash \Delta }$$

we get Γn′[ ⊢ m′α.

But n′, m′ are the levels of Γ and Δ respectively. So Γn′[ ⊢ m′α ⇔ Γ[ ⊢ α. Hence Γ[ ⊢ α, and ∃ α : Γ[ ⊢ α & α[ ⊢ Δ as required.

Note that it also follows that Γ[ ⊢ Δ: Since Γ[ ⊢ α and α[ ⊢ Δ, α is a level-preserving extension of both Γ and Δ (just place α in whatever cells prove/are proved by α to get a consistent/consistently deniable partitioning of Γ, α and Δ, α respectively). So Mon* assures us that from Γ[ ⊢ α we get Γ[ ⊢ α, Δ and from α[ ⊢ Δ we get Γ, α[ ⊢ Δ. But then by trans, Γ[ ⊢ Δ.

Let Δ’s cardinality be m + 1 and assume that ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′.

Select an ℓ(Δ) + 1-tuple from Δ, {δ_1, …, δ_{n+1}}. Now consider all the variations on Δ that we obtain by removing two elements from this n + 1-tuple and replacing them with the disjunction of these two elements. (Once again, in effect, we are restricting our attention to the partitions of Δ that keep δ_i and δ_j together.) Let Δ_{ij} be the variation that does this with δ_i and δ_j. Then our induction hypothesis and our assumption give us:

  1.

    If \(\forall \pi \in \Pi (\Gamma ),\pi ' \in \Pi (\Delta )\pi \uparrow \vdash \pi ',\exists \,\alpha : \Gamma [\vdash \alpha \,\&\,{\alpha \,}^{n'}[{\vdash }^{m'}{\Delta }_{ij}\)

  2.

    \(\forall \pi \in \Pi (\Gamma ),\pi ' \in \Pi (\Delta )\pi \uparrow \vdash \pi '\)

Hence,

$$\exists \,\alpha : \Gamma [\vdash \alpha \,\&\,\alpha [\vdash {\Delta }_{ij},1 \leq i\neq j \leq n + 1$$

Now, by the monotonicity of n[ ⊢ m,

$${\alpha }^{n'}[{\vdash }^{m'}\Delta ,\vee ({\delta }_{i},{\delta }_{j}),\quad 1 \leq i\neq j \leq n + 1$$

And since α is singular, a series of steps applying our special rule for singular aggregation from left to right ([ ⊢ ∧ ) gives us:

$${\alpha \,}^{n'}[{\vdash }^{m'}\Delta ,{\wedge }_{1\leq i\neq j\leq n+1}(\vee ({\delta }_{i},{\delta }_{j}))$$

By our general rule for aggregation from right to left, ∧_{1 ≤ i ≠ j ≤ n+1}(∨(δ_i, δ_j))[ ⊢ Δ. And by the monotonicity of n[ ⊢ m it follows that α, ∧_{1 ≤ i ≠ j ≤ n+1}(∨(δ_i, δ_j)) n′[ ⊢ m′ Δ. So by Trans

$$\frac{\Gamma ,\alpha [\vdash \Delta \,\&\,\Gamma [\vdash \alpha ,\Delta } {\Gamma [\vdash \Delta }$$

we get α n′[ ⊢ m′ Δ.

But n′, m′ are the levels of Γ and Δ respectively. So α n′[ ⊢ m′ Δ ⇔ α[ ⊢ Δ. Hence α[ ⊢ Δ, and ∃ α : Γ[ ⊢ α & α[ ⊢ Δ as required.

Note again that it also follows that Γ[ ⊢ Δ, by the same argument as above.

 ⇐ 

This direction is straightforward. Suppose ∃ α : Γ[ ⊢ α and α[ ⊢ Δ. This holds only if every partition of Γ includes a cell proving α and every partition of Δ includes a cell proved by α. And by classical transitivity, any cell in π ∈ Π(Γ) that proves α proves any cell in π′ ∈ Π(Δ) proved by α. □ 

Theorem.  ∀π ∈ Π(Γ), π′ ∈ Π(Δ), π ↑⊢ π′ ⇔ Γ[ ⊢ Δ.

The notes at the end of each direction of the equivalence proof give us the hard direction ( ⇒ ); soundness follows from pigeon-hole considerations and well-known facts about classical singleton consequences.