1 Introduction

A main concern in proof theory for modal logics is the development of philosophically and, at the same time, computationally satisfying frameworks to capture large classes of logics in a uniform and systematic way. Unfortunately the standard sequent framework satisfies these desiderata only partly. Undoubtedly, there are sequent calculi for a number of modal logics exhibiting many good properties (such as analyticity), which can be used in complexity-optimal decision procedures. However, their construction often seems ad-hoc, they are usually not modular, and they mostly lack philosophically relevant properties such as separate left and right introduction rules for the modalities. These problems are often connected to the fact that the modal rules in such calculi usually introduce more than one connective at a time. For example, in the rule

$$\begin{aligned} \frac{\varGamma \vdash A}{\varGamma ',\Box \varGamma \vdash \Box A,\varDelta }\ \mathsf {k} \end{aligned}$$

for modal logic \(\mathsf {K}\) [4], the context \(\varGamma \) could contain an arbitrary finite number of formulae. Hence this rule can also be seen as an infinite set of rules

$$\begin{aligned} \frac{B_1,\ldots ,B_n \vdash A}{\varGamma ',\Box B_1,\ldots ,\Box B_n \vdash \Box A,\varDelta }\ \mathsf {k}_n \end{aligned}$$

each with a fixed number of principal formulae. Both of these perspectives are somewhat dissatisfying: the first because it requires modifying the context, and the second because it explicitly discards the distinction between left and right rules for the modal connective.

One way of solving this problem is to consider extensions of the sequent framework that are expressive enough to capture these modalities using separate left and right introduction rules. This is possible, e.g., in the framework of labelled sequents [14] or in that of nested sequents, also known as tree-hypersequents [2, 17, 18]. Intuitively, in the latter framework a single sequent is replaced with a tree of sequents, where the successors of a sequent are interpreted under a modality. The modal rules of these calculi govern the transfer of (modal) formulae between the different sequents, and it can be shown that it suffices to transfer only one formula at a time. However, the price to pay for this added expressivity is that the obvious proof search procedure has suboptimal complexity, since it constructs potentially exponentially large nested sequents [2].

In this work, we reconcile the superior expressiveness and modularity of nested sequents with the computational behaviour of the standard sequent framework by proposing a focusing discipline for linear nested sequents [9], a restricted form of nested sequents where the tree structure is restricted to a line. The result is a notion of normal derivations in the linear nested setting which directly correspond to derivations in the standard sequent setting. Moreover, the resulting calculi lend themselves to specification and implementation in linear logic following the approach in [13]. Since we are interested in the connections to the standard sequent framework, we concentrate on logics which have a standard sequent calculus; examples include normal modal logic \(\mathsf {K}\) and simple extensions thereof, the simply dependent bimodal logic \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\) [5] as a representative example, and several non-normal modal logics, i.e., standard extensions of classical modal logic [4]. As a side effect we obtain, to the best of our knowledge, the first nested sequent calculi for all the considered non-normal modal logics.

2 Linear Nested Sequent Systems

We briefly recall the basic notions of the linear nested sequent framework [9], essentially a reformulation of Masini’s 2-sequents [11] in the nested sequent framework (also compare the \(\mathsf {G}\)-\(\mathsf {CK}_n\) sequents of [12]). In the following, we consider a sequent to be a pair \(\varGamma \vdash \varDelta \) of multisets and adopt the standard conventions and notations (see e.g. [14]). In the linear nested sequent framework, the tree structure of nested sequents is restricted to a line, i.e., a linear nested sequent is simply a finite list of sequents. This data structure matches exactly the history in a backwards proof search in an ordinary sequent calculus, a fact we will heavily use in what follows.

Definition 1

The set \(\mathsf {LNS}\) of linear nested sequents is given recursively by:

  1. if \(\varGamma \vdash \varDelta \) is a sequent then \(\varGamma \vdash \varDelta \in \mathsf {LNS}\);

  2. if \(\varGamma \vdash \varDelta \) is a sequent and \(\mathcal {G}\in \mathsf {LNS}\) then \(\varGamma \vdash \varDelta \mathbin {\!//^{}\!}\mathcal {G}\in \mathsf {LNS}\).

We write \(\mathcal {S}\{\varGamma \vdash \varDelta \}\) to denote a context \(\mathcal {G}\mathbin {\!//^{}\!}\varGamma \vdash \varDelta \mathbin {\!//^{}\!}\mathcal {H}\) where \(\mathcal {G,H}\in \mathsf {LNS}\) or \(\mathcal {G,H}=\emptyset \). We call each sequent in a linear nested sequent a component and, slightly abusing notation, abbreviate “linear nested sequent” to \(\mathsf {LNS}\).
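To fix intuitions, the following Haskell sketch gives one possible concrete representation of these notions (the representation and all names are ours, and multisets are approximated by lists); it is only an illustration of the data structure, not part of the formal development.

```haskell
-- A sequent Gamma |- Delta over formulae of type f, with multisets
-- approximated by lists.
type Sequent f = ([f], [f])

-- Definition 1: a linear nested sequent is a non-empty list of components
-- Gamma_0 |- Delta_0 // ... // Gamma_n |- Delta_n.
type LNS f = [Sequent f]

-- The rightmost component, i.e. the "present" of the proof-search history.
lastComponent :: LNS f -> Sequent f
lastComponent = last

-- A context S{ }: the (possibly empty) lists of components to the left and
-- to the right of the hole.
type Context f = (LNS f, LNS f)

-- Plugging a component into a context yields S{Gamma |- Delta}.
plug :: Context f -> Sequent f -> LNS f
plug (g, h) comp = g ++ [comp] ++ h
```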

In this work we consider only modal logics based on classical propositional logic, and we take the system \(\mathsf {LNS}_\mathsf {G}\) (Fig. 1) as our base calculus. Note that the initial sequents are atomic, that contraction, weakening and cut are admissible, and that all rules are invertible.

Fig. 1. System \(\mathsf {LNS}_{\mathsf {G}}\) for classical propositional logic. In the \(\mathsf {init}\) rule, p is atomic.

Fig. 2. The modal rules of the linear nested sequent calculus \(\mathsf {LNS}_{\mathsf {K}}\) for \(\mathsf {K}\).

Figure 2 presents the modal rules of the linear nested sequent calculus \(\mathsf {LNS}_{\mathsf {K}}\) for \(\mathsf {K}\), essentially a linear version of the standard nested sequent calculus from [2, 17]. Conceptually, the main point is that the sequent rule \(\mathsf {k}\) is split into the two rules \(\Box _L\) and \(\Box _R\), which make it possible to simulate the sequent rule while treating one formula at a time. This piecewise treatment can be seen as one of the main features of nested sequent calculi and deep inference in general [7]. In particular, it is the key to modularity for nested and linear nested sequent calculi [9, 18]. Completeness of \(\mathsf {LNS}_\mathsf {K}\) w.r.t. modal logic \(\mathsf {K}\) is shown by simulating a sequent derivation bottom-up in the last two components of the linear nested sequents, marking applications of transitional rules by the nesting \(\mathbin {\!//^{}\!}\) and simulating the \(\mathsf {k}\)-rule by a block of \(\Box _L\) and \(\Box _R\) rules [9]. E.g., an application of \(\mathsf {k}\) on a branch with history captured by the \(\mathsf {LNS}\) \(\mathcal {G}\) is simulated by

$$\begin{aligned} \frac{\dfrac{\mathcal {G}\mathbin {\!//^{}\!}\varGamma ',\Box \varGamma \vdash \varDelta \mathbin {\!//^{}\!}\varGamma \vdash A}{\mathcal {G}\mathbin {\!//^{}\!}\varGamma ',\Box \varGamma \vdash \varDelta \mathbin {\!//^{}\!}{}\vdash A}\ \Box _L}{\mathcal {G}\mathbin {\!//^{}\!}\varGamma ',\Box \varGamma \vdash \Box A,\varDelta }\ \Box _R \end{aligned}$$

where the upper inference stands for a block of \(\Box _L\) applications, one for each formula in \(\varGamma \). Observe that this method relies on the view of linear nested sequents as histories in proof search; the propositional sequent rules are likewise simulated in the rightmost component of the linear nested sequents. This gives a different way of looking at system \(\mathsf {K}\), in which the formulae in the context can be handled separately. However, the modal rules do not need to occur in a block corresponding to one application of the sequent rule anymore. For instance, one way of deriving the instance \(\Box (p \supset q) \supset (\Box p \supset \Box q)\) of the normality axiom for modal logic \(\mathsf {K}\) is as follows.
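One possible such derivation, reconstructed here from the rules of Figs. 1 and 2, is the following; the two applications of \(\supset _R\) deriving the end sequent \(\vdash \Box (p \supset q) \supset (\Box p \supset \Box q)\) from \(\Box (p \supset q), \Box p \vdash \Box q\) are omitted, and the topmost sequents are initial.

$$\begin{aligned} \frac{\dfrac{\dfrac{\dfrac{\Box (p \supset q), \Box p \vdash {} \mathbin {\!//^{}\!}\ p \vdash p, q}{\Box (p \supset q), \Box p \vdash {} \mathbin {\!//^{}\!}\ {}\vdash p, q}\ \Box _L \qquad \Box (p \supset q), \Box p \vdash {} \mathbin {\!//^{}\!}\ q \vdash q}{\Box (p \supset q), \Box p \vdash {} \mathbin {\!//^{}\!}\ p \supset q \vdash q}\ \supset _L}{\Box (p \supset q), \Box p \vdash {} \mathbin {\!//^{}\!}\ {}\vdash q}\ \Box _L}{\Box (p \supset q), \Box p \vdash \Box q}\ \Box _R \end{aligned}$$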

Note that the propositional rule \(\supset _L\) is applied between two modal rules. Hence there are many derivations in \(\mathsf {LNS}_\mathsf {K}\) which are not the result of simulating a derivation of the sequent calculus for \(\mathsf {K}\). Thus, while the linear nested sequent calculus \(\mathsf {LNS}_\mathsf {K}\) has conceptual advantages over the standard sequent calculus for \(\mathsf {K}\), its behaviour in terms of proof search is worse: there are many more possible derivations with the same conclusion, when compared to the sequent calculus. We will address this issue by proposing a focusing discipline [1] similar to that of [3] to restrict proof search to a smaller class of derivations, while retaining the conceptual advantages of the framework.

3 Labelled Line Sequent Systems

For simplifying the notation of the focused systems and also for encoding linear nested sequent calculi in linear logic (see Sect. 6), we follow the correspondence between nested sequents and labelled tree sequents given in [6], and consider the labelled sequents [14] corresponding to linear nested sequents. Intuitively, the components of a \(\mathsf {LNS}\) are labelled with variables and their order is encoded in a relation.

Formally, a (possibly empty) set of relation terms (i.e. terms of the form xRy) is called a relation set. For a relation set \(\mathcal{R}\), the frame \(Fr(\mathcal{R})\) defined by \(\mathcal{R}\) is given by \((|\mathcal{R}|,\mathcal{R})\) where \(|\mathcal{R}| = \{x \mid xRv \in \mathcal{R} \text{ or } vRx \in \mathcal{R} \text{ for some state } v\}\). We say that a relation set \(\mathcal{R}\) is treelike if the frame defined by \(\mathcal{R}\) is a tree or \(\mathcal{R}\) is empty. A treelike relation set \(\mathcal{R}\) is called linelike if each node in \(\mathcal{R}\) has at most one child.
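As a concrete reading of these definitions, the following Haskell sketch (representation and names are ours) checks whether a relation set, given as a duplicate-free list of relation terms, is linelike. It uses the observation that, once every state has at most one child and at most one parent, the frame is a single path (i.e. a tree with out-degree at most one) exactly when there is one more state than relation terms.

```haskell
import Data.List (nub)

type State  = String
type RelSet = [(State, State)]   -- relation terms xRy, assumed without duplicates

-- |R|: every state occurring in some relation term of R.
states :: RelSet -> [State]
states r = nub (concat [ [x, y] | (x, y) <- r ])

-- A relation set is linelike iff it is empty or its frame is a single path:
-- each state has at most one child and at most one parent, and, under these
-- degree bounds, connectedness and acyclicity amount to having exactly one
-- more state than relation terms.
linelike :: RelSet -> Bool
linelike [] = True
linelike r  = all ((<= 1) . outDeg) ss && all ((<= 1) . inDeg) ss
              && length ss == length r + 1
  where
    ss       = states r
    outDeg x = length [ () | (x', _) <- r, x' == x ]
    inDeg  y = length [ () | (_, y') <- r, y' == y ]
```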

Definition 2

A labelled line sequent \(\mathsf {LLS}\) is a labelled sequent \(\mathcal{R},X\vdash Y\) where

  1. \(\mathcal{R}\) is linelike;

  2. if \(\mathcal{R}=\emptyset \) then X has the form \(x\!:\!A_1,\ldots ,x\!:\!A_n\) and Y has the form \(x \!:\!B_1,\ldots , x \!:\!B_m\) for some state variable x;

  3. if \(\mathcal{R}\not =\emptyset \) then every state variable x that occurs in either X or Y also occurs in \(\mathcal{R}\).

Observe that, in \(\mathsf {LLS}\), if \(xRy\in \mathcal{R}\) then \(uRy\notin \mathcal{R}\) and \(xRv\notin \mathcal{R}\) for any \(u\ne x\) and \(v\ne y\).

Definition 3

A labelled line sequent calculus is a labelled sequent calculus whose initial sequents and inference rules are constructed from \(\mathsf {LLS}\).

Fig. 3. Labelled line sequent calculus \(\mathsf {LLS}_{\mathsf {G}}\).

In Fig. 3 we present the rules of the labelled line sequent calculus \(\mathsf {LLS}_{\mathsf {G}}\) for classical propositional logic.

Since linear nested sequents form a particular case of nested sequents, the algorithm given in [6] can be used for generating \(\mathsf {LLS}\) from \(\mathsf {LNS}\), and vice versa. However, one has to keep the linearity property invariant through inference rules. For example, the following rule (here considered more generally as a labelled sequent rule)

$$\begin{aligned} \frac{\mathcal{R}, xRy, X\vdash Y, y\!:\!A}{\mathcal{R},X\vdash Y,x\!:\!\Box A}\ \Box _R' \end{aligned}$$

where y is fresh, is not adequate w.r.t. the system \(\mathsf {LNS}_{\mathsf {K}}\), since there may exist \(z\in |\mathcal{R}|\) such that \(xRz\in \mathcal{R}\). That is, for labelled sequents in general, freshness of y alone does not guarantee that x has a unique successor in \(\mathcal{R}\), and it does not seem trivial to ensure this by logical rules without side conditions. To avoid this problem, we slightly modify the framework by restricting \(\mathcal{R}\) to singletons, that is, \(\mathcal{R}=\{xRy\}\) will record only the last two components, in this case labelled by x and y, and by adding a base case \(\mathcal{R}=\{y_0Rx_0\}\) for different state variables \(x_0,y_0\) when there are no nested components. The rule for introducing \(\Box _R\) then is

$$\begin{aligned} \frac{xRy, X\vdash Y, y\!:\!A}{zRx,X\vdash Y,x\!:\!\Box A}\ \Box _R \end{aligned}$$

with y fresh. Note that this solution corresponds to recording the history of the proof search up to the last two steps. We adopt the following terminology for calculi where this restriction is possible.

Definition 4

A \(\mathsf {LNS}\) calculus is end-active if in all its rules the rightmost components of the premisses are active and the only active components (in premisses and conclusion) are the two rightmost ones. An end-active \(\mathsf {LLS}\) is a singleton relation set \(\mathcal {R}\) together with a sequent \(X \vdash Y\) of labelled formulae, written \(\mathcal {R},X \vdash Y\). The rules of an end-active \(\mathsf {LLS}\) calculus are constructed from end-active labelled line sequents such that the active formulae in a premiss \(xRy,X \vdash Y\) are labelled with y and the labels of all active formulae in the conclusion are in its relation set.

Observe that the completeness proof for \(\mathsf {LNS}_\mathsf {K}\) via simulating a sequent derivation in the last component actually shows that the end-active version of the calculus \(\mathsf {LNS}_\mathsf {K}\) is complete for \(\mathsf {K}\) [9]. From now on, we will use the end-active version of the propositional rules (see Fig. 4). Note that, in an end-active \(\mathsf {LLS}\), state variables might occur in the sequent but not in the relation set; formulae labelled with such variables remain inactive towards the leaves of the derivation. In fact, a key property of end-active \(\mathsf {LNS}\) calculi is that rules can only move formulae “forward”, that is, an active formula produces other formulae either in the same component or in the next one. Hence one can automatically generate \(\mathsf {LLS}\) calculi from \(\mathsf {LNS}\) calculi. In the following we write \(x\!:\!\varGamma \) if the label of every labelled formula in \(\varGamma \) is x.

Fig. 4. The end-active version of \(\mathsf {LLS}_{\mathsf {G}}\). In rule \(\mathsf {init}\), p is atomic.

Fig. 5. The modal rules of \(\mathsf {LLS}_\mathsf {K}\).

Definition 5

For a state variable x, define the mapping \(\mathbb {TL}_{x}\) from \(\mathsf {LNS}\) to end-active \(\mathsf {LLS}\) as follows

$$\begin{array}{lcl} \mathbb {TL}_{x_0}(\varGamma _0 \vdash \varDelta _0) & = & y_0Rx_0,\,x_0\!:\!\varGamma _0 \vdash x_0\!:\!\varDelta _0\\ \mathbb {TL}_{x_n}(\varGamma _0 \vdash \varDelta _0\mathbin {\!//^{}\!}\ldots \mathbin {\!//^{}\!}\varGamma _n \vdash \varDelta _n) & = & x_{n-1}Rx_n,\, x_0\!:\!\varGamma _0,\ldots ,x_n\!:\!\varGamma _n \vdash x_0\!:\!\varDelta _0,\ldots , x_n\!:\!\varDelta _n \quad (n>0) \end{array}$$

with all state variables pairwise distinct.
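The mapping \(\mathbb {TL}_{x}\) can be transcribed directly. The following Haskell sketch (our own representation, reusing the list-based view of linear nested sequents from Sect. 2 in polymorphic form) takes a supply of pairwise distinct state variables, the first of which plays the role of \(y_0\), and produces the single relation term together with the labelled antecedent and succedent.

```haskell
type Label      = String
type Sequent f  = ([f], [f])          -- a component Gamma |- Delta
type LNS f      = [Sequent f]         -- non-empty list of components
type Labelled f = (Label, f)

-- An end-active labelled line sequent: one relation term plus a labelled sequent.
data LLS f = LLS (Label, Label) [Labelled f] [Labelled f]
  deriving Show

-- TL_{x_n}: the first supplied label plays the role of y_0, the following ones
-- of x_0, ..., x_n; they are assumed pairwise distinct.
toLLS :: [Label] -> LNS f -> LLS f
toLLS (y0 : labels) comps = LLS rel ant suc
  where
    xs  = take (length comps) labels
    ant = [ (x, a) | (x, (gamma, _)) <- zip xs comps, a <- gamma ]
    suc = [ (x, b) | (x, (_, delta)) <- zip xs comps, b <- delta ]
    rel = case xs of
            [x0] -> (y0, x0)                    -- base case: y_0 R x_0
            _    -> (last (init xs), last xs)   -- x_{n-1} R x_n
toLLS [] _ = error "toLLS: no labels supplied"
```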

It is straightforward to use \(\mathbb {TL}_{x}\) in order to construct a \(\mathsf {LLS}\) inference rule from an inference rule of an end-active \(\mathsf {LNS}\) calculus. The procedure, which can be automated, is the same as the one presented in [6]; we illustrate it with an example.

Example 6

Consider the following application of the rule \(\Box _R\) of Fig. 2:

$$\begin{aligned} \frac{\varGamma _0 \vdash \varDelta _0 \mathbin {\!//^{}\!}\dots \mathbin {\!//^{}\!}\varGamma _{n-1} \vdash \varDelta _{n-1} \mathbin {\!//^{}\!}\varGamma _{n}\vdash \varDelta _{n} \mathbin {\!//^{}\!}\;\vdash A}{\varGamma _0 \vdash \varDelta _0 \mathbin {\!//^{}\!}\dots \mathbin {\!//^{}\!}\varGamma _{n-1} \vdash \varDelta _{n-1} \mathbin {\!//^{}\!}{\varGamma _{n}\vdash \varDelta _{n},\Box A}}\ \Box _R \end{aligned}$$

Applying \(\mathbb {TL}_x\) to the conclusion we obtain \(x_{n-1}Rx_{n}, X \vdash Y, x_{n}\!:\!\Box A\), where \(X=x_0\!:\!\varGamma _0, \dots , x_{n}\!:\!\varGamma _{n} \) and \(Y = x_0\!:\!\varDelta _0, \dots , x_{n}\!:\!\varDelta _{n}\). Applying \(\mathbb {TL}_x\) to the premise we obtain \(x_nRx_{n+1}, X \vdash Y, x_{n+1}\!:\!A\). We thus obtain an application of the \(\mathsf {LLS}\) rule

$$\begin{aligned} \frac{x_nRx_{n+1},X\vdash Y,x_{n+1}\!:\!A}{x_{n-1}Rx_n,X\vdash Y,x_n\!:\!\Box A}\ \mathbb {TL}_x(\Box _R) \end{aligned}$$

which is the rule \(\Box _R\) presented in Fig. 5.

The following result is readily shown by transforming derivations bottom-up.

Theorem 7

\(\varGamma \vdash \varDelta \) is provable in a given end-active \(\mathsf {LNS}\) calculus if and only if \(\mathbb {TL}_{x_0}(\varGamma \vdash \varDelta )\) is provable in the corresponding end-active \(\mathsf {LLS}\) calculus.

The end-active labelled line sequent calculus \(\mathsf {LLS}_\mathsf {K}\) for \(\mathsf {K}\) is given in Fig. 5. The following is immediate from completeness of the end-active version of \(\mathsf {LNS}_\mathsf {K}\).

Corollary 8

A sequent \(\varGamma \vdash \varDelta \) has a proof in \(\mathsf {LNS}_{\mathsf {K}}\) if and only if \(yRx,x\!:\!\varGamma \vdash x\!:\!\varDelta \) has a proof in \(\mathsf {LLS}_{\mathsf {K}}\) for some distinct state variables x, y.

4 Focused Labelled Line Sequent Systems

Although adding labels and restricting systems to their end-active form improves proof search somewhat, it is still not enough to guarantee that the modal rules occur in blocks.

In [1], Andreoli introduced a notion of normal form for cut-free proofs in linear logic. This normal form is given by a focused proof system organised around two “phases” of proof construction: the negative phase for invertible inference rules and the positive phase for not-necessarily-invertible inference rules. Observe that a similar organisation is adopted when moving from \(\mathsf {LNS}_{\mathsf {K}}\) to \(\mathsf {LLS}_{\mathsf {K}}\): invertible rules are applied eagerly, while the non-invertible ones (\(\Box _R\) and \(\Box _L\)) are applied only in the last two components.

We will now define \(\mathsf {FLLS}_{\mathsf {K}}\), a focused system for \(\mathsf {LLS}_{\mathsf {K}}\). Sequents in \(\mathsf {FLLS}_{\mathsf {K}}\) have one of the following shapes:

  1. \(zRx\, \Uparrow \,\varGamma ;X\vdash Y; \varDelta \) is an unfocused sequent, where \(\varGamma \) contains only modal formulae and \(\varDelta \) contains only modal or atomic formulae.

  2. \(zR [x]\, \Downarrow \,\varGamma ;X\vdash \cdot ; \varDelta \) is a sequent focused on a right boxed or atomic formula.

  3. \([x]Ry\, \Downarrow \,\varGamma ;X\vdash Y;\varDelta \) is a sequent focused on a left boxed formula.

In the negative phase, sequents have the shape (1) above and all invertible propositional or modal rules are applied eagerly on formulae labelled with the variable x until only atomic or boxed formulae are left. Some of these are moved to the special contexts \(\varGamma ,\varDelta \) using the \(\mathtt {store}\) rules; these contexts store the formulae that can be chosen for focusing. When this process terminates, the positive phase starts by deciding on one of the formulae in \(\varDelta \), indicated by a sequent of the form (2). If this formula is an atom, then the proof should terminate. Otherwise, the focus is on a modal formula, and the rule \(\Box _R\) creates a fresh label y and moves the unboxed part of the formula to this new label, resulting in a sequent of the form (3). The positive phase then continues by possibly moving boxed formulae in \(\varGamma \) that are labelled with x to the label y. Finally, focus is lost and we come back to the negative phase, now inside the component labelled by y. Note that this procedure gives a forward-chaining flavour to the system.

The rules for \(\mathsf {FLLS}_{\mathsf {K}}\) are presented in Fig. 6. Note that the rule \(\mathtt {store}_R\) systematically moves all atomic and boxed formulae from Y to \(\varDelta \), and hence Y will eventually be empty. This is the trigger for switching from the negative to the positive phase. Note also that the contexts may carry some “garbage”, i.e., formulae which will never be principal: since the calculus is end-active, only formulae in one of the last two components can be principal. As in standard systems where weakening is admissible, these formulae are then absorbed by the initial sequents \(\mathsf {init}\). Since the focusing procedure described above is just a systematic organisation of proofs, soundness and completeness proofs are often straightforward permutation-of-rules arguments.
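To illustrate how this phase discipline drives proof search, here is a small Haskell sketch of backward search for \(\mathsf {K}\) organised in exactly this way. It is an illustration of the search strategy under our own simplifications (lists for multisets, only the connectives \(\wedge , \vee , \supset , \Box \), and atoms stored together with boxed formulae), not an implementation of the calculus \(\mathsf {FLLS}_{\mathsf {K}}\) itself; all names are ours.

```haskell
data Fm = Atom String | Fm :&: Fm | Fm :|: Fm | Fm :=>: Fm | Box Fm
  deriving (Eq, Show)

-- neg gamma delta xs ys: gamma/delta are the stores, xs/ys the formulae of the
-- rightmost component still to be processed on the left/right.
neg :: [Fm] -> [Fm] -> [Fm] -> [Fm] -> Bool
neg gamma delta (f:xs) ys = case f of
  a :&: b  -> neg gamma delta (a : b : xs) ys
  a :|: b  -> neg gamma delta (a : xs) ys && neg gamma delta (b : xs) ys
  a :=>: b -> neg gamma delta xs (a : ys) && neg gamma delta (b : xs) ys
  _        -> neg (f : gamma) delta xs ys              -- store: atom or box
neg gamma delta [] (f:ys) = case f of
  a :&: b  -> neg gamma delta [] (a : ys) && neg gamma delta [] (b : ys)
  a :|: b  -> neg gamma delta [] (a : b : ys)
  a :=>: b -> neg gamma delta [a] (b : ys)
  _        -> neg gamma (f : delta) [] ys              -- store: atom or box
neg gamma delta [] [] = initial || positive
  where
    -- init: some atom occurs in both stores (any "garbage" is simply absorbed)
    initial  = or [ p == q | Atom p <- gamma, Atom q <- delta ]
    -- positive phase: decide on a boxed formula on the right, create the new
    -- component (Box_R) and transfer the bodies of boxed antecedents (Box_L)
    positive = or [ neg [] [] [ b | Box b <- gamma ] [a] | Box a <- delta ]

prove :: [Fm] -> [Fm] -> Bool
prove gamma delta = neg [] [] gamma delta

-- e.g.  prove [] [Box (Atom "p" :=>: Atom "q") :=>: (Box (Atom "p") :=>: Box (Atom "q"))]
-- evaluates to True, mirroring the derivation of Fig. 7.
```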

Fig. 6. Focused labelled line sequent calculus \(\mathsf {FLLS}_{\mathsf {K}}\) for \(\mathsf {K}\). \(A_b\) is atomic or a boxed formula, \(B_b\) is a boxed formula. As usual, the negative phase is marked by \(\Uparrow \), the positive by \(\Downarrow \).

Fig. 7. The derivation of the normality axiom in \(\mathsf {FLLS}_\mathsf {K}\).

Theorem 9

The system \(\mathsf {FLLS}_\mathsf {K}\) is sound and complete w.r.t. modal logic \(\mathsf {K}\), i.e., a formula A is a theorem of \(\mathsf {K}\) iff the sequent \(zRx\, \Uparrow \,\cdot ;\cdot \vdash x\!:\!A; \cdot \) is derivable in \(\mathsf {FLLS}_\mathsf {K}\).

Proof

Observe that propositional rules permute up over the \(\Box _L\) rule. Hence all the applications of \(\Box _L\) can be done in sequence, just after the \(\Box _R\) rule.    \(\square \)

Example 10

The normality axiom is derived as shown in Fig. 7. Note that the modal rules occur in a block corresponding to an application of the sequent rule \(\mathsf {k}\). That is, focusing effectively rules out derivations in which propositional rules are applied between modal ones.

5 Some More Involved Examples

It is straightforward to see that the method described above applies to any sequent calculus which can be written as an end-active linear nested sequent calculus, in particular to extensions of \(\mathsf {K}\) with combinations of the axioms \(\mathsf {D},\mathsf {T},\mathsf {4}\), or to the multi-succedent calculus for intuitionistic logic [9]. We now consider some less trivial examples.

5.1 Simply Dependent Bimodal Logics

As a first example, we consider a bimodal logic with a simple interaction between the modalities. While we only treat one example, our method is readily adapted to other such logics. The language of simply dependent bimodal logic \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\) from [5] contains two modalities \(\Box \) and \(\heartsuit \), and the axioms are the \(\mathsf {K}\mathsf {T}\) axioms for \(\Box \) together with the \(\mathsf {S4}\) axioms for \(\heartsuit \) and the interaction axiom \(\heartsuit A \supset \Box A\) (Fig. 8). Using the methods in [10], these axioms are easily converted into the sequent system \(\mathsf {G}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) extending the standard propositional rules with the modal rules of Fig. 9. It is straightforward to check that these rules satisfy the criteria for cut elimination from  [10], and hence \(\mathsf {G}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) is cut-free.

Fig. 8. The modal axioms for logic \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\).

Fig. 9. The modal rules of the sequent calculus \(\mathsf {G}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) for \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\).

To obtain a focused system, we again convert the sequent calculus into a \(\mathsf {LNS}\) calculus. However, since we now have two different non-invertible right rules (\(\Box _R\) and \(\heartsuit _R\)), we need to modify the linear nested setting slightly, introducing two different nesting operators \(\mathbin {\!//^{\Box }\!}\) and \(\mathbin {\!//^{\heartsuit }\!}\) for the rules \(\Box _R\) and \(\heartsuit _R\), respectively. The intended interpretation is

$$\begin{aligned} \iota (\varGamma \vdash \varDelta )&:=\bigwedge \varGamma \supset \bigvee \varDelta \\ \iota (\varGamma \vdash \varDelta \mathbin {\!//^{\Box }\!} \mathcal {H})&:=\bigwedge \varGamma \supset \bigvee \varDelta \vee \Box \iota (\mathcal {H})\\ \iota (\varGamma \vdash \varDelta \mathbin {\!//^{\heartsuit }\!} \mathcal {H})&:=\bigwedge \varGamma \supset \bigvee \varDelta \vee \heartsuit \iota (\mathcal {H}) \end{aligned}$$
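Read as a program, the interpretation can be sketched as follows in Haskell (constructor names are ours, and we use \(\top \) and \(\bot \) as the units of the empty conjunction and disjunction, an assumption not made explicit above).

```haskell
-- Formulae of the bimodal language, with Box and Heart for the two modalities.
data Fm = Atom String | Fm :&: Fm | Fm :|: Fm | Fm :=>: Fm | Top | Bot
        | Box Fm | Heart Fm

-- Linear nested sequents with the two nesting operators //^Box and //^Heart.
data LNS2 = Comp  [Fm] [Fm]            -- Gamma |- Delta
          | NestB [Fm] [Fm] LNS2       -- Gamma |- Delta //^Box   H
          | NestH [Fm] [Fm] LNS2       -- Gamma |- Delta //^Heart H

conj, disj :: [Fm] -> Fm
conj = foldr (:&:) Top
disj = foldr (:|:) Bot

-- The formula interpretation iota from above.
iota :: LNS2 -> Fm
iota (Comp  g d)   = conj g :=>: disj d
iota (NestB g d h) = conj g :=>: (disj d :|: Box   (iota h))
iota (NestH g d h) = conj g :=>: (disj d :|: Heart (iota h))
```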

The modal sequent rules are then converted into the rules of Fig. 10. The propositional rules are those of \(\mathsf {LNS}_\mathsf {G}\) (Fig. 1). Cut-free completeness of (the end-active variant of) this calculus again follows from simulating sequent derivations in the rightmost two components.

Fig. 10. The modal linear nested sequent rules for \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\). Here \(* \in \{\Box ,\heartsuit \}\).

Lemma 11

(Soundness). The rules of \(\mathsf {LNS}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) preserve validity of the formula interpretation of the sequents with respect to \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\) frames.

Proof

By showing that if the negation of the interpretation of the conclusion of a rule is satisfiable in a \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\) frame, then so is the negation of the interpretation of one of its premisses, using that in such frames the accessibility relation \(R_\Box \) for \(\Box \) is contained in the accessibility relation \(R_\heartsuit \) for \(\heartsuit \).    \(\square \)

Note that this also shows that the obvious adaptation of this calculus to the full nested sequent setting is sound and cut-free complete for \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\). For the focused version of the linear nested sequent rules we essentially follow the method given in Sect. 4, adapting the framework slightly to the multimodal setting by introducing two different kinds of relation terms \(xR_\Box y\) and \(x R_\heartsuit y\), corresponding to the accessibility relations of the modalities \(\Box \) and \(\heartsuit \) respectively. The frame \(Fr(\mathcal {R})\) is defined as \((|R_\Box \cup R_\heartsuit |,R_\Box \cup R_\heartsuit )\), and linelike relation sets are defined accordingly. The \(\mathsf {FLLS}\) rules are then defined straightforwardly (Fig. 11). Soundness and completeness of the resulting system \(\mathsf {FLLS}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) follow as above. Summing up we have:

Fig. 11. The modal rules of \(\mathsf {FLLS}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\). Here \(* \in \{\Box ,\heartsuit \}\) and y is fresh in rules \(\Box _{R\Box }\) and \(\Box _{R\heartsuit }\). Rule \(t_\heartsuit \) is analogous to \(t_\Box \) and is omitted. The propositional rules are as in Fig. 4 with \(R_*\) instead of R.

Theorem 12

\(\mathsf {LNS}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) and \(\mathsf {FLLS}_{\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}}\) are sound and complete for \(\mathsf {K}\mathsf {T}\oplus _\subseteq \mathsf {S4}\).

5.2 Non-normal Modal Logics

The same ideas also yield \(\mathsf {LNS}\) calculi and their focused versions for some non-normal modal logics, i.e., modal logics that are not extensions of modal logic \(\mathsf {K}\) (see [4] for an introduction). The calculi themselves are of independent interest since, to the best of our knowledge, nested sequent calculi for the logics below have not been considered before in the literature. The most basic non-normal logic, classical modal logic \(\mathsf {E}\), is given Hilbert-style by extending classical propositional logic with the congruence rule \((\mathsf {E})\) for the connective \(\Box \)

$$\begin{aligned} \frac{ A \supset B \quad B \supset A}{\Box A \supset \Box B}\ (\mathsf {E}) \end{aligned}$$

which allows exchanging logically equivalent formulae under the modality. Some of the better known extensions of this logic are formulated by the addition of axioms from

$$\begin{aligned} \mathsf {M} \quad \Box (A\wedge B) \supset (\Box A \wedge \Box B) \qquad \mathsf {C} \quad (\Box A \wedge \Box B) \supset \Box (A \wedge B) \qquad \mathsf {N} \quad \Box \top \end{aligned}$$

Figure 12 shows the modal rules of the standard cut-free sequent calculi for these logics [8], where in addition weakening is embedded in the conclusion. Extensions of \(\mathsf {E}\) are named by concatenating the names of the axioms, and in the presence of the monotonicity axiom \(\mathsf {M}\) the initial \(\mathsf {E}\) is dropped. E.g., the logic \(\mathsf {MC}\) is the extension of \(\mathsf {E}\) with axioms \(\mathsf {M}\) and \(\mathsf {C}\). Its sequent calculus \(\mathsf {G}_{\mathsf {MC}}\) is given by the standard propositional and structural rules together with the rule \((\mathsf {E})\) as well as the rules \((\mathsf {M}n)\) for \(n \ge 1\).

Fig. 12. Sequent rules and calculi for some non-normal modal logics.

We first consider monotone logics, i.e., extensions of \(\mathsf {M}\). To simulate the rules from Fig. 12 in the linear nested setting, we introduce an auxiliary nesting operator \(\mathbin {\!//^{\mathsf {m}}\!}\) to capture a state where a sequent rule has been partly processed. In contrast, the intuition for the original nesting \(\mathbin {\!//^{}\!}\) is that the simulation of a rule is finished. In view of end-active systems, we restrict the occurrences of \(\mathbin {\!//^{\mathsf {m}}\!}\) to the end of the structures. Linear nested sequents for monotone non-normal modal logics are then given by:

$$\begin{aligned} \mathsf {LNS}_{\mathsf {m}} {:}{: =} \varGamma \vdash \varDelta \;\mid \; \varGamma \vdash \varDelta \mathbin {\!//^{\mathsf {m}}\!} \varSigma \vdash \varPi \;\mid \; \varGamma \vdash \varDelta \mathbin {\!//^{}\!}\mathsf {LNS}_\mathsf {m}\end{aligned}$$
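The restriction that \(\mathbin {\!//^{\mathsf {m}}\!}\) may only occur at the end of the structure can be made explicit in a datatype, sketched here in Haskell with our own constructor names.

```haskell
type Sequent f = ([f], [f])            -- a component Gamma |- Delta

-- The grammar LNS_m: the auxiliary nesting //^m, if present, closes the
-- structure, while the ordinary nesting // may be iterated.
data LNSm f
  = End  (Sequent f)                   -- Gamma |- Delta
  | Aux  (Sequent f) (Sequent f)       -- Gamma |- Delta //^m Sigma |- Pi
  | Cons (Sequent f) (LNSm f)          -- Gamma |- Delta // rest
```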

The modal linear nested sequent rules are given in Fig. 13. The propositional rules are those of the end-active version of \(\mathsf {LNS}_\mathsf {G}\) (Fig. 1), with the restriction that they cannot be applied inside \(\mathbin {\!//^{\mathsf {m}}\!}\). The sequent rule \((\mathsf {M}n)\) is then simulated by a block of the modal rules of Fig. 13, analogously to the simulation of the \(\mathsf {k}\) rule in Sect. 2 (cf. the proof of Theorem 14).

Fig. 13. Modal linear nested sequent rules for some monotone non-normal modal logics.

Fig. 14. Modal linear nested sequent rules for some non-monotone non-normal modal logics.

For extensions of classical modal logic \(\mathsf {E}\) not containing the monotonicity axiom \(\mathsf {M}\) we need to store more information about the unfinished premisses. Thus instead of \(\mathbin {\!//^{\mathsf {m}}\!}\) we introduce a binary nesting operator \(\mathbin {\!//^{\mathsf {e}}\!}(.;.)\). Linear nested sequents then are given by

$$\begin{aligned} \mathsf {LNS}_{\mathsf {e}} {:}{: =} \varGamma \vdash \varDelta \;\mid \; \varGamma \vdash \varDelta \mathbin {\!//^{\mathsf {e}}\!} ( \varSigma \vdash \varPi ; \varOmega \vdash \varTheta ) \;\mid \; \varGamma \vdash \varDelta \mathbin {\!//^{}\!}\mathsf {LNS}_\mathsf {e}\end{aligned}$$

Figure 14 shows the modal rules for these logics, where again the propositional rules are those of end-active \(\mathsf {LNS}_\mathsf {G}\) (Fig. 1), with the restriction that they are not applied inside the nesting \(\mathbin {\!//^{\mathsf {e}}\!}\). The rule \((\mathsf {E}n)\) is then simulated by a block of these modal rules, analogously to the monotone case.

Theorem 13

(Completeness). The linear nested sequent calculi of Figs. 13 and 14 are complete w.r.t. the corresponding logics.

For showing soundness of such calculi we need a different method, though. This is due to the fact that, unlike for normal modal logics, there is no clear formula interpretation for linear nested sequents for non-normal modal logics. However, since the propositional rules cannot be applied inside the auxiliary nestings \(\mathbin {\!//^{\mathsf {m}}\!}\) resp. \(\mathbin {\!//^{\mathsf {e}}\!}\), the modal rules can only occur in blocks. Together with the fact that the end-active propositional rules can only be applied in the last component, this means that we can straightforwardly translate \(\mathsf {LNS}\) derivations back into sequent derivations.

Theorem 14

(Soundness). If a sequent \(\varGamma \vdash \varDelta \) is derivable in \(\mathsf {LNS}_\mathcal {L}\) for \(\mathcal {L}\) one of the logics presented in this section, then it is derivable in the corresponding sequent calculus.

Proof

By translating a \(\mathsf {LNS}_\mathcal {L}\) derivation into a \(\mathsf {G}_\mathcal {L}\) derivation, discarding everything apart from the last component of the linear nested sequents, and translating blocks of modal rules into the corresponding modal sequent rules. E.g., a block consisting of an application of \(\Box _L^\mathsf {m}\) followed by n applications of \(\Box _L^c\) and an application of \(\Box _R^\mathsf {m}\) is translated into an application of the rule \((\mathsf {M}n)\). The propositional rules only work on the last component and never inside the nestings \(\mathbin {\!//^{\mathsf {m}}\!}\) resp. \(\mathbin {\!//^{\mathsf {e}}\!}\), and are easily translated into the corresponding sequent rules.    \(\square \)

Remark 15

It is possible to consider linear nested sequent calculi for these non-normal modal logics in which the propositional rules are not restricted to their end-active versions. In this case, soundness can be shown by a permutation-of-rules argument, similar to the argument for levelled derivations in [11], using “levelling-preserving” invertibility of the propositional rules.

The modal \(\mathsf {FLLS}\) rules for the non-monotone non-normal modal logics are given in Fig. 15, writing \(R_\mathsf {e}\) for the relation corresponding to \(\mathbin {\!//^{\mathsf {e}}\!}\). The propositional rules are those of \(\mathsf {FLLS}_\mathsf {K}\) (Fig. 6). The systems for monotone logics are constructed similarly.

Fig. 15. The modal \(\mathsf {FLLS}\) rules for non-monotone non-normal modal logics.

6 Automatic Proof Search in Linear Nested Sequents

The method for constructing focused systems from Sect. 4 generates optimal systems, in the sense that the proof search complexity matches exactly that of the original sequent calculi. We will now go one step further and exploit the fact that these calculi sport separate left and right introduction rules for the modalities in order to present a systematic way of encoding labelled line sequents in linear logic. This enables us both (i) to use the rich linear logic meta-level theory in order to reason about the specified systems, and (ii) to use a linear logic prover in order to do automatic proof search in those systems.

Observe that, while the goal in (ii) is also achieved by implementing the focused versions of the various systems case by case, using a meta-level framework like linear logic allows the use of a single prover for various logics: all one has to do is to change the theory, i.e., the specified introduction clauses. The implementation of a number of specified systems is available online at http://subsell.logic.at/nestLL/.

6.1 From Sequent Rules to Linear Logic Clauses

We now consider focused linear logic (LLF) as a “meta-logic” and the formulae of a labelled modal logic as the “object-logic”, and illustrate how sets of bipoles in linear logic can be used to specify sequent calculi for the object-logic. Since we mostly follow the procedure of [13], here we only give a general idea.

Specifying Sequents. Let \(\hbox { obj}\) be the type of object-level formulae and let \(\lfloor \cdot \rfloor \) and \(\lceil \cdot \rceil \) be two meta-level predicates on these, i.e., both of type \(\hbox { obj}\rightarrow o\). Object-level sequents of the form \({B_1,\ldots ,B_n}\vdash {C_1,\ldots ,C_m}\) (where \(n,m\ge 0\)) are specified as the multiset \(\lfloor B_1 \rfloor ,\ldots ,\lfloor B_n \rfloor ,\lceil C_1 \rceil ,\ldots ,\lceil C_m \rceil \) within the LLF proof system. The \(\lfloor \cdot \rfloor \) and \(\lceil \cdot \rceil \) predicates identify which object-level formulas appear on which side of the sequent – brackets down for left (useful mnemonic: \(\lfloor \) for “left”) and brackets up for right. Finally, binary relations R are specified by a meta-level atomic formula of the form \(R(\cdot ,\cdot )\).

Specifying Inference Rules. Inference rules are specified by a rewriting clause that replaces the active formulae in the conclusion by the active formulae in the premises. The linear logic connectives indicate how these object-level formulae are connected: contexts are copied (\( \mathbin { \& }\)) or split (\(\otimes \)), and occur in different inference rules (\(\oplus \)) or in the same sequent (the multiplicative disjunction ⅋). For example, each rule of \(\mathsf {LLS}_\mathsf {K}\) is specified by such a clause; the clause for \(\Box _R\) is discussed below.

The correspondence between focusing on a formula and an induced big-step inference rule is particularly interesting when the focused formula is a bipole. Roughly speaking, bipoles are positive formulae in which no positive connective can be in the scope of a negative one (see [13, Definition 3]). Focusing on such a formula will produce a single positive and a single negative phase. This two-phase decomposition enables the adequate capturing of the application of an object-level inference rule by the meta-level logic. For example, consider focusing on the bipole clause \((\Box _R)\) in a context \(\varDelta = \lceil x\!:\!\Box A \rceil \cup R(z,x)\cup \varDelta '\). The resulting one-step focused derivation will: (a) consume \(\lceil x\!:\!\Box A \rceil \) and R(z, x); (b) create a fresh label y; and (c) add \(\lceil y\!:\!A \rceil \) and R(x, y) to the context. Observe that this matches exactly the application of the object-level rule \(\Box _R\).

When specifying a system (logical, computational, etc.) in a meta-level framework, it is desirable and often mandatory that the specification is faithful, that is, one step of computation on the object level should correspond to one step of logical reasoning on the meta level. This is what is called adequacy [15].

Fig. 16. The LLF specification of the modal rules of \(\mathsf {LLS}_{\mathsf {EC}}\) for the logic \(\mathsf {EC}\) from Sect. 5.2.

Definition 16

A specification of an object sequent system is adequate if provability is preserved for (open) derivations, such as inference rules themselves.

Figure 16 shows an adequate specification in LLF of the labelled system \(\mathsf {LLS}_{\mathsf {EC}}\) for the logic \(\mathsf {EC}\). Such specifications can be used for automatic proof search, as stated in the following theorem, which is readily shown using the methods in [13].

Theorem 17

Let L be a \(\mathsf {LLS}\) system and let \({\mathcal L}\) be the theory given by the clauses of an adequate specification of the inference rules of L. A sequent \(\mathcal{R},\varGamma \vdash \varDelta \) is provable in L if and only if \({\mathcal L};\mathcal{R}\Uparrow \lfloor \varGamma \rfloor ,\lceil \varDelta \rceil \) is provable in LLF.

It is an easy task to show that all the specifications shown in this paper are adequate.

Specifying Modalities. The reason why the specifications in LLF and the construction of focused systems for \(\mathsf {LLS}\) systems work rather well is the fact that the \(\mathsf {LNS}\) modal rules only manipulate a fixed number of principal formulae, i.e., one can choose some formulae and replace them with some other formulae. If there are no principal formulae, or if the object rule is context dependent, then proposing such encodings or a neat notion of focusing becomes tricky, as is often the case with sequent systems for modal logics. In [16] linear logic with subexponentials (SELL) was used as a framework for specifying a number of modal logics. Unfortunately, those encodings are far from natural and cannot be automated. Thus, in our opinion, the use of linear nested systems constitutes a significant step not only towards defining efficient methods for proof search, but also towards the construction of automatic provers for modal logics.

7 Concluding Remarks and Future Work

In this work we used the correspondence between linear nested sequents and labelled line sequents to (a) propose focused nested sequent systems for a number of modal logics (including a non-trivial bimodal logic and several non-normal logics) which match the complexity of existing sequent calculi; and (b) specify the labelled systems in linear logic, thereby obtaining automatic provers for all of them. This not only constitutes a significant step towards a better understanding of proof theory for modal logics in general, but also opens an avenue for research in proof search for a broad set of systems (not only modal).

One natural line of investigation concerns the applicability of this approach to logics based on non-classical propositional logic such as constructive modal logics. Moreover, we would like to understand whether our methods work for “proper” nested sequent calculi, i.e., calculi for logics which are not based on a cut-free sequent calculus, such as the calculi for \(\mathsf {K}\mathsf {5}\) or \(\mathsf {K}\mathsf {B}\) [2]. Finally, it might be possible to automatically extract focused systems from LLF specifications. It would be rather interesting to compare these systems with ours.