
1 Introduction: Interplays Between Reversibility and Concurrency

Concurrency Theory is being reshaped by reversibility: fine distinctions between causality and causation [37] contradicted Milner’s expansion laws [30, Example 4.11], and the study of causal models for reversible computation led to novel correctness criteria for causal semantics—both reversible and irreversible [17]. “Traditional” equivalence relations have been captured syntactically [6], while original observational equivalences were developed [30]: reversibility triggered a global reconsideration of established theories and tools, with the clear intent of providing actionable methods for reversible systems [26], novel axiomatic foundations [31] and original non-interleaving models [4, 17, 24].

Two Formalisms extend the Calculus of Communicating Systems (CCS) [34]—the godfather of the \(\pi \)-calculus [38], among others—with reversible features. Reversible CCS (RCCS) [18] and CCS with keys (CCSK) [37] are the source of most [1, 17, 32, 33]—if not all—of the later formalisms developed to enhance reversible systems in some respect (rollback operator, name-passing abilities, probabilistic features, ...). Even if those two systems share many similarities [28], they diverge in respects that are not fully understood—typically, it seems that different notions of “contexts with history” led to establishing the existence of congruences for CCSK [30, Proposition 4.9], or the impossibility thereof for RCCS [8, Theorem 2]. However, they also share some shortcomings, and we propose to tackle one of them for CCSK, by providing a syntactical definition of concurrency that is easy to manipulate and satisfies the usual sanity checks.

Reversible Concurrency is of course a central notion in the study of RCCS and CCSK, as it enables the definition of causal consistency—a principle that, intuitively, states that backward reductions can undo an action only if its consequences have already been undone—and the construction of models where concurrency and causation are decorrelated [37]. As such, it has been studied from multiple angles but, in our opinion, never in a fully satisfactory manner. In CCSK, sideways and reverse diamond properties were proven using conditions on keys and “joinable” transitions [37, Propositions 5.10 and 5.19], but to our knowledge no “definitive” definition of concurrency was proposed. Ad-hoc definitions relying on memory inclusion [25, Definition 3.1.1] or disjointness [18, Definition 7] for RCCS, and semantical notions for both RCCS [4,5,6] and CCSK [24, 36, 40], have been proposed, but, to our knowledge, none of those has ever been 1. compared to the others, or 2. compared to pre-existing forward-only definitions of concurrency.

Our Contribution introduces the first syntactical definition of concurrency for CCSK (Sect. 3.1), by extending the “universal” concurrency developed for forward-only CCS [19], which leveraged proved transition systems [22]. We make crucial use of the loop lemma (Lemma 5) to define concurrency between coinitial traces in terms of concurrency between composable traces—a mechanism that considerably reduces the definition and proof burdens: typically, the square property is derived from the sideways and reverse diamonds. We furthermore establish the correctness of this definition by proving the expected reversible properties—causal consistency (Sect. 3.3), among others—and by discussing how our definition relates to definitions of concurrency in similar systems—obtained by porting our technique to RCCS [18, 25] and its “identified” variants [8], or by restricting a notion of concurrency for the \(\pi \)-calculus—and to structural congruence (Sect. 4). With respect to this last point, we prove that our technique gives a notion of concurrency that either matches or subsumes existing definitions, which sometimes lack a notion of concurrency for transitions of opposite directions.

Additional details are contained in our preliminary technical report [3], namely all proofs [3, Sect. B] and the technical justification of the claims made in Sect. 4 about the “universality” of our approach [3, Sect. C].

2 Finite and Reversible Process Calculi

2.1 Finite, Forward-Only CCS

Finite Core CCS. We briefly recall the (forward-only) “finite fragment” of the core of CCS (simply called CCS), following a standard presentation [14].

Definition 1

((Co-)names and labels). Let \(\mathsf {N}=\{a,b,c,\dots \}\) be a set of names and \(\overline{\mathsf {N}}=\{\overline{a},\overline{b},\overline{c},\dots \}\) its set of co-names. The set of labels \(\mathsf {L}\) is \(\mathsf {N}\,\cup \,\overline{\mathsf {N}}\,\cup \,\{\tau \}\), and we use \(\alpha \), \(\beta \) (resp. \(\lambda \)) to range over \(\mathsf {L}\) (resp. \(\mathsf {L}\backslash \{\tau \}\)). A bijection \(\overline{\cdot }:\mathsf {N}\rightarrow \overline{\mathsf {N}}\), whose inverse is also written \(\overline{\cdot }\), gives the complement of a name.

Definition 2

(Operators). We let \(P, Q\) range over CCS processes, defined as usual, using restriction (\(P \backslash \alpha \)), sum (\(P + Q\)), prefix (\(\alpha .P\)) and parallel composition (\(P\mid Q\)). The inactive process \(0\) is omitted when preceded by a prefix, and the binding power of the operators [34, p. 68], from highest to lowest, is \(\backslash \alpha \), \(\alpha .\), \(\mid \) and \(+\), so that e.g. \(\alpha . P + Q \backslash \alpha \mid P + a\) is to be read as \( (\alpha . P) + (((Q \backslash \alpha ) \mid P) + (a.0)) \). In a process \(P \mid Q\) (resp. \(P + Q\)), we call \(P\) and \(Q\) threads (resp. branches).

The labeled transition system for CCS, denoted , is recalled in Fig. 1.

Fig. 1. Rules of the labeled transition system (LTS) for CCS

2.2 CCSK: A “Keyed” Reversible Concurrent Calculus

CCSK captures uncontrolled reversibility using two symmetric LTSs—one for forward computation, one for backward computation—that manipulate keys marking executed prefixes, to guarantee that a synchronization cannot be reverted without both parties agreeing. We use the syntax of the latest paper on the topic [30], which differs slightly [30, Remark 4.2] from the classical definition [37]. However, those changes have no impact here, since we refrain from using CCSK’s newly introduced structural congruence, although we discuss it in Sect. 4.

Definition 3

(Keys, prefixes and CCSK processes). Let \(\mathsf {K}=\{m,n,\dots \}\) be a set of keys; we let \(k\) range over them. Prefixes are of the form \(\alpha [k]\)—we call them keyed labels—or \(\alpha \). CCSK processes are CCS processes whose prefixes can also be of the form \(\alpha [k]\); we let \(X\), \(Y\) range over them.

The forward LTS for CCSK, which we denote , is given in Fig. 2—with \(\mathrm {key}\) and \(\mathrm {std}\) defined below. The reverse LTS is the exact symmetric of  [30, Figure 2] (it can also be read from Fig. 3), and we write if or . For all three types of arrows, we sometimes omit the label and keys when they are not relevant, and mark their transitive closures with \(^*\). As usual, we restrict ourselves to reachable processes, defined below.

Definition 4

(Standard and reachable processes). The set of keys occurring in \(X\) is written \(\mathrm {key}(X)\), and \(X\) is standard—\(\mathrm {std}(X)\)—iff \(\mathrm {key}(X) = \emptyset \). If there exists a process \(O_{X}\) s.t. \(\mathrm {std}(O_{X})\) and , then \(X\) is reachable.
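Both \(\mathrm {key}\) and \(\mathrm {std}\) are directly computable. As a purely illustrative sketch, the following Python fragment implements them over a hypothetical AST for CCSK processes; the class names are ours, not the paper’s.

```python
from dataclasses import dataclass

# Hypothetical AST for CCSK processes (our encoding, for illustration only).
@dataclass(frozen=True)
class Nil:                       # 0
    pass

@dataclass(frozen=True)
class Prefix:                    # alpha.X (unexecuted prefix)
    label: str
    proc: object

@dataclass(frozen=True)
class KeyedPrefix:               # alpha[k].X (executed prefix)
    label: str
    key: str
    proc: object

@dataclass(frozen=True)
class Par:                       # X | Y
    left: object
    right: object

@dataclass(frozen=True)
class Sum:                       # X + Y
    left: object
    right: object

@dataclass(frozen=True)
class Restrict:                  # X \ a
    proc: object
    name: str

def key(x):
    """key(X): the set of keys occurring in X (Definition 4)."""
    if isinstance(x, KeyedPrefix):
        return {x.key} | key(x.proc)
    if isinstance(x, Prefix):
        return key(x.proc)
    if isinstance(x, (Par, Sum)):
        return key(x.left) | key(x.right)
    if isinstance(x, Restrict):
        return key(x.proc)
    return set()                 # Nil

def std(x):
    """std(X) iff key(X) is empty (Definition 4)."""
    return not key(x)
```

For instance, for \(X = a[m].b \mid c\) the sketch returns \(\mathrm {key}(X) = \{m\}\), so \(X\) is not standard, while \(a.0\) is.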

The reader eager to see this system in action can fast-forward to Example 1, but should be aware that this example uses proved labels, introduced next.

Fig. 2. Rules of the forward labeled transition system (LTS) for CCSK

3 A New Causal Semantics for CCSK

The only causal semantics for CCS with replication we are aware of [19]Footnote 1 has remained largely unnoticed, despite some interesting qualities: 1. it enables the definition of causality for replication while agreeing with pre-existing causal semantics of CCS and CCS with recursion [19, Theorem 1], 2. it leverages the technique of proved transition systems, which encodes information about the derivation in the labels [22], 3. it was instrumental in one of the first results connecting implicit computational complexity and distributed processes [23], and 4. last but not least, as we will see below, it allows defining an elegant notion of causality for CCSK with “built-in” reversibility, as the exact same definition is used for forward and backward transitions, without explicit mention of keys or directions. We believe our choice is additionally compact, elegant and suited to reversible computation: defining concurrency on composable transitions first allows keys to be ignored in our definition, as the LTS guarantees that the same key will not be re-used. Then, the loop lemma allows transitions to be “reversed”, so that concurrency on coinitial transitions can be defined from concurrency on composable transitions. This lets the labels carry little information—the direction is not needed—and yields a definition insensitive to keys and identifiers, for the very modest cost of prefixing labels with annotations tracking the thread(s) or branch(es) performing the transition.

3.1 Proved Labeled Transition System for CCSK

We adapt the proved transition system [15, 19, 20] to CCSK: this technique enriches the transition labels with prefixes that describe parts of their derivation, to keep track of their dependencies or lack thereof. We adapt an earlier formalism [21]—including information about sums [19, footnote 2]—but extend the concurrency relation to internal (i.e. \(\tau \)-) transitions, omitted from recent work [19, Definition 3] but present in older articles [15, Definition 2.3].

Definition 5

(Enhanced keyed labels). Let \(\upsilon \), \(\upsilon _{\mathrm {L}}\) and \(\upsilon _{\mathrm {R}}\) range over strings in \(\{\mid _{\mathrm {L}}, \mid _{\mathrm {R}}, +_{\mathrm {L}}, +_{\mathrm {R}} \}^*\), enhanced keyed labels are defined as

$$\begin{aligned} \theta \,{:}{=}\,\upsilon \alpha [k] ~\Vert ~ \upsilon \langle \mid _{\mathrm {L}} \upsilon _{\mathrm {L}} \alpha [k], \mid _{\mathrm {R}} \upsilon _{\mathrm {R}} \overline{\alpha }[k]\rangle \end{aligned}$$

We write \(\mathsf {E}\) for the set of enhanced keyed labels, and define \(\ell : \mathsf {E}\rightarrow \mathsf {L}\) and :
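The display defining \(\ell \) is not reproduced above. Under the standard reading—\(\ell \) extracts the underlying label, mapping a synchronization pair to \(\tau \), and the second (elided) map presumably extracts the key—a minimal Python sketch of enhanced keyed labels could look as follows; all names and the encoding are ours.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Basic:                     # upsilon alpha[k]
    path: Tuple[str, ...]        # string over {'|L', '|R', '+L', '+R'}
    label: str                   # alpha ('a', '~a' for a co-name, 'tau')
    key: str                     # k

@dataclass(frozen=True)
class Synch:                     # upsilon <|L upsilonL alpha[k], |R upsilonR ~alpha[k]>
    path: Tuple[str, ...]
    left: Basic                  # the |L component
    right: Basic                 # the |R component (complementary label, same key)

def ell(theta):
    """l : E -> L, the label underlying an enhanced keyed label;
    a synchronization pair is mapped to tau."""
    return theta.label if isinstance(theta, Basic) else "tau"

def key_of(theta):
    """The key carried by an enhanced keyed label (both components
    of a synchronization share the same key)."""
    return theta.key if isinstance(theta, Basic) else theta.left.key
```

For example, \(\ell (\mid _{\mathrm {L}} a[m]) = a\), while \(\ell \) of a synchronization pair is \(\tau \).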

We present in Fig. 3 the rules of the proved forward and backward LTS for CCSK. The rules \(\mid _{\mathrm {R}}\), \(\mid _{\mathrm {R}}^{\bullet }\), \(+_{\mathrm {R}}\) and \(+_{\mathrm {R}}^{\bullet }\) are omitted but can easily be inferred. The derivations of this LTS are in bijection with those of CCSK’s original LTS:

Lemma 1

(Adequacy of the proved labeled transition system). The transition can be derived using Fig. 2 iff with and \(\ell (\theta ) = \alpha \) can be derived using Fig. 3.

Definition 6

(Dependency relation). The dependency relation on enhanced keyed labels is induced by the axioms of Fig. 4, for \(d \in \{\mathrm {L}, \mathrm {R}\}\).

Fig. 3. Rules of the proved LTS for CCSK

Fig. 4. Dependency Relation on Enhanced Keyed Labels

A dependency \(\theta _0 \lessdot \theta _1\) means “whenever there is a trace in which \(\theta _0\) occurs before \(\theta _1\), the two associated transitions are causally related”. The following definitions will enable more formal examples, but we can already stress that 1. the “action” rule enforces that executing or reversing a prefix at top level, e.g. or , makes the prefix (\(\alpha [k]\)) a dependency of all further transitions; 2. as the forward and backward versions of the same rule share the same enhanced keyed labels, in a trace where a transition and its reverse both occur, the first occurrence is a dependency of the second, since \(\lessdot \) is reflexive; 3. no additional relation (such as a conflict or causality relation) is needed to define concurrency; 4. this dependency relation matches the forward-only definition for action and parallel composition, but not for sum: while the original system [19, Definition 2] requires only \(+_{d} \theta \lessdot \theta '\) if \(\theta \lessdot \theta '\), this definition would not faithfully capture the dependencies in our system, where the sum operator is preserved after a reduction.

Definition 7

(Transitions and traces). In a transition , \(X\) is the source and \(X'\) is the target of \(t\). Two transitions are coinitial (resp. cofinal) if they have the same source (resp. target). Transitions \(t_1\) and \(t_2\) are composable, \(t_1;t_2\), if the target of \(t_1\) is the source of \(t_2\). The reverse of is , and similarly if \(t\) is forward, letting \((t^{\bullet })^{\bullet } = t\) (Footnote 2).

A sequence of pairwise composable transitions \(t_1; \cdots ; t_n\) is called a trace, denoted \(T\), and \(\epsilon \) is the empty trace.

Definition 8

(Causality relation). Let \(T = t_1; \cdots ; t_n\) be a trace and \(i, j \in \{1, \cdots , n\}\) with \(i < j\): \(\theta _i\) causes \(\theta _j\) in \(T\) (\(\theta _i \lessdot _{T} \theta _j\)) iff \(\theta _i \lessdot \theta _j\).

Definition 9

((Composable) Concurrency). Let \(T = t_1; \cdots ; t_n\) be a trace and \(i, j \in \{1, \cdots , n\}\): \(\theta _i\) is concurrent with \(\theta _j\) (\(\theta _i \smile _T \theta _j\), or simply \(\theta _i \smile \theta _j\)) iff neither \(\theta _i \lessdot _{T} \theta _j\) nor \(\theta _j \lessdot _{T} \theta _i\).

Coinitial concurrency (Definition 11) will later on be defined using composable concurrency and the loop lemma (Lemma 5).
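To make Definitions 8 and 9 concrete, here is a deliberately partial Python sketch of the dependency check: it covers only reflexivity, the action axiom, and propagation through a shared parallel annotation, while the sum and synchronization axioms of Fig. 4 are omitted; the encoding of labels as (path, prefix) pairs is ours. Concurrency is then simply the absence of dependency in both directions.

```python
# An enhanced keyed label is encoded as (path, prefix), e.g.
# (("|L",), "a[m]") for |L a[m]; the empty path is a top-level action.

def dep(t1, t2):
    """Partial sketch of theta1 <. theta2: reflexivity, the action
    axiom (a top-level alpha[k] is a dependency of every later label),
    and propagation through a common parallel annotation only."""
    if t1 == t2:
        return True                          # <. is reflexive
    path1, _ = t1
    path2, _ = t2
    if not path1:
        return True                          # action axiom: alpha[k] <. theta
    if path2 and path1[0] == path2[0] and path1[0] in ("|L", "|R"):
        # |d theta <. |d theta' whenever theta <. theta'
        return dep((path1[1:], t1[1]), (path2[1:], t2[1]))
    return False

def concurrent(t1, t2):
    """theta1 is concurrent with theta2 iff neither depends on the
    other (Definition 9)."""
    return not dep(t1, t2) and not dep(t2, t1)
```

On this fragment, labels located in distinct parallel threads come out concurrent, while a top-level action is a dependency of every later label, as expected.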

Example 1

Consider the following trace, dependencies, and concurrent transitions, where the subscripts to \(\lessdot \) and \(\smile \) have been omitted:

[trace display omitted]

To prove the results in the next section, we need an intuitive and straightforward lemma (Lemma 2) that decomposes a concurrent trace involving two threads into one trace involving one thread, while maintaining concurrency, i.e. proving that a trace e.g. of the form with \(\mid _{\mathrm {L}}\theta \smile _{T} \mid _{\mathrm {L}}\theta '\) can be decomposed into a trace with \(\theta \smile _{T'} \theta '\). A similar lemma is also needed to decompose sums (Lemma 3); both proofs proceed by simple case analysis and present no difficulty.

Lemma 2

(Decomposing concurrent parallel transitions). Let \(i \in \{1, 2\}\) and \(\theta _i \in \{\mid _{\mathrm {L}} \theta _i', \mid _{\mathrm {R}} \theta _i'', \langle \mid _{\mathrm {L}} \theta _i', \mid _{\mathrm {R}} \theta _i''\rangle \}\), define \(\pi _{\mathrm {L}} (X_{\mathrm {L}} \mid X_{\mathrm {R}}) = X_{\mathrm {L}}\), \(\pi _{\mathrm {L}} (\mid _{\mathrm {L}} \theta ) = \theta \), \(\pi _{\mathrm {L}}(\langle \mid _{\mathrm {L}} \theta _{\mathrm {L}}, \mid _{\mathrm {R}} \theta _{\mathrm {R}}\rangle ) = \theta _{\mathrm {L}}\), \(\pi _{\mathrm {L}}(\mid _{\mathrm {R}} \theta ) = \text {undefined}\), and define similarly \(\pi _{\mathrm {R}}\).

Whenever with \(\theta _1 \smile _{T} \theta _2\), then for \(d \in \{\mathrm {L}, \mathrm {R}\}\), if \(\pi _{d}(\theta _1)\) and \(\pi _{d}(\theta _2)\) are both defined, then, \(\pi _{d}(\theta _1) \smile _{\pi _{d}(T)} \pi _{d}(\theta _2)\) with .

Proof

The trace \(\pi _d(T)\) exists by virtue of the rules \(\mid _{d}\), syn., or their reverses. It remains to prove that \(\pi _{d}(\theta _1) \smile _{\pi _{d}(T)} \pi _{d}(\theta _2)\) holds.

The proof is by case analysis on \(\theta _1\) and \(\theta _2\), but always follows the same pattern. As we know that both \(\pi _{d}(\theta _1)\) and \(\pi _{d}(\theta _2)\) need to be defined, there are 7 cases:

[case display omitted]

By symmetry, we can bring this number down to three:

[case display omitted]

In each case, assume \(\pi _{\mathrm {L}}(\theta _1) = \theta _1' \smile _{\pi _{\mathrm {L}}(T)} \theta _2' = \pi _{\mathrm {L}}(\theta _2)\) does not hold. Then it must be the case that either \(\theta _1' \lessdot _{\pi _{\mathrm {L}}(T)} \theta _{2}'\) or \(\theta _2' \lessdot _{\pi _{\mathrm {L}}(T)} \theta _{1}'\), and since both can be treated the same way thanks to symmetry, we only need to detail the following three cases:

  • a) If \(\theta _1' \lessdot _{\pi _{\mathrm {L}}(T)} \theta _{2}'\), then \(\theta _1' \lessdot \theta _{2}'\), and it is immediate that \(\theta _1 = \mid _{\mathrm {L}}\theta '_1 \lessdot _{T} \mid _{\mathrm {L}}\theta _2' = \theta _{2}\), contradicting \(\theta _1 \smile _{T} \theta _2\).

  • b) If \(\theta _1' \lessdot _{\pi _{\mathrm {L}}(T)} \theta _{2}'\), then \(\theta _1' \lessdot \theta _{2}'\), \(\mid _{\mathrm {L}}\theta _1' \lessdot \mid _{\mathrm {L}} \theta _{2}'\) and \(\langle \mid _{\mathrm {L}}\theta _1', \mid _{\mathrm {R}}\theta ''_1\rangle \lessdot \mid _{\mathrm {L}} \theta _{2}'\), from which we can deduce \(\theta _1 \lessdot _{T} \theta _{2}\), contradicting \(\theta _1 \smile _{T} \theta _2\).

  • c) If \(\theta _1' \lessdot _{\pi _{\mathrm {L}}(T)} \theta _{2}'\), then \(\theta _1' \lessdot \theta _{2}'\), \(\mid _{\mathrm {L}}\theta _1' \lessdot \mid _{\mathrm {L}} \theta _{2}'\) and \(\langle \mid _{\mathrm {L}}\theta _1', \mid _{\mathrm {R}}\theta ''_1\rangle \lessdot \langle \mid _{\mathrm {L}} \theta _{2}', \mid _{\mathrm {R}} \theta _2'\rangle \), from which we can deduce \(\theta _1 \lessdot _{T} \theta _{2}\), contradicting \(\theta _1 \smile _{T} \theta _2\).

Hence, in all cases, assuming that \(\pi _{d}(\theta _1) \smile _{\pi _{d}(T)} \pi _{d}(\theta _2)\) does not hold leads to a contradiction.    \(\square \)

Lemma 3

(Decomposing concurrent sum transitions). Let \(i \in \{1, 2\}\) and \(\theta _i \in \{+_{\mathrm {L}} \theta _i', +_{\mathrm {R}} \theta _i''\}\), define \(\rho _{\mathrm {L}} (X_{\mathrm {L}} + X_{\mathrm {R}}) = X_{\mathrm {L}}\), \(\rho _{\mathrm {L}} (+_{\mathrm {L}} \theta ) = \theta \), \(\rho _{\mathrm {L}}(+_{\mathrm {R}} \theta ) = \text {undefined}\), and define similarly \(\rho _{\mathrm {R}}\).

Whenever with \(\theta _1 \smile _{T} \theta _2\), then for \(d \in \{\mathrm {L}, \mathrm {R}\}\), if \(\rho _{d}(\theta _1)\) and \(\rho _{d}(\theta _2)\) are both defined, then \(\rho _{d}(\theta _1) \smile _{\rho _{d}(T)} \rho _{d}(\theta _2)\) with .

Proof

The trace \(\rho _d(T)\) exists by virtue of the rule \(+_{d}\) or its reverse. It remains to prove that \(\rho _{d}(\theta _1) \smile _{\rho _{d}(T)} \rho _{d}(\theta _2)\) holds.

The proof is by case analysis on \(\theta _1\) and \(\theta _2\), but always follows the same pattern. As we know that both \(\rho _{d}(\theta _1)\) and \(\rho _{d}(\theta _2)\) need to be defined, there are 2 cases:

[case display omitted]

In each case, assume \(\rho _{\mathrm {L}}(\theta _1) = \theta _1' \smile _{\rho _{\mathrm {L}}(T)} \theta _2' = \rho _{\mathrm {L}}(\theta _2)\) does not hold, then it is immediate to note that \(\theta _1 \smile _T \theta _2\) cannot hold either, a contradiction.    \(\square \)

3.2 Diamonds and Squares: Concurrency in Action

Square properties and concurrency diamonds express that concurrent transitions are actually independent, in the sense that they can be swapped if they are composable, or “later on” agree if they are co-initial. That our definition of concurrency enables those properties, and allows them to be derived from one another, is a good indication that it is resilient and convenient.

Theorem 1

(Sideways diamond). For all with \(\theta _1 \smile \theta _2\), there exists \(X_2\) s.t. .

The proof, sketched below, requires particular care when \(X\) is not standard. Using pre. is transparent from the perspective of enhanced keyed labels, as no “memory” of its usage is stored in the label of the transition. This is essentially because—exactly as for act.—all the dependency information is already present in the term or its enhanced keyed label. To make this more formal, we introduce a function that “removes” a keyed label, and prove that it does not affect derivability.

Definition 10

Given \(\alpha \) and \(k\), we define \(\mathrm {rm}_{\alpha [k]}\) by \(\mathrm {rm}_{\alpha [k]} (0) = 0\) and

$$\begin{aligned}&\mathrm {rm}_{\alpha [k]} (\beta .X) = \beta .X \qquad \qquad \qquad \, \mathrm {rm}_{\alpha [k]}(X \mid Y) = \mathrm {rm}_{\alpha [k]}(X) \mid \mathrm {rm}_{\alpha [k]}(Y) \\&\mathrm {rm}_{\alpha [k]} (X\backslash a) = (\mathrm {rm}_{\alpha [k]}X)\backslash a \qquad \mathrm {rm}_{\alpha [k]}(X + Y) = \mathrm {rm}_{\alpha [k]}(X) + \mathrm {rm}_{\alpha [k]}(Y) \\&\mathrm {rm}_{\alpha [k]} (\beta [m].X) ={\left\{ \begin{array}{ll} X &{} \text {if }\alpha = \beta \text { and }k = m \\ \beta [m].\mathrm {rm}_{\alpha [k]}(X) &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

We let \(\mathrm {rm}_{k}^{\lambda } = \mathrm {rm}_{\lambda [k]} \circ \mathrm {rm}_{\overline{\lambda }[k]}\) if \(\lambda \in \mathsf {L}\backslash \{\tau \}\), \(\mathrm {rm}_{k}^{\tau } = \mathrm {rm}_{\tau [k]}\) otherwise.

The function \(\mathrm {rm}_{\alpha [k]}\) simply looks for an occurrence of \(\alpha [k]\) and removes it: as there is at most one, there is no need for a recursive call once it is found. This function preserves derivability of transitions that do not involve the removed key:
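Definition 10 is entirely syntax-directed, so it translates directly into code. The following Python sketch (over a hypothetical process AST of our own devising) implements \(\mathrm {rm}_{\alpha [k]}\), including the absence of a recursive call once the occurrence is found.

```python
from dataclasses import dataclass

# Hypothetical AST for CCSK processes (our encoding, for illustration only).
@dataclass(frozen=True)
class Nil:                           # 0
    pass

@dataclass(frozen=True)
class Prefix:                        # beta.X
    label: str
    proc: object

@dataclass(frozen=True)
class KeyedPrefix:                   # beta[m].X
    label: str
    key: str
    proc: object

@dataclass(frozen=True)
class Par:                           # X | Y
    left: object
    right: object

@dataclass(frozen=True)
class Sum:                           # X + Y
    left: object
    right: object

@dataclass(frozen=True)
class Restrict:                      # X \ a
    proc: object
    name: str

def rm(alpha, k, x):
    """rm_{alpha[k]}(X), following Definition 10."""
    if isinstance(x, (Nil, Prefix)):
        return x                     # rm_{alpha[k]}(beta.X) = beta.X
    if isinstance(x, KeyedPrefix):
        if x.label == alpha and x.key == k:
            return x.proc            # found: remove it, no recursive call
        return KeyedPrefix(x.label, x.key, rm(alpha, k, x.proc))
    if isinstance(x, Par):
        return Par(rm(alpha, k, x.left), rm(alpha, k, x.right))
    if isinstance(x, Sum):
        return Sum(rm(alpha, k, x.left), rm(alpha, k, x.right))
    return Restrict(rm(alpha, k, x.proc), x.name)
```

The composite \(\mathrm {rm}_{k}^{\lambda }\) is then just two successive calls, one for \(\lambda \) and one for \(\overline{\lambda }\).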

Lemma 4

For all \(X\), \(\alpha \) and \(k\), with iff .

Proof

Assume \(\alpha [k]\) or \(\overline{\alpha }[k]\) (if \(\alpha \ne \tau \)) occurs in \(X\) (otherwise the result is straightforward); as , the same holds for \(Y\). As keys occur at most twice, attached to complementary names, in reachable processes [30, Lemma 3.4], \(k \notin \mathrm {key}(\mathrm {rm}_{k}^{\alpha }(X))\,\cup \,\mathrm {key}(\mathrm {rm}_{k}^{\alpha }(Y))\). The proof then follows by induction on the length of the derivation for : as neither pre. nor \(\text {pre.}^{\bullet }\) changes the enhanced keyed label, we can simply “take out” the occurrences of those rules when they concern \(\alpha [k]\) or \(\overline{\alpha }[k]\) and still obtain a valid derivation, with the same enhanced keyed label, hence yielding . For the converse direction, pre. or \(\text {pre.}^{\bullet }\) can be reintroduced into the derivation tree at the appropriate location, as \(k\) is fresh in \(\mathrm {rm}_{k}^{\alpha }(X)\) and \(\mathrm {rm}_{k}^{\alpha }(Y)\).    \(\square \)

Proof

(of Theorem 1 (sketch)). The proof proceeds by induction on the length of the deduction for the derivation for , using Lemmas 2 and 3 to enable the induction hypothesis if \(\theta _1\) is not a prefix. The only delicate case is if the last rule is pre.: in this case, there exists \(\alpha \), \(k\), \(X'\) and \(X_1'\) s.t. and . As ,  [30, Lemma 3.4], and since \(\theta _1 \smile \theta _2\), we apply Lemma 4 twice to obtain the trace \(T\):

with \(\theta _1 \smile _{T} \theta _2\), and we can use the induction hypothesis to obtain \(X_2\) s.t. . Since , we can append pre. to the derivation of to obtain . Using Lemma 4 one last time, we obtain that implies , which concludes this case.    \(\square \)

Example 2

Re-using Example 1, since \(\mid _{\mathrm {L}} \overline{b}[n] \smile \mid _{\mathrm {R}}+_{\mathrm {R}}c[n']\) in

Theorem 1 allows us to re-arrange this trace as

Theorem 2

(Reverse diamonds).

  1. For all with \(\theta _1 \smile \theta _2\), there exists \(X_2\) s.t. .

  2. For all with \(\theta _1 \smile \theta _2\), there exists \(X_2\) s.t. .

It should be noted that in the particular case of , or \(t^{\bullet };t\), \(\theta _1 \lessdot \theta _1\) by reflexivity of \(\lessdot \), and hence the reverse diamonds cannot apply. The name “reverse diamond” was sometimes used for different properties [37, Proposition 5.10], [36, Definition 2.3] that, in the presence of the loop lemma (Lemma 5), are equivalent to ours once the condition on keys is replaced by our condition on concurrency. It is, however, to our knowledge the first time this property, stated in this particular way, is isolated and studied on its own.

Proof

(Sketch). We can re-use the proof of Theorem 1 almost as it is, since Lemmas 4, 2 and 3 hold for both directions.

For 1., the only case that diverges is when the deduction for has pre. as its last rule. In this case, , but we cannot immediately deduce that . However, if , then we would have , but this application of \(\text {act.}^{\bullet }\) is not valid, as \(\mathrm {std}(X_1')\) does not hold, since \(X_1'\) was obtained from \(X'\) after it made a forward transition. Hence, we obtain that \(\mathrm {key}(\theta _2) \ne k\) and we can carry out the rest of the proof as before.

For 2., the main difference lies in leveraging the dependency of sum prefixes between e.g. \(+_{\mathrm {R}}\theta _1\) and \(+_{\mathrm {L}}\theta _2\) in .    \(\square \)

Example 3

Re-using Example 1, since \(\mid _{\mathrm {R}}+_{\mathrm {R}}c[n'] \smile \mid _{\mathrm {L}}\overline{b}[n]\) in

Theorem 2 allows us to re-arrange this trace as

Concurrency on coinitial traces is defined using concurrency on composable traces and the loop lemma, which is immediate in CCSK.

Lemma 5

(Loop lemma [37, Prop. 5.1]). For all , there exists a unique , and conversely. We let \((t^{\bullet })^{\bullet } = t\).

Definition 11

(Coinitial concurrency). Let and be two coinitial transitions, \(\theta _1\) is concurrent with \(\theta _2\) (\(\theta _1 \smile \theta _2\)) iff \(\theta _1 \smile \theta _2\) in the trace .

To our knowledge, this is the first time co-initial concurrency is defined from composable concurrency: while the axiomatic approach discussed coinitial concurrency [31, Section 5], it primarily studied independence relations that could be defined in any way, and did not connect these two notions of concurrency.

Theorem 3

(Square property). For all and with \(\theta _1 \smile \theta _2\), there exist and .

Proof

(sketch). By Definition 11, we have that \(\theta _1 \smile \theta _2\) in . Hence, depending on the direction of the arrows, and possibly using the loop lemma to convert two backward transitions into two forward ones, we obtain by Theorem 1 or 2 , and we let \(t'_1 = t''_1\) and \(t'_2 = {t''}_2^{\bullet }\):

[diagram omitted]

Example 4

Following Example 1, we can get e.g. from and the transitions converging to \(a[m] . \overline{b} \mid b[n'] +c\).

3.3 Causal Consistency

Formally, causal consistency (Theorem 4) states that any two coinitial and cofinal traces are causally equivalent:

Definition 12

(Causally equivalent). Two traces \(T_1\), \(T_2\) are causally equivalent if they are in the least equivalence relation closed under composition satisfying \(t; t^{\bullet } \sim \epsilon \) and \(t_1;t_2' \sim t_2;t_1'\) for any , .

Theorem 4

All coinitial and cofinal traces are causally equivalent.

The “axiomatic approach” to reversible computation [31] makes it possible to obtain causal consistency from other properties that are generally easier to prove.

Lemma 6

(Backward transitions are concurrent). Any two different coinitial backward transitions and are concurrent.

Proof

(Sketch). The proof is by induction on the size of \(\theta _1\) and leverages that for both transitions to be different.    \(\square \)

Lemma 7

(Well-foundedness). For all \(X\) there exists \(n \in \mathbb {N}\), \(X_0, \cdots , X_n\) s.t. , with \(\mathrm {std}(X_0)\).

This lemma forbids infinite reverse computation, and is obvious in CCSK as any backward transition strictly decreases the number of occurrences of keys.
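This termination measure can be made explicit. As an illustrative sketch (the tuple encoding of processes is ours), the function below counts occurrences of keyed prefixes, a quantity that every backward transition strictly decreases: by one for \(\text {act.}^{\bullet }\), and by two for a reversed synchronization.

```python
def key_occurrences(x):
    """Number of occurrences of keyed prefixes alpha[k] in a process,
    here encoded as nested tuples: ("nil",), ("pref", a, X),
    ("key", a, k, X), ("par", X, Y), ("sum", X, Y), ("res", X, a).
    This is the termination measure for backward computation."""
    tag = x[0]
    if tag == "nil":
        return 0
    if tag == "pref":
        return key_occurrences(x[2])
    if tag == "key":
        return 1 + key_occurrences(x[3])
    if tag in ("par", "sum"):
        return key_occurrences(x[1]) + key_occurrences(x[2])
    return key_occurrences(x[1])     # res
```

Since the measure is a natural number and strictly decreases along backward transitions, any sequence of backward steps from a reachable process reaches a standard process (measure 0) in finitely many steps.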

Proof

(of Theorem 4). We can re-use the results of the axiomatic approach [31], since our forward LTS is the symmetric of our backward LTS, and our concurrency relation (which the authors call the independence relation, following a common usage [39, Definition 3.7]) is indeed an irreflexive symmetric relation: symmetry is immediate by definition, and irreflexivity follows from the reflexivity of \(\lessdot \), which prevents any transition from being concurrent with itself. Then, by Theorem 3 and Lemma 6, the parabolic lemma holds [31, Proposition 3.4], and since the parabolic lemma and well-foundedness (Lemma 7) hold, causal consistency holds as well [31, Proposition 3.5].    \(\square \)

We use here the axiomatic approach [31] in a very narrow sense, to obtain only causal consistency—which was our main goal—but we could have used those lemmas to obtain many other desirable properties of this system “for free”. An interesting problem, as suggested by a reviewer, would be to establish whether our system enjoys Coinitial Propagation of Independence [31, Definition 4.2], which in turn would allow it to fulfil Independence of Diamonds [31, Definition 4.6].

Example 5

Re-using the full trace presented in Example 1, we can re-organize the transitions using the diamonds so that every undone transition is undone immediately, and we obtain, up to causal equivalence, the trace

4 Structural Congruence, Universality and Other Criteria

Causality for a semantics of concurrent computation should satisfy a variety of criteria, the squares and diamonds being the starting point, and causal consistency being arguably the most important. This section briefly presents additional criteria and defends the “universality” of our approach. Since this last point requires introducing two other reversible systems and four other definitions of concurrency, the technical content is relegated to our research report [3, Sect. C], but we would like to stress that the results stated below are fairly routine to prove—introducing all the material needed for the comparisons is the only lengthy part.

Concurrency-Preserving Structural Congruences. “Denotationality” [17, Section 6] is a criterion stating that structural congruence should be preserved by the causal semantics. Unfortunately, our system meets this criterion only vacuously—since it does not possess a structural congruence. The “usual” structural congruence is missing from all the proved transition systems [15, 20, 22, 23], or lacks the associativity and commutativity of parallel composition [21, p. 242]. While adding such a congruence would benefit expressiveness, making it interact nicely with the derived proof system and the reversible features [30, Section 4], [7] is a challenge we prefer to postpone.

Comparing with Concurrency Inspired by Reversible \(\pi \)-Calculus. It is possible to restrict the definition of concurrency for a reversible \(\pi \)-calculus extending CCSK [32] back to a sum-free version of CCSK. The structural causality [32, Definition 22]—for transitions of the same direction—and the conflict relation [32, Definition 25]—for transitions of opposite directions—can then both be proven to match our dependency relation in a rather straightforward way, hence proving the adequacy of the two notions. However, this inherited concurrency relation cannot be straightforwardly extended to the sum operator, and requires two relations to be defined: for those reasons, we argue that our solution is more convenient to use. It should also be noted that this concurrency does not meet the denotationality criterion either, when the congruence includes renaming of bound keys [30].

A similar work could have been done by restricting concurrency for e.g. reversible higher-order \(\pi \)-calculus [29, Definition 9], reversible \(\pi \)-calculus [16, Definition 4.1] or croll-\(\pi \) [27, Definition 1], but we reserve it for future work, and would prefer to extend our definition to a reversible \(\pi \)-calculus rather than proceeding the other way around.

Comparing with RCCS-Inspired Systems. In RCCS, the definition of concurrency fluctuated between a condition on memory inclusion for composable transitions [25, Definition 3.1.1] and a condition on disjointness of memories for coinitial transitions [18, Definition 7], both requiring the entire memory of the thread to label the transitions, and neither being defined on transitions of opposite directions. It is possible to adapt our proved system to RCCS, and to prove that the resulting concurrency relation is equivalent to those two definitions when restricted to transitions of equal direction. A similar adaptation is possible for reversible and identified CCS [8], which came with yet another definition of concurrency, leveraging its built-in mechanism to generate identifiers.

Optimality, Parabolic Lemma, and RPI. The optimality criterion is the adequacy of the concurrency definitions for the LTS and for the reduction semantics [16, Theorem 5.6]. While this criterion requires a reduction semantics and a notion of reduction context to be formally proven, we believe it is easy to convince oneself that the gist of this property (namely, that non-\(\tau \)-transitions are concurrent iff there exists a “closing” context in which the resulting \(\tau \)-transitions are still concurrent) holds in our system: as concurrency on \(\tau \)-transitions is defined in terms of the concurrency of their components (i.e., \(\langle \theta ^1_{\mathrm {R}}, \theta ^1_{\mathrm {L}} \rangle \smile \langle \theta ^2_{\mathrm {R}}, \theta ^2_{\mathrm {L}} \rangle \) iff \(\theta ^1_d \smile \theta ^2_d\) for at least one \(d \in \{\mathrm {L}, \mathrm {R}\}\)), this criterion is obtained “for free”.

Properties such as the parabolic lemma [18, Lemma 10] (“any trace is equivalent to a backward trace followed by a forward trace”) or “RPI” [31, Definition 3.3] (“reversing preserves independence”, i.e. \(t \smile t'\) iff \(t^{\bullet } \smile t'\)) follow immediately from our definition of concurrency. We furthermore believe that “baking in” the RPI principle in definitions of reversible concurrencies should become the norm, as it facilitates proofs and forces \(t_1 \smile t_2\) iff \(t_1^{\bullet } \smile t_2^{\bullet }\) to hold, which seems a very sound principle.
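To make explicit why RPI yields \(t_1 \smile t_2\) iff \(t_1^{\bullet } \smile t_2^{\bullet }\), here is a short derivation, assuming (as is standard for independence relations) that \(\smile \) is symmetric:

```latex
% Two applications of RPI, interleaved with symmetry of the
% concurrency relation (an assumption standard for independence).
\begin{align*}
t_1 \smile t_2
  &\iff t_1^{\bullet} \smile t_2
    && \text{(RPI, applied to } t_1\text{)}\\
  &\iff t_2 \smile t_1^{\bullet}
    && \text{(symmetry)}\\
  &\iff t_2^{\bullet} \smile t_1^{\bullet}
    && \text{(RPI, applied to } t_2\text{)}\\
  &\iff t_1^{\bullet} \smile t_2^{\bullet}
    && \text{(symmetry)}
\end{align*}
```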

5 Conclusion and Perspectives

We believe our proposal to be not only elegant, but also extremely resilient and easy to work with. It should be stressed that it requires neither observing the directions of transitions nor their keys or identifiers, which should, in our opinion, be only technical annotations preventing processes that have synchronized from backtracking independently. We had previously defended that identifiers should be considered only up to isomorphism [6, p. 13], or be explicitly generated by a built-in mechanism [8, p. 152], and we reinforce this point of view here. From there, much can be done. A first interesting line of work would be to compare our syntactical definition with the semantical definitions of concurrency in models of RCCS [4,5,6] and CCSK [24, 36, 40]. Of course, as we already mentioned, extending this definition to reversible \(\pi \)-calculi, taking inspiration from e.g. the latest developments in forward-only \(\pi \) [23], would reinforce the interest and solidity of this technique.

Another interesting track would be to consider infinite extensions of CCSK, since infinite behaviors in the presence of reversibility are neither well understood nor well studied: an attempt to extend algebras of communicating processes [11], including recursion, seems to have been unsuccessful [41]. A possible approach would be to define recursion and iteration in CCSK, to extend our definition of concurrency to those infinite behaviors, and to attempt to reconstruct the separation results from the forward-only paradigm [35]. Whether finer, “reversible”, equivalences can preserve this distinction despite the greater flexibility provided by backward transitions is an open problem. Another interesting point is the study of infinite behaviors that duplicate past events, including their keys or memories: whether such a formalism could preserve causal consistency, or what benefits there would be in tinkering with this property, is also an open question.

Last but not least, these last investigations would require defining and understanding relevant properties, or metrics, for reversible systems. In the forward-only world, termination or convergence were used to compare infinite behaviors [35], and additional criteria were introduced to study causal semantics [17]. Those properties may or may not be suited for reversible systems, but it is difficult to decide, as they sometimes even lack a definition in this setting. This could help in solving the more general question of deciding what it is that we want to observe and assess when evaluating reversible, concurrent systems [9, 10].