
1 Introduction

The interest in reversibility dates back to the 1960s, when Landauer [6] observed that only irreversible computations need to consume energy, fostering the application of reversible computing in low-energy scenarios. Landauer's principle was verified experimentally only in 2012 [1]. Nowadays reversible computing is attracting interest because of its applications in diverse fields: biological modelling [10], since many biochemical reactions are by nature reversible; program debugging and testing [4], where it allows the program state to be brought back to a previous execution point at which certain conditions were met [8]; and parallel discrete event simulation [9]. Of particular interest is the application of reversible computation notions to the study of programming abstractions for dependable systems. Several techniques used to build dependable systems, such as transactions, system-recovery schemes and checkpoint-rollback protocols, rely in one way or another on some form of undo. The ability to undo any single action provides us with an ideal setting to study, revisit, or imagine alternatives to standard techniques for building dependable systems and to debug them. Indeed, distributed reversible actions can be seen as defeasible partial agreements: the building blocks for different transactional models and recovery techniques. Good examples of how reversibility in CCS and Higher-Order \(\pi \) can be used to model transactions are [3] and [7], respectively.

The first reversible variant of CCS, called RCCS, was introduced by Danos and Krivine [2]. In RCCS each process is monitored by a memory, which serves as a stack of past actions. Memories also act as unique process identifiers and, in order to preserve this uniqueness under parallel composition, a structural law allows unique memories to be obtained through a parallel composition. A general method for reversing process calculi, given in a particular SOS format, has been proposed by Phillips and Ulidowski in [11]. The main idea of this approach is to use communication keys to uniquely identify communications, and to make each operator of the calculus static. By applying this method to CCS, CCSK is obtained. Since in CCSK the history is directly annotated in the process itself, there is no need to split the history through a parallel composition. We call this way of recording histories static reversibility, while we call the one used by RCCS dynamic, since each thread is endowed with its own history. Hence a natural question arises: are these two reversible calculi equivalent? In this paper we start answering this question by showing that RCCS is at least as expressive as CCSK. We do so by means of an encoding, and we show its correctness via a strong back and forth bisimulation.

The rest of the paper is organized as follows: Sect. 2 starts with a brief recap of the syntax of CCS. Section 2.1 presents RCCS with its syntax and semantics. Section 2.2 is about CCSK and its semantics. In Sect. 3 we present our encoding function from CCSK to RCCS and prove our main result. In Sect. 4 we sketch an encoding of RCCS into CCSK and discuss the difficulties involved in proving its correctness. Section 5 concludes the paper with a discussion of future work.

2 CCS and Its Reversible Variants

In this section we briefly present the syntax of CCS [8], and then we show the two reversible extensions of it, namely RCCS [2] and CCSK [11].

Let \(\mathcal {A}\) be the set of actions, with \(a\in \mathcal {A}\), and \(\overline{\mathcal {A}}\) the set of co-actions, \(\overline{\mathcal {A}} = \{ \overline{a} \,\mid \,a\in \mathcal {A}\}\). We let \(\mu ,\lambda \) and their decorated versions range over the set \(\mathtt {Act}=\mathcal {A}\cup \overline{\mathcal {A}}\), while we let \(\alpha ,\beta \) and their decorated versions range over the set \(\mathtt {Act}_{\tau } = \mathtt {Act} \cup \{\tau \}\), where \(\tau \) is the silent action.

The syntax of CCS is given in Fig. 1. \(\mathbf {0}\) represents the idle process. A prefix (or action) can be an input a, an output \(\overline{a}\) or the silent action \(\tau \). \(P\parallel Q\) represents the parallel composition of processes P and Q, while \( \sum \alpha _i.P_i\) represents guarded choice. Some actions in a process P can be restricted, which is represented by the process \(P \backslash A\), where A is the set of restricted actions. \(\mathcal {P}\) denotes the set of all CCS processes.
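For the reader's convenience, the grammar of Fig. 1 can be rendered as a small Python datatype. This is only an executable sketch under representation choices of our own (class and field names are not from the paper): a guarded choice is a tuple of (prefix, continuation) pairs, and restriction carries the set of restricted actions.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple, Union

@dataclass(frozen=True)
class Nil:                       # the idle process 0
    pass

@dataclass(frozen=True)
class Sum:                       # guarded choice  sum_i alpha_i . P_i
    branches: Tuple[Tuple[str, "Proc"], ...]   # (prefix, continuation) pairs

@dataclass(frozen=True)
class Par:                       # P || Q
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Res:                       # P \ A, with A the set of restricted actions
    proc: "Proc"
    restricted: FrozenSet[str]

Proc = Union[Nil, Sum, Par, Res]

# the process (a.0 + b.0) || ((c.0) \ {c})
example = Par(Sum((("a", Nil()), ("b", Nil()))),
              Res(Sum((("c", Nil()),)), frozenset({"c"})))
```

Frozen dataclasses give structural equality for free, which is convenient when comparing process terms.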

Fig. 1. CCS syntax

2.1 Reversible CCS

One approach to making CCS reversible is to add a memory to each process. The memory records every action and communication that the process undergoes. The syntax of RCCS is given in Fig. 2. As we can see, RCCS processes are built on top of CCS processes. A term of the form \(m \triangleright P\) is called a monitored process, where m is a memory carrying the information that the process will need in case it wants to backtrack, and P is a standard CCS process. Two monitored processes R and S can be composed in parallel as \(S\parallel R\), and some actions of a monitored process R can be restricted via \(R\backslash A\). Memories are organised as stacks of events, with the top of the memory representing the very last action of the monitored process. \(\langle \rangle \) represents the empty memory; \(\langle i,\alpha ,Q\rangle \) represents an action event, meaning that the monitored process did the action \(\alpha \), identified by i, and its “context” was Q; \(\langle \uparrow \rangle \) represents a splitting event. When there is no ambiguity, we will omit the trailing event \(\langle \rangle \) in memories.
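The stack discipline of memories can be illustrated with a small executable sketch. The representation is our own assumption (not the paper's): events are Python values and a memory is a list used as a stack, with the empty list playing the role of \(\langle \rangle \).

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass(frozen=True)
class ActEvent:       # <i, alpha, Q>: action alpha, key i, discarded context Q
    key: int
    action: str
    context: Any

@dataclass(frozen=True)
class SplitEvent:     # <up-arrow>: the memory was duplicated at a parallel composition
    pass

Memory = List[Any]    # the top of the stack is the last element; [] is <>

mem: Memory = []
mem.append(ActEvent(1, "a", "0"))   # the process performed a, identified by key 1
mem.append(SplitEvent())            # the process was then split through ||
```

Reading the stack from the top down reproduces the process history in reverse, which is exactly what backtracking needs.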

Fig. 2. RCCS syntax

We assume the existence of an infinite denumerable set of action identifiers (sometimes called keys) \(\mathcal {K}\) such that \(\mathcal {K}\cap \mathtt {Act}= \emptyset \). Let \(\mathtt {ActK}= \mathtt {Act} \times \mathcal {K}\) be the set of pairs formed by an action \(\mu \) and an identifier i. In the same way we define \(\mathtt {ActK}_{\tau }= \mathtt {Act}_{\tau } \times \mathcal {K}\). The operational semantics of RCCS is defined as a labelled transition system (LTS), \((\mathcal {P}_R, \rightarrow ^{}_{} \cup \rightsquigarrow ^{}_{},\mathtt {ActK}_{\tau } )\), where \(\mathcal {P}_R\) is the set of RCCS (monitored) processes, \(\rightarrow ^{}_{} \subseteq \mathcal {P}_R\times \mathtt {ActK}_{\tau } \times \mathcal {P}_R\) and \(\rightsquigarrow ^{}_{} \subseteq \mathcal {P}_R\times \mathtt {ActK}_{\tau } \times \mathcal {P}_R\). Relations \(\rightarrow ^{}_{}\) and \(\rightsquigarrow ^{}_{}\) are the smallest reduction relations induced by the rules in Figs. 4 and 5, respectively. Both reduction relations exploit the structural congruence relation \(\equiv \), which is the smallest congruence, on processes and monitored processes, containing the abelian monoid laws for choice (that is, commutativity, associativity and \(\mathbf {0}\) as the identity element) and the rules of Fig. 3.

Remark 1

In its first incarnation [2], RCCS used events of the form \(\langle n_{*},\alpha ,Q \rangle \), \(\langle 1 \rangle \) and \(\langle 2 \rangle \), where \(n_*\) is n in case the process synchronized with a process monitored by memory n, or \(*\) in case of partial synchronization. Events \(\langle 1 \rangle \) and \(\langle 2 \rangle \) were used to split a process along a parallel composition according to the following rule:

The version we are using, which appeared in [5], simplifies the handling of memories and makes the splitting through the parallel composition commutative. However, the two versions are conceptually the same, and we have chosen this one since it simplifies some technicalities in the proof of our main theorem.

Identifiers of RCCS are similar to communication keys of CCSK. They are defined as follows:

Definition 1

(Memory Identifiers). The set of identifiers of a memory m, written \(\mathtt {id}(m)\), is inductively defined as follows:

Definition 2

An identifier i belongs to a memory m, written \(i\in m\), if \(i\in \mathtt {id}(m)\).

Definition 3

(Process Identifiers). The set of identifiers of a process R, written \(\mathtt {id}(R)\), is inductively defined as follows:

Definition 4

An identifier i belongs to a process R, written \(i\in R\), if \(i\in \mathtt {id}(R)\).
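The inductive clauses of Definitions 1 and 3 are not reproduced in this excerpt; the following is a plausible reconstruction under a tuple encoding of our own, in which a memory is a list of events, an action event is `(key, action, context)`, and a split event is `("split",)`.

```python
def mem_ids(m):
    """id(m): the keys of all action events occurring in the memory m."""
    return {e[0] for e in m if e != ("split",)}

def proc_ids(r):
    """id(R), for R ::= m |> P | R || S | R \\ A, encoded as tagged tuples."""
    tag = r[0]
    if tag == "mon":                     # ("mon", m, P): id(m |> P) = id(m)
        return mem_ids(r[1])
    if tag == "par":                     # ("par", R, S): union of both sides
        return proc_ids(r[1]) | proc_ids(r[2])
    if tag == "res":                     # ("res", R, A): restriction is transparent
        return proc_ids(r[1])
    raise ValueError("not an RCCS process: %r" % (r,))
```

For example, a parallel composition of two monitored processes collects the keys stored in both memories, which is what makes the uniqueness side conditions of the semantics checkable.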

Fig. 3. RCCS structural laws

Fig. 4. RCCS forward semantics

Let us now comment on the forward rules of Fig. 4. Rule R-ACT allows a monitored process to perform a forward action. As we can see, this action is bound to a particular fresh identifier i. Moreover, the part of the process which has not contributed to the action, that is Q, is stored on top of the memory along with the action and the identifier. Rule R-PAR propagates an action along a parallel composition, under the condition that the identifier of the action is not used by the other processes. This check guarantees that all identifiers are unique. Rule R-SYN allows two parallel processes to synchronize. To do so, they have to match both the action and the identifier. Rule R-RES deals with restriction in the usual way, while rule R-EQUIV brings structural equivalence into the reduction relation.
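The behaviour of rule R-ACT can be sketched as a function under a tuple encoding of our own: the CCS part of a monitored process `("mon", memory, branches)` is a guarded choice given as a list of (action, continuation) pairs, and firing branch k with a fresh key pushes the event (key, action, sum of the untaken branches) on the memory.

```python
def r_act(mon, k, fresh_key):
    """A minimal sketch of R-ACT: fire branch k of a monitored guarded choice."""
    _, memory, branches = mon
    action, cont = branches[k]
    # the context Q: the branches that did not contribute to the action
    rest = tuple(b for j, b in enumerate(branches) if j != k)
    return ("mon", [(fresh_key, action, rest)] + memory, cont)
```

For instance, firing a in \(\langle \rangle \triangleright a.P + b.Q\) pushes the event \(\langle i,a,b.Q\rangle \) and leaves P as the monitored CCS part.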

Fig. 5. RCCS backward semantics

Backward rules are reported in Fig. 5. For each forward rule there exists an opposite backward one. Rule R-ACT\(^\bullet \) allows a monitored process to revert its last action. To do so, the event on top of the memory is taken and the information contained in it is used to rebuild the previous form of the process, that is, the prefix and the process that was composed with the \(+\) operator. Rule R-PAR\(^\bullet \) allows a reversible action to be propagated through a parallel composition only when the identifier of the action does not belong to the monitored processes in parallel. This check is crucial to avoid partial undo of synchronizations. The remaining rules are similar to the forward ones.
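Rule R-ACT\(^\bullet \) can likewise be sketched under a tuple encoding of our own: popping the top event (key, action, untaken branches) of a monitored process `("mon", memory, P)` rebuilds the guarded choice that existed before the action.

```python
def r_act_rev(mon):
    """A minimal sketch of R-ACT-bullet: undo the last action of a monitored process."""
    _, memory, cont = mon
    key, action, rest = memory[0]               # the top event must be an action event
    branches = [(action, cont)] + list(rest)    # rebuild  action.P + Q
    return ("mon", memory[1:], branches), key   # previous process and the undone key
```

Undoing the action restores exactly the term that firing it consumed, so a forward step followed by its reversal is the identity on this representation.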

Definition 5

(Reachable Process). An RCCS process R is reachable if it can be derived from an initial process \(\langle \rangle \triangleright P\) by using the rules of Figs. 4 and 5.

Lemma 1

For any transition \(m \triangleright \alpha .P + Q \rightarrow ^{i}_{\alpha }\langle {i},{\alpha },{Q}\rangle \cdot {m}\triangleright P\) we can derive the following transitions:

  • \(\langle {j},{\beta },{Q_{1}}\rangle \cdot {m}\triangleright \alpha .P+Q\rightarrow ^{i}_{\alpha }\langle {i},{\alpha },{Q}\rangle \cdot {\langle {j},{\beta },{Q_{1}}\rangle \cdot {m}}\triangleright P\), for \(i\ne j\)

  • \(\langle \uparrow \rangle \cdot {m}\triangleright \alpha .P+Q\rightarrow ^{i}_{\alpha }\langle {i},{\alpha },{Q}\rangle \cdot {\langle \uparrow \rangle \cdot {m}}\triangleright P\)

and its opposite:

Lemma 2

For any transition \(\langle {i},{\alpha },{Q}\rangle \cdot {m}\triangleright P\rightsquigarrow ^{i}_{\alpha }\langle \rangle \triangleright \alpha .P+Q\), we can derive the following transitions:

  • \(\langle {i},{\alpha },{Q}\rangle \cdot {\langle {j},{\beta },{Q_{1}}\rangle \cdot {m}}\triangleright P\rightsquigarrow ^{i}_{\alpha }\langle {j},{\beta },{Q_{1}}\rangle \cdot {m}\triangleright \alpha .P+Q\), for \(i\ne j\)

  • \(\langle {i},{\alpha },{Q}\rangle \cdot {\langle \uparrow \rangle \cdot {m}}\triangleright P\rightsquigarrow ^{i}_{\alpha }\langle \uparrow \rangle \cdot {m}\triangleright \alpha .P+Q\)

An easy induction on the structure of terms provides us with a kind of normal form for RCCS processes (by convention \(\prod _{i\in I} R_i =\mathbf {0}\) if \(I=\emptyset \) ):

Lemma 3

(Normal Form). For any RCCS reachable process R we have that

$$R \equiv (\prod _{i\in I} (m_{i}\triangleright \alpha _{i}.P_{i}+Q_{i})\backslash A_{i} )\backslash B$$

2.2 CCS with Communication Keys

The main idea behind this approach is to record the actions directly inside a process and to make all the operators of CCS static. In this way there is no need for an external memory, since all the information is syntactically present inside a term. The syntax of CCSK is given in Fig. 6. The only difference with respect to CCS processes is that prefixes can now be annotated with an identifier.

Fig. 6. CCSK syntax

Definition 6

(Process Keys). The set of keys of a process X, written \(\mathtt {key}(X)\), is inductively defined as follows:

Definition 7

A key i is fresh in a process X, written \(\mathtt {fresh}(i,X)\) if \(i \not \in \mathtt {key}(X)\).

The definition of keys in CCSK corresponds to the definition of identifiers in RCCS. The operational semantics of CCSK is defined as a labelled transition system (LTS), \((\mathcal {P}_K, \rightarrow ^{}_{} \cup \rightsquigarrow ^{}_{},\mathtt {ActK}_{\tau } )\), where \(\mathcal {P}_K\) is the set of CCSK processes, \(\rightarrow ^{}_{} \subseteq \mathcal {P}_K\times \mathtt {ActK}_{\tau } \times \mathcal {P}_K\) and \(\rightsquigarrow ^{}_{} \subseteq \mathcal {P}_K\times \mathtt {ActK}_{\tau } \times \mathcal {P}_K\). Relations \(\rightarrow ^{}_{}\) and \(\rightsquigarrow ^{}_{}\) are the smallest reduction relations induced by the rules in Figs. 7 and 8, respectively. Differently from RCCS, CCSK does not exploit any structural congruence.

Remark 2

In the following, whenever we use P instead of X in proofs and rules, we indicate that the process P has no labelled actions, i.e., P is a CCS process. An alternative would be to use the predicate \(\mathtt {std}(X)\) as in [11].

Fig. 7. CCSK forward semantics

Rules for forward transitions are given in Fig. 7. Rule \(\textsc {K-ACT1}\) deals with prefixed processes \(\alpha .P\). It transforms a prefix into a label but, differently from the normal CCS rule for prefix, it generates a fresh key i which is bound to the action \(\alpha \), which becomes \(\alpha [i]\). As we can note, the prefix is not discarded after the reduction. Rule K-ACT2 inductively allows a prefixed process \(\alpha [i].X\) to execute if X can execute. The actions that X can do are forced to use keys different from i. Rules K-PLUS-L and K-PLUS-R deal with the \(+\) operator. Let us note that these rules do not discard the context, that is, the part of the process which has not contributed to the action. In more detail, if the process \(P + Q\) does an action, say \(\alpha [i]\), and becomes X, then the whole process becomes \(X + Q\). In this way the information about \(+ Q\) is preserved. Moreover, since Q is a standard process, it will never execute even if it is present in the process \(X + Q\); we can say that \(+ Q\) is just a decoration of X. Note that in order to apply one of the plus rules, one of the two processes has to be a CCS process P (i.e., not containing labelled prefixes), meaning that it is impossible for two non-standard processes composed by the choice operator to both execute. Rules K-PAR-L and K-PAR-R propagate an action \(\alpha [i]\) through a parallel composition, provided that the key i is not used by the other processes in parallel (use of the \(\mathtt {fresh}(\cdot )\) predicate in the premises). Rule K-SYN allows two parallel processes to synchronize. To do so, they have to match both the action and the identifier. Rule K-RES deals with restriction in the canonical (CCS) way. Backward rules are the exact opposite of the forward ones.
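Rules K-ACT1 and K-ACT2 can be sketched on a tiny CCSK fragment under an encoding of our own: a term is either `("pre", action, key_or_None, continuation)` or `("nil",)`, and fired prefixes are kept in the term, merely annotated with a key.

```python
def keys(x):
    """key(X): the keys of all annotated prefixes in the term."""
    if x[0] == "pre":
        own = set() if x[2] is None else {x[2]}
        return own | keys(x[3])
    return set()

def k_act(x, fresh_key):
    """Fire the first unmarked prefix, annotating it with fresh_key."""
    if x[0] == "pre" and x[2] is None:        # K-ACT1: fire the unmarked prefix
        return ("pre", x[1], fresh_key, x[3])
    if x[0] == "pre":                         # K-ACT2: step past an already-marked prefix
        assert fresh_key != x[2] and fresh_key not in keys(x[3])
        return ("pre", x[1], x[2], k_act(x[3], fresh_key))
    raise ValueError("no firable prefix")

x = ("pre", "a", None, ("pre", "b", None, ("nil",)))
y = k_act(x, 1)       # a[1].b.0
z = k_act(y, 2)       # a[1].b[2].0
```

Notice how, unlike in plain CCS, nothing is discarded: the whole history of a.b.0 remains readable in the final term a[1].b[2].0.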

Fig. 8. CCSK backward semantics

Definition 8

(Reachable Process). A CCSK process X is reachable if it can be derived from a CCS process P by using the rules of Figs. 7 and 8.

Property 1

(Plus Form). If X is a reachable process, and \(X=Y+Q\), then

$$Y= P_1 + \ldots +( Y_1 \parallel \ldots \parallel Y_m) + P_j + \ldots + P_n $$

for some m, n, and with \(P_i\) not having a top-level \(+\).

Proof

By induction on the length of the derivation that led an initial process to X and by case analysis on the last applied rule.

3 Encoding CCSK in RCCS

We now adapt the concept of bisimulation [12] to work in a reversible setting and with two different semantics. To this aim, we indicate with \(\xrightarrow {}_{s_i}\) the forward relation of the \(s_i\) semantics, and with \(\rightsquigarrow _{s_i}\) the backward one. Moreover, we indicate with \(\mathcal {P}_{s_i}\) the set of processes of semantics \(s_i\) and with \(\mathcal {L}_{s_i}\) the set of labels produced by semantics \(s_i\).

Definition 9

(Back and Forth Bisimulation). Given a bijective function \(\gamma :\mathcal {L}_{s_1} \rightarrow \mathcal {L}_{s_2}\), a relation \(_{s_1}\mathcal {R}_{s_2}\subseteq \mathcal {P}_{s_1} \times \mathcal {P}_{s_2}\) is a strong back and forth simulation if whenever \(P _{s_1}\mathcal {R}_{s_2} R\):

  • \(P \xrightarrow {\alpha [i]}_{s_1} Q\) implies \(R\xrightarrow {\gamma (\alpha [i])}_{s_2}S\) with \(Q_{s_1}\mathcal {R}_{s_2} S\)

  • \(P \rightsquigarrow ^{\alpha [i]}_{s_1} Q\) implies \(R\rightsquigarrow ^{\gamma (\alpha [i])}_{s_2}S\) with \(Q_{s_1}\mathcal {R}_{s_2} S\)

A relation \(_{s_1}\mathcal {R}_{s_2}\subseteq \mathcal {P}_{s_1} \times \mathcal {P}_{s_2}\) is called a strong back and forth bisimulation if \(_{s_1}\mathcal {R}_{s_2}\) and \((_{s_1}\mathcal {R}_{s_2})^{-1}\) are strong back and forth simulations. We call strong bisimilarity, denoted \(_{s_1}\sim _{s_2}\), the largest such bisimulation with respect to semantics \(s_1\) and \(s_2\).

This definition, when instantiated with a single semantics, that is \(s_1=s_2\) and \(\gamma \) being the identity, is similar to the definition of forward-reverse bisimulation used in [11], with the only difference that our definition does not take predicates into account. Moreover, when instantiated with the CCSK semantics, the two notions coincide.

In this section we show how CCSK can be encoded in RCCS. We use the same notation as before: P stands for CCS processes and X for CCSK processes. Let \(\mathcal {P}_K\) and \(\mathcal {P}_R\) be the sets of processes of CCSK and RCCS, respectively, and let \(\mathcal {M}\) be the set of all the memories derivable by the productions in Fig. 2. The encoding function \([\![{\cdot }]\!] : \mathcal {P}_K\times \mathcal {M}\times \mathcal {P}\rightarrow \mathcal {P}_R\) is inductively defined as follows:

Let us comment on it. The main difference between RCCS and CCSK lies in the way they keep track of the history. In RCCS all the information is local to each monitored process, while in CCSK the information is spread along the structure of a process. Moreover, a CCSK process may correspond to several monitored processes, since in CCSK there is no need to split memories through a parallel composition. So the encoding has to inductively drill into the structure of a CCSK process X, in order to build the final memory of the process and to find the plus context of each labelled action \(\alpha [i]\) present inside X. To this aim, the encoding takes two additional parameters: a memory m and a CCS process P. The parallel composition and restriction operators of CCSK are mapped to the corresponding operators of RCCS. Let us note that in the parallel case, the memory m is split into two copies of \(\langle \uparrow \rangle \cdot {m}\). The encoding of a process \(\alpha [i].X\) with memory m and context Q is the encoding of the process X where the memory stack is augmented with the event \(\langle i,\alpha ,Q\rangle \). In this case the action \(\alpha [i]\) disappears from the process, as it goes inside the memory m. The encoding of a process \(P+X\) is the encoding of X where its context is the sum of its previous context and P. Finally, the encoding of a normal CCS process P is just its monitored version, with memory m representing its history. Since the context parameter is used only for past actions, in the case of a normal process P we impose this parameter to be \(\mathbf {0}\). In order to understand how the encoding works, let us consider the following example. Let \(X = (a+b)+c[i].( d[h] \parallel P)\); then

$$\begin{aligned}&[\![{X,\langle \rangle ,\mathbf {0}}]\!] = [\![{c[i].( d[h] \parallel P), \langle \rangle , a+b}]\!] = [\![{d[h] \parallel P, \langle {i},{c},{a+b}\rangle \cdot \langle \rangle ,\mathbf {0}}]\!] =\\&[\![{d[h], \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle ,\mathbf {0}}]\!] \parallel [\![{P, \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle ,\mathbf {0}}]\!] =\\&[\![{\mathbf {0}, \langle {h},{d},{\mathbf {0}}\rangle \cdot \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle ,\mathbf {0}}]\!] \parallel \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle \triangleright P =\\&\langle {h},{d},{\mathbf {0}}\rangle \cdot \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle \triangleright \mathbf {0}\parallel \langle \uparrow \rangle \cdot \langle {i},{c},{a+b}\rangle \cdot \langle \rangle \triangleright P \end{aligned}$$
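The clauses of the encoding can be rendered as a short Python function, under term encodings of our own: CCSK terms are tagged tuples `("par", X, Y)`, `("res", X, A)`, `("keyed", action, key, X)` for \(\alpha [i].X\) and `("plus", P, X)` for \(P+X\); anything else is treated as a plain CCS process. Memories are lists with the most recent event first.

```python
SPLIT = ("split",)

def enc(x, m, ctx):
    """A sketch of [[X, m, Q]]: CCSK term, memory, plus context -> RCCS term."""
    tag = x[0] if isinstance(x, tuple) else None
    if tag == "par":       # [[X1 || X2, m, 0]] = [[X1, <^>.m, 0]] || [[X2, <^>.m, 0]]
        return ("par", enc(x[1], [SPLIT] + m, ctx), enc(x[2], [SPLIT] + m, ctx))
    if tag == "res":       # restriction is mapped homomorphically
        return ("res", enc(x[1], m, ctx), x[2])
    if tag == "keyed":     # [[alpha[i].X, m, Q]] = [[X, <i,alpha,Q>.m, 0]]
        _, action, key, cont = x
        return enc(cont, [(key, action, ctx)] + m, "0")
    if tag == "plus":      # [[P + X, m, Q]] = [[X, m, Q + P]]
        _, p, rest = x
        return enc(rest, m, p if ctx == "0" else ctx + "+" + p)
    return ("mon", m, x)   # plain CCS process: [[P, m, 0]] = m |> P

# the example X = (a+b) + c[i].(d[h] || P) from the text
X = ("plus", "a+b", ("keyed", "c", "i", ("par", ("keyed", "d", "h", "0"), "P")))
R = enc(X, [], "0")
```

On this input the function reproduces the memories computed in the example above: each monitored component carries the split marker followed by the event \(\langle i,c,a+b\rangle \).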

Before stating our main theorem, we need some lemmata about operational correspondence.

Lemma 4

(Forward Correspondence). For all transitions \(X\xrightarrow {\alpha [i]}X'\) in CCSK, with \( R= [\![{X,\langle \rangle ,\mathbf {0}}]\!]\), there exists a corresponding RCCS transition such that \(R\rightarrow ^{i}_{\alpha }R'\) with \([\![{X',\langle \rangle ,\mathbf {0}}]\!]=R'\).

Proof

By induction on the derivation \(X\xrightarrow {\alpha [i]}X'\) and by case analysis on the last applied rule. We show the relevant cases:

  • K-ACT2: We have \(\alpha [i].X\xrightarrow {\beta [j]}\alpha [i].X'\) with \(X\xrightarrow {\beta [j]}X'\). Let \(R = [\![{X,\langle \rangle ,\mathbf {0}}]\!]\); by Lemma 3 we know that: \(R \equiv (\prod _{i\in I} (m_{i}\triangleright \alpha _{i}.P_{i}+Q_{i})\backslash A_{i}) \backslash B \).

    By applying the inductive hypothesis we have that \([\![{X,\langle \rangle ,\mathbf {0}}]\!]\rightarrow ^{j}_{\beta }[\![{X',\langle \rangle ,\mathbf {0}}]\!]\), that is \(R\rightarrow ^{j}_{\beta }R'\) with \(R'=[\![{X',\langle \rangle ,\mathbf {0}}]\!]\). Now we distinguish two cases: either \(\beta \) is a single action or it has been produced by a synchronization. In the first case there exists an index \(h\in I\) such that \(\alpha _h=\beta \), and then

    $$\begin{aligned}&[\![{X,\langle \rangle ,\mathbf {0}}]\!]\equiv ( \prod _{i\in I} (m_{i}\triangleright \alpha _{i}.P_{i}+Q_{i})\backslash A_{i})\backslash B \rightarrow ^{j}_{\beta }\\&(\prod _{i\in I\setminus h} (m_{i}\triangleright \alpha _{i}.P_{i}+Q_{i})\backslash A_{i} \parallel (\langle {j},{\beta },{Q_h}\rangle \cdot {m}_h\triangleright P_h)\backslash A_{h})\backslash B\equiv [\![{X',\langle \rangle ,\mathbf {0}}]\!] \end{aligned}$$

    Moreover, by definition of encoding we have that

    $$[\![{\alpha [i].X,\langle \rangle ,\mathbf {0}}]\!] = [\![{X,\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]$$

    and by using Lemma 1 we can mimic the same transition with an augmented memory: \([\![{X,\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\rightarrow ^{j}_{\beta }[\![{X',\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\) as desired. The synchronization case is similar.

  • K-PLUS-L: We have \(X=Y+P\xrightarrow {\alpha [i]}Y'+P\). By Property 1, we have that:

    $$Y= P_1 + \ldots +( Y_1 \parallel \ldots \parallel Y_m) + P_j + \ldots + P_n$$

    Let \(T=\sum _{i\in n\setminus l}P_i\), by applying the encoding we have that

    $$[\![{Y+P,\langle \rangle ,\mathbf {0}}]\!] = [\![{(Y_1 \parallel \ldots \parallel Y_m),\langle \rangle ,P+T}]\!]$$
    $$[\![{Y,\langle \rangle ,\mathbf {0}}]\!] = [\![{(Y_1 \parallel \ldots \parallel Y_m),\langle \rangle ,T}]\!]$$

    By Lemma 3 we know that: \([\![{Y+P,\langle \rangle ,\mathbf {0}}]\!] \equiv (\prod _{l\in I} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l})\backslash B \)

    This implies that there exists a subset \(J \subseteq I\) of indexes such that the memories indexed by J share the event \(\langle k,\beta ,T\rangle \), with \(T=\sum _{i\in n\setminus l}P_i\), and such that:

    $$\begin{aligned}{}[\![{Y,\langle \rangle ,\mathbf {0}}]\!] \equiv&( \prod _{l\in I\setminus J} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l} \parallel \\&\prod _{h\in J} (m_{h}\cdot \langle k,\beta ,T\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h})\backslash B \\ [\![{Y+P,\langle \rangle ,\mathbf {0}}]\!] \equiv&(\prod _{l\in I\setminus J} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l} \parallel \\&\prod _{h\in J} (m_{h}\cdot \langle k,\beta ,T+P\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h})\backslash B \end{aligned}$$

    By hypothesis we have that \(Y\xrightarrow {\alpha [i]} Y'\) and by inductive hypothesis we have that \([\![{Y,\langle \rangle ,\mathbf {0}}]\!]\rightarrow ^{i}_{\alpha }[\![{Y',\langle \rangle ,\mathbf {0}}]\!]\), but then also \([\![{Y+P,\langle \rangle ,\mathbf {0}}]\!] \rightarrow ^{i}_{\alpha } [\![{Y',\langle \rangle ,P}]\!]=[\![{Y'+P,\langle \rangle ,\mathbf {0}}]\!]\), as desired.    \(\blacksquare \)

Lemma 5

(Backward Correspondence). For all transitions \(X\rightsquigarrow ^{\alpha [i]}X'\) in CCSK, with \(R= [\![{X,\langle \rangle ,\mathbf {0}}]\!]\), there exists a corresponding transition \(R\rightsquigarrow ^{i}_{\alpha }R'\) in RCCS with \([\![{X',\langle \rangle ,\mathbf {0}}]\!]=R'.\)

Proof

By induction on the derivation and by case analysis on the last applied rule. The proof follows the lines of that of Lemma 4.

With the previous two lemmata we have proved that given a pair of processes \((X,R)=(X,[\![{X,\langle \rangle ,\mathbf {0}}]\!])\) where X is reachable, if the process X does an action \(\alpha \) in CCSK, then the process R does the same action in RCCS, and the obtained process \(R'=[\![{X',\langle \rangle ,\mathbf {0}}]\!]\) is still the encoding of the process \(X'\). Now we have to show the opposite direction.

Lemma 6

(Forward Completeness). For any CCSK process X and RCCS process R, such that \(R=[\![{X,\langle \rangle ,\mathbf {0}}]\!]\), if \(R\rightarrow ^{i}_{\alpha }R'\) in RCCS, then there exists a corresponding transition \(X\xrightarrow {\alpha [i]}X'\) in CCSK, with \(R'=[\![{X',\langle \rangle ,\mathbf {0}}]\!]\).

Proof

By structural induction on X. We have two main cases, depending on whether \(X=P\) or not. We show just the most significant cases.

In the first case we have \(X=P\), where P is a (standard) CCS process. We then proceed by induction on the form of P. If \(P=\alpha .P_1\): we have that \(R=[\![{\alpha .P_1,\langle \rangle ,\mathbf {0}}]\!]\) and, by applying the encoding,

$$[\![{\alpha .P_1,\langle \rangle ,\mathbf {0}}]\!]=\langle \rangle \triangleright \alpha .P_1$$

Then, by using R-ACT we get \(\langle \rangle \triangleright \alpha .P_1\rightarrow ^{i}_{\alpha }\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle }\triangleright P_1\), where \(\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle }\triangleright P_1=[\![{\alpha [i].P_1,\langle \rangle ,\mathbf {0}}]\!]=R'\).

In CCSK, the process \(\alpha .P_1\) can do the same action \(\alpha \) by applying rule K-ACT1, and we get

$$\alpha .P_1\xrightarrow {\alpha [i]}\alpha [i].P_1 \text { where } X'=\alpha [i].P_1 \text { as we desired.}$$

In the second case we observe the form of the process X when it has the structure of a CCSK process and is not a standard process. We consider the most significant cases:

  • \( X=\alpha [i ]. Y\) : we have that \(R=[\![{\alpha [i].Y,\langle \rangle ,\mathbf {0}}]\!]\). By Lemma 3 we know that:

    $$[\![{Y,\langle \rangle ,\mathbf {0}}]\!]\equiv (\prod _{l\in I} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l})\backslash B$$

    Now we know that there exists some subset \(H\subseteq I\) such that all processes in that subset share the very first action \(\alpha \). For some \(t\in H\) such that \(\alpha _t=\beta \) we have:

    $$\begin{aligned}{}[\![{Y,\langle \rangle ,\mathbf {0}}]\!]\equiv&( \prod _{l\in I\setminus H} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l}\parallel \prod _{h\in H\setminus t} (m'_{h}\cdot \langle i,\alpha ,\mathbf {0}\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h}\parallel \\&(m'_{t}\cdot \langle i,\alpha ,\mathbf {0}\rangle \triangleright \beta .P_{t}+Q_{t})\backslash A_{t}\ )\backslash B\rightarrow ^{j}_{\beta }\\&(\prod _{l\in I\setminus H} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l}\parallel \prod _{h\in H\setminus t} (m'_{h}\cdot \langle i,\alpha ,\mathbf {0}\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h}\parallel \\&\langle {j},{\beta },{Q_{t}}\rangle \cdot {m'_{t}\cdot \langle i,\alpha ,\mathbf {0}\rangle }\triangleright P_{t}\backslash A_{t})\backslash B\equiv [\![{Y',\langle \rangle ,\mathbf {0}}]\!] \end{aligned}$$

    By definition of encoding we have that \([\![{\alpha [i].Y,\langle \rangle ,\mathbf {0}}]\!]=[\![{Y,\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\) and using Lemma 1 we can mimic the same transition:

    $$[\![{Y,\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\rightarrow ^{j}_{\beta }[\![{Y',\langle {i},{\alpha },{\mathbf {0}}\rangle \cdot {\langle \rangle },\mathbf {0}}]\!]$$

    By inductive hypothesis we have that \(Y\xrightarrow {\beta [j]}Y'\) and using rule K-ACT2 we get:

    $$\alpha [i].Y\xrightarrow {\beta [j]}\alpha [i].Y'\quad \text {as desired.}$$
  • \(X=Y_1\parallel Y_2\) We have that \(R=[\![{Y_1\parallel Y_2,\langle \rangle ,\mathbf {0}}]\!]\) and by applying encoding

    $$[\![{Y_1\parallel Y_2,\langle \rangle ,\mathbf {0}}]\!]=[\![{Y_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\parallel [\![{Y_2,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]$$

    Now, we distinguish three cases: either the first branch of the parallel composition does the action \(\alpha \), or the second one does, or \(\alpha \) is a synchronization. Let \(R_1=[\![{Y_1,\langle \rangle ,\mathbf {0}}]\!]\); in the first case there exists in \(R_1\) an index \(h\in I\) such that \(\alpha _h=\alpha \), and then by Lemma 3 we get

    $$\begin{aligned}&R_1\equiv (\prod _{l\in I} (m_l\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l})\backslash B\rightarrow ^{i}_{\alpha }\\&(\prod _{l\in I\setminus h} (m_l\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l} \parallel (\langle {i},{\alpha },{Q_h}\rangle \cdot {m_h}\triangleright P_h)\backslash A_{h})\backslash B\equiv R'_1 \end{aligned}$$

    By Lemma 1 we have: \([\![{Y_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\rightarrow ^{i}_{\alpha }[\![{Y'_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\). Using rule R-PAR we get

    $$[\![{Y_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\parallel [\![{Y_2,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\rightarrow ^{i}_{\alpha }[\![{Y'_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\parallel [\![{Y_2,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]$$

    where \([\![{Y'_1,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]\parallel [\![{Y_2,\langle \uparrow \rangle \cdot {\langle \rangle },\mathbf {0}}]\!]=[\![{Y'_1\parallel Y_2,\langle \rangle ,\mathbf {0}}]\!]=R'\). By inductive hypothesis we have that also \(Y_1\xrightarrow {\alpha [i]}Y'_1\) and by using rule K-PAR-L, we get: \(Y_1\parallel Y_2\xrightarrow {\alpha [i]}Y'_1\parallel Y_2\) as desired. The remaining cases are similar.

  • \(X=Y+P\) : By Property 1 we have that:

    $$Y= P_1 + \ldots +( Y_1 \parallel \ldots \parallel Y_m) + P_j + \ldots + P_n$$

    As in Lemma 4, we define processes congruent to \([\![{Y+P,\langle \rangle ,\mathbf {0}}]\!]\) and \([\![{Y,\langle \rangle ,\mathbf {0}}]\!]\). Now we know that there exists some \(t\in J\) such that \(\alpha _t=\alpha \) and we have

    $$\begin{aligned}&(\prod _{l\in I\setminus J} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l} \parallel \prod _{h\in J\setminus \{t\}} (m_{h}\cdot \langle k,\beta ,T\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h}\parallel \\&(m_{t}\cdot \langle k,\beta ,T\rangle \triangleright \alpha .P_{t}+Q_{t})\backslash A_{t})\backslash B \rightarrow ^{i}_{\alpha }\\&( \prod _{l\in I\setminus J} (m_{l}\triangleright \alpha _{l}.P_{l}+Q_{l})\backslash A_{l} \parallel \prod _{h\in J\setminus \{t\}} (m_{h}\cdot \langle k,\beta ,T\rangle \triangleright \alpha _{h}.P_{h}+Q_{h})\backslash A_{h}\parallel \\&(\langle i,\alpha ,Q_t\rangle \cdot m_{t}\cdot \langle k,\beta ,T\rangle \triangleright P_{t})\backslash A_{t})\backslash B \end{aligned}$$

    Then we also have \([\![{Y+P,\langle \rangle ,\mathbf {0}}]\!] \rightarrow ^{i}_{\alpha } [\![{Y',\langle \rangle ,P}]\!]=[\![{Y'+P,\langle \rangle ,\mathbf {0}}]\!]\). By the inductive hypothesis we also have \(Y\xrightarrow {\alpha [i]}Y'\), and by rule K-PLUS-L we get \(Y+P\xrightarrow {\alpha [i]}Y'+P\), as desired.    \(\blacksquare \)

Lemma 7

(Backward Completeness). For any CCSK process X and RCCS process R such that \(R=[\![{X,\langle \rangle ,\mathbf {0}}]\!]\), if \(R\rightsquigarrow ^{i}_{\alpha }R'\) in RCCS, then there exists a CCSK process \(X'\) such that X performs the corresponding backward transition, with \(R'=[\![{X',\langle \rangle ,\mathbf {0}}]\!]\).

Proof

By structural induction on X.

We now can state our main result:

Theorem 1

(Operational Correspondence). For any CCS process P, \(P \sim [\![{P,\langle \rangle ,\mathbf {0}}]\!]\).

Proof

We just need to show that the relation

$$\mathcal {R}= \{(X,[\![{X,\langle \rangle ,\mathbf {0}}]\!]) \text { with } X \text { CCSK reachable }\}$$

is a strong back and forth bisimulation.

If X does a forward transition \(X\xrightarrow {\alpha [i]}Y\), by Lemma 4 we also have that \([\![{X,\langle \rangle ,\mathbf {0}}]\!] \rightarrow ^{i}_{\alpha } [\![{Y,\langle \rangle ,\mathbf {0}}]\!]\), with \((Y,[\![{Y,\langle \rangle ,\mathbf {0}}]\!]) \in \mathcal {R}\). If the transition is a backward one, we apply Lemma 5.

If \(R= [\![{X,\langle \rangle ,\mathbf {0}}]\!]\) does a forward transition \(R\rightarrow ^{i}_{\alpha }S\), then by Lemma 6 we also have that \(X\xrightarrow {\alpha [i]} Y\) with \(S = [\![{Y,\langle \rangle ,\mathbf {0}}]\!]\), and \((Y,[\![{Y,\langle \rangle ,\mathbf {0}}]\!])\in \mathcal {R}\). If the transition is a backward one, we apply Lemma 7.    \(\blacksquare \)

4 Encoding RCCS in CCSK

In this section we just give the encoding of RCCS into CCSK and discuss how it works, without showing its correctness.

The main difference between RCCS and CCSK is that in RCCS it is possible, via structural congruence, to split a parallel composition of processes sharing the same memory into a parallel composition of different monitored processes. This allows the single monitored processes to continue their computations independently. In CCSK there is no need for a splitting rule, since its reversibility is static. Hence different RCCS processes may correspond to a single CCSK process. To better understand this issue, let us consider the following RCCS process:

$$R = \langle {j},{\beta },{\mathbf {0}}\rangle \cdot {\langle \uparrow \rangle \cdot { \langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }}} \triangleright P_1 \parallel \langle \uparrow \rangle \cdot { \langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }}\triangleright \gamma .P_2$$

derived from the initial process \(\langle \rangle \triangleright \alpha .(\beta .P_1 \parallel \gamma .P_2)+Q\). Now the corresponding CCSK process is the following one:

$$\alpha [i].(\beta [j].P_1 \parallel \gamma .P_2)+Q$$
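For concreteness, R can be obtained from the initial process by a forward \(\alpha \)-step with key i, an application of the structural law \(\textsc {Split}\), and a forward \(\beta \)-step with key j (a sketch; we write \(\langle {j},{\beta },{\mathbf {0}}\rangle \) for the event recorded by the \(\beta \)-step, since \(\beta .P_1\) offers no alternative branch):

$$\begin{aligned} \langle \rangle \triangleright \alpha .(\beta .P_1 \parallel \gamma .P_2)+Q&\rightarrow ^{i}_{\alpha } \langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle } \triangleright (\beta .P_1 \parallel \gamma .P_2)\\&\equiv \langle \uparrow \rangle \cdot {\langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }} \triangleright \beta .P_1 \parallel \langle \uparrow \rangle \cdot {\langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }} \triangleright \gamma .P_2\\&\rightarrow ^{j}_{\beta } \langle {j},{\beta },{\mathbf {0}}\rangle \cdot {\langle \uparrow \rangle \cdot {\langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }}} \triangleright P_1 \parallel \langle \uparrow \rangle \cdot {\langle {i},{\alpha },{Q}\rangle \cdot {\langle \rangle }} \triangleright \gamma .P_2 \end{aligned}$$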

So the encoding has to be able, while encoding monitored processes, to collect partially encoded processes sharing the same memory. In the example above, the encoding has to join together the processes \(\beta [j].P_1\) and \(\gamma .P_2\) and put them in the context \(\alpha [i].[\bullet ] + Q\). It is clear that such an encoding cannot be compositional, as it has to reason on the whole process: it reconstructs the history of monitored processes up to a split \(\langle \uparrow \rangle \), then somehow applies the structural law \(\textsc {Split}\) in order to merge partially encoded processes, and then continues the encoding of the obtained parallel composition under the common memory. This is why the encoding of an RCCS reachable process R is defined as ,

where the function is inductively defined as follows:

As we can see, the encoding of a monitored process proceeds as long as in m there are events of the form \( \langle i,\alpha ,Q\rangle \), and it freezes when it encounters a memory m on top of which there is a split event \(\langle \uparrow \rangle \cdot {m}\); this freezing produces a partially encoded process of the form .

Function \(\delta (\cdot )\), which is in charge of fusing two partially encoded CCSK processes sharing the same memory, is defined as follows:

Let us note that \(\delta \) stops only when an entire CCSK process has been derived; otherwise it applies the encoding again to the fused processes.
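Since the formal definitions are given by the figures omitted above, the following is only a toy sketch of the mechanism just described, under our own modelling assumptions: memories are lists of events, `encode_thread` plays the role of the encoding of a single monitored process (freezing at a split event), and the fusion step plays the role of \(\delta \). All identifiers here are illustrative and do not appear in the paper.

```python
# Illustrative sketch, not the paper's formal definition. Memories are lists
# of events, newest first: ("act", key, action, alt) models <i, alpha, Q>
# (alt == "0" when there is no discarded sum branch), and ("split",) models
# the split event <up>. Process terms are plain strings.

def encode_thread(memory, term):
    """Encode one monitored process m |> P bottom-up, consuming action
    events until a split event (or the empty memory) is reached.
    Returns (remaining_memory, partially_encoded_term)."""
    while memory and memory[0][0] == "act":
        _, key, action, alt = memory[0]
        term = f"{action}[{key}].{term}"
        if alt != "0":                      # re-attach the discarded sum branch
            term = f"({term} + {alt})"
        memory = memory[1:]
    return memory, term                     # frozen at a split, or fully encoded

def encode(threads):
    """Encode each thread, then repeatedly fuse (as delta does) two partially
    encoded processes frozen on the same split event, i.e. sharing the same
    remaining memory, and continue encoding under the common memory."""
    parts = [encode_thread(m, t) for m, t in threads]
    while len(parts) > 1:
        m1, t1 = parts.pop(0)
        for idx, (m2, t2) in enumerate(parts):
            if m1 == m2 and m1 and m1[0] == ("split",):
                # fuse: parallel-compose, drop the split, keep encoding
                parts[idx] = encode_thread(m1[1:], f"({t1} | {t2})")
                break
        else:
            raise ValueError("no sibling sharing the same memory")
    return parts[0][1]

# The example R from this section, writing alpha, beta, gamma as a, b, g:
threads = [
    ([("act", "j", "b", "0"), ("split",), ("act", "i", "a", "Q")], "P1"),
    ([("split",), ("act", "i", "a", "Q")], "g.P2"),
]
print(encode(threads))                      # (a[i].(b[j].P1 | g.P2) + Q)
```

Running it on the two monitored processes of R reconstructs, as a string, the CCSK process \(\alpha [i].(\beta [j].P_1 \parallel \gamma .P_2)+Q\) of the example.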

The following example shows how the entire mechanism works:

5 Conclusions and Future Work

The first reversible variant of CCS, called RCCS, was introduced by Danos and Krivine [2]. In RCCS each process is monitored by a memory that serves as a stack of past actions. Memories are considered as unique process identifiers and, in order to preserve this uniqueness along a parallel composition, a structural law permits to obtain unique memories through a parallel composition. A general method for reversing process calculi, given in a particular SOS format, has been proposed by Phillips and Ulidowski in [11]. The main idea of this approach is the use of communication keys to uniquely identify communications, and to make each operator of the calculus static. By applying this method to CCS, CCSK is obtained. Since in CCSK the history is directly annotated in the process itself, there is no need to split the history through a parallel composition. We call this kind of history recording static reversibility, while we call the one used by RCCS dynamic, since each thread is endowed with its own history. In order to show that these two methods are similar, i.e. that two reversible CCSs derived by them are strongly bisimilar, we have provided an encoding from a CCSK process to possibly several RCCS monitored processes. Then we have shown that a CCSK term and its encoding in RCCS are strongly back and forth bisimilar. We have also sketched a possible encoding from RCCS to CCSK and discussed the difficulties behind it, mostly due to the fact that multiple split monitored processes may correspond to a single CCSK process. We leave as future work showing the correctness of this encoding. Once this is proven, we will be able to state that the two calculi (and their underlying semantics) are equivalent, and to bring to CCSK some results about causal consistency already proven for RCCS.

We also leave as future work showing that back and forth bisimulation is a congruence. Moreover, another interesting result would be to show that the encoding is fully abstract, i.e. that two bisimilar CCSK terms are translated into two bisimilar RCCS terms.