
1 Introduction

Concurrent constraint programming (CCP) [24, 26] (see a survey in [22]) combines concurrency primitives with the ability to deal with constraints, and hence, with partial information. The notion of concurrency is based upon the shared-variables communication model. CCP is intended for reasoning, modeling and programming concurrent agents (or processes) that interact with each other and their environment by posting and asking information in a medium, a so-called store. Agents in CCP can be seen as both computing processes (behavioral style) and as logic formulae (declarative style). Hence CCP can exploit reasoning techniques from both process calculi and logic.

CCP is a very flexible model and has thus been applied to an increasing number of different fields such as probabilistic and stochastic [4], timed [8, 18, 25] and mobile [23] systems. More recently, CCP languages have been proposed for the specification of spatial and epistemic behaviors as in, e.g., social networks [14, 20].

One crucial problem when working with a concurrent language is being able to provide tools to debug programs. This is particularly useful for a language in which a program can generate a large number of parallel running agents. In order to tame this complexity, abstract interpretation techniques have been considered (e.g. in [6, 7, 11]), as well as (abstract) declarative debuggers following the seminal work of Shapiro [27]. However, these techniques are either approximate (abstract interpretation) or difficult to apply to complex programs (declarative debugging). It would be useful to have a semi-automatic tool able to interact with the user and filter, in a given computation, the information which is relevant to a particular observation or result. In other words, the programmer could mark the outcome that she is interested in checking in a particular computation that she suspects to be wrong. Then, a corresponding sliced partial computation is obtained automatically, where only the information relevant to the marked parts is present.

Slicing was introduced in pioneering work by Mark Weiser [28]. It was originally defined as a static technique, independent of any particular input of the program. The technique was later extended by introducing so-called dynamic program slicing [15], which simplifies the debugging process by selecting the portion of the program containing the faulty code. Dynamic program slicing has been applied to several programming paradigms, for instance to imperative programming [15], functional programming [19], term rewriting [1], and functional logic programming [2]. The reader may refer to [13] for a survey.

In this paper we present the first formal framework for CCP dynamic slicing and show, by some working examples and a prototypical tool, the main features of this approach. Our aim is to help the programmer to debug her program in cases where she could not find the bugs by using other debuggers. We proceed in three main steps. First, we extend the standard operational semantics of CCP to a “collecting semantics” that adds the information needed by the slicer. Second, we propose several analyses of the faulty situation based on error symptoms, including causality, variable dependencies, unexpected behaviors and store inconsistencies. Third, we define a marking algorithm for the redundant items and define a trace slice. Our algorithm is flexible and can deal with different variants of CCP. In particular, we show how to apply it to timed extensions of CCP [25].

Organization. Section 2 describes CCP and its operational semantics. In Sect. 3 we introduce a slicing technique for CCP. In Sect. 4 we extend our method to timed CCP programs. We present a working prototypical implementation of the slicer, available at http://subsell.logic.at/slicer/, and describe an example using the slicer to debug a multimedia interacting system programmed in timed CCP. Due to lack of space, other examples are given only on the web page of the tool, for instance a biochemical system specified in timed CCP. Finally, Sect. 5 concludes.

2 Concurrent Constraint Programming

Processes in CCP interact with each other by telling and asking constraints (pieces of information) in a common store of partial information. The type of constraints is not fixed but parametric in a constraint system (CS). Intuitively, a CS provides a signature from which constraints can be built from basic tokens (e.g., predicate symbols), and two basic operations: conjunction (\(\sqcup \)) and variable hiding (\(\exists \)). The CS also defines an entailment relation (\(\models \)) specifying interdependencies between constraints: \(c\,\models \,d\) means that the information d can be deduced from the information c. Such systems can be formalized as Scott information systems as in [26], as cylindric algebras [9], or they can be built upon a suitable fragment of logic, e.g., as in [18]. Here we follow [9], since the other approaches can be seen as an instance of this definition.

Definition 1 (Constraint System –CS–)

A cylindric constraint system is a structure \( \mathbf{C} = \langle {\mathcal {C}},\le ,\sqcup ,{\texttt {\textit{t}}},{\texttt {\textit{f}}},{ Var}, \exists , D \rangle \) s.t.

  • \(\langle {\mathcal {C}},\le ,\sqcup ,{\texttt {\textit{t}}},{\texttt {\textit{f}}}\rangle \) is a complete algebraic lattice with \(\sqcup \) the \({ lub}\) operation (representing conjunction). Elements in \({\mathcal {C}}\) are called constraints with typical elements \(c,c',d,d'...\), and \({\texttt {\textit{t}}}\), \({\texttt {\textit{f}}}\) the least and the greatest elements. If \(c\le d\), we say that d entails c and we write \(d \,\models \, c\). If \(c\le d\) and \(d\le c\) we write \(c \cong d\).

  • Var is a denumerable set of variables and for each \(x\in { Var}\) the function \(\exists x: {\mathcal {C}}\rightarrow {\mathcal {C}}\) is a cylindrification operator satisfying: (1) \(\exists x (c) \le c\). (2) If \(c\le d\) then \(\exists x (c) \le \exists x (d)\). (3) \(\exists x(c \sqcup \exists x (d)) \cong \exists x(c) \sqcup \exists x(d)\). (4) \(\exists x\exists y(c) \cong \exists y\exists x (c)\). (5) For an increasing chain \(c_1 \le c_2 \le c_3...\), \( \exists x \bigsqcup _i c_i \cong \bigsqcup _i \exists x ( c_i) \).

  • For each \(x,y \in { Var}\), the constraint \(d_{xy} \in D\) is a diagonal element and it satisfies: (1) \(d_{xx} \cong {\texttt {\textit{t}}}\). (2) If z is different from x and y then \(d_{xy} \cong \exists z(d_{xz} \sqcup d_{zy})\). (3) If x is different from y then \(c \le d_{xy} \sqcup \exists x(c\sqcup d_{xy})\).

The cylindrification operator models a sort of existential quantification for hiding information. As usual, \(\exists x. c\) binds x in c. We use \({ fv}(c)\) (resp. \({ bv}(c)\)) to denote the set of free (resp. bound) variables in c. The diagonal element \(d_{xy}\) can be thought of as the equality \(x=y\), useful to define substitutions of the form [t/x] (see the details, e.g., in [11]).

As an example, consider the finite domain constraint system (FD) [12]. This system assumes variables to range over finite domains and, in addition to equality, one may have predicates that restrict the possible values of a variable as in \(x<42\).
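For illustration purposes only, one may view a one-variable FD constraint as the set of values it allows; entailment then becomes domain inclusion. The following toy Python sketch (ours, not part of [12]) makes this concrete:

```python
def entails_fd(dom_c: set, dom_d: set) -> bool:
    """c |= d iff every value allowed by c is also allowed by d."""
    return dom_c <= dom_d

# x < 4 (over 0..9) entails x < 7: {0,1,2,3} is contained in {0,...,6}
assert entails_fd(set(range(4)), set(range(7)))
```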

2.1 The Language of CCP Processes

In the spirit of process calculi, the language of processes in CCP is given by a small number of primitive operators or combinators as described below.

Definition 2

(Syntax of Indeterminate CCP [26]). Processes in CCP are built from constraints in the underlying constraint system and the syntax:

$$\begin{aligned} P,Q ~{:}{:}{=}~ {\mathbf {skip}}\mid \mathbf {tell}(c) \mid \sum \limits _{i\in I}\mathbf {ask} \ (c_i) \ \mathbf {then} \ P_i \mid P \parallel Q \mid (\mathbf {local} \, x) \, P \mid p({\overline{x}}) \end{aligned}$$

The process \({\mathbf {skip}}\) represents inaction. The process \(\mathbf {tell}(c)\) adds c to the current store d producing the new store \(c\sqcup d\). Given a non-empty finite set of indexes I, the process \(\sum \limits _{i\in I}\mathbf {ask} \ (c_i) \ \mathbf {then} \ P_i\) non-deterministically chooses \(P_k\) for execution if the store entails \(c_k\). The chosen alternative, if any, precludes the others. This provides a powerful synchronization mechanism based on constraint entailment. When I is a singleton, we shall omit the “\(\sum \)” and we simply write \(\mathbf {ask} \ (c) \ \mathbf {then} \ P\).

The process \(P\parallel Q\) represents the parallel (interleaved) execution of P and Q. The process \((\mathbf {local} \, x) \, P\) behaves as P and binds the variable x to be local to it. We use \({ fv}(P)\) (resp. \({ bv}(P)\)) to denote the set of free (resp. bound) variables in P.

Given a process definition \(p({\overline{y}}) \mathop {=}\limits ^{\Delta }P\), where all free variables of P are in the set of pairwise distinct variables \({\overline{y}}\), the process \(p({\overline{x}})\) evolves into \(P[{\overline{x}}/{\overline{y}}]\). A CCP program takes the form \({\mathcal {D}}.P\) where \({\mathcal {D}}\) is a set of process definitions and P is a process.
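To fix notation for later sketches, the following Python rendering (ours; constraints are left abstract as strings) represents the syntax above as data types. These classes are purely illustrative and are reused in the hypothetical sketches of the following sections.

```python
from dataclasses import dataclass
from typing import List, Tuple

Constraint = str                       # constraints are kept abstract here

class Process:
    """Base class for CCP process terms."""

@dataclass
class Skip(Process): pass              # skip: inaction

@dataclass
class Tell(Process):                   # tell(c)
    c: Constraint

@dataclass
class Sum(Process):                    # sum over i of: ask(c_i) then P_i
    branches: List[Tuple[Constraint, "Process"]]

@dataclass
class Par(Process):                    # P || Q
    left: Process
    right: Process

@dataclass
class Local(Process):                  # (local x) P
    x: str
    body: Process

@dataclass
class Call(Process):                   # p(x1, ..., xn)
    name: str
    args: List[str]
```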

The Structural Operational Semantics (SOS) of CCP is given by the transition relation \( \gamma \longrightarrow \gamma '\) satisfying the rules in Fig. 1. Here we follow the formulation in [10], where the local variables created by the program appear explicitly in the transition system and a parallel composition of agents is identified with a multiset of agents. More precisely, a configuration \(\gamma \) is a triple of the form \((X; \varGamma ; c)\), where c is a constraint representing the store, \(\varGamma \) is a multiset of processes, and X is a set of hidden (local) variables of c and \(\varGamma \). The multiset \(\varGamma =P_1,P_2,\ldots ,P_n\) represents the process \(P_1 \parallel P_2 \parallel \cdots \parallel P_n\). We shall use both notations interchangeably to denote parallel composition. Moreover, processes are quotiented by a structural congruence relation \(\cong \) satisfying: (STR1) \(P \cong Q\) if they differ only by a renaming of bound variables (alpha conversion); (STR2) \(P\parallel Q \cong Q \parallel P\); (STR3) \(P \parallel (Q \parallel R) \cong (P \parallel Q) \parallel R\); (STR4) \(P \parallel {\mathbf {skip}}\cong P\).

Let us briefly explain the rules in Fig. 1. A tell agent \(\mathbf {tell}(c)\) adds c to the current store d (Rule \(\mathrm R_\mathrm{TELL}\)); the process \(\sum \limits _{i\in I}\mathbf {ask} \ (c_i) \ \mathbf {then} \ P_i\) executes \(P_k\) if its corresponding guard \(c_k\) can be entailed from the store (Rule \(\mathrm R_\mathrm{SUM}\)); a local process \((\mathbf {local} \, x) \, P\) adds x to the set of hidden variables X when no clashes of variables occur (Rule \(\mathrm R_\mathrm{LOC}\)). Observe that Rule \(\mathrm R_\mathrm{EQUIV}\) can be used to do alpha conversion if the premise of \(\mathrm R_\mathrm{LOC}\) cannot be satisfied; the call \(p({\overline{x}})\) executes the body of the process definition (Rule \(\mathrm R_\mathrm{CALL}\)).

Definition 3

(Observables). Let \(\longrightarrow ^{*}\) denote the reflexive and transitive closure of \(\longrightarrow \). If \((X;\varGamma ; d) \longrightarrow ^{*}(X';\varGamma ';d')\) and \(\exists X'. d' \,\models \, c\) we write \((X;\varGamma ;d)\Downarrow _{c}\). If \(X=\emptyset \) and \(d={\texttt {\textit{t}}}\) we simply write \(\varGamma \Downarrow _{c}\).

Intuitively, if P is a process then \(P\Downarrow _{c}\) says that P can reach a store d strong enough to entail c, i.e., c is an output of P. Note that the variables in \(X'\) above are hidden from \(d'\) since the information about them is not observable.

Fig. 1. Operational semantics for CCP calculi

3 Slicing a CCP Program

Dynamic slicing is a technique that helps the user to debug her program by simplifying a partial execution trace, thus depurating it from parts which are irrelevant to find the bug. It can also help to highlight parts of the programs which have been wrongly ignored by the execution of a wrong piece of code.

Our slicing technique consists of three main steps:

S1. Generating a (finite) trace of the program. For that, we propose a collecting semantics that generates the (meta) information needed for the slicer.

S2. Marking the final store, to choose some of the constraints that, according to the symptoms detected, should or should not be in the final store.

S3. Computing the trace slice, to select the processes and constraints that were relevant to produce the (marked) final store.

3.1 Collecting Semantics (Step \(\mathbf{S1}\))

The slicer we propose requires some extra information from the execution of the processes. More precisely, (1) in each operational step \(\gamma \rightarrow \gamma '\), we need to highlight the process that was reduced; and (2) the constraints accumulated in the store must reflect, exactly, the contribution of each process to the store.

In order to solve (1) and (2), we propose a collecting semantics that extracts the needed meta information for the slicer. The rules are in Fig. 2 and explained below.

Fig. 2. Collecting semantics for CCP calculi. \(\varGamma \) and \(\varGamma '\) are (possibly empty) sequences of processes. \({ fvars} = X\cup fv(S) \cup fv(\varGamma ) \cup fv(\varGamma ')\). In “:j”, j is a fresh identifier.

The semantics considers configurations of the shape \((X ; \varGamma ; S)\) where X is a set of hidden variables, \(\varGamma \) is a sequence of processes with identifiers and S is a set of atomic constraints. Let us explain the last two components. We identify the parallel composition \(Q = P_1 \parallel \cdots \parallel P_n\) with the sequence \(\varGamma _{Q} =P_1\!:\!i_1, \cdots , P_n\!:\!i_n\) where \(i_j \in {\mathbb {N}}\) is a unique identifier for \(P_j\). Abusing notation, we usually write \(Q\!:\!i\) instead of \(\varGamma _Q\) when the indexes in the parallel composition are unimportant. Moreover, we shall use \(\epsilon \) to denote an empty sequence of processes. The context \(\varGamma , P\!:\!i,\varGamma '\) represents that P is preceded and followed, respectively, by the (possibly empty) sequences of processes \(\varGamma \) and \(\varGamma '\). The use of indexes allows us to distinguish, e.g., the three different occurrences of P in “\(\varGamma _1,P\!:\!i,\varGamma _2,P\!:\!j, (\mathbf {ask} \ (c) \ \mathbf {then} \ P)\!:\!k\)”.

Transitions are labeled with \(\xrightarrow {[i]_{k}}\) where i is the identifier of the reduced process and k can be either \(\bot \) (undefined) or a natural number indicating the branch chosen in a non-deterministic choice (Rule \(\mathrm R'_\mathrm{SUM}\)). In each rule, the resulting process has a new/fresh identifier (see e.g., j in Rule \(\mathrm R'_\mathrm{LOC}\)). This new identifier can be obtained, e.g., as the successor of the maximal identifier in the previous configuration. For the sake of readability, we write [i] instead of \([i]_\bot \). Moreover, we shall omit the identifier “\(\ \!:\!i\)” when it can be inferred from the context.
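For instance, the fresh identifier can be computed exactly as stated, as the successor of the maximal identifier in use (a hypothetical helper, keeping processes in a dictionary indexed by identifier):

```python
def fresh_id(procs: dict) -> int:
    """Successor of the maximal identifier in the previous configuration."""
    return max(procs, default=0) + 1
```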

Stores and Configurations. The solution for (2) amounts to considering the store of a configuration as a set of (atomic) constraints rather than as a single constraint. The store \(\{c_1,\cdots ,c_n\}\) then represents the constraint \(c_1 \sqcup \cdots \sqcup c_n\).

Consider the process \(\mathbf {tell}(c)\) and let \(V \subseteq Vars\). The Rule \(\mathrm R'_\mathrm{TELL}\) first decomposes the constraint c into its atoms. For that, assume that the bound variables in c are all distinct and not in V (otherwise, by alpha conversion, we can find \(c' \cong c\) satisfying such a condition). We define \(atoms(c, V) = \langle bv(c), basic(c)\rangle \) where

$$\begin{aligned} basic(c) = \left\{ \begin{array}{l} c \text{ if } c \text { is an atom, } \mathtt{t},\, \mathtt{f} \text{ or } d_{xy}\\ basic(c') \text{ if } c= \exists x. c' \\ basic(c_1) \cup basic(c_2) \text{ if } c= c_1 \sqcup c_2 \\ \end{array} \right. \end{aligned}$$

Observe that in Rule \(\mathrm R'_\mathrm{TELL}\), the parameter V of the function atoms is the set of free variables occurring in the context, i.e., \({ fvars}\) in Fig. 2. This is needed to perform alpha conversion of c (which is left implicit in the definition of \(basic(\cdot )\)) to satisfy the above condition on bound names.
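As an illustration, the decomposition can be sketched over a small constraint datatype (our own hypothetical rendering of the definition above; alpha conversion is left to the caller, as in the definition of \(basic(\cdot )\)):

```python
from dataclasses import dataclass
from typing import Set, Tuple

@dataclass(frozen=True)
class Atom:                  # an atomic token, t, f, or a diagonal element d_xy
    name: str

@dataclass(frozen=True)
class Conj:                  # c1 ⊔ c2
    left: object
    right: object

@dataclass(frozen=True)
class Exists:                # ∃x. c
    x: str
    body: object

def basic(c) -> Set[Atom]:
    """Collect the atomic conjuncts of c, dropping quantifiers."""
    if isinstance(c, Atom):
        return {c}
    if isinstance(c, Exists):
        return basic(c.body)           # bound variables assumed renamed apart
    if isinstance(c, Conj):
        return basic(c.left) | basic(c.right)
    raise TypeError(f"unexpected constraint: {c!r}")

def bound_vars(c) -> Set[str]:
    if isinstance(c, Exists):
        return {c.x} | bound_vars(c.body)
    if isinstance(c, Conj):
        return bound_vars(c.left) | bound_vars(c.right)
    return set()

def atoms(c, V: Set[str]) -> Tuple[Set[str], Set[Atom]]:
    """atoms(c, V) = <bv(c), basic(c)>, assuming bv(c) is disjoint from V."""
    assert bound_vars(c).isdisjoint(V), "alpha-convert c first"
    return bound_vars(c), basic(c)
```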

Rule \(\mathrm R'_\mathrm{SUM}\) signals the number of the branch k chosen for execution. Rule \(\mathrm R'_\mathrm{LOC}\) chooses a fresh variable \(x'\), i.e., a variable not in the set of free variables of the configuration (\({ fvars}\)). Hence, we execute the process \(P[x'/x]\) and add \(x'\) to the set X of local variables. Rule \(\mathrm R'_\mathrm{CALL}\) is self-explanatory.

It is worth noticing that we do not consider a rule for structural congruence in the collecting semantics. Such a rule, in the system of Fig. 1, plays different roles. Axioms STR2 and STR3 provide agents with a multiset structure (commutativity and associativity). As mentioned above, in the collecting semantics we consider sequences of processes to highlight the process that was reduced in a transition. The sequence \(\varGamma \) in Fig. 2 can be of arbitrary length and any of the enabled processes in the sequence can be picked for execution. Axiom STR1 allowed us to perform alpha-conversion on processes, which is needed in \(\mathrm R_\mathrm{LOC}\) to avoid clashes of variables. Note that the new Rule \(\mathrm R'_\mathrm{LOC}\) internalizes such a procedure by picking a fresh variable \(x'\). Finally, Axiom STR4 can be used to simplify \({\mathbf {skip}}\) processes that can be introduced, e.g., by a \(\mathrm R_\mathrm{TELL}\) transition. Observe that the collecting semantics does not add any \({\mathbf {skip}}\) into the configuration (see Rule \(\mathrm R'_\mathrm{TELL}\)).

Example 1

Consider the following toy example. Let \({\mathcal {D}}\) contain the process definition \(A \mathop {=}\limits ^{\Delta }\mathbf {tell}(z>x+4) \) and \({\mathcal {D}}.P\) be a program where

\( P=\mathbf {tell}(y<7) \parallel \mathbf {ask} \ (x<0) \ \mathbf {then} \ A \parallel \mathbf {tell}(x=-3) \). The following is a possible trace generated by the collecting semantics:

$$\begin{aligned} \begin{array}{l} (\emptyset ; \mathbf {tell}(y<7)\!:\!1, (\mathbf {ask} \ (x<0) \ \mathbf {then} \ A)\!:\!2, \mathbf {tell}(x=-3)\!:\!3 ; \emptyset ) \\ \xrightarrow {[1]} (\emptyset ; (\mathbf {ask} \ (x<0) \ \mathbf {then} \ A)\!:\!2, \mathbf {tell}(x=-3)\!:\!3 ; \{y<7\}) \\ \xrightarrow {[3]} (\emptyset ; (\mathbf {ask} \ (x<0) \ \mathbf {then} \ A)\!:\!2 ; \{y<7, x=-3\}) \\ \xrightarrow {[2]_1} (\emptyset ; A\!:\!4 ; \{y<7, x=-3\}) \\ \xrightarrow {[4]} (\emptyset ; \mathbf {tell}(z>x+4)\!:\!5 ; \{y<7, x=-3\}) \\ \xrightarrow {[5]} (\emptyset ; \epsilon ; \{y<7, x=-3, z>x+4\}) \end{array} \end{aligned}$$

Now we introduce the notion of observables for the collecting semantics and we show that it coincides with that of Definition 3 for the operational semantics.

Definition 4 (Observables Collecting Semantics)

We write \(\gamma \xrightarrow {[i_1,...,i_n]_{k_1,...,k_n}} \gamma '\) whenever \( \gamma = (X_0;\varGamma _0;S_0) \xrightarrow {[i_1]_{k_1}} \cdots \xrightarrow {[i_n]_{k_n}} (X_n;\varGamma _n;S_n) = \gamma ' \). Moreover, if \(\exists X_n. \bigsqcup \limits _{d\in S_n}d \,\models \, c\), then we write \(\gamma \downdownarrows _{c}\). If \(X_0=S_0=\emptyset \), we simply write \(\varGamma _0\downdownarrows _{c}\).

Theorem 1 (Adequacy)

For any process P, constraint c and \(i \in {\mathbb {N}}\), \(P\Downarrow _{c}\) iff \(P\!:\!i\downdownarrows _{c}\).

Proof

(sketch) (\(\Rightarrow \)). The proof proceeds by induction on the length of the derivation needed to perform the output c in \(P\Downarrow _{c}\) and using the following results.

Given a set of variables V, a constraint d and a set of constraints S, let us use \(\lfloor d \rfloor _V\) to denote (the resulting tuple) atoms(dV) and \(\lceil S \rceil _V\) to denote the constraint \(\exists V . \bigsqcup \limits _{c_i \in S} c_i\). If \(\langle Y, S\rangle = \lfloor d \rfloor _{V}\), from the definition of atoms, we have \(d \cong \lceil S \rceil _Y\).

Let \(\varGamma \) (resp. \(\varPsi \)) be a multiset (resp. sequence) of processes. Let us use \(\lfloor \varGamma \rfloor \) to denote any sequence of processes with distinct identifiers built from the processes in \(\varGamma \) and \(\lceil \varPsi \rceil \) to denote the multiset built from the processes in \(\varPsi \). Consider now the transition \(\gamma = (X ; \varGamma ; d)\longrightarrow (X' ; \varGamma ' ; d')\). Let \(\langle Y, S\rangle = \lfloor d \rfloor _{V}\) where \(V= X \cup { fv}(\varGamma ) \cup { fv}(d)\). By choosing the same process reduced in \(\gamma \), we can show that there exist i, k s.t. the collecting semantics mimics the same transition as \((X \cup Y ; \lfloor \varGamma \rfloor ; S) \xrightarrow {[i]_{k}} (X'\cup Y' ; \lfloor \varGamma '' \rfloor ; S')\) where \(d' \cong \lceil S' \rceil _{Y'}\) and \(\varGamma '' \cong \varGamma '\).

The (\(\Leftarrow \)) side follows from similar arguments.

3.2 Marking the Store (Step \(\mathbf{S2}\))

From the final store, the user must indicate the symptoms that are relevant to the slice that she wants to compute. For that, she must select a set of constraints that she considers relevant for identifying the bug. Normally, these are constraints at the end of a partial computation, and there are several strategies that one can follow to identify them.

Let us suppose that the final configuration in a partial computation is \((X;\varGamma ;S)\). The symptoms that something is wrong in the program (in the sense that the user identifies some unexpected configuration) may be, but are not limited to, the following:

  1. Causality: the user identifies, according to her knowledge, a subset \(S' \subseteq S\) that needs to be explained (i.e., we need to identify the processes that produced \(S'\)).

  2. Variable Dependencies: the user may identify a set of variables \(V\subseteq { fv}(S)\) whose constraints need to be explored. Then, one would be interested in marking the following set of constraints:

    $$\begin{aligned} S_{sliced} = \{ c \in S \mid vars(c) \cap V \ne \emptyset \} \end{aligned}$$

  3. Unexpected behaviors: there is a constraint c entailed from the final store that is not expected from the intended behavior of the program. Then, one would be interested in marking the following set of constraints:

    $$\begin{aligned} S_{sliced} = \bigcup \{S' \subseteq S \mid \bigsqcup S' \,\models \, c \text{ and } S' \text { is set minimal}\} \end{aligned}$$

    where “\(S'\) is set minimal” means that for any \(S'' \subset S'\), \(S'' \not \models c\).

  4. Inconsistent output: the final store should be consistent with respect to a given specification (constraint) c, i.e., S in conjunction with c must not be inconsistent. In this case, the set of constraints to be marked is:

    $$\begin{aligned} S_{sliced} = \bigcup \{S' \subseteq S \mid \bigsqcup S' \sqcup c \,\models \, \mathtt{f} \text{ and } S' \text { is set minimal}\} \end{aligned}$$

    where “\(S'\) is set minimal” means that for any \(S'' \subset S'\), \(S'' \sqcup c \not \models \mathtt{f}\).

We note that “set minimality”, in general, can be expensive to compute. However, we believe that in some practical cases, as shown by the examples in Sect. 4.1, the cost is manageable. In any case, we can always use supersets of the minimal ones, which are easier to compute but less precise for eliminating useless information.
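To make strategies 2–4 concrete, the following Python sketch (ours; the store is a set of constraint tokens, and the helpers vars and entails are hypothetical oracles supplied by the constraint system) computes \(S_{sliced}\). As noted above, the minimal-subset search is worst-case exponential in the size of the store:

```python
from itertools import combinations
from typing import Callable, Set

def by_variables(S: Set[str], V: Set[str],
                 vars: Callable[[str], Set[str]]) -> Set[str]:
    """Strategy 2: constraints mentioning at least one variable in V."""
    return {c for c in S if vars(c) & V}

def minimal_entailing(S: Set[str], c: str,
                      entails: Callable[[Set[str], str], bool]) -> Set[str]:
    """Strategy 3: union of the set-minimal S' ⊆ S with ⊔S' |= c.
    Naive search, exponential in |S| in the worst case."""
    marked: Set[str] = set()
    for n in range(1, len(S) + 1):
        for Sp in combinations(sorted(S), n):
            entailed = entails(set(Sp), c)
            has_smaller = any(entails(set(Sq), c)
                              for m in range(1, n)
                              for Sq in combinations(Sp, m))
            if entailed and not has_smaller:
                marked |= set(Sp)
    return marked

def inconsistent_with(S: Set[str], c: str,
                      entails: Callable[[Set[str], str], bool]) -> Set[str]:
    """Strategy 4: minimal S' ⊆ S such that ⊔S' ⊔ c |= f."""
    return minimal_entailing(S, "f", lambda Sp, f: entails(Sp | {c}, f))
```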

3.3 Trace Slice (Step \(\mathbf{S3}\))

Starting from the set \(S_{sliced}\) above we can define a backward slicing step. We shall identify, by means of a backward evaluation, the set of transitions (in the original computation) which are necessary for introducing the elements in \(S_{sliced}\). By doing that, we will eliminate information not related to \(S_{sliced}\).

Notation 1

(Sliced Terms). We shall use the fresh constant symbol \(\bullet \) to denote an “irrelevant” constraint or process. Then, for instance, “\(c\sqcup \bullet \)” results from a constraint \(c\sqcup d\) where d is irrelevant. Similarly, \(\mathbf {ask} \ (c) \ \mathbf {then} \ (P \parallel \bullet ) + \bullet \) results from a process of the form \(\mathbf {ask} \ (c) \ \mathbf {then} \ (P \parallel Q) + \sum \mathbf {ask} \ (c_l) \ \mathbf {then} \ P_l\) where Q and the summands in \(\sum \mathbf {ask} \ (c_l) \ \mathbf {then} \ P_l\) are irrelevant. We also assume that a sequence \(\bullet , \ldots , \bullet \) with any number (\({\ge }1\)) of occurrences of \(\bullet \) is equivalent to a single occurrence.

A replacement is either a pair of the shape [T/i] or [T/c]. In the first (resp. second) case, the process with identifier i (resp. constraint c) is replaced with T. We shall use \(\theta \) to denote a set of replacements and we call these sets “replacing substitutions”. The composition of replacing substitutions \(\theta _1\) and \(\theta _2\) is given by the set union of \(\theta _1\) and \(\theta _2\), and is denoted as \(\theta _1 \circ \theta _2\). If \(\varGamma =P_1\!:\!i_1,...,P_n\!:\!i_n\), for simplicity, we shall write \([\varGamma / j]\) instead of \([P_1 \parallel \cdots \parallel P_n/j]\). Moreover, we shall write, e.g., \(\mathbf {ask} \ (c) \ \mathbf {then} \ \varGamma \) instead of \(\mathbf {ask} \ (c) \ \mathbf {then} \ (P_1 \parallel \cdots \parallel P_n)\).

Algorithm 1 computes the slicing. The last configuration in the sliced trace is \((X_n\cap vars(S); \bullet ; S)\). This means that we only observe the local variables of interest, i.e., those in vars(S). Moreover, note that the processes in the last configuration were not executed and are thus irrelevant (abstracted with \(\bullet \)). Finally, the only relevant constraints are those in S.

[Algorithm 1: trace slicer]

The algorithm backwardly computes the slicing by accumulating replacing pairs in \(\theta \). The new replacing substitutions are computed by the function sliceProcess in Algorithm 2. Suppose that \(\gamma \xrightarrow {[i]_{k}} \). We consider each kind of process. For instance, assume a \(\mathrm R'_\mathrm{TELL}\) transition \(\gamma = (X_\gamma ; \varGamma _1, \mathbf {tell}(c)\!:\!i,\varGamma _2;S_\gamma ) \xrightarrow {[i]_{}} (X_\psi ;\varGamma _1,\varGamma _2;S_\psi )=\psi \). We note that \(X_\gamma \subseteq X_\psi \) and \(S_\gamma \subseteq S_\psi \). We replace the constraint c with its sliced version \(c'\) computed by the function sliceConstraints. In that function, we compute the contribution of \(\mathbf {tell}(c)\) to the store, i.e., \(S_c = S_\psi {\setminus } S_\gamma \). Then, any atom \(c_a \in S_c\) not in the relevant set of constraints S is replaced by \(\bullet \). By joining together the resulting atoms, and existentially quantifying the variables in \(X_\psi {\setminus } X_\gamma \) (if any), we obtain the sliced constraint \(c'\). In order to further simplify the trace, if \(c'\) is \(\bullet \) or \(\exists {\overline{x}}. \bullet \) then we substitute \(\mathbf {tell}(c)\) with \(\bullet \) (thus avoiding the “irrelevant” process \(\mathbf {tell}(\bullet )\)).

In a non-deterministic choice, all the precluded choices are discarded (“\(+~ \bullet \)”). Moreover, if the chosen alternative \(Q_k\) does not contribute to the final store (i.e., \(\varGamma _Q\theta = \bullet \)), then the whole process \(\sum \mathbf {ask} \ (c_l) \ \mathbf {then} \ P_l\) becomes \(\bullet \).

Consider the process \((\mathbf {local} \, x) \, Q\). Note that x may be renamed to avoid a clash of names (see \(\mathrm R'_\mathrm{LOC}\)). The newly created variable must be the one in \(\{x'\} = X_\psi {\setminus } X_\gamma \). Then, we check whether \(\varGamma _Q[x'/x]\) is relevant or not to return the appropriate replacement. The case of procedure calls can be explained similarly.

[Algorithm 2: sliceProcess and sliceConstraints]
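Since Algorithm 1 is only summarized here, the following simplified Python sketch (ours, under several assumptions: stores as sets of atomic tokens, a trace recorded as a list of steps, and replacements mapping process identifiers to sliced processes) conveys the backward pass together with the \(\mathbf {tell}\) case of sliceProcess. It is an approximation of the algorithm, not the tool's code:

```python
from typing import Dict, FrozenSet, List, NamedTuple, Optional

IRRELEVANT = "*"                       # stands for the symbol •

class Config(NamedTuple):
    X: FrozenSet[str]                  # hidden variables
    procs: Dict[int, str]              # identifier -> (pretty-printed) process
    S: FrozenSet[str]                  # store: set of atomic constraints

class Step(NamedTuple):
    before: Config
    after: Config
    i: int                             # identifier of the reduced process
    k: Optional[int]                   # chosen branch, if any

def slice_tell(step: Step, S: FrozenSet[str]) -> Dict[int, str]:
    """tell case of sliceProcess: keep only the told atoms that are in S."""
    contributed = step.after.S - step.before.S      # S_c = S_psi \ S_gamma
    relevant = contributed & S
    if not relevant:
        return {step.i: IRRELEVANT}                 # the whole tell is irrelevant
    return {step.i: "tell(" + " ⊔ ".join(sorted(relevant)) + ")"}

def slice_trace(trace: List[Step], S: FrozenSet[str]) -> List[Config]:
    """Backward pass of Algorithm 1, here with the tell case only."""
    theta: Dict[int, str] = {}
    sliced: List[Config] = []
    for step in reversed(trace):
        theta.update(slice_tell(step, S))           # accumulate replacements
        procs = {i: theta.get(i, P) for i, P in step.before.procs.items()}
        sliced.append(Config(step.before.X, procs, frozenset(step.before.S & S)))
    return list(reversed(sliced))
```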

Example 2

Let \(a, b, c, d, e\) be constraints without any entailment among them and consider the process

$$\begin{aligned} R = \mathbf {tell}(a) \parallel (\mathbf {ask} \ (a) \ \mathbf {then} \ \mathbf {tell}(c)) \parallel (\mathbf {ask} \ (c) \ \mathbf {then} \ (\mathbf {tell}(d) \parallel \mathbf {tell}(b))). \end{aligned}$$

In any execution of R, the final store is \(\{a,b,c,d\}\). If the user selects only \(\{d\}\) as slicing criterion, our implementation (see Sect. 4.1) returns the following output (omitting the processes’ identifiers):

[Tool output: sliced trace for Example 2]

Note that only the relevant part of the process \(\mathbf {ask} \ (c) \ \mathbf {then} \ (\mathbf {tell}(d) \parallel \mathbf {tell}(b))\) is highlighted as well as the process \(\mathbf {tell}(d)\) that introduced d in the final store.

Also note that the process \(P=\mathbf {ask} \ (a) \ \mathbf {then} \ \mathbf {tell}(c)\) is not selected in the trace since c is not part of the marked store. However, one may be interested in marking this process to discover the causality relation between P and \(Q=\mathbf {ask} \ (c) \ \mathbf {then} \ (\mathbf {tell}(d) \parallel \mathbf {tell}(b))\). Namely, P adds c to the store, needed in Q to produce d.

It turns out that we can easily adapt Algorithm 2 to capture such causality relations as follows. Assume that sliceProcess returns both a replacement \(\theta \) and a constraint c, i.e., a tuple of the shape \(\langle \theta , c\rangle \). In the case of \(\sum \mathbf {ask} \ (c_l) \ \mathbf {then} \ P_l\), if \(\varGamma _Q \theta \ne \bullet \), we return the pair \(\langle [ \mathbf {ask} \ (c_k) \ \mathbf {then} \ \varGamma _k\theta + \bullet / i], c_k\rangle \). In all the other cases, we return \(\langle \theta , \mathtt{t}\rangle \) where \(\theta \) is as in Algorithm 2. Intuitively, the second component of the tuple represents the guard that was entailed in a “relevant” application of the rule \(\mathrm R'_\mathrm{SUM}\). Therefore, in Algorithm 1, besides accumulating \(\theta \), we add the returned guard to the set of relevant constraints S. This is done by replacing line 5 in Algorithm 1 with

[line 5′: compose the replacement returned by sliceProcess with \(\theta \) and update \(S \leftarrow S \cup S_{minimal}(S_k, c)\), where c is the guard returned by sliceProcess]

where \(S_{minimal}(S,c)=\emptyset \) if \(c=\mathtt{t}\); otherwise, \(S_{minimal}(S,c) = \bigcup \{S' \subseteq S \mid \bigsqcup S' \,\models \, c \text{ and } S' \text { is set minimal}\}\). Therefore, we add to S the minimal set of constraints in \(S_k\) that “explains” the entailed guard c of an ask agent.
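In terms of the marking sketch of Sect. 3.2, \(S_{minimal}\) could read as follows (a hypothetical helper reusing minimal_entailing from that sketch):

```python
def S_minimal(S, c, entails):
    """Empty if c = t; else the union of the set-minimal subsets entailing c."""
    return set() if c == "t" else minimal_entailing(S, c, entails)
```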

With this modified version of the algorithm (supporting causality relations), the output for the program in Example 2 is:

[Tool output: sliced trace for Example 2 including causality relations]

where the process \(\mathbf {tell}(a)\) is also selected since the execution of \(\mathbf {ask} \ (a) \ \mathbf {then} \ \mathbf {tell}(c)\) depends on this process.

Soundness. We conclude here by showing that the slicing procedure computes a suitable approximation of the concrete trace. Given two processes \(P, P'\), we say that \(P'\) approximates P, notation \(P \preceq ^{\sharp }P'\), if there exists a (possibly empty) replacement \(\theta \) s.t. \(P' = P\theta \) (i.e., \(P'\) is as P but replacing some subterms with \(\bullet \)). Let \(\gamma = (X ; \varGamma ; S)\) and \(\gamma '= (X' ;\varGamma ' ; S')\) be two configurations s.t. \(|\varGamma | =|\varGamma '|\). We say that \(\gamma '\) approximates \(\gamma \), notation \(\gamma \preceq ^{\sharp }\gamma '\), if \(X' \subseteq X\), \(S' \subseteq S\) and \(P_i \preceq ^{\sharp }P_i'\) for all \(i \in 1..|\varGamma |\).

Theorem 2

Let \(\gamma _0 \xrightarrow {[i_1]_{k_1}} \cdots \xrightarrow {[i_n]_{k_n}}\gamma _n\) be a partial computation and \(\gamma '_0 \xrightarrow {[i_1]_{k_1}} \cdots \xrightarrow {[i_n]_{k_n}}\gamma '_n\) be the resulting sliced trace according to an arbitrary slicing criterion. Then, for all \(t\in 1..n\), \(\gamma _t \preceq ^{\sharp }\gamma _t'\). Moreover, let \(Q=\sum \mathbf {ask} \ (c_k) \ \mathbf {then} \ P_k\) and assume that \((X_{t-1}; \varGamma , Q\!:\!i_t, \varGamma ' ; S_{t-1})\xrightarrow {[i_t]_{k_t}} (X_t; \varGamma , P_{k_t}\!:\!j, \varGamma '; S_t)\) for some \(t \in 1..n\). If the sliced trace is computed with the algorithm extended to support causality relations, then \(\exists X'_{t-1}. (\bigsqcup S'_{t-1}) \,\models \,c_{k_t}\).

4 Applications to Timed CCP

Reactive systems [3] are those that interact continuously with their environment, at a rate controlled by the environment. For example, a controller or a signal-processing system receives a stimulus (input) from the environment, computes an output, and then waits for the next interaction with the environment.

Timed CCP (tcc) [18, 25] is an extension of CCP following ideas from synchronous languages [3]. More precisely, time in tcc is conceptually divided into time intervals (or time-units). In a particular time interval, a CCP process P gets an input c from the environment, executes with this input as the initial store, and when it reaches its resting point, outputs the resulting store d to the environment. The resting point also determines a residual process Q that is executed in the next time-unit. The resulting store d is not automatically transferred to the next time-unit; hence, outputs of two different time-units are not supposed to be related.

Definition 5

(Syntax of tcc [18, 25]). The syntax of tcc is obtained by adding to Definition 2 the processes \( \mathbf {next}\ P \mid \mathbf {unless} \ (c) \ \mathbf {next}\ P \mid \ ! P \).

The process \( \mathbf {next}\ P\) delays the execution of P to the next time interval. We shall use \( \mathbf {next}\ \!^n{P}\) to denote P preceded with n copies of “\( \mathbf {next}\ \)” and \( \mathbf {next}\ \!^0{P}=P\).

The time-out \( \mathbf {unless} \ (c) \ \mathbf {next}\ P \) is also a unit-delay, but \( P \) is executed in the next time-unit only if \( c \) is not entailed by the final store at the current time interval.

The replication \(! P\) means \( P\parallel \mathbf {next}\,P\parallel \mathbf {next}^{2}P\parallel \dots \), i.e., unboundedly many copies of P, but one at a time. We note that in tcc, recursive calls must be guarded by a \(\mathbf {next}\) operator to avoid infinite computations within a time-unit. Recursive definitions can then be encoded via the \(! \) operator [17].

The operational semantics of tcc considers internal and observable transitions. The internal transitions correspond to the operational steps that take place during a time-unit. The rules are the same as in Fig. 2 plus:

$$\begin{aligned} \begin{array}{c} (X; \varGamma , \mathbf {unless} \ (c) \ \mathbf {next}\ P\!:\!i, \varGamma '; S) \xrightarrow {[i]} (X; \varGamma , \varGamma '; S) \quad \text{ if } \bigsqcup S \,\models \, c \\[4pt] (X; \varGamma , {! P}\!:\!i, \varGamma '; S) \xrightarrow {[i]} (X; \varGamma , P\!:\!j, \mathbf {next}\ {! P}\!:\!j', \varGamma '; S) \end{array} \end{aligned}$$

where j and \(j'\) are fresh identifiers. The \(\mathbf {unless}\) process is precluded from execution if its guard can be entailed from the current store. The process \(! P\) creates a copy of P in the current time-unit and schedules \(! P\) again for the next one. The seemingly missing rule for the \(\mathbf {next}\) operator is clarified below.

The observable transition (“\( P \) on input \( c \), reduces in one time-unit to \( Q \) and outputs \( d \)”) is obtained from a finite sequence of internal reductions leading to a resting point, i.e., a configuration that cannot be further reduced:

$$\begin{aligned} \dfrac{(\emptyset ; P ; c) \longrightarrow ^{*} (X; \varGamma '; d) \not \longrightarrow }{P \xRightarrow {(c,\ \exists X. d)} (\mathbf {local} \, X) \, F(\varGamma ')} \end{aligned}$$

The process \(F(\varGamma ')\) (the continuation of \(\varGamma '\)) is obtained as follows:

$$\begin{aligned} F(R)=\left\{ \begin{array}{ll} {\mathbf {skip}} & \text{ if } R={\mathbf {skip}} \text{ or } R=\mathbf {ask} \ (c) \ \mathbf {then} \ R' \\ F(R_1) \parallel F(R_2) & \text{ if } R = R_1 \parallel R_2 \\ Q & \text{ if } R= \mathbf {next}\ Q \text{ or } R=\mathbf {unless} \ (c) \ \mathbf {next}\ Q \end{array} \right. \end{aligned}$$

The function F(R) (the future of R) returns the processes that must be executed in the next time-unit. More precisely, it unfolds \(\mathbf {next}\) and \(\mathbf {unless}\) expressions. Notice that an ask process reduces to \({\mathbf {skip}}\) if its guard was not entailed by the final store. Notice also that F is not defined for \(\mathbf {tell}(c)\), \(! Q\), \((\mathbf {local} \, x) \, P\) or \(p({\overline{x}})\) processes, since all of them give rise to an internal transition. Hence these processes can only appear in the continuation if they occur within a \(\mathbf {next}\) or \(\mathbf {unless}\) expression.
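Extending the hypothetical process datatypes sketched in Sect. 2.1 with tcc constructors Next, Unless and Bang (again, names of our own choosing), F admits a direct functional reading:

```python
from dataclasses import dataclass

@dataclass
class Next(Process):          # next P
    body: Process

@dataclass
class Unless(Process):        # unless (c) next P
    c: Constraint
    body: Process

@dataclass
class Bang(Process):          # !P
    body: Process

def future(R: Process) -> Process:
    """F(R): the process that must run in the next time-unit."""
    if isinstance(R, (Skip, Sum)):        # non-entailed asks vanish
        return Skip()
    if isinstance(R, Par):
        return Par(future(R.left), future(R.right))
    if isinstance(R, (Next, Unless)):     # unfold one unit of delay
        return R.body
    # tell, !P, local and calls always reduce internally: F is undefined
    raise ValueError(f"F undefined for {R!r}")
```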

4.1 A Trace Slicer for tcc

From the execution point of view, only the observable transition is relevant, since it describes the input-output behavior of processes. However, when a tcc program is debugged, we also have to consider the internal transitions. This makes the task of debugging even harder than for CCP.

We implemented in Maude (http://maude.cs.illinois.edu) a prototypical version of a slicer for tcc (and then for CCP) that can be found at http://subsell.logic.at/slicer/.

The slicing technique for the internal transitions is based on Algorithm 1, adding the following cases to Algorithm 2:

[Cases of sliceProcess for \(\mathbf {unless} \ (c) \ \mathbf {next}\ P\) and \(! P\)]

Note that if an \(\mathbf {unless}\) process evolves during a time-unit, then it is irrelevant: its guard was entailed and the process itself added nothing to the store. In the case of \(! P\), we note that \(\varGamma _P = P\!:\!j, \mathbf {next}\ ! P\!:\!j'\). We check whether P is relevant in the current time-unit (\(P\!:\!j\)) or in the following one (\( \mathbf {next}\ ! P\!:\!j'\)). If this is not the case, then \(! P\) is irrelevant.
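In the style of the slicing sketch in Sect. 3.3, these two cases could be rendered as follows (hypothetical helpers; IRRELEVANT stands for \(\bullet \)):

```python
def slice_unless(step, theta):
    """An unless that evolved had its guard entailed: it contributed nothing."""
    return {step.i: IRRELEVANT}

def slice_bang(step, theta, j, j_next):
    """Reducing !P yields P:j and next !P:j'; if both copies were sliced
    away, the whole !P is irrelevant."""
    if theta.get(j) == IRRELEVANT and theta.get(j_next) == IRRELEVANT:
        return {step.i: IRRELEVANT}
    return {}
```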

Recall that \(\mathbf {next}\) processes do not exhibit any transition during a time-unit; thus, we do not consider this case in the extended version of Algorithm 2.

For the observable transition we proceed as follows. Consider a trace of n observable steps and a set \(S_{slice}\) of relevant constraints to be observed in the last configuration \(\gamma _n\). Let \(\theta _n\) be the replacement computed during the slicing process of the (internal) trace generated from \(\gamma _n\). We propagate the replacements in \(\theta _n\) to the configuration \(\gamma _{n-1}\) as follows:

  1. In \(\gamma _{n-1}\) we set \(S_{sliced} = \emptyset \). Note that the unique store of interest for the user is the one in \(\gamma _n\). Recall also that the final store in tcc is not transferred to the next time-unit. Then, only the processes (and not the constraints) in \(\gamma _{n-1}\) are responsible for the final store in \(\gamma _n\).

  2. Let \(\psi \) be the last internal configuration in \(\gamma _{n-1}\), i.e., \(\gamma _{n-1} \longrightarrow ^{*} \psi \not \longrightarrow \) and \(\gamma _n = F(\psi )\). We propagate the replacements in \(\theta _n\) to \(\psi \) before running the slicer on the trace starting from \(\gamma _{n-1}\). For that, we compute a replacement \(\theta '\) that must be applied to \(\psi \) as follows:

    • If there is a process \(R = \mathbf {next}\ P\!:\!i\) in \(\psi \), then \(\theta '\) includes the replacement \([ \mathbf {next}\ (\varGamma _P\theta _n)/i]\). For instance, if \(R = \mathbf {next}\ (\mathbf {tell}(c)\parallel \mathbf {tell}(d))\) and \(\mathbf {tell}(c)\) was irrelevant in \(\gamma _n\), the resulting process in \(\psi \) is \( \mathbf {next}\ ( \bullet \parallel \mathbf {tell}(d))\). The case for \(\mathbf {unless} \ (c) \ \mathbf {next}\ P\) is similar.

    • If there is a process \(R =\sum _l\mathbf {ask} \ (c_l) \ \mathbf {then} \ P_l \!:\!i\) in \(\psi \) (which is irrelevant since it was not executed), we add to \(\theta '\) the replacement \([\bullet / i]\).

  3. Starting from \(\psi \theta '\), we compute the slicing on \(\gamma _{n-1}\) (Algorithm 1).

  4. This procedure continues until the first configuration \(\gamma _0\) is reached (see the sketch after this list).
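A schematic rendering of this propagation across time-units (ours, with hypothetical helper signatures; it builds on the slicing sketch of Sect. 3.3) is:

```python
def slice_observable(time_units, S_slice, run_algorithm1, propagate):
    """Backward slicing across time-units.

    time_units[t] is the recorded internal trace of time-unit t;
    run_algorithm1(trace, S) slices one internal trace against marking S
    and returns the replacement it accumulated; propagate(theta, trace)
    pushes a replacement into the next/unless (and unexecuted ask)
    processes of the trace's last configuration (step 2)."""
    n = len(time_units) - 1
    theta = run_algorithm1(time_units[n], S_slice)    # slice the last unit
    for t in range(n - 1, -1, -1):
        propagate(theta, time_units[t])               # step 2
        theta = run_algorithm1(time_units[t], set())  # steps 1 and 3
    return theta                                      # step 4: reach gamma_0
```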

Example 3

Consider the following process definitions:

$$\begin{aligned} \begin{array}{lll} System \mathop {=}\limits ^{\Delta }Beat2 \parallel Beat4 \qquad \qquad Beat2 \mathop {=}\limits ^{\Delta }\mathbf {tell}(b2) \parallel \mathbf {next}\ ^2 \ Beat2 \\ Beat4 \mathop {=}\limits ^{\Delta }\mathbf {tell}(b4) \parallel \mathbf {next}\ ^4 \ Beat4 \end{array} \end{aligned}$$

This is a simple model of a multimedia system that every 2 (resp. 4) time-units produces the constraint b2 (resp. b4). Then, every 4 time-units, the system produces both b2 and b4. If we compute 5 time-units and choose \(S_{slice}=\{b4\}\), we obtain (omitting the process identifiers):

[Tool output: sliced trace of System for \(S_{slice}=\{b4\}\) over 5 time-units]

Note that all executions of Beat2 (in time-units 1, 3 and 5) are hidden, since they do not contribute to the observed output b4. More interestingly, the execution of \(\mathbf {tell}(b4)\) in time-unit 1, as well as the recursive call of Beat4 (\( \mathbf {next}\ ^4 \ Beat4\)) in time-unit 5, are also hidden.

Now assume that we compute an even number of time-units. Then, no constraint is produced in the last time-unit and the whole execution of System is hidden:

[Tool output: the completely sliced trace]

As a more compelling example, consider the following process definitions:

[Definitions of processes Beat, Start and Check]

where \(I_1 = \{0,3,5,7,9,11,14,16,18,20,22\}\), \(I_2=\{0,3,5,7,9,11\}\) and \(\varPi _i\) stands for parallel composition. This process represents a rhythmic pattern where groups of “2”-unit elements separate groups of “3”-unit elements, e.g., \( 3\ \underbrace{\ 2\ 2\ 2\ 2\ }\ 3\ \underbrace{\ 2\ 2\ 2\ 2 \ 2}\). Such pattern appears in repertoires of Central African Republic music [5] and were programmed in tcc in [21].

This pattern can be represented in a circle with 24 divisions, where the “2” and “3”-unit elements are placed. The “3”-unit intervals are displayed in red in Fig. 3. The important property is asymmetry: if one attempts to break the circle into two parts, it is not possible to obtain two equal parts. More precisely, the \({\texttt {start}}\) and \({\texttt {stop}}\) constraints divide the circle into two halves (see process Start) and it is always the case that the constraint \({\texttt {beat}}\) does not coincide in a time-unit with the constraint \({\texttt {stop}}\). For instance, in Fig. 3(a) (resp. (b)), the circle is divided from time-unit 1 (start) to 13 (stop) (resp. 4 (start) to 16 (stop)). The signal \({\texttt {beat}}\) does not coincide with a \({\texttt {stop}}\): in Fig. 3(a) (resp. (b)), the \({\texttt {beat}}\) is added in time-unit 12 (resp. 15).

If we generate one of the possible traces and perform the slicing process for time-unit 13 with \(S_{sliced}=\{{\texttt {beat}}, {\texttt {stop}}\}\), we only observe Check as a relevant process (since no \({\texttt {beat}}\) is produced in that time-unit):

[Tool output: sliced trace at time-unit 13]

More interestingly, assume that we wrongly write a process Check that is not “well synchronized” with the process Beat; for instance, let \(I_2'=\{2\}\). In this case, the \({\texttt {start}}\) signal does not coincide with a \({\texttt {beat}}\). Then, in time-unit 15, we (wrongly) observe both \({\texttt {beat}}\) and \({\texttt {stop}}\) (i.e., asymmetry is broken!). The trace of that program (which can be found on the tool's web page) is quite long and difficult to understand. On the contrary, the sliced one is rather simple:

[Tool output: sliced trace of the buggy program]

Something interesting in this trace is that the ask in the Check process is hidden from time-unit 4 on (since it is not “needed” any more). Moreover, the only \(\mathbf {tell}({\texttt {beat}})\) process (from the Beat definition) displayed is the one executed in time-unit 15 (i.e., the one resulting from \( \mathbf {next}\ ^{14}{\mathbf {tell}({\texttt {beat}})}\)). From this trace, it is not difficult to see that the Start process starts in time-unit 3 (the process \( \mathbf {next}\ ^{11}{\mathbf {tell}({\texttt {stop}})}\) first appears in time-unit 4). This tells the user that the process Start begins its execution in the wrong time-unit. In order to confirm this hypothesis, the user may compute the sliced trace up to time-unit 3 with \(S_{sliced}=\{{\texttt {beat}}, {\texttt {start}}\}\) and notice that, in that time-unit, \({\texttt {start}}\) is produced but \({\texttt {beat}}\) is not part of the store.

The reader may find on the web page of the tool a further example related to biochemical systems. We modeled in tcc the P53/Mdm2 DNA-damage repair mechanism [16]. The slicer allowed us to detect two bugs in the code we wrote. We invite the reader to compare, in this example, the length (and complexity) of the buggy trace against the resulting sliced trace.

Fig. 3. Pattern of “2” and “3”-unit elements (taken from [5])

5 Conclusions and Future Work

In this paper we introduced the first framework for slicing concurrent constraint based programs, and showed its applicability to CCP and timed CCP. We implemented a prototype of the slicer in Maude and showed its use in debugging a program specifying a biochemical system and a multimedia interacting system.

Our framework is a good basis for dealing with other variants of CCP, such as linear CCP [10], spatial and epistemic CCP [14], as well as other temporal extensions of it [8]. We are currently working on extending our tool to cope with these languages. We also plan to incorporate into our framework an assertion language based on a suitable fragment of temporal logic. Such assertions will specify invariants that the program must satisfy during its execution. If an assertion is not satisfied in a given state, then the execution is interrupted and a concrete trace is generated to be later sliced. For instance, in the multimedia system, the user may specify the invariant \({\texttt {stop}}\rightarrow (\lnot {\texttt {beat}})\) (if \({\texttt {stop}}\) is entailed then \({\texttt {beat}}\) cannot be part of the store) or \({\texttt {stop}}\rightarrow {\circleddash {\texttt {beat}}}\) (a \({\texttt {stop}}\) state must be preceded by a \({\texttt {beat}}\) state).