1 Introduction

In today’s world, we encounter computational devices all around us. These devices do not act in isolation but interact in increasingly complex ways. For example, Automated Teller Machines (ATMs), online banking systems, car braking systems, and railway gate controllers are all composed of several components that communicate with each other over an extended period of time. A common factor in many such systems is the interplay between concurrency and timing. Concurrency plays an important role since systems usually consist of independent components that interact periodically to coordinate their behavior. Timing considerations play an important role in describing how these interactions proceed.

Our goal is to use formal methods to reason about systems where time and concurrency play a significant role. The first challenge is to choose a suitable formalism that admits automated analysis. For instance, if a system exhibits regular, finite-state behavior, we can use model checking to efficiently explore the state space and determine various behavioral properties. Finite-state automata provide an intuitively appealing machine model for generating regular behaviors in this setting. The regular behaviors can be represented as a set of words over an alphabet. Monadic second order logic is an elegant language to describe abstract properties of sets of words. The Büchi-Elgot-Trakhtenbrot Theorem [10, 16] links the two formalisms: a behavior can be described by a finite-state automaton if and only if it can be expressed in monadic second order logic. This correspondence is effective and forms the basis for model checking behavioral properties of finite-state systems. We would like to lift this approach to the timed and distributed setting.

In the timed but sequential setting, event clock automata are a well-known formalism with nice properties; they describe timed behaviors as timed words. An event clock automaton uses implicit “event clocks” that record or predict time lapses with respect to the last or the next occurrence of an event. In [14], it was proved that a timed language, i.e., a set of timed words, can be recognized by an event clock automaton if and only if it can be defined in a timed version of MSO logic. On the other hand, in the concurrent setting, message passing automata (MPA) form a natural machine model for distributed systems. MPA consist of several finite-state automata communicating via FIFO channels, and their runs can be described as message sequence charts (MSCs), which are labeled partial orders (over a labeling alphabet of sends and receives). In this case, it was proved in [17, 19] that, under different restrictions on the MSC languages (which will be made precise later), an MSC language can be recognized by an MPA if and only if it can be expressed in an MSO logic over MSCs. In [7], this result was proved without any restriction on the language of MSCs by considering only the existential fragment of the MSO logic.

In this work, we unify the above approaches by defining a machine model for timed and concurrent systems, namely the Event clock message passing automata (event clock MPA). Event clock MPA are message passing automata which are equipped with event clocks (as in the event clock automaton in the sequential timed setting) to reason about timed and concurrent behavior. To describe the behavior of such automata, we generalize MSCs to their timed extension, which we do in two ways. We first consider timed MSCs which are just MSCs with time-stamps at events (as in timed words). These are ideal to describe real-time system executions, while keeping the causal relation between events explicit. Next, we consider MSCs with timing constraints or time-constrained MSCs, where we associate time-intervals to some pairs of events (instead of attaching time-stamps to individual events). The endpoints of the interval give us the upper and lower bounds on the time allowed to elapse between the events.

Results on event clock message passing automata

Our first result is to lift the Büchi-Elgot equivalence [10, 16] to the timed and distributed setting. For the logical framework, we use a timed version of MSO logic. We interpret both event clock MPAs and timed MSO formulae over timed MSCs and prove a constructive equivalence between them, with and without restriction on the MSC languages. Thus, we provide a logical characterization for event clock MPA. This is done by lifting the corresponding results from the untimed case [7, 17, 19]. An important intermediary step in this translation is the reinterpretation of the timed MSO and event clock MPA in terms of time-constrained MSCs rather than timed MSCs. The time-constrained MSCs provide a dual link: they can be seen as MSCs whose labelings are extended by timing information and they can also be seen as a representation of infinite sets of timed MSCs. Once this translation is done, we can essentially follow the technique of [14] where such an equivalence is shown for (sequential) timed words.

Next, we consider the emptiness problem, which is one of the most basic verification questions that one may wish to ask. In our setting, the emptiness problem for event clock MPAs asks if a given event clock MPA has any run, i.e., a timed MSC which is accepted by it. It can be easily seen that without any restriction on the timed MSCs, this problem is undecidable, since this is already the case in the untimed setting. However, we prove that if we restrict to timed MSCs with at least one bounded linearization (i.e., one in which any send and its matching receive are at most K events apart), then the emptiness problem is decidable. Indeed, this condition is subsumed by one of the restricted settings under which we proved the above equivalence result. Therefore, as a corollary, we also obtain that the satisfiability problem for our logic is decidable. Our approach to prove decidability consists of constructing a global finite timed automaton that can simulate the runs of an event clock MPA (which is a distributed machine) and thus reduce the problem to checking emptiness for a timed automaton. The hard part of the construction lies in “cleverly” maintaining the partial-order information (of the timed MSC) along the sequential runs of the global timed automaton, while using only finitely many clocks.

Related work

Providing a timed partial order semantics as we have done above allows us to apply partial order reduction techniques [18] to address the model-checking problem. There are, however, several other models that also handle time and concurrency in a comparable way. Many timed extensions of Petri nets have been considered, for instance, time Petri nets [6] and timed Petri nets [23]. Unfoldings of Petri nets provide a way to model the partial order behavior of these systems, and by lifting these unfoldings to the timed extensions, they provide a timed partial order semantics [12]. However, unlike MSCs (and their timed extensions), these unfoldings can seldom be represented graphically in a compact manner. Further, unfoldings of Petri nets correspond to “branching time”, whereas MSCs express “linear time” behavior. Other models dealing with time and concurrency include networks of timed automata [2] and products of timed automata [13]. In [8], unfolding techniques were applied to study such networks of timed automata. However, none of these models allow communication via explicit message passing, which is one of the main features of the event clock MPA that we have introduced.

The formal semantics and analysis of timing in MSCs has been addressed earlier in [4, 5, 11, 20]. In [4] and [5], only single timed MSCs or high-level timed MSCs were considered, while in [20] one of the first models of timed MPAs was introduced. However, the latter do not consider MSCs as semantics but rather look at restricted channel architectures (e.g., one-channel systems) to transfer decidability of reachability problems from the untimed to timed setting. The automaton model in [11] links the two approaches by considering a similar automaton model with semantics in terms of timed MSCs and proposes a practical solution to a very specific matching problem using the tool Uppaal.

A preliminary version of the results in this paper was presented in [1].

2 Preliminaries

We denote by ℕ={0,1,2,…} the set of natural numbers. For an alphabet Σ, a Σ-labeled poset is a structure \(\mbox {$(E,\leq ,\lambda )$}\) over Σ, where E is a set of events, ≤ is a partial order on the set of events E called its ordering relation and λ:E→Σ is the labeling function. A linearization of a Σ-labeled poset \(\mbox {$(E,\leq ,\lambda )$}\) is any Σ-labeled poset (E,≤′,λ) such that ≤′ is a linear extension of ≤. Then, the set of events E={e 1,…,e n } can be written as a sequence \(e_{i_{1}}\mathbin {\leq '} e_{i_{2}}\mathbin{\leq '}\cdots\mathbin{\leq '} e_{i_{n}}\) such that \(\lambda(e_{i_{1}})\ldots\lambda(e_{i_{n}})\in\varSigma^{*}\). Thus, any linearization of \(\mbox {$(E,\leq ,\lambda )$}\) can be identified with a unique word over Σ.

Message sequence charts (MSCs)

Let Proc={p,q,r,…} be a non-empty finite set of processes (agents) that communicate through messages via reliable FIFO channels using an alphabet of messages \(\mathcal {M}\). For p∈Proc, let \(\mathit {Act}_{p} = \{ {p}!{q}(m), {p}?{q}(m) \mid q\in \mathit {Proc}, q\neq p,\allowbreak m \in\nobreak \mathcal {M}\}\) be the set of communication actions of process p. The action p!q(m) is read as p sends the message m to q and the action p?q(m) is read as p receives the message m from q. We set Act=⋃ p∈Proc Act p . We also denote the set of channels by Ch={(p,q)∈Proc×Proc ∣ p≠q}. For an action a∈Act, we will sometimes write a∈p!q (respectively, a∈p?q) to denote a=p!q(m) (respectively, a=p?q(m)) for some \(m\in \mathcal {M}\).

Let \(M = \mbox {$(E,\leq ,\lambda )$}\) be an Act-labeled partial order. For e∈E, let ↓e={e′∈E ∣ e′≤e}. For X⊆E, let ↓X=⋃ e∈X ↓e. We call X⊆E a prefix of M if X=↓X. For p∈Proc and a∈Act, we set E p ={e∈E ∣ λ(e)∈Act p } to be the set of all p-events and E a ={e∈E ∣ λ(e)=a} to be the set of a-events. For each (p,q)∈Ch, we define a relation < pq as follows, to capture the fact that channels are FIFO with respect to each message—if e< pq e′, the message m read by q at e′ is the one sent by p at e.

$$e <_{{pq}} e' \quad\text{if}\ \exists m, \lambda (e) = {p}!{q}(m),\ \lambda \bigl(e'\bigr) = {q}?{p}(m)\ \mbox{and}\ |{\downarrow }{e} \cap E_{{p}!{q}(m)}|= \bigl|{\downarrow }{e'} \cap E_{{q}?{p}(m)}\bigr| $$

Finally, for each p∈Proc, we define the relation ≤ pp =(E p ×E p )∩≤, with < pp standing for the largest irreflexive subset of ≤ pp . Also, \({\mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp}}\) denotes the immediate successor relation on process p: for e,e′∈E p , \(e\mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp}e'\) if e< pp e′ and for all e″∈E p , we have e< pp e″≤ pp e′ implies e″=e′.
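To make the FIFO matching concrete, here is a minimal Python sketch (not from the paper; the encoding of actions is hypothetical) that recovers the message-matching pairs from a linearization: under FIFO, a receive on channel (p,q) is matched to the oldest pending send on that channel, which is exactly what the counting condition above expresses.

```python
from collections import defaultdict, deque

def match_messages(linearization):
    """Match sends to receives along one linearization, assuming FIFO channels.

    `linearization` is a list of actions, each a tuple:
      ('!', p, q, m)  for p!q(m)   or   ('?', q, p, m)  for q?p(m).
    Returns a list of (send_index, receive_index) pairs.
    (Hypothetical encoding; the paper works with abstract labeled posets.)
    """
    pending = defaultdict(deque)   # channel (p, q) -> indices of unmatched sends
    matches = []
    for i, act in enumerate(linearization):
        if act[0] == '!':                    # p!q(m): enqueue on channel (p, q)
            _, p, q, m = act
            pending[(p, q)].append((i, m))
        else:                                # q?p(m): dequeue oldest send on (p, q)
            _, q, p, m = act
            j, m_sent = pending[(p, q)].popleft()
            assert m_sent == m, "FIFO violated: contents do not match"
            matches.append((j, i))
    assert all(not d for d in pending.values()), "unmatched sends remain"
    return matches

# Example: the linearization of Fig. 1,
# p!q(m1) q?p(m1) q!r(m2) p!r(m3) r?q(m2) r?p(m3)
lin = [('!', 'p', 'q', 'm1'), ('?', 'q', 'p', 'm1'), ('!', 'q', 'r', 'm2'),
       ('!', 'p', 'r', 'm3'), ('?', 'r', 'q', 'm2'), ('?', 'r', 'p', 'm3')]
print(match_messages(lin))   # [(0, 1), (2, 4), (3, 5)]
```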

Definition 2.1

A message sequence chart (MSC) over Act is a finite Act-labeled poset \(M = \mbox {$(E,\leq ,\lambda )$}\) such that:

  1. Each relation ≤ pp is a total order on E p .

  2. If p≠q then for each \(m \in \mathcal {M}\), |E p!q(m)|=|E q?p(m)|.

  3. If e< pq e′, then

     $$\biggl|{\downarrow }{e} \cap\biggl(\bigcup_{m \in \mathcal {M}} E_{{p}!{q}(m)} \biggr)\biggr| = \biggl|{\downarrow }{e'} \cap\biggl(\bigcup_{m \in \mathcal {M}} E_{{q}?{p}(m)} \biggr)\biggr| $$

  4. The partial order ≤ is the reflexive, transitive closure of ⋃ p,q∈Proc < pq .

As each linearization of an MSC \(M=\mbox {$(E,\leq ,\lambda )$}\) over Act corresponds to a word over Act, we will use these notions interchangeably. Under the FIFO assumption an MSC can be reconstructed, up to isomorphism, from any of its linearizations. In diagrams, the events of an MSC are presented in visual order. Events of a process are arranged along a vertical line and messages are displayed as horizontal or downward-sloping directed edges. Figure 1 shows an example of an MSC with three processes {p,q,r} and six events \(\{e_{1},e'_{1},e_{2},e'_{2},e_{3},e'_{3}\}\) corresponding to three messages—m 1 from p to q, m 2 from q to r and m 3 from p to r. Then, for instance, p!q(m 1) q?p(m 1) q!r(m 2) p!r(m 3) r?q(m 2) r?p(m 3) is one linearization or execution of this MSC seen as a word over Act.

Fig. 1  An MSC

An MSC language over Act is a set of MSCs over Act. An important subclass of MSCs is the set of bounded MSCs, which correspond to systems whose channel capacity is restricted. These systems turn out to enjoy nice algorithmic properties and have nice logical correspondences too. Let B∈ℕ>0 be a positive integer. Then, a word w∈Act* is said to be B-bounded if for each prefix u of w and any p,q∈Proc, the number of occurrences of p!q in u exceeds the number of occurrences of q?p in u by at most B. This means that along the sequential execution described by w, no channel ever contains more than B messages.
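As an illustration of this definition, the following Python sketch (hypothetical encoding of actions, not part of the paper) checks whether a given word over Act is B-bounded by tracking, prefix by prefix, the number of messages currently in each channel.

```python
def is_B_bounded(word, B):
    """Check whether a word over Act is B-bounded: in every prefix and for every
    channel (p, q), #p!q - #q?p is at most B.

    Actions are encoded as ('!', p, q, m) for p!q(m) and ('?', q, p, m) for q?p(m)
    (a hypothetical encoding chosen for this sketch)."""
    in_transit = {}                      # channel (p, q) -> messages currently in the channel
    for act in word:
        if act[0] == '!':
            _, p, q, _ = act
            in_transit[(p, q)] = in_transit.get((p, q), 0) + 1
            if in_transit[(p, q)] > B:
                return False             # some prefix overfills channel (p, q)
        else:
            _, q, p, _ = act
            in_transit[(p, q)] = in_transit.get((p, q), 0) - 1
    return True

# Two sends before their receives need capacity 2 but not 1.
w = [('!', 'p', 'q', 'm'), ('!', 'p', 'q', 'm'), ('?', 'q', 'p', 'm'), ('?', 'q', 'p', 'm')]
print(is_B_bounded(w, 1), is_B_bounded(w, 2))   # False True
```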

Definition 2.2

An MSC M is called universally B-bounded (or ∀-B-bounded) if every linearization of M is B-bounded. It is said to be existentially B-bounded (or ∃-B-bounded) if there exists a linearization which is B-bounded.

A set of MSCs is said to be ∃-B-bounded (respectively, ∀-B-bounded) if each MSC in the set is ∃-B-bounded (respectively, ∀-B-bounded). Further, such a set is called existentially bounded (respectively, universally bounded) if there exists a B such that it is ∃-B-bounded (respectively, ∀-B-bounded). As shown in [19], any regular MSC language (i.e., one whose set of all possible linearizations is regular) is universally bounded.

3 Formalisms to describe timed and concurrent behaviors

The first natural attempt while trying to add timing information to MSCs would be to add time stamps to the events of the MSCs. This is motivated from timed words where we have words with time stamps added at each letter (action). This approach is quite realistic when we want to model the real-time execution of concurrent systems.

Definition 3.1

A timed MSC (TMSC) over Act is a pair \(\mbox {$(M,t)$}\) where \(M=\mbox {$(E,\leq ,\lambda )$}\) is an MSC over Act and t:E→ℝ≥0 is a function such that if e≤e′ then t(e)≤t(e′) for all e,e′∈E. The set of all TMSCs over Act is denoted \(\mathbb {TMSC}(\mathit {Act})\).

In the above definition note that over events e,e′, ≤ is the partial order relating events, while over reals t(e),t(e′), ≤ refers to the usual ordering between real numbers.

Example 3.1

Consider the TMSC T presented in Fig. 2, which shows a scenario where two user processes p and r interact with a railway ticket-booking server process q to book the last available ticket for a certain train journey. Actions are of the form p!q(req), meaning that User1 sends the Server a request for a ticket. In the scenario shown, both users send booking requests to the Server at time instant 1. The Server grants User1’s request since it is received first. Then, User1 confirms his/her booking, which leads to the Server denying the booking to User2. Meanwhile, User2 repeats her request at instant 5, which reaches the Server at time 8. An execution or linearization of this scenario is: (p!q(req),1)(r!q(req),1)(q?p(req),1)(q?r(req),2)(q!p(grant),3.5)(p?q(grant),3.5)(r!q(req),5)(p!q(conf),6)(q?p(conf),6)(q!r(deny),7)(r?q(deny),7)(q?r(req),8).

Fig. 2  (a) A timed MSC T and (b) a TCMSC \({\mathfrak {M}}_{1}\) describing the interaction of two users with a server

Note that in the above execution, the ordering on the time stamps is preserved, thus making it a timed word over the alphabet of actions. Such an execution is called a timed linearization of T. A single TMSC might have more than one timed linearization if concurrent events on different processes have the same time stamp. For instance, if we swap the first two pairs in the above execution we obtain another execution which respects the time stamps. Finally, observe that not all linearizations are timed linearizations as seen by swapping the last two pairs in the above execution.

Formally, let \(T=\mbox {$(M,t)$}\) be a TMSC over Act with \(M=\mbox {$(E,\leq ,\lambda )$}\). Consider a linearization (E,≤′,λ) of (E,≤,λ) according to which the events of E can be written as a sequence e 1≤′e 2≤′⋯≤′e n . Then (E,≤′,λ,t) is said to be a linearization of TMSC T. If in addition, for all 1≤j<k≤n, we have t(e j )≤t(e k ), then it is said to be a timed linearization. Indeed, this can be seen as a timed word σ over Act by uniquely identifying it with σ=(λ(e 1),t(e 1))…(λ(e n ),t(e n )). We let TW Act denote the set of timed words over Act and \(\mathrm {t\text {-}lin}({T})\subseteq \mathrm {TW}_{\mathit {Act}}\) denote the set of timed linearizations of a TMSC T, seen as a timed word language over Act. As with untimed MSCs, a timed MSC can be faithfully reconstructed from any of its timed linearizations under the FIFO assumption on channels. A TMSC language \(\mathcal {L}\) over Act is a set of TMSCs over Act. Then, \(\mathrm {t\text {-}lin}({\mathcal {L}})\) is the timed word language over Act consisting of all timed linearizations of TMSCs in \(\mathcal {L}\), i.e., \(\mathrm {t\text {-}lin}({\mathcal {L}})=\bigcup\{ \mathrm {t\text {-}lin}({T})\mid T \in \mathcal {L}\}\).
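The following small sketch (hypothetical encoding, not from the paper) checks the two requirements on a timed linearization: the chosen order must be a linear extension of ≤, and the time stamps must be nondecreasing along it.

```python
def is_timed_linearization(order, leq, t):
    """Check that `order` (a list of events) is a timed linearization of a TMSC.

    `leq` is the set of pairs (e, e') with e <= e' (the partial order) and
    `t` maps each event to its time stamp.  Both encodings are hypothetical."""
    pos = {e: i for i, e in enumerate(order)}
    respects_order = all(pos[e] <= pos[f] for (e, f) in leq)      # linear extension of <=
    respects_time = all(t[order[i]] <= t[order[i + 1]]            # nondecreasing time stamps
                        for i in range(len(order) - 1))
    return respects_order and respects_time
```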

Bounded channel setting

As in the case of untimed MSCs, restricting the channel capacity in TMSCs gives rise to an interesting, more “tractable” subclass of TMSCs. We extend the definition of existential and universal bounds from MSCs to TMSCs.

Definition 3.2

A TMSC \(\mbox {$(M,t)$}\) is called untimed-existentially-B-bounded (∃u-B-bounded) if the MSC M is ∃-B-bounded. Similarly \(\mbox {$(M,t)$}\) is untimed-universally-B-bounded (∀u-B-bounded) if M is ∀-B-bounded.

Note that an existential untimed bound may not be achievable by a timed linearization. For instance, consider the TMSC T′ in Fig. 3. T′ is ∃u-1-bounded as there exists a linearization of the untimed MSC which is 1-bounded, namely, u 1 v 1 u 2 v 2. However, this does not correspond to a timed linearization. In fact, the only timed linearization of T′, namely (u 1,1)(u 2,2)(v 1,3)(v 2,4), is 2-bounded.

Fig. 3  TMSC T′

Message sequence charts with timing constraints

TMSCs (and indeed timed linearizations) essentially capture only the operational/global behavior of distributed systems. In some sense, this is because TMSCs and their timed linearizations are not very different. In other words, by attaching time-stamps to the events of an MSC, we lose much of its partial order information. The only partial-order information retained is between events on different processes that carry the same time-stamp; all remaining events are ordered by the global time-stamping.

A richer partial order behavior can be retained by attaching timing constraints to pairs of events of the MSC. This approach has two other major advantages:

  • Firstly, from a specification point of view, it allows the specifier to decide and enforce constraints between occurrences of events as he chooses.

  • Secondly, a single MSC with timing constraints can describe a whole family of TMSCs (with the same underlying MSC) thus being a much more succinct description of the timed behaviors of a system.

For instance, consider Example 3.1 again where two users interact with a railway booking server. It might be that after granting a User request, the Server waits only for a bounded amount of time for him/her to respond before canceling the request. The family of TMSCs satisfying such a constraint is easier to capture using a scenario with timing constraints as shown in \({\mathfrak {M}}_{1}\) of Fig. 2. The label [0,3] from a 3 to a 4 specifies that User1 must respond to the grant within 3 time units. The label [0,1] on the message from a 3 to u 2 specifies the bounds on the delay of message delivery and so on.

In the above example we can, a priori, define timing constraints between any two distinct but arbitrary events. But is this really what we want? In fact, this might defeat our purpose for introducing timing constraints in MSCs in the first place. For instance, in the TCMSC \({\mathfrak {M}}_{1}\) in Fig. 2, should a specifier be allowed to have a choice of imposing constraints between the first event of User1 and the first event of User2? Or, from an implementation point of view, can a machine that implements such a constraint really be called a distributed machine?

In other words, the vital question is how flexible we want this timing to be, i.e., between which pairs of events we allow constraints. To define this formally, we fix an MSC \(M=\mbox {$(E,\leq ,\lambda )$}\) over Act and define the following relations that will relate the pairs of events on which we wish to impose timing constraints. First, we denote the set of all irreflexive pairs of E by P(E)={(e,e′)∈E×E ∣ e≠e′}. Then,

  • The message relation, which is defined as

    MsgM={(e,e′)∈P(E)∣e< pq e′ for some (p,q)∈Ch}

  • The previous occurrence of an action a∈Act is defined as:

    \(\mathrm {Prev}^{M}_{a}=\{(e,e')\in \mathrm {P}(E)\mid\lambda(e')=a,e'\leq e\ \mbox{and}\ \forall e''\in E, (e'' \leq e \wedge\lambda(e'')=a )\Rightarrow e''\leq e' \}\)

  • The next occurrence of an action a∈Act is defined as:

    \(\mathrm {Next}^{M}_{a}=\{(e,e')\in \mathrm {P}(E)\mid\lambda(e')=a, e\leq e'\ \mbox{and}\ \forall e''\in E, (e \leq e'' \wedge\lambda(e'')=a ) \Rightarrow e'\leq e'' \}\)

Thus, the above relations form a flexible timing formalism: they allow timing constraints between an event and the next or previous occurrence of any action in the MSC, as well as along messages. Let \(\mathcal {I}\) denote the set of all intervals over the real line with rational end-points.
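As an illustration, a direct (and deliberately naive) Python sketch of the relation Prev a is given below; Next a is symmetric. The encoding of the poset as an explicit set of pairs is hypothetical and only meant to mirror the definition.

```python
def prev_a(events, leq, lam, a):
    """Compute Prev_a of an MSC: pairs (e, e') with e != e', lam(e') = a, e' <= e,
    and e' maximal among the a-labeled events below e.

    `leq` is the reflexive partial order <= given as a set of pairs (x, y) with
    x <= y, and `lam` is the labeling function.  Hypothetical encoding."""
    result = set()
    for e in events:
        # all a-labeled events below (or equal to) e
        cands = [f for f in events if (f, e) in leq and lam(f) == a]
        for ep in cands:
            # ep must dominate every other a-labeled event below e
            if ep != e and all(f == ep or (f, ep) in leq for f in cands):
                result.add((e, ep))
    return result
```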

Definition 3.3

Let \(\mathcal {S}\subseteq \mathcal {I}\) be a set of intervals. An MSC with timing constraints or a time-constrained MSC (denoted TCMSC) over \((\mathit {Act},\mathcal {S})\) is a pair \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) where \(M=\mbox {$(E,\leq ,\lambda )$}\) is an MSC over Act and τ is a partial map from the set of irreflexive pairs of events P(E) to the set of intervals \(\mathcal {S}\) such that \(\mathrm {dom}(\tau)\subseteq \mathrm {Msg}^{M}\cup \bigcup_{a\in \mathit {Act}} (\mathrm {Next}^{M}_{a}\cup \mathrm {Prev}^{M}_{a})\).

With the above definition, TCMSCs can be considered as abstractions of TMSCs and timed words. Here and for the rest of the paper, we let \(\mathcal {S}\subseteq \mathcal {I}\) be some fixed set of intervals. Also, when \(\mathcal {S}=\mathcal {I}\), i.e., if \({\mathfrak {M}}\) is a TCMSC over \((\mathit {Act},\mathcal {I})\), we ignore the latter component and say that \({\mathfrak {M}}\) is a TCMSC over Act. As usual, when the set of actions is clear from context, we may ignore Act as well. This notion of MSCs with interval constraints on arbitrary pairs of events is in fact similar to the approach adopted by Alur et al. [4]. Thus we can use their MSC analysis tool to check consistency of the timing constraints in a single TCMSC.

We remark here that the relations Prev a , Msg and Next a are in fact expressively incomparable and we cannot always subsume or replace one by another. We illustrate this with the following example. Consider the TCMSC \({\mathfrak {M}}_{2}\) from Fig. 4. We have abstracted away the message contents \(\mathcal {M}\) for simplicity. \({\mathfrak {M}}_{2}\) has timing constraints defined between the following pairs of events: The pair (w 3,v 3) is a message constraint. The pair (w 1,u 1) is related by Prev p!q . However, note that this constraint can also be seen as one between (u 1,w 1), which are related by Next r?q . In such cases, our definition requires us to use the same interval as a constraint. The third pair of events, (u 3,v 4), is related by the Next q?p relation; this pair is not related by Msg or by Prev a for any a∈Act. Similarly, the pair of events (u 3,v 5) can be timed only by a message constraint, and timing is allowed between the pair (v 5,u 4) only because of the Prev p!q relation. Thus Prev a , Msg, and Next a are expressively incomparable.

Fig. 4  A TCMSC \({\mathfrak {M}}_{2}\)

Definition 3.4

Let \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) be a TCMSC over Act with \(M=\mbox {$(E,\leq ,\lambda )$}\). A TMSC \(T=\mbox {$(M,t)$}\) is said to realize \({\mathfrak {M}}\) if for all (e 1,e 2)∈dom(τ) we have |t(e 2)−t(e 1)|∈τ(e 1,e 2). The set of all TMSCs that realize \({\mathfrak {M}}\) is denoted \({ \mathcal {L}_{\mathit {time}}}({\mathfrak {M}})\).

For instance, the TMSC T of Fig. 2 realizes the TCMSC \({\mathfrak {M}}_{1}\) from Fig. 2. Now, if all possible allowed pairs have explicit timing constraints defined, then we call such a TCMSC maximally defined. That is, a TCMSC \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) over \((\mathit {Act},\mathcal {S})\) is said to be maximally defined if \(\mathrm {dom}(\tau)=\mathrm {Msg}^{M} \cup \bigcup_{a\in \mathit {Act}} (\mathrm {Next}^{M}_{a}\cup \mathrm {Prev}^{M}_{a})\).
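Definition 3.4 is easy to operationalize; the sketch below (hypothetical interval encoding, not part of the paper) checks whether a time-stamping realizes a constraint map τ.

```python
def realizes(t, tau):
    """Definition 3.4: a TMSC with time-stamping `t` realizes a TCMSC with
    constraint map `tau` if |t(e2) - t(e1)| lies in tau(e1, e2) for every
    constrained pair.  Intervals are (lo, hi, lo_closed, hi_closed) tuples
    with hi=None standing for an unbounded right end; hypothetical encoding."""
    def contains(interval, x):
        lo, hi, lo_closed, hi_closed = interval
        above = x > lo or (lo_closed and x == lo)
        below = hi is None or x < hi or (hi_closed and x == hi)
        return above and below
    return all(contains(interval, abs(t[e2] - t[e1]))
               for (e1, e2), interval in tau.items())
```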

4 Event clock message passing automata

In this section we introduce our machine model which implements TMSCs as well as TCMSCs. We begin by fixing a formal set TC of symbols as follows:

$$\mathrm {TC}= \{\mathrm {Msg}\}\ \cup \bigcup_{a\in \mathit {Act}}\{\mathrm {Prev}_{a},\ \mathrm {Next}_{a}\} \quad (1)$$

When interpreted over a TCMSC \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) over Act, each symbol α∈TC is interpreted as the relation α M defined in the previous section. We set TC M =⋃ α∈TC α M and we let \([\mathrm {TC}\dashrightarrow \mathcal {I}]\) denote the set of partial maps from the set of symbols TC to the set \(\mathcal {I}\).

Definition 4.1

An event clock message passing automaton (ECMPA) is a tuple \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\), where

  • Δ is a finite set of auxiliary messages;

  • Act is the alphabet;

  • for each pProc, the component \(\mathcal {A}_{p}\) is a structure (S p ,→ p ,ι p ) where

    • S p is a finite set of p-local states,

    • ι p S p is the p-local initial state,

    • → p is a finite subset of \((S_{p}\times \mathit {Act}_{p}\times [\mathrm {TC}\dashrightarrow \mathcal {I}]\times\varDelta\times S_{p})\);

  • F⊆∏ p∈Proc S p is a set of global final states.

With the above definition, ECMPA are extensions of both event clock automata (over timed words) [3] and message passing automata (over MSCs) [9].

Example 4.1

A simple example of an ECMPA is shown in Fig. 5. It describes the interaction between a user and a server, with the server trying to authenticate the user. The user component is denoted \(\mathcal {A}_{p}\) and the server \(\mathcal {A}_{q}\). The set of actions Act consists of: p!q(pswd) (user sends password to server), q?p(pswd) (server receives password), q!p(correct) and q!p(wrong) (server sends the appropriate message to the user), and p?q(correct), p?q(wrong) (user receives the message from the server). Thus, in the above automaton, the user starts by sending its password. If the password received by the server is correct, it acknowledges this and goes to its final state. Otherwise, it sends the message wrong, which must reach the user within 4 time units, and waits in state t 1. If the user receives correct it goes to its final state. If not, it must receive wrong within a bounded amount of time of last sending its password, in which case the user resends the password. Otherwise, the current interaction is considered void and the run is rejected.

Fig. 5  An ECMPA \(\mathcal {A}_{1}\)

In Fig. 5 the auxiliary data set Δ is a singleton and is not depicted for the sake of simplicity. In general, ECMPAs allow every message to be tagged with auxiliary data from a finite set Δ. The ability to convey this finite amount of extra information is quite useful, even in the (untimed) case of message passing automata [9], and it increases the expressive power significantly. For further discussion on message passing automata with and without auxiliary data we refer to the survey [21].

Semantics over TCMSCs

We define the run of an ECMPA \(\mathcal {A}\) over a TCMSC \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) over Act, where \(M=\mbox {$(E,\leq ,\lambda )$}\). Consider a map r:E→⋃ p∈Proc S p labeling each event of process p with a local state from S p . Define r − :E→⋃ p∈Proc S p as follows: for an event e∈E p , if there is another event e′∈E p such that \(e' \mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp} e\), then r − (e)=r(e′), and r − (e)=ι p otherwise. Then r is a run of \(\mathcal {A}\) on \({\mathfrak {M}}\) if, for all e,e′∈E with e< pq e′ for some channel (p,q)∈Ch, there are \(g,g'\in [\mathrm {TC}\dashrightarrow \mathcal {I}]\) and an auxiliary message d∈Δ such that,

$$\bigl(r^{-}(e),\lambda (e),g,d,r(e)\bigr)\in {\rightarrow _{p}} \quad\mbox{and}\quad \bigl(r^{-}\bigl(e'\bigr),\lambda \bigl(e'\bigr),g',d,r\bigl(e'\bigr)\bigr)\in {\rightarrow _{q}} \quad (2)$$
$$\forall\alpha\in \mathrm {dom}(g),\ \exists \widetilde{e}\in E \mbox{ such that } (e,\widetilde{e}\,)\in\alpha^{M}\cap \mathrm {dom}(\tau)\ \mbox{and}\ \tau(e,\widetilde{e}\,)\subseteq g(\alpha) \quad (3)$$
$$\forall\alpha\in \mathrm {dom}\bigl(g'\bigr),\ \exists \widetilde{e}\,'\in E \mbox{ such that } \bigl(e',\widetilde{e}\,'\bigr)\in\alpha^{M}\cap \mathrm {dom}(\tau)\ \mbox{and}\ \tau\bigl(e',\widetilde{e}\,'\bigr)\subseteq g'(\alpha) \quad (4)$$

Note that, given e,e′ as above and α∈TC, the events \(\widetilde{e}\) and \(\widetilde{e}\,'\) are uniquely defined since α M is a partial function. We define s p =r(e p ), where e p is the maximal event on process p. If there are no events on process p, we set s p =ι p . Then the run r is successful if (s p ) p∈Proc ∈F. A TCMSC over Act is accepted by an ECMPA \(\mathcal {A}\) if it admits a successful run. We denote by \(\mathcal {L}_{\mathit {TC}}(\mathcal {A})\) the set of all TCMSCs over Act that are accepted by \(\mathcal {A}\). As an example, we may observe that the TCMSC \({\mathfrak {M}}_{3}\) from Fig. 6 is accepted by \(\mathcal {A}_{1}\), the ECMPA shown in Fig. 5.
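To illustrate the guard conditions, the sketch below checks condition (3) as reconstructed above for a single event e with guard g: every symbol in dom(g) must be matched by a constrained pair whose interval is contained in the guard interval. The interval encoding and helper names are hypothetical.

```python
def interval_subset(i1, i2):
    """i1 ⊆ i2 for intervals (lo, hi, lo_closed, hi_closed); hi=None means an
    unbounded right end.  Hypothetical encoding."""
    lo1, hi1, lc1, hc1 = i1
    lo2, hi2, lc2, hc2 = i2
    left_ok = lo1 > lo2 or (lo1 == lo2 and (lc2 or not lc1))
    right_ok = (hi2 is None) or (hi1 is not None and
                (hi1 < hi2 or (hi1 == hi2 and (hc2 or not hc1))))
    return left_ok and right_ok

def guard_satisfied(e, g, alpha_rel, tau):
    """Check condition (3) for event e: for each alpha in dom(g) there is a
    (unique) event ee with (e, ee) in alpha^M, the pair carries a constraint,
    and tau(e, ee) is included in g(alpha).  `alpha_rel[alpha]` maps e to that
    unique partner event, if it exists; hypothetical encoding."""
    for alpha, guard_interval in g.items():
        ee = alpha_rel[alpha].get(e)
        if ee is None or (e, ee) not in tau:
            return False
        if not interval_subset(tau[(e, ee)], guard_interval):
            return False
    return True
```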

Fig. 6  TCMSC \({\mathfrak {M}}_{3}\) and TMSC T 3

Semantics over TMSCs

The semantics of ECMPAs over TMSCs is obtained similarly. The definition of a run of \(\mathcal {A}\) over a TMSC \(T=\mbox {$(M,t)$}\) is the same as over a TCMSC \({\mathfrak {M}}=\mbox {$(M,\tau )$}\), except that conditions (3) and (4) are respectively replaced by (5) and (6) below:

$$\forall\alpha\in \mathrm {dom}(g),\ \exists \widetilde{e}\in E \mbox{ such that } (e,\widetilde{e}\,)\in\alpha^{M}\ \mbox{and}\ \bigl|t(\widetilde{e}\,)-t(e)\bigr|\in g(\alpha) \quad (5)$$
$$\forall\alpha\in \mathrm {dom}\bigl(g'\bigr),\ \exists \widetilde{e}\,'\in E \mbox{ such that } \bigl(e',\widetilde{e}\,'\bigr)\in\alpha^{M}\ \mbox{and}\ \bigl|t\bigl(\widetilde{e}\,'\bigr)-t\bigl(e'\bigr)\bigr|\in g'(\alpha) \quad (6)$$

Then, with the notion of acceptance as above, we can denote the set of all TMSCs accepted by a given ECMPA \(\mathcal {A}\) as \({ \mathcal {L}_{\mathit {time}}}(\mathcal {A})\). Again, as an example, the TMSC T 3 from Fig. 6 is accepted by the ECMPA \(\mathcal {A}_{1}\) shown in Fig. 5. We may notice here that T 3 realizes \({\mathfrak {M}}_{3}\). This is not a coincidence. In fact, we can make the following general observation.

Lemma 4.1

Suppose a TMSC T over Act realizes a TCMSC \({\mathfrak {M}}\) over Act. Then for an ECMPA \(\mathcal {A}\), \({\mathfrak {M}}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\) implies \(T \in {\mathcal {L}_{\mathit {time}}}(\mathcal {A})\).

Proof

The lemma follows directly from the definitions. Let \(T=\mbox {$(M,t)$}\) be the TMSC over Act with \(M=\mbox {$(E,\leq ,\lambda )$}\) and \({\mathfrak {M}}=\mbox {$(M,\tau )$}\). Also let \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\) from Definition 4.1. Then, any run r of \(\mathcal {A}\) on \({\mathfrak {M}}\) satisfies conditions (2)–(4). Now, since T realizes \({\mathfrak {M}}\), for all (e 1,e 2)∈dom(τ), we have |t(e 1)−t(e 2)|∈τ(e 1,e 2). This along with (3), implies that for all α∈dom(g) there exists \(\widetilde {e}\in E\) such that \((e,\widetilde{e}\,)\in\alpha^{M}\) and \(|t(e)-t(\widetilde{e}\,)|\in \tau(e,\widetilde{e}\,)\subseteq g(\alpha)\). Thus, (5) holds. Similarly, from (4) we obtain (6) and conclude that r is a run of \(\mathcal {A}\) on T. As every run of \(\mathcal {A}\) on \({\mathfrak {M}}\) is also a run of \(\mathcal {A}\) on T and the acceptance criterion is the same in both cases we conclude that if \({\mathfrak {M}}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\) then \(T \in { \mathcal {L}_{\mathit {time}}}(\mathcal {A})\). □

5 Timed monadic second-order logic

We introduce the logical framework for timed partial orders, which will serve as our specification formalism. As usual, we start with a supply of individual variables x,y,…, which range over events, and set variables X,Y,…, which range over sets of events of the timed MSC. We generalize the usual MSO logic by using (in addition to the unary predicates P a (x) for a∈Act) timing predicates of the form δ α (x)∈I for a variable x, α∈TC, and \(I\in \mathcal {I}\). Here TC is the same set of symbols defined by (1); its elements are interpreted as relations over the events. Again, as for MSO over MSCs, the logic depends on a set ℜ of (binary) relation symbols, which determines how the partial order relation is accessed. Thus,

Definition 5.1

The set TMSO(Act,ℜ) of all timed monadic second-order logic formulae over Act with relational symbols from ℜ, is generated inductively using the grammar:

$$\varphi::= {P_a(x)} \mid x\in X \mid x=y \mid R(x,y) \mid \delta _{\alpha}(x)\in {I} \mid\neg\varphi\mid\varphi\vee\varphi\mid \exists x\, \varphi\mid\exists X\, \varphi $$

where x,y are individual variables, X is a set variable, a∈Act, R∈ℜ, α∈TC and \(I\in \mathcal {I}\).

The existential fragment of TMSO(Act,ℜ), denoted ETMSO(Act,ℜ), consists of all formulas ∃X 1…∃X n φ such that φ does not contain any set quantifier. Though we have defined the logic above for arbitrary sets of relational symbols ℜ, we are in fact interested only in two restricted sets, namely ℜ={≤} and \(\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}}=\{\mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp}\mid p\in \mathit {Proc}\}\cup \{<_{{pq}}\mid p\neq q\}\). Then a formula φ from any of these logics, i.e., TMSO(Act,ℜ), \(\mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\), ETMSO(Act,ℜ) or \(\mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) can be interpreted over TMSCs as well as over TCMSCs as follows. We write TMSO to denote \(\mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\leq }\cup \mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\), i.e., the union of all formulae from these logics, when there is no scope for confusion.

Now, we will give the semantics for this logic over both TMSCs and TCMSCs. Given an MSC M, let μ be an interpretation mapping first order variables to elements in E and second order variables to subsets of E. Then, for φ∈ TMSO and a TMSC \(T=\mbox {$(M,t)$}\), we define the satisfaction relation T,μ⊨φ by induction on the structure of φ. For all operators, except the timing predicate, this is given as usual. For instance, the unary predicate P a (x) expresses that μ(x) is labeled with a∈Act, i.e., λ(μ(x))=a. The only novelty is the timing predicate, for which we define the satisfaction relation as follows. Intuitively, by δ α (x)∈I we mean that there is an event e∈E such that μ(x) and e are related by α M and the time difference between the events μ(x) and e is in I. Formally for each α∈TC we define,

$$T,\mu \models \delta _{\alpha}(x)\in I \quad\mbox{iff}\quad \exists e\in E \mbox{ such that } \bigl(\mu(x),e\bigr)\in\alpha^{M}\ \mbox{and}\ \bigl|t(e)-t\bigl(\mu(x)\bigr)\bigr|\in I \quad (7)$$

Then, as usual, for sentences φ (i.e., formulae without free variables) we write T⊨φ instead of T,μ⊨φ and denote by \(\mathcal {L}_{\mathit{time}}(\varphi)\) the set of all TMSCs T over Act such that T⊨φ. Turning to TCMSCs, we can give a formula φ∈ TMSO(Act) a natural semantics over \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) exactly as done for TMSCs above. The only noteworthy difference is in the timing predicate δ α (x)∈I, where for any α∈TC, we define:

$${\mathfrak {M}},\mu \models \delta _{\alpha}(x)\in I \quad\mbox{iff}\quad \exists e\in E \mbox{ such that } \bigl(\mu(x),e\bigr)\in\alpha^{M}\cap \mathrm {dom}(\tau)\ \mbox{and}\ \tau\bigl(\mu(x),e\bigr)\subseteq I \quad (8)$$

The set of TCMSCs over Act that satisfy a TMSO sentence φ is denoted by \(\mathcal {L}_{\mathit {TC}}(\varphi)\).

Example 5.1

Consider again the interaction scenario in Example 4.1, where a server is authenticating a user. Suppose we wish to specify that every Message wrong sent by the server is conveyed within 4 time units. This can be written as the following sentence in our logic:

$$\forall x\ \bigl(P_{{q}!{p}(\mathit{wrong})}(x)\ \Rightarrow\ \delta _{\mathrm {Msg}}(x)\in[0,4]\bigr) \quad (9)$$

Similarly, we may require that whenever Message wrong is received by the user, the time elapsed since it last sent its password is within 3 and 7 time units. This can be expressed as:

$$\forall x\ \bigl(P_{{p}?{q}(\mathit{wrong})}(x)\ \Rightarrow\ \delta _{\mathrm {Prev}_{{p}!{q}(\mathit{pswd})}}(x)\in[3,7]\bigr) \quad (10)$$

We observe immediately that both these sentences (9) and (10) are satisfied by TCMSC \({\mathfrak {M}}_{3}\) and TMSC T 3 from Fig. 6.

The following proposition relates the expressiveness of TMSO and ETMSO under the different signatures that we have introduced above, namely ℜ and \(\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}}\).

Proposition 5.1

  1. For all φ∈TMSO(Act,ℜ) there exists \(\psi\in \mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) such that \(\mathcal {L}_{\mathit{time}}(\varphi )= \mathcal {L}_{\mathit{time}}(\psi)\).

  2. Let φ be a \((\mathrm{E})\mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) formula and B∈ℕ such that \(\mathcal {L}_{\mathit{time}}(\varphi)\) is a set of ∃u-B-bounded TMSCs over Act. Then, there exists ψ∈(E)TMSO(Act,ℜ) such that \(\mathcal {L}_{\mathit{time}}(\varphi )= \mathcal {L}_{\mathit{time}}(\psi)\).

  3. There exists a formula γ B ∈TMSO(Act,ℜ) (respectively, in MSO(Act,ℜ)) such that \(\mathcal {L}_{\mathit{time}}(\gamma_{B})\) is the set of all ∃u-B-bounded TMSCs (respectively, all ∃-B-bounded MSCs) over Act.

The first part of the proposition is a generic result which works as in the untimed case. It follows since the partial order relation can be recovered from the immediate successor and message relations in TMSO. Note however that this is not true if we restrict to ETMSO. Indeed, even in the untimed case, it is an open question whether the transitive closure relation, i.e., x≤y, can be expressed as an \(\mathrm{EMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) formula over the set of all MSCs over Act. The second part of the proposition says that the converse is true when restricted to the B-bounded setting. The third part says that in both the timed and untimed settings, the set of all existentially(-untimed)-B-bounded MSCs over Act can be expressed as a formula in the (timed) MSO over (Act,ℜ). In the untimed setting, the second part is proved in [17, Proposition 6.2], where the proof uses the (very difficult) construction of a message passing automaton accepting all ∃-B-bounded MSCs. The third part then follows as an easy consequence of this construction. However, it turns out that both these results can be proved along the same lines as [17], but without referring to the expensive automaton construction. This consequently reduces the complexity of these constructions. For this reason, and for the sake of completeness (in the timed setting), we provide proofs of both these statements below.

Proof of Proposition 5.1(2)

The first two steps are exactly the same as in [17, Proposition 6.2]. First, notice that ⋖ pp can easily be expressed with a first-order formula over (Act,ℜ), hence the difficulty is to express < pq for channels (p,q)∈Ch. We consider set variables X 0,…,X B−1 that are not used in φ. These variables count the number of p!q (resp. q?p) actions modulo B. This is ensured by a formula φ 0 which expresses that each X n contains precisely the set of events e such that for some channel (p,q)∈Ch, either e is the n-th send from p to q modulo B, or e is the n-th receive on q from p modulo B.

Second, for each channel (p,q)∈Ch, we define the abbreviation \(x<'_{pq} y\) by

$$\bigvee_{n<B} \left( \begin{array}{l} x\in X_n \wedge\lambda(x)\in p!q \\ {}\wedge y\in X_n \wedge\lambda(y)\in q?p \wedge x\leq y \\ {}\wedge \forall z (z\in X_n \wedge\lambda(z)\in q?p \wedge x\leq z)\Rightarrow y\leq z \end{array} \right) $$

This expresses that for this channel, x is a send event and y is the least receive event above x which has the same number modulo B. Note that, for any send event e from p to q, there exists a (unique) event f such that \(e<'_{pq} f\). Moreover, f≤g where g is the matching receive, i.e., is such that e< pq g. This follows from the FIFO assumption which implies that e and g have the same number modulo B. Thus \(<'_{pq}\) can be seen as a total function from the set of send events to the set of receive events. Then, we define the formula φ 1 as \(\bigwedge_{(p,q)\in \mathit {Ch}}\forall x,x',y\; (x<'_{pq} y\wedge x'<'_{pq} y)\Rightarrow x=x'\), saying that each \(<'_{pq}\) is an injective function (and thus a bijection from sends to receives).

Now, denoting α=∃X 0X B−1 φ 0φ 1, we make the following crucial claims:

Claim

For a TMSC \(T=\mbox {$(M,t)$}\) over Act with \(M=\mbox {$(E,\leq ,\lambda )$}\),

  (1) if T⊨α, then the relations < pq and \(<'_{pq}\) coincide on T, and

  (2) if T is ∃u-B-bounded, then T⊨α.

Proof

(1) Let e∈E. If e is not a send event from p to q, then e≮ pq f and \(e\nless'_{pq} f\) for all f∈E. So we assume in the following that e is a send event from p to q. Let f,g∈E be the unique events with \(e<'_{pq} f\) and e< pq g. We have already seen that f≤g. Assume towards a contradiction that f<g. By induction, we may assume that for all send events e′<e from p to q, if \(e'<'_{pq} f'\) and e′< pq g′ then f′=g′. Event f is a receive on q from p, hence there is a unique event e′∈E such that e′< pq f. Using the FIFO assumption and f<g we get e′<e. By induction we obtain \(e'<'_{pq} f\), a contradiction with φ 1.

(2) We will use an alternate characterization of ∃u-bounded TMSCs obtained by lifting from the untimed setting in [22]. First, we define a relation rev which associates receive events to some send events. Formally, f rev e′ if there is a channel (p,q)∈Ch and an event e∈E such that e< pq f, λ(e′)∈p!q and |{e″∈E ∣ λ(e″)∈p!q and e<e″≤e′}|=B.

Now, [22] states that an MSC M is ∃-B-bounded iff the relation ≺ B =<∪rev is acyclic. Recalling that a TMSC is ∃u-B-bounded iff its underlying MSC is ∃-B-bounded, we conclude that the above characterization holds for TMSCs as well.


Next, observe that in any TMSC T over Act, by counting for each channel the send (resp. receive) events modulo B, we get unique sets X i ⊆E for i<B such that T,(X i ) i<B ⊨φ 0. Assume that \(T,(X_{i})_{i<B}\not\models\varphi_{1}\). Then, there exists a channel (p,q)∈Ch and send events e<e′ from p to q such that \(e<'_{pq} f\) and \(e'<'_{pq} f\). Let \(\tilde{f}\) be such that \(e<_{pq}\tilde{f}\). Since \(e<'_{pq} f\) we get \(f\leq\tilde{f}\). By definition of \(<'_{pq}\), the events e,e′,f have the same number modulo B. Since e<e′, it follows that there exists \(\tilde{e}\leq e'\) such that \(\tilde{f} \mathrel {\mathit {rev}}\tilde{e}\). Therefore, we have a ≺ B cycle \(\tilde{e}\leq e'<'_{pq} f\leq\tilde{f} \mathrel {\mathit {rev}}\tilde{e}\) (observing that \(e<'_{pq} f\) implies e<f), as depicted in the adjoining figure.  □

Now, we are in a position to complete the proof of Part (2) of this proposition. Starting from a formula \(\varphi\in \mathrm{(E)}\mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\), let φ′ be the formula obtained by replacing every occurrence of x< pq y by \(x<'_{pq} y\). Now, consider the formula ψ=∃(X i ) i<B φ 0∧φ 1∧φ′. Clearly, it is an \(\mathrm{(E)}\mathrm {TMSO}(\mathit {Act},\mathfrak {R}_{\leq })\) formula. We claim that \(\mathcal {L}_{\mathit{time}}(\varphi)= \mathcal {L}_{\mathit{time}}(\psi)\). If T⊨φ, then T is ∃u-B-bounded over Act and therefore T⊨α by Claim (2) above. But since T⊨α, by Claim (1), \(<'_{pq}\) and < pq coincide on T, which implies that T,(X i ) i<B ⊨φ′. Thus combining the two, we obtain T⊨ψ. Conversely, assume that T⊨ψ. Since T⊨α, we know that \(<'_{pq}\) and < pq coincide on T. Hence T,(X i ) i<B ⊨φ′ implies T⊨φ. □

Proof of Proposition 5.1(3)

Now, we prove the final part of the proposition in the timed case (the untimed setting follows in exactly the same way). We use freely the formulas from the proof above. First, we let y rev′ z stand for

$$\bigvee_{(p,q)\in \mathit {Ch}} \exists x\; x<z \wedge x<'_{pq} y \wedge\lambda(z)\in p!q \wedge\bigvee_{n<B} x,z\in X_n $$

If T⊨α then Claim (1) implies that \(<'_{pq}\) and < pq coincide, and it follows easily that y rev z implies y rev′ z and that y rev′ z′ implies y rev z for some z≤z′. Therefore, there is a ≺ B cycle iff there is a \(\prec'_{B}\) cycle where \({\prec'_{B}}={<}\cup{ \mathrel {\mathit {rev}}'}\). We define below a new formula φ 2 to check, in the context of α, the existence of a \(\prec'_{B}\) cycle.

In fact, it suffices to test for short cycles. Indeed, let \(x_{0}\prec'_{B} x_{1}\prec'_{B} \cdots\prec'_{B} x_{k} \prec'_{B} x_{0}\) be a cycle with k>0 and x i x j for 0≤i<jk. If two events x i and x j are on the same process for some 0<i+1<j<k then either x i <x j and we have a shorter cycle by removing x i+1,…,x j−1; or x j <x i and x i ,…,x j is a shorter cycle. Hence, it suffices to test for the existence of a \(\prec'_{B}\) cycle of length bounded by 2|Proc|+1, which can easily be expressed with a first-order formula φ 2 over (Act,ℜ).

Finally, let γ B =∃X 0X B−1 φ 0φ 1∧¬φ 2 which is in TMSO(Act,ℜ). We can easily check that γ B defines the set of all ∃u-B-bounded TMSCs. □
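The acyclicity characterization of [22] used in the two proofs above can be turned into a simple procedure; the sketch below (hypothetical encoding of MSCs, not part of the paper) builds the relation ≺ B = < ∪ rev and checks it for cycles.

```python
def exists_B_bounded(events, less, lam, match, B):
    """Characterization from [22]: an MSC is ∃-B-bounded iff ≺_B = < ∪ rev is
    acyclic.  `less` is the set of pairs (e, f) with e < f, `lam(e)` returns
    ('!', p, q) or ('?', p, q) (the action type of e and its channel), and
    `match[e]` is the receive matching a send e.  Hypothetical encoding."""
    edges = set(less)
    # group the sends of each channel in their order along the sending process
    channels = {}
    for e in events:
        kind, p, q = lam(e)
        if kind == '!':
            channels.setdefault((p, q), []).append(e)
    for sends in channels.values():
        sends.sort(key=lambda e: sum(1 for f in sends if (f, e) in less))
        for i, e in enumerate(sends):
            if i + B < len(sends):
                # rev edge: the receive matching e points to the B-th later send
                edges.add((match[e], sends[i + B]))
    # acyclicity check (Kahn's algorithm)
    indeg = {e: 0 for e in events}
    for (_, f) in edges:
        indeg[f] += 1
    queue = [e for e in events if indeg[e] == 0]
    seen = 0
    while queue:
        e = queue.pop()
        seen += 1
        for (x, f) in edges:
            if x == e:
                indeg[f] -= 1
                if indeg[f] == 0:
                    queue.append(f)
    return seen == len(events)
```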

In the above sections, we have introduced our models to describe as well as implement timed and distributed scenarios. In the next sections, we state and prove our main results.

6 Equivalence between ECMPA and TMSO logic over TMSCs

We now state our main results, showing an effective equivalence between ECMPAs and TMSO over TMSCs, which we will prove in this section.

Theorem 6.1

Let \(\mathcal {L}\) be a set of TMSCs over Act. Then, the following are equivalent:

  1. There is an ECMPA \(\mathcal {A}\) such that \(\mathcal {L}_{\mathit{time}}(\mathcal {A})= \mathcal {L}\).

  2. There is \(\varphi\in \mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) such that \(\mathcal {L}_{\mathit{time}}(\varphi) = \mathcal {L}\).

Theorem 6.2

Let B∈ℕ>0 and let \(\mathcal {L}\) be a set ofu-B-bounded TMSCs over Act. Then, the following are equivalent:

  1. There is an ECMPA \(\mathcal {A}\) such that \(\mathcal {L}_{\mathit{time}}(\mathcal {A})= \mathcal {L}\).

  2. There is φ∈TMSO(Act,ℜ) such that \(\mathcal {L}_{\mathit{time}}(\varphi) = \mathcal {L}\).

These equivalences are effective in the sense that we can explicitly construct the (E)TMSO formula from the ECMPA and vice versa.

The construction of an (E)TMSO formula from an ECMPA follows similar constructions applied, for example, to finite and asynchronous automata. In addition, we have to cope with the timing predicate. Assume that \(g:\mathrm {TC}\dashrightarrow \mathcal {I}\) is such a guard occurring on a local transition of the given ECMPA. To ensure that the timing constraints that come along with g are satisfied, we use the formula ⋀ α∈dom(g) δ α (x)∈g(α).

The difficult part is the construction of an ECMPA from an (E)TMSO formula. The basic idea is to reduce this to an analogous untimed case, which has also been applied in the settings of words and traces [14, 15]. Usually, the untimed formalisms need to be parameterized by a finite alphabet, so that we can speak of structures whose labelings are extended with this alphabet. Hence, in our framework, we need to find a finite abstraction of the infinite set of possible time stamps. For this, we move from TMSCs to TCMSCs over a finite alphabet, using Lemma 6.2, which strengthens Lemma 4.1, and Lemma 6.3, which is the corresponding result for TMSO. This allows us to establish a translation of (E)TMSO formulas into ECMPAs.

6.1 From TMSCs to TCMSCs

TCMSCs are abstractions of TMSCs. Thus, if a TCMSC exhibits a property, the TMSCs realizing it should also do so. This is illustrated in Lemma 4.1, where the property is having a run on an automaton. In this section, we are interested in the converse question. In other words, if a TMSC exhibits a property, when can we say that a TCMSC that it realizes also exhibits the same property? For this, given a TMSC, we derive a canonical representative TCMSC using intervals from a specific set which depends on the property. Then it turns out that this representative exhibits the property iff the TMSC realizing it exhibits the property.

We formalize these ideas now. We begin by introducing the notion of a proper interval set from [14], which will play an important role in what follows.

Definition 6.1

A set of intervals \(\mathcal {S}\subseteq \mathcal {I}\) is said to be proper if it forms a finite partition of ℝ≥0. An interval set \(\mathcal {S}\) is said to refine another interval set \(\mathcal {S}'\) if every interval \(I'\in \mathcal {S}'\) is the union of some collection of intervals of \(\mathcal {S}\).

Example 6.1

Consider the set of intervals \(\mathcal {S}_{1}=\{[0,4],[3,7]\}\). Then, we may observe that the interval set \(\mathcal {S}_{2}=\{[0,3),[3,4],(4,7]\}\) refines \(\mathcal {S}_{1}\) and if we add the interval (7,∞) to \(\mathcal {S}_{2}\) we obtain a proper interval set \(\mathcal {S}_{3}\) that refines \(\mathcal {S}_{1}\).

Now, observe that if \(\mathcal {S}\) is a proper interval set which refines another interval set \(\mathcal {S}'\), then for all \(I\in \mathcal {S}\) and \(I'\in \mathcal {S}'\), we have, either II′ or II′=∅. Also, we have,

Proposition 6.1

For any finite interval set, there exists a proper interval set that refines it.

Proof

Let \(\mathcal {S}\subseteq \mathcal {I}\) be a finite interval set. We define a canonical proper interval set of \(\mathcal {S}\), denoted \(\mathrm {prop}(\mathcal {S})\), as follows. Let R be the set of interval bounds that appear in \(\mathcal {S}\) and are different from 0 and ∞. If R is empty, we define \(\mathrm{prop}(\mathcal {S})=\{[0,\infty)\}\). Otherwise, we write R=(t 1,…,t n ) as a sequence arranged in increasing order t 1<⋯<t n and define \(\mathrm{prop}(\mathcal {S})=\{[0,0],(0,t_{1}),[t_{1},t_{1}],(t_{1},t_{2}),\ldots ,[t_{n},t_{n}],(t_{n},\infty)\}\). With this definition it follows that \(\mathrm {prop}(\mathcal {S})\) is a proper interval set and that \(\mathrm {prop}(\mathcal {S})\) refines \(\mathcal {S}\). □
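The construction of \(\mathrm {prop}(\mathcal {S})\) in this proof is easily mechanized; a minimal Python sketch follows, using a hypothetical tuple encoding of intervals (lower bound, upper bound, and two closedness flags, with None standing for ∞).

```python
from fractions import Fraction

def proper_refinement(intervals):
    """Build prop(S) as in the proof of Proposition 6.1: collect the finite,
    nonzero endpoints t1 < ... < tn of S and return
    [0,0], (0,t1), [t1,t1], (t1,t2), ..., [tn,tn], (tn,oo).
    Intervals are (lo, hi, lo_closed, hi_closed) with hi=None for oo."""
    bounds = sorted({b for (lo, hi, _, _) in intervals
                     for b in (lo, hi) if b is not None and b != 0})
    if not bounds:
        return [(Fraction(0), None, True, False)]           # just [0, oo)
    result = [(Fraction(0), Fraction(0), True, True)]        # [0, 0]
    prev = Fraction(0)
    for t in bounds:
        t = Fraction(t)
        result.append((prev, t, False, False))               # (prev, t)
        result.append((t, t, True, True))                    # [t, t]
        prev = t
    result.append((prev, None, False, False))                # (tn, oo)
    return result

# prop({[0,4], [3,7]}) = {[0,0],(0,3),[3,3],(3,4),[4,4],(4,7),[7,7],(7,oo)}
print(proper_refinement([(0, 4, True, True), (3, 7, True, True)]))
```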

Now we can show that for a proper interval set \(\mathcal {S}\) and a TMSC T, there is a unique maximally defined TCMSC using intervals only from \(\mathcal {S}\) such that T realizes it. Formally,

Lemma 6.1

Let \(\mathcal {S}\) be a proper interval set and \(T=\mbox {$(M,t)$}\) be a TMSC over Act. Then, there exists a unique TCMSC \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) over \((\mathit {Act},\mathcal {S})\) such that \(\tau:\mathrm{TC}^{M}\rightarrow \mathcal {S}\), T realizes \({\mathfrak {M}}\) and \({\mathfrak {M}}\) is maximally defined. This unique TCMSC is denoted \({\mathfrak {M}}_{T}^{\mathcal {S}}\).

Proof

We first observe that, for each (e,e′)∈TCM, the real number |t(e′)−t(e)| is in a unique interval of \(\mathcal {S}\). Thus, consider the maximally defined TCMSC defined as: \({\mathfrak {M}}_{T}^{\mathcal {S}}= \mbox {$(M,\tau )$}\) where, for any (e,e′)∈TCM, τ(e,e′) is defined to be the unique interval of \(\mathcal {S}\) containing |t(e′)−t(e)|. Then, T realizes \({\mathfrak {M}}_{T}^{\mathcal {S}}\) by definition and the uniqueness follows since \(\mathcal {S}\) is a proper interval set. □
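A sketch of this construction (same hypothetical interval encoding as above, not part of the paper): each pair of TC M is simply mapped to the unique cell of the proper interval set containing the observed delay.

```python
def canonical_tcmsc(tc_pairs, t, proper_set):
    """Lemma 6.1: map each pair (e, e') in TC^M to the unique interval of the
    proper interval set containing |t(e') - t(e)|.  Hypothetical encoding."""
    def contains(interval, x):
        lo, hi, lc, hc = interval
        return (x > lo or (lc and x == lo)) and \
               (hi is None or x < hi or (hc and x == hi))
    tau = {}
    for (e1, e2) in tc_pairs:
        d = abs(t[e2] - t[e1])
        # unique since a proper interval set partitions the non-negative reals
        tau[(e1, e2)] = next(i for i in proper_set if contains(i, d))
    return tau
```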

It turns out that this unique TCMSC is the “canonical representative” for a TMSC that we were searching for. As an example, consider the TMSC T 4 from Fig. 7, which represents a part of the scenario of TMSC T 3 from Fig. 6 (and abstracting away the message contents). Now, let \(\mathcal {S}_{3}\) be the proper interval set defined in Example 6.1. Then, the unique maximally defined TCMSC over \((\mathit {Act},\mathcal {S}_{3})\) which is realized by T 4 is the TCMSC \({\mathfrak {M}}^{\mathcal {S}_{3}}_{T_{4}}\) shown in Fig. 7.

Fig. 7  TMSC T 4 and its representative TCMSC \({\mathfrak {M}}^{\mathcal {S}_{3}}_{T_{4}}\)

Now, given an ECMPA \(\mathcal {A}\), let \(\mathrm {Int}(\mathcal {A})\) denote the finite set of intervals that occur in \(\mathcal {A}\) as guards, and consider any proper interval set \(\mathcal {S}\) that refines \(\mathrm {Int}(\mathcal {A})\). By Proposition 6.1, at least one such set exists, namely \(\mathrm{prop}(\mathrm {Int}(\mathcal {A}))\). Then,

Lemma 6.2

Given a TMSC T, an ECMPA \(\mathcal {A}\) and a proper interval set \(\mathcal {S}\) that refines \(\mathrm {Int}(\mathcal {A})\), we have \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\) iff \({\mathfrak {M}}^{\mathcal {S}}_{T}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\).

Proof

Let \(T=\mbox {$(M,t)$}\) be the TMSC over Act with \(M=\mbox {$(E,\leq ,\lambda )$}\) and let the ECMPA be \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\) with \(\mathcal {A}_{p}= (S_{p},\iota _{p},\rightarrow _{p})\). Now, since T realizes \({\mathfrak {M}}^{\mathcal {S}}_{T}\), by Lemma 4.1, we obtain one direction of the result, \({\mathfrak {M}}^{\mathcal {S}}_{T}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\) implies \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\).

For the reverse direction, assume \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\). Then by definition there is a successful run r of \(\mathcal {A}\) on T. Since r is a run, for all e,e′∈E such that e< pq e′ for (p,q)∈Ch, we find \(g,g'\in [TC \dashrightarrow \mathcal {I}]\) and d∈Δ such that conditions (2), (5) and (6) hold. We show then that r is also a run of \(\mathcal {A}\) on \({\mathfrak {M}}^{\mathcal {S}}_{T}\), for which it is enough to show that conditions (3) and (4) hold. We start from (5), which says that for each α∈dom(g), there is \(\widetilde{e}\in E\) such that \((e,\widetilde{e}\,)\in\alpha^{M}\). Since \({\mathfrak {M}}^{\mathcal {S}}_{T}\) is maximally defined, \(\tau(e,\widetilde{e}\,)\) exists and as T realizes \({\mathfrak {M}}^{\mathcal {S}}_{T}\) we obtain \(|t(\widetilde{e}\,)-t(e)|\in\tau(e,\widetilde{e}\,)\). But again from (5), \(|t(\widetilde{e}\,)-t(e)|\in g(\alpha)\). Thus, we find \(\tau(e,\widetilde{e}\,)\cap g(\alpha)\neq\emptyset\). Now, \(\tau(e,\widetilde{e}\,)\in \mathcal {S}\), \(g(\alpha)\in \mathrm {Int}(\mathcal {A})\) and we know that \(\mathcal {S}\) is a proper interval set that refines \(\mathrm {Int}(\mathcal {A})\), which implies \(\tau(e,\widetilde{e}\,)\subseteq g(\alpha)\), concluding that (3) holds. Similarly (4) can be shown starting from (6). Thus any run of \(\mathcal {A}\) on T is also a run on \({\mathfrak {M}}^{\mathcal {S}}_{T}\). Since the acceptance criterion for a run is the same, \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\) implies \({\mathfrak {M}}^{\mathcal {S}}_{T}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\). □

We can do the same for TMSO as well. That is, given a TMSO formula φ, we let Int(φ) denote the finite set of intervals I for which φ has a sub-formula of the form δ α (x)∈I. Again we can consider a proper interval set \(\mathcal {S}\) which refines Int(φ).

Lemma 6.3

Given a TMSC T, a TMSO formula φ, and a proper interval set \(\mathcal {S}\) that refines Int(φ), we have Tφ iff \({\mathfrak {M}}^{\mathcal {S}}_{T} \models\varphi\).

Proof

Let \(T=\mbox {$(M,t)$}\) be a TMSC with \(M=\mbox {$(E,\leq ,\lambda )$}\). Then, by Lemma 6.1, we have the TCMSC \({\mathfrak {M}}^{\mathcal {S}}_{T}=\mbox {$(M,\tau )$}\) such that, for all α∈TC, and all (e,e′)∈α M, |t(e′)−t(e)|∈τ(α).

We prove the lemma by structural induction on φ. Let μ be any interpretation. The only interesting case is the timing predicate; the others are routine deductions. We have

$$\begin{array}{rl} T,\mu \models \delta _{\alpha}(x)\in I & \Longleftrightarrow\ \exists e\in E,\ \bigl(\mu(x),e\bigr)\in\alpha^{M}\ \mbox{and}\ \bigl|t(e)-t\bigl(\mu(x)\bigr)\bigr|\in I \\ & \Longleftrightarrow\ \exists e\in E,\ \bigl(\mu(x),e\bigr)\in\alpha^{M}\cap \mathrm {dom}(\tau)\ \mbox{and}\ \bigl|t(e)-t\bigl(\mu(x)\bigr)\bigr|\in\tau\bigl(\mu(x),e\bigr)\cap I \\ & \Longleftrightarrow\ \exists e\in E,\ \bigl(\mu(x),e\bigr)\in\alpha^{M}\cap \mathrm {dom}(\tau)\ \mbox{and}\ \tau\bigl(\mu(x),e\bigr)\subseteq I \\ & \Longleftrightarrow\ {\mathfrak {M}}^{\mathcal {S}}_{T},\mu \models \delta _{\alpha}(x)\in I \end{array} $$

using successively the definition of the semantics (7) for TMSCs, \({\mathfrak {M}}^{\mathcal {S}}_{T}\) is maximally defined and T realizes \({\mathfrak {M}}^{\mathcal {S}}_{T}\), \(\mathcal {S}\) is proper and refines Int(φ), and the semantics (8) for TCMSCs. □

6.2 Extending the alphabet

In this subsection, we provide the final pieces of our jigsaw. In particular, we fix a finite set Π and lift MSCs over Act to MSCs over Γ=Act×Π. A Π-extended MSC over Act or an MSC over Γ is a finite Γ-labeled poset \(M=\mbox {$(E,\leq ,\lambda )$}\) such that conditions 1–4 of Definition 2.1 are satisfied, ignoring the extra labeling. Note that the definition of boundedness can be immediately adapted to this setting.

We lift the MPA and MSO definitions to include the additional alphabet. Since such a lift is purely syntactic, the equivalence results recalled below as Theorems 6.3 and 6.4 remain valid in this setting. We define an MPA \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\varDelta ,F)\) over Γ as in Definition 4.1, with the only change being the transition relation → p ⊆(S p ×Act p ×Π×Δ×S p ) of component \(\mathcal {A}_{p}\). Runs of \(\mathcal {A}\) over an MSC over Γ are defined as in Sect. 4, ignoring the guards g,g′ and the conditions constraining them. \(\mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) denotes the set of all MSCs over Γ that are accepted by \(\mathcal {A}\).

Similarly, dropping the timing predicate, we define the logics MSO(Act×Π,ℜ) and \(\mathrm {MSO}(\mathit {Act}\times\varPi ,\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) over MSCs over Γ as in Sect. 5. For a sentence φ, \(\mathcal {L}_{\mathit {MSC}}(\varphi)\) denotes the set of MSCs over Γ that satisfy it. With the above definitions, it is easy to see that we can lift the results from [7, 17]. We restate the relevant theorems in our terminology.

Theorem 6.3

(From [7])

Let \(\mathcal {L}\) be a language of MSCs over Γ. Then \(\mathcal {L}= \mathcal {L}_{\mathit {MSC}}(\varphi)\) for some \(\varphi\in \mathrm {EMSO}(\varGamma,\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) iff \(\mathcal {L}= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) for some MPA \(\mathcal {A}\) over Γ.

Theorem 6.4

(From [17])

Let B∈ℕ>0. Let \(\mathcal {L}\) be a language of ∃-B-bounded MSCs over Γ. Then, \(\mathcal {L}= \mathcal {L}_{\mathit {MSC}}(\varphi)\) for some sentence φ∈MSO(Γ,ℜ) iff \(\mathcal {L}= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) for some MPA \(\mathcal {A}\) over Γ.

TCMSCs as extended MSCs

Let \(\mathcal {S}\subseteq \mathcal {I}\) be a finite set of intervals. Then, we can consider the alphabet \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\) where \([\mathrm {TC}\dashrightarrow \mathcal {S}]\) is a finite set of partial maps from TC to \(\mathcal {S}\). Recall that TC is the finite set of symbols defined in Sect. 4. A TCMSC over \((\mathit {Act},\mathcal {S})\) can be directly seen as an MSC over Γ. We make this precise by defining an untiming function \(\mathcal {U}\) which maps TCMSCs over \((\mathit {Act},\mathcal {S})\) to MSCs over Γ.

Let \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) be any TCMSC over \((\mathit {Act},\mathcal {S})\), with \(M=\mbox {$(E,\leq ,\lambda )$}\), \(\tau:\mathrm {TC}^{M}\dashrightarrow \mathcal {S}\). Then \(\mathcal {U}({\mathfrak {M}})=\mbox {$(E,\leq ,\lambda ')$}\) is an MSC over Γ with the same set of events E and partial order ≤. Also, λ′:E→Γ is defined for all e∈E, by λ′(e)=(λ(e),g e ) where \(g_{e}:\mathrm {TC}\dashrightarrow \mathcal {S}\) is such that, for all α∈TC, g e (α)=τ(e,e′) if there exists e′∈E, s.t., (e,e′)∈α M∩dom(τ) (and g e (α) is undefined otherwise). Recall that given e∈E and α∈TC, there exists at most one event e′∈E s.t., (e,e′)∈α M.
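To make the untiming map concrete, the following small Python sketch computes the extended labeling λ′(e)=(λ(e),g e ); the data encodings and names (untime, alpha_pairs) are ours and purely illustrative, not part of the construction above.

from typing import Dict, Hashable, Tuple

Event = Hashable
Interval = Tuple[float, float]          # interval endpoints (simplified: closed intervals)

def untime(labels: Dict[Event, str],
           tau: Dict[Tuple[Event, Event], Interval],
           alpha_pairs: Dict[str, Dict[Event, Event]]) -> Dict[Event, tuple]:
    """Compute lambda'(e) = (lambda(e), g_e) for every event e of the TCMSC.

    labels      : lambda, the Act-labeling of events
    tau         : the partial map on pairs of events (a subset of TC^M -> S)
    alpha_pairs : for each symbol alpha in TC, the partial map e -> e' with
                  (e, e') in alpha^M (at most one such e' per event e)
    """
    extended = {}
    for e, a in labels.items():
        g_e = {}
        for alpha, partner in alpha_pairs.items():
            e2 = partner.get(e)
            # g_e(alpha) is defined only if (e, e') lies in alpha^M and in dom(tau)
            if e2 is not None and (e, e2) in tau:
                g_e[alpha] = tau[(e, e2)]
        extended[e] = (a, frozenset(g_e.items()))   # a letter of Gamma = Act x [TC -> S]
    return extended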

Lemma 6.4

Let \(\mathcal {S}\) be a finite set of intervals and \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). We can build an MPA \(\mathcal{B}\) over Γ such that for every TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\) we have \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal{B})\) iff \({\mathfrak {M}}\) is maximally defined.

Proof (sketch)

First, we check that for each event labeled (b,g) we have Msg∈dom(g) iff b is a send action. Then, we have to check that Prev a ∈dom(g) iff some action a occurs in the past of the current event. The set of actions occurring in the past can be computed (deterministically) by an MPA using the set Δ of auxiliary messages. Finally, we have to check that Next a ∈dom(g) iff some action a occurs in the future of the current event. The set of actions occurring in the future can be guessed non-deterministically by an MPA. Such guesses can be checked using the set Δ of auxiliary messages and the set Fin of global accepting states. □
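To illustrate what \(\mathcal{B}\) has to verify, the following Python sketch performs the same checks globally and sequentially (the MPA of the lemma does this in a distributed fashion); the encodings, the helper is_send, and the strict reading of "past"/"future" are assumptions made for this example only.

def is_send(action):
    # actions are encoded as tuples (p, theta, q, m) with theta in {'!', '?'}  (our encoding)
    return action[1] == '!'

def maximally_defined(extended_labels, leq):
    """Check the conditions from the proof sketch of Lemma 6.4 on U(M).

    extended_labels : event -> (action, guard), guard a dict over the TC symbols
                      'Msg', ('Prev', a), ('Next', a)
    leq             : set of pairs (e, f) with e <= f in the underlying MSC
    """
    events = list(extended_labels)
    actions = {extended_labels[e][0] for e in events}
    for e, (act, guard) in extended_labels.items():
        past = {extended_labels[f][0] for f in events if (f, e) in leq and f != e}
        future = {extended_labels[f][0] for f in events if (e, f) in leq and f != e}
        if ('Msg' in guard) != is_send(act):
            return False
        for a in actions:
            if (('Prev', a) in guard) != (a in past):
                return False
            if (('Next', a) in guard) != (a in future):
                return False
    return True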

TMSO as MSO over the extended alphabet

We will now see that each TMSO formula can be rewritten as an MSO formula over the extended alphabet of guards Γ introduced above. Let \(\mathcal {S}\) be a finite set of intervals and \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Then from a TMSO formula φ, we obtain \(\varphi^{\mathcal {S}}\in \mathrm {MSO}(\varGamma)\) by replacing each sub-formula of the form P a (x) by the formula ⋁{(b,g)∈Γ∣b=a} P (b,g)(x) and each sub-formula of the form δ α (x)∈I (i.e., each timing predicate) by the formula ⋁{(b,g)∈Γ∣g(α)⊆I} P (b,g)(x). This translation preserves the existential fragment: for instance, if \(\varphi\in \mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\), then \(\varphi^{\mathcal {S}}\in \mathrm {EMSO}(\varGamma,\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\). As before, we can easily obtain the semantics of a formula from this logic in terms of MSCs over Γ. Then, we can relate the TCMSC-language of a TMSO formula and the extended MSC language of its MSO translation as follows:
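Since the translation is purely syntactic, it can be phrased as a simple recursive rewrite; the following Python sketch (our own formula encoding, simplified to closed intervals) shows the two interesting cases, all other connectives and quantifiers being translated homomorphically.

def subset(J, I):
    # interval inclusion, for intervals encoded as pairs of endpoints (simplified: closed)
    return I[0] <= J[0] and J[1] <= I[1]

def translate(phi, gamma):
    """Rewrite a TMSO formula into MSO over the guard alphabet Gamma.

    phi   : formula as nested tuples, e.g. ('P', a, 'x'), ('delta', alpha, 'x', I),
            ('and', f1, f2), ('exists', 'x', f), ...
    gamma : iterable of letters (b, g), where g maps TC symbols to intervals
    """
    kind = phi[0]
    if kind == 'P':          # P_a(x)  ->  disjunction over letters (b, g) with b = a
        _, a, x = phi
        return ('or', [('Pext', (b, g), x) for (b, g) in gamma if b == a])
    if kind == 'delta':      # delta_alpha(x) in I  ->  letters with g(alpha) included in I
        _, alpha, x, I = phi
        return ('or', [('Pext', (b, g), x) for (b, g) in gamma
                       if alpha in g and subset(g[alpha], I)])
    # remaining connectives/quantifiers: translate sub-formulas, keep the rest unchanged
    return (kind,) + tuple(translate(p, gamma) if isinstance(p, tuple) else p
                           for p in phi[1:])

An empty disjunction above should be read as false.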

Lemma 6.5

Let φ be a TMSO sentence and \(\mathcal {S}\) be a finite set of intervals. Then for a TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\), \({\mathfrak {M}}\models\varphi\) if and only if \(\mathcal {U}({\mathfrak {M}})\models\varphi^{\mathcal {S}}\).

Proof

Let \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) be a TCMSC over \((\mathit {Act},\mathcal {S})\) with \(M=\mbox {$(E,\leq ,\lambda )$}\) and let \(\mathcal {U}({\mathfrak {M}})=(E,\leq ,\lambda ')\) be as defined above. We show the lemma for any interpretation μ by induction on the structure of φ. As before, the only interesting cases are the atomic and timing predicates. The others are routine deductions.

  • Suppose φ is of the form P a (x) for some a∈Act. Then, we easily obtain,

    $${\mathfrak {M}},\mu\models P_a(x) \quad \text{iff} \quad \mathcal {U}({\mathfrak {M}}),\mu\models \bigvee_{\{(b,g)\in\varGamma\mid b=a\}} P_{(b,g)}(x) $$
  • Suppose φ is of the form δ α (x)∈I for some α∈TC, \(I\in \mathcal {I}\). Then, we show that,

    $${\mathfrak {M}},\mu\models \delta _{\alpha}(x)\in {I} \quad \text{iff} \quad \mathcal {U}({\mathfrak {M}}),\mu\models \bigvee_{\{(b,g)\in\varGamma\mid g(\alpha)\subseteq I\}} P_{(b,g)}(x) $$

    (⇒) Assume \({\mathfrak {M}},\mu\models \delta _{\alpha }(x)\in {I}\) where μ(x)=eE. By definition, this means that there exists e′∈E such that (e,e′)∈α M and τ(e,e′)⊆I. By the definition of \(\mathcal {U}({\mathfrak {M}})\), we then have λ′(e)=(λ(e),g e ) with \(g_{e}:\mathrm {TC}\dashrightarrow \mathcal {S}\) such that g e (α)=τ(e,e′)⊆I. This implies that \(\mathcal {U}({\mathfrak {M}}),\mu\models P_{(\lambda (e),g_{e})}(x)\) with g e (α)⊆I. Hence we have, \(\mathcal {U}({\mathfrak {M}}),\mu \models {\bigvee}_{\{(b,g)\in\varGamma\mid g(\alpha)\subseteq I\}} P_{(b,g)}(x)\).

    (⇐) Conversely assume \(\mathcal {U}({\mathfrak {M}}),\mu \models P_{(b,g)}(x)\) for some (b,g)∈Γ such that g(α)⊆I and let μ(x)=eE. Then by definition λ′(e)=(b,g), λ(e)=b and \(g:\mathrm {TC}\dashrightarrow \mathcal {S}\) with g(α)⊆I. Since g(α) is defined, there exists e′∈E s.t., (e,e′)∈dom(τ)∩α M and g(α)=τ(e,e′)⊆I. Thus, we conclude that \({\mathfrak {M}},\mu\models \delta _{\alpha}(x)\in {I}\).  □

ECMPA as MPA over the extended alphabet

Let us fix a finite set of intervals \(\mathcal {S}\) and an alphabet of guards \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Then, observe that any MPA \(\mathcal {A}\) over Γ is itself an ECMPA which uses intervals from \(\mathcal {S}\) as guards, i.e., \(\mathrm {Int}(\mathcal {A})\subseteq \mathcal {S}\).

Lemma 6.6

Let \(\mathcal {S}\) be a finite set of intervals and \(\mathcal {A}\) be an MPA over \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Then for any TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\), \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) implies \({\mathfrak {M}}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\).

Proof

Let \(\mathcal {A}\) be an MPA over Γ and let \({\mathfrak {M}}=\mbox {$(M,\tau )$}\) be a TCMSC over \((\mathit {Act},\mathcal {S})\) with \(M=\mbox {$(E,\leq ,\lambda )$}\). Let \(\mathcal {U}({\mathfrak {M}})=\mbox {$(E,\leq ,\lambda ')$}\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\), i.e., there exists an accepting run r of the MPA \(\mathcal {A}\) on the MSC \(\mathcal {U}({\mathfrak {M}})\). Now r:E→⋃ p∈Proc S p satisfies that for all e,e′∈E such that e< pq e′ with (p,q)∈Ch, there exists d∈Δ such that,

$$\bigl( r^-(e),\lambda '(e),d, r(e)\bigr) \in{ \rightarrow_p} \quad \text{and} \quad \bigl( r^- \bigl(e'\bigr),\lambda '\bigl(e'\bigr),d, r\bigl(e'\bigr)\bigr) \in{\rightarrow_q} $$

We claim that r is a run of the ECMPA \(\mathcal {A}\) on the TCMSC \({\mathfrak {M}}\). Since λ′(e)=(λ(e),g e ) and λ′(e′)=(λ(e′),g e′ ), condition (2) follows at once with g=g e and g′=g e′ . Again, by definition of λ′, \(g_{e}\in [\mathrm {TC}\dashrightarrow \mathcal {S}]\) has the property that for all α∈dom(g e ), there exists \(\widetilde{e}\in E\) such that \((e,\widetilde{e}\,)\in\alpha^{M}\cap \mathrm {dom}(\tau)\) and \(g_{e}(\alpha)=\tau(e,\widetilde{e}\,)\). Thus condition (3) is satisfied. Similarly, g e′ satisfies condition (4). Thus, r is a run of \(\mathcal {A}\) on \({\mathfrak {M}}\). It is accepting since the acceptance condition is the same and depends on reaching the final state. Thus we conclude that \({\mathfrak {M}}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A})\). □

Lemma 6.7

Let \(\mathcal {S}\) be a proper finite set of intervals and \(\mathcal {A}\) be an MPA over \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Then, for any TMSC \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\), there exists a TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\) such that \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) and T realizes \({\mathfrak {M}}\).

Proof

We begin with an accepting run r of \(\mathcal {A}\) on TMSC \(T=\mbox {$(M,t)$}\). Thus, for all e,e′∈E with e< pq e′ for some (p,q)∈Ch, there are \(g_{e},g_{e'}\in [\mathrm {TC}\dashrightarrow \mathcal {I}]\) and d∈Δ such that conditions (2), (5), (6) hold. But since \(\mathrm {Int}(\mathcal {A})\subseteq \mathcal {S}\), we obtain \(g_{e},g_{e'}\in [\mathrm {TC}\dashrightarrow \mathcal {S}]\).

We recall that by condition (5), for all α∈dom(g e ) there exists \(\tilde{e}\in E\) such that \((e,\tilde{e})\in\alpha^{M}\) and \(|t(e)-t(\tilde{e})| \in g_{e}(\alpha)\). Similarly from condition (6), for all α∈dom(g e′ ) there exists \(\tilde{e}'\in E\) such that \((e',\tilde{e}')\in\alpha^{M}\) and \(|t(e')-t(\tilde{e}')| \in g_{e'}(\alpha)\). Using these partial maps, we define another partial function \(\tau\in [\mathrm {TC}^{M} \dashrightarrow \mathcal {S}]\) as

$$\tau(e,\tilde{e})= \begin{cases} g_{e}(\alpha)&\text{if}\ \exists\ \alpha\in \mathrm {dom}(g_{e}) \ \text{s.t.}\ (e,\tilde{e})\in\alpha^M \\ \text{undefined}& \text{otherwise} \end{cases} $$

Observe that τ is well-defined. If we find α,α′∈dom(g e ) such that \((e,\tilde{e})\in\alpha^{M}\cap\alpha'^{M}\) then \(|t(e)-t(\tilde{e})| \in g_{e}(\alpha')\cap g_{e}(\alpha)\) by (5), (6). Since \(\mathcal {S}\) is a proper interval set, this implies that g e (α)=g e (α′). Now using the above map we define a TCMSC over \((\mathit {Act},\mathcal {S})\) as \({\mathfrak {M}}=\mbox {$(M,\tau )$}\). T realizes \({\mathfrak {M}}\) by definition, since for all \((e,\tilde{e})\in \mathrm {dom}(\tau)\), \(|t(e)-t(\tilde{e})|\in g_{e}(\alpha)=\tau(e,\tilde{e})\) for some α∈dom(g e ). We are done if we show that \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\). But this follows since r is itself a run of \(\mathcal {A}\) over \(\mathcal {U}({\mathfrak {M}})\). It is enough to observe that λ′(e)=(λ(e),g e ) where g e is the partial map given above from the run on the TMSC. Then, for all e,e′∈E such that e< pq e′ for some (p,q)∈Ch, there exists d∈Δ such that (r −(e),λ′(e),d,r(e))∈→ p , (r −(e′),λ′(e′),d,r(e′))∈→ q . Thus r is an accepting run of \(\mathcal {A}\) on \(\mathcal {U}({\mathfrak {M}})\). □

6.3 Proof of Theorems 6.1 and 6.2

Proof of Theorem 6.1

(1 ⇒ 2) Given an ECMPA \(\mathcal {A}\), we construct an ETMSO formula \(\varphi\in \mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) such that \(\mathcal {L}_{\mathit{time}}(\mathcal {A})= \mathcal {L}_{\mathit{time}}(\varphi)\). This direction of the proof is standard and does not need the lemmas we proved in the two previous subsections.

Let \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\) be the ECMPA at hand with \(\mathcal {A}_{p}= (S_{p},\iota _{p},\rightarrow _{p})\). For any local state s∈S=⋃ p∈Proc S p , we introduce a second order variable X s . The formula we are targeting guesses a run of \(\mathcal {A}\) in terms of an assignment of events to the variables (X s ) s∈S . Accordingly, (X s ) s∈S needs to be a partition of all the events of an MSC. This can easily be done by a first order formula Partition((X s ) s∈S ) with free variables (X s ) s∈S . We now define some further macros. For a synchronization message d∈Δ, we define Trans d (x,(X s ) s∈S ) by

This formula describes that, under the assignment (X s ) s∈S , the execution of x actually corresponds to a local transition of \(\mathcal {A}\) that communicates d∈Δ. Moreover, for a global state \(\overline{s}=(s_{p})_{p\in \mathit {Proc}} \in \prod_{p \in \mathit {Proc}} S_{p}\), we define \(\mathit{Final}_{\overline{s}}((X_{s})_{s \in S})\) by

$$\bigvee_{\substack{\mathit {Proc}' \subseteq \mathit {Proc}\\\forall p \in \mathit {Proc}': s_p=\iota_p}} \biggl(\bigwedge _{p \in \mathit {Proc}\setminus{ \mathit {Proc}'}}\exists x\, \Bigl(\max_p(x) \wedge x \in X_{s_p} \Bigr) \wedge\bigwedge_{\substack{p \in \mathit {Proc}'\\a \in \mathit {Act}_p}} \neg \exists x\, P_a(x) \biggr) $$

Here, given a process p∈Proc, \(\max_{p}(x)=\bigvee_{a \in \mathit {Act}_{p}} P_{a}(x) \wedge(\neg\exists y~(x \mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp} y))\). Moreover, observe that Proc′ comprises those processes that are assumed not to move. Hence, \(\mathit{Final}_{\overline{s}}((X_{s})_{s \in S})\) expresses that the run described by (X s ) s∈S ends up in global state \(\overline{s}\).

We are now prepared to give the formula φ with \(\mathcal {L}_{\mathit{time}}(\varphi)= \mathcal {L}_{\mathit{time}}(\mathcal {A})\). Namely,

This concludes the proof in one direction.

(2 ⇒ 1) For the other direction we will require all the machinery that we set up in the two previous subsections.

Let φ be the given \(\mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) formula and let \(\mathcal {S}\) be a proper interval set which refines Int(φ). Fix an alphabet \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Thus, we have \(\varphi ^{\mathcal {S}}\in \mathrm {EMSO}(\varGamma,\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) using the translation from Sect. 6.2. Then, by Theorem 6.3 we obtain an MPA \(\mathcal {A}\) over Γ such that \(\mathcal {L}_{\mathit {MSC}}(\mathcal {A})= \mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}})\). Let \(\mathcal{B}\) be the MPA over Γ from Lemma 6.4 accepting (images of) maximally defined TCMSCs. We view \(\mathcal {A}'=\mathcal {A}\cap\mathcal{B}\) as an ECMPA with \(\mathrm {Int}(\mathcal {A}\cap\mathcal{B})\subseteq \mathcal {S}\), hence \(\mathcal {S}\) refines \(\mathrm {Int}(\mathcal {A}\cap\mathcal{B})\). Now we claim that this \(\mathcal {A}'\) is, in fact, the ECMPA that we require. That is, we will show that \(\mathcal {L}_{\mathit{time}}(\varphi)= \mathcal {L}_{\mathit{time}}(\mathcal {A}')\), thus completing the proof of the theorem.

The proof follows the scheme depicted in Fig. 8. We start by showing that \(\mathcal {L}_{\mathit{time}}(\varphi)\subseteq \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap \mathcal{B})\). Let Tφ. Since \(\mathcal {S}\) is a proper interval set that refines Int(φ), we can apply Lemma 6.3 to get \({\mathfrak {M}}^{\mathcal {S}}_{T}\models\varphi\). Since, \(\mathcal {S}\) is proper, it is finite and hence we use Lemma 6.5 to obtain \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\models\varphi^{\mathcal {S}}\). But \(\mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}})= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) and \({\mathfrak {M}}^{\mathcal {S}}_{T}\) is maximally defined, so we have \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A}\cap\mathcal{B})\). As \(\mathcal {S}\) is finite and \({\mathfrak {M}}^{\mathcal {S}}_{T}\) is a TCMSC over \((\mathit {Act},\mathcal {S})\), we can then use Lemma 6.6 to conclude that \({\mathfrak {M}}^{\mathcal {S}}_{T}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A}\cap\mathcal{B})\). Then, we can use Lemma 6.2 to obtain \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap\mathcal{B})\).

Fig. 8

Proof schematic for 2⇒1 direction of Theorem 6.1 (arrows depict implication)

Conversely, we show that \(\mathcal {L}_{\mathit{time}}(\varphi)\supseteq \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap\mathcal{B})\). Let \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap\mathcal{B})\). Then as \(\mathcal {S}\) is a proper interval set, and \(\mathcal {A}\cap\mathcal{B}\) is an MPA over \(\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\), by Lemma 6.7, there exists a TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\) such that \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A}\cap\mathcal {B})\) and T realizes \({\mathfrak {M}}\). Using Lemmas 6.1 and 6.4 we deduce that \({\mathfrak {M}}= {\mathfrak {M}}^{\mathcal {S}}_{T}\). But \(\mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}})= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) implies \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\models\varphi^{\mathcal {S}}\). Now, by Lemma 6.5 we have \({\mathfrak {M}}^{\mathcal {S}}_{T}\models\varphi\). Finally, by Lemma 6.3 we conclude that Tφ. □

Along the lines of the proof above, we can also get a characterization of the full TMSO. However, we have to restrict to untimed-existentially-bounded TMSCs.

Proof of Theorem 6.2

Let B∈ℕ>0 and \(\mathcal {L}\) be a set of ∃u-B-bounded TMSCs over Act.

(1⇒2) Let \(\mathcal {A}\) be an ECMPA with \(\mathcal {L}_{\mathit{time}}(\mathcal {A})=\mathcal {L}\). By Theorem 6.1, we obtain an \(\mathrm {ETMSO}(\mathit {Act},\mathfrak {R}_{\mathrel {{<}{\llap {\mbox {$\scriptstyle \cdot $}}}}})\) formula φ such that \(\mathcal {L}_{\mathit{time}}(\mathcal {A})= \mathcal {L}_{\mathit{time}}(\varphi)\) and by Proposition 5.1(2), we obtain an ETMSO(Act,ℜ) formula ψ such that \(\mathcal {L}_{\mathit{time}}(\psi)= \mathcal {L}_{\mathit{time}}(\varphi)\) which completes the proof in this direction.

(2⇒1) Let φ be a TMSO(Act,ℜ) formula with \(\mathcal {L}_{\mathit{time}}(\varphi)=\mathcal {L}\) and let \(\mathcal {S}\) be a proper interval set which refines Int(φ). Fix an alphabet \(\varGamma=\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\). Thus, we have \(\varphi^{\mathcal {S}}\in \mathrm {MSO}(\varGamma,\mathfrak {R}_{\leq })\) using the translation from Sect. 6.2. Now from Proposition 5.1(3), we obtain a formula γ B ∈MSO(Γ,ℜ) such that \(\mathcal {L}_{\mathit {MSC}}(\gamma _{B})\) is the set of all ∃-B-bounded MSCs over Γ. Then, indeed \(\mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}}\wedge\gamma_{B})\) is an ∃-B-bounded set of MSCs over Γ and so, we apply Theorem 6.4 to obtain an MPA \(\mathcal {A}\) over Γ such that \(\mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}}\wedge\gamma_{B})= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\). Consider again the MPA \(\mathcal{B}\) over Γ from Lemma 6.4. We view \(\mathcal {A}\cap\mathcal{B}\) as an ECMPA and \(\mathcal {S}\) refines \(\mathrm {Int}(\mathcal {A}\cap\mathcal{B})\). We show that \(\mathcal {L}_{\mathit{time}}(\varphi)= \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap\mathcal{B})\).

The proof follows the same basic structure as depicted in Fig. 8. However, to capture full MSO (and not just its existential fragment), we use Theorem 6.4 in the place of Theorem 6.3, and to do this, we replace \(\varphi^{\mathcal {S}}\) by \(\varphi^{\mathcal {S}}\wedge\gamma_{B}\).

First observe that if T realizes \({\mathfrak {M}}\), then T is ∃u-B-bounded iff \(\mathcal {U}({\mathfrak {M}})\) is ∃-B-bounded. Indeed, an (untimed) linearization of a TMSC depends only on its set of events and the partial order, which are the same for T, \({\mathfrak {M}}\) and \(\mathcal {U}({\mathfrak {M}})\).

Thus, let us consider \(T\in \mathcal {L}_{\mathit{time}}(\varphi)\). Then, T is ∃u-B-bounded and Tφ. By Lemma 6.3, we obtain that \({\mathfrak {M}}^{\mathcal {S}}_{T}\models\varphi\). Again since \(\mathcal {S}\) is proper, it is finite and hence by Lemma 6.5 we get \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\models \varphi^{\mathcal {S}}\). Since T realizes \({\mathfrak {M}}^{\mathcal {S}}_{T}\), \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\) is ∃-B-bounded, and so we get \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\models\gamma_{B}\). Thus we conclude that \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\in \mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}}\wedge\gamma_{B})= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\). Since \({\mathfrak {M}}^{\mathcal {S}}_{T}\) is maximally defined, we obtain \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A}\cap\mathcal{B})\). Using Lemma 6.6 we get \({\mathfrak {M}}^{\mathcal {S}}_{T}\in \mathcal {L}_{\mathit {TC}}(\mathcal {A}\cap \mathcal{B})\). Finally, Lemma 6.2 implies \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap \mathcal{B})\).

Conversely, let \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A}\cap\mathcal{B})\). Then as \(\mathcal {S}\) is a proper interval set, and \(\mathcal {A}\cap\mathcal{B}\) is an MPA over \(\mathit {Act}\times [\mathrm {TC}\dashrightarrow \mathcal {S}]\), by Lemma 6.7, there exists a TCMSC \({\mathfrak {M}}\) over \((\mathit {Act},\mathcal {S})\) such that \(\mathcal {U}({\mathfrak {M}})\in \mathcal {L}_{\mathit {MSC}}(\mathcal {A}\cap\mathcal {B})\) and T realizes \({\mathfrak {M}}\). Using Lemmas 6.1 and 6.4 we deduce that \({\mathfrak {M}}= {\mathfrak {M}}^{\mathcal {S}}_{T}\). But \(\mathcal {L}_{\mathit {MSC}}(\varphi^{\mathcal {S}}\wedge\gamma_{B})= \mathcal {L}_{\mathit {MSC}}(\mathcal {A})\) implies \(\mathcal {U}( {\mathfrak {M}}^{\mathcal {S}}_{T})\models\varphi^{\mathcal {S}}\) and by Lemma 6.5 we get \({\mathfrak {M}}^{\mathcal {S}}_{T}\models\varphi\). Finally, Lemma 6.3 implies Tφ. □

7 Checking emptiness of ECMPAs

In this section, we investigate emptiness checking for ECMPAs, leading to a partial solution to the satisfiability problem for TMSO formulas, which is undecidable in its full generality. Since MSCs are used in early protocol design, this problem is of vital interest, as it allows the detection of possible design failures at that stage.

Theorem 7.1

The following problem:

Input:

An ECMPA \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\), and an integer B>0.

Question:

Does there exist \(T \in \mathcal {L}_{\mathit{time}}(\mathcal {A})\) such that T has a B-bounded timed linearization?

is decidable in space \(P(|\mathcal {A}|,(B+1)^{|\mathit {Act}|})\) for some polynomial P.

Note that the size of \(\mathcal {A}\) also depends on Act via the transition relations → p of the components \(\mathcal {A}_{p}\). But the space complexity above is only exponential in |Act| and not in the much bigger size \(|\mathcal {A}|\).

Then, using Theorem 6.2, we can conclude the following:

Corollary 7.1

The following problem is decidable:

Input:

A TMSO formula φ and an integer B>0.

Question:

Does there exist \(T \in \mathcal {L}_{\mathit{time}}(\varphi )\) such that T has a B-bounded timed linearization?

The rest of the section is devoted to the proof of the above theorem. Let \(\mathcal {A}\) be an ECMPA and let B>0. We construct a finite timed automaton (TA) \(\mathcal {B}\) that accepts a timed word w over Act iff w is a B-bounded timed linearization of some TMSC accepted by \(\mathcal {A}\). Since emptiness is decidable for finite timed automata [2], we are done.

The remainder of this section is dedicated to the construction of such a \(\mathcal {B}\), which is done in three steps as sketched below:

  • First, we address the main hurdle in simulating an ECMPA by a TA, namely, a run of a TA is totally ordered, while ECMPAs have partially ordered runs. Hence, to keep track of clock constraints used in the ECMPA, the TA needs to recover the partial order information from its runs, i.e., words. This is done using gadgets that we will define as our first step.

  • Next, using these gadgets, we describe an infinite TA which simulates the ECMPA.

  • Though we allow infinitely many clocks and states in this intermediate construction, we will see that only finitely many states and clocks are used on any run. We then modify this automaton to obtain finitely many states and clocks, thus completing our construction.

7.1 Recovering the partial order

We begin by introducing some notation that will be used to classify the set of actions. For any channel (p,q)∈Ch and each θ∈{!,?}, pθq denotes the set of actions \(\{p\theta q(m)\in \mathit {Act}\mid m\in \mathcal {M}\}\). Recall that we write a∈p!q (respectively, a∈p?q) if a=p!q(m) (respectively, p?q(m)) for some \(m\in \mathcal {M}\).

Now, recall that the partial order of an MSC can be recovered from any of its linearizations under the FIFO assumption. Indeed, if w=a 1…a n ∈Act ∗ is a linearization of MSC M=(E,≤M,λ) over Act, then M is isomorphic to the unique MSC \(M^{w}=(E^{w},\leq ^{M^{w}},\lambda ^{w})\) over Act, where E w={1,…,n} (i.e., the set of positions of the word w), λ w(i)=a i , and \(\mathord{\leq ^{M^{w}}}\) is defined as the reflexive transitive closure of \(\smash{\bigcup_{p,q\in \mathit {Proc}}} <_{{pq}}^{M^{w}}\) where, for all p∈Proc, \(<_{{pp}}^{M^{w}}\) is the set of pairs (i,j)∈E w×E w such that i<j and λ w(i),λ w(j)∈Act p , and for all (p,q)∈Ch, \(<_{{pq}}^{M^{w}}\) is the set of pairs (i,j)∈E w×E w such that λ w(i)=p!q(m) and λ w(j)=q?p(m) for some \(m\in \mathcal {M}\) and |{k≤i∣λ w(k)∈p!q}|=|{k≤j∣λ w(k)∈q?p}|.
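Under the FIFO assumption, the counting condition above simply matches the n-th send on a channel with the n-th receive; the following small Python sketch (using our own encoding of actions as tuples) recovers the message relation from a linearization in this way.

from collections import defaultdict, deque

def message_pairs(word):
    """Recover the message relation <_pq from a linearization.

    word : list of actions (p, theta, q, m) with theta in {'!', '?'};
           positions are 1-based, as in the text.
    Relies on FIFO: the n-th send on channel (p, q) matches the n-th receive.
    """
    pending = defaultdict(deque)            # channel (p, q) -> positions of unmatched sends
    pairs = set()
    for j, (x, theta, y, _m) in enumerate(word, start=1):
        if theta == '!':
            pending[(x, y)].append(j)       # x sends to y on channel (x, y)
        else:
            i = pending[(y, x)].popleft()   # x receives from y, i.e. channel (y, x)
            pairs.add((i, j))
    return pairs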

Therefore, we can consider the partial order relation of M to be a relation over the positions of a given linearization of M. Thus, given a linearization w of an MSC M, we can identify M with M w and write iM j for 1≤i,j≤|w|, to mean \(i\leq ^{M^{w}}j\), i.e., the isomorphic images of positions i,j are related by ≤M. Similarly, we may write \(i<_{{pp}}^{M} j\), \(i<_{{pq}}^{M} j\), \((i,j)\in \mathrm {Prev}^{M}_{a}\) and \((i,j)\in \mathrm {Next}^{M}_{a}\) for aAct to mean that the corresponding events are related in M, respectively by, the local-process relation \(<_{{pp}}^{M}\), the message relation \(<_{{pq}}^{M}\), the previous and the next occurrence of a relations. In the same way, we can also write λ(i) instead of λ w(i). Note that iM j implies ij but the converse need not be true (where ≤ is the usual ordering between i and j as elements of ℕ). However, if λ(i)=λ(j), then ij implies iM j.

Now, we describe gadgets: deterministic finite-state automata that run on words that are linearizations of an MSC and accept iff the first and last positions of the word are related under the partial order. For this, we restrict to B-bounded linearizations.

Definition 7.1

Let \(M=\mbox {$(E,\leq ,\lambda )$}\) be an MSC over Act and B∈ℕ>0. Then a B-well-stamping for M is a map ρ:E→{0,…,B−1} s.t., for any e∈E with λ(e)=pθq(m) for some p,q∈Proc, \(m\in \mathcal {M}\), θ∈{!,?}, we have \(\rho (e)=|{\downarrow }{e} \cap \bigcup_{m'\in \mathcal {M}}E_{p\theta q(m')}|\bmod B\).
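On a linearization, a B-well-stamping can be computed by counting, modulo B, the occurrences of actions on the same channel and in the same direction; a small Python sketch (same illustrative action encoding as before, which is ours):

from collections import defaultdict

def well_stamping(word, B):
    """Compute a B-well-stamping rho on the positions 1..n of a linearization.

    rho(i) = (number of positions k <= i whose action lies in the same set
              p-theta-q as the action at position i) mod B, as in Definition 7.1.
    """
    seen = defaultdict(int)                 # (p, theta, q) -> number of occurrences so far
    rho = {}
    for i, (p, theta, q, _m) in enumerate(word, start=1):
        seen[(p, theta, q)] += 1
        rho[i] = seen[(p, theta, q)] % B
    return rho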

Then, by the above considerations, given a linearization w of M, ρ is also a map from the positions of w to {0,…,B−1}. Now, the following property is immediate from the definitions of the message relation in an MSC (Sect. 2) and B-boundedness.

Proposition 7.1

Let w=a 1…a n ∈Act ∗ be a B-bounded linearization of an MSC M over Act and ρ be a B-well-stamping for M. Then, for 1≤i<jn and (p,q)∈Ch, we have \(i<_{{pq}}^{M} j\) iff the following conditions hold:

  1. (a)

    a i ∈p!q, a j ∈q?p,

  2. (b)

    ρ(i)=ρ(j),

  3. (c)

    for all k, i<k<j, a k ∈q?p implies ρ(k)≠ρ(i).

Proof

First, for any i∈{1,…,n}, if a i =pθq(m) for some \(\theta\in\{!,?\},m\in \mathcal {M}\), we fix the notation \({\Downarrow }{i}= ( {\downarrow }{i} \cap \bigcup_{m'\in \mathcal {M}} E_{p\theta q(m')})=\{i'\leq i\mid \lambda (i')\in p\theta q\}\). Now, the proof follows once we recall the relevant definitions in this setting.

  1. (1)

    For 1≤i<jn, \(i<_{{pq}}^{M} j\) iff λ(i)=p!q(m), λ(j)=q?p(m) for some \(m\in \mathcal {M}\) and |⇓i|=|⇓j|.

  2. (2)

    For all 1≤in, ρ(i)=|⇓i|modB.

  3. (3)

    w is B-bounded iff for all 1≤ℓ≤n and (p,q)∈Ch, |{i′∣i′≤ℓ, λ(i′)∈p!q}|−|{j′∣j′≤ℓ, λ(j′)∈q?p}|≤B.

Now, for the first direction, let \(i<_{{pq}}^{M} j\). Then, by (1), (a) holds and |⇓i|=|⇓j|=αB+b. Then, b=ρ(i)=ρ(j) and so (b) holds. Now, suppose (c) did not hold. Then there exists k, i≤k<j such that a k ∈q?p and ρ(k)=ρ(i)=ρ(j). Then, by (2), |⇓k|=α′⋅B+b. But then ⇓k⊆⇓j and j∈⇓j, j∉⇓k implies that |⇓j|>|⇓k|⇒αB+b>α′⋅B+b⇒(αα′)⋅B>0. Thus |⇓j|−|⇓k|≥B and so |⇓i|−|⇓k|≥B. But now, i<k and λ(k)∈q?p implies that |⇓k|≥|{i′≤iλ(i′)∈q?p}|+1 and so we can conclude that |⇓i|−|{i′≤iλ(i′)∈q?p}|>B. This means |{i′≤iλ(i′)∈p!q}|−|{i′≤iλ(i′)∈q?p}|>B and so by (3) (in particular, by letting ℓ=i) it contradicts the B-boundedness of w. Thus (c) also holds.

Conversely, let conditions (a), (b) and (c) be true. Then, (a) and (b) together imply that |⇓i|modB=|⇓j|modB. That is, |⇓i|=αB+b and |⇓j|=α′⋅B+b for some α, α′∈ℕ. We now show that α=α′. First, observe that between i and j, there can be at most B−1 actions labeled q?p(m′) for some \(m'\in \mathcal {M}\). Otherwise, we can find an event k, i<k<j such that a k ∈q?p and |⇓k|=(α′−1)B+b. And for this k, ρ(k)=ρ(j) which contradicts condition (c). Also, we have i<j and a j ∈q?p. Thus, (α′−1)⋅B+b<|{j′≤i∣a j′ ∈q?p}|<α′⋅B+b. Now, if α>α′, then |{i′≤iλ(i′)∈p!q}|−|{j′≤i∣λ(j′)∈q?p}|>αBα′⋅B>B which contradicts B-boundedness of w by (3). Else if α′>α, then |{j′≤i∣λ(j′)∈q?p}|−|{i′≤iλ(i′)∈p!q}|>(α′−1−α)⋅B≥0 which is a contradiction since in a linearization we cannot have the number of receives exceeding the number of sends. Thus α=α′. Now, since the number of p!q events below i is equal to the number of q?p events below j, we conclude by the FIFO property that j is the matching receive for i, i.e., a i =p!q(m) and a j =q?p(m) for some \(m\in \mathcal {M}\). Thus, by (1), \(i<_{{pq}}^{M} j\). □

Constructing the gadgets

The first gadget we build is a deterministic finite-state automaton \(\mathcal {C}^{\mathord {\leq }}\) over Act×{0,…,B−1} that detects whether the first and last positions of a word are related with respect to the associated partial order, provided the word corresponds to some factor of a B-bounded MSC linearization. In what follows, we let Send denote the set of symbols {p!q∣(p,q)∈Ch}.

We define \(\mathcal {C}^{\mathord {\leq }}=(Q^{\mathord {\leq }},\delta ^{\mathord {\leq }},s_{0}^{\mathord {\leq }},F^{\mathord {\leq }})\) where a state of Q ≤ is a triple of the form (P,O,f) where f∈{0,1}, P∈2 Proc and O:Send⤏{0,…,B−1} s.t., if p!q∈dom(O) for some (p,q)∈Ch, then p∈P. Intuitively, the triple (P,O,f) represents the information we need to recover the partial order. The set P contains the processes in the future partial view, O indicates the stamp/numbering of each “open send”, and f will be 1 iff the first and the last position are in fact related as desired. Note that, letting n=|Proc|, we have

$$ \bigl|Q^{\mathord {\leq }}\bigr| = 2 \cdot\sum_{k=0}^{n} \biggl( \binom{n}{k} \cdot (B+1)^{k(n-1)} \biggr) = \mathcal{O} \bigl((B+1)^{|\mathit {Proc}|^2} \bigr) $$
(11)

The initial state of \(\mathcal {C}^{\mathord {\leq }}\) is \(s_{0}^{\mathord {\leq }}=(\emptyset,\emptyset ,0)\in Q^{\mathord {\leq }}\) and the set of final states F ={(P,O,f)∈Q f=1}. The transition function δ :Q ×(Act×{0,…,B−1})→Q is defined as δ ((P,O,f),(a,β))=(P′,O′,f′) where

As usual, a run of \(\mathcal {C}^{\mathord {\leq }}\) on (a 1,β 1)…(a n ,β n )∈(Act×{0,…,B−1}) is a sequence \(s_{0}^{\mathord {\leq }}\xrightarrow{(a_{1},\beta_{1})}\ldots\xrightarrow{(a_{n},\beta _{n})}s^{\mathord {\leq }}_{n}\), where \(s^{\mathord {\leq }}_{i}\in Q^{\mathord {\leq }}\) for each i∈{0,…,n} and \((s^{\mathord {\leq }}_{i},a_{i+1},\beta_{i+1},s^{\mathord {\leq }}_{i+1})\in \delta ^{\mathord {\leq }}\) for all i∈{0,…,n−1} and is accepting if \(s^{\mathord {\leq }}_{n}\in F^{\mathord {\leq }}\). Then, \(\mathcal {L}( \mathcal {C}^{\mathord {\leq }})\) denotes the set of words over Act×{0,…,B−1} having accepting runs. Note that with this definition, \(\mathcal {C}^{\mathord {\leq }}\) has a run (which may be accepting or non-accepting) on every word of (Act×{0,…,B−1}). For linearizations, \(\mathcal {C}^{\mathord {\leq }}\) has the desired property:

Lemma 7.1

Let w=a 1…a n ∈Act ∗ be a B-bounded linearization of an MSC M over Act and ρ be a B-well-stamping for M. Then, for 1≤ijn, \((a_{i},\rho(i))\ldots(a_{j},\rho(j))\in \mathcal {L}( \mathcal {C}^{\mathord {\leq }})\) iff iM j.

Proof

Let us fix 1≤in and let a i Act p for some pProc. For each j such that ijn, we denote \(w^{\rho}_{ij}=(a_{i},\rho(i))\ldots(a_{j},\rho(j))\). Then for each j, ijn, there is a run r ij of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{ij}\), namely, \(s_{0}^{\mathord {\leq }}\xrightarrow{(a_{i},\rho (i))}(P_{i},O_{i},f_{i})\ldots\xrightarrow{(a_{j},\rho(j))}(P_{j},O_{j},f_{j})\) where \(s_{0}^{\mathord {\leq }}=(\emptyset,\emptyset,0)\) and for each k∈{i,…,j}, (P k ,O k ,f k )∈Q is given by the definition of \(\mathcal {C}^{\mathord {\leq }}\). Indeed each r ij is a prefix of r ij for jj′≤n. For simplicity of notation, let P i−1=∅, O i−1=∅, f i−1=0, i.e., \(s_{0}^{\mathord {\leq }}=(P_{i-1},O_{i-1},f_{i-1})\). Now by definition of \(\mathcal {C}^{\mathord {\leq }}\), P i−1=∅, a i Act p implies that P i ={p}. Once a process gets into P it is never removed, and so, pP j for all ijn. Also for each j, ijn, r ij  is an accepting run iff f j =1 i.e., iff a j Act q for some qProc and qP j .

We now distinguish two cases. First, suppose a j Act p . Then we already have pP j and so r ij is accepting. Also in this case, a i ,a j belong to the same process p and so \(i\leq _{{pp}}^{M}j\) which implies iM j. Thus,

$$ \forall j\in\{i,\ldots,n\}, a_j\in \mathit {Act}_p \quad {\implies}\quad (r_{ij}\ \text{is\ accepting})\wedge\bigl(i\leq ^M j\bigr). $$
(12)

On the other hand, for all j∈{i+1,…,n} such that a j Act q for some qp, we will show that r ij is an accepting run iff iM j which completes the proof of the lemma. We proceed by induction on ji.

The base case is when ji=1, i.e. \(w^{\rho}_{ij}=(a_{i},\rho (i))(a_{j},\rho(j))\). Then, P j−1=P i ={p}. Now, since a j Act q , r ij is accepting iff qP j . By definition of \(\mathcal {C}^{\mathord {\leq }}\), this happens iff a j q?p and O j−1(p!q)=ρ(j). But since j−1=i we have p!q∈dom(O i ) and along with O i−1=∅, this implies that a i p!q and O i (p!q)=ρ(i). Thus we can conclude that qP j iff a i p!q,a j q?p and ρ(i)=ρ(j) which by Proposition 7.1 happens iff \(i<_{{pq}}^{M} j\) (by noting that there is no event between i and j). But again i+1=j and i,j are not on the same process, thus, \(i<_{{pq}}^{M} j\) iff iM j, which completes the proof of the base case.

Now, suppose ji>1. Then, r ij is accepting implies qP j . But since qP i ={p}, there exists j′ for i<j′≤j such that qP j′−1,qP j. By the definition of \(\mathcal {C}^{\mathord {\leq }}\), since P j′−1≠∅, there is p′∈Proc such that a jq?p′ and O j′−1(p′!q)=ρ(j′). But since O i−1=∅, there is k, ik<j′ such that p′!q∈dom(O k ) and p′!q∉dom(O k−1). Then at k, a k p′!q and O k (p′!q)=ρ(k) and either p′∈P k−1 or k=i and P k−1=∅. In both cases, p′∈P k since a k Act p. Thus, in fact f k =1 and so r ik is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{ik}\). Now, if p′=p, i.e., a k Act p , by (12), we conclude that iM k. Otherwise, a k Act p, p′≠p and since, k<j′≤j, we can apply the induction hypothesis to conclude that iM k. Now, the values in O, once defined, are never modified and so O k (p′!q)=O j′−1(p′!q) which implies ρ(k)=ρ(j′). Also for any j″, kj″<j′, if a jq?p′ and ρ(j″)=ρ(j′), then at j″, we have O j(p′!q)=ρ(k)=ρ(j″) and so qP j. This is a contradiction since we assumed that qP j′−1. Thus, by Proposition 7.1, \(k<_{{p'q}}^{M} j'\). Also a j,a j Act q implies that \(j' \leq _{{qq}}^{M}j\). Thus we have \(i\leq ^{M} k<_{{p'q}}^{M} j'\leq _{{qq}}^{M} j\) and so by definition of ≤M, we conclude that iM j.

Conversely, let iM j, be such that i is a p-event and j is a q-event for pq. Then let j′ be the earliest event on process q which is related to i. This implies that a j is a receive action from some process, say p′∈Proc. In other words, there exists k,j, such that ik<j′≤j and \(i\leq ^{M} k<_{{p'q}}^{M} j' \leq _{{qq}}^{M} j\) and for all j″ with \(j''<_{{qq}}^{M} j'\), \(i\not \leq ^{M} j''\), where a k =p′!q(m) and a j=q?p′(m) for some p′∈Proc, \(m\in \mathcal {M}\). Then, if p′=p, then by (12), and otherwise by induction hypothesis, r ik is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on w ik . Thus f k =1 and so p′∈P k . But since, a k p′!q, we find that either P k =P k−1 or k=i. Thus either p′∈P k−1 or P k−1=∅. Also, p′!q∉dom(O k−1), for otherwise, we can find k′, ik′<k with a k=p′!q, whose corresponding receive contradicts the minimality of j′ under FIFO condition. So by definition of O, we have O k (p′!q)=ρ(k) and by Proposition 7.1, ρ(k)=ρ(j′). Thus, by definition of \(\mathcal {C}^{\mathord {\leq }}\), we have P j=P j′−1∪{q}. Thus, qP jP j and along with a j Act q , we get that f j =1 and so r ij is an accepting run. □

An ECMPA uses Prev a and Next a in the guards, which along a run constrain the previous and next occurrence of an action a respectively. Hence, we need to recover not only the general partial order relation but also these previous and next occurrence relations from the linearization. For this, we build two more gadgets using the gadget described above.

We begin by defining a deterministic finite-state automaton \(\mathcal {C}^{\triangleleft }=(Q^{\triangleleft },\delta ^{\triangleleft },s_{0}^{\triangleleft },F^{\triangleleft })\), which we refer to as the \(\mathcal {C}^{\triangleleft }\) gadget and use to recover the previous occurrence relation. Its state space is given by Q =Q ×Q ×Act. Thus, using (11) we obtain

$$ \bigl|Q^{\triangleleft }\bigr|= \mathcal{O} \bigl( (B+1)^{2|\mathit {Proc}|^2} \times|\mathit {Act}| \bigr) . $$
(13)

Now, the idea is that the first component, mimicking the automaton \(\mathcal {C}^{\mathord {\leq }}\), is started when reading the first occurrence of an action a, which is henceforth stored in the third component. The second component is run if and when a is executed for the second time. Any further event that is related in the partial order to the first occurrence of a but not to the second occurrence matches the Prev a relation so that F =F ×(Q F Act. We let \(s_{0}^{\triangleleft }= (s_{0}^{\mathord {\leq }},s_{0}^{\mathord {\leq }},b)\) for some arbitrary action bAct. Finally, for any (s 1,s 2,a)∈Q , bAct, and n∈{0,…,B−1}, we set

$$\delta ^{\triangleleft }\bigl((s_1,s_2,a),(b,n)\bigr)= \begin{cases} (\delta ^{\mathord {\leq }}(s_1,(b,n)),s_2,b) & \text{if}\ s_1 = s_0^{\mathord {\leq }}\\ (\delta ^{\mathord {\leq }}(s_1,(b,n)),s_2,a) & \text{if}\ s_1 \neq s_0^{\mathord {\leq }},\ s_2 = s_0^{\mathord {\leq }}\\ &\text{and}\ b \neq a \\ (\delta ^{\mathord {\leq }}(s_1,(b,n)),\delta ^{\mathord {\leq }}(s_2,(b,n)),a) & \text{otherwise} \end{cases} $$

Similarly, the \(\mathcal {C}^{\triangleright }\) gadget is defined as \(\mathcal {C}^{\triangleright }=(Q^{\triangleright },\delta ^{\triangleright },s_{0}^{\triangleright },F^{\triangleright })\) with Q =Q ×2Act×{0,1}. Again by (11) we obtain,

$$ \bigl|Q^{\triangleright }\bigr|= \mathcal{O} \bigl( (B+1)^{|\mathit {Proc}|^2} \times2^{|\mathit {Act}|} \bigr) $$
(14)

The idea here is that the second component of a state keeps track of the actions we have seen so far in the future of the first action, and the third component indicates a final state. Accordingly, \(s_{0}^{\triangleright }=(s_{0}^{\mathord {\leq }},\emptyset,0)\), F =F ×2Act×{1} and for any (s,A,f)∈Q , aAct, n∈{0,…,B−1}, we set

$$\delta ^{\triangleright }\bigl((s,A,f),(a,n)\bigr)= \begin{cases} (\delta ^{\mathord {\leq }}(s,(a,n)),A \cup\{a\},1) & \text{if}\ \delta ^{\mathord {\leq }}(s,(a,n)) \in F^{\mathord {\leq }}\ \\ &\text{and}\ a \notin A \\ (\delta ^{\mathord {\leq }}(s,(a,n)),A,0) & \text{otherwise} \end{cases} $$

Then, the following lemma describes the nice property of the above gadgets.

Lemma 7.2

Let w=a 1…a n ∈Act ∗ be a B-bounded linearization of an MSC M over Act and ρ be a B-well-stamping for M. Then, for all 1≤ijn, letting \(w^{\rho}_{ij}=(a_{i},\rho (i))\ldots(a_{j},\rho(j))\in(\mathit {Act}\times\{0,\ldots,B-1\})^{*}\), we have (1) \(w^{\rho}_{ij}\in \mathcal {L}( \mathcal {C}^{\triangleleft })\) iff \((j,i)\in \mathrm {Prev}^{M}_{a_{i}}\) and (2) \(w^{\rho}_{ij}\in \mathcal {L}( \mathcal {C}^{\triangleright })\) iff \((i,j)\in \mathrm {Next}^{M}_{a_{j}}\).

Again, recalling that there is an isomorphism that maps positions of w to events of M, in the above statement \((j,i)\in \mathrm {Prev}^{M}_{a_{i}}\) means that the events in M corresponding to the positions i,j are related by \(\mathrm {Prev}^{M}_{a_{i}}\) and so on.

Proof

(1) By definition of \(\mathcal {C}^{\triangleleft }\), there is a run r of \(\mathcal {C}^{\triangleleft }\) on \(w^{\rho}_{ij}\), namely, \(s_{0}^{\triangleleft }=s^{\triangleleft }_{i-1}\xrightarrow {(a_{i},\rho(i))}s^{\triangleleft }_{i}\cdots\xrightarrow{(a_{j},\rho (j))}s^{\triangleleft }_{j}\) where for all k, i−1≤kj, \(s^{\triangleleft }_{k}=(s^{\mathord {\leq }}_{k},s'^{\mathord {\leq }}_{k},b_{k})\). Further note that \(s^{\mathord {\leq }}_{i-1}=s_{0}^{\mathord {\leq }}\) and \(s'^{\mathord {\leq }}_{i}=s'^{\mathord {\leq }}_{i-1}=s_{0}^{\mathord {\leq }}\) and for all k, ikj, b k =a i . Now \(s'^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) implies there is k, i<kj such that \(s'^{\mathord {\leq }}_{k+1}\neq s_{0}^{\mathord {\leq }}\) which means that a k =b k . But b k =a i and so a k =a i . Also, for this k, \(s'^{\mathord {\leq }}_{k}\xrightarrow{(a_{k},\rho(k))}\cdots\xrightarrow{(a_{j},\rho (j))}s'^{\mathord {\leq }}_{j}\) is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{kj}\). Conversely, if ∃k, i<kj such that a k =a i and there is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{kj}\) then in r we find that \(s'^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\). Thus, we can conclude that \(s'^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) iff there is k, i<kj such that a k =a i and there is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{kj}\). But by Lemma 7.1 there is an accepting run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{kj}\) iff kM j. Thus \(s'^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) iff there is k, i<kj such that a k =a i and kM j. Also by Lemma 7.1 we have \(s^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) iff iM j. Finally r is an accepting run of \(\mathcal {C}^{\triangleleft }\) on \(w^{\rho}_{ij}\) iff \(s^{\triangleleft }_{j}\in F^{\triangleleft }\), i.e., \(s^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) and \(s'^{\mathord {\leq }}_{j}\notin F^{\mathord {\leq }}\). And by the above arguments, this happens iff iM j and for all k, i<kj, either a k a i or \(k\not \leq ^{M} j\). But this is exactly the definition of \((j,i)\in \mathrm {Prev}^{M}_{a_{i}}\). Thus \(w^{\rho}_{ij}\in \mathcal {L}( \mathcal {C}^{\triangleleft })\) iff \((j,i)\in \mathrm {Prev}^{M}_{a_{i}}\).

(2) Similarly, there is a run r′ of \(\mathcal {C}^{\triangleright }\) on \(w^{\rho}_{ij}\), namely, \(s_{0}^{\triangleright }=s^{\triangleright }_{i-1}\xrightarrow {(a_{i},\rho(i))}s^{\triangleright }_{i}\cdots\xrightarrow{(a_{j},\rho (j))}s^{\triangleright }_{j}\) where for all k, i−1≤kj, \(s^{\triangleright }_{k}=(s^{\mathord {\leq }}_{k},A_{k},b_{k})\) for b k ∈{0,1}, such that \(r'_{1}=s^{\mathord {\leq }}_{i-1}\xrightarrow{(a_{i},\rho(i))}\cdots\xrightarrow {(a_{j},\rho(j))}s^{\mathord {\leq }}_{j}\) is a run of \(\mathcal {C}^{\mathord {\leq }}\) on \(w^{\rho}_{ij}\). Then r′ is accepting iff \(s^{\mathord {\leq }}_{j}\in F^{\mathord {\leq }}\) and b j =1, i.e., iff \(r'_{1}\) is accepting and a j ∉A j−1. By Lemma 7.1, \(r'_{1}\) is accepting iff iM j. Also a j ∉A j−1 implies a j ∉A k for all ik<j which means that for all k, ik<j, a k ≠a j . Thus, we conclude that r′ is accepting iff iM j and for all k, ik<j, a k ≠a j , i.e., iff \((i,j)\in \mathrm {Next}^{M}_{a_{j}}\). □

7.2 From ECMPA to timed automata

We now describe how a TA can simulate an ECMPA by reading timed linearizations. If we ignore the clock constraints and timing issues, this simulation amounts to giving an alternate semantics of an MPA directly over linearizations of MSCs. However, when we include the timing constraints of the ECMPA, then along the run of the TA we need to know when to verify these constraints and against which clocks. We maintain this information in the state space using the gadgets.

Intuitively, at each position of a run of the TA, we start a new copy of the Prev gadget and reset a corresponding clock z. Thus, at a later position, if we encounter a Prev a constraint for some aAct, and if this copy of the Prev gadget is in a final state, then we know that these positions are related by the Prev a relation, hence the constraint must be checked against clock z.

Similarly, at each position of the run where we encounter a Next a constraint, we start a new copy of the Next gadget and reset a clock z′. Then, when it reaches an accepting state and the last transition is an a-action, we know that these positions are related by the Next a relation. Hence, at this point we verify that clock z′ satisfies the constraint mentioned when the gadget was started. However, for this we also need to maintain in the state space the constraint itself so that we can recover it and verify it at the later position.

Finally, for message constraints, we reset a clock when we encounter the send action and verify it when we reach the correct receive. This information is already contained in the state space as we will see. In addition however, we need to maintain the message constraint itself in the state space, so that when we reach the receive we know which constraint to check the clock against.

We formalize these ideas below. Let us first recall the well known notion of timed automata [2]. Unlike event-clock automata, whose clocks are implicit, timed automata have a set \(\mathcal {Z}\) of explicit clocks that can be guarded by means of clock formulas. The set \(\mathrm {Form}(\mathcal {Z})\) of clock formulas over \(\mathcal {Z}\) is given by the grammar

$$\varphi ::= \mathrm {true}\mid x \bowtie c \mid\neg \varphi \mid \varphi _1 \wedge \varphi _2 \mid \varphi _1 \vee \varphi _2 $$

where x ranges over \(\mathcal {Z}\), ⋈∈{<,≤,>,≥,=}, and c ranges over ℕ. A clock valuation over \(\mathcal {Z}\) is a mapping \(\nu : \mathcal {Z}\rightarrow \mathbb {R}_{\ge 0}\). We say that ν satisfies \(\varphi \in \mathrm {Form}(\mathcal {Z})\), written νφ, if φ evaluates to true using the values given by ν. For \(R\subseteq \mathcal {Z}\), we let ν[R↦0] denote the clock valuation defined by ν[R↦0](x)=0 if xR, and ν[R↦0](x)=ν(x), otherwise. A timed automaton (over Act) is a tuple \(\mathcal {B}=(Q_{\mathcal {B}},\mathcal {Z}_{\mathcal {B}},\delta_{\mathcal {B}},\iota_{\mathcal {B}},F_{\mathcal {B}})\) where \(Q_{\mathcal {B}}\) is its set of states, \(\mathcal {Z}_{\mathcal {B}}\) is its set of clocks, \(\delta_{\mathcal {B}} \subseteq Q_{\mathcal {B}} \times \mathrm {Form}(\mathcal {Z}_{\mathcal {B}}) \times \mathit {Act}\times 2^{\mathcal {Z}_{\mathcal {B}}} \times Q_{\mathcal {B}}\) is the set of transitions, \(\iota\in Q_{\mathcal {B}}\) is the initial state, and \(F_{\mathcal {B}} \subseteq Q_{\mathcal {B}}\) is the set of final states. We say that \(\mathcal {B}\) is finite if both \(Q_{\mathcal {B}}\) and \(\mathcal {Z}_{\mathcal {B}}\) are finite. The language of \(\mathcal {B}\) is a set of timed words \(\mathcal {L}_{\mathit {tw}}(\mathcal {B}) \subseteq(\mathit {Act}\times \mathbb {R}_{\ge 0})^{\ast}\). For σ=(a 1,t 1)…(a n ,t n )∈(Act×ℝ≥0), we let \(\sigma\in \mathcal {L}_{\mathit {tw}}(\mathcal {B})\) iff there is an accepting run r of \(\mathcal {B}\) on σ, i.e., there is a sequence

$$r=(\mathrm{st}_0,\nu _0)\xrightarrow{a_1,t_1}(\mathrm{st}_1,\nu _1)\xrightarrow{a_2,t_2} \cdots\xrightarrow{a_n,t_n}(\mathrm{st}_n,\nu _n) $$

(where, for all i∈{0,…,n}, \(\mathrm{st}_{i} \in Q_{\mathcal {B}}\) and \(\nu _{i}: \mathcal {Z}_{\mathcal {B}}\rightarrow \mathbb {R}_{\ge 0}\) with ν 0(x)=0 for all \(x \in \mathcal {Z}_{\mathcal {B}}\)) such that \(\mathrm{st}_{0}=\iota_{\mathcal {B}}\), \(\mathrm{st}_{n}\in F_{\mathcal {B}}\), and, for each i∈{1,…,n}, there exist \(\varphi _{i} \in\mathit{Form}(\mathcal {Z}_{\mathcal {B}})\) and \(R_{i} \in2^{\mathcal {Z}_{\mathcal {B}}}\) satisfying \((\mathrm{st}_{i-1},\varphi _{i},a_{i},R_{i},\mathrm{st}_{i}) \in\delta_{\mathcal {B}}\), (ν i−1+t i t i−1)⊨φ i , and ν i =(ν i−1+t i t i−1)[R i ↦0] (where we assume t 0=0).
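As an illustration of this run semantics (not of the decision procedure of [2], which works with region or zone abstractions), the following Python sketch naively checks whether a finite timed automaton accepts a given timed word; all encodings are ours.

from typing import Dict, List, Tuple

# a transition is (source, guard, action, resets, target); a guard is a list of
# atomic constraints (clock, op, c) with op among '<', '<=', '>', '>=', '=='
def _sat(nu: Dict[str, float], guard: List[Tuple[str, str, int]]) -> bool:
    ops = {'<': lambda a, b: a < b, '<=': lambda a, b: a <= b,
           '>': lambda a, b: a > b, '>=': lambda a, b: a >= b,
           '==': lambda a, b: a == b}
    return all(ops[op](nu[z], c) for (z, op, c) in guard)

def tw_accepts(trans, clocks, init, finals, word) -> bool:
    """Check whether the timed word [(a1,t1),...,(an,tn)] has an accepting run.

    Clocks advance by t_i - t_{i-1}, guards are evaluated after letting time
    elapse and before resets, and resets set clocks back to 0 (with t_0 = 0).
    """
    configs = {(init, tuple(sorted((z, 0.0) for z in clocks)))}
    prev_t = 0.0
    for (a, t) in word:
        nxt = set()
        for (state, nu_items) in configs:
            nu = {z: v + (t - prev_t) for (z, v) in nu_items}          # let time elapse
            for (src, guard, b, resets, tgt) in trans:
                if src == state and b == a and _sat(nu, guard):
                    nu2 = {z: (0.0 if z in resets else v) for (z, v) in nu.items()}
                    nxt.add((tgt, tuple(sorted(nu2.items()))))
        configs, prev_t = nxt, t
    return any(state in finals for (state, _) in configs)

For instance, a one-state, one-clock automaton with the single transition ('q', [('x', '<=', 2)], 'a', frozenset({'x'}), 'q') accepts exactly those timed words over {a} in which each event occurs at most 2 time units after the previous one (the first being measured from time 0).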

Theorem 7.2

(From [2])

Given a finite timed automaton \(\mathcal {B}\), one can decide if \(\mathcal {L}_{\mathit {tw}}(\mathcal {B}) \neq\emptyset\).

For the rest of this section, we fix an ECMPA \(\mathcal {A}=(\{\mathcal {A}_{p}\}_{p \in \mathit {Proc}},\mathit {Act},\varDelta ,F)\), with \(\mathcal {A}_{p}= (S_{p},\iota _{p},\rightarrow _{p})\), and an integer B>0. We also fix an (infinite) set of indices Ind=Act×ℕ which will be used to index the “copies” of the gadgets that we will use. We will define an infinite timed automaton \(\mathcal {B}=(Q_{\mathcal {B}},\mathcal {Z}_{\mathcal {B}},\delta _{\mathcal {B}},\iota_{\mathcal {B}},F_{\mathcal {B}})\) such that \(\mathcal {L}_{\mathit {tw}}(\mathcal {B}) = \{\sigma\in(\mathit {Act}\times \mathbb {R}_{\ge 0})^{\ast}\mid \mathrm{there\ is}\ T \in \mathcal {L}_{\mathit{time}}(\mathcal {A})\ \mathrm{such\ that}\ \sigma\ \mathrm{is\ a}\ B\mbox{-bounded}\ \mathrm{timed\ linearization\ of}\ T\}\). A state \(\mathrm{st}\in Q_{\mathcal {B}}\) is a 6-tuple \((\overline{s},\chi,\eta ,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\) where:

  • \(\overline{s}=(s_{p})_{p\in \mathit {Proc}}\in{\prod_{p \in \mathit {Proc}}}S_{p}\) is a tuple of local states from the ECMPA.

  • \(\chi:\mathit {Ch}\to(\mathcal {M}\times \varDelta )^{\le B}\) describes the contents of the channels.

  • η:Act→{0,…,B−1} gives the B-stamping number that should be assigned to the next occurrence of an action.

  • ξ ◁ :Ind⤏Q ◁ associates states of \(\mathcal {C}^{\triangleleft }\) with certain indices. Thus, the indices in dom(ξ ◁ ) of a state specify which copies of the gadgets are “active” in that state.

  • \(\xi ^{\triangleright }:\mathrm {Ind}\dashrightarrow Q^{\triangleright }\times \mathrm {Int}(\mathcal {A})\) associates with some indices states of \(\mathcal {C}^{\triangleright }\), along with the constraints that need to be maintained.

  • \(\gamma :\mathit {Ch}\times\{0,\ldots,B-1\} \dashrightarrow \mathrm {Int}(\mathcal {A})\) describes the guards attached to messages.

The initial state is \(\iota_{\mathcal {B}}=((\iota_{p})_{p\in \mathit {Proc}},\chi _{0},\eta_{0},\xi ^{\triangleleft }_{0},\xi ^{\triangleright }_{0},\gamma _{0})\) where χ 0 and η 0 map any argument to the empty word and 0 respectively, and the partial maps \(\xi ^{\triangleleft }_{0}\), \(\xi ^{\triangleright }_{0}\) and γ 0 are nowhere defined. We will use clocks from the set \(\mathcal {Z}_{\mathcal {B}}= \{z^{\triangleleft }_{a,i}, z^{\triangleright }_{a,i} \mid(a,i) \in \mathrm {Ind}\} \cup\{z^{\gamma}_{p,q,i} \mid(p,q)\in \mathit {Ch},\allowbreak i\in\nobreak \{0,\ldots,B-1\}\}\). Also, we fix some notation regarding how we write constraints using these clocks. For a clock \(z\in \mathcal {Z}_{\mathcal {B}}\) and an interval \(I\in \mathcal {I}\) with endpoints l,r∈ℕ, we write “z∈I” to denote a constraint from \(\mathrm {Form}(\mathcal {Z}_{\mathcal {B}})\). More specifically, z∈(l,r) denotes z>l∧z<r, whereas z∈[l,r) refers to z≥l∧z<r, and so on.

The transition relation \(\delta_{\mathcal {B}}\subseteq Q_{\mathcal {B}}\times \mathrm {Form}(\mathcal {Z}_{\mathcal {B}}) \times \mathit {Act}\times2^{\mathcal {Z}_{\mathcal {B}}} \times Q_{\mathcal {B}}\) is defined by,

$$\bigl(\bigl(\overline{s},\chi,\eta,\xi ^\triangleleft ,\xi ^\triangleright ,\gamma \bigr),\varphi ,a,R, \bigl( \overline{s}',\chi',\eta',\xi '^\triangleleft , \xi '^\triangleright ,\gamma '\bigr) \bigr)\in \delta_\mathcal {B}$$

if there are \(p,q\in \mathit {Proc},m\in \mathcal {M},\theta\in\{!,?\}\) such that a=pθq(m) and there exists a p-local transition of the ECMPA, \((s_{p},a,g,d,s'_{p})\in{\rightarrow_{p}}\) for some \(g\in [\mathrm {TC}\dashrightarrow \mathcal {I}]\) and d∈Δ, such that the following conditions (i)–(ix) hold:

  1. (i)

    \(s'_{r}=s_{r}\) for all rProc∖{p}.

  2. (ii)

    If θ=! then χ′(p,q)=(m,d)⋅χ(p,q), χ′(r,s)=χ(r,s) for all (r,s)∈Ch∖{(p,q)}.

  3. (iii)

    If θ=? then χ(q,p)=χ′(q,p)⋅(m,d), χ′(r,s)=χ(r,s) for all (r,s)∈Ch∖{(q,p)}.

  4. (iv)

    For all bAct,

    $$\eta'(b)= \begin{cases} (\eta(b)+1)\bmod B& \text{if}\ b \in p\theta q\\ \eta(b)&\text{otherwise } \end{cases} $$
  5. (v)

    The states of the previous automata are updated, and a new copy of \(\mathcal {C}^{\triangleleft }\) is initialized at the current position, so that we can determine which later positions are related to the current one by the previous-occurrence relation.

    $$\xi '^\triangleleft (b,i) = \begin{cases} \delta ^{\triangleleft }(s_0^{\triangleleft },(a,\eta(a))) & \text{if}\ b=a \ \text{and}\ i=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^\triangleleft (a)))\\ \delta ^{\triangleleft }(\xi ^\triangleleft (b,i),(a,\eta(a))) & \text{if}\ (b,i)\in \mathrm {dom}(\xi ^\triangleleft )\\ \text{undefined} & \text{otherwise} \end{cases} $$
  6. (vi)

    The states of the next automata are updated along with the corresponding guards. A new copy of \(\mathcal {C}^{\triangleright }\) is initialized for each bAct, if there is a Next b constraint on the local transition. The guard itself is stored in the second component, so that it can be verified when we reach the next occurrence of the action. Once verified, we release the guard and the corresponding copy of \(\mathcal {C}^{\triangleright }\).

    $$\xi '^\triangleright (b,i) = \begin{cases} (\delta ^{\triangleright }(s_0^{\triangleright },(a,\eta(a))),g(\mathrm {Next}_b)) & \text{if}\ \mathrm {Next}_b\in \mathrm {dom}(g) \ \text{and}\ \\ & i=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^\triangleright (b))) \\ (\delta ^{\triangleright }(s^\triangleright ,(a,\eta(a))),I) & \text{if}\ \xi ^\triangleright (b,i)=(s^\triangleright ,I)\ \text{and}\ \\ & \neg(a=b\wedge \delta ^{\triangleright }(s^\triangleright ,(a,\eta(a)))\in F^{\triangleright }) \\ \text{undefined} &\text{otherwise} \end{cases} $$
  7. (vii)

    The guards attached to message constraints are maintained as expected. A send event introduces a constraint, which is retained until its matching receive releases it.

    $$\gamma '\bigl((r,s),i\bigr)= \begin{cases} g(\mathrm {Msg}) & \text{if}\ a\in r!s,i=\eta(a),\ \mathrm {Msg}\in \mathrm {dom}(g)\\ \text{undefined} & \text{if}\ a\in s?r \ \text{and}\ i=\eta(a) \\ \gamma ((r,s),i) & \text{otherwise} \end{cases} $$
  8. (viii)

    A clock is reset for every new copy of \(\mathcal {C}^{\triangleleft }\), \(\mathcal {C}^{\triangleright }\) and message constraint introduced at this transition.

  9. (ix)

    The guard must ensure that all constraints that get matched at the current event are satisfied. Thus φ=φ ◁ ∧φ ▷ ∧φ m where

    $$\varphi^\triangleleft = \bigwedge_{\substack{\{(b,i)\mid \mathrm {Prev}_b\in \mathrm {dom}(g) \\ \text{and}\ \xi '^\triangleleft (b,i)\in F^{\triangleleft }\}}} z^\triangleleft _{b,i}\in g(\mathrm {Prev}_b) \wedge \bigwedge_{\substack{\{b \mid \mathrm {Prev}_b\in \mathrm {dom}(g) \\ \text{and}\ \{i\mid \xi '^\triangleleft (b,i)\in F^{\triangleleft }\}=\emptyset\}}} \mathrm{false} $$

    ensures that all previous constraints that are matched are satisfied. Thus if the local transition contains a Prev b constraint, then we have to check \(z^{\triangleleft }_{b,i}\in g(\mathrm {Prev}_{b})\) for the (unique) i such that ξ ′◁ (b,i)∈F ◁ . If there is no such i, then there is no b-action in the past of the current event and the Prev b constraint of the local transition cannot be satisfied. In this case, we set φ ◁ to false. For next constraints, we have

    If the current action is the next occurrence of a from some positions where a next guard was registered, then for each such position there is a copy (a,i) of \(\mathcal {C}^{\triangleright }\) which reaches a final state. Thus, we verify the corresponding clock against the constraint recovered from ξ ▷ . For message constraints, we have

The set of final states is \(F_{\mathcal {B}}= \{(\overline{s},\chi,\eta ,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\in Q_{\mathcal {B}}\mid\overline{s}\in F, \chi =\chi_{0}, \mathrm {dom}(\xi ^{\triangleright })=\mathrm {dom}(\gamma )=\emptyset\}\). This ensures that each registered guard has been checked. Indeed, a next or message constraint is released only when it is checked with the guard φ.

One critical observation here is that, once we have specified the local transition of \(\mathcal {A}\), this global transition of \(\mathcal {B}\) gets determined uniquely. Thus, this step is always deterministic. Note that the above automaton \(\mathcal {B}\) has no ϵ-transitions either. Now, we prove that \(\mathcal {L}_{\mathit {tw}}(\mathcal {B})\) contains precisely the B-bounded timed linearizations of \(\mathcal {L}_{\mathit{time}}(\mathcal {A})\).
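To make the determinism of this construction concrete, the following Python sketch (our own illustration; identifiers such as global_step, C_prev, C_next and the clock-name encodings are hypothetical and not part of the formal definition) traces how items (v)–(ix) update the gadget components of a state of \(\mathcal {B}\) and assemble the guard and reset set from a single local transition; the updates of \(\overline{s}\), χ and η are omitted.

```python
# Schematic sketch only: gadget automata C_prev / C_next are dicts with keys
# 'init', 'delta' (a function of (state, (action, stamp))) and 'final' (a set).
# Partial maps are Python dicts; clock names are tuples; intervals are opaque.

def fresh(table, b):
    """Smallest i with (b, i) unused, i.e. min(N \\ dom(table(b)))."""
    i = 0
    while (b, i) in table:
        i += 1
    return i

def global_step(state, a, eta_a, chan, guard, C_prev, C_next):
    """One global transition of B driven by a local transition with action `a`,
    stamp eta_a, channel info `chan` = ('!', r, s) or ('?', r, s) or None, and a
    timing guard `guard` mapping ('Prev', b), ('Next', b), ('Msg',) to intervals."""
    xi_p, xi_n = dict(state['xi_prev']), dict(state['xi_next'])
    gam = dict(state['gamma'])
    phi, resets, letter = [], set(), (a, eta_a)

    # (v) advance all previous gadgets and start a fresh copy for the current action
    for key, q in list(xi_p.items()):
        xi_p[key] = C_prev['delta'](q, letter)
    i0 = fresh(state['xi_prev'], a)
    xi_p[(a, i0)] = C_prev['delta'](C_prev['init'], letter)
    resets.add(('z<', a, i0))                               # (viii)

    # (vi) advance all next gadgets; a copy matched by this event is checked and released
    for (b, i), (q, I) in list(xi_n.items()):
        q2 = C_next['delta'](q, letter)
        if b == a and q2 in C_next['final']:
            phi.append((('z>', b, i), I))                   # part of phi_next, see (ix)
            del xi_n[(b, i)]
        else:
            xi_n[(b, i)] = (q2, I)
    for key, interval in guard.items():                     # register new Next_b guards
        if key[0] == 'Next':
            b = key[1]
            j0 = fresh(state['xi_next'], b)
            xi_n[(b, j0)] = (C_next['delta'](C_next['init'], letter), interval)
            resets.add(('z>', b, j0))                       # (viii)

    # (ix) phi_prev: every Prev_b guard needs an accepting copy of C_prev, else 'false'
    for key, interval in guard.items():
        if key[0] == 'Prev':
            b = key[1]
            hits = [i for (c, i) in xi_p if c == b and xi_p[(c, i)] in C_prev['final']]
            phi.append((('z<', b, hits[0]), interval) if hits else 'false')

    # (vii) message guards: a send registers an interval, the matching receive checks it
    if chan is not None:
        theta, r, s = chan                                  # (r, s) is the channel
        if theta == '!' and ('Msg',) in guard:
            gam[(r, s, eta_a)] = guard[('Msg',)]
            resets.add(('zm', r, s, eta_a))                 # (viii)
        if theta == '?' and (r, s, eta_a) in gam:
            phi.append((('zm', r, s, eta_a), gam.pop((r, s, eta_a))))  # part of phi_msg

    return dict(state, xi_prev=xi_p, xi_next=xi_n, gamma=gam), phi, resets
```

Every choice made in this sketch is forced by the local transition, which is exactly the determinism observed above.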

Theorem 7.3

\(\mathcal {L}_{\mathit {tw}}(\mathcal {B}) = \{\sigma\in(\mathit {Act}\times \mathbb {R}_{\ge 0})^{\ast}\mid\ \mathit{there}\ \mathit{is}\ T \in \mathcal {L}_{\mathit{time}}(\mathcal {A})\ \mathit{such}\ \mathit{that}\ \sigma\ \mathit{is}\ \mathit{a} \ B\mbox{-\textit{bounded}}\ \mathit{timed}\ \mathit{linearization}\ \mathit{of}\ T\}\).

Proof

Let \(T=\mbox {$(M,t)$}\) with \(M=\mbox {$(E,\leq ,\lambda )$}\) over Act. Then, a B-bounded timed linearization σ=(a 1,t 1)…(a n ,t n ) of T generates the corresponding B-bounded linearization of M, namely a 1⋯a n over the same set of positions {1,…,n}. Thus, as stated in the previous section, we can interpret the events from E to be positions from {1,…,n}.
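For instance (a small illustration with numbers chosen only for this example, assuming the usual definition that a linearization is B-bounded if no channel ever holds more than B pending messages along it): take two processes p and q, send events e 1 ,e 3 on p and receive events e 2 ,e 4 on q with e 1 matched to e 2 and e 3 to e 4 , and time-stamps t(e 1 )=t(e 3 )=1 and t(e 2 )=t(e 4 )=2. Since time-stamps must be non-decreasing along a timed linearization, the only timed linearization is σ=(p!q,1)(p!q,1)(q?p,2)(q?p,2), whose underlying linearization is p!q p!q q?p q?p over the positions {1,…,4}; both messages are pending after position 2, so σ is 2-bounded but not 1-bounded.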

Then, recall that \(T\in \mathcal {L}_{\mathit{time}}(\mathcal {A})\) iff there is an accepting run r of \(\mathcal {A}\) on T, where r:E→⋃ p∈Proc S p is given by Definition 4.1. Also, let us recall that \(\sigma\in \mathcal {L}_{\mathit {tw}}(\mathcal {B})\) iff there is an accepting run r′ of \(\mathcal {B}\) on σ, i.e., there is a sequence,

(15) \((\mathrm{st}_{0},\nu _{0}),(\mathrm{st}_{1},\nu _{1}),\ldots,(\mathrm{st}_{n},\nu _{n})\)

where for all i∈{0,…,n}, \(\mathrm{st}_{i}=( \overline {s}_{i},\chi _{i},\eta_{i},\xi ^{\triangleleft }_{i},\xi ^{\triangleright }_{i},\gamma _{i})\in Q_{\mathcal {B}}\) (with \(\overline {s}_{i}=( s^{i}_{p})_{p\in \mathit {Proc}}\in\prod_{p\in \mathit {Proc}} S_{p} \)), \(\mathrm{st}_{0}=\iota_{\mathcal {B}}\), \(\mathrm{st}_{n}\in F_{\mathcal {B}}\) and \(\nu _{i}: \mathcal {Z}_{\mathcal {B}}\rightarrow \mathbb {R}_{\ge 0}\) (with ν 0(x)=0 for all \(x \in \mathcal {Z}_{\mathcal {B}}\)), and for each i∈{1,…,n}, there exist \(\varphi _{i} \in\mathit {Form}(\mathcal {Z}_{\mathcal {B}})\) and \(R_{i} \in2^{\mathcal {Z}_{\mathcal {B}}}\) such that,

(16) \((\mathrm{st}_{i-1},\varphi _{i},a_{i},R_{i},\mathrm{st}_{i})\in \delta_{\mathcal {B}}\)
(17) \((\nu _{i-1}+t_{i}-t_{i-1})\models \varphi _{i}\)
(18) \(\nu _{i}=(\nu _{i-1}+t_{i}-t_{i-1})[R_{i}\mapsto 0]\)

Now we construct an accepting run r′ of \(\mathcal {B}\) on σ from an accepting run r of \(\mathcal {A}\) on T and vice versa.

(⇒) Let r be an accepting run of \(\mathcal {A}\) on T. Then, we construct the run r′ inductively from i=0 to i=n. In addition, we maintain two state invariants at each step i∈{0,…,n}:

(19) for all p∈Proc: \(s^{i}_{p}=r(j)\) where j is the largest position j≤i with j∈E p , and \(s^{i}_{p}=\iota_{p}\) if no such j exists;
(20) for all p,q∈Proc, θ∈{!,?} and b∈pθq: \(\eta_{i}(b)=|\{j\le i\mid \lambda (j)\in p\theta q\}| \bmod B\).

Then, indeed at i=0, we have \(\mathrm{st}_{0}=\iota_{\mathcal {B}}=( \overline {s}_{0},\chi _{0},\eta_{0},\xi ^{\triangleleft }_{0},\xi ^{\triangleright }_{0},\gamma _{0})\). This satisfies our state invariants (19)–(20) since \(s^{0}_{p}=\iota_{p}\) for all p∈Proc and η 0(a)=0 for all a∈Act. Now for some i∈{1,…,n}, assuming we have constructed the run r′ until (st i−1,ν i−1), let a i =pθq(m) for some p,q∈Proc, \(m\in \mathcal {M}\), and θ∈{!,?}. We then extend the run r′ to stage i by exhibiting st i , ν i , φ i , and R i such that conditions (16)–(20) hold.

From the definition of r, we have a local transition of the ECMPA on event i. More precisely, \((r^{-}(i),a_{i},g_{i},d_{i},r(i))\in \mathord{\rightarrow_{p}}\) for some guard g i and \(d_{i}\in \Delta\). Recall that \(r^{-}(i)=r(i')\) for \(i'\mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{pp} i\), and \(r^{-}(i)=\iota_{p}\) if such an event i′ does not exist. Thus, by condition (19) at stage i−1, we have \(s^{i-1}_{p}=r^{-}(i)\). And by choosing \(s^{i}_{p}=r(i)\) we obtain \(( s^{i-1}_{p},a_{i}, g_{i}, d_{i}, s^{i}_{p})\in\mathord{\rightarrow_{p}}\). Again, by choosing for all p′≠p, \(s^{i}_{p'}= s^{i-1}_{p'}\), condition (19) holds at stage i. (This follows since for p, the largest j≤i such that j∈E p is i itself. And for p′≠p, the largest j′≤i such that j′∈E p′ is the largest such event j′≤i−1.)

Now, as we commented in our construction, the local transition fully specifies the global transition and thus we get a transition of \(\mathcal {B}\),

$$\bigl(( \overline {s}_{i-1},\chi_{i-1},\eta_{i-1},\xi ^\triangleleft _{i-1}, \xi ^\triangleright _{i-1},\gamma _{i-1}),\varphi _{i},a_i,R_{i}, ( \overline {s}_{i},\chi_{i},\eta_{i},\xi ^\triangleleft _{i}, \xi ^\triangleright _{i},\gamma _i)\bigr)\in\delta_{\mathcal {B}}, $$

where \(\varphi _{i},R_{i},\chi_{i},\eta_{i},\xi ^{\triangleleft }_{i},\xi ^{\triangleright }_{i},\gamma _{i}\) are defined from their values at stage i−1 and the local transition. Thus condition (16) holds at i, with \(\mathrm{st}_{i}=( \overline {s}_{i},\chi_{i},\eta_{i},\xi ^{\triangleleft }_{i},\xi ^{\triangleright }_{i},\gamma _{i})\). Again condition (20) continues to hold at i if it holds at i−1. (If b∈pθq, then η i (b)=(η i−1(b)+1) mod B=(|{i′≤i−1∣λ(i′)∈pθq}|+1) mod B=|{i′≤i∣λ(i′)∈pθq}| mod B. And for b∈Act∖pθq, it follows since η i (b)=η i−1(b).) Now, we just define ν i =(ν i−1+t i −t i−1)[R i ↦0], so that condition (18) holds at i. Thus, we have extended run r′ of \(\mathcal {B}\) on σ to i, if we prove condition (17).

Claim

(ν i−1+t i t i−1)⊨φ i .

Proof

The proof of this claim is by induction on the structure of φ i . But first we observe that the mapping ρ:E→{0,…,B−1} defined by ρ(i)=η i (a i ) is a B-well-stamping for M. This follows from the fact that the state invariant, condition (20), holds up to stage i. We have the following cases to consider.

(1) Previous constraint of the form \(z^{\triangleleft }_{a,k}\in g_{i}(\mathrm {Prev}_{a})\) or false: If for some a∈Act, Prev a ∈dom(g i ), then by the definition of a run r of \(\mathcal {A}\) on T (i.e., condition (5) or (6)), there exists an event j such that \((i,j)\in \mathrm {Prev}^{T}_{a}\) (thus, a j =a). By Lemma 7.2, \((a_{j},\eta_{j}(a_{j}))\ldots(a_{i},\eta_{i}(a_{i}))\in \mathcal {L}( \mathcal {C}^{\triangleleft })\). And so, in the run of \(\mathcal {B}\) at stage i, \(\xi ^{\triangleleft }_{i}(a,k)\in F^{\triangleleft }\) where \(k=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^{\triangleleft }_{j-1}(a)))\). Hence, for any a such that Prev a ∈dom(g i ), the set \(\{\ell \mid \xi ^{\triangleleft }_{i}(a,\ell )\in F^{\triangleleft }\}\) is non-empty and so the false constraint cannot occur as a guard in this simulation, i.e., as part of φ i . Now, at stage j, we have \(z^{\triangleleft }_{a,k}\in R_{j}\) (there cannot be another Prev a guard for some other k′ since there is a unique preceding occurrence of each letter). Also \(z^{\triangleleft }_{a,k}\notin R_{j'}\) for all j′∈{j+1,…,i}, since \((a,k)\in \mathrm {dom}(\xi ^{\triangleleft }_{j'-1})\). Therefore in the valuation \((\nu _{i-1}+t_{i}-t_{i-1})(z^{\triangleleft }_{a,k})=\nu _{i-1}(z^{\triangleleft }_{a,k})+t_{i}-t_{i-1}=\nu _{i-2}(z^{\triangleleft }_{a,k})+t_{i-1}-t_{i-2}+t_{i}-t_{i-1}=\cdots=\nu _{j}(z^{\triangleleft }_{a,k})+t_{i}-t_{j}\) and \(\nu _{j}(z^{\triangleleft }_{a,k})=0\). Thus \(\nu _{j}(z^{\triangleleft }_{a,k})+t_{i}-t_{j}=t_{i}-t_{j}\) and so we are done if we show that t i −t j ∈g i (Prev a ). But this follows from condition (5) or (6), where we have |t(j)−t(i)|∈g i (Prev a ) with t(j)=t j and t(i)=t i . Thus we conclude that the valuation satisfies any previous clock constraint of the form \(z^{\triangleleft }_{a,k}\in g_{i}(\mathrm {Prev}_{a})\).

(2) Next constraint of the form \(z^{\triangleright }_{a_{i},k}\in I\): This implies that there is \(k\in \mathrm {dom}(\xi ^{\triangleright }_{i-1}(a_{i}))\) such that \(\xi ^{\triangleright }_{i-1}(a_{i},k)=(s^{\triangleright },I)\) and \(\delta ^{\triangleright }(s^{\triangleright },(a_{i},\eta _{i}(a_{i})))\in F^{\triangleright }\). Then, by the definition of the next update function \(\xi ^{\triangleright }\), we can conclude that there exists j<i with \(k=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^{\triangleright }_{j-1}(a_{i})))\) such that \(\xi ^{\triangleright }_{j}(a_{i},k)=(\delta ^{\triangleright }(s_{0}^{\triangleright },(a_{j},\eta(a_{j}))),I)\) with \(I=g_{j}(\mathrm {Next}_{a_{i}})\). Thus, \((a_{j},\eta_{j}(a_{j}))\ldots(a_{i},\eta _{i}(a_{i}))\in \mathcal {L}( \mathcal {C}^{\triangleright })\) and by Lemma 7.2, we can conclude that \((j,i)\in \mathrm {Next}^{T}_{a_{i}}\). Hence from the definition of run r on the TMSC T, we get \(|t(j)-t(i)|\in g_{j}(\mathrm {Next}_{a_{i}})=I\). Also, \(z^{\triangleright }_{a_{i},k}\in R_{j}\) and it is not reset until i: the copy (a i ,k) stays in \(\mathrm {dom}(\xi ^{\triangleright }_{j'})\) for all j′ with j≤j′<i, since if it were released at some such j′ then a j′ =a i and \((a_{j},\eta(a_{j}))\ldots (a_{j'},\eta(a_{j'}))\in \mathcal {L}( \mathcal {C}^{\triangleright })\), which would give \((j,j')\in \mathrm {Next}^{T}_{a_{i}}\) and contradict \((j,i)\in \mathrm {Next}^{T}_{a_{i}}\); hence no new copy can take the index (a i ,k) and reset its clock in between. Thus, for all j<j′<i we get \(z^{\triangleright }_{a_{i},k}\notin R_{j'}\), so \((\nu _{i-1}+t_{i}-t_{i-1})(z^{\triangleright }_{a_{i},k})=t_{i}-t_{j} \in I\) and we are done.

(3) Message constraint of the form \(z^{\gamma}_{q,p,k}\in \gamma _{i-1}(q,p,k)\): Here, (q,p,k)∈dom(γ i−1) with a i ∈p?q and η i−1(a i )=k. Then we look at the largest j≤i such that a j ∈q!p and η j−1(a j )=k. Such a j exists, since otherwise σ would not be a linearization of a valid TMSC T. Further, for all j′ with j<j′<i, it is not the case that a j′ ∈p?q and η j′−1(a j′ )=k. Then it follows that for all j′ with j≤j′<i, γ j′ (q,p,k)=g j (Msg). Also \((j,i)\in \mathrm {Msg}^{T}\) and so by definition of r on the TMSC, |t j −t i |∈g j (Msg)=γ i−1(q,p,k). And again, \(z^{\gamma}_{q,p,k}\in R_{j}\) and for all j<j′≤i, \(z^{\gamma}_{q,p,k}\notin R_{j'}\), so \((\nu _{i-1}+t_{i}-t_{i-1})(z^{\gamma}_{q,p,k})=t_{i}-t_{j}\in \gamma _{i-1}(q,p,k)\) and thus we are done. □

Now it is easy to see that the state \(\mathrm{st}_{n}=( \overline {s}_{n},\chi_{n},\eta_{n},\xi ^{\triangleleft }_{n},\xi ^{\triangleright }_{n},\gamma _{n})\) reached at the end of the above run is a final state. This follows from the fact that r is a successful run of \(\mathcal {A}\) on T: then we have \(\overline {s}_{n} \in F\) and χ n =χ 0 (since at the end of r the channel contents must be empty), and the partial maps \(\xi ^{\triangleright }_{n}\) and γ n are nowhere defined (since every next or message guard registered along r′ corresponds to a constraint of r that is matched within T, and the corresponding copy or entry is released when the matching event is processed).

(⇐) For the converse, from r′ as defined in (15) above, we want to construct a run r:E→⋃ p∈Proc S p of \(\mathcal {A}\) on T. We define, for each event i∈{1,…,n}, \(r(i)= s^{i}_{p_{i}}\). Now, at each i∈{1,…,n}, by (16), we have \((\mathrm{st} _{i-1},\varphi _{i},a_{i},R_{i},\mathrm{st}_{i})\in\delta_{\mathcal {B}}\). By definition of \(\mathcal {B}\), for each i∈{1,…,n}, we have \(( s^{i-1}_{p_{i}},a_{i},g_{i},d_{i}, s^{i}_{p_{i}})\in\mathord{\rightarrow _{p_{i}}}\) for some g i and d i . We now show that this map is a run of \(\mathcal {A}\) on T by verifying conditions (2), (5)–(6) in Definition 4.1. First, \(r^{-}(i)= s^{i-1}_{p_{i}}\), as can be proved by induction on i: For i=1, \(\mathrm{st}_{0}=\iota_{\mathcal {B}}\) implies that \(s^{0}_{p_{1}}=\iota_{p_{1}}\) and \(r^{-}(1)=\iota_{p_{1}}\) since 1 is the minimal event on that process. For i>1, \(r^{-}(i)=r(j)\) if there is \(j\mathrel {{<}{\llap {\mbox {$\cdot $}}}}_{p_{i} p_{i}}i\) and \(r^{-}(i)=\iota_{p_{i}}\) otherwise. In the former case, since the global transitions of \(\mathcal {B}\) change only the component of the acting process, we get \(s^{i-1}_{p_{i}}= s^{j}_{p_{j}}=r(j)=r^{-}(i)\); in the latter case, \(s^{i-1}_{p_{i}}=\iota_{p_{i}}=r^{-}(i)\). Now, for events i,j, if \(i<_{{pq}}^{M} j\) then \((r^{-}(i),a_{i},g_{i},d_{i},r(i))\in\mathord{\rightarrow_{p_{i}}}\) and \((r^{-}(j),a_{j},g_{j},d_{j},r(j))\in\mathord{\rightarrow_{p_{j}}}\) imply that d i =d j , which means that condition (2) holds. This follows from the definition of χ in \(\delta_{\mathcal {B}}\) ensuring that sent and received messages are synchronized. To prove the other conditions, for any event i and α∈TC, we need to show that, if α∈dom(g i ), then there exists j with \((i,j)\in \alpha ^{T}\) such that |t(i)−t(j)|∈g i (α). We have three cases depending on α.

  • α=Prev a for some a∈Act: If Prev a ∈dom(g i ), then firstly there exists some (unique) k such that \(\xi ^{\triangleleft }_{i}(a,k)\in F^{\triangleleft }\). If not, then the false constraint will occur in φ i which contradicts acceptance of σ by \(\mathcal {B}\). Thus we have that for this (a,k), the constraint \(z^{\triangleleft }_{a,k}\in g_{i}(\mathrm {Prev}_{a})\) occurs in φ i and \((\nu _{i-1}+t_{i}-t_{i-1})\models \varphi _{i}\) implies that \((\nu _{i-1}+t_{i}-t_{i-1})(z^{\triangleleft }_{a,k})\in g_{i}(\mathrm {Prev}_{a})\). Now by definition of previous state updates in the run of \(\mathcal {B}\), we can use Lemma 7.2 to conclude that there exists j<i for which \((i,j)\in \mathrm {Prev}^{M}_{a}\). Further, at j, we can see that \(z^{\triangleleft }_{a,k}\in R_{j}\), and for all j′ with j<j′<i, we have \(z^{\triangleleft }_{a,k}\notin R_{j'}\). Thus, \((\nu _{i-1}+t_{i}-t_{i-1})(z^{\triangleleft }_{a,k})=t_{i}-t_{j}\). Hence \((i,j)\in \mathrm {Prev}_{a}^{T}\) and |t(i)−t(j)|∈g i (Prev a ).

  • α=Next a for some a∈Act: If Next a ∈dom(g i ), then by definition of \(\xi ^{\triangleright }\), for \(k=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^{\triangleright }_{i-1}(a)))\), we have \(\xi ^{\triangleright }_{i}(a,k)=(\delta ^{\triangleright }(s_{0}^{\triangleright },(a_{i},\eta(a_{i}))),g_{i}(\mathrm {Next}_{a}))\). Also we reset clock \(z^{\triangleright }_{a,k}\). But as r′ is an accepting run, \(\xi ^{\triangleright }_{n}(a,k)\) is undefined and so there exists j>i such that a j =a, \(\xi ^{\triangleright }_{j-1}(a,k)=(s,I)\), and \(\delta ^{\triangleright }(s,(a_{j},\eta(a_{j})))\in F^{\triangleright }\), i.e., the copy (a,k) is released at j. Let j be the smallest such position; then, for all j′ with i<j′<j, we have \((a,k)\in \mathrm {dom}(\xi ^{\triangleright }_{j'})\), so the copy released at j is the one registered at i and thus I=g i (Next a ). Also \((a_{i},\eta_{i}(a_{i}))\ldots(a_{j},\eta_{j}(a_{j}))\in \mathcal {L}( \mathcal {C}^{\triangleright })\) and by Lemma 7.2, \((i,j)\in \mathrm {Next}_{a}^{T}\). Now, we have \((\nu _{j-1}+t_{j}-t_{j-1})(z^{\triangleright }_{a,k})= \nu _{i}(z^{\triangleright }_{a,k})+t_{j}-t_{i}=t_{j}-t_{i}\). Again as the run was successful, the constraint \(z^{\triangleright }_{a,k}\in I\) occurs in φ j and \((\nu _{j-1}+t_{j}-t_{j-1})(z^{\triangleright }_{a,k})\in I=g_{i}(\mathrm {Next}_{a})\). Thus t j −t i ∈g i (Next a ) and we are done.

  • α=Msg: This is similar to the above case (and easier, as the next automaton is not needed).  □

This completes both directions of the proof.

7.3 A finite version of \(\mathcal {B}\)

To get a finite version of \(\mathcal {B}\), we bound the set of indices Ind to a finite set Ind fin , thereby constructing a finite timed automaton \(\mathcal {B}'\) that is equivalent to \(\mathcal {B}\). The state space of \(\mathcal {B}'\) is the same as that of \(\mathcal {B}\), except that it uses indices from Ind fin to define the \(\xi ^{\triangleleft }\) and \(\xi ^{\triangleright }\) components. We construct \(\mathcal {B}'\) in two steps: first, we describe how the transitions of \(\mathcal {B}\) are modified to handle previous gadgets; next, we describe how they are modified to handle next gadgets. The automaton \(\mathcal {B}'\) obtained after both these modifications will turn out to be a finite timed automaton. To bound the set of indices, our basic idea is to reuse copies of the previous and next gadgets whenever it is safe to do so. First, we handle the previous case by examining when it is “safe” to release a copy of \(\mathcal {C}^{\triangleleft }\). The following proposition gives us the required criterion.

Proposition 7.2

Let \((\overline{s},\chi,\eta,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\) be a reachable state of \(\mathcal {B}\). If there exist two indices \((a,i),(a,j)\in \mathrm {dom}(\xi ^{\triangleleft })\), i≠j, such that \(\xi ^{\triangleleft }(a,i)=\xi ^{\triangleleft }(a,j)=s^{\triangleleft }\in Q^{\triangleleft }\), then no final state \(s^{\triangleleft }_{f}\in F^{\triangleleft }\) is reachable from \(s^{\triangleleft }\).

Proof

The copies of \(\mathcal {C}^{\triangleleft }\) indexed by (a,i) and (a,j) have been started at distinct positions labeled a, in order to keep track of two different pasts. Now suppose there exists \(s^{\triangleleft }_{f}\in F^{\triangleleft }\) such that \(s^{\triangleleft }_{f}\) is reachable from \(s^{\triangleleft }\). Then the position at which \(s^{\triangleleft }_{f}\) is reached would be related by Prev a to both starting positions, i.e., to the positions at which the clocks \(z^{\triangleleft }_{a,i}\) and \(z^{\triangleleft }_{a,j}\) were last reset. But this is impossible, because every position has at most one previous position labeled a. Thus no final state is reachable from \(s^{\triangleleft }\). □

This implies that we can safely remove the corresponding indices (a,i) and (a,j) from \(\mathrm {dom}(\xi ^{\triangleleft })\). Thus, we say that a state \(\mathrm{st}=(\overline{s},\chi,\eta,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\in Q_{\mathcal {B}}\) is ◁-safe if there are no two indices \((a,i),(a,j)\in \mathrm {dom}(\xi ^{\triangleleft })\), i≠j, such that \(\xi ^{\triangleleft }(a,i)=\xi ^{\triangleleft }(a,j)\). Otherwise, we say that st is ◁-unsafe and that (a,i),(a,j) are ◁-unsafe indices at the state st. A transition from \(\mathcal {B}\) is retained in \(\mathcal {B}'\) if it is between ◁-safe states. Further, every transition (st,φ,a,R,st′) in \(\mathcal {B}\) from a ◁-safe state st to a ◁-unsafe state \(\mathrm{st}'=(\overline{s}',\chi',\eta ',\xi '^{\triangleleft },\xi '^{\triangleright },\gamma ')\) is replaced by the transition \((\mathrm{st} ,\varphi ,a,R,\widetilde{\mathrm{st}'})\) between ◁-safe states. Hereby, \(\widetilde{\mathrm{st}'}=(\overline{s}',\chi',\eta',\widetilde {\xi '^{\triangleleft }},\xi '^{\triangleright },\gamma ')\) with \(\widetilde{\xi '^{\triangleleft }}(b,i)= \xi '^{\triangleleft }(b,i)\) if \((b,i)\in \mathrm {dom}(\xi '^{\triangleleft })\) and there is no j≠i such that \((b,j)\in \mathrm {dom}(\xi '^{\triangleleft })\) and \(\xi '^{\triangleleft }(b,i)=\xi '^{\triangleleft }(b,j)\). Otherwise, \(\widetilde{\xi '^{\triangleleft }}(b,i)\) is undefined. By Proposition 7.2, \(\mathcal {B}'\) still accepts the same set of timed words as \(\mathcal {B}\). This is enough to ensure finiteness in the previous case, as shown below.
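The corresponding pruning of the \(\xi ^{\triangleleft }\) component is a purely syntactic operation on the partial map; a minimal Python sketch (our own illustrative names, not part of the construction) is:

```python
def prune_prev(xi_prev):
    """Drop every index (b, i) that shares its gadget state with another index of the
    same letter; by Proposition 7.2 such copies of C_prev can never accept."""
    safe = {}
    for (b, i), q in xi_prev.items():
        duplicated = any(b2 == b and j != i and q2 == q
                         for (b2, j), q2 in xi_prev.items())
        if not duplicated:
            safe[(b, i)] = q
    return safe

# e.g. prune_prev({('a', 0): 's1', ('a', 1): 's1', ('a', 2): 's2'}) == {('a', 2): 's2'}
```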

Lemma 7.3

For any reachable state \((\overline{s},\chi,\eta ,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\in Q_{\mathcal {B}'}\), we have \(\mathrm {dom}(\xi ^{\triangleleft })\subseteq \mathit {Act}\times\{0,\ldots,|Q^{\triangleleft }|\}\).

Proof

Suppose not. Then, for some a∈Act, \(|\mathrm {dom}(\xi ^{\triangleleft }(a))|\geq |Q^{\triangleleft }|+1\). But this implies that there must exist at least two indices \((a,i),(a,j)\in \mathrm {dom}(\xi ^{\triangleleft })\), i≠j, such that \(\xi ^{\triangleleft }(a,i)=\xi ^{\triangleleft }(a,j)\). By the definition above, such indices would have been removed and, hence, cannot both be in the domain of \(\xi ^{\triangleleft }\). Thus, we have a contradiction. □

The remaining source of infinity comes from next constraints. The situation is not as easy as for previous constraints, since next constraints registered at several positions could be matched at the same time. Thus, the number of registered Next b -constraints may be unbounded. In particular, in some state of \(\mathcal {B}\), suppose \((b,i),(b,j)\in \mathrm {dom}(\xi ^{\triangleright })\) for some i≠j such that \(\xi ^{\triangleright }(b,i)=(s^{\triangleright },I)\) and \(\xi ^{\triangleright }(b,j)=(s^{\triangleright },I')\). Then, the constraints associated with i and j will be matched simultaneously. When matched, the guard on the transition of \(\mathcal {B}\) will include both \(z^{\triangleright }_{b,i}\in I\) and \(z^{\triangleright }_{b,j}\in I'\). Our idea is to keep only the stronger constraint and release the other. To determine the stronger constraint we deal separately with upper and lower constraints.

Refining the constraints

A clock constraint over \(\mathcal {Z}_{\mathcal {B}}\) is called an upper-guard if it is of the form xc where ∼∈{<,≤} for some \(x\in \mathcal {Z}_{\mathcal {B}},c\in \mathbb {Q}_{\ge 0}\). Similarly xc is a lower-guard if ∼∈{>,≥}. Note that each “xI” defines uniquely a lower and an upper guard, depending upon the endpoints of the interval I.

Definition 7.2

Let xc and x′∼′c′ be two upper-guards with ∼,∼′∈{<,≤} or two lower guards with ∼,∼′∈{>,≥}. We say xc is stronger than x′∼′c′ if, when evaluated at the same instant, xc holds implies that x′∼′c′ holds as well.

The stronger constraint can be determined with a diagonal guard: for upper guards, x∼c is stronger than x′∼′c′ if either x′−x<c′−c, or x′−x≤c′−c and (∼=< or ∼′=≤). The relation stronger than is transitive among upper-guards. It is also total: either x∼c is stronger than x′∼′c′, or the converse holds, or both, in which case we say that the two constraints are equivalent. The constraints are equivalent iff x′−x=c′−c and ∼=∼′. The above properties hold for lower guards as well, where x∼c is stronger than x′∼′c′ if either x′−x>c′−c, or x′−x≥c′−c and (∼=> or ∼′=≥).
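As a small illustration (our own sketch, with guards represented concretely; the automaton itself expresses the same tests with the diagonal constraints just described), the comparison can be phrased on current clock values as follows:

```python
def stronger_upper(vx, op, c, vx2, op2, c2):
    """Is the upper guard  x op c  stronger than  x' op2 c2  (op, op2 in {'<', '<='}),
    given the current values vx of x and vx2 of x'?  The difference vx2 - vx stays
    constant as time elapses, so this is exactly the diagonal test from the text."""
    d = vx2 - vx
    return d < c2 - c or (d == c2 - c and (op == '<' or op2 == '<='))

def stronger_lower(vx, op, c, vx2, op2, c2):
    """Analogous test for lower guards (op, op2 in {'>', '>='})."""
    d = vx2 - vx
    return d > c2 - c or (d == c2 - c and (op == '>' or op2 == '>='))

# Example: with x = 3.0 and x' = 5.0, the guard x < 2 is stronger than x' <= 5,
# since whenever x < 2 we have x' = x + 2 < 4 <= 5.
assert stronger_upper(3.0, '<', 2, 5.0, '<=', 5)
```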

Restricting the domain of ξ

Now we get back to our problem and change \(\mathcal {B}\) so that the size of \(\mathrm {dom}(\xi ^{\triangleright })\) in a state \(\mathrm{st}=(\overline{s},\chi,\eta ,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\) is bounded by \(|\mathit {Act}|\cdot(2|Q^{\triangleright }|+1)\). Note that a transition of \(\mathcal {B}\) may initiate at most |Act| new copies of \(\mathcal {C}^{\triangleright }\) (one for each b∈Act such that Next b ∈dom(g)). Hence, we say that state st is ▷-safe if for all b∈Act we have \(|\mathrm {dom}(\xi ^{\triangleright }(b))|\leq 2|Q^{\triangleright }|\). Only transitions between ▷-safe states of \(\mathcal {B}\) are retained in \(\mathcal {B}'\).

For a state st, b∈Act, and \(s^{\triangleright }\in Q^{\triangleright }\), we define \(\mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })=\{i\mid \xi ^{\triangleright }(b,i)=(s^{\triangleright },I), I\in\nobreak \mathcal {I}\}\). Also, for \(i\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\), we denote by I i the interval such that \(\xi ^{\triangleright }(b,i)=(s^{\triangleright },I_{i})\). Now, if the state st is not ▷-safe, then there exist b∈Act and \(s^{\triangleright }\in Q^{\triangleright }\) such that \(|\mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })|>2\). In this case, we say that st is ▷-unsafe for \((b,s^{\triangleright })\). Now, for each \((b,s^{\triangleright })\) such that st is ▷-unsafe for \((b,s^{\triangleright })\), we can find \(i_{\ell},i_{u}\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\) such that the lower guard defined by \(z^{\triangleright }_{b,i_{\ell}}\in I_{i_{\ell}}\) is stronger than all lower guards defined by \(z^{\triangleright }_{b,j}\in I_{j}\) for \(j\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\) and the upper guard defined by \(z^{\triangleright }_{b,i_{u}}\in I_{i_{u}}\) is stronger than all upper guards defined by \(z^{\triangleright }_{b,j}\in I_{j}\) for \(j\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\). From the definition of the relation stronger than, all constraints \(z^{\triangleright }_{b,j}\in I_{j}\) for \(j\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\) are subsumed by the conjunction of \(z^{\triangleright }_{b,i_{\ell}}\in I_{i_{\ell}}\) and \(z^{\triangleright }_{b,i_{u}}\in I_{i_{u}}\). Therefore, we can release all next constraints associated with (b,j) for \(j\in \mathrm {Activ}_{\mathrm {st}}(b,s^{\triangleright })\setminus\{i_{\ell},i_{u}\}\).
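To make the choice of i ℓ and i u concrete, the following sketch (our own, purely illustrative; recall that \(\mathcal {B}'\) cannot inspect clock values, so it instead guesses the pair and validates the guess with the diagonal guards introduced below) computes the two surviving indices on concrete clock values:

```python
def prune_next(active, nu):
    """active: index i -> (lo, lo_op, hi, hi_op) describing the interval I_i of the
    copy (b, i); nu: index i -> current value of the clock z_{b,i}.  Returns the
    indices whose constraints subsume all the others (Definition 7.2)."""
    if len(active) <= 2:
        return set(active)
    # strongest lower guard: the copy whose lower bound lies furthest in the future;
    # ties are broken in favour of a strict '>'
    i_lo = max(active, key=lambda i: (active[i][0] - nu[i], active[i][1] == '>'))
    # strongest upper guard: the copy whose upper bound expires first;
    # ties are broken in favour of a strict '<'
    i_up = min(active, key=lambda i: (active[i][2] - nu[i], active[i][3] != '<'))
    return {i_lo, i_up}
```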

To do this in the automaton, we define guards of the form \(\varphi (i_{\ell}^{b,s^{\triangleright }},i_{u}^{b,s^{\triangleright }})\) that evaluate to true if \(i_{\ell}^{b,s^{\triangleright }}\) and \(i_{u}^{b,s^{\triangleright }}\) determine stronger lower and upper guards among those defined by Activst(b,s ). Since the relation stronger than can be expressed with diagonal constraints as we have seen above, we have \(\varphi(i_{\ell}^{b,s^{\triangleright }},i_{u}^{b,s^{\triangleright }})\in \mathrm {Form}(\mathcal {Z}_{\mathcal {B}})\).

Thus, for each transition (st,φ,a,R,st′) in \(\mathcal {B}\) from a ▷-safe state st to a state \(\mathrm{st}'=(\overline {s}',\chi',\eta',\xi '^{\triangleleft },\xi '^{\triangleright },\gamma ')\) that is not ▷-safe, we replace it by a transition \((\mathrm{st},\varphi ',a,R,\widetilde{\mathrm{st}'})\), where \(\varphi '={\varphi }\wedge{ (\bigwedge_{|\mathrm {Activ}_{\mathrm {st}'}(b,s^{\triangleright })|>2} \varphi(i_{\ell}^{b,s^{\triangleright }},i_{u}^{b,s^{\triangleright }}))}\) and \(\widetilde{\mathrm{st} '}=(\overline{s}',\chi',\eta',\xi '^{\triangleleft },\widetilde{\xi '^{\triangleright }},\gamma ')\) is such that

$$\widetilde{\xi '^\triangleright }(b,i) = \begin{cases} \xi '^\triangleright (b,i) & \text{if}\ \exists s^{\triangleright }\in Q^{\triangleright }\ \text{s.t.},\ i\in \mathrm {Activ}_{\mathrm {st}'}(b,s^{\triangleright })\ \text{and}\ (|\mathrm {Activ}_{\mathrm {st}'}(b,s^{\triangleright })|\leq2)\ \\ &\text{or}\ ( {|\mathrm {Activ}_{\mathrm {st}'}(b,s^{\triangleright })|>2} \wedge{i\in\{i_\ell^{b,s^{\triangleright }},i_u^{b,s^{\triangleright }}\}})\\ \text{undefined}& \text{otherwise} \end{cases} $$

Then, observe that \(\widetilde{\mathrm{st}'}\) is a ▷-safe state. From the discussion above, we obtain that \(\mathcal {B}\) and \(\mathcal {B}'\) still accept the same set of timed words. Hence we may conclude that in \(\mathcal {B}'\), we can restrict to the finite index set Ind fin =Act×{0,…,n} where \(n=\max\{|Q^{\triangleleft }|,2|Q^{\triangleright }|\}\). Consequently, \(\mathcal {B}'\) uses finitely many states and clocks.

Theorem 7.4

The timed automaton \(\mathcal {B}'\) is finite, and we have \(\mathcal {L}_{\mathit {tw}}(\mathcal {B}') = \mathcal {L}_{\mathit {tw}}(\mathcal {B})\). Moreover, \(\mathcal {B}'\) has \((B+1)^{\mathcal{O}(|\mathit {Act}|)}\) many clocks.

Proof

We first note that the finiteness of \(\mathcal {B}'\) follows immediately from the fact that Ind fin =Act×{0,…,n} is finite (where \(n=\max\{|Q^{\triangleleft }|,2|Q^{\triangleright }|\}\)). Using (13) and (14), we deduce that the number of clocks of \(\mathcal {B}'\) is bounded by \(\mathcal{O}( |\mathit {Act}| \times2^{|\mathit {Act}|} \times(B+1)^{2|\mathit {Proc}|^{2}} )\).

To prove that \(\mathcal {B}\) and \(\mathcal {B}'\) accept the same timed language we will use an alternative definition of an accepting run of a timed automaton, which has moves with regard to time-elapse instead of time stamps.

In the following, we say that \((b,i) \in \mathrm {dom}(\xi ^{\triangleleft }_{1})\) is ◁-live if there is no j≠i such that \((b,j)\in \mathrm {dom}(\xi ^{\triangleleft }_{1})\) and \(\xi ^{\triangleleft }_{1}(b,i)=\xi ^{\triangleleft }_{1}(b,j)\). Moreover, we call \((b,i) \in \mathrm {dom}(\xi ^{\triangleright }_{1})\) ▷-live if \(i\in \mathrm {Activ}_{\mathrm {st}_{1}}(b,s^{\triangleright })\) for some \(s^{\triangleright }\in Q^{\triangleright }\) and either \(|\mathrm {Activ}_{\mathrm {st}_{1}}(b,s^{\triangleright })|\leq 2\) or (\(|\mathrm {Activ}_{\mathrm {st}_{1}}(b,s^{\triangleright })|> 2\) and \(i\in\{i_\ell^{b,s^{\triangleright }},i_u^{b,s^{\triangleright }}\}\)).

We define a relation ⇝ between configurations of \(\mathcal {B}\) and \(\mathcal {B}'\) and let (st1,ν 1)⇝(st2,ν 2) if the following conditions (i)–(iii) hold:

  1. (i)

    \(\mathrm{st}_{1}=(\overline{s},\chi,\eta,\xi ^{\triangleleft }_{1},\xi ^{\triangleright }_{1},\gamma )\) and \(\mathrm{st}_{2}=(\overline{s},\chi,\eta,\xi ^{\triangleleft }_{2},\xi ^{\triangleright }_{2},\gamma )\).

  2. (ii)

    For all ◁-live \((b,i) \in \mathrm {dom}(\xi ^{\triangleleft }_{1})\), there is k such that \((b,k)\in \mathrm {dom}(\xi ^{\triangleleft }_{2})\), \(\xi ^{\triangleleft }_{2}(b,k)=\xi ^{\triangleleft }_{1}(b,i)\), and \(\nu _{1}(z^{\triangleleft }_{(b,i)})=\nu _{2}(z^{\triangleleft }_{(b,k)})\). Conversely, for all \((b,k)\in \mathrm {dom}(\xi ^{\triangleleft }_{2})\), there is i such that \((b,i) \in \mathrm {dom}(\xi ^{\triangleleft }_{1})\) is ◁-live, \(\xi ^{\triangleleft }_{2}(b,k)=\xi ^{\triangleleft }_{1}(b,i)\), and \(\nu _{1}(z^{\triangleleft }_{(b,i)})=\nu _{2}(z^{\triangleleft }_{(b,k)})\).

  3. (iii)

    For all ▷-live \((b,i)\in \mathrm {dom}(\xi ^{\triangleright }_{1})\), there is k such that \((b,k)\in \mathrm {dom}(\xi ^{\triangleright }_{2})\), \(\xi ^{\triangleright }_{2}(b,k)=\xi ^{\triangleright }_{1}(b,i)\), and \(\nu _{1}(z^{\triangleright }_{(b,i)})=\nu _{2}(z^{\triangleright }_{(b,k)})\). Conversely, for all \((b,k)\in \mathrm {dom}(\xi ^{\triangleright }_{2})\), there is i such that \((b,i) \in \mathrm {dom}(\xi ^{\triangleright }_{1})\) is ▷-live, \(\xi ^{\triangleright }_{2}(b,k)=\xi ^{\triangleright }_{1}(b,i)\), and \(\nu _{1}(z^{\triangleright }_{(b,i)})=\nu _{2}(z^{\triangleright }_{(b,k)})\).
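Conditions (i)–(iii) amount to a plain correspondence between the two gadget tables together with the associated clock values; the following sketch (our own illustrative encoding: states as dicts, clock valuations keyed directly by gadget indices, liveness passed in as predicates) spells the check out:

```python
def related(st1, nu1, st2, nu2, live_prev, live_next):
    """Check (st1, nu1) ~> (st2, nu2); nu1 and nu2 map gadget indices (b, i) to the
    value of the associated clock (previous and next indices assumed disjoint)."""
    if any(st1[k] != st2[k] for k in ('s', 'chi', 'eta', 'gamma')):
        return False                                            # condition (i)

    def matches(xi1, xi2, live):
        live1 = [(k, v) for k, v in xi1.items() if live(k)]
        # every live index of st1 has a counterpart in st2 with the same letter,
        # gadget value and clock value ...
        fwd = all(any(k2[0] == k[0] and v2 == v and nu1[k] == nu2[k2]
                      for k2, v2 in xi2.items()) for k, v in live1)
        # ... and conversely every index of st2 arises from a live index of st1
        bwd = all(any(k[0] == k2[0] and v == v2 and nu1[k] == nu2[k2]
                      for k, v in live1) for k2, v2 in xi2.items())
        return fwd and bwd

    return (matches(st1['xi_prev'], st2['xi_prev'], live_prev)      # condition (ii)
            and matches(st1['xi_next'], st2['xi_next'], live_next)) # condition (iii)
```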

We show that the relation ⇝ is a bisimulation. That is, if (st1,ν 1)⇝(st2,ν 2), then

  • for every move \((\mathrm{st}_{1},\nu _{1})\xrightarrow{a, \tau } (\mathrm{st}'_{1},\nu '_{1})\), there exists a move \((\mathrm{st}_{2},\nu _{2})\xrightarrow{a,\tau }(\mathrm{st} '_{2},\nu '_{2})\) such that \((\mathrm{st}'_{1},\nu '_{1})\leadsto (\mathrm{st} '_{2},\nu '_{2})\), and

  • conversely, for every move \((\mathrm{st}_{2},\nu _{2})\xrightarrow {a,\tau }(\mathrm{st}'_{2},\nu '_{2})\), there exists a move \((\mathrm{st}_{1},\nu _{1})\xrightarrow{a,\tau } (\mathrm{st}'_{1},\nu '_{1})\) such that \((\mathrm{st} '_{1},\nu '_{1})\leadsto (\mathrm{st}'_{2},\nu '_{2})\)

To prove the first direction, let \(\mathrm{st}_{1}=(\overline{s},\chi,\eta,\xi ^{\triangleleft }_{1},\xi ^{\triangleright }_{1},\gamma )\) and \(\mathrm{st}_{2}=(\overline{s},\chi,\eta,\xi ^{\triangleleft }_{2},\xi ^{\triangleright }_{2},\gamma )\). Then, \((\mathrm{st}_{1},\nu _{1})\xrightarrow {a, \tau } (\mathrm{st}'_{1},\nu '_{1})\) with \(\mathrm{st}'_{1}=(\overline{s}',\chi',\eta',\xi '^{\triangleleft }_{1},\xi '^{\triangleright }_{1},\gamma ')\) if \((\mathrm{st}_{1},\varphi ,a,R,\mathrm{st}'_{1})\in\delta_{\mathcal {B}}\) for some φ and R with ν 1+τφ and \(\nu '_{1}=(\nu _{1}+\tau ) [R\mapsto0]\). We can now define φ′ and R′ by replacing each occurrence of \(z^{\triangleleft }_{(b,i)}\) (and \(z^{\triangleright }_{(b,i)}\)) in φ by \(z^{\triangleleft }_{(b,k)}\) (respectively \(z^{\triangleright }_{(b,k)}\)) for some k given by condition (ii) (respectively, (iii)). Then, \((\mathrm{st}_{2},\varphi ',a,R',\mathrm{st}'_{2})\in\delta_{\mathcal {B}'}\) where \(\mathrm{st}'_{2} =(\overline{s}',\chi',\eta',\xi '^{\triangleleft }_{2},\xi '^{\triangleright }_{2},\gamma ')\) with \(\xi '^{\triangleleft }_{2}\) and \(\xi '^{\triangleright }_{2}\) obtained from the definition of the respective modified transition relation.

Now, by Proposition 7.2, each previous clock mentioned in φ has an image in φ′ which, by definition, carries the same constraint. Again, if φ mentions a next clock, then either its image appears in φ′ with the same constraint, or there exist other clocks in φ′ whose upper and lower guards are stronger. We can conclude that ν 1+τ⊨φ iff ν 2+τ⊨φ′, and we set \(\nu '_{2}=(\nu _{2}+\tau )[R'\mapsto0]\). Thus, we have \((\mathrm{st}_{2},\nu _{2})\xrightarrow{a,\tau }(\mathrm{st}'_{2},\nu '_{2})\) and we are done once we see that \((\mathrm{st} '_{1},\nu '_{1})\leadsto (\mathrm{st}'_{2},\nu '_{2})\). Condition (i) is already true. Consider condition (ii). For \((b,i)\in \mathrm {dom}(\xi '^{\triangleleft }_{1})\), there are two cases to consider. First, it was defined here, i.e., b=a and \(i=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^{\triangleleft }_{1}(a)))\). In this case, (b,i) is not ◁-unsafe at \(\mathrm{st}'_{1}\) iff (b,k) is not ◁-unsafe at \(\mathrm{st}'_{2}\), for \(k=\min(\mathbb {N}\setminus \mathrm {dom}(\xi ^{\triangleleft }_{2}(a)))\). Second, it was updated from the previous state (i.e., \((b,i)\in \mathrm {dom}(\xi ^{\triangleleft }_{1})\)). Now, if it was updated from the previous state st1 and it is not ◁-unsafe at \(\mathrm{st}'_{1}\), then it was not ◁-unsafe at st1 either (because, otherwise, there is \((b,j)\in \mathrm {dom}(\xi ^{\triangleleft }_{1})\) such that \(\xi ^{\triangleleft }_{1}(b,i)=\xi ^{\triangleleft }_{1}(b,j)\), which implies that \(\delta ^{\triangleleft }(\xi ^{\triangleleft }_{1}(b,i),(a,\eta(a)))=\delta ^{\triangleleft }(\xi ^{\triangleleft }_{1}(b,j),(a,\eta(a)))\), which implies that (b,i) is ◁-unsafe at \(\mathrm{st}_{1}'\)). Then, as \((\mathrm{st}_{1},\nu _{1})\leadsto (\mathrm{st}_{2},\nu _{2})\), there exists k such that \(\xi ^{\triangleleft }_{2}(b,k)=\xi ^{\triangleleft }_{1}(b,i)\). Now, (b,k) cannot be ◁-unsafe at \(\mathrm{st}'_{2}\) since, otherwise, we obtain that (b,i) would be ◁-unsafe at \(\mathrm{st}'_{1}\). Thus, \(\xi '^{\triangleleft }_{2}(b,k)=\delta ^{\triangleleft }(\xi ^{\triangleleft }_{2}(b,k),(a,\eta(a)))=\delta ^{\triangleleft }(\xi ^{\triangleleft }_{1}(b,i),(a,\eta(a)))=\xi '^{\triangleleft }_{1}(b,i)\). Conversely, if there exists \((b,k)\in \mathrm {dom}(\xi '^{\triangleleft }_{2})\) with \(\xi '^{\triangleleft }_{2}(b,k)=\xi '^{\triangleleft }_{1}(b,i)\), then there cannot exist \((b,j)\in \mathrm {dom}(\xi '^{\triangleleft }_{1})\), j≠i, such that \(\xi '^{\triangleleft }_{1}(b,j)=\xi '^{\triangleleft }_{1}(b,i)\). By similar arguments, we see that condition (iii) also holds.

In the converse direction, for a previous or next index (b,i) of st1 that does not have a corresponding index in st2, we use a fresh previous or next clock and argue as above.

Thus, ⇝ is a bisimulation, and now from the fact that the final states of \(\mathcal {B}\) and \(\mathcal {B}'\) coincide, we conclude that the timed languages are the same. □

From Theorems 7.2, 7.3, and 7.4, we obtain the proof of the main result, i.e., the decidability of emptiness for ECMPA over existentially B-bounded timed MSCs. To conclude the proof of Theorem 7.1, it remains to discuss the complexity of our algorithm.

Recall from [2] that the emptiness problem for timed automata is PSPACE-complete. More precisely, the upper bound is obtained with an NLOGSPACE reachability analysis of the region graph, the number of regions being polynomial in the number of states of the automaton but exponential in the number of clocks.
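As a reminder (this is the standard bound from the region construction of [2], restated here only to fix the parameters), for a timed automaton with control states Q, clocks X and maximal constant c x compared with clock x, the number of states of the region graph is at most

$$|Q|\cdot |X|!\cdot 2^{|X|}\cdot \prod_{x\in X}(2c_{x}+2), $$

which is exponential in the number of clocks but only polynomial in the number of control states, so reachability can be checked on the fly in space polynomial in the size of the description of the automaton.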

By Theorem 7.4, the number of clocks of \(\mathcal {B}'\) is \((B+1)^{\mathcal{O}(|\mathit {Act}|)}\). Each state of \(\mathcal {B}'\) is a tuple of the form \((\overline{s},\chi,\eta,\xi ^{\triangleleft },\xi ^{\triangleright },\gamma )\) as defined in Sect. 7.2 but using the finite set of indices Ind fin . From (13) and (14) we deduce that |Q |, |Q | and |Ind fin | are all in \((B+1)^{\mathcal{O}(|\mathit {Act}|)}\). Then, we can check that the number of states of \(\mathcal {B}'\) is bounded by \(2^{P(|\mathcal {A}|,(B+1)^{|\mathit {Act}|})}\) for some polynomial P.

Finally, combining the bounds above on the number of clocks and the number of states of \(\mathcal {B}'\) with the NLOGSPACE reachability analysis of the region graph, we can check emptiness using space polynomial in \(|\mathcal {A}|\) and (B+1)|Act| as announced in Theorem 7.1.

8 Conclusion

In this paper, we have introduced ECMPA as an automaton model to deal with timed and distributed systems. They combine the generality of message passing automata (in the distributed untimed setting) with the tractability of event clock automata (in the sequential timed setting). To describe the behaviors of such systems we have used two formalisms of timed partial orders, namely TMSCs and TCMSCs. The first main result shows that timed monadic second-order logic formulae and ECMPA are expressively equivalent over TMSCs. This equivalence holds without any assumption on channel bounds, provided we restrict to the existential fragment of the logic. Moreover, the proof of this equivalence is constructive, since we give an explicit translation between the two formalisms. The second main result shows that checking emptiness for ECMPA is decidable in the bounded case. Together, these two results allow us to check satisfiability for the timed logic.