
1 Introduction

Markov Chains (MC) are widely used to model stochastic systems, such as randomized protocols, failure and risk analysis, and phenomena in molecular biology. Here we focus on discrete-time MCs, where transitions between states are governed by a transition probability distribution, denoted by \(\mu :S\times S\rightarrow [0,1]\). Practical applications are often hindered by the fact that the probabilities \(\mu (s,t)\), to go from state s to state t, are unknown. Several solutions have been proposed, for instance Parametric Markov Chains (PMC) [7] and Interval Markov Chains (IMC) [14], in which unknown probabilities are replaced by parameters or intervals, respectively, see Figs. 1a and b. Following [9], we study their common generalization, Parametric Interval Markov Chains (PIMC, Fig. 1c), which allow intervals with parametric bounds. PIMCs are more expressive than IMCs and PMCs [2]. PIMCs make it possible to study the boundaries of admissible probability intervals, which is useful in the design exploration phase. This leads to the study of parameter synthesis for PIMCs, started in [12].

Fig. 1. Examples of a PMC (a), an IMC (b), and of their generalization PIMC (c). The MC (d) implements the PIMC (c), as shown by the dashed edges, and formalized in Definition 7. We drop self-loops with probability 1 (or [1, 1]) in all terminal nodes.

IMCs can be viewed as specifications of MCs. An IMC is consistent if there exists an MC that implements it. The main requirements on the implementation are: (1) all behaviour of the MC can be simulated by the IMC, preserving probabilistic information; and (2) the outgoing transition probabilities of each state sum up to 1. The consistency synthesis problem for PIMCs is to compute all parameter values leading to a consistent IMC. For example, the PIMC in Fig. 1c is consistent when \(q=0, p=1\), or when \(p+q=1, r=1\). The witness MC of Fig. 1d corresponds to \(p=\frac{1}{4}, q=\frac{3}{4}, r=1\).

Contribution. This paper studies the consistency synthesis problem for PIMCs. We improve on the theory of [12], which gave an inductive definition of n-consistency. Basically, a state is n-consistent if it has a locally consistent subset of successors, which are in turn all \((n-1)\)-consistent. Here local consistency checks that there is a solution within the specified bounds that sums up to 1. That paper provided an expression for n-consistency, which has two drawbacks from an algorithmic perspective: all possible subsets of successors must be enumerated, and a sufficiently large upper bound for n must be provided. It has been shown that taking the number of states for n is sufficient.

We address both problems. First, we simplify the inductive definition and show that for IMCs, the enumeration over subsets of successors is not necessary. Instead, we can restrict attention to a single candidate: the set of all \((n-1)\)-consistent successors. Second, we provide a smaller upper bound for n. We show that it is sufficient to take the length of the longest simple path in the IMC. However, this length cannot be computed efficiently (computing it would solve the Hamiltonian path problem, which is NP-complete).

Our main contribution is to provide an alternative, co-inductive definition of consistency, dropping the need to reason about n altogether. Instead, we define consistency as the largest set such that a state is consistent if the set of its consistent successors is locally consistent. We have formally proved the equivalence between all these definitions in the interactive theorem prover PVS [17]. The complete PVS proof development is available online.

Based on the simplified inductive definition, we provide a polynomial time forward algorithm to check that an IMC is consistent. Based on the new co-inductive definition, we provide a polynomial backward algorithm. Again, the number of iterations is bounded by the length of the longest simple path, without having to compute it.

Finally, we provide algorithms to compute an expression for all parameters for which a PIMC is consistent. Unfortunately, to obtain an expression we must fall back on subset enumeration. The forward algorithm can be implemented as a Constraint Logic Program, so Prolog + CLP(Q) can be used directly to compute a list of all solutions, basically as a disjunction of conjunctions of linear inequalities over the parameters. We introduce two optimizations: caching intermediate results and suppressing subsumed solutions. The backward algorithm for IMCs can be viewed as computing the maximal solution of a Boolean Equation System. Generalizing this to PIMCs, we now compute the maximal solution of an equation system over disjunctions of conjunctions of constraints. Such equation systems can be solved by standard iteration, representing the intermediate solutions as powerdomains over convex closed polyhedra. We implemented this using the Parma Polyhedra Library [1].

Related Work. One of the first results on synthesis for PMCs [7] computes the probability of path formulas in PCTL as an expression over the parameters. Since then, the efficiency and numeric stability have been improved considerably [8, 18]. On such models, the realizability (or well-definedness) property is considered [13, 16], which differs from the consistency we address mainly in that they consider consistency of all states, whereas we have the option to avoid some states (and their successors) by assigning probability zero to some edges. Model checking for IMCs is studied, for instance, in [5, 6]. For continuous-time Markov Chains, precise parameter synthesis is studied as well [4]. However, PIMCs are more expressive (concise) than PMCs and IMCs, as shown in [2]. As far as we know, besides reachability, there are no model checking results for PIMCs.

Other specification formalisms include Constraint Markov Chains [11], with arbitrary constraints on transitions, and Abstract Probabilistic Automata [10], which add non-deterministic transitions. We believe that our work can be extended in a straightforward manner to CMCs with linear constraints. For APA the situation is probably quite different. Another branch of research has investigated the synthesis of parameters for timed systems, but that is out of the scope of this paper.

PIMCs, and the related consistency problem, were introduced in [9]. Our backward algorithm for IMCs is somewhat similar in spirit to the pruning operator of [9]. We provide a sharper upper bound on the number of required iterations. That paper addresses the existence of a consistent parameter valuation for a restricted subclass of PIMCs, where parameters occur only locally. The parameter synthesis problem for the full class of PIMCs was considered in [12]. We improve on their theory, as explained above. Our experiments (Sect. 6) show that our algorithms and optimizations are more efficient than the approach in [12].

Very recently, [2] also introduced a CLP approach for checking the existence of a consistent parameter valuation. Their contribution is a CLP program of linear size in the PIMC. Their CLP is quite different from ours: basically, they introduce a Boolean variable for each state and a real variable for each transition probability of the Markov Chain that implements the PIMC. So solving the CLP corresponds to searching for a satisfying implementation.

2 Parametric Interval Markov Chains

Since Parametric Interval Markov Chains describe a family of Markov Chains, we first define the latter.

Definition 1

(Markov Chain). A Markov Chain (MC) is a tuple \((S,s_0,\mu ,A,V)\), where:

  • S is a set of states and \(s_0\in S\) is the initial state;

  • \(\mu :S\times S\rightarrow [0,1]\) is the transition probability distribution s.t.:

    \(\forall s\in S: \sum _{s'\in S} \mu (s,s') = 1\);

  • A is a set of labels and \(V:S\rightarrow A\) is the labelling function.
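As an illustration, the following minimal Python sketch encodes an MC as a dictionary of successor distributions and checks the condition of Definition 1. The representation (absent entries meaning probability 0) and the small chain itself are our own assumptions for illustration, not part of the formal development.

from fractions import Fraction

# A small hypothetical MC; mu[s][t] stores mu(s, t), absent entries are 0.
mu = {
    's0': {'s1': Fraction(1, 4), 's2': Fraction(3, 4)},
    's1': {'s1': Fraction(1)},   # terminal state modelled as a self-loop
    's2': {'s2': Fraction(1)},
}

def is_markov_chain(mu):
    """Definition 1: every row of mu must be a probability distribution."""
    return all(all(0 <= x <= 1 for x in row.values()) and sum(row.values()) == 1
               for row in mu.values())

assert is_markov_chain(mu)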

Notation 2

Let P be a set of parameters, i.e. variable names. We denote by \({\mathsf {Int[0,1]}}(P)\) the set of pairs [a, b] with \(a,b\in [0,1]\cup P\). Given \(x\in {\mathsf {Int[0,1]}}(P)\), we denote by \(x_\ell \) and \(x_u\) its left and right components. If x is an interval, this corresponds to its lower and upper bounds. The same notation is used for functions which result in an interval.

Example 3

[0.3, 0.7], [0, 1], [0.5, 0.5], [p, 0.8], [0.99, q], [p, q] are all in \({\mathsf {Int[0,1]}} (\{p,q\})\).
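For the sketches later in the paper it is convenient to fix a concrete encoding; below, the pairs of Example 3 are represented in Python as (left, right) tuples whose entries are either rational constants or sympy symbols. This encoding, and the use of sympy, are our own choices for illustration only.

import sympy as sp

p, q = sp.symbols('p q', nonnegative=True)

# The six pairs of Example 3, encoded as (left, right) tuples.
examples = [(sp.Rational(3, 10), sp.Rational(7, 10)), (0, 1),
            (sp.Rational(1, 2), sp.Rational(1, 2)),
            (p, sp.Rational(4, 5)), (sp.Rational(99, 100), q), (p, q)]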

Definition 4

((Parametric) Interval Markov Chain). A Parametric Interval Markov Chain (PIMC) is a tuple \((P,S,s_0,\varphi ,A,V)\) where:

  • P is a set of parameters;

  • S is a set of states and \(s_0\in S\) is the initial state;

  • \(\varphi :S\times S\rightarrow {\mathsf {Int[0,1]}}(P)\) is the parametric transition probability constraint;

  • A is a set of labels and \(V:S\rightarrow A\) is the labelling function.

An Interval Markov Chain (IMC) is a PIMC with \(P=\varnothing \) (we drop P everywhere).

Note that Definitions 1 and 4 are very similar, but for PIMCs and IMCs the well-formedness of the intervals and of the probability distribution will be part of the consistency property to be checked (see Definition 11).

When ambiguity is possible, we will use a subscript to distinguish the models, e.g. \(S_{\mathcal M}\), \(S_{\mathcal I}\) and \(S_{\mathcal P}\) will denote the set of states of respectively a MC, an IMC and a PIMC.

If all intervals are point intervals of the form [p, p], the PIMC is actually a Parametric Markov Chain [7].

Example 5

Figure 2a shows our running example of a PIMC with two parameters p and q (taken from [12]). Figure 2b shows a particular IMC.

Fig. 2. Running examples

Definition 6

(Support). The support of a probability distribution \(\mu \) at a state \(s\in S_{\mathcal M}\) is the set: \( sup (\mu ,s):=\{s'\in S_{\mathcal M}\ |\ \mu (s,s')>0\}\).

Similarly, for a parametric transition probability constraint \(\varphi \) at a state \(s\in S_{\mathcal P}\) the support is the set: \( sup (\varphi ,s):=\{s'\in S_{\mathcal P}\ |\ \varphi _u(s,s')>0\}\).

Assumption: From now on, we will assume that \({\mathcal I}\) is finitely branching, i.e. for all \(s\in S\), \( sup (\varphi ,s)\) is a finite set. For the algorithms in Sect. 4 we will even assume that S is finite.

PIMCs and IMCs can be viewed as specifications of MCs.

Definition 7

(A MC implements an IMC). Let \({\mathcal M}=(S_{\mathcal M},{s_{\mathcal M}}_0,\mu ,A,V_{\mathcal M})\) be a MC and \({\mathcal I}=(S_{\mathcal I},{s_{\mathcal I}}_0,\varphi ,A,V_{\mathcal I})\) an IMC. \({\mathcal M}\) implements \({\mathcal I}\) (\({\mathcal M}\vDash {\mathcal I}\)) if there exists a simulation relation \({\mathcal R}\subseteq S_{\mathcal M}\times S_{\mathcal I}\) with \({s_{\mathcal M}}_0\,{\mathcal R}\,{s_{\mathcal I}}_0\), s.t. \(\forall s_{\mathcal M}\in S_{\mathcal M}\) and \(s_{\mathcal I}\in S_{\mathcal I}\), if \(s_{\mathcal M}{\mathcal R}s_{\mathcal I}\), then:

  1.

    \(V_{\mathcal M}(s_{\mathcal M})=V_{\mathcal I}(s_{\mathcal I})\)

    (the source and target states have the same label)

  2.

    There exists a probabilistic correspondence \(\delta :S_{\mathcal M}\times S_{\mathcal I}\rightarrow [0,1]\), s.t.:

    (a)

      \( \forall s'_{\mathcal I}\in S_{\mathcal I}: \sum _{s'_{\mathcal M}\in S_{\mathcal M}} \mu (s_{\mathcal M},s'_{\mathcal M})\cdot \delta (s'_{\mathcal M},s'_{\mathcal I}) \in \varphi (s_{\mathcal I},s'_{\mathcal I})\)

      (the total contribution of implementing transitions satisfies the specification)

    (b)

      \( \forall s'_{\mathcal M}\in S_{\mathcal M}: \mu (s_{\mathcal M},s'_{\mathcal M})>0 \Rightarrow \sum _ {s'_{\mathcal I}\in S_{\mathcal I}} \delta (s'_{\mathcal M},s'_{\mathcal I}) = 1\)

      (the implementing transitions yield a probability distribution)

    (c)

      \( \forall s'_{\mathcal M}\in S_{\mathcal M}, s'_{\mathcal I}\in S_{\mathcal I}: \delta (s'_{\mathcal M},s'_{\mathcal I})>0 \Rightarrow s'_{\mathcal M}{\mathcal R}s'_{\mathcal I}\)

      (corresponding successors are in the simulation relation)

    (d)

      \( \forall s'_{\mathcal M}\in S_{\mathcal M}, s'_{\mathcal I}\in S_{\mathcal I}: \delta (s'_{\mathcal M},s'_{\mathcal I})>0 \Rightarrow \mu (s_{\mathcal M},s'_{\mathcal M})>0 \wedge \varphi _u(s_{\mathcal I},s'_{\mathcal I})>0\)

      (\(\delta \) is only defined on the support of \(\mu \) and \(\varphi \))

Definition 8

(Consistency). An IMC \({\mathcal I}\) is consistent if for some MC \({\mathcal M}\), we have \({\mathcal M}\vDash {\mathcal I}\).

A PIMC is consistent if there exist parameter values such that the corresponding IMC is consistent.

Intuitively, this definition states that the implementation is an MC, whose behaviour is allowed by the specification IMC, i.e., the IMC can simulate the MC. Clause (2d) was not present in the original definition [9], but it is convenient in proofs. We first show that limiting \(\delta \) to the support of \(\mu \) and \(\varphi \) does not alter the implementation relation.

Lemma 9

Definition 7 with clauses (2a)–(2c) is equivalent to Definition 7 with (2a)–(2d).

Fig. 3. The Markov Chain (left) implements the Interval Markov Chain (right)

Proof

Assume, there exist \({\mathcal R}\), \(s_{\mathcal M}{\mathcal R}s_{\mathcal I}\) and \(\delta \) that satisfy conditions (2a)–(2c) of Definition 7. Define:

$$ \delta '(s'_{\mathcal M},s'_{\mathcal I}) := \left\{ \begin{array}{cl} \delta (s'_{\mathcal M},s'_{\mathcal I}) & \text{if } \mu (s_{\mathcal M},s'_{\mathcal M})>0 \text{ and } \varphi _u(s_{\mathcal I},s'_{\mathcal I})>0;\\ 0 & \text{otherwise.} \end{array}\right. $$

Note that if \(\mu (s_{\mathcal M},s'_{\mathcal M})>0\) and \(\varphi _u(s_{\mathcal I},s'_{\mathcal I})=0\) then \(\delta (s'_{\mathcal M},s'_{\mathcal I})=0\) by (2a). Now properties (2a)–(2d) can be easily checked for \(\delta '\).    \(\square \)

Example 10

For Fig. 3, checking condition (2a) for \(t_1\) boils down to \(0.1 \cdot 0.3 + 0.6 \cdot 0.2 = 0.15 \in [0.1,0.2]\). Checking condition (2b) for \(s_1\) requires \(0.7+0.3=1\).
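The two checks of Example 10 can be replayed with exact rational arithmetic; the small helper below only restates condition (2a) for a single target state, with all numbers taken from the example.

from fractions import Fraction as F

def contribution(mu_row, delta_col):
    """Sum of mu(s_M, s'_M) * delta(s'_M, s'_I) over s'_M, as in condition (2a)."""
    return sum(m * d for m, d in zip(mu_row, delta_col))

total = contribution([F(1, 10), F(6, 10)], [F(3, 10), F(2, 10)])
assert total == F(15, 100)            # 0.1 * 0.3 + 0.6 * 0.2 = 0.15
assert F(1, 10) <= total <= F(2, 10)  # 0.15 lies in the interval [0.1, 0.2]
assert F(7, 10) + F(3, 10) == 1       # condition (2b) for s_1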

3 Consistency of Interval Markov Chains

In this section we study consistency of Interval Markov Chains; we will return to Parametric IMCs in Sect. 5. Intuitively, an IMC is consistent if it can be implemented by at least one MC. From now on, we will drop the labelling V, since it plays no role in the discussion on consistency, and thus consider an arbitrary IMC \({\mathcal I}=(S,s_0,\varphi )\).

3.1 Local Consistency

Local consistency of a state \(s\in S\) is defined with respect to a set \(X\subseteq S\) of its successors, and its probability constraint \(\varphi (s,\cdot )\). It ensures the existence of some probability distribution satisfying the interval constraints on transitions from s to X: the collective upper bound should be at least 1 (condition \( up (\varphi ,s,X)\)), and the collective lower bound at most 1 (\( low (\varphi ,s,X)\)). Moreover, each lower bound should be at most the corresponding upper bound, and the states outside X should be avoidable, in the sense that they admit probability 0 (\( local (\varphi ,s,X)\)).

Definition 11

(Local consistency). The local consistency constraint for a state \(s\in S\) and a set \(X\subseteq S\) is \( LC (\varphi ,s,X)\), defined by:

$$ \begin{array}{rcl} LC (\varphi ,s,X) & := & up (\varphi ,s,X) \wedge low (\varphi ,s,X) \wedge local (\varphi ,s,X),~\text{where} \\ up (\varphi ,s,X) & := & \sum _{s'\in X}\varphi _u(s,s') \ge 1 \\ low (\varphi ,s,X) & := & \sum _{s'\in X}\varphi _\ell (s,s') \le 1 \\ local (\varphi ,s,X) & := & (\forall {s'\in X} : \varphi _\ell (s,s') \le \varphi _u(s,s')) \wedge (\forall {s'\not \in X} : \varphi _\ell (s,s')=0) \end{array} $$
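For a concrete (non-parametric) IMC, the three conditions of Definition 11 translate directly into code. The sketch below assumes an encoding phi[s][t] = (lower, upper), with absent entries standing for the interval [0, 0]; both the encoding and the example are our own.

def lc(phi, s, X):
    """Local consistency LC(phi, s, X) of Definition 11 for a concrete IMC."""
    row = phi.get(s, {})
    up    = sum(row[t][1] for t in X if t in row) >= 1
    low   = sum(row[t][0] for t in X if t in row) <= 1
    local = (all(row[t][0] <= row[t][1] for t in X if t in row)
             and all(lo == 0 for t, (lo, _) in row.items() if t not in X))
    return up and low and local

# From state 0, neither successor suffices alone, but together they do.
phi = {0: {1: (0.2, 0.7), 2: (0.0, 0.5)}, 1: {1: (1.0, 1.0)}, 2: {2: (1.0, 1.0)}}
print(lc(phi, 0, {1, 2}))   # True:  0.7 + 0.5 >= 1 and 0.2 + 0.0 <= 1
print(lc(phi, 0, {1}))      # False: up fails, since 0.7 < 1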

We obtain the following facts, which can be directly checked from the definitions. Note that from Lemma 12(1) and (2) it follows that we may always restrict attention to the support of \((\varphi ,s)\): \( LC (\varphi ,s,X)\equiv LC (\varphi ,s,X\cap sup (\varphi ,s))\).

Lemma 12

For \(X,Y\subseteq S\):

  1.

    If \(X\subseteq Y\) and \( LC (\varphi ,s,X)\) then \( LC (\varphi ,s,Y)\).

  2.

    If \( LC (\varphi ,s,X)\) then also \( LC (\varphi ,s,X\cap sup (\varphi ,s))\).

Proof

  1.

    Assume \(X\subseteq Y\) and \( LC (\varphi ,s,X)\), hence \( up (\varphi ,s,X)\), \( low (\varphi ,s,X)\) and \( local (\varphi ,s,X)\). From \( up (\varphi ,s,X)\), we have \(\sum _{s'\in X}\varphi _u(s,s') \ge 1\), so we get \( up (\varphi ,s,Y)\):

    $$ \sum _{s'\in Y}\varphi _u(s,s') = (\sum _{s'\in X}\varphi _u(s,s') + \sum _{s'\in Y\setminus X}\varphi _u(s,s')) \ge (1 + \sum _{s'\in Y\setminus X}\varphi _u(s,s')) \ge 1 $$

    From \( local (\varphi ,s,X)\), we have \(\forall s'\in Y\setminus X: \varphi _\ell (s,s')=0\), and from \( low (\varphi ,s,X)\), we have \(\sum _{s'\in X}\varphi _\ell (s,s') \le 1\), so we get \( low (\varphi ,s,Y)\):

    $$ \sum _{s'\in Y}\varphi _\ell (s,s') = (\sum _{s'\in X}\varphi _\ell (s,s') + \sum _{s'\in Y\setminus X}\varphi _\ell (s,s')) = \sum _{s'\in X}\varphi _\ell (s,s') + 0 \le 1 $$

    Finally, from \( local (\varphi ,s,X)\), it holds that for \(s'\in Y\), if \(s'\in X\) then \(\varphi _\ell (s,s')\le \varphi _u(s,s')\), else \(\varphi _\ell (s,s')=0\), which also implies \(\varphi _\ell (s,s')\le \varphi _u(s,s')\). If \(s'\not \in Y\) then \(s'\not \in X\), so \(\varphi _\ell (s,s')=0\). So we get \( local (\varphi ,s,Y)\). This proves that \( LC (\varphi ,s,Y)\).

  2.

    Assume \( LC (\varphi ,s,X)\), hence \( up (\varphi ,s,X)\), \( low (\varphi ,s,X)\) and \( local (\varphi ,s,X)\). Note that if \(s'\in X\setminus sup (\varphi ,s)\), by definition of \( sup \), we obtain \(\varphi _u(s,s')=0\) and by \( local (\varphi ,s,X)\), we obtain \(\varphi _\ell (s,s')=0\).

    $$ \begin{array}{rcl} \sum _{s'\in X\cap sup (\varphi ,s)}\varphi _u(s,s') & = & \sum _{s'\in X}\varphi _u(s,s') - \sum _{s'\in X\setminus sup (\varphi ,s)}\varphi _u(s,s') \\ & = & \sum _{s'\in X}\varphi _u(s,s') - 0 \ge 1 \end{array} $$
    $$ \begin{array}{rcl} \sum _{s'\in X\cap sup (\varphi ,s)}\varphi _\ell (s,s') & = & \sum _{s'\in X}\varphi _\ell (s,s') - \sum _{s'\in X\setminus sup (\varphi ,s)}\varphi _\ell (s,s') \\ & = & \sum _{s'\in X}\varphi _\ell (s,s') - 0 \le 1 \end{array} $$

    Finally, if \(s'\in X\cap sup (\varphi ,s)\) then \(s'\in X\), so \(\varphi _\ell (s,s')\le \varphi _u(s,s')\). Otherwise, \(s'\not \in X\) or \(s'\in X\setminus sup (\varphi ,s)\), but in both cases \(\varphi _\ell (s,s')=0\).    \(\square \)

3.2 Co-inductive Definition of Global Consistency

Global consistency of (P)IMCs can be defined in several ways, e.g. co-inductively and inductively. Here, we introduce a new co-inductive definition of global consistency, as a greatest fixed point (\( gfp \)). We first introduce an abbreviation for the set of locally consistent states w.r.t. a set X:

Notation 13

\( LC_\varphi (X) := \{ s ~|~ LC (\varphi ,s,X) \}\)

Next, we define \(\mathcal {C}\! ons \) as the greatest fixed point of \( LC_\varphi \). Intuitively, from consistent states one can keep taking locally consistent steps to other consistent states.

Definition 14

(Global consistency, co-inductive). \(\mathcal {C}\! ons := gfp ( LC_\varphi )\).

Lemma 15

From the definition of greatest fixed point, \(\mathcal {C}\! ons \) is the largest set C s.t. \(C\subseteq LC_\varphi (C)\):

  1.

    \(s\in \mathcal {C}\! ons \ \equiv \ s\in LC_\varphi (\mathcal {C}\! ons ) \ \equiv \ s\in LC_\varphi (\mathcal {C}\! ons \cap sup (\varphi ,s))\)

  2.

    If \(C\subseteq LC_\varphi (C)\) then \(C\subseteq \mathcal {C}\! ons \).

Proof

(1) holds because \(\mathcal {C}\! ons \) is a fixed point; the second equivalence uses Lemma 12(2). (2) holds because, by the Knaster–Tarski theorem, \(\mathcal {C}\! ons \) is the greatest fixed point, so it contains every post-fixed point.    \(\square \)

We motivate the definition of consistency by the following two theorems:

Theorem 16

Let \({\mathcal M}=(S_{\mathcal M},{s_{\mathcal M}}_0,\mu )\) be a MC, \({\mathcal I}=(S_{\mathcal I},{s_{\mathcal I}}_0,\varphi )\) an IMC, and assume \({\mathcal M}\vDash {\mathcal I}\). Then \({s_{\mathcal I}}_0\in \mathcal {C}\! ons \).

Proof

Since \({\mathcal M}\vDash {\mathcal I}\), there is a simulation relation \({\mathcal R}\), with \({s_{\mathcal M}}_0{\mathcal R}{s_{\mathcal I}}_0\), and for all \(s_{\mathcal M}\in S_{\mathcal M}, s_{\mathcal I}\in S_{\mathcal I}\), if \(s_{\mathcal M}{\mathcal R}s_{\mathcal I}\), there is a correspondence \(\delta \), satisfying properties (2a)–(2d) of Definition 7. We will prove that \(\{s_{\mathcal I}~|~ \exists s_{\mathcal M}: s_{\mathcal M}{\mathcal R}s_{\mathcal I}\} \subseteq \mathcal {C}\! ons \). Since \({s_{\mathcal M}}_0{\mathcal R}{s_{\mathcal I}}_0\), it will follow that \({s_{\mathcal I}}_0\in \mathcal {C}\! ons \).

We proceed using the \( gfp \)-property (Lemma 15(2)), so it is sufficient to prove:

$$\{s_{\mathcal I}\in S_{\mathcal I}~|~ \exists s_{\mathcal M}\in S_{\mathcal M}: s_{\mathcal M}{\mathcal R}s_{\mathcal I}\} \subseteq LC_\varphi (\{s_{\mathcal I}\in S_{\mathcal I}~|~ \exists s_{\mathcal M}\in S_{\mathcal M}: s_{\mathcal M}{\mathcal R}s_{\mathcal I}\}).$$

Let \(s_{\mathcal I}\in S_{\mathcal I}\) be given with \(s_{\mathcal M}\in S_{\mathcal M}\) s.t. \(s_{\mathcal M}{\mathcal R}s_{\mathcal I}\). Define \(X := \{s'_{\mathcal I}\in S_{\mathcal I}~|~\exists s'_{\mathcal M}\in S_{\mathcal M}: \delta (s'_{\mathcal M},s'_{\mathcal I})>0\}\). Clearly, if \(s'_{\mathcal I}\in X\), then for some \(s'_{\mathcal M}\in S_{\mathcal M}\), \(\delta (s'_{\mathcal M},s'_{\mathcal I})>0\) and by the correspondence property of Definition 7(2c), \(s'_{\mathcal M}{\mathcal R}s'_{\mathcal I}\). So \(X\subseteq \{s_{\mathcal I}\in S_{\mathcal I}~|~ \exists s_{\mathcal M}\in S_{\mathcal M}: s_{\mathcal M}{\mathcal R}s_{\mathcal I}\}\). Thus, by monotonicity, Lemma 12(1), it is sufficient to show that \(s_{\mathcal I}\in LC_\varphi (X)\).

To check that \(s_{\mathcal I}\in LC_\varphi (X)\), we first check that the corresponding transitions yield a probability distribution:

$$ \begin{array}{rcll} & & \sum _{s'_{\mathcal I}\in X} \sum _{s'_{\mathcal M}\in S_{\mathcal M}} \mu (s_{\mathcal M},s'_{\mathcal M})\cdot \delta (s'_{\mathcal M},s'_{\mathcal I}) & \\ & = & \sum _{s'_{\mathcal I}\in S_{\mathcal I}} \sum _{s'_{\mathcal M}\in S_{\mathcal M}} \mu (s_{\mathcal M},s'_{\mathcal M})\cdot \delta (s'_{\mathcal M},s'_{\mathcal I}) & \text{(if } s'_{\mathcal I}\not \in X \text{, then } \delta (s'_{\mathcal M},s'_{\mathcal I})=0 \text{ by def. of } X\text{)}\\ & = & \sum _{s'_{\mathcal M}\in S_{\mathcal M}} \mu (s_{\mathcal M},s'_{\mathcal M}) \cdot \big (\sum _{s'_{\mathcal I}\in S_{\mathcal I}} \delta (s'_{\mathcal M},s'_{\mathcal I})\big ) & \text{(by } \textstyle \sum \text{-manipulation)} \\ & = & \sum _{s'_{\mathcal M}\in S_{\mathcal M}} \mu (s_{\mathcal M},s'_{\mathcal M}) \cdot 1 & (\delta \text{ is a prob. distrib. by Definition 7(2b)})\\ & = & 1 & (\mu \text{ is a prob. distrib. by Definition 1 of MC}) \end{array}$$

By Definition 7(2a), \(\forall s'_{\mathcal I}\in X\), \(\varphi _\ell (s_{\mathcal I},s'_{\mathcal I})\le \sum _ {s'_{\mathcal M}\in S_{\mathcal M}}\mu (s_{\mathcal M},s'_{\mathcal M})\cdot \delta (s'_{\mathcal M},s'_{\mathcal I}) \le \varphi _u(s_{\mathcal I},s'_{\mathcal I})\). By the computation above, \(\sum _{s'_{\mathcal I}\in X}\varphi _\ell (s_{\mathcal I},s'_{\mathcal I})\le 1\) and \(\sum _{s'_{\mathcal I}\in X}\varphi _u(s_{\mathcal I},s'_{\mathcal I})\ge 1\), proving \( low (\varphi ,s_{\mathcal I},X)\) and \( up (\varphi ,s_{\mathcal I},X)\), respectively. For \(s'_{\mathcal I}\in X\) we already established \(\varphi _\ell (s_{\mathcal I},s'_{\mathcal I})\le \varphi _u(s_{\mathcal I},s'_{\mathcal I})\). For \(s'_{\mathcal I}\not \in X\), by definition of X: \(\forall s'_{\mathcal M}:\delta (s'_{\mathcal M},s'_{\mathcal I})=0\). Thus, \(\varphi _\ell (s_{\mathcal I},s'_{\mathcal I})\le 0\) by the computation above, proving \( local (\varphi ,s_{\mathcal I},X)\). This proves \(s_{\mathcal I}\in LC_\varphi (X)\).    \(\square \)

Conversely, we prove that a consistent IMC can be implemented by at least one MC. Note that the proof of Theorem 17 provides the construction of a concrete MC.

Theorem 17

For IMC \({\mathcal I}=(S,s_0,\varphi )\), if \(s_0\in Cons \), then there exist a probability distribution \(\mu \) and a MC \({\mathcal M}=( Cons ,s_0,\mu )\) such that \({\mathcal M}\vDash {\mathcal I}\).

Proof

Assume \({\mathcal I}\) is consistent. Consider an arbitrary state \(s\in Cons \). We will define \(\mu (s,s')\) in between \(\varphi _\ell (s,s')\) and \(\varphi _u(s,s')\), scaling it by a factor p such that \(\mu (s)\) sums up to 1. Define \(L := \sum _{s'\in Cons }\varphi _\ell (s,s')\) and \(U := \sum _{s'\in Cons }\varphi _u(s,s')\). Set \(p := \frac{1-L}{U-L}\) (or 0 if \(L=U\)). Finally, we define \(\mu (s,s') := (1-p)\cdot \varphi _\ell (s,s') + p\cdot \varphi _u(s,s')\).

By Lemma 15, \(s\in LC_\varphi ( Cons )\), thus \(L \le 1 \le U\). Hence \(0\le p\le 1\), and indeed \(\varphi _\ell (s,s')\le \mu (s,s')\le \varphi _u(s,s')\). We check that \(\mu (s)\) is a probability distribution:

$$ \begin{array}{rcl} \sum _{s'\in Cons }\mu (s,s') & = & \sum _{s'\in Cons }\big ((1-p)\cdot \varphi _\ell (s,s') + p\cdot \varphi _u(s,s')\big ) \\ & = & (1-p)L + pU = L + p(U-L) = L + \frac{1-L}{U-L}(U-L) = 1 \end{array} $$

Finally, we show that \({\mathcal M}\) implements \({\mathcal I}\). Define \({\mathcal R}:= \{(s,s)~|~s\in Cons \}\). Define \(\delta (s',s'):=1\) and \(\delta (s',s''):=0\) for \(s'\ne s''\). Properties (2a)–(2c) of Definition 7 follow directly from the definition of \(\delta \), so by Lemma 9, \({\mathcal M}\vDash {\mathcal I}\).    \(\square \)
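The construction in this proof is directly executable: given the bounds towards consistent successors, compute L, U and the interpolation factor p, and return the row \(\mu (s,\cdot )\). The following Python sketch uses the same phi[s][t] = (lower, upper) encoding as in the earlier sketch and only illustrates the proof; it is not the authors' implementation, and it assumes the local condition (zero lower bounds outside the consistent set) has already been checked.

from fractions import Fraction

def implementing_row(phi, s, cons):
    """Row mu(s, .) from the proof of Theorem 17: interpolate between bounds."""
    row = {t: (Fraction(lo), Fraction(up))
           for t, (lo, up) in phi[s].items() if t in cons}
    L = sum(lo for lo, _ in row.values())
    U = sum(up for _, up in row.values())
    assert L <= 1 <= U, "state is not locally consistent w.r.t. cons"
    p = Fraction(0) if L == U else (1 - L) / (U - L)
    return {t: (1 - p) * lo + p * up for t, (lo, up) in row.items()}

phi = {0: {1: (Fraction(1, 10), Fraction(1, 2)), 2: (Fraction(2, 5), Fraction(4, 5))},
       1: {1: (1, 1)}, 2: {2: (1, 1)}}
mu0 = implementing_row(phi, 0, cons={0, 1, 2})
assert sum(mu0.values()) == 1   # the constructed row is a probability distribution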

3.3 Inductive n-Consistency

Next, we define n-consistency for a state \(s\in S\) in an IMC, inductively. This rephrases the definition from [12]. Intuitively, n-consistent states can perform n consistent steps in a row. That is, they can evolve to \((n-1)\)-consistent states.

Definition 18

(Consistency, inductive). Define sets \( Cons _n\subseteq S\) by recursion on n:

$$ \begin{array}{rcl} Cons _0 & = & S,\\ Cons _{n+1} & = & LC_\varphi ( Cons _{n}). \end{array} $$

Lemma 19

We have the following basic facts on n-consistency:

  1.

    \( Cons _{n+1}\subseteq Cons _n\)

  2.

    If \(m\le n\) then \( Cons _n \subseteq Cons _m\).

  3.

    If \( Cons _{n+1}= Cons _n\) and \(s\in Cons _n\) then \(\forall m: s\in Cons _m\).

Proof

  1.

    Induction on n. \(n=0\) is trivial. Let \(s'\in Cons _{n+2}= LC_\varphi ( Cons _{n+1})\). By induction hypothesis, \( Cons _{n+1}\subseteq Cons _{n}\), so by the monotonicity Lemma 12(1), \(s'\in LC_\varphi ( Cons _n)= Cons _{n+1}\), indeed.

  2.

    Induction on \(n-m\), using (1).

  3.

    If \(m>n\), we prove the result with induction on \(m-n\). Otherwise the result follows from (2).    \(\square \)

Next, we show that universal n-consistency coincides with global consistency. Note that this depends on the assumption that the system is finitely branching.

Theorem 20

Let \( sup (\varphi ,s)\) be finite for all s. Then \(s\in \mathcal {C}\! ons ~\equiv ~\forall n: s\in Cons _n\).

Proof

\(\Rightarrow \): Induction on n. The base case is trivial. Assume \(s\in \mathcal {C}\! ons \). Then \(s\in LC_\varphi (\mathcal {C}\! ons )\). By induction hypothesis, \(\mathcal {C}\! ons \subseteq Cons _{n}\). So, by monotonicity, Lemma 12(1), \(s\in LC ( Cons _{n})= Cons _ {n+1}\).

\(\Leftarrow \): Assume that \(s\in \bigcap _{n\ge 0} Cons _n\). Define \(Y_n := Cons _n\cap sup (\varphi ,s)\). By Lemma 19(1), \(s\in Cons _{n+1}= LC ( Cons _n)= LC_\varphi (Y_n)\) and \(Y_n\) is a decreasing sequence of finite sets (since \(\varphi \) has finite support). Hence it contains a smallest member, say \(Y_m\). For \(Y_m\), we have \(s\in LC_\varphi (Y_m)\) and \(Y_m\subseteq \bigcap _{n\ge 0} Cons _n\). By monotonicity, Lemma 12(1), \(s\in LC_\varphi (\bigcap _{n\ge 0} Cons _n)\). Since s was arbitrary, \(\bigcap _{n\ge 0} Cons _n\) is a post-fixed point of \( LC_\varphi \), so by Lemma 15(2), \(\bigcap _{n\ge 0} Cons _n \subseteq \mathcal {C}\! ons \), and indeed \(s\in \mathcal {C}\! ons \).    \(\square \)

Fig. 4. Infinitely-branching IMC

The following example shows that the condition on finite branching is essential. The situation is similar to the equivalence of the projective limit model with the bisimulation model in process algebras [3].

Example 21

Let \(t{\mathop {\longrightarrow }\limits ^{[0,1]}} t_i\) for all i, and \(t_{i+1}{\mathop {\longrightarrow }\limits ^{ [0,1]}} t_i\), see Fig. 4. Then each \(t_i\) is i-consistent, but not \((i+1)\)-consistent (since no transition exits \(t_0\)). So t is n-consistent for all n. However, no \(t_i\) is globally consistent, and \( LC (\varphi ,t,\varnothing )=\bot \), so t is not globally consistent either.

Finally, we would like to limit the n that we need to check. It has been shown before [12] that when the number of states is finite, \(n=|S|\) is sufficient. We now show that we can further bound the computation to the length of the longest simple path.

Definition 22

(Path properties). We define a (reverse) path of length n as a sequence \(s_0,\ldots ,s_{n-1}\), such that for all \(0\le i<n-1\), \(\varphi _u(s_{i+1},s_i)>0\). The path is simple if for all i, j with \(0\le i< j < n\), we have \(s_i\ne s_j\). This path is bounded by m if \(n\le m\).

Note that, for instance, a path \((s_0,s_1,s_2)\) has length 3 according to this definition. The essential argument is provided by the following lemma:

Lemma 23

If \(s\in Cons _{n+1}\setminus Cons _{n+2}\), then there exists an \(s'\) with \(\varphi _u(s,s')>0\), such that \(s'\in Cons _n\setminus Cons _{n+1}\).

Proof

Assume \(s\in Cons _{n+1}\) and \(s\not \in Cons _{n+2}\). By Lemma 12(2): \(s\in LC_\varphi ( Cons _n\cap sup (\varphi ,s))\). Now if \( Cons _n\cap sup (\varphi ,s)\subseteq Cons _{n+1}\), by monotonicity we would have \(s\in LC_\varphi ( Cons _n\cap sup (\varphi ,s))\subseteq LC_\varphi ( Cons _{n+1})= Cons _{n+2}\), which contradicts the assumption. So there must be some \(s'\in Cons _n\) with \(\varphi _u(s,s')>0\) and \(s'\not \in Cons _{n+1}\).    \(\square \)

Since \( Cons _{n+1}(s)\) depends on \( Cons _n\) of its direct successors only, consistency information propagates along paths. In particular, it propagates no longer than the longest simple path. This can be made more formal as follows:

Theorem 24

If all simple paths from s are bounded by m, then:

$$ Cons _m(s) ~\equiv ~\forall n: Cons _n(s) $$

Proof

We first prove the following statement by induction on n:

(*) If \(s\in Cons _n\setminus Cons _{n+1}\), then there exists a simple path from s of length \(n+1\), \(s_0,\ldots ,s_n=s\), such that for all \(0 \le i \le n\), \(s_i\not \in Cons _{i+1}\).

Case \(n=0\): we take the path [s]. Case \(n+1\): Let \(s\in Cons _{n+1}\) but \(s\not \in Cons _{n+2}\). By Lemma 23, we obtain some \(s'\) with \(\varphi _u(s,s')>0\) and \(s'\in Cons _n\setminus Cons _{n+1}\). By induction hypothesis, we get a simple path \(s_0,\ldots ,s_n=s'\), with for all \(0\le i\le n\), \(s_i\not \in Cons _ {i+1}\). Extend this path with \(s_{n+1}:=s\). We must show that the extended path is still simple, i.e. \(s_i\ne s\) for \(i\le n\). Since \(s\in Cons _{n+1}\), by Lemma 19(2), \(s\in Cons _{i+1}\) but \(s_i\not \in Cons _{i+1}\), so \(s\ne s_i\).

Finally, to prove the theorem, assume \(s\in Cons _m\), with all simple paths from s bounded by m. We prove \(s\in Cons _n\) by induction on n. If \(n\le m\), then \(s\in Cons _n\) by Lemma 19(2). Otherwise, assume as induction hypothesis \(s\in Cons _{n}\). Note that there is no simple path from s of length \(n+1\), since \(n+1>m\). Hence, by contraposition of statement (*), \(s\not \in Cons _n\setminus Cons _{n+1}\), so \(s\in Cons _{n+1}\). So \(\forall n: s\in Cons _n\).    \(\square \)

To summarize, if the longest simple path from \(s_0\) in \({\mathcal I}=(S,s_0,\varphi )\) is of length m, we can compute \(s_0\in Cons _m\). From Theorem 24, it follows that \(\forall n:s_0\in Cons _n\). By Theorem 20, we then obtain \(s_0\in \mathcal {C}\! ons \). By Theorem 17 we then know that there exists a Markov Chain \({\mathcal M}=(S,s_0,\mu )\), s.t. \({\mathcal M}\vDash {\mathcal I}\). Conversely, if there exists any \({\mathcal M}'=(S',s'_0,\mu ')\) s.t. \({\mathcal M}'\vDash {\mathcal I}\), we know by Theorem 16 that \({\mathcal I}\) is consistent.

Example 25

Let us consider again the IMC in Fig. 2b. The longest simple paths from state 0 are [0, 2, 1, 3] and [0, 2, 4, 3], of length 4. Hence to check consistency of the IMC, it suffices to check that state 0 is in \( Cons _4\).

4 Algorithms for Consistency Checking of IMCs

In this section, the developed theory is used to provide algorithms to check the consistency of a finite IMC \((S,s_0,\varphi )\) (i.e. without parameters). In the next section, we will synthesize parameters for PIMCs that guarantee their consistency. For IMCs, we present a forward algorithm and a backward algorithm, which are both polynomial. The justification of these algorithms is provided by the following Corollary to Lemma 12 and Definitions 14 and 18.

Corollary 26

Let IMC \((S,s_0,\varphi )\) be given, let \(s\in S\) and \(n\in {\mathbb {N}}\).

  1.

    \(s\in Cons _{n+1} \equiv s\in LC_\varphi ( Cons _n \cap sup (\varphi ,s))\).

  2.

    \(s\in \mathcal {C}\! ons \equiv s\in LC_\varphi (\mathcal {C}\! ons \cap sup (\varphi ,s))\).

The backward variant (Algorithm 1) follows our simple co-inductive definition, rephrased as Corollary 26(2). Initially, all states are assumed to be consistent (true), which will actually be checked by putting them in the work list Q. When some state s is not locally consistent (LC, according to Definition 11), it is marked \(\mathbf{false}\) (line 5), and all predecessors t of s that are still considered consistent are put back in the work list Q (lines 7–9).

[Algorithm listings omitted]

The forward Algorithm 2 is based on the inductive definition of consistency, rephrased in Corollary 26(1). It is called with a consistency level m and a state s, to check whether \(s\in Cons _m\). It stores and reuses previously computed results, to avoid unnecessary recomputation. In particular, for each state we store both the maximal level of consistency demonstrated so far (in field pc, positive consistency, initially 0), and the minimal level of inconsistency (in field nc, negative consistency, initially \(\infty \)). We exploit monotonicity to reuse these cached results: by Lemma 19(2), if a state is n-consistent for some \(n\ge m\), then it is m-consistent as well (lines 1–2 of Algorithm 2). By contraposition, if a state is not n-consistent for some \(n\le m\), then it cannot be m-consistent (lines 3–4). In all other cases, all successors \(t\in sup (\varphi ,s)\) are recursively checked and the \((m-1)\)-consistent ones are collected in C (lines 5–8). Finally, we check the local consistency (LC) of C (line 9), and store and return the result of the computation in lines 9–14.
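Both schemes can be sketched compactly in Python (our own rendering of the ideas behind Algorithms 1 and 2 under the phi[s][t] = (lower, upper) encoding used earlier, not the paper's pseudocode): lc is the check of Definition 11, backward_consistency computes the greatest fixed point of Corollary 26(2) with a worklist, and forward_consistent checks membership in \( Cons _m\) with the pc/nc caches described above.

def lc(phi, s, X):
    """Local consistency LC(phi, s, X) of Definition 11 (concrete IMC)."""
    row = phi.get(s, {})
    return (sum(row[t][1] for t in X if t in row) >= 1
            and sum(row[t][0] for t in X if t in row) <= 1
            and all(row[t][0] <= row[t][1] for t in X if t in row)
            and all(lo == 0 for t, (lo, _) in row.items() if t not in X))

def backward_consistency(phi, states):
    """Backward (co-inductive) check: assume all states consistent, then
    falsify states that are not locally consistent w.r.t. the current set."""
    cons = {s: True for s in states}
    preds = {s: set() for s in states}
    for s, row in phi.items():
        for t, (_, up) in row.items():
            if up > 0:
                preds[t].add(s)
    queue = list(states)
    while queue:
        s = queue.pop()
        if cons[s] and not lc(phi, s, {t for t in states if cons[t]}):
            cons[s] = False
            queue.extend(t for t in preds[s] if cons[t])  # re-examine predecessors
    return {s for s in states if cons[s]}

def forward_consistent(phi, s, m, pc=None, nc=None):
    """Forward (inductive) check of s in Cons_m, with positive/negative caches."""
    pc = {} if pc is None else pc  # pc[s]: largest level shown consistent
    nc = {} if nc is None else nc  # nc[s]: smallest level shown inconsistent
    if m <= pc.get(s, 0):
        return True
    if m >= nc.get(s, float('inf')):
        return False
    support = {t for t, (_, up) in phi.get(s, {}).items() if up > 0}
    C = {t for t in support if forward_consistent(phi, t, m - 1, pc, nc)}
    if lc(phi, s, C):
        pc[s] = m
        return True
    nc[s] = m
    return False

# Cascade example: state 3 has no successors, which also dooms states 1 and 0.
phi = {0: {1: (0.2, 0.7), 2: (0.0, 0.5)}, 1: {3: (0.5, 1.0)},
       2: {2: (1.0, 1.0)}, 3: {}}
print(backward_consistency(phi, {0, 1, 2, 3}))  # {2}
print(forward_consistent(phi, 2, 4))            # True
print(forward_consistent(phi, 0, 4))            # False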

Lemma 27

Let |S| be the number of states of the IMC and let \(|\varphi |:=|\{ (s,t)~|~\varphi _u(s,t)>0\}|\) be the number of edges. Let d be the maximal degree \(d:=max_s | sup (\varphi ,s)|\).

  1.

    Algorithm 1 has worst-case time complexity \({\mathcal O}(|\varphi |\cdot d)\) and space complexity \({\mathcal O}(|S|)\).

  2.

    Algorithm 2 has worst-case time complexity \({\mathcal O}(|S|\cdot |\varphi |)\) and space complexity \({\mathcal O}(|S|\cdot \log _2 |S|)\).

Proof

Note that in Algorithm 1 every state is set to \(\bot \) at most once. So every state s is added to Q at most \(|\varphi _u(s)|+1={\mathcal O}(d)\) times. Handling a state requires checking its incoming and outgoing edges, leading to \({\mathcal O}(|\varphi |\cdot d)\) time complexity. Only one bit per state is stored. Algorithm 2 is called at most \(m\le |S|\) times per state. Every call inspects the outgoing edges a constant number of times, leading to time complexity \({\mathcal O}(|S|\cdot |\varphi |)\). It stores two integers, which are at most |S|, which requires \(2\lfloor \log _2 |S|\rfloor + 1\) bits.    \(\square \)

Algorithm 2 could start at m equal to the length of the longest simple path, which cannot be efficiently computed in general. Algorithm 1 does not require computing any bound.

5 Parameter Synthesis for Parametric IMCs

In this section, we reconsider intervals with parameters from P. In particular, we present algorithms to synthesize the exact constraints on the parameters for which a PIMC is consistent. Note that given a PIMC and concrete values for all its parameters, we actually obtain the corresponding IMC, by just replacing the parameters by their values. This allows us to reuse all theoretical results on consistency from Sect. 3 on IMCs.

In particular, we can view parameters as “logical variables”, and view a PIMC as a collection of IMCs. For instance, consider a PIMC \((P,S,s_0,\varphi )\) with parameters \(P=\{p,q\}\). Given a state s and a finite set of states \(X\subseteq S\), the expression \( LC (\varphi ,s,X)\) can be viewed as a predicate over the logical variables p and q (appearing as parameters in \(\varphi \)). Also \(\mathcal {C}\! ons (s)\) and \(\mathcal {C}\! ons _n(s)\) can be viewed as predicates over p and q. In PVS, we can use universal quantification to lift results from IMCs to PIMCs. For instance, for PIMCs over P, Lemma 19.1 reads: \(\forall p,q: \forall s\in S: \mathcal {C}\! ons _{n+1}(s) \Rightarrow \mathcal {C}\! ons _n(s)\).

Inspecting the expression \( LC (\varphi ,s,X)\) in Definition 11, one realizes that it is actually a constraint over p and q in linear arithmetic. However, due to the recursive/inductive nature of Definitions 14 and 18, \(\mathcal {C}\! ons \) and \(\mathcal {C}\! ons _n\) are not immediately in the form of a Boolean combination over linear arithmetic constraints. We can rephrase the definition using an enumeration over all subsets of successors, similar to [12]. Since we consider finite PIMCs, the enumeration \(\exists X\subseteq S\) corresponds to a finite disjunction. Doing this we get the following variation of the inductive and co-inductive consistency definition:

Corollary 28

Let IMC \((S,s_0,\varphi )\) be given, and let \(s\in S\), \(n\in {\mathbb {N}}\).

  1.

    \(s\in Cons _{n+1} \equiv \ \exists X\subseteq \sup (\varphi ,s): s\in LC_\varphi (X) \wedge X\subseteq Cons _n\)

  2.

    \(s \in \mathcal {C}\! ons \equiv \exists X\subseteq \sup (\varphi ,s): s \in LC_\varphi (X) \wedge X\subseteq \mathcal {C}\! ons \)

Proof

We only prove (1), since (2) is similar.

\(\Rightarrow \): Choosing \(X:= Cons _n\) is sufficient by Lemma 19(1).

\(\Leftarrow \): If for some X, \(s\in LC_\varphi (X \cap sup (\varphi ,s))\) and \(X\subseteq Cons _n\), then by monotonicity, Lemma 12(1), \(s\in LC_\varphi ( Cons _{n}\cap sup (\varphi ,s))\), so \(s\in Cons _{n+1}\) by Lemma 19(1).    \(\square \)

It becomes clear that the synthesised parameter constraints will be Boolean combinations of linear arithmetic constraints. In particular, expressions in DNF (disjunctive normal form) provide clear insight into all possible parameter combinations. The number of combinations can be quite large, but we noticed that in many examples most of them are subsumed by only a few maximal solutions. In particular, we will use the following notations for operators on DNFs:

  • \(\top \), \(\bot \) denote true and false (universe and empty set)

  • \(\sqcup \) for their union (including subsumption simplification)

  • \(\sqcap \) for their intersection (including transformation to DNF and simplification)

  • \(\sqsubseteq \) for their (semantic) subsumption relation

We have experimented with two prototype realizations of our algorithms. One approach is based on CLP (constraint logic programming) in SWI-Prolog [19] + CLP(\(\mathbb {Q}\)). We wrote a small meta-program for the subsumption check. The other approach is based on Parma Polyhedra Library [1]. In particular, its Pointset Powerset Closed Polyhedra provide efficient implementations of the operations on DNFs.

5.1 Inductive Approach

In the inductive approach, Corollary 28(1) gives rise to a set of equations on variables \(v_{n,s}\) (state s is n-consistent):

$$\begin{aligned} \mathbf {let~} v_{0,s} &= \top \\ \mathbf {let~} v_{n+1,s} &= \bigsqcup _{X\subseteq \sup (\varphi ,s)} \big ( LC_\varphi (X) \sqcap \bigsqcap _{t\in X} v_{n,t}\big ) \end{aligned}$$

Starting from the initial state variable \(v_{|S|,s_0}\), we only need to generate the reachable equations that are not pruned away by inconsistent \( LC_\varphi (X)\) constraints, and we need to compute each equation only once. Note that there will be at most \(|S|^2\) reachable equations. The number of conjunctions per equation is bounded by \(2^d\), the number of subsets \(X\subseteq sup (\varphi ,s)\), and for each X we build a conjunction of length O(d). So, the size of the whole set of equations is bounded by \(O(|S|^2\cdot d\cdot 2^d)\). In general, however, there is no polynomial upper bound on the size of the corresponding solution. Also, note that by Theorem 24, we could replace |S| by the length of the longest simple path.

The set of equations can be interpreted directly as a Constraint Logic Program in Prolog, with predicates cons(N,S). Prolog will compute the dependency tree for \(s\in Cons _n\), by backtracking over all choices for X. Along the way, all encountered \( LC_\varphi \) predicates are asserted as constraints to the CLP solver. This has the advantage that locally inconsistent branches will be pruned directly, without ever generating their successors. By enumerating all feasible solutions, we obtain the complete parameter space as a disjunction of conjunctive constraints.

However, the computation tree contains a lot of duplicates, and the number of returned results is very high, since we start out with a deeply nested and-or-expression. We provide a more efficient version in Algorithm 3. Here recomputations are avoided by caching all intermediate results in a \( Table \) (see lines 3 and 12). For each enumerated subset of successors X (line 12), the algorithm checks \((n-1)\)-consistency. Note that we “shortcut” this computation as soon as we find that either X is not locally consistent, or some \(t\in X\) is not \((n-1)\)-consistent (line 8). The final optimization is to suppress all subsumed results in the resulting conjunctions and disjunctions. We indicate this by using \(\sqcap \) and \(\sqcup \). This drastically reduces the number of returned disjuncts.

We have implemented Algorithm 3 in Prolog+CLP, using meta-programming techniques to suppress subsumed results. Alternatively, one could implement the algorithm directly on top of the Parma Polyhedra Library.

[Algorithm 3 listing omitted]
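To make the shape of the synthesized constraints concrete, the following Python/sympy sketch performs the cached subset enumeration of this section on a small hypothetical PIMC (it is a simplified stand-in for Algorithm 3: intermediate results are cached, but no subsumption is applied, and the global bounds \(0\le q\le 1\) are not conjoined). Parametric bounds are sympy symbols; nothing here is the authors' Prolog+CLP(Q) implementation.

import itertools
import sympy as sp

q = sp.symbols('q', nonnegative=True)

# phi[s][t] = (lower, upper); bounds are constants in [0, 1] or parameters.
phi = {0: {1: (q, sp.Rational(7, 10)), 2: (sp.Rational(3, 10), 1)},
       1: {1: (1, 1)},
       2: {2: (1, 1)}}

def lc_constraint(phi, s, X):
    """Constraint over the parameters encoding LC(phi, s, X) (Definition 11)."""
    row = phi[s]
    return sp.And(sp.Add(*[row[t][1] for t in X]) >= 1,
                  sp.Add(*[row[t][0] for t in X]) <= 1,
                  *[row[t][0] <= row[t][1] for t in X],
                  *[sp.Eq(row[t][0], 0) for t in row if t not in X])

def cons_constraint(phi, s, n, cache=None):
    """Constraint for 's is n-consistent' (Corollary 28(1)), enumerating
    subsets of successors and caching intermediate results."""
    cache = {} if cache is None else cache
    if n == 0:
        return sp.true
    if (s, n) not in cache:
        succs = [t for t in phi[s] if phi[s][t][1] != 0]  # potential support
        cache[(s, n)] = sp.Or(*[
            sp.And(lc_constraint(phi, s, set(X)),
                   *[cons_constraint(phi, t, n - 1, cache) for t in X])
            for k in range(len(succs) + 1)
            for X in itertools.combinations(succs, k)])
    return cache[(s, n)]

# A disjunction of conjunctions of linear constraints over q:
print(cons_constraint(phi, 0, n=3))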

5.2 Co-inductive Approach

Next, we show how to encode co-inductive consistency in a Boolean Equation System (BES) over DNFs (sets of polyhedra). Here, the equations will be recursive. The largest solution for variable \(v_s\) characterizes the parameter values for which state s is consistent. This solution then provides a description of the set of all parameters for which the PIMC is consistent.

Definition 29

(BES in DNF). Given a PIMC \({\mathcal P}=(P,S,s_0,\varphi )\), we define the BES as the following set of equations, for each formal variable \(v_s\), \(s\in S\):

$$ \Big \{ v_s = \bigsqcup _{X\subseteq \sup (\varphi ,s)} \big ( LC_\varphi (X)\sqcap \mathop {{\bigsqcap }}\limits _{t\in X} v_t \big ) ~\Big |~ s\in S\Big \} $$

We can bound the size of this BES and the number of iterations for its solution. Again, the size of the final solution cannot be bounded polynomially.

Lemma 30

For each PIMC \({\mathcal P}=(P,S,s_0,\varphi )\), with out-degree bounded by d, the corresponding BES has size \(O(|S|\cdot d\cdot 2^d)\). The BES can be solved in \(O(\ell )\) iterations, where \(\ell \) is the length of the longest simple path in \({\mathcal P}\).

Proof

We have |S| equations of size at most \(O(d.2^d)\) each. The BES can be solved by value iteration. Let \(F_s\) denote the right hand side of the equation for \(v_s\). We can compute the largest solution by iteration and substitution as follows:

$$\begin{aligned} \sigma _0 &= \lambda s. \top \\ \sigma _{n+1} &= \lambda s. F_s[ v_t \mapsto \sigma _n(t) ~|~ t\in S] \end{aligned}$$

By monotonicity, \(\sigma _{n}\sqsupseteq \sigma _{n+1}\) (pointwise). We can terminate whenever \(\sigma _{n}\sqsubseteq \sigma _{n+1}\). Since it can be proved by induction that \(\sigma _n \equiv Cons _n\), the process terminates within \(\ell \) steps by Theorem 24.    \(\square \)

Solving this BES can be done by straightforward value iteration, see Lemma 30. The intermediate expressions can be viewed as collections of polyhedra, represented e.g. by the Parma Polyhedra Library as Powersets of Convex Polyhedra [1]. Algorithm 4 provides a variant of the iteration in Lemma 30. Here we only update the polyhedra for nodes whose successors have been modified. To this end, we maintain a worklist Q of states that we must check initially (line 1) or when their successors are updated (line 9). This algorithm can be viewed as the parametric variant of the backward Algorithm 1.
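The same machinery gives a small value-iteration sketch in the spirit of Algorithm 4 and Lemma 30 (again our own illustration: each \(\sigma _n(s)\) is kept as a sympy formula rather than as a PPL powerset of polyhedra, and instead of detecting stabilization we simply run |S| rounds, which suffices because simple paths contain at most |S| states). The PIMC is the same hypothetical example as in the sketch of Sect. 5.1.

import itertools
import sympy as sp

q = sp.symbols('q', nonnegative=True)
phi = {0: {1: (q, sp.Rational(7, 10)), 2: (sp.Rational(3, 10), 1)},
       1: {1: (1, 1)},
       2: {2: (1, 1)}}

def lc_constraint(phi, s, X):
    """Parameter constraint encoding LC(phi, s, X) (Definition 11)."""
    row = phi[s]
    return sp.And(sp.Add(*[row[t][1] for t in X]) >= 1,
                  sp.Add(*[row[t][0] for t in X]) <= 1,
                  *[row[t][0] <= row[t][1] for t in X],
                  *[sp.Eq(row[t][0], 0) for t in row if t not in X])

def solve_bes(phi):
    """Value iteration for the BES of Definition 29, starting from 'true'."""
    sigma = {s: sp.true for s in phi}
    for _ in range(len(phi)):          # |S| rounds are enough by Lemma 30
        sigma = {s: sp.Or(*[sp.And(lc_constraint(phi, s, set(X)),
                                   *[sigma[t] for t in X])
                            for k in range(len(phi[s]) + 1)
                            for X in itertools.combinations(phi[s], k)])
                 for s in phi}
    return sigma

print(solve_bes(phi)[0])   # constraint on q under which state 0 is consistent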

Example 31

On the example in Fig. 2a, Definition 29 gives rise to the following equation system (simplifying trivial arithmetic inequalities and global bounds \(0\le p,q\le 1\)).

[Equation system for Fig. 2a omitted]

We solve the simplified BES by value iteration as follows. In approximation 0, each \(v_i\) is true. We show approximations 1–3, of which the third is the stable solution. From the final result, we can conclude that the initial state in Fig. 2a is consistent if and only if \(0.3\le q \le 0.7 \vee q=1\).

[Approximations 1–3 omitted]

6 Experiments

To get an indication of the effectiveness of our algorithms and optimizations, we performed some experiments with a Prolog prototype implementation on Lehmann and Rabin's randomized dining philosophers from Prism [15]. First, we modified that DTMC into an IMC by replacing probabilities like 0.1666 by intervals [0.1, 0.2]. This made the consistency analysis numerically more stable. Subsequently, we replaced some probabilities by parameters to get a PIMC, and experimented with different intervals, as featured in the first column of Table 1, for one parameter P or two parameters P and Q. The number of edges carrying these parameters is given in the third column of the table. We compared the inductive algorithm for increasing n-consistency with the co-inductive algorithm. Column CLP shows our Prolog implementation of the inductive algorithm from [12]. CLP+opt corresponds to the inductive Algorithm 3, including our optimizations, i.e. caching and subsumption. PPL shows our co-inductive Algorithm 4. We carried out the experiments on the models for 3 philosophers (956 states, 3625 edges) and 4 philosophers (9440 states, 46843 edges).

Table 1 shows the results; all times are measured with the SWI-Prolog library statistics. Increasing n in the n-consistency check highly impacts the performance. It is clear that caching and subsumption are useful optimizations, since we can now compute consistency for much higher n. The co-inductive variant with PPL wins, since it is always faster than CLP+opt with \(n=50\). We also observe that the increase in time from 3 to 4 philosophers is considerable. This is consistent with the complexity result (note that for phil4, \(d\approx 5\), so \(d\cdot 2^d\approx 150\)).

Note that PPL computes consistency constraints for all states and for all n, whereas CLP only computes this for small n and the initial state. As an example, the constraint computed by PPL for 3 philosophers, 2 parameters, and intervals [0, P] and [0.3, Q] is \((3P\ge 0.5 \wedge Q\ge 0.34) \vee (2P+Q\ge 1 \wedge P\ge 0.25 \wedge 3Q\ge 1)\).

Table 1. Experimental results. All entries are time in seconds. Timeout was 1 h.

7 Perspectives

Using inductive and co-inductive definitions of consistency, we provided forward and backward algorithms to synthesize an expression for all parameters for which a PIMC is consistent. The co-inductive variant, combined with a representation based on convex polyhedra, provides the most efficient algorithm. We believe that our work extends straightforwardly when the intervals are replaced by arbitrary linear constraints, since both CLP and PPL can handle linear constraints. The resulting formalism would generalize (linear) Constraint Markov Chains [11] by adding global parameters. Also, our approach should apply directly to parameter synthesis for consistent reachability [12].

We also plan to experiment with other case studies, e.g. the benchmarks of [2], with an implementation that is more elaborate than our initial prototype. This would give insight into how the algorithms scale up w.r.t. the number of parameters and of parametric edges. Finally, [2] gives a polynomial-size CLP with extra variables, but does not address the parameter synthesis problem. Polynomial-size CLP programs without extra variables can be obtained using if-then-else, e.g. \((v_0?1:0)+(v_1?1:0)+(v_2?p:0)\le 1\). However, we do not know of a method to solve equation systems with conditionals without expanding them into large intermediate expressions.