1 Introduction

Model checking [9, 24] is an approach to formal methods in which a system is represented as a model M, a system behavior of interest is represented as a formula \(\phi \) of a suitable temporal logic, and the question of whether the model satisfies that property (written \(M\models \phi \)) is decided by an algorithm parametric in M and \(\phi \). For infinite models, this question is often undecidable and may therefore require the abstraction of models to finite ones [2].

Program analyses (see e.g. [23]) consider programs P and aim to answer questions such as “Are there portions of code in P that can never be reached during execution?”. Since exact answers may be undecidable, abstraction is often used to under-approximate or over-approximate such answers, for example, the set of program points that can never be reached. Many program analyses can be computed by a static analysis that computes a least fixed point of a monotone function over a complete lattice; see for example Chapter 6 in [23] for more details on this approach based on worklist algorithms.

These two approaches, model checking and static analysis, appear to be quite different even though they share the need for abstraction. For example, it is not immediately clear whether each program analysis might correspond to a property \(\phi \) of some suitable logic. But there is a body of research that points out a close relationship and connections between these approaches. For example, in [26] it is shown how data-flow analyses can be seen as instances of model checking: if programs are represented as models of a modal logic, one can capture a data-flow analysis as a formula in that modal logic, and then partially evaluate the model checker for that logic to thus implement the data-flow analyzer. This insight led to an actual methodology: in [25], one converts a program into a transition system as program model using its operational semantics, then applies abstraction [3, 4] to eliminate details of that model that are irrelevant to the analysis/formula in question, and finally one model checks the abstract model using formulas that capture the analysis in question.

These contributions furthered the understanding of how program analysis can be seen within the framework of model checking. Conversely, it turns out that the central question of model checking, whether \(M\models \phi \) holds, can be computed with techniques from static analysis. In [22], an alternation-free fixed-point logic was defined and it was shown how static analysis over the resulting flow logic can decide model-checking instances for modal logics such as computation tree logic (CTL) [9]. The flow logic in [22] was also demonstrated to have applications in data-flow analysis and constraint solving [11]. In later work [28], this alternation-free least fixed-point logic was extended so that one could capture model checking of the modal mu-calculus [18] (not just of CTL) in this manner, and a Moore family result was proved for this logic; a Moore family is the set of closed sets of a closure operator.

The temporal logic CTL and the linear-time temporal logic LTL can be seen as subsets of the temporal logic CTL* (see e.g. [15]). The logic CTL* can in turn be embedded into the modal mu-calculus [5], although at an exponential cost [19]. LTL and CTL capture many practically important property patterns [7] and are therefore very useful. But some have argued that these logics are mathematically somewhat ad hoc. The modal mu-calculus, on the other hand, is more canonical since it does not limit the manner in which fixed-point patterns can be composed (apart from syntactic restrictions that ensure monotonicity of meaning). It is therefore apt to understand the connections between static analysis and model checking over the modal mu-calculus as well, and the work reported in [28] shows how static analysis in the form of flow logics can capture model checking of the modal mu-calculus.

There is another important aspect to the study of such connections though. It is well understood [8, 10, 27] that model checking of the modal mu-calculus is equivalent (within polynomial time) to the solving of parity games. These are directed graphs whose nodes are owned by one of two players and colored with a natural number. In this chapter, we assume that such graphs are finite. Plays between the two players generate infinite paths in these graphs, whose winner is determined by the minimal color that occurs infinitely often on the path. A player wins a node if she can play such that all plays beginning in that node are won by her in this manner. A central result for parity games states that these games are determined [8, 21, 29]: each node is won by exactly one of the two players. Deciding which player wins which nodes, and how they can achieve these wins, is what one means by solving parity games.

Using the aforementioned results in [8, 10, 27], we can therefore understand how to use static analysis for model checking by understanding how static analyses may solve parity games. Known approaches to solving parity games in this manner, for example those based on small progress measures [17], all suffer from the fact that the height of the ordered domain derived from the parity game may be exponentially larger than that game – leading to exponential worst-case running times of the least fixed-point computations in the resulting worklist algorithm that implements a static analysis. In fact, the decision problem of whether a given node in a parity game is won by a given player is in UP\({}\cap {}\)coUP [16], and its exact complexity has been an open problem for over twenty years now.

The work that we report here means to combine static analysis with abstraction. The analyses we design below run in polynomial time by construction. But this efficiency is gained by possibly under-approximating the solution of a parity game: the static analysis used may not decide the winners of all (or indeed any) nodes, although it often solves games completely. Furthermore, in local model checking (see e.g. [27]) it suffices to know whether one or several designated states satisfy a property. In the setting of parity games, this means that it may suffice to statically decide the winner of one or several nodes – which the static analyses we present here may often achieve.

Outline of Chapter: In Sect. 2, we recall background on parity games. Our new type of alternating reachability game is defined and studied in Sect. 3. In Sect. 4, we show how this game induces monotone functions for each player of a parity game, and that we can use these functions to build static analyses of parity games that repeatedly compute greatest fixed points of such functions on (residual) games. We discuss, in Sect. 5, how this approach generalizes our earlier work on fatal attractors in [13]. Our experimental results are reported in Sect. 6, related work not discussed above already is presented in Sect. 7, and the chapter concludes in Sect. 8.

2 Background

In this section, we define key concepts of parity games and fix the technical notation used in this chapter. We write \(\mathbb {N}\) for the set \(\{0,1,\dots \}\) of natural numbers. A parity game G is a tuple \((V,V_0,V_1,E,c)\), where V is a set of nodes partitioned into possibly empty node sets \(V_0\) and \(V_1\), with an edge relation \(E\subseteq V\times V\) (where for all v in V there is a w in V with (v, w) in E), and a coloring function \(c:V\rightarrow \mathbb {N}\). In figures, c(v) is written within nodes v, nodes in \(V_0\) are depicted as circles and nodes in \(V_1\) as squares. For v in V, we write v.E for the node set \(\{ w\in V\mid (v,w) \in E\}\) of successors of v. Below we write \(\mathsf {C}({G})\) for the set of colors in game G, i.e. \(\mathsf {C}({G}) = \{c(v)\mid v\in V\}\), and \(\mathsf {C}({G}) _\bot \) for the set \(\mathsf {C}({G})\cup \{\bot \}\).
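To make the later algorithmic discussion concrete, a parity game can be represented with a few dictionaries. The sketch below is ours, not the chapter's; the example data is a guess at the game of Fig. 1, reconstructed from Example 1, the captions of Figs. 2 and 3, and the proof of Theorem 3, so the exact colors are assumptions.

```python
# A minimal dictionary-based representation of a parity game (our own
# sketch). V is the key set of `owner`; V_0 and V_1 are induced by it.
owner = {"v0": 1, "v1": 0, "v2": 1}            # 0: circle, 1: square
edges = {"v0": {"v1"}, "v1": {"v0", "v2"}, "v2": {"v1"}}
colour = {"v0": 4, "v1": 3, "v2": 2}           # colouring c : V -> N

def colours(colour):
    # C(G): the set of colours occurring in the game
    return set(colour.values())

# the definition requires every node to have at least one successor
assert all(edges[v] for v in owner)
```

Any concrete encoding with total edge relation and integer colors would do equally well; dictionaries merely keep the later worklist sketches short.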

Throughout, we write p (or sometimes \(p'\)) for one of the players 0 or 1, and \(1-p\) for the other player. In a parity game, player p owns the nodes in \(V_p\). A play from some node \(v_0\) is an infinite sequence \(\pi = v_0v_1\dots \) in (V, E) where the player who owns \(v_i\) chooses the successor \(v_{i+1}\) such that \((v_i,v_{i+1})\) is in E. Let \(\mathsf {Inf}({\pi })\) be the set of colors that occur in \(\pi \) infinitely often:

$$ \mathsf {Inf}({\pi })= \{ k\in \mathbb {N}\mid \forall j\in \mathbb {N}:\exists i\in \mathbb {N}:i>j \text{ and } k=c(v_i) \} $$

Player 0 wins play \(\pi \) iff \(\min \mathsf {Inf}({\pi })\) is even; otherwise player 1 wins play \(\pi \).
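For a finitely presented play that eventually repeats a cycle (a lasso), \(\mathsf {Inf}({\pi })\) is exactly the set of colors on the repeated cycle, so the winner can be computed directly. A small sketch (function name ours; the colors are our guess at Fig. 1):

```python
def lasso_winner(colour, prefix, cycle):
    # For the play prefix . cycle^omega, Inf(pi) is the set of colours on
    # the cycle, so player 0 wins iff min Inf(pi) is even.
    inf = {colour[v] for v in cycle}
    return 0 if min(inf) % 2 == 0 else 1

# colours guessed to be consistent with Fig. 1
colour = {"v0": 4, "v1": 3, "v2": 2}
```

For instance, the lasso with cycle \(v_1v_2\) has \(\mathsf {Inf}({\pi }) = \{3,2\}\) with minimum 2, a win for player 0.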

A strategy for player p is a total function \(\sigma _p:V^*\cdot V_p\rightarrow V\) where the pair \((v,\sigma _p(w\cdot v))\) is in E for all v in \(V_p\) and w in \(V^*\). A play \(\pi \) conforms with \(\sigma _p\) if for every finite prefix \(v_0\dots v_i\) of \(\pi \) with \(v_i\) in \(V_p\) we have \(v_{i+1}=\sigma _p(v_0\dots v_i)\). A strategy \(\sigma _p\) is memoryless if for all \(w,w'\) in \(V^*\) and v in \(V_p\) we have \(\sigma _p(w\cdot v) = \sigma _p(w'\cdot v)\) and such a \(\sigma _p\) can be seen to have type \(V_p\rightarrow V\).

It is well known that each parity game is determined [8, 21, 29]: (i) node set V is the disjoint union of two, possibly empty, sets \(W_0\) and \(W_1\), the winning regions of players 0 and 1 (respectively); and (ii) there are memoryless strategies \(\sigma _0\) and \(\sigma _1\) such that all plays beginning in \(W_0\) and conforming with \(\sigma _0\) are won by player 0, and all plays beginning in \(W_1\) and conforming with \(\sigma _1\) are won by player 1. Solving a parity game means computing such data \((W_0,W_1,\sigma _0,\sigma _1)\).

Throughout this chapter, we write G for a parity game \((V,V_0,V_1,E,c)\), denote by p one of its players, and let X be a non-empty set of nodes of G. We write \(x\% 2\) for x modulo 2 for an integer x, and \({{\mathsf {Attr}}_{{p}}[{G},{X}]}\) to denote the attractor of node set X for player p, which computes the standard alternating reachability of X for that player in the game graph of G (see e.g. Definition 1 in [13]).
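The attractor computation itself is a standard backward induction: a node owned by player p is attracted once some successor is attracted, and a node owned by player \(1-p\) once all of its successors are. A minimal sketch (the function name, dictionary representation, and example game are ours; the example is a guess at Fig. 1):

```python
from collections import deque

def attractor(owner, edges, p, target):
    # Attr_p[G, X]: nodes from which player p can force a visit to `target`
    preds = {v: set() for v in edges}
    for v, ws in edges.items():
        for w in ws:
            preds[w].add(v)
    attr = set(target)
    pending = {v: len(edges[v]) for v in edges}   # unattracted successors
    queue = deque(attr)
    while queue:
        w = queue.popleft()
        for v in preds[w] - attr:
            if owner[v] == p:                     # one good successor suffices
                attr.add(v); queue.append(v)
            else:                                 # player 1-p: needs them all
                pending[v] -= 1
                if pending[v] == 0:
                    attr.add(v); queue.append(v)
    return attr

# example game guessed from Fig. 1
owner = {"v0": 1, "v1": 0, "v2": 1}
edges = {"v0": {"v1"}, "v1": {"v0", "v2"}, "v2": {"v1"}}
```

This runs in time linear in the number of edges, which is the bound used in the complexity analysis below.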

Example 1

In the parity game G depicted in Fig. 1, the winning regions are \(W_1 = \{\}\) and \(W_0 = V\). The memoryless strategy \(\sigma _0\), defined by \(\sigma _0(v_1) = v_2\), is a winning strategy for player 0 on \(W_0\).

Fig. 1.
figure 1

A parity game: circles denote nodes in \(V_0\), squares denote nodes in \(V_1\).

3 Alternating Reachability Under Parity

In this section, we generalize alternating reachability in parity game graphs, so that this reachability is aware of minimal colors encountered en route:

Definition 1

Given parity game G, player p, and non-empty node set X, let \(\pi = v_0v_1\dots \) be an infinite play in G.

  1.

    Player p wins play \(\pi \) in the reachability game for (X, p) under parity iff there is some \(j >0\) such that \(v_j\) is in X and \(\min \{c(v_i)\mid 0\le i\le j\} \% 2 = p\). Dually, player \(1-p\) wins play \(\pi \) in that reachability game iff she detracts from (X, p) under parity, that is to say iff for all \(j > 0\) we have that \(v_j\) in X implies \(\min \{c(v_i)\mid 0\le i\le j\} \% 2 = 1- p\).

  2.

    A strategy for player \(p'\) in this game is defined like a strategy for that player in the parity game G. Also, the definition of when plays conform with strategies in this game is the same as for parity game G.

  3.

    Player \(p'\) wins a node v for reachability of (X, p) under parity iff she has a strategy \(\sigma _{p'}\) such that all plays starting from v and conforming to \(\sigma _{p'}\) are winning for player \(p'\) in the reachability game for (X, p) under parity.

  4.

    We write \(\mathsf {W}_{r}^{{p}}({G},{X})\) for the set of nodes that player p wins in this manner (we won’t need notation for the set of nodes won by player \(1-p\)).

This acceptance condition binds p to X: it is player p who wants to reach (X, p) under parity. Also, starting from X in a play does not yet mean that X has been reached. In particular, player \(1-p\) wins all plays that don’t visit X after the initial node. An immediate question is whether such games are determined and how complex it is to solve them. We answer these questions next.
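The winning condition of Definition 1 can be evaluated on any finite play prefix: player p wins as soon as some position \(j>0\) lies in X while the minimal color seen so far has parity p. A sketch (function name ours; the colors are our guess at Fig. 1):

```python
def wins_prefix(colour, play, X, p):
    # True iff some j > 0 has play[j] in X and the minimum of the colours
    # of play[0..j] congruent to p modulo 2 (Definition 1, item 1).
    m = colour[play[0]]
    for v in play[1:]:
        m = min(m, colour[v])
        if v in X and m % 2 == p:
            return True
    return False   # an infinite play with no such j is won by player 1-p

colour = {"v0": 4, "v1": 3, "v2": 2}
```

On the guessed Fig. 1 colors with \(X=\{v_0\}\) and \(p=0\), the prefix \(v_0v_1v_0\) reaches X with minimal color 3, so player 0 does not win yet, whereas detouring via \(v_2\) first pushes the minimal color down to 2.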

Lemma 1

For all parity games G, players p, and non-empty node sets X, the derived game in G of reaching (X, p) under parity is determined.

Proof

For a color i in \(\mathsf {C}({G})\) and node set \(S\subseteq V\) let \(S_i=\{v\in S\mid c(v)=i\}\) and \(S_{\ge i}=\{ v\in S \mid c(v)\ge i\}\). Also, let \(C =\{c\in \mathsf {C}({G})\mid c\%2=p\}\). The set of winning plays for player p in the reachability game for (X, p) under parity is the union of \((V^*_{\ge i} \cdot V_i \cdot V^*_{\ge i} \cdot X_{\ge i} \cdot V^\omega ) \cup (V^+_{\ge i} \cdot X_i \cdot V^\omega )\) over all i in C. Note that, for each such i, both expressions in this union capture the non-deterministic choice of reaching X in Definition 1. The difference in these expressions is merely that the minimal color i may be witnessed before that non-deterministic choice of reaching X. The set of winning plays for player p is thus a Borel definable set of paths. From the Borel determinacy of turn-based games [20] it therefore follows that the game is determined.    \(\square \)

Next, we derive from parity game G and node set X a game graph that reduces reachability of (X, p) under parity to (the usual alternating) reachability in the derived game graph. This derived game has nodes of the form (v, l) where l records the minimal color encountered so far. In particular, we use \(l=\bot \) to model that a play is just beginning.

Definition 2

For parity game \(G=(V,V_0,V_1,E,c)\), player p, and non-empty node set X, game graph \(G^p_X = (V\times \mathsf {C}({G})_\bot ,E')\) is defined as follows: For c in \(\mathsf {C}({G}) _\bot \), player 0 owns all nodes (v, c) with \(v\in V_0\), and player 1 owns all nodes (v, c) with \(v\in V_1\). The edge relation \(E'\subseteq (V\times \mathsf {C}({G})_\bot )\times (V\times \mathsf {C}({G})_\bot )\) is defined as

$$\begin{aligned} E' = {}&\{((v,\bot ),(v',\min (c(v),c(v'))))\mid (v,v')\in E \} \cup {} \\&\{((v,c),(v',\min (c,c(v'))))\mid (v,v')\in E,\ c\in \mathsf {C}({G}),\ (v\not \in X\ \hbox {or}\ c\% 2 \not =p)\} \end{aligned}$$
(1)
Fig. 2.
figure 2

Game graph \(G^p_X\) for G from Fig. 1 and X being \(\{v_0\}\); only nodes and edges reachable (in the non-alternating sense) from \(X\times \{\bot \}\) in \(G^p_X\) are shown, as this is all that is needed for deciding which nodes in X are contained in \(\mathsf {W}_{r}^{{p}}({G},{X})\). The winning strategy for player 0 requires her to make different choices from the same nodes of G when they are combined with different colors: player 0 needs to move from \((v_1,3)\) to \((v_2,2)\) and from \((v_1,2)\) to \((v_0,2)\) in \(G^p_X\).

Note that relation \(E'\) is even contained in \((V\times \mathsf {C}({G})_\bot )\times (V\times \mathsf {C}({G}))\), and that game graph \(G^p_X\) contains dead ends (nodes that don’t have outgoing edges in the game graph). The latter is not an issue since all dead ends in \(G^p_X\) are target nodes for the alternating reachability in \(G^p_X\). Figures 2 and 3 show examples of this construction.
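Equation (1) translates directly into code. The sketch below (names ours) enumerates \(E'\); pairs (v, c) with v in X and \(c\% 2 = p\) receive no outgoing edges and are exactly the dead-end targets just discussed. The assertions mirror the moves named in the caption of Fig. 2, on our guessed reconstruction of the Fig. 1 game.

```python
BOT = None    # stands for the colour 'bottom'

def derived_edges(edges, colour, X, p):
    # E' of equation (1): second components track the minimal colour so far
    cols = set(colour.values())
    E2 = set()
    for v in edges:
        for w in edges[v]:
            E2.add(((v, BOT), (w, min(colour[v], colour[w]))))
            for c in cols:
                if v in X and c % 2 == p:
                    continue              # (v, c) is a target: dead end
                E2.add(((v, c), (w, min(c, colour[w]))))
    return E2

# our guessed reconstruction of the game of Fig. 1
edges = {"v0": {"v1"}, "v1": {"v0", "v2"}, "v2": {"v1"}}
colour = {"v0": 4, "v1": 3, "v2": 2}
E2 = derived_edges(edges, colour, {"v0"}, 0)
```

As in the text, only the part of \(E'\) reachable from \(X\times \{\bot \}\) would need to be constructed in practice; the sketch builds all of it for simplicity.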

Fig. 3.
figure 3

Game graph \(G^p_X\) for G from Fig. 1 and X being \(\{v_0,v_2\}\). As in Fig. 2, only nodes and edges reachable (in non-alternating sense) from \(X\times \{\bot \}\) in \(G^p_X\) are shown. The winning strategy for player 0 allows her to make choices that do not depend on the color annotating the states. She can move from \((v_1,c)\) to \((v_2,2)\) regardless of the value of c.

The intuition of game graph \(G^p_X\) is that player p wins node v in G for reaching (X, p) under parity iff player p wins node \((v,\bot )\) in the (alternating) reachability game in \(G^p_X\) for target set \(X\times \{c\in \mathsf {C}({G})\mid c\% 2 = p\}\). We state this formally:

Theorem 1

For G and \(G^p_X\) as above, let Z be \(X\times \{c\in \mathsf {C}({G})\mid c\% 2 = p\}\) and W be \(\{ v\in V\mid (v,\bot )\in {\mathsf {Attr}_{p}({G^p_X},{Z})}\}\). Then W is the winning region of player p in G for reachability of (X, p) under parity.

Proof

First, let \(W_p = \mathsf {W}_{r}^{{p}}({G},{X})\) be the winning region of player p in G for reachability of (X, p) under parity. Since this game has a Borel-defined winning condition, there exists a strategy \(\tau :V^*\cdot V_p\rightarrow V\) such that all plays conforming with \(\tau \) and starting in \(W_p\) are won by player p for reachability of (X, p) under parity.

We write \(\tau '\) for the same strategy but applied to \(G^p_X\) whilst ignoring the second component of nodes in \(G^p_X\). (We note that \(E'\) updates the second component of nodes in \(G^p_X\) deterministically.) Consider a play \(\pi \) in \(G^p_X\) that starts in \(W_p\times \{\bot \}\) and conforms with \(\tau '\). The projection of \(\pi \) onto the first components of its nodes is a play in G that starts in \(W_p\) and conforms with \(\tau \). Therefore, that play is won by player p in G, and so it is also won by player p in \(G^p_X\).

Second, it remains to show that \(\{ v\mid (v,\bot )\in {\mathsf {Attr}_{p}({G^p_X},{Z})}\}\) is contained in \(W_p\). Let \(\delta '\) be a winning (attractor) strategy for player p in \(G^p_X\) for the attractor \({\mathsf {Attr}_{p}({G^p_X},{Z})}\). As an attractor strategy, \(\delta '\) is memoryless. That is, for every node \((v,c) \in V_p \times \mathsf {C}({G})_\bot \) we can write \(\delta '(v,c)\) and this is in \(V\times \mathsf {C}({G})\). For a sequence of nodes \(\pi =v_0,\ldots , v_n\), let \(c(\pi )\) denote \(\min \{c(v_i)\mid 0\le i\le n\}\). Let \(\delta :V^*\cdot V_p \rightarrow V\) be the strategy obtained from \(\delta '\) by setting \(\delta (\pi \cdot v)=\delta '(v,c(\pi \cdot v))\). Then \(\delta \) is a strategy in G. Every play that begins in \(W = \{ v\mid (v,\bot )\in {\mathsf {Attr}_{p}({G^p_X},{Z})}\}\) and conforms with \(\delta \) in G can be extended to a play in \(G^p_X\) that begins in \({\mathsf {Attr}_{p}({G^p_X},{Z})}\) and conforms with \(\delta '\) by adding the deterministic second components. Therefore, this play is winning for player p in \(G^p_X\). It follows from the construction of \(E'\) that player p reaches X from \(W = \{ v\mid (v,\bot )\in {\mathsf {Attr}_{p}({G^p_X},{Z})}\}\) such that the minimal color encountered on the way in G has parity p.    \(\square \)

This theorem also gives us an upper bound on the complexity of solving games for reachability of (X, p) under parity, noting that alternating reachability is linear in the number of edges of the game graph, and that \(G^p_X\) has at most \({\mid \! {E}\! \mid }\cdot (1+{\mid \! {\mathsf {C}({G})}\! \mid })\) many edges.

Corollary 1

For G, p, and X as above, the reachability game in G for (X, p) under parity can be solved in time \(O({\mid \! {E}\! \mid }\cdot {\mid \! {\mathsf {C}({G})}\! \mid })\).
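Theorem 1 suggests a direct implementation of \(\mathsf {W}_{r}^{{p}}({G},{X})\): build \(G^p_X\), compute the p-attractor of its dead-end targets, and keep those v with \((v,\bot )\) attracted. The sketch below is ours; the assertions check it, on our guessed reconstruction of Fig. 1, against the claim in the proof of Theorem 3 that \(\mathsf {W}_{r}^{{0}}({G},{\{v_0\}})\) equals V.

```python
from collections import deque

BOT = None  # the colour 'bottom'

def w_reach(owner, edges, colour, X, p):
    # W_r^p(G, X) via Theorem 1: attractor for player p in the derived
    # game graph G^p_X towards its dead-end targets, read off at (v, BOT).
    cols = set(colour.values())
    dnodes = [(v, l) for v in edges for l in cols | {BOT}]

    def succs(v, l):
        if l is not BOT and v in X and l % 2 == p:
            return []                    # target (v, c): dead end
        return [(w, min(colour[v], colour[w]) if l is BOT
                     else min(l, colour[w])) for w in edges[v]]

    succ = {n: succs(*n) for n in dnodes}
    preds = {n: set() for n in dnodes}
    for n in dnodes:
        for m in succ[n]:
            preds[m].add(n)
    attr = {n for n in dnodes if not succ[n]}    # the targets
    pending = {n: len(succ[n]) for n in dnodes}  # unattracted successors
    queue = deque(attr)
    while queue:
        n = queue.popleft()
        for q in preds[n] - attr:
            if owner[q[0]] == p:
                attr.add(q); queue.append(q)
            else:
                pending[q] -= 1
                if pending[q] == 0:
                    attr.add(q); queue.append(q)
    return {v for v in edges if (v, BOT) in attr}

# our guessed reconstruction of the game of Fig. 1
owner = {"v0": 1, "v1": 0, "v2": 1}
edges = {"v0": {"v1"}, "v1": {"v0", "v2"}, "v2": {"v1"}}
colour = {"v0": 4, "v1": 3, "v2": 2}
```

The sketch builds all of \(G^p_X\) and so realizes the \(O({\mid \! {E}\! \mid }\cdot {\mid \! {\mathsf {C}({G})}\! \mid })\) bound of Corollary 1 only up to the restriction to reachable nodes mentioned in the figure captions.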

We later consider the issue of whether memoryless strategies suffice for winning in G for reachability of (X, p) under parity (they do not; see Theorem 3). However, from the proof of Theorem 1 it follows that the size of the memory required is bounded by the number of colors in the game (plus 1).

4 Monotone Functions for a Partial Solver

Let player p win node v for reaching (X, p) under parity in G. Then player p can make sure that X is reached from v, and in such a manner that the minimal color encountered en route has parity p. If all nodes in X are won by player p in this sense, node set X is then won by player p in the parity game G:

Lemma 2

For all G, X, and p such that X is contained in \(\mathsf {W}_{r}^{{p}}({G},{X})\), player p wins all nodes from X in parity game G.

Proof

For each v in X, player p has a strategy \(\sigma _v\) with finite memory such that all plays beginning at node v and conforming with \(\sigma _v\) will reach again some node in X such that the minimal color of that finite play has parity p. Because X is contained in \(\mathsf {W}_{r}^{{p}}({G},{X})\), player p can compose all these strategies into a strategy \(\sigma _p\) with finite memory as follows:

From \(v_0\) in X, she plays conforming with \(\sigma _{v_0}\) until a finite play \(v_0\dots v_k\) is generated such that \(v_k\) is in X and \(\min \{c(v_j)\mid 0\le j\le k\}\) has parity p. We know that such a finite subplay will be generated by \(\sigma _{v_0}\) as it is a winning strategy for player p witnessing that \(v_0\) is in \(\mathsf {W}_{r}^{{p}}({G},{X})\). At node \(v_k\), player p now continues to play conforming with strategy \(\sigma _{v_k}\). She can continue this composition pattern to generate an infinite play \(\pi = v_0\cdots v_k\cdots \) that is partitioned into infinitely many finite sub-plays \((\pi ^i)_{i\ge 0}\) that begin and end in X (and may contain other nodes in X) and that each have some minimal color \(c_i\) with parity p.

Since G has only finitely many nodes, this means that all colors that occur infinitely often in \(\pi \) are greater than or equal to some color that occurs as minimal color in infinitely many sub-plays \(\pi ^i\) (and so has parity p and also occurs infinitely often in \(\pi \)). Therefore, player p wins \(\pi \) in the parity game G and so the described strategy is also winning for player p on node set X in parity game G.    \(\square \)

We now put this lemma to use by characterizing such winning node sets as fixed points of a monotone function. For that, let \({V}^{p}\) be the (possibly empty) set of nodes of G that have color parity p, that is \({V}^{p}\) equals \(\{v\in V\mid c(v)\% 2 = p\}\). Let us consider the function \(F^{p}_{G}\), defined by

$$\begin{aligned} F^{p}_{G}:{\mathbb {P}({{V}^{p}})}\rightarrow {\mathbb {P}({{V}^{p}})},\qquad F^{p}_{G}(X) = X\cap \mathsf {W}_{r}^{{p}}({G},{X}) \end{aligned}$$
(2)

Lemma 2 then says, in particular, that all non-empty fixed points of \(F^{p}_{G}\) are node sets won by player p in parity game G. That function is monotone:

Lemma 3

For all G and p, function \(F^{p}_{G}\) defined in (2) is monotone.

Proof

Let X and Y be subsets of \({V}^{p}\) such that X is contained in Y. We need to show that \(F^{p}_{G}(X)\) is contained in \(F^{p}_{G}(Y)\) as well. By definition of \(F^{p}_{G}\), monotonicity follows if X or Y is empty. So let X and Y be non-empty. Since \(X\subseteq Y\) and since intersection is monotone, it suffices to show that \(\mathsf {W}_{r}^{{p}}({G},{X})\) is contained in \(\mathsf {W}_{r}^{{p}}({G},{Y})\). So let v be in \(\mathsf {W}_{r}^{{p}}({G},{X})\). Then player p has a winning strategy that ensures that all plays from node v reach X such that the minimal color encountered thus far has parity p. Since X is contained in Y, all such plays also reach Y such that the minimal color encountered en route has parity p. Therefore, the winning strategy for \(v\in \mathsf {W}_{r}^{{p}}({G},{X})\) is also a winning strategy for \(v\in \mathsf {W}_{r}^{{p}}({G},{Y})\), and so v is in \(\mathsf {W}_{r}^{{p}}({G},{Y})\) as claimed.    \(\square \)
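Because \(F^{p}_{G}\) is monotone on a finite lattice, its greatest fixed point is reached by Kleene iteration from the top element: \(X_0 = {V}^{p}\), \(X_{i+1} = F^{p}_{G}(X_i)\), descending until stable. A generic sketch (ours), shown with a toy monotone function rather than \(F^{p}_{G}\) itself:

```python
def gfp(F, top):
    # Kleene iteration: top >= F(top) >= F(F(top)) >= ... stabilises on
    # the greatest fixed point of the monotone F below `top`.
    X = set(top)
    while True:
        Y = F(X) & X          # F^p_G already satisfies F(X) subset of X
        if Y == X:
            return X
        X = Y

# a toy monotone function: keep x only if its successor x+1 is also kept
F = lambda X: {x for x in X if x + 1 in X}
```

On \(\{0,1,2,3\}\), the toy function descends \(\{0,1,2\}\supseteq \{0,1\}\supseteq \{0\}\supseteq \emptyset \), mirroring the shrinking node sets seen later in Example 3.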

Neither the monotonicity of \(F^{p}_{G}\) nor the result of Lemma 2 depends on the fact that all nodes in X have color parity p, or indeed on anything being known about colors in X; for Lemma 2, it only matters that all nodes in X are also in \(\mathsf {W}_{r}^{{p}}({G},{X})\). It is of interest to note that function \(F^{p}_{G}\) would not be monotone if we were to change the acceptance condition for reaching (X, p) under parity to mean that player p has to realize minimal color parity p at the first time she reaches X after the first node in the play. Formally, player p would win a play \(\pi \) iff there were some \(j > 0\) with \(\pi _j\) in X such that \(\min \{c(\pi _i)\mid 0\le i\le j\}\% 2\) equals p and there were no k with \(0 < k < j\) such that \(\pi _k\) would be in X. The resulting non-monotonicity of this modified acceptance condition is illustrated in Fig. 4.

Fig. 4.
figure 4

Function \(F^{0}_{G}\) is no longer always monotone when \(\mathsf {W}_{r}^{{p}}({G},{X})\) has an acceptance condition that looks at the minimal color of the prefix for the first reached element of X, instead of a non-deterministically chosen first or future element of X. For G above and \(X = \{v_3, v_5\}\) and \(Y = {V}^{0}\), we would then have \(X\subseteq Y\) but \(F^{0}_{G}(X) = \{v_3,v_5\}\) would not be contained in \(F^{0}_{G}(Y) = \{v_0,v_1\}\) under that modified acceptance condition.

Monotonicity of \(F^{p}_{G}\) means that either all its fixed points are empty or its greatest fixed point is non-empty. This suggests an algorithm that recursively computes such greatest fixed points for each player p, and removes non-empty ones as being recognized winning regions for player p from parity game G until either G is solved completely or both \(F^{0}_{G}\) and \(F^{1}_{G}\) have only empty fixed points. The pseudo-code for this algorithm psolC is shown in Fig. 5.

Fig. 5.
figure 5

Partial solver psolC: in \(G^p_X\), only \(X\cap \mathsf {W}_{r}^{{p}}({G},{X})\) needs to be computed. So this is implemented by only constructing nodes and edges in \(G^p_X\) that are reachable from \(X\times \{\bot \}\) in the non-alternating sense.

When a greatest fixed point is discovered for player p, the partial solver removes the p attractor of that fixed point in parity game G from G, not just the fixed point. This is sound since winning node sets for players in parity games are closed under attractors for those players. The pseudo-code does not show the accumulation of the removed node sets into winning regions, as these are routine administrative matters that only detract from the essence of this partial solver.
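The loop structure just described can be sketched as follows; Fig. 5's actual pseudo-code is not reproduced in this text, so this is our reading of the prose, with the greatest-fixed-point computation of \(F^{p}_{G}\) on the residual game (`solve_gfp`) and the attractor computation (`attract`) passed in as hypothetical parameters:

```python
def psolC(nodes, solve_gfp, attract):
    # Our reading of the psolC loop: for each player p in turn, compute
    # the greatest fixed point of F^p_G on the residual game; if it is
    # non-empty, remove its p-attractor and record it as won by p.
    won = {0: set(), 1: set()}
    residual = set(nodes)
    changed = True
    while residual and changed:
        changed = False
        for p in (0, 1):
            X = solve_gfp(residual, p)
            if X:
                A = attract(residual, p, X)
                won[p] |= A
                residual -= A
                changed = True
    return won, residual   # residual: the part psolC leaves unsolved
```

The accumulation into `won` makes explicit the bookkeeping that the text describes as routine; soundness of removing whole attractors is the closure property discussed above.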

We show soundness and upper bounds on the complexity of psolC:

Theorem 2

Let G be a parity game as above. Then \(\mathtt{psolC} (G)\) runs in time \(O({\mid \! {E}\! \mid }\cdot {\mid \! {\mathsf {C}({G})}\! \mid }\cdot {\mid \! {V}\! \mid }^2)\), space \(O({\mid \! {E}\! \mid }\cdot (1+{\mid \! {\mathsf {C}({G})}\! \mid }))\), and all node sets \({{\mathsf {Attr}}_{{p}}[{G},{X}]}\) it removes from (residual instances of) G are won by player p in the parity game G.

Proof

Since \(\mathsf {W}_{r}^{{p}}({G},{X})\) can be computed in \(O({\mid \! {E}\! \mid }\cdot {\mid \! {\mathsf {C}({G})}\! \mid })\), each fixed-point computation in \(\mathtt{psolC} (G)\) runs in \(O({\mid \! {E}\! \mid }\cdot {\mid \! {\mathsf {C}({G})}\! \mid }\cdot {\mid \! {V}\! \mid })\) as it can have at most \({\mid \! {V}\! \mid }\) iterations. But there can also be at most \(2\cdot {\mid \! {V}\! \mid }\) many such fixed-point computations in total as each subsequent such computation requires that at least one node has been removed from G beforehand.

The upper bound on the space complexity follows since the size of \(G^p_X\) is the dominating factor for space requirements of psolC – larger than the size of G, since there are at most \({\mid \! {E}\! \mid }\cdot (1+{\mid \! {\mathsf {C}({G})}\! \mid })\) many edges in \(G^p_X\), and since there is no need to keep copies of \(G^p_X\) once \(X\cap \mathsf {W}_{r}^{{p}}({G},{X})\) has been computed in psolC.

The remaining soundness claim for partial solver psolC directly follows from Lemma 2 and from the aforementioned fact that winning regions of players in parity games are closed under attractors of those players. The latter also ensures that winning regions of recursive instances of G are winning regions of G.    \(\square \)

It turns out that reachability of (X, p) under parity cannot be solved with memoryless strategies in general, in contrast to the solving of parity games:

Theorem 3

Solving alternating reachability under parity requires finite memory in general.

Proof

It suffices to give an example where this is the case. Recall the simple parity game G from Fig. 1. Let p be 0 and X be \(\{v_0\}\). Then \(\mathsf {W}_{r}^{{0}}({G},{X})\) equals V and so player 0 wins all nodes for reachability of (X, 0) under parity. But she cannot realize this with a memoryless strategy \(\sigma _0\), for either \(\sigma _0(v_1)\) would equal \(v_2\) (and then player 1 can detract from X by moving from \(v_2\) back to \(v_1\)) or \(\sigma _0(v_1)\) would have to equal \(v_0\) (in which case player 1 can move from \(v_0\) to \(v_1\) to generate an infinite play in which all prefixes that reach X have odd color 3). Let the strategy \(\sigma '_0:V^*\cdot \{v_1\} \rightarrow V\) be defined, for all w in \(V^*\), by \(\sigma '_0(w\cdot v_1) = v_0\) if \(v_2\) is in w; and \(\sigma '_0(w\cdot v_1) = v_2\) otherwise. Strategy \(\sigma '_0\) has finite memory and is winning on all nodes for reachability of (X, 0) under parity: \(\sigma '_0\) ensures that \(v_0\) is reached, and that \(v_0\) is reached only after \(v_2\) has been reached. This means that the minimal color encountered until X is reached equals 2, a win for player 0.    \(\square \)
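Strategy \(\sigma '_0\) needs just one bit of memory: whether \(v_2\) has already occurred in the history. As a function of the play history (matching the type of strategies fixed in Sect. 2), it can be sketched as:

```python
def sigma0_prime(history):
    # sigma'_0(w . v1) = v0 if v2 occurs in w, and v2 otherwise
    # (the finite-memory strategy from the proof of Theorem 3)
    *w, v = history
    assert v == "v1"   # sigma'_0 is only queried at player 0's node v1
    return "v0" if "v2" in w else "v2"
```

On the history \(v_0v_1\) it moves to \(v_2\); once \(v_2\) has been seen, as in \(v_0v_1v_2v_1\), it moves to \(v_0\), ensuring minimal color 2 on the first reach of X.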

The implication of Theorem 3 is that even though psolC identifies winning regions in the parity game, the strategies that it allows us to construct in general require memory. At the same time, we know that there exist memoryless strategies for both players from their respective winning regions in the parity game.

Although finite memory is required in general, we note that \(Y = {V}^{0}\) is the greatest fixed point of \(F^{0}_{G}\) for G from Fig. 1, and that the memoryless strategy \(\sigma _0\) above is winning for \(\mathsf {W}_{r}^{{0}}({G},{Y}) = V\). This raises the question of whether non-empty greatest fixed points of \(F^{p}_{G}\) ever require corresponding winning strategies with finite memory or whether these strategies can always be memoryless. The contrast is also apparent in the derived games \(G^0_X\) and \(G^0_Y\) depicted in Figs. 2 and 3, respectively. We formulate this problem as a research question:

Question 1

Is there a parity game G and player p where the greatest fixed point X of \(F^{p}_{G}\) is non-empty and player p does not have memoryless strategies for witnessing that X is contained in \(\mathsf {W}_{r}^{{p}}({G},{X})\)?

If finite memory is not needed for greatest fixed points of \(F^{p}_{G}\), then psolC might be able to compute memoryless winning strategies for parity game G. Let us next give an example of how psolC may solve games completely:

Example 2

Let us consider the execution of \(\mathtt{psolC} (G)\) for parity game G in Fig. 4 (for the acceptance condition as in Definition 1). Initially, \(p=0\) and \(X= \{v_0,v_1,v_3,v_5\} = {V}^{0}\). Then psolC detects in \(\mathtt{fixedPoint}\) that X is the greatest fixed point of \(F^{0}_{G}\) and removes its 0 attractor in G (which is all of V) from G. Thus psolC completely solves G and recognizes that all nodes are won by player 0. Note that X is a fixed point of \(F^{0}_{G}\) since \(\mathsf {W}_{r}^{{0}}({G},{X})\) equals V: (i) player 0 wins node \(v_0\) as player 1 can only move to \(v_3\) or \(v_5\) from there and so reach X with minimal color 0; (ii) player 0 wins node \(v_1\) since player 1 can only move to \(v_0\) from there and so reach X with minimal color 0; (iii) player 0 wins node \(v_2\) since player 1 can only generate the prefix \(v_2v_1v_0\) from there and so get minimal color 0 for this second reach of X; (iv) player 0 wins \(v_3\) since player 1 can either move from there to \(v_2\) and so generate a prefix \(v_3v_2v_1v_0\) with minimal color 0 for her second reach of X, or player 1 can move to \(v_4\) from where she can only move to X with minimal color 2 for the first reach of X; (v) player 0 wins \(v_5\) for symmetric reasons; and (vi) player 0 wins \(v_4\) because player 1 can only reach X from there with minimal color 2 on the first reach of X.

Solver psolC is partial in that it may not solve even a single node in a parity game. We illustrate this with an example:

Example 3

Figure 6 shows a parity game G for which psolC solves no nodes at all. For \(p=0\), set X is initially \(V\setminus \{v_1\}\). (i) Node \(v_0\) is lost by player 0 since player 1 can move from there into the cycle \((v_2v_6)^\omega \) with minimal color 1. Player 0 wins all other nodes in X. Therefore, the next value of X equals \(\{v_2,v_3,v_4,v_5,v_6\}\). (ii) Now, nodes \(v_4\) and \(v_5\) are lost by player 0, as player 1 can move from them to node \(v_0\) (which is no longer in X) and then play as for the initial X to get minimal color 1. Player 0 wins all other nodes in X. Therefore, the next value of X equals \(\{v_2,v_3,v_6\}\). (iii) Next, node \(v_3\) is lost by player 0, as player 1 can move from there directly to node \(v_4\) (which is no longer in X) and then enter the cycle \((v_0v_5)^\omega \) and so avoid X altogether. Player 0 wins nodes \(v_2\) and \(v_6\) though. Therefore, the next value of X equals \(\{v_2,v_6\}\). (iv) Now, player 0 loses \(v_2\) as player 1 can avoid reaching that node again from \(v_2\). Player 0 still wins node \(v_6\). Thus, the next value of X equals \(\{v_6\}\). (v) Finally, player 1 can avoid reaching X again from node \(v_6\) and so wins \(v_6\), making X empty.

Clearly, \(F^{1}_{G}\) also computes an empty greatest fixed point, since all nodes in parity game G are won by player 0. The inability of psolC to solve even a single node in G seems to stem from the fact that the acceptance condition for \(\mathsf {W}_{r}^{{0}}({G},{X})\) captures a weak parity acceptance condition [1] and not a parity acceptance condition.

Fig. 6. Parity game G, owned by player 1, won by player 0, and where psolC cannot solve even a single node

We could extend the type of \(F^{p}_{G}\) to be \({\mathbb {P}({V})}\rightarrow {\mathbb {P}({V})}\). The proofs for monotonicity and for fixed points being won by player p in the parity game G would still carry through. It may be of interest to compare psolC with a variant of psolC based on greatest fixed points for this extended type of \(F^{p}_{G}\): that variant may run slower in practice but may solve more nodes in G. However, it would still be a partial solver, as can be seen from Example 3: for the version of psolC based on this extended type, both \(v_0\) and \(v_1\) would be removed from the initial \(X = V\) in the first iteration, and so this variant would still compute only empty fixed points.

5 Fatal Attractors

Our work in [13] defined and studied monotone attractors and built partial solvers out of them. Let X be a non-empty node set of G in which all nodes have color c, and set p to be \(c\,\%\, 2\). For such X and subsets A of V, the monotone attractor \({\mathsf {MA}({X})}\) of [13] is defined as follows:

$$\begin{aligned} {{{\mathsf {mpre}}}_{p}({A}, {X}, {c})} &= \{ v\in V_{p}\mid c(v)\ge c \wedge v.E\cap (A\cup X)\ne \emptyset \} \\ &\quad {}\cup \{v\in V_{1-p}\mid c(v) \ge c \wedge v.E\subseteq A\cup X\}\\ {\mathsf {MA}({X})} &= \mu Z.\,{{{\mathsf {mpre}}}_{p}({Z}, {X}, {c})} \end{aligned}$$
(3)

where \(\mu Z.f(Z)\) denotes the least fixed point of a monotone function \(f:{\mathbb {P}({V})}\rightarrow {\mathbb {P}({V})}\). It follows that \({\mathsf {MA}({X})}\) is the set of nodes in G from which player p can attract to X whilst avoiding nodes of color less than c. In [13], we called such an X fatal if all of X is in that attractor (i.e. when \(X\subseteq {\mathsf {MA}({X})}\)). In Theorem 2 in [13], we showed that all such fatal attractors are won by player p.
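Definition (3) can be computed directly by a least fixed-point iteration. The sketch below assumes a simple encoding of games (a set of player-p nodes, a successor dictionary for v.E, and a color dictionary); the function name and this encoding are ours, not those of [13]:

```python
def monotone_attractor(nodes_p, edges, color, X, c):
    """Least fixed point Z = mpre_p(Z, X, c) of equation (3).

    nodes_p: the nodes owned by player p (all other nodes belong to 1 - p)
    edges:   dict mapping each node v to its successor set v.E
    color:   dict mapping each node to its color c(v)
    X:       target node set, all of whose nodes have color c, with p = c % 2
    """
    Z = set()
    changed = True
    while changed:
        changed = False
        for v, succ in edges.items():
            if v in Z or color[v] < c:
                continue  # nodes of color < c never enter the attractor
            target = Z | X
            if v in nodes_p:
                ok = bool(succ & target)  # player p needs one edge into Z or X
            else:
                ok = succ <= target       # player 1-p must be forced into Z or X
            if ok:
                Z.add(v)
                changed = True
    return Z
```

For example, in a small game with edges x → {x}, a → {x, b}, b → {a}, colors c(x) = 2, c(a) = 4, c(b) = 1, and player-0 nodes {x, a}, the set X = {x} with c = 2 has MA(X) = {x, a}, so X ⊆ MA(X) and X is fatal.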

To relate this to our work in this chapter, an infinite play \(\pi \) would be won in this monotone attractor game by player p iff there is some \(j > 0\) with \(\pi _j\) in X and \(c(\pi _i) \ge c\) for all i with \(0\le i < j\); so X can be reached on \(\pi \) with minimal color c at \(\pi _j\). This implies that all such fatal attractors X with node color c are fixed points of \(F^{p}_{G}\) and are therefore contained in the greatest fixed point of \(F^{p}_{G}\). We can use this to prove that psolC is more effective than the partial solver psolB defined in [13]:

Theorem 4

Let psolB be the partial solver defined in Fig. 7 and let G be a parity game. The call \(\mathtt{psolC} (G)\) decides the winner of all nodes for which call \(\mathtt{psolB} (G)\) decides a winner.

Fig. 7. Partial solver psolB from [13] (figure is a reproduction of Fig. 3 in [13])

Proof

For each player p, the acceptance condition for monotone attractors discussed above implies that all fatal attractors for that player in G (node sets X of some color c with parity p such that \(X\subseteq {\mathsf {MA}({X})}\)) are contained in the greatest fixed point Z of \(F^{p}_{G}\). By Theorem 5 in [13], the order of fatal attractor detection does not affect the output of partial solver psolB, so we may assume that psolB detects all its fatal attractors for player p in G itself; each such X is then contained in the greatest fixed point Z of \(F^{p}_{G}\). By monotonicity of attractors, the p-attractor \({{\mathsf {Attr}}_{{p}}[{G},{X}]}\) of each such X is then also contained in the p-attractor \({{\mathsf {Attr}}_{{p}}[{G},{Z}]}\) of Z. Thus, all nodes decided by \(\mathtt{psolB} (G)\) are also decided by \(\mathtt{psolC} (G)\).    \(\square \)

In [13], we also studied a more precise but more complex partial solver psolQ. Although the design of psolQ has superficial similarities to that of psolC, the latter can be more precise: as noted in [13], psolQ does not solve even a single node of the parity game in Fig. 8, whereas psolC solves this game completely.

Fig. 8. A 1-player parity game that psolC solves completely (as \(\{v_0,v_4,v_7\}\) is the greatest fixed point of \(F^{0}_{G}\)) but for which psolQ in [13] solves no nodes (figure is Fig. 5 in [13])

6 Experimental Results

By Theorem 4, we know that psolC completely solves all games that psolB solves completely. From [13], we know that psolB completely solves many structured benchmarks. There is therefore little value in running psolC over these structured benchmarks again, and so we focused our experimental effort here on random parity games.

We now report on the experiments we conducted on randomly generated games. The aims of these experiments were

  1. to experimentally confirm that psolC solves all nodes that psolB solves, as proved in Theorem 4;

  2. to compare the running times of psolC and psolB over a large set of random games;

  3. to determine game configurations for which psolC does not really solve more than psolB does.

All our experiments were conducted on a test server with two Intel® E5 CPUs (six cores each, running at 2.5 GHz) and 48 GB of RAM. Experiments were grouped into game configurations; we generated \(100,\! 000\) games for each configuration and ran psolB and psolC against them. We also used Zielonka’s solver [29] in regression tests to ensure that psolB and psolC correctly under-approximate winning regions; all of these tests passed.

The game configurations used are shown in the “Game Mode” column of Fig. 9. Each such mode is denoted by xx-yy-aa-bb, where xx is the number of nodes in a game. The owner (player 0 or 1) of each node is chosen independently at random, the color of each node is chosen uniformly from the set \(\{0,1,\dots ,yy\}\), and each node has between aa and bb outgoing edges to randomly selected successors in the game.
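A generator for such configurations can be sketched as follows (the function name and representation are ours; we also sample distinct successors per node, which is one possible reading of “randomly selected successors”):

```python
import random

def random_parity_game(xx, yy, aa, bb, seed=None):
    """Generate one random game in configuration xx-yy-aa-bb: xx nodes,
    owners chosen independently at random, colors uniform in {0, ..., yy},
    and between aa and bb outgoing edges per node to random successors."""
    rng = random.Random(seed)
    owner = {v: rng.choice((0, 1)) for v in range(xx)}
    color = {v: rng.randint(0, yy) for v in range(xx)}
    edges = {v: set(rng.sample(range(xx), rng.randint(aa, bb)))
             for v in range(xx)}
    return owner, color, edges
```

Since aa is at least 1 in all configurations, generated games have no dead ends.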

We now summarize key facts that we can observe from the experimental results shown in Fig. 9:

  • psolB has never solved more nodes than psolC, experimentally confirming Theorem 4 (column #10).

  • For games with low edge density (i.e., when aa-bb equals 1-5), psolC solves more nodes than psolB for around 10 % of the games (#9).

  • For games with higher edge density (i.e., when aa-bb differs from 1-5), psolC does not appear to solve more nodes than psolB (#9).

  • psolC takes significantly more time to execute than psolB for high edge density games (#2).

  • Our experimental results suggest that the elapsed time of psolC increases as the color cap yy increases, whereas we do not observe a similar increase for psolB (#2 and #3).

Fig. 9. Our experimental results for the partial solver psolC. The legend for the 10 data columns is given in Table 1.

Table 1. Legend for experimental data shown in Fig. 9: G’\(_B\) represents the number of games not completely solved by psolB. Similarly, G’\(_C\) represents the number of games not completely solved by psolC.

We note that these experiments took quite some time to complete. For example, the total running time of psolC for these \(800,\!000\) random games was more than 28 days (if converted to calendar time). The experimental data we collected suggest that the comparison between psolB and psolC is bimodal on random games: either psolC is no more effective than psolB on a given game mode, or it appears to be more effective on about \(10\,\%\) of games for a given game mode.
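For orientation, the totals reported above imply an average of roughly three seconds of psolC solver time per random game:

```python
# Back-of-the-envelope check on the reported totals:
# more than 28 days of psolC solver time over 800,000 random games.
total_seconds = 28 * 24 * 3600   # 28 days expressed in seconds
games = 800_000
avg = total_seconds / games      # about 3 seconds of solver time per game
print(round(avg, 3))
```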

The partial solver psolC may therefore have more theoretical than practical value. However, a staging of psolB and psolC may work reasonably well in practice: on an input game G, first run psolB to obtain the residual game \(G'\), and then run psolC on \(G'\) only when \(G'\) is not empty.
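Such a staging is straightforward to express once partial solvers are viewed as functions from a game to a pair of decided nodes and a residual game; this interface is our assumption for the sketch below:

```python
def staged_solver(psolB, psolC, G):
    """Run psolB first, then psolC on the non-empty residual game.
    Partial solvers are modelled here (an assumption of this sketch) as
    functions returning a pair (decided, residual) of node sets."""
    decidedB, residual = psolB(G)
    if not residual:
        return decidedB, residual       # psolB already solved G completely
    decidedC, residual = psolC(residual)
    return decidedB | decidedC, residual
```

This runs the cheaper psolB unconditionally and pays for psolC only on games that psolB could not solve completely.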

7 Other Related Work

Some easy static analyses for parity games have become part of the folklore of how to preprocess parity games. For example, the tool PGSolver can eliminate self-loops (nodes v with (v, v) in E) and dead ends (nodes v for which there is no w with (v, w) in E) [12]. The latter can be seen as justification for defining parity games not to have dead ends, as we have done in this chapter.
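A self-loop preprocessing step in this spirit can be sketched as follows; the exact rules below (the owner wins a self-loop node of matching parity, and loses when a mismatched self-loop is the only edge) are a folklore reading of ours, not a description of PGSolver's actual code:

```python
def eliminate_self_loops(owner, color, edges):
    """Sketch of self-loop preprocessing. A node v with edge (v, v) is won
    by its owner p if c(v) has parity p, since p can loop at v forever with
    minimal color c(v); if the self-loop is v's only edge and c(v) has
    parity 1 - p, the owner is forced to loop and so loses v."""
    won = {0: set(), 1: set()}
    for v, succ in edges.items():
        if v in succ:                    # v has a self-loop
            p = owner[v]
            if color[v] % 2 == p:
                won[p].add(v)            # owner wins by looping at v
            elif succ == {v}:
                won[1 - p].add(v)        # owner is forced into a losing loop
    return won
```

A full preprocessor would additionally remove the attractors of these decided nodes from the game.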

In [17], progress measures are defined and recognized as representations of winning strategies. A monotone function over a complete lattice is then defined such that pre-fixed points of that function capture progress measures. A least fixed-point computation can therefore compute the winning region and a winning strategy for a chosen player. This algorithm has exponential worst-case running time, since the complete lattice may be exponentially larger than the parity game. However, the algorithm runs in polynomial space, unlike some other known algorithms for solving parity games.

Our work relates to research on the descriptive complexity of parity games. In [6], it is investigated whether the winning regions of players in parity games can be defined in suitable logics. We mention two results from this paper: it is shown that this is indeed possible for guarded second-order logic (even for infinite game graphs with an unbounded number of colors); and for an arbitrary finite game graph G (the setting of our chapter), it is proved that least fixed-point logic can define the winning regions of G iff these winning regions are computable in polynomial time.

In [14], a transformation is studied that can map a partial solver \(\rho \) for parity games to another partial solver \({\mathsf {lift}}(\rho )\) that first applies \(\rho \) until it has no effect on the residual game. Then, \({\mathsf {lift}}(\rho )\) searches for some node v in \(V_p\) with more than one outgoing edge such that the commitment to one such edge (v, w) (i.e. the removal of all other edges outgoing from v) would make partial solver \(\rho \) discover that node v is won by player \(1-p\) in the modified game. If so, it is sound to remove edge (v, w) from G, and \({\mathsf {lift}}(\rho )\) then tries again until no such effect can be observed for either player p. It was proved in [14] that \({\mathsf {lift}}(\rho )\) is sound if \(\rho \) is sound, idempotent, and satisfies a locality principle; and it was shown that psolB satisfies these properties.
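The edge-pruning core of this transformation can be sketched generically. Games are modelled as an (owner, edges) pair and rho is assumed to return a dict mapping the nodes it decides to their winners; this interface, and the omission of the initial saturation phase of \(\rho \), are simplifications of ours, not the formulation of [14]:

```python
def lift(rho):
    """Sketch of the edge-pruning loop of lift(rho): whenever committing a
    node v of player p to a single edge (v, w) lets rho discover that v is
    won by player 1 - p, that edge is removed from the game."""
    def lifted(owner, edges):
        edges = {v: set(ws) for v, ws in edges.items()}  # private copy
        changed = True
        while changed:
            changed = False
            for v, ws in edges.items():
                p = owner[v]
                for w in list(ws):
                    if len(ws) <= 1:
                        break           # v must keep at least one edge
                    committed = dict(edges)
                    committed[v] = {w}  # commit v to the single edge (v, w)
                    if rho(owner, committed).get(v) == 1 - p:
                        ws.discard(w)   # (v, w) loses for p, so remove it
                        changed = True
        return rho(owner, edges), edges
    return lifted
```

For instance, with a toy rho that decides node a for player 1 exactly when a is committed to its self-loop, lifted removes the self-loop edge (a, a) from a game where a also has an edge to b.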

8 Conclusions

In this chapter, we studied how one may define static analyses of parity games that run in polynomial time and space and compute parts of the games’ winning regions. In particular, the quality of such a static analysis could then be measured by how often it computes winning regions completely, or by what percentage of the winning region it computes across a range of random and structured benchmarks. We developed firm foundations for designing such static analyses, using a novel kind of game derived from parity games: reachability under parity. The intuition of such a game is that player p can reach a node set X whilst ensuring that the minimal color encountered en route has parity p.

We showed that such new reachability games are determined, demonstrated how their solution can be implemented efficiently, and used this notion of game to define monotone functions over parity games – one for each player of the parity game. The greatest fixed points of these functions were proved to be contained in the winning region of the corresponding player in the parity game. This insight led us to design a partial solver psolC, and its experimental evaluation demonstrated that it is a powerful static analysis of parity games that can completely solve many types of random and structured benchmarks. Theoretical analysis also showed that these monotone functions generalize, in a more canonical and less ad hoc manner, the work on fatal attractors that we had conducted previously [13]. In particular, we proved that psolC is more effective than the partial solver psolB of [13], which had performed best in practice.

The decision problem for parity games, whether a given node is won by a given player, is in UP\({}\cap {}\)coUP [16] and so contained in NP\({}\cap {}\)coNP. It is therefore perhaps no great surprise that all known algorithms that completely compute such winning regions run in worst-case exponential or sub-exponential time in the size of these games. One may therefore think of our chapter as taking a complementary approach to the longstanding open problem of the exact complexity of said decision problem for parity games: designing static analyses that run in polynomial time (relatively easy to do) and that provably compute the exact winning regions of all parity games (likely very hard to do under these constraints of efficient static analysis). We hope that the reader may find this approach to be of genuine interest and may pursue it further.