1 Introduction

The emergence of parity games has been a complex and fascinating process. The roots of parity games go back to the works of Büchi [6, 7], Rabin [26], Gurevich and Harrington [15] and McNaughton [23]. Numerous papers have been written on the subject, many results have been presented in conferences, and research networks and workshops have been organised. The general interest in parity games stems from the following reasons, all of which are intimately interconnected:

1.

    Complexity-theoretic status: The parity games problem is known to lie in the complexity classes NP and coNP. It is among the few algorithmic problems that belong to NP \(\cap \) coNP but are not known to be in P.

2.

    Connections to modal \(\mu \)-calculus: Kozen introduced the modal \(\mu \)-calculus [20], an extension of propositional modal logic. The model checking problem consists of designing an algorithm that, given a formula \(\phi \) and a labeled transition graph G, decides if G satisfies \(\phi \). It turns out that the model checking problem is polynomial time equivalent to solving the parity games problem [12].

3.

    Connections to computer-aided verification: The modal \(\mu \)-calculus can be used as a formal specification language for expressing properties of reactive programs. Hence, many model checking problems in computer-aided verification can be effectively reduced to the parity games problem [4, 5, 12].

4.

    Connections to automata and logic: There are deep historical and mathematical connections between automata, two-player games, the modal \(\mu \)-calculus, and parity games [28,29,30]. A recent example is [9], which implements the Kupferman-Vardi algorithm from [21]. The algorithm solves the parity games problem via a reduction to the emptiness problem of alternating parity automata.

To define parity games we need a finite directed bipartite graph \(G=(V, E)\) which is given by its bipartite partition \(V=V_0\cup V_1\) and the set of edges \(E\subseteq V_0\times V_1 \cup V_1\times V_0\). We postulate one condition on the set E of edges: each \(v\in V\) has at least one outgoing edge. Elements \(v\in V\) are called positions of the game.

Definition 1

An arena is given by the pair (G, p), where G is a bipartite graph as above and \(p:V\rightarrow \omega \setminus \{0\}\) is a function called the priority function. For each \(v\in V\), the value p(v) is called the priority of v.

Given an arena (G, p), there are two players: Player 0 and Player 1. Positions in \(V_0\) are called Player 0 positions, and positions in \(V_1\) are called Player 1 positions. We are also given the starting position \(s\in V\), say \(s\in V_0\). Player 0 and Player 1 play the parity game as follows. Imagine a token is placed on s. Player 0 moves the token along an edge outgoing from \(v_0=s\) to a position \(v_1\). Player 1 now moves the token along an edge outgoing from \(v_1\) to a position \(v_2\). Player 0 moves the token along an edge outgoing from \(v_2\) to a position \(v_3\). This continues forever. If the start position s is in \(V_1\), then Player 1 makes the first move. Thus, the players produce an infinite path in G called a play:

$$ \rho =v_0, v_1, v_2, v_3, \ldots , v_i, v_{i+1},\ldots $$

Definition 2

Player 0 wins the play \(\rho \) if the maximal priority of positions that appear infinitely often in \(\rho \) is even. Otherwise, Player 1 wins the play.
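As a small illustration of Definition 2, once we know which cycle of priorities a play repeats forever, the winner is immediate. A minimal Python sketch (the function name and input format, a list of cycle priorities, are our own):

```python
def play_winner(cycle_priorities):
    """Winner of a play that eventually repeats the given cycle forever.

    The priorities occurring infinitely often are exactly those on the
    cycle, so Player 0 wins iff the maximal cycle priority is even.
    """
    return 0 if max(cycle_priorities) % 2 == 0 else 1

# Example: a play that loops through priorities 3, 4, 1 forever.
assert play_winner([3, 4, 1]) == 0  # maximal priority 4 is even
assert play_winner([3, 1]) == 1     # maximal priority 3 is odd
```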

Clearly, every play \(\rho \) of the parity game played on (G, p) is won by either Player 0 or Player 1. An important fact, however, is this: one of the players always has a winning strategy. In other words, one of the players has a strategy \(f_s\) (the index indicates the starting position) such that all plays consistent with \(f_s\) are won by that player. This fact is known as determinacy of parity games.

Determinacy of parity games is not surprising. For instance, Martin’s determinacy theorem states that all games with Borel winning conditions are determined [22], and the winning condition for parity games is a Borel condition. Therefore, we can partition the set V of positions of the arena (G, p) into sets \(W_0\) and \(W_1\):

$$ W_0=\{s\in V \mid \text {Player 0 wins the game starting at } s\}, \qquad W_1=\{s\in V \mid \text {Player 1 wins the game starting at } s\}. $$

Importantly, parity games are effectively determined: there exists an algorithm that, given a parity game (G, p) and a starting position s, tells us which of the players is the winner of the game. Thus, effective determinacy gives us the following parity games problem.

Parity Games Problem: Design an algorithm that, given a parity game (G, p) and a starting position s, decides which of the players wins the game starting at s.

There are numerous algorithms that solve parity games. Emerson and Jutla [12] showed that the problem is in NP \(\cap \) coNP, and Jurdzinski [17] improved this bound to UP \(\cap \) coUP. Jurdzinski also provided the small progress measures algorithm for solving parity games. Petersson and Vorobyov [25] devised a subexponential randomised algorithm. Later, Jurdzinski, Paterson and Zwick [19] designed a deterministic algorithm of similar complexity. McNaughton [23] showed that the winner of a parity game can be determined in time \(n^{c+O(1)}\), where c is the maximal priority of the positions. There is also Zielonka’s recursive algorithm [31], which is considered useful in practice. Another interesting algorithm is due to Schewe [27]. There are also various subclasses of parity games, or their relatives, which can be solved in polynomial time [2, 3, 14, 16, 24]. The author, in collaboration with Calude, Jain, Li and Stephan [8], provided an algorithm that runs in time \(O(n^{ \log (c)+3})\). In terms of running time, this is currently the most efficient algorithm that solves the parity games problem. The goal of this paper is to present the algorithm in a somewhat informal way, with an emphasis on ideas rather than formal details.

2 Memoryless Winning Strategies

A crucial property of parity games is that the winner has a memoryless winning strategy. This is known as memoryless determinacy of parity games. A memoryless strategy depends only on the current position of the play, and not on the history of the play at all. Formally, a memoryless strategy for Player \(\epsilon \) is given by a function \(f:V_{\epsilon }\rightarrow V_{1-\epsilon }\) such that \((v, f(v))\in E\) for all \(v \in V_{\epsilon }\). There are many proofs of memoryless determinacy; see for instance [1, 12, 23, 31]. Below we provide a simple, non-constructive proof of this result, the simplest known to the author.
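One consequence of memorylessness is that when both players fix memoryless strategies, the resulting play is eventually periodic, so its winner can be computed directly. A sketch under our own naming conventions (strategies and priorities given as dictionaries):

```python
def play_outcome(f0, f1, p, V0, s):
    """Play the game when both players follow memoryless strategies.

    f0, f1: dicts mapping positions to chosen successors; p: priorities;
    V0: Player 0's positions; s: starting position.  Once a position
    repeats, the play is periodic and the winner is read off the cycle.
    """
    path, where = [], {}
    v = s
    while v not in where:
        where[v] = len(path)
        path.append(v)
        v = f0[v] if v in V0 else f1[v]
    cycle = path[where[v]:]                      # the part repeated forever
    return 0 if max(p[u] for u in cycle) % 2 == 0 else 1

# Two positions with forced moves: the cycle visits priorities 2 and 1.
assert play_outcome({0: 1}, {1: 0}, {0: 2, 1: 1}, {0}, 0) == 0
assert play_outcome({0: 1}, {1: 0}, {0: 2, 1: 3}, {0}, 0) == 1
```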

Theorem 3

(Memoryless Determinacy). The winner in any parity game (G, p) has a memoryless winning strategy.

Proof

Let \(s\in V_1\) be a starting position such that the set \(X=\{u\mid (s,u) \in E\}\) is not a singleton. Let \(X_1\), \(X_2\) be a non-trivial partition of X. We split (G, p) into two parity games \((G_1, p)\) and \((G_2,p)\), where \( G_1\) is the same as G except that at s Player 1 can only move into \(X_1\), and \(G_2\) is the same as G except that at s Player 1 can only move into \(X_2\).

We claim that Player 0 wins G starting at s iff Player 0 wins both \(G_1\) and \(G_2\) starting at s. Indeed, if Player 0 wins G, then clearly Player 0 wins both \(G_1\) and \(G_2\). For the other direction, assume that Player 0 wins both \(G_1\) and \(G_2\) with winning strategies \(f_1\) and \(f_2\), respectively. Below we describe a winning strategy f for Player 0 in (G, p).

Let \(\gamma =v_0, v_1, \ldots , v_j\) be a finite play in G, and let \(v_i, v_{i+1},\ldots , v_{j}\) be a segment of the play such that \(i<j\), \(v_i=s\) and no position among \(v_{i+1}\), ..., \(v_{j}\) equals s. Call such segments proper segments. If \(v_{i+1}\in X_1\), then we say that the segment is in \(G_1\); otherwise we say that the segment is in \(G_2\). Consider \(\alpha \) obtained from \(\gamma \) by removing all proper segments that occur in \(G_2\). Likewise, let \(\beta \) be obtained from \(\gamma \) by removing all proper segments that occur in \(G_1\). The strategy f is now defined as follows: f follows \(f_1\) by setting \(f(\gamma )=f_1(\alpha )\) if the last proper segment occurs in \(G_1\), and f follows \(f_2\) by setting \(f(\gamma )=f_2(\beta )\) if the last proper segment occurs in \(G_2\). It is clear that f is a winning strategy for Player 0 in the game (G, p).

Apply now the splitting process above to the arenas \((G_1, p)\) and \((G_2,p)\), and continue until we reach games \((G',p)\) such that the set of positions of \(G'\) coincides with the set of positions of G, the set \(E'\) of edges satisfies \(E'\subseteq E\), and Player 1 has exactly one outgoing edge from every position v in \(V_1\). Let us call such games solitary games for Player 1.

We conclude that Player 1 wins the parity game starting at s iff Player 1 wins a solitary game starting at s. Each solitary game for Player 1 induces a memoryless strategy for Player 1.

One applies the arguments above, including the notion of splitting, to Player 0. This implies that Player 0 wins the parity game from the start position s if and only if Player 0 wins one of Player 0’s solitary games starting at position s.

   \(\square \)

There is a polynomial time algorithm that, given a memoryless strategy for Player \(\epsilon \) and position s, detects if the strategy is a winning strategy from s. Hence, we have:

Corollary 4

The parity games problem is in NP \(\cap \) coNP.    \(\square \)

3 Reachability Games

Reachability games are well-studied. We mention these games since we reduce the parity games problem to the reachability games problem.

Just like parity games, a reachability game is given by Player 0 and Player 1, a bipartite directed graph \(G=(V, E)\), a starting position \(s\in V\), and Player 0’s target set \(T\subseteq V\) of positions. Starting at s, the players move alternately and produce a path \(\rho =v_0, v_1, \ldots , v_i, \ldots \), where \(v_0=s\). Note that if \(s\in V_0\) then Player 0 starts the play, and otherwise Player 1 starts the play. We say that Player 0 wins the play \(\rho \) if \(\rho \) meets the target set T, that is, there is a position \(v_i\) in the play such that \(v_i\in T\).

Theorem 5

(Folklore: Determinacy of Reachability Games). There is a linear time algorithm that, given a reachability game, finds the winner of the game.

Proof

Consider a reachability game as described above. The idea is to compute, inductively, the positions from which Player 0 can force the play into the target set T. Clearly, no move is required for Player 0 to win the game if it starts at a position in T. So, we set \(T_0=T\). Assume that we have defined the set \(T_i\). Intuitively, \(T_i\) consists of all v such that, starting at v, Player 0 can force the play into T in at most i moves. By induction, we also have \(T_0\subseteq T_1\subseteq \ldots \subseteq T_i\).

To build \(T_{i+1}\), put all v in \(T_i\) into \(T_{i+1}\) and for each \(v\not \in T_i\) proceed as follows. If \(v\in V_0\) and there is an edge \((v,u)\in E\) with \(u\in T_i\) then we put v into \(T_{i+1}\). Also, if \(v\in V_1\) and \(u\in T_i\) for all \((v,u)\in E\) then we put v into \(T_{i+1}\). This defines \(T_{i+1}\). Clearly Player 0 wins the reachability game starting at any position \(v\in T_{i+1}\).

Thus, we have the sequence \(T_0\subseteq T_1 \subseteq T_2 \subseteq \ldots \). Let X be the union of this chain. Player 0 wins the game from s if and only if \(s\in X\). The algorithm above can be implemented so that it runs in linear time.    \(\square \)
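The chain \(T_0\subseteq T_1\subseteq \ldots \) can be computed in linear time with a backward search that keeps, for each Player 1 position, a counter of successors not yet known to lie in the chain. A sketch of this idea in Python (all names are ours):

```python
from collections import deque

def attractor(V0, V1, E, T):
    """Set X of positions from which Player 0 forces the play into T.

    V0, V1: sets of Player 0 / Player 1 positions; E: dict mapping each
    position to its list of successors.  Linear in |V| + |E|: each edge
    is inspected a constant number of times.
    """
    X = set(T)
    remaining = {v: len(E[v]) for v in V1}   # successors of v not yet in X
    preds = {v: [] for v in E}
    for v, succs in E.items():
        for u in succs:
            preds[u].append(v)
    queue = deque(X)
    while queue:
        u = queue.popleft()
        for v in preds[u]:
            if v in X:
                continue
            if v in V0:              # Player 0 needs one edge into X
                X.add(v)
                queue.append(v)
            else:                    # Player 1 must have all edges into X
                remaining[v] -= 1
                if remaining[v] == 0:
                    X.add(v)
                    queue.append(v)
    return X

# With edges a->b, a->t, b->a, t->t and target {t}, Player 0 (owning a, t)
# can always move from a straight into t, so every position lies in X.
assert attractor({'a', 't'}, {'b'},
                 {'a': ['b', 't'], 'b': ['a'], 't': ['t']},
                 {'t'}) == {'a', 'b', 't'}
```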

4 A Quasipolynomial Time Algorithm for Parity Games

For this section, let us fix an arena (G, p) of a parity game with n positions. Let c be the maximal priority, that is, the maximal value in the range of the priority function p. Recall that \(p(v)>0\) for all \(v\in V\). We can also always assume that \(p(v)\le n\) for all \(v\in V\). For this section we postulate that all positions in \(V_0\) have even priorities (that is, p(v) is even for all \(v\in V_0\)), and all positions in \(V_1\) have odd priorities (that is, p(v) is odd for all \(v\in V_1\)). This is an acceptable postulate since every parity game can be reduced in linear time to a parity game where priorities of Player 0 positions are even and priorities of Player 1 positions are odd. The aim in this section is to reduce the parity games problem to solving reachability games.

4.1 A Naive Reduction

The winning condition in the parity game for a given play \(\rho \) is an infinitary condition: one needs to consider all positions in the play \(\rho \) that appear infinitely often and then consider the parity of the maximal priority among these positions. If the parity is even, then Player 0 wins the play; otherwise Player 1 wins the play. This winning condition can be turned into a finitary definition of winning in a finite game as follows. The players, Player 0 and Player 1, play just as in the parity game on (G, p) and start producing a sequence \(v_0, v_1, \ldots , v_j\) of moves. The players stop as soon as the last position \(v_j\) of the play repeats an earlier position, that is, \(v_j=v_i\) for some \(i<j\). Then Player 0 is declared the winner if the maximal priority that appears in the cycle \(v_i, v_{i+1}, \ldots , v_{j-1}, v_j\) is even. It is easy to see that the game thus defined is a reachability game. A bit more work is needed to prove that this reachability game is equivalent to the original parity game on the arena (G, p). However, the number of positions in this reachability game is bounded from below by n!. Hence, this naive reduction of parity games to reachability games is clearly inefficient.

4.2 Unsuccessful Attempt: Good Positions

This subsection is somewhat intuitive and informal. All the notions here can be made formal. The aim is to build some intuition for the concepts defined in the next subsection.

Let \(\rho =v_0, v_1, \ldots \) be a play in the parity game. Assume that Player 0 wins the play. Then we can write the play \(\rho \) as \(\alpha \beta _1\beta _2 \beta _3\ldots \) such that in each segment \(\beta _i\) the maximal priority is even and appears at the last position of \(\beta _i\). We now attempt to formalise this simple observation. A finite play \(\alpha =v_1,\ldots , v_k\), \(k>1\), is good if the maximum priority in this play is even and is achieved at \(v_k\). A position v is good if Player 0 can guarantee a good play starting at v. It is not too hard to see that all good positions for Player 0 can be computed in linear time. For instance, this can be done using ideas similar to the algorithm that solves the reachability game problem.

The concept of good position can now be iterated. For a given integer \(i>0\), we say that a play \(\alpha \) is i-good if \(\alpha \) can be written as \(\beta _1 \beta _2 \ldots \beta _i\) so that each segment \(\beta _j\), \(1\le j\le i\), is a good play. A position v is i-good if Player 0 can guarantee an i-good play starting at v. Note that any \((i+1)\)-good position is also i-good. Further, a position is \(\omega \)-good if it is i-good for all \(i\ge 1\). Note that one can similarly define the notions of i-good and \(\omega \)-good positions for Player 1.

The following is not too hard to see: Player 0 wins the parity game starting at any of the \(\omega \)-good positions, and the \(\omega \)-good positions can be computed in polynomial time. However, we still run into trouble. Namely, there are non-empty parity games in which neither player possesses an \(\omega \)-good position.

4.3 Ghost Sequences and Statistics

We want to code good plays in a more sophisticated way. Let us fix a play

$$ \rho =v_0, v_1, v_2, \ldots $$

in the parity game (G, p). Recall that c denotes the maximal value in the range of p. Also, recall that \(p(v)>0\) for all \(v\in V\). A segment of the play \(\rho \) is any consecutive sequence of moves in \(\rho \). So, each segment is of the form \(v_i, v_{i+1}, \ldots , v_j\), where \(i\le j\).

Definition 6

A ghost i-sequence is a sub-sequence \(a_1, a_2, a_3, \ldots ,a_{s}\) of the play \(\rho \) such that (1) \(s=2^i\), and (2) in the play between \(a_k\) and \(a_{k+1}\) the maximum priority is even, where \(1\le k< s\).

The subsequence \(a_1, a_2, a_3, \ldots ,a_{s}\) in the definition does not have to be a sequence of consecutive moves in the play \(\rho \). Later we clarify why we call these subsequences ghost sequences. Note that the definition is given for Player 0. Ghost sequences for Player 1 are defined similarly, by simply changing the parity.
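Definition 6 can be checked mechanically for a candidate subsequence given by its indices into the play. A sketch (we read "the play between \(a_k\) and \(a_{k+1}\)" as including both endpoints; this reading, like the names below, is our own):

```python
def is_ghost_sequence(priorities, indices, i):
    """Check Definition 6 on a play given by its list of priorities.

    indices: strictly increasing positions in the play forming the
    candidate subsequence a_1, ..., a_s.  The stretch of the play
    between consecutive picks is taken inclusively (an assumption).
    """
    s = len(indices)
    if s != 2 ** i:
        return False
    if any(x >= y for x, y in zip(indices, indices[1:])):
        return False
    for k in range(s - 1):
        lo, hi = indices[k], indices[k + 1]
        if max(priorities[lo:hi + 1]) % 2 != 0:
            return False
    return True

# Play with priorities 2, 1, 2, 3, 4: indices 0, 2 form a ghost
# 1-sequence (the stretch 2, 1, 2 has even maximum 2).
assert is_ghost_sequence([2, 1, 2, 3, 4], [0, 2], 1)
assert not is_ghost_sequence([2, 1, 2, 3, 4], [1, 3], 1)  # max 3 is odd
assert not is_ghost_sequence([2, 1, 2, 3, 4], [0, 2], 2)  # needs 4 picks
```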

The goal is to design a mechanism that detects ghost \((\log (n)+1)\)-sequences. The reason is that any such sequence has length \(2^{\log (n)+1}>n\) and hence contains repeated positions, and the maximal priority in the play between these repeated positions is even by the definition of a ghost sequence. Detecting ghost \((\log (n)+1)\)-sequences is done through the concept of statistics:

Definition 7

(Statistics). A statistics of a player is a sequence \((b_0,b_1,\ldots , b_{ \log (n)+1})\) such that \(0\le b_i\le c\) for all i, and if \(b_i,b_j\) are both non-zero and \(i < j\) then \(b_i \le b_j\). A winning statistics is one with \(b_{\log (n)+1}\ne 0\).

So, non-zero members of the statistics form an increasing, not necessarily strictly increasing, sequence. Note that the space needed for each statistics is \(O(\log (n) \log (c))\).
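In code, a statistics is just a short tuple, and Definition 7 can be checked directly; a sketch with our own names:

```python
def is_statistics(b, c):
    """Definition 7: entries between 0 and c, and the non-zero entries
    form a non-decreasing sequence."""
    if any(x < 0 or x > c for x in b):
        return False
    nonzero = [x for x in b if x > 0]
    return all(x <= y for x, y in zip(nonzero, nonzero[1:]))

def is_winning(b):
    """A winning statistics has a non-zero last entry b_{log(n)+1}."""
    return b[-1] != 0

assert is_statistics((0, 2, 0, 3, 5), c=5)
assert not is_statistics((0, 4, 0, 3, 5), c=5)  # 4 > 3 among non-zero entries
assert is_winning((0, 2, 0, 3, 5))
assert not is_winning((2, 3, 0, 0, 0))
```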

A natural question arises: why do we need these statistics, and what do they detect?

The answer is that the statistics are designed to detect the existence of ghost sequences in a play. Below we describe the semantics of the statistics \((b_0,\ldots ,b_i, \ldots , b_{ \log (n)+1})\), which we call the invariants of the statistics:

1.

    The initial statistics is \((0, 0, \ldots , 0)\); this is the state at which the players start to play.

2.

    Each \(b_i\) is either 0 or the priority of a position that occurred in the play.

3.

    Each \(b_i\) is associated with at most one ghost i-sequence. We denote the ghost i-sequence associated with \(b_i\) by \(ghost(b_i)\).

4.

    The priority \(b_i\) is the only information kept about the ghost i-sequence \(ghost(b_i)\). When \(b_i>0\), the value \(b_i\) tells us that a ghost sequence \(ghost(b_i)\) appeared in the play.

5.

    If \(b_i>0\), then the value \(b_i\) is the largest priority that occurred at, or after, the end of the ghost i-sequence \(ghost(b_i)\).

6.

    If \(b_i = 0\), then no ghost i-sequence is associated with \(b_i\). In other words, the sequence \(ghost(b_i)\) is the nil sequence.

7.

    If \(b_i,b_j\) are both non-zero and \(i < j\) then \(b_i \le b_j\).

8.

    If \(0<b_i \le b_j\) and \(i < j\), then the ghost i-sequence associated with \(b_i\) starts after a position with priority \(b_j\) was visited at or after the end of the ghost j-sequence.

While the players are playing, the statistics are updated. So, consider the play

$$ \alpha =v_0, v_1, \ldots , v_k $$

and let \((b_0,\ldots ,b_i, \ldots , b_{ \log (n)+1})\) be the current statistics of \(\alpha \). Each \(b_i\) is associated with the sequence \(ghost(b_i)\), which is either the nil sequence or a ghost i-sequence. The ghost sequence \(ghost(b_i)\) is a subsequence of a segment of the play \(\alpha \); denote this segment by \(segment(b_i)\). When \(ghost(b_i)\) is the nil sequence, so is \(segment(b_i)\). In terms of the notation and explanations above, the idea is that the play \(\alpha \) can be written as

$$ \beta \cdot segment(b_{\log (n)+1}) \cdot \ldots \cdot segment (b_i) \cdot \ldots \cdot segment (b_1) \cdot segment(b_0), $$

where \(\beta \) is a prefix of \(\alpha \). The next move by a player triggers an update of the statistics so that the properties (1)–(8) stay true. The update is explained in the next subsection.

It is important to point out that given \(b_i\) we cannot recover the sequence \(ghost(b_i)\). We only know that the ghost sequence \(ghost(b_i)\) exists; the value \(b_i\) is the only information kept about it. Moreover, updates might change the ghost sequence linked with \(b_i\). These are the main reasons for the term ghost. Yet, we use the existence of ghost sequences in the correctness arguments for our algorithm.

4.4 Updating Statistics

Consider the play \(\alpha =v_0, v_1, \ldots , v_k\) and let \(\bar{b}=(b_0,\ldots ,b_i, \ldots , b_{ \log (n)+1})\) be the current statistics of \(\alpha \). Let the next move, made by one of the players, be to position \(v_{k+1}\). We describe two updates depending on the parity of the priority \(p(v_{k+1})\) and present the reasoning for the updates.

Case 1. Assume that \(p(v_{k+1})\) is even (and hence \(v_{k+1} \in V_0\)). Set \(b=p(v_{k+1})\). Update the current statistics \(\bar{b}\) as follows:

  • Select the largest j such that either

1.

      (\(b_j=0 \ \vee \ b_j\) is odd) and all \(b_i\), where \(i<j\), are non-zero and even,

2.

      or \(0< b_j < b\)

    and then set \(b_j = b\) and \(b_i = 0\) for all \(i < j\).

  • If this update produces a non-zero \(b_{\log (n)+1}\), then Player 0 is declared the winner.

Reasoning:

Suppose the first rule is applied for the update. We have ghost sequences

$$ ghost(b_{j-1}),\ ghost(b_{j-2}), \ \ldots , \ ghost(b_0) $$

that are present in the respective segments of the play \(\alpha \):

$$ segment(b_{j-1}), \ segment(b_{j-2}), \ \ldots , \ segment(b_0). $$

Note that the concatenation

$$ ghost(b_{j-1})\cdot ghost(b_{j-2}) \cdot \ldots \cdot ghost(b_0) \cdot v_{k+1} $$

is a ghost j-sequence, which we now denote by \(ghost(b_j)\), and the segment \(segment(b_j)\) that corresponds to \(ghost(b_j)\) becomes:

$$ segment(b_{j-1}) \cdot segment(b_{j-2}) \cdot \ldots \cdot segment(b_0) v_{k+1}. $$

Now note that with these changes all the old ghost sequences \(ghost(b_j)\), \(ghost(b_{j-1})\), \(\ldots \) , \(ghost(b_0)\) and their corresponding segments become nil sequences. It is not too hard to see that all the invariants (1)–(8) listed above are preserved.

Now suppose that the second rule is applied for the update. In this case, let us replace the last position of \(ghost (b_j)\) with \(v_{k+1}\). The result is a new ghost j-sequence \(ghost(b_j)\), but the segment that corresponds to it is now

$$ segment(b_j) \cdot segment(b_{j-1}) \cdot \ldots \cdot segment(b_0) \cdot v_{k+1}. $$

With these changes all the old ghost sequences \(ghost(b_{j-1})\), \(\ldots \) , \(ghost(b_0)\) and their corresponding segments become nil sequences. Again, it is not too hard to see that all the invariants (1)–(8) listed above are preserved.

Case 2. Assume that \(p(v_{k+1})\) is odd (and hence \(v_{k+1} \in V_1\)). Set \(b=p(v_{k+1})\). Update the current statistics \(\bar{b}\) as follows. Select the largest j such that \(0< b_j < b\), and then set \(b_j = b\) and \(b_i = 0\) for all \(i < j\). (If no such j exists, the statistics remains unchanged.)

Reasoning:

In this case the ghost j-sequence does not change, but the segment \(segment(b_j)\) that corresponds to \(ghost(b_j)\) becomes

$$ segment(b_j) \cdot segment(b_{j-1}) \cdot \ldots \cdot segment(b_0) \cdot v_{k+1}. $$

With these changes all the old ghost sequences \(ghost(b_{j-1})\), \(\ldots \), \(ghost(b_0)\) and their corresponding segments become nil sequences. Again, it is not too hard to see that all the invariants (1)–(8) listed above are preserved.
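The two update cases can be sketched as one function on statistics tuples. The handling of moves for which no index j qualifies (we leave the statistics unchanged) is our assumption, since the text does not spell that case out:

```python
def update(stats, b):
    """Update a statistics tuple after a move to a position of
    priority b, following Cases 1 and 2 of the update rules."""
    bs = list(stats)
    if b % 2 == 0:                            # Case 1: even priority
        best = -1
        for j in range(len(bs)):
            cond1 = (bs[j] == 0 or bs[j] % 2 == 1) and \
                all(bs[i] != 0 and bs[i] % 2 == 0 for i in range(j))
            cond2 = 0 < bs[j] < b
            if cond1 or cond2:
                best = j
        if best == -1:                        # assumption: nothing to update
            return stats
        j = best
    else:                                     # Case 2: odd priority
        cand = [j for j in range(len(bs)) if 0 < bs[j] < b]
        if not cand:                          # assumption: nothing to update
            return stats
        j = max(cand)
    bs[j] = b
    for i in range(j):
        bs[i] = 0
    return tuple(bs)

# Four even-priority moves build up a ghost 2-sequence (2^2 = 4 elements):
assert update((0, 0, 0), 2) == (2, 0, 0)
assert update((2, 0, 0), 2) == (0, 2, 0)
assert update((0, 2, 0), 2) == (2, 2, 0)
assert update((2, 2, 0), 2) == (0, 0, 2)
# An odd priority overwrites the largest smaller entry and resets below it:
assert update((2, 2, 0), 3) == (0, 3, 0)
assert update((0, 0, 0), 3) == (0, 0, 0)
```

On this reading, a play is declared won by Player 0 as soon as the last entry of the tuple becomes non-zero.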

4.5 Reduction to Reachability Games

The notion of statistics and their updates (that we defined for the parity game on (G, p)) naturally leads us to consider a reachability game \(G_R\) based on (G, p) with a starting position s. Here is a description of the reachability game \(G_R\):

1.

    The positions of \(G_R\) are pairs \((a,\tilde{b})\), where a is a position of G and \(\tilde{b}\) is a statistics.

2.

    The starting position is \((s,\tilde{0})\), where s is the starting position of the parity game.

3.

    The target set T is the set of all pairs \((a, \tilde{b})\) such that \(\tilde{b}\) indicates a win.

4.

    Player 0 can move from \((a,\tilde{b})\) to \((a', \tilde{b}')\) if and only if \(a\in V_0\) and the move from a to \(a'\) in G causes the statistics \(\tilde{b}\) to be updated to \(\tilde{b}'\), and \(\tilde{b}\) does not indicate a win.

5.

    Player 1 can move from \((a,\tilde{b})\) to \((a',\tilde{b}')\) if and only if \(a\in V_1\) and the move from a to \(a'\) in G causes the statistics \(\tilde{b}\) to be updated to \(\tilde{b}'\), and \(\tilde{b}\) does not indicate a win.

Note that the reachability game \(G_R\) is defined with respect to Player 0. One can define the corresponding reachability game for Player 1.
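The reduction can be sketched end to end: explore the part of \(G_R\) reachable from the start, then run the attractor computation from the proof of Theorem 5 on it. This is our own illustrative assembly, not the paper's implementation; in particular, leaving the statistics unchanged when no entry qualifies for an update, and starting from the all-zero statistics, are assumptions:

```python
from collections import deque
from math import floor, log2

def solve_parity(V0, V1, E, p, s):
    """Decide whether Player 0 wins the parity game from s by solving
    the reachability game G_R over pairs (position, statistics).

    Assumes the conventions of the text: every position has a successor,
    V0-positions carry even priorities and V1-positions odd ones.
    """
    n = len(V0) + len(V1)
    m = floor(log2(n)) + 2               # entries b_0, ..., b_{log(n)+1}

    def update(stats, b):                # the update rules of Sect. 4.4
        bs = list(stats)
        if b % 2 == 0:
            best = -1
            for j in range(m):
                if ((bs[j] == 0 or bs[j] % 2 == 1)
                        and all(bs[i] != 0 and bs[i] % 2 == 0
                                for i in range(j))) or 0 < bs[j] < b:
                    best = j
            if best == -1:
                return stats
            j = best
        else:
            cand = [j for j in range(m) if 0 < bs[j] < b]
            if not cand:
                return stats
            j = max(cand)
        bs[j] = b
        for i in range(j):
            bs[i] = 0
        return tuple(bs)

    winning = lambda st: st[-1] != 0

    # Forward exploration of the part of G_R reachable from the start.
    start = (s, (0,) * m)
    succ = {}
    stack, seen = [start], {start}
    while stack:
        a, st = stack.pop()
        if winning(st):
            succ[(a, st)] = []           # target positions are absorbing
            continue
        nxt = [(a2, update(st, p[a2])) for a2 in E[a]]
        succ[(a, st)] = nxt
        for q in nxt:
            if q not in seen:
                seen.add(q)
                stack.append(q)
    preds = {q: [] for q in seen}
    for q, ns in succ.items():
        for r in ns:
            preds[r].append(q)

    # Backward attractor of Player 0 to the target set of G_R.
    X = {q for q in seen if winning(q[1])}
    remaining = {q: len(succ[q]) for q in seen if q[0] in V1}
    queue = deque(X)
    while queue:
        u = queue.popleft()
        for v in preds[u]:
            if v in X:
                continue
            if v[0] in V0:
                X.add(v)
                queue.append(v)
            else:
                remaining[v] -= 1
                if remaining[v] == 0:
                    X.add(v)
                    queue.append(v)
    return start in X

# A two-position game 0 <-> 1 with forced moves: Player 0 wins iff the
# larger of the two priorities is even.
assert solve_parity({0}, {1}, {0: [1], 1: [0]}, {0: 2, 1: 1}, 0)
assert not solve_parity({0}, {1}, {0: [1], 1: [0]}, {0: 2, 1: 3}, 0)
```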

Theorem 8

([8]). Player 0 wins the parity game played on (G, p) starting at s if and only if Player 0 wins the reachability game \(G_R\).

Proof

(Outline). The proof follows from the sequence of claims below. The claims are stated for Player 0; similar arguments apply to Player 1.

Claim 1:

If Player 0 wins the game \(G_R\) then the player wins the parity game G.

To prove the claim, consider a play in the parity game G that is consistent with the winning strategy for Player 0 in \(G_R\). The play ends in the target set. So, the play is won with a statistics \(\tilde{b}\) such that \(b_{\log (n)+1}>0\). Thus, for the play there is a ghost sequence \(a_1, \ldots , a_s\) of length \(2^{\log (n)+1}\). Since \(2^{\log (n)+1}>n\), we have \(a_k=a_{\ell }\) for some \(k\ne \ell \). The maximum priority b of a position between \(a_k\) and \(a_\ell \) in the play must be even by the definition of a ghost i-sequence. Thus, a loop has been observed for which the maximum priority of a position in the loop is even.

Claim 2:

Assume that Player 1 wins the parity game G. Then Player 0 cannot win the reachability game \(G_R\).

To prove the claim, suppose that Player 1 follows a memoryless winning strategy in the parity game G, and assume, for the sake of contradiction, that Player 0 wins the reachability game \(G_R\). Then Player 0 forces the play in G into a loop such that the maximum priority in the loop is even. Since Player 1 is using a memoryless strategy, Player 0 can force Player 1 to stay in this loop forever and thus win the play in G. This is a contradiction, since Player 1 followed a winning strategy.

Finally, we need to explain (but not prove) the next claim:

Claim 3:

Assume Player 0 wins the parity game G. If the player follows a memoryless winning strategy in the parity game then the player wins the reachability game \(G_R\).

The proof is somewhat routine but not hard. One way to see it is to imagine the following situation. Assume that Player 0 follows a memoryless winning strategy in G. Any play consistent with the strategy then arrives at a cycle such that the maximal priority in this cycle is even. Note that Player 1 does not have to stay in this cycle. But if Player 1 decides to stay in the cycle forever, then one needs to analyse the way the statistics changes along the play (due to the maximal priority position appearing in the play repeatedly). In this particular case, one can show that eventually Player 0 reaches the target set in \(G_R\).    \(\square \)

Now let us compute the size of the positions in the reachability game \(G_R\). A statistics (for Player 0) is given by \(\log (n)+2\) numbers, each of size \(\log (c)\) bits. Note that \(\log (c)\le \log (n)\). Therefore, the overall size of a representation of a position in the reachability game \(G_R\) is bounded by \((\log (n)+2) (\log (c)+1)\) bits. Hence, the number of positions in the reachability game \(G_R\) is of order \(O(n^{\log (c)+1})\). Since each position in the reachability game \(G_R\) has at most n outgoing edges, the number of edges in \(G_R\) is \(O(n^{\log (c)+2})\). We already explained that reachability games can be solved in time linear in the size of the game. Thus, finding out whether position s in the parity game (G, p) is a winning position for Player 0 takes time \(O(n^{\log (c)+2})\). Hence, we have the following theorem, which gives the running time of the algorithm that outputs the winning sets \(W_0\) and \(W_1\) for the players:

Theorem 9

(Quasi-Polynomial Solution). There is an algorithm that, given a parity game (G, p) with n positions, solves the game in time \(O(n^{\log (c)+3})\).    \(\square \)

The reduction of parity games to reachability games also provides a fixed parameter tractability (FPT) result for solving parity games. For fixed parameter tractability, see the textbook [10]. Here the parameter is c, the maximal priority. To get the FPT result, we count the number of positions in \(G_R\) in a slightly different way.

Consider a statistics \(\tilde{b}=(b_0, b_1, \ldots , b_m)\), where \(m=\log (n)+1\). We would like to bound the number of statistics so that the parameter n is removed from the counting. Say, for simplicity, that each \(b_i\ne 0\). Then, by the definition of statistics, we have \(b_0\le b_1\le \ldots \le b_m\). Call such statistics strict. We can transform a strict statistics into the following sequence:

$$ (b_0, b_1+1, b_2+2, \ldots , b_i+i, \ldots , b_{m}+ m). $$

The mapping \((b_0, b_1, \ldots , b_m) \rightarrow (b_0, b_1+1, b_2+2, \ldots , b_i+i, \ldots , b_{m}+ m)\) induces an injection from the set of all strict statistics into the power set of \(\{0, \ldots , 2c\}\). Hence, there are at most \(2^{2c}\) strict statistics, and we have removed the dependence on n. One can code up all statistics in a similar way and construct an injection from the set of all statistics into the power set of \(\{0, \ldots , 3c\}\). Therefore, the number of all statistics is bounded by \(2^{3c}\). This implies that the number of positions in the reachability game \(G_R\) is bounded by \(n\cdot 2^{3c}\). Since each position in the reachability game \(G_R\) has at most n outgoing edges, the number of edges in \(G_R\) is bounded by \(2^{3c}\cdot n^2\). Thus, from determinacy of reachability games, we have the following theorem:
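The injection can be made concrete: shifting \(b_i\) by i turns a non-decreasing sequence into a strictly increasing one, i.e. a set, from which the statistics is recovered by sorting and undoing the shifts. A small sketch (names are ours):

```python
def encode_strict(stats):
    """Map a strict statistics (all entries non-zero, non-decreasing)
    to the set {b_0, b_1 + 1, ..., b_m + m}; the shifted values are
    strictly increasing, so no two of them collide."""
    return frozenset(b + i for i, b in enumerate(stats))

def decode_strict(encoded):
    """Recover the statistics: sort the set and undo the shifts."""
    return tuple(b - i for i, b in enumerate(sorted(encoded)))

stats = (1, 1, 2, 4)
assert encode_strict(stats) == frozenset({1, 2, 4, 7})
assert decode_strict(encode_strict(stats)) == stats  # hence an injection
```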

Theorem 10

(FPT Theorem). There is an algorithm that, given a parity game with n positions, solves the game in time proportional to \(g(c)\cdot n^3\), where \(g(c)=2^{3c}\).    \(\square \)