
1 Introduction

Parity games [13, 35] are abstract infinite-duration games that represent a powerful mathematical framework to address fundamental questions in computer science and mathematics. They are strictly connected with other games of infinite duration, such as mean and discounted payoff, stochastic, and multi-agent games [7–10].

In formal system design and verification [12, 25], parity games arise as a natural evaluation machinery to automatically and exhaustively check the reliability of distributed and reactive systems [1, 3, 26]. More specifically, in formal verification, model-checking techniques [11, 31] allow one to verify whether a system is correct with respect to a desired behavior, by checking whether a mathematical model of the system meets a formal specification of the expected execution. In case the latter is given by means of a \(\mu \)-calculus formula [24], the model-checking problem can be translated, in linear time, into a parity game [13]. Hence, every parity game solver can be used in practice as a model checker for a \(\mu \)-calculus specification (and vice versa). Using this approach, liveness and safety properties can be addressed in a very elegant and easy way [28]. Moreover, this offers a very powerful machinery to check component software reliability [1, 3].

In the basic setting, parity games are two-player turn-based games, played on directed graphs whose nodes are labeled with priorities (i.e., natural numbers). The players, named player \(0\) and player \(1\), take turns moving a token along the graph's edges. A play thus induces an infinite path, and player \(0\) wins the play if the greatest priority visited infinitely often is even; otherwise, player \(1\) wins the play.

Table 1. Parity algorithms along with their computational complexities.

The problem of finding a winning strategy in parity games is known to be in UPTime \(\cap \) CoUPTime [21], and whether a polynomial-time solution exists is a long-standing open question. Aiming to pin down the exact complexity of parity games, as well as to devise solutions that work efficiently in practice, several algorithms have been proposed in the last two decades. In Table 1, we report the most common ones along with their known computational complexities, where the parameters \(n\), \(e\), and \(d\) denote the number of nodes, edges, and priorities in the game, respectively (for more details, see [15, 16]).

All of the above-mentioned algorithms have been implemented in PGSolver, a collection of tools to solve, benchmark, and generate parity games, written in OCaml by Oliver Friedmann and Martin Lange [15, 16]. Notably, experiments with PGSolver have established the Zielonka Recursive Algorithm as the best-performing algorithm for solving parity games in practice, and have driven the exploration of optimizations such as decomposition into strongly connected components, removal of self-cycles on nodes, and priority compression [2, 22].

Despite the enormous interest in finding efficient algorithms for solving parity games, less emphasis has been put on the choice of the programming language. The scientific community has mainly relied on OCaml as the best-performing programming language for this setting, and on PGSolver as the de facto platform to solve parity games. However, starting from graphs with a few thousand nodes, even using the Zielonka algorithm, PGSolver can require minutes to decide the given game, especially on dense graphs. A natural question is therefore whether there exists a way to improve the running time of PGSolver. We identify three research directions to work on, which specifically involve: the algorithm itself, the way it is implemented, and the chosen programming language. As a result, in this paper we introduce a slightly improved version of the classic Zielonka algorithm along with a heavily optimized implementation in the Scala programming language [29, 30]. Scala is a high-level language, proven to perform well [20], with object-oriented and functional features, that has recently come to the fore with useful applications in several fields of computer science, including formal verification [4]. Our experiments show that, by using Scala's features extensively, we gain two orders of magnitude in running time with respect to the implementation of the Zielonka algorithm in PGSolver.

In detail, the main goal of this work is the design and development of a new tool for solving parity games, based on an improved version of the Zielonka Recursive Algorithm, with performance in mind. The classical Zielonka algorithm requires decomposing the game graph into multiple smaller arenas, which is done by computing, in every recursive call, the difference between the current graph and a given set of nodes. This operation (Fig. 1, lines 10 and 15) turns out to be quite expensive, as it requires generating a new graph at each call. In a sense, this difference operation has the flavor of the complexity of complementing automata in formal verification [33]. Remarkably, our improved version keeps the original arena immutable by tracking the removed nodes across the recursive calls and checking, in constant time, whether a node needs to be excluded or not. Cast in the automata analogy above, it is like enriching the state space with a flag \((removed,\,\lnot removed)\), instead of performing a complementation.

In this paper we consider and compare four implementations: the Classic (C) and Improved (I) Recursive (R) algorithms implemented in Scala (S) and OCaml (O). Using randomly generated games, we show that IRO gains an order of magnitude over CRO, as does CRS over CRO. Remarkably, we show that these improvements are cumulative by showing that IRS gains two orders of magnitude over CRO.

We have been able to achieve this kind of performance optimization by studying in depth the way the classic Recursive Algorithm has been implemented in PGSolver and by concentrating on the following tasks of the algorithm, which we have substantially improved: finding the maximal priority, finding all nodes with a given priority, and removing a node (including the related edges) from the graph. When parsing the graph in Scala, we allocate an array whose size is fixed to the number of nodes of the graph. At the same time, we populate the adjacency list and the incidence list of each node, which avoids building a transposed graph. We also make use of an open-source Java library called Trove that provides a fast and lightweight implementation of the java.util Collection API.
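For illustration, the following minimal sketch shows such a single-pass construction of adjacency and incidence lists; the input format and names are ours and not PGSolver's, and the actual tool uses Trove-backed collections instead of the standard ones.

```scala
import scala.collection.mutable.ArrayBuffer

// Build successor (adjacency) and predecessor (incidence) lists in one pass over
// the edge list, so that no transposed graph has to be materialised later.
// `edges` is the list of (source, target) pairs read from the input file.
def buildLists(n: Int, edges: Seq[(Int, Int)]):
    (Array[ArrayBuffer[Int]], Array[ArrayBuffer[Int]]) = {
  val adj = Array.fill(n)(ArrayBuffer.empty[Int])   // successors of each node
  val inc = Array.fill(n)(ArrayBuffer.empty[Int])   // predecessors of each node
  for ((v, w) <- edges) {
    adj(v) += w   // forward edge
    inc(w) += v   // reverse edge, recorded in the same pass
  }
  (adj, inc)
}
```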

Finally, we want to remark that, among all programming languages, we have chosen to investigate Scala as it combines several modern and useful language features. Among others, Scala offers functional and object-oriented features, compiles its programs for the JVM, is interoperable with Java, and is a high-level language with a concise and clear syntax. The results we obtain strongly support our choice and allow us to declare Scala a clear winner over OCaml in terms of performance.

Outline. The remainder of the paper is structured as follows. In Sect. 2, we give some preliminary concepts about parity games. In Sect. 3, we describe the Classic Recursive Zielonka Algorithm. In Sect. 4, we introduce our improved algorithm based on the Zielonka algorithm, which we implement in Sect. 5 using the Scala programming language. In Sect. 6 we study, analyze, and benchmark the Classic and Improved Algorithms in OCaml (PGSolver) and Scala.

Finally we report that the tool is available as an open source project at https://github.com/vinceprignano/SPGSolver.

2 Parity Games

In this section we report some basic concepts about parity games including the Zielonka Recursive Algorithm. For more details we refer to [14, 35].

A parity game is a tuple \(G=(V,V_{0},V_{1},E,\varOmega )\) where \(\left( V,\, E\right) \) forms a directed graph whose set of nodes is partitioned into \(V=V_{0}\cup V_{1}\), with \(V_{0}\cap V_{1}=\emptyset \), and \(\varOmega :V\rightarrow \mathbb {N}\) is the priority function that assigns to each node a natural number called the priority of the node. We assume \(E\) to be total, i.e., for every node \(v\in V\), there is a node \(w\in V\) such that \((v,w)\in E\). In the following we also write \(vEw\) in place of \((v,w)\in E\) and use \(vE:=\{w\,|\, vEw\}\).

Parity games are played by two players, called player 0 and player 1. Starting in a node \(v\in V\), both players construct an infinite path (the play) through the graph as follows. If the construction has reached, at a certain point, a finite sequence \(v_{0}...v_{n}\) with \(v_{n}\in V_{i}\), then player \(i\) selects a node \(w\in v_{n}E\) and the play continues with the sequence \(v_{0}...v_{n}w\). Every play has a unique winner, determined by the greatest priority that occurs infinitely often. Precisely, the winner of the play \(v_{0}v_{1}v_{2}...\) is player \(i\) iff \(max\{p\,|\,\forall j\,.\,\exists k\ge j\,:\,\varOmega (v_{k})=p\}\, mod\,2=i\). A strategy for player \(i\) is a partial function \(\sigma :V^{*}V_{i}\rightarrow V\) such that, for all sequences \(v_{0}...v_{n}\) with \(v_{j+1}\in v_{j}E\), for \(j=0,...,n-1\), and \(v_{n}\in V_{i}\), we have \(\sigma (v_{0}...v_{n})\in v_{n}E\). A play \(v_{0}v_{1}...\) conforms to a strategy \(\sigma \) for player \(i\) if, for all \(j\), whenever \(v_{j}\in V_{i}\) then \(v_{j+1}=\sigma (v_{0}...v_{j})\). A strategy \(\sigma \) for player \(i\) (\(\sigma _{i}\)) is a winning strategy in node \(v\) if player \(i\) wins every play starting in \(v\) that conforms to \(\sigma \). In that case, we say that player \(i\) wins the game \(G\) starting in \(v\). A strategy \(\sigma \) for player \(i\) is called memoryless if, for all \(v_{0}...v_{n}\in V^{*}V_{i}\) and \(w_{0}...w_{m}\in V^{*}V_{i}\), we have that if \(v_{n}=w_{m}\) then \(\sigma (v_{0}...v_{n})=\sigma (w_{0}...w_{m})\). That is, the value of the strategy on a path depends only on the last node of that path. Starting from \(G\), we construct two sets \(W_{0},W_{1}\subseteq V\) such that \(W_{i}\) is the set of all nodes \(v\) from which player \(i\) wins the game \(G\) starting in \(v\). Parity games enjoy determinacy, meaning that for every node \(v\) either \(v\in W_{0}\) or \(v\in W_{1}\) [13].

The problem of solving a given parity game is to compute the sets \(W_{0}\) and \(W_{1}\), as well as the corresponding memoryless winning strategies, \(\sigma _{0}\) for player 0 and \(\sigma _{1}\) for player 1, on their respective winning regions. The construction of the winning regions makes use of the notion of attractor. Formally, let \(U\subseteq V\) and \(i\in \{0,1\}\). The \(i\)-attractor of \(U\) is the least set \(W\) such that \(U\subseteq W\) and, whenever \(v\in V_{i}\) and \(vE\cap W\ne \emptyset \), or \(v \in V_{1-i}\) and \(vE\subseteq W\), then \(v\in W\). Hence, the \(i\)-attractor of \(U\) contains all nodes from which player \(i\) can move “towards” \(U\) and from which player \(1-i\) must move “towards” \(U\). The \(i\)-attractor of \(U\) is denoted by \(Attr_{i}(G,U)\). Let \(A\) be an arbitrary attractor set. The game \(G\setminus A\) is the game restricted to the nodes \(V\setminus A\), i.e., \(G\setminus A=(V\setminus A,V_{0}\setminus A,V_{1}\setminus A,E\setminus (A{\times }V\cup V{\times }A),\varOmega _{|V\setminus A})\). It is worth observing that the totality of \(G\setminus A\) is ensured by \(A\) being an attractor.

Formally, for all \(k\in \mathbb {N}\), the \(i\)-attractor is defined as follows:

\(Attr_{i}^{0}(U)=U\) ;

\(\begin{aligned}Attr_{i}^{k+1}(U)&=Attr_{i}^{k}(U) \,\cup \,\{v\in V_{i}\mid \exists \,w\in Attr_{i}^{k}(U)\, s.t.\, vEw\}\\&\cup \,\{v\in V_{1-i}\mid \forall \, w\,:\, vEw\implies w\in Attr_{i}^{k}(U)\}\,;\end{aligned}\)

\(Attr_{i}(U)=\bigcup _{k\in \mathbb {N}}Attr_{i}^{k}(U)\).

Fig. 1. Zielonka Recursive Algorithm

3 The Zielonka Recursive Algorithm

In this section, we describe the Zielonka Recursive Algorithm, using the basic concepts introduced in the previous section, and make some observations regarding its implementation in PGSolver.

The algorithm to solve parity games introduced by Zielonka derives from a work of McNaughton [27]. The Zielonka Recursive Algorithm, as reported in Fig. 1, uses a divide-and-conquer technique: it constructs the winning sets for both players from the solutions of subgames. It removes the nodes with the highest priority from the game, together with all nodes (and edges) attracted to this set. The algorithm \(win(G)\) takes as input a graph \(G\) and, after a number of recursive calls over suitably built subgames, returns the winning sets \((W_{0}, W_{1})\) for player \(0\) and player \(1\), respectively. The running-time complexity of the Zielonka Recursive Algorithm is reported in Table 1.
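For readers who prefer code to pseudocode, the following self-contained Scala sketch mirrors the recursion just described. It is our illustration only: the graph representation and helper names are assumptions, and the sketch does not reproduce the exact pseudocode of Fig. 1 (in particular, it returns only the winning regions, not the strategies).

```scala
// Illustrative sketch of Zielonka's recursion; not the exact code of Fig. 1.
object ZielonkaSketch {
  // owner(v) is 0 or 1, priority(v) is a natural number, adj(v) are the successors of v.
  final case class Game(owner: Array[Int], priority: Array[Int], adj: Array[List[Int]])

  // i-attractor of u inside the subgame induced by `nodes` (naive fixpoint).
  def attr(g: Game, nodes: Set[Int], i: Int, u: Set[Int]): Set[Int] = {
    var a = u
    var changed = true
    while (changed) {
      changed = false
      for (v <- nodes if !a(v)) {
        val succ = g.adj(v).filter(nodes)
        val pulled = if (g.owner(v) == i) succ.exists(a) else succ.forall(a)
        if (pulled) { a += v; changed = true }
      }
    }
    a
  }

  // Winning regions (W0, W1) of the subgame induced by `nodes`.
  def win(g: Game, nodes: Set[Int]): (Set[Int], Set[Int]) = {
    if (nodes.isEmpty) return (Set.empty, Set.empty)
    val d = nodes.map(v => g.priority(v)).max        // highest priority in the subgame
    val i = d % 2                                    // player favoured by d
    val u = nodes.filter(v => g.priority(v) == d)    // nodes carrying the highest priority
    val a = attr(g, nodes, i, u)
    val sub = win(g, nodes -- a)                     // solve G \ A
    val opp = if (i == 0) sub._2 else sub._1         // opponent's winning set in G \ A
    if (opp.isEmpty) {
      if (i == 0) (nodes, Set.empty) else (Set.empty, nodes)
    } else {
      val b = attr(g, nodes, 1 - i, opp)             // opponent's attractor of its winning set
      val sub2 = win(g, nodes -- b)                  // solve G \ B
      if (i == 0) (sub2._1, sub2._2 ++ b) else (sub2._1 ++ b, sub2._2)
    }
  }
}
```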

3.1 The Implementation of the Zielonka Algorithm in PGSolver

PGSolver turns out to have very limited applicability in several real scenarios. More precisely, even using the Zielonka Recursive Algorithm (which has been shown to be the best performing in practice), PGSolver may require minutes to decide games with a few thousand nodes, especially on dense graphs. In this work we study in depth the main aspects that cause such poor performance.

Specifically, our investigation begins with the way the (Classic) Recursive Algorithm has been implemented in PGSolver by means of the OCaml programming language. We start by observing that the graph data structure in this framework is represented as a fixed-length array of tuples. Every tuple carries all the information a node needs, such as the player, the assigned priority, and the adjacency list. Before every recursive call is performed, the program computes the difference between the graph and the attractor, and it builds the transposed graph. In addition, the attractor function makes use of a TreeSet data structure that is not available in OCaml's standard library but is imported from TCSlib, a multi-purpose library for OCaml written by Oliver Friedmann and Martin Lange. This library implements the data structure using AVL trees, which guarantee logarithmic search, insertion, and removal. Moreover, the same function recomputes the number of successors for the opponent player in every iteration while looping through every node in the attractor.

Fig. 2. Improved Recursive Algorithm

4 The Improved Algorithm

In this section we introduce an improved version of the Classic Recursive Algorithm by Zielonka. The new algorithm is depicted in Fig. 2. In Fig. 3 we also report the improved version of the attractor function that the new algorithm makes use of.

Fig. 3. Improved Recursive Attractor

Let \(G\) be a graph. Removing a node from \(G\) and building the transposed graph takes time \(\varTheta (|V|\,+\,|E|)\); thus, on dense graphs, such an operation takes \(\varTheta (|V|^{2})\). In order to reduce the running time caused by these graph operations, we introduce an immutability requirement on the graph \(G\), ensuring that every recursive call uses \(G\) without applying any modification to the state space of the graph. Therefore, to construct the subgames in the recursive calls, we keep track of each node that is going to be removed from the graph, adding all of them to a set called \(Removed\).

The improved algorithm can check whether a given node is excluded in constant time, and it completely removes the need to build a new graph in every recursive call. At first glance this may seem a small improvement with respect to the Classic Recursive Algorithm. However, it turns out to be very effective in practice, as shown in the benchmark section below. Further evidence of the importance of this improvement comes from the observation that the difference operation resembles, in spirit, the complementation of automata [33]. Our approach is like avoiding such a complementation by adding constant information to the states, i.e., a flag \((removed,\,\lnot removed)\). Last but not least, concerning the actual implementation, it is also worth mentioning that general-purpose memory allocators are very expensive, as the per-operation cost floats around one hundred processor cycles [18]. Over the years, many efforts have been made to improve memory allocation by writing custom allocators from scratch, a process known to be difficult and error-prone [5, 6].
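The following minimal sketch, with illustrative names of our own, contrasts the two ways of obtaining a subgame: materialising a restricted copy of the arena versus keeping the arena untouched and carrying a set of removed nodes.

```scala
object SubgameSketch {
  // Classic style: build a new, smaller arena, renumbering the surviving nodes.
  // This costs Theta(|V| + |E|) per recursive call.
  def restrictByCopy(adj: Array[List[Int]], keep: Set[Int]): Array[List[Int]] = {
    val ids   = keep.toArray.sorted
    val renum = ids.zipWithIndex.toMap
    ids.map(v => adj(v).collect { case w if keep(w) => renum(w) })
  }

  // Improved style: the arena is never modified; a call only carries the removed
  // set and skips excluded nodes with a constant-time membership test.
  def successors(adj: Array[List[Int]], removed: Set[Int], v: Int): List[Int] =
    adj(v).filterNot(removed)
}
```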

4.1 Implementation in OCaml for PGSolver

Our implementation of the Improved Recursive Algorithm, listed in Fig. 4, does not directly modify the graph data structure (which is represented in PGSolver as an array of tuples), but rather uses a set to keep track of removed nodes.

Fig. 4. Improved Recursive in OCaml

The Improved Recursive Algorithm, named solver, takes three parameters: the graph, its transpose, and a set of excluded nodes. Our improved attractor function uses a HashMap, called tempMap, to keep track of the number of successors of the opponent player's nodes. In addition, we use a Queue, from OCaml's standard library, to loop over the nodes in the attractor. Aiming at performance, the attractor function implemented in PGSolver also returns the set of excluded nodes, which solver passes on to the next recursive call.

5 Scala

Scala [29, 30] is a programming language designed by Martin Odersky, the co-designer of Java Generics and main author of the javac compiler. Scala defines itself as a scalable language: statically typed, and a fusion of an object-oriented language and a functional one. It runs on the Java Virtual Machine (JVM) and supports every existing Java library. Scala is a purely object-oriented language in which, as in Java and Smalltalk, every value is an object and every operation is a method call. In addition, Scala is a functional language in which every function is a first-class object; it is also equipped with efficient immutable and mutable data structures, with Java interoperability as a strong selling point. However, it is not a purely functional language, as objects may change their state and functions may have side effects. The functional aspects are tightly integrated with the object-oriented features, and the combination of both styles makes it possible to express new kinds of patterns and abstractions. All these features make Scala a sensible choice for our task, in comparison with other available programming languages such as C, C++, or Java.

Historically, the first generation of the JVM was entirely interpreted; nowadays the JVM uses a Just-In-Time (JIT) compiler, a complex process aimed at improving performance at runtime. This process can be described in three steps: (1) source files are compiled by the Scala compiler into Java bytecode, which is fed to a JVM; (2) the JVM loads the compiled classes at runtime and executes the computation using an interpreter; (3) the JVM analyzes the application's method calls and compiles the bytecode into native machine code. This last step is done lazily: the JIT compiles a code path only when it knows that it is about to be executed. JIT compilation removes the overhead of interpretation and allows programs to start up quickly; in addition, the compilation itself has to be fast in order not to affect the actual performance of the program. Another interesting aspect of the JVM is that it verifies every class file after loading it, which ensures that execution does not violate certain safety properties. The checks are performed by the verifier and include a complete type check of the program. The JVM is also available on all major platforms, and compiled Java executables run on all of them with no need for recompilation.

The Scala compiler scalac compiles a Scala program into Java class files. The compiler is organized as a sequence of successive steps. The first one, the front end, analyzes the input file, makes sure that it is a valid Scala program, and produces an attributed abstract syntax tree (AST); the back end simplifies the AST and proceeds to the generation phase, where it produces the actual class files that constitute the final output. Targeting the JVM, the Scala compiler checks that the produced code is type-correct, so that it is accepted by the JVM bytecode verifier.

Fig. 5. Improved Algorithm in Scala

In [20], a study published by Google, Scala, despite being a high-level language, performs only about 2.5x slower than machine-optimized C++ code; in particular, it proved to be even faster than Java. As the paper notes: “While the benchmark itself is simple and compact, it employs many language features, in particular high level data structures, a few algorithms, iterations over collection types, some object oriented features and interesting memory allocation patterns”.

5.1 Improved Algorithm in Scala

In this section we introduce our implementation of the Improved Recursive Algorithm in Scala, listed as Figs. 5 and 6.

Fig. 6. Improved Attractor in Scala

Aiming at performance, we use a priority HashMap in which every key is a priority and the associated value is the set of nodes \(v\) with \(priority(v)=key\). For fast, JVM-optimized HashMaps and ArrayLists we use those included in the open-source library Trove. In addition, using the well-known strategy pattern [17], we open the framework to further extensions and improvements. The intended purpose of our implementation is to show that the performance of existing tools for solving parity games can be improved by adopting the improved algorithm and choosing Scala as the programming language. We rely on Scala's internal features and standard library, making heavy use of the dynamic ArrayBuffer data structure. In order to store the arena we use an array of Node objects. The Node class contains a list of adjacent nodes, a list of incident nodes, the node's priority, and its player; the data structure also implements a factory method called “\(--(set:ArrayBuffer[Int])\)” that takes an ArrayBuffer of integers as input, flags all the nodes in the array as excluded, and returns a reference to the new graph. In addition, there is a method called \(max\_priority()\) that returns the maximal priority in the graph and the set of nodes with that priority.
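A minimal sketch of such a representation is shown below. It is an illustration under our own assumptions: field and method names are ours, standard collections stand in for the Trove-backed ones, and the actual SPGSolver code may differ.

```scala
import scala.collection.mutable.ArrayBuffer

// One entry per node: owner, priority, successors and predecessors.
final class Node(val player: Int, val priority: Int) {
  val adj: ArrayBuffer[Int] = ArrayBuffer.empty   // adjacency list (successors)
  val inc: ArrayBuffer[Int] = ArrayBuffer.empty   // incidence list (predecessors)
}

final class Arena(val nodes: Array[Node], private val excluded: Array[Boolean]) {
  def this(nodes: Array[Node]) = this(nodes, Array.fill(nodes.length)(false))

  def isExcluded(v: Int): Boolean = excluded(v)   // constant-time exclusion check

  // Counterpart of the "--" factory method: share the node array, copy only the
  // exclusion flags, and mark the given nodes as removed.
  def --(set: ArrayBuffer[Int]): Arena = {
    val g = new Arena(nodes, excluded.clone())
    set.foreach(v => g.excluded(v) = true)
    g
  }

  // Maximal priority among non-excluded nodes, with the nodes carrying it.
  def maxPriority(): (Int, ArrayBuffer[Int]) = {
    var best = -1
    val carriers = ArrayBuffer.empty[Int]
    for (v <- nodes.indices if !excluded(v)) {
      val p = nodes(v).priority
      if (p > best) { best = p; carriers.clear(); carriers += v }
      else if (p == best) carriers += v
    }
    (best, carriers)
  }
}
```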

The attractor function makes heavy use of an array of integers named \(tmpMap\), preallocated with size equal to the number of nodes in the graph and a negative integer as default value; we use \(tmpMap\), while looping through every node in the set \(A\) given as parameter, to keep track of the number of successors of the opponent player's nodes. We add a node \(v\in V\) to the attractor set when its counter (stored in tmpMap[v]) reaches 0 (\(adj(v)\subseteq A\) and \(v\in V_{opponent}\)) or if \(v\in V_{player}\); using an array of integers, or a HashMap, for this purpose guarantees a constant-time check of whether a node has already been visited and ensures that the opponent node's adjacency list is counted only once. These functions live inside a singleton object called ImprovedRecursiveSolver that extends the Solver interface.
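The counter-based idea can be sketched as follows; this is our illustration, with assumed names and plain arrays in place of the Arena class and the Trove collections used by the actual implementation.

```scala
import scala.collection.mutable.{ArrayBuffer, Queue}

// Counter-based attractor: a node of the opponent joins the attractor only once
// all of its non-excluded successors are in it; tmpMap(v) counts how many are missing.
def attractor(adj: Array[List[Int]], inc: Array[List[Int]], owner: Array[Int],
              excluded: Array[Boolean], player: Int,
              a: ArrayBuffer[Int]): ArrayBuffer[Int] = {
  val n = adj.length
  val tmpMap = Array.fill(n)(-1)                  // -1 = counter not initialised yet
  val inAttr = Array.fill(n)(false)
  a.foreach(v => inAttr(v) = true)
  val queue  = Queue(a: _*)
  val result = ArrayBuffer(a: _*)
  while (queue.nonEmpty) {
    val w = queue.dequeue()
    for (v <- inc(w) if !excluded(v) && !inAttr(v)) {
      if (owner(v) == player) {                   // the player can move into the attractor
        inAttr(v) = true; result += v; queue.enqueue(v)
      } else {
        if (tmpMap(v) < 0)                        // first visit: count live successors once
          tmpMap(v) = adj(v).count(u => !excluded(u))
        tmpMap(v) -= 1
        if (tmpMap(v) == 0) {                     // every successor leads into the attractor
          inAttr(v) = true; result += v; queue.enqueue(v)
        }
      }
    }
  }
  result
}
```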

Fig. 7. Random Games Chart in Logarithmic Scale

6 Benchmarks

In this section we study, analyze, and evaluate the running time of our four implementations: Classic Recursive in OCaml (CRO), Classic Recursive in Scala (CRS), Improved Recursive in OCaml (IRO), and Improved Recursive in Scala (IRS). We have run our experiments on multiple instances of random parity games. We note that IRS does not apply any preprocessing step to the arena before solving. All tests have been run on an Intel(R) Xeon(R) CPU E5620 @ 2.40 GHz, with 16 GB of RAM (and no swap available), running Ubuntu 14.04. Precisely, we have used 100 random arenas generated using PGSolver for each of the following types, given \(N=i\times 1000\) with \(i\) integer and \(1\le i\le 10\), and a timeout set at 600 s. In the following, we report six tables in which we show the running time of all experiments under fixed parameters. Throughout this section we write \(abo_{T}\) when the program has been aborted due to exceeding the time limit and \(abo_{M}\) when the program has been killed by the operating system due to memory consumption. In Fig. 7 we also report the trends of the four implementations using a logarithmic scale with respect to seconds. This figure is based on the averages of all results reported in the tables below.

[Tables of running times on random games]

6.1 Trends Analysis for Random Arenas

The speedup obtained by our implementation of the Improved Recursive Algorithm is in most cases quite noticeable. Figure 8 shows the running-time trend of the Improved and Classic Algorithms on each platform; the time axis is limited to \(\left[ 0,\,100\right] \) seconds. As a result, we show that even with all preprocessing steps enabled in PGSolver, IRS is capable of gaining two orders of magnitude in running time.

Fig. 8. Trends Chart

Fig. 9. Clique Trends

6.2 Trends Analysis for Special Games

Focusing on the Classic Recursive Algorithm in PGSolver and our Improved Recursive Algorithm in Scala, here we show the running times for non-random games generated by PGSolver. In particular, we use four types of non-random games. These experiments have been run against PGSolver using the Classic Recursive Algorithm with all optimizations disabled, and all solutions were matched to ensure correctness.

Clique[n] games are fully connected games without self-loops, where n is the number of nodes. The set of nodes is partitioned into \(V_{0}\) and \(V_{1}\) of the same size. For all \(v\in V_{p}\), \(priority(v)\,\%\,2=p\). For our experiments we set \(n=2^{k}\) where \(8\le k\le 14\). The table below reports the running times of our experiments, and the results are drawn in Fig. 9.

[Table of running times on Clique[n] games]
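As a concrete reading of the description above, a Clique[n] arena could be generated as follows; this sketch is ours, and PGSolver's generator may differ in details such as node numbering and the exact priority assignment.

```scala
// Clique[n]: fully connected, no self-loops, V0/V1 of equal size (n even),
// and priority(v) % 2 equal to the owner of v.
def clique(n: Int): (Array[Int], Array[Int], Array[List[Int]]) = {
  val owner    = Array.tabulate(n)(v => v % 2)                              // alternate V0 / V1
  val priority = Array.tabulate(n)(v => v)                                  // priority(v) % 2 == owner(v)
  val adj      = Array.tabulate(n)(v => (0 until n).filter(_ != v).toList)  // no self-loop
  (owner, priority, adj)
}
```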

In the Ladder[n] game, every node in \(V_{0}\) has priority 2 and every node in \(V_{1}\) has priority 1. In addition, each node \(v\in V\) has two successors: one in \(V_{0}\) and one in \(V_{1}\), which form a node pair. Every pair is connected to the next pair, forming a ladder of pairs; finally, the last pair is connected back to the top. The parameter n specifies the number of node pairs. For our tests, we set \(n=2^{k}\) where \(7\le k\le 19\); the experiments are reported in the table below and their trend is drawn in Fig. 10. Figure 10 shows better performance for CRO than for IRS on the smaller input parameters. This behaviour is not surprising, as the Java Virtual Machine requires a warm-up time.
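Following this description, a Ladder[n] arena could be generated roughly as below; again, this is our own sketch, and PGSolver's generator may number nodes and wire the pairs differently.

```scala
// Ladder[n]: pair k consists of nodes 2k (in V0, priority 2) and 2k+1 (in V1, priority 1);
// both nodes of pair k point to both nodes of the next pair, and the last pair wraps
// around to the first one.
def ladder(n: Int): (Array[Int], Array[Int], Array[List[Int]]) = {
  val size     = 2 * n
  val owner    = Array.tabulate(size)(v => v % 2)
  val priority = Array.tabulate(size)(v => if (v % 2 == 0) 2 else 1)
  val adj = Array.tabulate(size) { v =>
    val next = (v / 2 + 1) % n                    // index of the next pair
    List(2 * next, 2 * next + 1)                  // one successor in V0, one in V1
  }
  (owner, priority, adj)
}
```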

Fig. 10. Ladder Trends

Fig. 11. Model Checker Ladder Trends

[Table of running times]

Model Checker Ladder[n] consists of overlapping blocks of four nodes, where the parameter n specifies the number of desired blocks. Every node is owned by player 1, i.e., \(V_{1}=V\) and \(V_{0}=\emptyset \), and the nodes are connected so that every cycle passes through a single node of priority 0. For our experiments we set \(n=2^{k}\) where \(7\le k\le 19\); the experiments are reported in the table below and the trends are drawn in Fig. 11.

Fig. 12. Jurdzinski Trends with a Fixed Parameter of \(n=10\) Layers

[Table of running times]

Jurdzinski[n, m] games are designed to exhibit the worst-case behaviour of the Small Progress Measures Solver [22]. The parameter n is the number of layers, where each layer has m repeating blocks that are interconnected as described in [22]. As this game takes two parameters, we ran two experiments: one where n is fixed to 10 and \(m=10\times 2^{k}\), for \(k=1,\,2,\,3,\,4,\,5\), and one where m is fixed to 10 and \(n=10\times 2^{k}\), for \(k=1,\,2,\,3,\,4,\,5\). The results of our experiments are reported in the tables below. The trends are drawn in Figs. 12 and 13.

Fig. 13. Jurdzinski Trends with a Fixed Parameter of \(m=10\) Blocks

[Table of running times]

7 Conclusions

PGSolver is a well-established framework that collects multiple algorithms for deciding parity games. For several years this platform has been the only one available for solving and benchmarking parity games in practice. Given PGSolver's limitations in handling huge graphs, several attempts at improvement have been carried out recently. Some of them have been implemented as preprocessing steps in the tool itself (such as priority compression, SCC decomposition, and the like), while others applied parallelization techniques to the algorithms, e.g., using CUDA [19]. However, these improvements often do not deliver the desired performance.

In this paper we started from scratch by revisiting the Zielonka Recursive Algorithm, implementing both the classic and an improved version in Scala and OCaml, and comparing them. The choice of Scala as the programming language was not accidental; rather, it emerged from a careful study focused on performance and simplicity. Scala is interoperable with Java libraries, has a concise and clear syntax, offers functional and object-oriented features, runs on the Java Virtual Machine, and has proven to perform well. Our main result is a new and fast tool for solving parity games, capable of gaining up to two orders of magnitude in running time. In conclusion, we believe there is room for a faster and better framework for solving parity games, and this work is a starting point that raises several interesting questions. For example, what if one implements the other known algorithms for solving parity games in Scala? PGSolver showed that Zielonka's algorithm is the best performing; can one reproduce the same result in Scala? We leave these questions as future work.