1 Introduction

Ensuring the correctness of a system is a difficult task. Bugs in manually constructed hard- or software are often missed during testing. To remedy this problem, two lines of research have emerged. The first one deals with the verification of systems that have already been built and spans topics such as process calculi and model checking. The second line concerns the automatic derivation of systems that are correct by construction, also called synthesis. In both cases, the specification of the system needs to be given, but we can save the work of constructing the actual system in the case of synthesis.

Unfortunately, the complexity of synthesis has been proven to be rather high. For example, when given a specification in the form of a property in linear-time temporal logic (LTL), the synthesis task has a complexity that is doubly-exponential in the size of the specification [31]. Recently, it has been argued that this is however not a big problem [32], as realisable specifications typically have implementations that are small, which can be exploited. This observation is used in the context of bounded synthesis [14, 32], which builds upon the Safraless synthesis principle [25]. Here, the LTL specification is converted to a universal co-Büchi word or tree automaton, which is then, together with a bound b∈ℕ, used for building a safety game such that winning strategies in the game correspond to implementations that satisfy the specification. The bound in this setting describes the maximally allowed number of visits to rejecting states in the co-Büchi automaton along any run of the automaton that corresponds to the input/output observed along the prefix play in the game so far. If there exists an implementation satisfying a given specification, then there exists some bound such that the resulting game is winning for the system player.

In practice, the bound value required is usually rather small [14–16, 32], often much smaller than the number of states in the smallest implementation. This leads to improved running times of implementations following this approach. Consequently, all modern publicly available tools for full LTL synthesis build upon Safraless synthesis. The first of these, named Lily [21], performs the realisability check in an explicit way. Recently, a symbolic algorithm based on antichains has been presented [14], showing a better performance on larger specifications. Surprisingly, the usage of binary decision diagrams (BDDs), a technique that has skyrocketed the size of the systems that can be handled by model checking tools [12, 27], seems not to have been considered in this context so far. A possible explanation for this is that the safety games constructed in the bounded synthesis context contain many counters with dependencies between them in the transition relation. It has been observed that this can tremendously blow up the size of BDDs [7, 33, 40]. Thus, to apply this technique successfully, efficient techniques for reducing the number of counters are essential. In this paper, we investigate this problem and present such techniques. Taken together, they improve upon the performance of previous approaches to full LTL synthesis by several orders of magnitude.

In particular, we show how the safety and non-safety parts of a specification can be handled separately in the synthesis game. As no counters are necessary for safety properties, this reduces the computation time significantly and allows for utilising a major strength of BDDs: efficiently dealing with automata that run in parallel. Since it has been argued that typical specifications found in practice are mostly of the form ⋀ a∈A a→⋀ g∈G g for some sets of assumptions A and guarantees G [4–6, 18, 35], both containing LTL formulas, we tailor our technique to this case. As a second contribution, we present a technique that allows for a further reduction of the number of counters in the non-safety part of the synthesis games.

Obviously, the optimisations described in this paper cannot circumvent the 2EXPTIME-completeness of the LTL synthesis problem, but are rather targeted towards improving the applicability of LTL synthesis for specifications that are deemed typical in practice. As representatives for such specifications, we use the benchmarks from [14, 21] to evaluate our approach. We also consider a new load balancing benchmark that mimics the specification design phases of a scalable system.

This paper consists of four parts. In the first part, we recapitulate the basics of the bounded synthesis approach. Then, we discuss the main contributions of this work. The third part provides information on additional minor techniques needed for an efficient BDD-based implementation of the contributions presented. The fourth part then consists of experimental results and a conclusion.

To be more precise, in the next section, we briefly state the preliminaries. Then, we discuss the solution process of obligation games, which forms the basis of the bounded synthesis approach, which is in turn described in Sect. 4.

With respect to the second part, we first explain the new specification splitting and signalling technique for bounded synthesis in Sect. 5, and then discuss in Sect. 6 how some counters that are introduced in the bounded synthesis process can be stripped away for increased efficiency of the overall approach.

In the third part, we start by discussing how the game that is solved in the bounded synthesis process can efficiently be encoded into binary decision diagrams in Sect. 7. In Sect. 8, we then show how the techniques presented in this paper can also be used for detecting unrealisable specifications, and for those that are realisable, we discuss the fully symbolic extraction of prototype implementations satisfying the specification in Sect. 9.

Section 10 contains the experimental results of running a tool that implements the techniques introduced in this paper on the benchmarks from [21] and [14] as well as on a novel load-balancing system case study. We conclude with a summary.

2 Preliminaries

2.1 Basics

We denote by ℕ the natural numbers, including 0, and by \(\mathbb {B}=\{\mathbf{false},\mathbf{true}\}\) the Boolean values. Given some alphabet Σ, we denote by Σ^ω and Σ^* the sets of infinite and finite words over Σ, respectively. Subsets of Σ^ω are also called ω-languages. For a word w=w_0 w_1…, we denote the set of elements occurring infinitely often in w by inf(w) and, for some i∈ℕ, denote by w^i=w_i w_{i+1} w_{i+2}… the suffix of w from the ith element. By (A→B) we denote the set of all functions with domain A and co-domain B. For elements (a,b) of some set A×B, we denote the projections by (a,b)| A =a and (a,b)| B =b.

2.2 Word automata

An automaton is a tuple \(\mathcal{A} =(Q,\varSigma ,\delta,Q_{\mathit{init}},\mathcal{F})\) where Q is a finite set of states, Σ is a finite alphabet, δ:Q×Σ→2^Q is a transition function, Q init ⊆Q is the set of initial states and \(\mathcal{F} : Q \rightarrow \mathbb {N}\) is a so-called colouring function. For the scope of this paper, we need four types of automata: universal co-Büchi word automata (UCW), deterministic parity automata, deterministic safety automata and deterministic co-safety automata. Deterministic safety and co-safety automata and universal co-Büchi automata are parity automata with {0,1} as the co-domain of the colouring function. The set of states with colour 1 in a safety automaton is absorbing, and for a co-safety automaton the set of states with colour 0 is absorbing, i.e., all transitions from any state in such a set lead back into the set. For deterministic automata, we require that for all (q,x)∈Q×Σ, we have |δ(q,x)|=1, and that the set of initial states contains only one state. We call a subset of states in \(\mathcal{A}\) a strongly connected component (SCC) if every state in the subset is reachable by some sequence of transitions from any other state in the subset. An SCC S⊆Q is furthermore called a maximal strongly connected component if there is no SCC S′ that is a strict superset of S. By definition, we assume that every state is reachable from itself, so we can partition the state space of an automaton into its set of maximal SCCs \(\mathcal{D}(\mathcal{A})\).
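The maximal SCC decomposition \(\mathcal{D}(\mathcal{A})\) will be used again in Sect. 6. The following is an illustrative sketch (not code from any tool described in this paper) of how it can be computed with Tarjan's algorithm for an automaton whose transition function is given as a dictionary; all names and data structures are assumptions made for the example.

```python
# Sketch: maximal SCCs of an automaton, with delta given as a dict mapping
# (state, letter) -> set of successor states (an assumed representation).
def maximal_sccs(states, alphabet, delta):
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def successors(q):
        return {p for a in alphabet for p in delta.get((q, a), set())}

    def strongconnect(q):
        index[q] = low[q] = counter[0]; counter[0] += 1
        stack.append(q); on_stack.add(q)
        for p in successors(q):
            if p not in index:
                strongconnect(p)
                low[q] = min(low[q], low[p])
            elif p in on_stack:
                low[q] = min(low[q], index[p])
        if low[q] == index[q]:              # q is the root of a maximal SCC
            component = set()
            while True:
                p = stack.pop(); on_stack.discard(p); component.add(p)
                if p == q:
                    break
            sccs.append(frozenset(component))

    for q in states:
        if q not in index:
            strongconnect(q)
    return sccs

# Example: q0 and q1 form one maximal SCC, the sink q2 forms another.
delta = {('q0', 'a'): {'q1'}, ('q1', 'a'): {'q0', 'q2'}, ('q2', 'a'): {'q2'}}
print(maximal_sccs(['q0', 'q1', 'q2'], ['a'], delta))
```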

Given an infinite word w=w 0 w 1⋯∈Σ ω, we say that some infinite sequence π=π 0 π 1⋯∈Q ω is a run of \(\mathcal{A}\) over w if π 0∈Q init and for all i∈ℕ, π i+1∈δ(π i ,w i ). We say that π is an accepting run for w if \(\max(\inf (\mathcal{F}(\pi_{0})\mathcal{F}(\pi_{1}) \cdots))\) is even. A word w is accepted by \(\mathcal{A}\) if all runs for w are accepting. The set of all words accepted by \(\mathcal{A}\) is called its language, denoted by \(\mathcal{L}(\mathcal{A})\).

2.3 Finite-state machines (FSMs)

Formally, we distinguish two types of FSMs: Mealy and Moore machines. A Mealy machine is a tuple \(\mathcal{M} =(S,I,O,\delta,s_{\mathit{init}})\), while a Moore machine is described by a tuple \(\mathcal{M} = (S,I,O,\delta,s_{\mathit{init}},L)\). In both cases, S is a set of states, I is a set of input symbols, O is a set of output symbols and s init ∈S is an initial state. For Mealy machines, δ:S×I→S×O is the transition function, while for Moore machines, this function is defined as δ:S×I→S and L:S→O is the labelling function.

We say that some word w=w 0 w 1 w 2⋯∈(I×O)ω is in the language of a Mealy machine \(\mathcal{M}\), written as \(\mathcal{L}(\mathcal{M})\), if there exists some run π=π 0 π 1⋯∈S ω of the machine with π 0=s init and for all i∈ℕ, (π i+1,w i | O )=δ(π i ,w i | I ). Likewise, w is in the language of a Moore machine if there exists some run π=π 0 π 1⋯∈S ω of the machine with π 0=s init and for all i∈ℕ, δ(π i ,w i | I )=π i+1 and w i | O =L(π i ). Such a word is also called a trace of \(\mathcal{M}\).
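As an illustration of this definition, the following sketch (with assumed, simplified data structures, not code from the paper) checks whether a finite prefix of a word over I×O is consistent with the unique run of a deterministic Mealy machine.

```python
# Sketch: delta maps (state, input) -> (next state, output); word is a finite
# list of (input, output) pairs, i.e. a prefix of a word over I x O.
def mealy_accepts_prefix(s_init, delta, word):
    state = s_init
    for (inp, out) in word:
        state, produced = delta[(state, inp)]
        if produced != out:        # the machine would emit a different output
            return False
    return True

# Example: a one-bit buffer that always outputs the previous input bit.
delta = {('s0', 0): ('s0', 0), ('s0', 1): ('s1', 0),
         ('s1', 0): ('s0', 1), ('s1', 1): ('s1', 1)}
print(mealy_accepts_prefix('s0', delta, [(1, 0), (0, 1), (0, 0)]))   # True
```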

We say that an FSM \(\mathcal{M}\) over the input set I and the output set O realizes a word automaton over the alphabet I×O if the language of \(\mathcal{M}\) is contained in the language of the automaton.

2.4 Linear-time temporal logic (LTL)

Before a system that is correct with respect to its specification can be synthesized, the specification has to be formally stated. For such a task, linear-time temporal logic [30] (LTL) is a commonly used logic. Syntactically, LTL formulas are defined inductively as follows (over some set of atomic propositions AP):

  • For all atomic propositions x∈AP, x is an LTL formula.

  • Let ϕ 1 and ϕ 2 be LTL formulas. Then ¬ϕ 1, (ϕ 1∧ϕ 2), (ϕ 1∨ϕ 2), X ϕ 1, F ϕ 1, G ϕ 1, and (ϕ 1 U ϕ 2) are also valid LTL formulas.

The validity of an LTL formula ϕ over AP is defined inductively with respect to an infinite trace w=w 0 w 1⋯∈(2^AP)^ω. Let ϕ 1 and ϕ 2 be LTL formulas. We set:

  • w⊨p if and only if (iff) p∈w 0 for p∈AP

  • w⊨¬ψ iff not w⊨ψ

  • w⊨(ϕ 1∨ϕ 2) iff w⊨ϕ 1 or w⊨ϕ 2

  • w⊨(ϕ 1∧ϕ 2) iff w⊨ϕ 1 and w⊨ϕ 2

  • w⊨X ϕ 1 iff w^1⊨ϕ 1

  • w⊨G ϕ 1 iff for all i∈ℕ, w^i⊨ϕ 1

  • w⊨F ϕ 1 iff there exists some i∈ℕ such that w^i⊨ϕ 1

  • w⊨(ϕ 1 U ϕ 2) iff there exists some i∈ℕ such that for all 0≤j<i, w^j⊨ϕ 1 and w^i⊨ϕ 2

We use the usual precedence rules for LTL formulas (unary temporal operators bind stronger than binary temporal operators) in order to be able to omit unnecessary braces and also allow the abbreviations typically used for Boolean logic, e.g., that a→b is equivalent to ¬a∨b for all formulas a, b.
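The semantics above can be evaluated directly on ultimately periodic words, which have only finitely many distinct suffixes. The following sketch assumes formulas encoded as nested tuples and is only meant to illustrate the definition; it is not part of any tool discussed in this paper.

```python
from functools import lru_cache

# Sketch: w = u . v^omega with u, v lists of sets of atomic propositions;
# formulas are nested tuples, e.g. ('U', ('ap', 'a'), ('ap', 'b')) for a U b.
def evaluate(phi, u, v):
    n = len(u) + len(v)
    letter = lambda i: u[i] if i < len(u) else v[i - len(u)]
    nxt = lambda i: i + 1 if i + 1 < n else len(u)

    def positions_from(i):
        # the (at most n) distinct suffix positions reachable from position i
        seen = []
        while i not in seen:
            seen.append(i)
            i = nxt(i)
        return seen

    @lru_cache(maxsize=None)
    def sat(phi, i):
        op = phi[0]
        if op == 'ap':  return phi[1] in letter(i)
        if op == 'not': return not sat(phi[1], i)
        if op == 'or':  return sat(phi[1], i) or sat(phi[2], i)
        if op == 'and': return sat(phi[1], i) and sat(phi[2], i)
        if op == 'X':   return sat(phi[1], nxt(i))
        if op == 'F':   return any(sat(phi[1], j) for j in positions_from(i))
        if op == 'G':   return all(sat(phi[1], j) for j in positions_from(i))
        if op == 'U':   # scan the suffixes in order, as in the definition
            for j in positions_from(i):
                if sat(phi[2], j):
                    return True
                if not sat(phi[1], j):
                    return False
            return False
        raise ValueError('unknown operator ' + str(op))

    return sat(phi, 0)

# Example: ({a})({})^omega satisfies F a but not G a.
print(evaluate(('F', ('ap', 'a')), [{'a'}], [set()]),
      evaluate(('G', ('ap', 'a')), [{'a'}], [set()]))
```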

Given an LTL formula ψ over AP, we can convert ψ to an equivalent universal co-Büchi word automaton \(\mathcal{A}\) over the alphabet 2^AP such that precisely the words over 2^AP that satisfy ψ are also in the language of \(\mathcal{A}\) [25, 39]. The size of the UCW is at most exponential in the length of ψ.

The LTL synthesis problem is to determine, for a given LTL formula ψ over some set of atomic propositions AP=AP I ∪AP O , whether there exists a Mealy or Moore machine with input symbol set \(2^{\mathsf {AP}_{I}}\) and output symbol set \(2^{\mathsf {AP}_{O}}\) whose language is a subset of the language of ψ, and to compute such a machine whenever one exists. Whenever such a machine exists, we call the specification realisable, otherwise it is unrealisable. Without loss of generality, we will only be concerned with Mealy machine synthesis here.

We call an LTL formula over some set of atomic propositions AP a safety property [34] if for every word w=w 0 w 1⋯∈(2^AP)^ω that is not in its language, there exists some i∈ℕ such that every other infinite word starting with w 0 w 1…w i is also not in its language.

2.5 Obligation games

In this paper, we reduce the synthesis problem to determining the winner (and obtaining a winning strategy) in so-called obligation games [37], which contain reachability and safety games as special cases. They are sometimes also called weak Muller games in the literature.

Intuitively, when solving the synthesis problem by reducing it to determining the winner in an obligation game, the two players (player 0 and 1) in the game represent the input and output of a finite-state machine. Player 0 provides the input to the system, while player 1 produces the output. The game needs to be designed in a way such that whenever player 1 has a winning strategy in the game, the strategy represents a finite-state machine that satisfies the original specification.

Formally, a game is a tuple \(\mathcal{G} = (V, \varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\), where V is a finite set of positions (also called the vertices of the game), Σ 0 and Σ 1 are the finite sets of actions for the two players, E:V×Σ 0×Σ 1→V is an edge function, v init ∈V denotes the initial position and \(\mathcal{F}\) is a winning condition in the form of a Boolean formula in which subsets of V are used as atomic propositions. For a winning condition \(\mathcal{F}\), we denote by \(\mathcal{S}(\mathcal {F})\) the set of all subsets of V that occur in the formula \(\mathcal {F}\). We also call the sub-tuple (V,Σ 0,Σ 1,E,v init ) of \(\mathcal{G}\) the transition structure of \(\mathcal{G}\).

A decision sequence \(\rho= \rho^{0}_{0} \rho^{1}_{0} \rho^{0}_{1} \rho ^{1}_{1} \rho^{0}_{2} \rho^{1}_{2} \cdots\) is an infinite sequence such that for every i∈ℕ, \(\rho^{0}_{i} \in \varSigma _{0}\) and \(\rho^{1}_{i} \in \varSigma _{1}\). We say that ρ induces a play π=π 0 π 1⋯ if π 0=v init and for every i∈ℕ, \(\pi_{i+1} =E(\pi_{i},(\rho^{0}_{i},\rho^{1}_{i}))\).

We categorize plays in \(\mathcal{G}\) by whether they are winning for player 0 or 1. Given an infinite play π=π 0 π 1⋯, we define π as being winning for player 1 if \(\{s\in\mathcal{S}(\mathcal{F}) \mid s \cap \mathrm {Occ}(\pi) \neq \emptyset\} \models\mathcal{F}\), where Occ(π) denotes the set of positions occurring along π, i.e., Occ(π)={v∈V∣∃i∈ℕ:π i =v}. Whenever a play is not winning for player 1, we define the play to be winning for player 0. The Occ function is defined likewise for finite plays. We also say that some decision sequence is winning for some player if the induced play is winning.

It is well-known that in obligation games, there exists a winning strategy for precisely one of the players, i.e., for some player j∈{0,1}, there exists a function f:(Σ 0∪Σ 1)^*→Σ j such that all decision sequences that are in correspondence to f are winning for player j. A decision sequence \(\rho= \rho^{0}_{0} \rho^{1}_{0} \rho^{0}_{1} \rho^{1}_{1} \rho^{0}_{2} \rho^{1}_{2}\cdots\) is said to be in correspondence to f if and only if for all i∈ℕ, \(\rho^{j}_{i} = f(\rho^{0}_{0} \rho^{1}_{0} \cdots\rho ^{1-j}_{i-1+j})\).

Given a game \(\mathcal{G}\) over the action sets Σ 0 and Σ 1, we say that some Moore machine \(\mathcal{M} = (S,\varSigma _{1},\varSigma _{0},\delta,s_{\mathit{init}},L)\) induces a strategy f for player 0 in \(\mathcal{G}\) with f(ϵ)=L(s init ) and \(f(\rho^{0}_{0} \rho^{1}_{0} \cdots\rho^{1}_{k}) =L(\delta(\delta(\cdots\delta(s_{\mathit{init}},\rho^{1}_{0}),\rho^{1}_{1}),\ldots, \rho^{1}_{k}))\) for every prefix decision sequence \(\rho^{0}_{0} \rho ^{1}_{0} \cdots\rho^{1}_{k}\). Likewise, a Mealy machine \(\mathcal{M} = (S,\varSigma _{0},\varSigma _{1},\delta ,s_{\mathit{init}})\) induces a strategy f for player 1 by choosing \(f(\rho^{0}_{0}\rho^{1}_{0} \cdots\rho^{0}_{k}) = \delta(\delta(\cdots\delta(s_{\mathit{init}},\rho^{0}_{0})|_{S},\rho^{0}_{1})|_{S}, \ldots, \rho^{0}_{k})|_{\varSigma_{1}}\) for every prefix decision sequence \(\rho^{0}_{0} \rho^{1}_{0} \cdots\rho^{0}_{k}\). For obligation games, it is assured that whenever there exists a winning strategy for player 0/1, there also exists a Moore/Mealy machine inducing a winning strategy, respectively.

In this paper, we mostly take the view of player 1. Thus, we simply call a decision sequence or play winning if it is winning for player 1. We say that a position is winning for a player p∈{0,1} if changing v init to that position gives player p a winning strategy in the game. We say that a game is winning for a player if that player has a winning strategy.

Henceforth, we use the names of the game tuple components to extract these from a game. For example, for some game \(\tilde{\mathcal{G}} =(\tilde{V},\tilde{\varSigma }_{0},\tilde{\varSigma }_{1},\tilde{E},\tilde{v}_{\mathit{init}},\tilde{\mathcal{F}})\), we define \(v_{\mathit{init}}(\tilde{\mathcal {G}}) = \tilde{v}_{\mathit{init}}\). For automata tuples, we apply the same notational concept.

2.6 Example for obligation games

To clarify the connection between the synthesis problem and obligation games, we give an example here. Figure 1 shows an obligation game \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) for which every decision sequence that induces a winning play represents a word that satisfies the LTL specification ψ=F a∧G(a→X b). The winning condition \(\mathcal{F} = \neg\{v_{2}\} \wedge\{v_{1}\}\) of \(\mathcal{G}\) is satisfied by the plays that never visit v 2 but eventually visit v 1. The game in the example also has the property that precisely the decision sequences that induce a winning play for player 1 in \(\mathcal{G}\) are the ones that satisfy ψ. In general, LTL specifications do not always permit building obligation games with this property. We will however see in Sect. 4 that they are still a suitable computation model for synthesis when applying the bounded synthesis approach.

Fig. 1

Graphical representation of an obligation game \(\mathcal{G} =(V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) with Σ 0=2{a}, Σ 1=2{b}, v init =v 0 and \(\mathcal{F} = \neg\{v_{2}\} \wedge\{v_{1}\}\). The edge labels represent the constraints on the transitions, e.g., the label \(\overline{a}b\) on the edge from v 1 to v 0 describes that E(v 1,∅,{b})=v 0 and the self-loop of v 2 describes that E(v 2,A,B)=v 2 for every A⊆{a} and B⊆{b}

2.7 Important special cases of winning conditions in obligation games

Let \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) be a game. In this paper, we consider two prominent special cases of these winning conditions:

  • If \(\mathcal{F} = \neg V'\) for some V′⊆V, we call \(\mathcal{G}\) a safety game.

  • If \(\mathcal{F} = V'\) for some V′⊆V, we call \(\mathcal {G}\) a reachability game.

Thus, the reachability winning condition is the dual case of the safety winning condition. Note that if a winning condition \(\mathcal{F}\) consists of a conjunction of safety winning objectives or a disjunction of reachability objectives, we can use the following simplification rules:

$$\neg V_1 \wedge\neg V_2 \wedge\cdots\wedge\neg V_k \equiv\neg(V_1 \cup V_2 \cup\cdots\cup V_k) \quad(1)$$
$$V_1 \vee V_2 \vee\cdots\vee V_k \equiv V_1 \cup V_2 \cup\cdots\cup V_k \quad(2)$$

2.8 Safety automata and safety/co-safety games

Given a deterministic safety automaton \(\mathcal{A}\) over the alphabet Σ=Σ I ×Σ O , we can easily build a safety game \(\mathsf {ToGame}_{1}(\mathcal{A},\varSigma _{I},\varSigma _{O})\) that is winning for player 1 if and only if there exists a Mealy machine reading Σ I and writing to Σ O that realizes \(\mathcal{A}\). Likewise, we can build a co-safety game \(\mathsf {ToGame}_{0}(\mathcal{A},\varSigma _{O},\varSigma _{I})\) that is winning for player 0 if and only if there exists a Moore machine reading Σ I and writing to Σ O that realizes \(\mathcal{A}\).

Definition 1

Given a deterministic safety automaton \(\mathcal{A} = (Q,\varSigma ,\delta ,\{q_{\mathit{init}}\},F)\) with Σ=Σ a ×Σ b , we define the game \(\mathsf {ToGame}_{p}(\mathcal{A},\varSigma _{a},\varSigma _{b}) = (V,\varSigma _{a},\varSigma _{b},E,v_{\mathit{init}},\mathcal{F})\) for p∈{0,1} as follows:

  • V=Q

  • For all vV, xΣ a , yΣ b : E(v,x,y)=δ(v,(x,y))

  • v init =q init

  • \(\mathcal{F} = \neg\{v \in V \mid F(v) = 1\}\) if p=1 and \(\mathcal{F} = \{v \in V \mid F(v) = 1\}\) otherwise.
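Definition 1 translates almost literally into code. The following sketch uses assumed explicit dictionaries for the automaton and represents the winning condition as a tagged set; it is only illustrative and not the paper's implementation.

```python
# Sketch: delta maps (state, (x, y)) -> unique successor, F maps states to
# colours in {0, 1}.  The winning condition is represented as a pair
# ('avoid', S) for the formula "not S" and ('reach', S) for the formula "S".
def to_game(Q, Sigma_a, Sigma_b, delta, q_init, F, p):
    V = set(Q)
    E = {(v, x, y): delta[(v, (x, y))]
         for v in Q for x in Sigma_a for y in Sigma_b}
    colour1 = frozenset(v for v in Q if F[v] == 1)
    F_game = ('avoid', colour1) if p == 1 else ('reach', colour1)
    return (V, Sigma_a, Sigma_b, E, q_init, F_game)
```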

2.9 Parallel composition of games

Given two games \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) and \(\mathcal{G}' = (V',\varSigma _{0},\varSigma _{1}, E',v'_{\mathit{init}},\mathcal{F}')\), for some ⊙∈{∧,∨}, we define the parallel composition \(\mathcal{G} \parallel_{\odot}\mathcal{G}'\) of the two games as the (synchronized product) game \(\mathcal{G}^{p} = (V^{p}, \varSigma _{0}, \varSigma _{1}, E^{p}, v^{p}_{\mathit{init}},\mathcal{F}^{p})\) with:

  • V p=V×V′

  • \(v^{p}_{\mathit{init}} = (v_{\mathit{init}},v'_{\mathit{init}})\)

  • For all vV, v′∈V′, xΣ 0 and yΣ 1, E p((v,v′),x,y)=(E(v,x,y), E′(v′,x,y)).

  • \(\mathcal{F}^{p} = \mathcal{F}_{1} \odot\mathcal{F}_{2}\), where:

    1.

      \(\mathcal{F}_{1}\) is obtained by taking the Boolean formula \(\mathcal{F}\) and replacing every occurrence of some atomic proposition \(s \in\mathcal{S}(\mathcal{F})\) in the formula by s×V′, and

    2.

      \(\mathcal{F}_{2}\) is obtained by taking \(\mathcal{F}'\) and replacing every occurrence of some atomic proposition \(s \in\mathcal {S}(\mathcal{F}')\) by V×s.

The conjunctive parallel composition \(\mathcal{G} \parallel_{\wedge}\mathcal {G}'\) has the property that precisely the decision sequences ρ∈(Σ 0 Σ 1)ω that are winning for player 1 in \(\mathcal {G}\) and \(\mathcal{G}'\) are also winning for player 1 in \(\mathcal {G}^{p}\). Likewise, the disjunctive parallel composition \(\mathcal{G}\parallel_{\vee}\mathcal{G}'\) has the property that precisely the decision sequences ρ∈(Σ 0 Σ 1)ω that are winning for player 1 in \(\mathcal{G}\) or \(\mathcal{G}'\) are also winning for player 1 in \(\mathcal{G}^{p}\).

Note that when taking the conjunctive parallel composition of two safety games, the resulting game is still a safety game, using the simplification given in Equation 1.
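For the special case needed later, the synchronized product can be sketched as follows; the games are assumed here to be safety games given as tuples whose last component is the set of positions to avoid, so that simplification (1) applies and the product is again a safety game. The representation is illustrative only.

```python
# Sketch: conjunctive parallel composition of two safety games over the same
# action sets; bad1/bad2 are the position sets to be avoided.
def compose_safety_games(game1, game2):
    (V1, S0, S1, E1, init1, bad1) = game1
    (V2, _,  _,  E2, init2, bad2) = game2
    V = {(v1, v2) for v1 in V1 for v2 in V2}
    E = {((v1, v2), x, y): (E1[(v1, x, y)], E2[(v2, x, y)])
         for (v1, v2) in V for x in S0 for y in S1}
    # (not bad1) and (not bad2) simplifies to: avoid (bad1 x V2) union (V1 x bad2)
    bad = {(v1, v2) for (v1, v2) in V if v1 in bad1 or v2 in bad2}
    return (V, S0, S1, E, (init1, init2), bad)
```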

2.10 Binary decision diagrams

For representing sets of vertices and the transition relation in safety games symbolically, we use reduced ordered binary decision diagrams (BDDs) [8, 9], which represent characteristic functions \(f : 2^{\mathcal{V}} \rightarrow \mathbb {B}\) for some finite set of variables \(\mathcal{V}\). We start by treating them on an abstract level and state the operations on them that we use. For a comprehensive overview, see [9]. Given two BDDs f and f′, we define their conjunction and disjunction as (ff′)(x)=f(x)∧f′(x) and (ff′)(x)=f(x)∨f′(x) for all \(x\subseteq\mathcal{V}\). The negation of a BDD is defined similarly. Given some set of variables \(V' \subseteq\mathcal{V}\) and a BDD f, we define ∃V′.f as a function that maps all \(x \subseteq \mathcal{V}\) to true for which there exists some x′⊆V′ such that f(x′∪(xV′))=true. Dually, we define ∀V′.f≡¬(∃V′.¬f). Given two ordered lists of variables L=l 1,…,l n and \(L' = l'_{1}, \ldots,l'_{n}\) of the same length, we furthermore denote by f[L/L′] the BDD for which some \(x \subseteq\mathcal{V}\) is mapped to true if and only if \(f(x \setminus\{l'_{1}, \ldots, l'_{n}\} \cup\{l_{i} \mid \exists1 \leq i \leq n: l'_{i} \in x\}) = \mathbf {true}\). By abuse of notation, we say that f=false or f=true if f(x)=false or \(f(x) = \textbf{true}\) for all \(x \subseteq\mathcal {V}\), respectively.

In the sequel, we will also be concerned with operations on BDDs over different variable sets. We assume the standard semantics for such cases, e.g., for a BDD f over \(\mathcal{V}\) and a BDD f′ over \(\mathcal{V}'\), we define \((f \wedge f')(x) = f(x \cap\mathcal{V})\wedge f'(x \cap\mathcal{V}')\) for all \(x \subseteq\mathcal{V} \cup \mathcal{V}'\).
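To make the operations above concrete without committing to a particular BDD package, the following stand-in represents a characteristic function explicitly as its set of satisfying valuations; this is of course exponential and only meant to illustrate the semantics of ∧, ∨, ∃ and ∀, not to replace an actual BDD library.

```python
from itertools import combinations

# Sketch: a function f : 2^V -> B is the set of valuations (frozensets of the
# variables set to true) that it maps to true; conjunction and disjunction are
# then plain set intersection and union.
def valuations(V):
    V = list(V)
    return [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def exists(Vp, f, V):
    # maps x to true iff some x' subset of Vp makes f(x' union (x \ Vp)) true
    return {x for x in valuations(V)
            if any((xp | (x - set(Vp))) in f for xp in valuations(Vp))}

def forall(Vp, f, V):
    # dual: not exists Vp . not f
    return {x for x in valuations(V)
            if all((xp | (x - set(Vp))) in f for xp in valuations(Vp))}

V = {'a', 'b'}
f = {frozenset({'a'}), frozenset({'a', 'b'})}     # the function "a"
g = {frozenset({'b'}), frozenset({'a', 'b'})}     # the function "b"
print(exists({'b'}, f & g, V))                    # exists b. (a and b), i.e. "a"
```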

We also define least and greatest fixed points of monotone functions over BDDs. Given two BDDs f and g over the set of variables \(\mathcal {V}\), we write f≤g if for all \(a \subseteq\mathcal{V}\), we have that f(a)=true implies g(a)=true. Let \(g :(2^{\mathcal{V}} \rightarrow \mathbb {B}) \rightarrow(2^{\mathcal{V}} \rightarrow \mathbb {B})\) be a function that maps a BDD over some set of variables \(\mathcal{V}\) onto another BDD over the same set of variables. We call g monotone if for all BDDs f and f′, f≤f′ implies g(f)≤g(f′). We define μ 0.g=false, μ i+1.g=g(μ i.g) for all i∈ℕ and, for j being the least element of ℕ such that μ j+1.g=μ j.g, the least fixed point is defined as μ.g=μ j.g. The greatest fixed point ν.g of a monotone function g is computed likewise, i.e., we define ν 0.g=true, ν i+1.g=g(ν i.g) for all i∈ℕ and, for j being the least element of ℕ such that ν j+1.g=ν j.g, we define ν.g=ν j.g. In both cases, we call μ i.g and ν i.g prefixed points of g. Checking whether a computed prefixed point is already the fixed point is simple in practice, as equivalence checks on BDDs can be performed in constant time in modern BDD libraries [12].
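The fixed-point computations can be sketched generically; in the illustrative code below, the ordering ≤ is subset inclusion on the explicit representation introduced above, and the bottom and top elements play the roles of the constant-false and constant-true functions.

```python
# Sketch: least and greatest fixed points of a monotone function g by naive
# iteration from the bottom and top element, respectively.
def lfp(g, bottom):
    x = bottom
    while True:
        y = g(x)
        if y == x:          # the prefixed point is already the fixed point
            return x
        x = y

def gfp(g, top):
    x = top
    while True:
        y = g(x)
        if y == x:
            return x
        x = y

# Example: least fixed point expressing backward reachability of a target set.
succ = {1: {2}, 2: {3}, 3: {3}, 4: {4}}
target = frozenset({3})
print(lfp(lambda X: frozenset(target | {v for v in succ if succ[v] & X}),
          frozenset()))     # frozenset({1, 2, 3})
```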

For most parts of this paper, it is only of importance what BDDs are good for and not how they actually represent Boolean functions. In Sect. 9, however, we will need this information.

Intuitively, a BDD is an acyclic graph with a root node and two sink nodes, labelled true and false. Every node in the graph, except for the sink nodes, is labelled by a variable and has two successor nodes, connected by so-called then and else edges. Given a set of variables \(\mathcal{V}\) and a Boolean function \(f :2^{\mathcal{V}} \rightarrow \mathbb {B}\), the BDD represents f as follows: for a variable valuation \(v \subseteq \mathcal{V}\) (the set of variables that are set to true), we have f(v)=true if and only if there exists a path in the BDD from the root node to true in which, at every node, we take the then-successor whenever the variable the node is labelled with is contained in v, and take the else-successor otherwise. We consider only ordered BDDs here, for which there exists a total order on the variables, and along every path of a BDD, every variable can occur at most once and only in the given order.

3 Solving obligation games

In this section, we discuss the solution process for obligation games, i.e., how the winner of a game can be computed. Obligation games are determined, i.e., there exists a winning strategy for exactly one of the players. In general, the winning player needs to take the history of the play into account when making the next decision in order to win a play. However, as we will see below, this does not make solving obligation games hard, as it in fact suffices for the winning player to keep track of which vertices have already been visited along a prefix play. Moreover, as vertices that occur only in combination in the vertex sets of the winning condition need not be distinguished, the amount of information that needs to be stored is even smaller. Formally, let \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) be an obligation game. Given a set of vertices V′⊆V, we denote by \(\mathcal{C}(V',\mathcal {F})\) the set of atomic propositions of \(\mathcal{F}\) that denote position sets that intersect with V′, i.e., \(\mathcal{C}(V',\mathcal {F}) = \{ V'' \in\mathcal{S}(\mathcal{F}) \mid V'' \cap V' \neq \emptyset\}\). We use \(\mathcal{Z}(\mathcal{G})\) to represent the set of subsets of \(\mathcal{S}(\mathcal{F})\) that need to be distinguished when tracking which vertices have been visited so far along a prefix play, i.e.,

$$\mathcal{Z}(\mathcal{G}) = \{ S \subseteq\mathcal{S}(\mathcal{F})\mid \exists V' \subseteq V : S = \{ V'' \in\mathcal{S}(\mathcal{F})\mid V'' \cap V' \neq\emptyset\} \}$$

For a finite game \(\mathcal{G}\), the set \(\mathcal{Z}(\mathcal{G})\) is finite. Together with set inclusion, \(\mathcal{Z}(\mathcal{G})\) forms a lattice, in which ∅ is the minimal and \(\mathcal{S}(\mathcal {F})\) is the maximal element. The following definition and the subsequent lemma formalize the intuition that \(\mathcal{Z}(\mathcal {G})\) is the lattice of information about the past that needs to be remembered in obligation games.

Definition 2

Let \(\rho= \rho^{0}_{0} \rho^{1}_{0} \ldots\) be a decision sequence in an obligation game \(\mathcal{G} = (V,\varSigma _{0}\), \(\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) and π=π 0 π 1⋯ be the corresponding play. We define η=η 0 η 1⋯ to be the corresponding path through \(\mathcal{Z}(\mathcal{G})\), i.e., for all i∈ℕ, we have \(\eta_{i} = \mathcal{C}(\mathrm {Occ}(\pi_{0} \cdots\pi _{i}), \mathcal{F})\). If for some j∈ℕ, we have η j =η j+1=…, we define the limit value η ∞ as η ∞ =η j .

Lemma 1

Let \(\rho= \rho^{0}_{0} \rho^{1}_{0} \cdots\) be a decision sequence in an obligation game \(\mathcal{G} = (V,\varSigma _{0}\), \(\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\), π=π 0 π 1⋯ be the corresponding play and η=η 0 η 1⋯ be the corresponding path through \(\mathcal{Z}(\mathcal{G})\). For all i∈ℕ, we have η i ⊆η i+1. Furthermore, η has a limit value η ∞ and ρ is winning if and only if \(\eta_{\infty}\models\mathcal{F}\).

As an example, consider the obligation game \(\mathcal{G}\) in Fig. 1. Here, the winning condition is \(\mathcal {F} = \neg\{v_{2}\} \wedge\{ v_{1}\}\). Obviously, for this game, we have \(\mathcal{Z}(\mathcal{G}) = \{ \emptyset, \{ \{v_{1}\} \}, \{ \{v_{2}\} \}, \{ \{v_{1}\}, \{v_{2}\} \} \}\). For a play π=v 0 v 1 v 0 v 1 v 2…, the corresponding path η=η 0 η 1 η 2 η 3 η 4⋯ has η 0=∅, η 1=η 2=η 3={{v 1}}, and η i ={{v 1},{v 2}} for all i≥4, i.e., it is continuously tracked which elements of \(\mathcal{S}(\mathcal {F})\) are state sets that have already been visited along the play.
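The sets \(\mathcal{C}(V',\mathcal{F})\) and \(\mathcal{Z}(\mathcal{G})\) for this example can be computed directly from the definitions. The sketch below enumerates all subsets of V, which is only feasible for small games and is purely illustrative; all names are assumptions made for the example.

```python
from itertools import chain, combinations

# Sketch: SF is the list of vertex sets occurring in the winning condition.
def C(V_prime, SF):
    return frozenset(Vpp for Vpp in SF if Vpp & V_prime)

def Z(V, SF):
    subsets = chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))
    return {C(frozenset(s), SF) for s in subsets}

# The game of Fig. 1: S(F) = { {v1}, {v2} }.
SF = [frozenset({'v1'}), frozenset({'v2'})]
print(Z({'v0', 'v1', 'v2'}, SF))   # the four elements of Z(G) listed above
```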

The main idea of keeping track of the path in \(\mathcal{Z}(\mathcal {G})\) along a play is that while staying in some element of \(\mathcal {Z}(\mathcal{G})\), the winning player has a memoryless strategy (which only depends on the current vertex in a play of the game and not on the history) in \(\mathcal{G}\); only when a new vertex is visited and thus the play moves to a different element in \(\mathcal{Z}(\mathcal{G})\), the strategy might have to change. Since we know that \(\mathcal {Z}(\mathcal{G})\) is finite, this allows us to solve the game in a bottom-up fashion, where we start from the maximal elements in \(\mathcal {Z}(\mathcal{G})\) and iteratively compute the states that are winning for successively shrinking sets of states visited so far, up to the set of state sets of \(\mathcal{S}(\mathcal{F})\) that contain the initial vertex.

We use some standard notation for this task, starting with the enforceable predecessor operator. Given a game \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\), we define the EnfPre:2V→2V operator as follows:

$$\mathsf {EnfPre}(V') = \{v \in V \mid\forall x \in \varSigma _0 \exists y \in \varSigma _1 : E(v,x,y) \in V' \}$$
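For explicitly represented games, this operator can be written down directly. The following sketch assumes an edge dictionary E[(v, x, y)] and is not tied to the symbolic implementation discussed later.

```python
# Sketch: positions from which player 1 can enforce a successor in V_target,
# no matter which action player 0 picks.
def enf_pre(V_target, V, Sigma0, Sigma1, E):
    return {v for v in V
            if all(any(E[(v, x, y)] in V_target for y in Sigma1)
                   for x in Sigma0)}
```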

When solving an obligation game and working upwards in the \(\mathcal {Z}(\mathcal{G})\) lattice, we might have already identified some vertices as losing (as visiting them causes a transition in the \(\mathcal{Z}(\mathcal{G})\) lattice) and some as winning. Let us assume that we have two disjoint sets of positions W 0,W 1⊆V given and player 1 wins whenever some position in W 1 is reached before any position in W 0 is reached, and furthermore player 1 wins if no position in W 0∪W 1 is ever visited. The set of winning positions for this player and winning condition can easily be obtained by computing a greatest fixed point:

$$\mathsf {Win}_1(W_0,W_1) = \nu X. X \cap(\mathsf {EnfPre}(X \setminus W_0) \cup W_1)$$

Likewise, we can compute the set of states from which player 1 can enforce that no position in W 0 is visited before (possibly) some position in W 1 is reached, assuming that player 1 loses whenever no position in W 0∪W 1 is ever visited:

$$\mathsf {Win}_0(W_0,W_1) = \mu X. X \cup W_1 \cup \mathsf {EnfPre}(X \setminus W_0)$$
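Both fixed points can be computed by the straightforward iterations below. In this sketch, enf_pre(X) is assumed to be the enforceable-predecessor operator already bound to the game at hand (for instance the explicit-state sketch after the EnfPre definition, with all arguments except the target set fixed); it is not the paper's implementation.

```python
# Sketch: Win_1 as a greatest and Win_0 as a least fixed point.
def win1(W0, W1, V, enf_pre):
    X = set(V)
    while True:
        Xn = X & (enf_pre(X - W0) | W1)
        if Xn == X:
            return X
        X = Xn

def win0(W0, W1, V, enf_pre):
    X = set()
    while True:
        Xn = X | W1 | enf_pre(X - W0)
        if Xn == X:
            return X
        X = Xn
```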

Using these prerequisites, we can finally devise a game solving algorithm for obligation games. For all \(S \in\mathcal{Z}(\mathcal{G})\), we define T(S)=Win b (W 0,W 1), where:

  • b=1 if and only if \(S \models\mathcal{F}\)

  • \(W_{0} = \bigcup_{S' \in\mathcal{Z}(\mathcal{G}), S \neq S'} \{v'\notin T(S') \mid S' = S \cup\mathcal{C}(\{v'\},\mathcal{F}) \}\)

  • \(W_{1} = \bigcup_{S' \in\mathcal{Z}(\mathcal{G}), S \neq S'} \{v'\in T(S') \mid S' = S \cup\mathcal{C}(\{v'\},\mathcal{F}) \}\)
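Putting the pieces together, the values T(S) can be computed by processing the lattice from larger to smaller elements, since S∪C({v′},F) is strictly larger than S whenever it differs from S. The sketch below assumes the helper functions of the previous sketches (C, and win1/win0 already bound to the game) and explicit sets throughout; it is illustrative only.

```python
# Sketch: Z_sets are the elements of Z(G) as frozensets of atoms, SF the atom
# list of the winning condition, and sat(S) decides whether S satisfies F.
def solve_obligation_game(V, Z_sets, SF, sat, C, win1, win0):
    T = {}
    for S in sorted(Z_sets, key=len, reverse=True):   # larger elements first
        W0, W1 = set(), set()
        for v in V:
            Sp = frozenset(S | C(frozenset({v}), SF))
            if Sp != S:                # visiting v moves up in the lattice
                (W1 if v in T[Sp] else W0).add(v)
        T[S] = (win1 if sat(S) else win0)(W0, W1)
    return T
# Player 1 wins from v_init iff v_init is in T[C(frozenset({v_init}), SF)].
```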

Theorem 1

(see, e.g., [38])

Let \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) be an obligation game. For every prefix play π=π 0 π 1…π n in \(\mathcal{G}\), we have \(\pi_{n} \in T(\mathcal{C}(\mathrm {Occ}(\pi),\mathcal{F}))\) if and only if player 1 has a winning suffix strategy after the prefix play π.

Corollary 1

Let \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) be an obligation game. Player 1 has a winning strategy in \(\mathcal {G}\) if and only if \(v_{\mathit{init}} \in T(\mathcal{C}(\{v_{\mathit{init}}\},\mathcal{F}))\).

As the values of \(\{T(S)\}_{S \in\mathcal{Z}(\mathcal{G})}\) can be computed using a topological order on \(\mathcal{Z}(\mathcal{G})\), the overall computation can be done in time linear in \(|\varSigma _{0}| \cdot |\varSigma _{1}| \cdot|V| \cdot|\mathcal{Z}(\mathcal{G})|\) (cf. [2, 11, 19]).

3.1 Binary decision diagrams for solving games

Binary decision diagrams are a suitable data structure for the symbolic solution of obligation games (see, e.g., [1]). In order to apply them, we need to encode the set of positions in the game and the sets of actions for the two players into Boolean variables. Then, the edge function can be represented in form of a binary decision diagram (BDD).

Formally, we need four sets of variables \(\mathcal{V}_{\mathit{pre}}\), \(\mathcal{V}_{0}\), \(\mathcal{V}_{1}\) and \(\mathcal{V}_{\mathit{post}}\) in order to do so. The sets \(\mathcal{V}_{\mathit{pre}}\) and \(\mathcal {V}_{\mathit{post}}\) are used to represent predecessor and successor positions for the edges in games, while \(\mathcal{V}_{0}\) and \(\mathcal {V}_{1}\) are used for the action sets of player 0 and 1, respectively. A combined variable valuation for all of these sets is consequently capable of representing a single edge, and as a BDD maps variable valuations to true and false, a BDD over \(\mathcal {V}_{\mathit{pre}} \uplus\mathcal{V}_{\mathit{post}} \uplus\mathcal{V}_{0}\uplus\mathcal{V}_{1}\) can represent the whole edge relation (by mapping variable valuations that represent existing edges to true and the others to false).

We assume that \(\mathcal{V}_{\mathit{pre}}\) and \(\mathcal{V}_{\mathit{post}}\) are totally ordered and use these sets and the lists induced by the sets and the ordering interchangeably. Encoding the set of positions V of a game \(\mathcal{G} = (V,\varSigma _{0},\varSigma _{1},E,v_{\mathit{init}},\mathcal{F})\) into \(\mathcal{V}_{\mathit{pre}}\) and \(\mathcal{V}_{\mathit{post}}\) is done using two encoding operators, which we denote by ⟨⋅⟩_pre and ⟨⋅⟩_post (writing ⟨v⟩_pre for ⟨{v}⟩_pre). For the techniques presented in this paper, any such operators can be used, provided that they fulfil some side-constraints:

  1.

    For all v,v′∈V: if v≠v′, then ⟨v⟩_pre∧⟨v′⟩_pre=false and ⟨v⟩_post∧⟨v′⟩_post=false.

  2.

    For all V′,V″⊆V: ⟨V′∪V″⟩_pre=⟨V′⟩_pre∨⟨V″⟩_pre and ⟨V′∪V″⟩_post=⟨V′⟩_post∨⟨V″⟩_post.

  3.

    For all V′⊆V: ⟨V′⟩_pre is a BDD over \(\mathcal{V}_{\mathit{pre}}\) and ⟨V′⟩_post is a BDD over \(\mathcal{V}_{\mathit{post}}\).

  4.

    For all v∈V, ⟨v⟩_post=⟨v⟩_pre[\(\mathcal{V}_{\mathit{post}}/\mathcal{V}_{\mathit{pre}}\)].

Similarly, we encode Σ 0 and Σ 1 into \(\mathcal{V}_{0}\) and \(\mathcal{V}_{1}\), and overload the ⟨⋅⟩ operator for this purpose as this does not introduce ambiguities. The first three of the properties above are also assumed to hold in this case. We defer a description of the encodings actually used in this work to the later sections. The edge function can now be encoded into a BDD B E by taking:

$$B_E = \bigvee_{v \in V,\, x \in \varSigma_0,\, y \in \varSigma_1} \langle v \rangle_{\mathit{pre}} \wedge\langle x \rangle\wedge\langle y \rangle\wedge\bigl\langle E(v,x,y) \bigr\rangle_{\mathit{post}}$$

Obviously, B E is a BDD over the set of variables \(\mathcal{V} =\mathcal{V}_{\mathit{pre}} \uplus\mathcal{V}_{\mathit{post}} \uplus \mathcal{V}_{0} \uplus\mathcal{V}_{1}\). This encoding has the advantage that it is transparent with respect to taking the parallel composition of two games \(\mathcal{G}^{1} = (V^{1},\varSigma _{0},\varSigma _{1},E^{1},v^{1}_{\mathit{init}},\mathcal{F}^{1})\) and \(\mathcal{G}^{2} = (V^{2},\varSigma _{0},\varSigma _{1},E^{2},v^{2}_{\mathit{init}},\mathcal{F}^{2})\) that are encoded over disjoint sets of position variables. If we define ⟨(v,v′)⟩_pre=⟨v⟩_pre∧⟨v′⟩_pre and ⟨(v,v′)⟩_post=⟨v⟩_post∧⟨v′⟩_post for all positions (v,v′) in \(\mathcal{G}^{1} \parallel_{\odot}\mathcal{G}^{2}\) for some ⊙∈{∧,∨}, then we compute the BDD for the edge function of \(\mathcal{G}^{1} \parallel_{\odot}\mathcal{G}^{2}\) by taking \(B_{E^{1}}\wedge B_{E^{2}}\).

Given a BDD B E encoding an edge function E and a BDD B V encoding a set of positions (over the variables \(\mathcal{V}_{\mathit{pre}}\)), we can compute the EnfPre operator as follows:

$$\mathsf {EnfPre}(B_{V'}) = \forall\mathcal{V}_0. \exists\mathcal{V}_1. \exists \mathcal{V}_\mathit{post}. (B_{V'}[\mathcal{V}_\mathit{post}/\mathcal {V}_\mathit{pre}] \wedge B_E)$$

This equation is a reformulation of the EnfPre function defined earlier in this section, with the modification that it uses BDDs. The replacement of every \(\mathcal{V}_{\mathit{pre}}\) variable by the corresponding variable in \(\mathcal{V}_{\mathit{post}}\) is necessary to make B V represent the successor positions when taking the conjunction with B E (and thus applying the edge function). The existential abstraction of the variables in \(\mathcal{V}_{\mathit{post}}\) lets the BDD forget information about successor positions. The subsequent universal and existential abstractions of \(\mathcal{V}_{0}\) and \(\mathcal{V}_{1}\) finally quantify over the possible moves of the players and at the same time only let information about the predecessor vertices in the game remain in the result of taking EnfPre(B V). The rest of the computation process for solving obligation games can be performed as described at the beginning of this section.

4 Bounded synthesis

The bounded synthesis approach [14, 32] is a conceptually simple technique to check the realizability of a given specification, given in form of a universal co-Büchi word automaton (UCW) for the scope of this paper (which are expressive enough to encode any LTL formula), and to compute a Mealy or Moore machine realizing it in case of a positive answer. We briefly recapitulate the basics of that approach here.

The main observation used in the bounded synthesis approach is that whenever a specification is realizable, there also exists an implementation with a certain maximal number of states, which is exponential in the size of the specification UCW. Furthermore, if some Mealy or Moore FSM realizes a specification, there also exists a bound b∈ℕ on the number of visits to rejecting states in the UCW along each of its runs for any word in the language of the FSM. By taking both facts together, one can derive a worst-case value of b that needs to be considered when searching for an implementation. For the scope of this section, for a UCW \(\mathcal{A}\), we let \(|\mathcal{F}(\mathcal{A})|\) represent the number of rejecting states in the UCW, i.e., the ones with colour 1.

Theorem 2

[28]

Given a universal co-Büchi automaton with n states, we can construct an equivalent deterministic parity automaton \(\mathcal{P}\) with 2n^n n! states and 2n colours.

Theorem 3

[32]

Let \(\mathcal{M}\) be a Mealy or Moore machine with n states and \(\mathcal{A}\) be a universal co-Büchi word automaton. The language of \(\mathcal{M}\) is a subset of the language of \(\mathcal{A}\) if and only if the maximum number of visits to rejecting states in \(\mathcal {A}\) for some run and some word in the language of \(\mathcal{M}\) is less than or equal to \(|\mathcal{F}(\mathcal{A})| \cdot n\).

Corollary 2

[32]

Given a universal co-Büchi word automaton \(\mathcal{A}\) over the alphabet Σ I ×Σ O with n states, there exists a Mealy or Moore machine over Σ I /Σ O realizing \(\mathcal {A}\) if and only if there exists one for which the maximum number of visits to rejecting states in \(\mathcal{A}\) for some run and some word in the language of \(\mathcal{M}\) is less than or equal to \(|\mathcal {F}(\mathcal{A})| \cdot2n^{n}n!\).

This observation gives rise to the following idea: from a UCW, we build a safety automaton that checks if all runs of the UCW for some given input word do not exceed some bound value b∈ℕ. Provided that the bound value is chosen high enough, the unrealisability of the resulting safety automaton implies the unrealisability of the original specification.

Definition 3

Let \(\mathcal{A} = (Q,\varSigma ,\delta,Q_{\mathit{init}},F)\) be a UCW. We define the deterministic safety automaton \(\mathsf {Bound}(\mathcal{A},b) =(Q',\varSigma ,\delta',\{q'_{\mathit{init}}\},F')\) for some b∈ℕ by:

  • Q′=(Q→{−∞,0,…,b})∪{⊥}

  • For all f∈(Q→{−∞,0,…,b}), xΣ, we have δ′(f,x)∈(Q→{−∞,0,…,b}) and

    $$\delta'(f,x)(q) = \max\{ f(q') \mid q' \in Q, q \in\delta(q',x) \} +\begin{cases} 1 & \text{if}\ q \in F \\0 & \text{otherwise}\end{cases}$$

    if for all qQ, we have

    $$b \geq\max\{ f(q') \mid q' \in Q, q \in\delta(q',x) \} +\begin{cases} 1 & \text{if}\ q \in F \\0 & \text{otherwise}\end{cases},$$

    and δ′(f,x)=⊥ otherwise. In both equations, we assume that max(∅)=−∞ and that −∞+1=−∞.

  • For all x∈Σ, δ′(⊥,x)={⊥}

  • \(q'_{\mathit{init}} = \{Q_{\mathit{init}} \mapsto0, Q \setminus Q_{\mathit{init}} \mapsto-\infty\}\)

  • F′={Q′∖{⊥}↦0,{⊥}↦1}
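Definition 3 can be implemented directly by manipulating the counter functions. The following sketch uses dictionaries for them and float('-inf') for −∞; all names and the exact representation are assumptions made for the example.

```python
NEG_INF = float('-inf')

# Sketch: one transition of Bound(A, b).  delta maps (state, letter) to the
# set of UCW successors, rejecting is the set of states with colour 1, and a
# state of the safety automaton is a dict Q -> {-inf, 0, ..., b} or 'bottom'.
def bound_successor(f, x, Q, delta, rejecting, b):
    if f == 'bottom':
        return 'bottom'
    g = {}
    for q in Q:
        val = max([f[qp] for qp in Q if q in delta.get((qp, x), set())],
                  default=NEG_INF)
        if q in rejecting and val != NEG_INF:   # -inf + 1 = -inf by convention
            val += 1
        if val > b:                             # bound exceeded: go to the sink
            return 'bottom'
        g[q] = val
    return g

def bound_initial(Q, Q_init):
    return {q: (0 if q in Q_init else NEG_INF) for q in Q}
```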

Theorem 4

[14, 32]

Let \(\mathcal{A} = (Q,\varSigma ,\delta,Q_{\mathit{init}},F)\) be a UCW with n states and Σ I and Σ O be sets such that Σ=Σ I ×Σ O . There exists a Mealy machine over Σ I and Σ O realizing \(\mathcal{A}\) if and only if the game \(\mathsf {ToGame}_{1}(\mathsf {Bound}(\mathcal{A},b),\varSigma _{I},\varSigma _{O})\) is winning for player 1 with \(b = |\mathcal{F}(\mathcal{A})| \cdot2n^{n}n!\). Furthermore, there exists a Moore machine over Σ I and Σ O realizing \(\mathcal{A}\) if and only if the game \(\mathsf {ToGame}_{0}(\mathsf {Bound}(\mathcal{A},b),\varSigma _{O},\varSigma _{I})\) is winning for player 0 and \(b =|\mathcal{F}(\mathcal{A})| \cdot2n^{n}n!\). Winning strategies for the respective players represent FSMs realizing \(\mathcal{A}\).

In order to speed up bounded synthesis in practice, we can apply a simple strategy: we start with a bound of b=1 and successively increase b while checking whether the resulting game \(\mathsf {ToGame}_{0}(\mathrm{Bound}(\mathcal{A},b),\varSigma _{O},\varSigma _{I})\) is winning for player 0 in the Moore setting or whether the game \(\mathsf {ToGame}_{1}(\mathrm {Bound}(\mathcal{A},b),\varSigma _{I},\varSigma _{O})\) is winning for player 1 in the Mealy setting. Typically, in practice, realizable specifications only require a small bound value in order to be identified as such [14–16, 32]. In order to also detect unrealisable specifications quickly, a procedure described in [14, 21] can be applied: using a universal co-Büchi automaton \(\overline{\mathcal{A}}\) representing the complement language of \(\mathcal{A}\) (i.e., both automata have the same alphabet Σ and we have \(\mathcal {L}(\mathcal{A}) = \varSigma ^{\omega}\setminus\mathcal{L}(\overline {\mathcal{A}})\)), we execute two copies of the bounded synthesis process. In the first process, we successively increase a bound value b until the game \(\mathsf {ToGame}_{0}(\mathrm{Bound}(\overline{\mathcal{A}},b),\varSigma _{O},\varSigma _{I})\) is winning for player 1. In the second copy, we successively increase a bound value of b′ until \(\mathsf {ToGame}_{1}(\mathrm {Bound}(\mathcal{A},b'),\varSigma _{I},\varSigma _{O})\) is winning for player 1. Once one of the two processes terminates, we know whether \(\mathcal{A}\) is realizable over Σ I /Σ O (in the Mealy setting) or not. The corresponding procedure for the Moore setting is analogous to the Mealy case.
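The interleaved search over increasing bound values can be organised as in the following sketch. Here, is_winning is an assumed callable that builds and solves the safety game for the given automaton and bound from the system player's perspective; the sketch therefore abstracts from the game construction itself and runs the two checks in alternation rather than in parallel.

```python
# Sketch: run the realisability check (on ucw) and the unrealisability check
# (on ucw_neg, the UCW for the complement language) with growing bounds until
# one of them succeeds or a maximal bound is reached.
def check_realisability(ucw, ucw_neg, is_winning, max_bound):
    for b in range(1, max_bound + 1):
        if is_winning(ucw, b):
            return ('realisable', b)
        if is_winning(ucw_neg, b):
            return ('unrealisable', b)
    return ('undetermined', max_bound)
```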

4.1 Compositional bounded synthesis

When performing synthesis, it is sometimes desirable to search for implementations that satisfy several specifications. For the scope of this paper, we only consider a special case of this situation: we search for a Mealy machine for which each word in its language is either accepted by:

  • a UCW \(\mathcal{A}\), and

  • a deterministic safety automaton \(\mathcal{A}_{s}\),

or by

  • a deterministic co-safety automaton \(\mathcal{A}_{c}\).

Details on the compositional bounded synthesis approach for more general cases can be found in [15].

Lemma 2

Given a universal co-Büchi automaton \(\mathcal{A}\) with n states, a deterministic safety automaton \(\mathcal{A}_{s}\) with n s states, and a deterministic co-safety automaton \(\mathcal{A}_{c}\) with n c states, we can construct a deterministic parity automaton \(\mathcal{P}\) with n s n c ⋅2n n n! states and 2n colours whose language is \((\mathcal{L}(\mathcal{A}) \cap\mathcal{L}(\mathcal{A}_{s})) \cup \mathcal{L}(\mathcal{A}_{c})\), and a deterministic parity automaton \(\mathcal{P}'\) of equal size whose language is \((\mathcal{L}(\mathcal {A}) \cup\mathcal{L}(\mathcal{A}_{s})) \cap\mathcal{L}(\mathcal{A}_{c})\).

Proof

We first convert \(\mathcal{A}\) to a deterministic parity automaton (Theorem 2) and then use a standard product construction. If the language of \(\mathcal{P}\) should be \((\mathcal {L}(\mathcal{A}) \cap\mathcal{L}(\mathcal{A}_{s})) \cup\mathcal {L}(\mathcal{A}_{c})\), as the states with colour 1 in a safety automaton and the states with colour 0 in a co-safety automaton need to be absorbing, it suffices to assign a state (q,q′,q″) in \(\mathcal {P}\) with \(q \in Q(\mathcal{A}_{s})\), \(q' \in Q(\mathcal{A}_{c})\) and \(q''\in Q(\mathcal{A})\) the colour 0 if \(\mathcal{F}(\mathcal{A}_{c})(q')=0\), colour 1 if \(\mathcal{F}(\mathcal{A}_{s})(q)=\mathcal{F}(\mathcal {A}_{c})(q')=1\), and colour \(\mathcal{F}(\mathcal{A})(q'')\) otherwise.

The other case in which \(\mathcal{P}'\) should have the language \((\mathcal{L}(\mathcal{A}) \cup\mathcal{L}(\mathcal{A}_{s})) \cap \mathcal{L}(\mathcal{A}_{c})\) is analogous. □

Corollary 3

Let \(\mathcal{A} = (Q,\varSigma ,\delta,Q_{\mathit{init}},F)\) be a UCW with n states, \(\mathcal{A}_{s}\) be a safety automaton with n s states, \(\mathcal{A}_{c}\) be a co-safety automaton with n c states and Σ I and Σ O be sets such that Σ=Σ I ×Σ O . There exists a Moore machine over Σ I and Σ O whose induced words are all in \((\mathcal{L}(\mathcal{A}) \cap\mathcal {L}(\mathcal{A}_{s})) \cup\mathcal{L}(\mathcal{A}_{c})\) if and only if the game \((\mathsf {ToGame}_{0}(\mathrm{Bound}(\mathcal{A},b),\varSigma _{O},\varSigma _{I})\parallel_{\vee} \mathsf {ToGame}_{0}(\mathcal{A}_{s},\varSigma _{O},\varSigma _{I})) \parallel_{\wedge} \mathsf {ToGame}_{0}(\mathcal{A}_{c},\varSigma _{O},\varSigma _{I})\) is winning for player 0 and \(b =|\mathcal{F}(\mathcal{A})| \cdot n_{s} \cdot n_{c} \cdot2n^{n}n!\). Furthermore, there exists a Mealy machine whose induced words are all in \((\mathcal{L}(\mathcal{A}) \cap\mathcal{L}(\mathcal{A}_{s})) \cup \mathcal{L}(\mathcal{A}_{c})\) if and only if the game \((\mathsf {ToGame}_{1}(\mathrm {Bound}(\mathcal{A},b),\varSigma _{I},\varSigma _{O}) \parallel_{\wedge} \mathsf {ToGame}_{1}(\mathcal {A}_{s},\varSigma _{I},\varSigma _{O}))\parallel_{\vee} \mathsf {ToGame}_{0}(\mathcal{A}_{c},\varSigma _{I},\varSigma _{O})\) is winning for player 1 and \(b = |\mathcal{F}(\mathcal {A}) | \cdot n_{s} \cdot n_{c} \cdot2n^{n}n!\).

Proof

Adapt Corollary 2 to the compositional case using Lemma 2. □

5 Safety and non-safety: splitting the specification

We now turn towards explaining the main novelties of this work. In the previous sections, we have seen how bounded synthesis works in general: we construct a UCW from a specification and build a family of safety automata for a successively increasing bound value. These safety automata are converted to realisability safety games and solved. Once the game is found to be winning for the system player (player 1 if our aim is to synthesize an implementation in form of a Mealy automaton or player 0 for a Moore automaton), the process terminates. To simplify the presentation, for the rest of this paper, we assume that we search for a Mealy-type implementation. The techniques presented here are however of course also applicable for the Moore-type.

We have also seen how to encode the process of safety game solving (as a special case of obligation game solving) in a way that allows using binary decision diagrams (BDDs) as data structure for this task. When doing so, we however face one main problem: when building the safety automata from the UCW and a given bound value, we introduce a lot of counters into the structure of the word automaton. Recall that in Definition 3, the state space is defined as (Q→{−∞,0,…,b})∪{⊥}, where Q is the set of states of the UCW. Thus, for every state in the original UCW, we introduce a counter ranging from 0 to b with −∞ as an additional value. Also, the definition of the transition function for the safety automata makes use of these counter values. It has been noticed that encoding such number comparisons into BDDs, which is necessary for the solution of the bounded synthesis problem using BDDs in the context of this paper, often leads to huge BDDs in practice [7, 33, 40]. To solve this problem, we introduce two techniques in this paper, of which the first one is described in this section.

In particular, we explain how to decompose an LTL specification being subject to synthesis in a way such that non-safety and safety properties can be treated in parallel. Recall from the introduction of this paper that we assume that the specification is written in the form ψ=⋀ a∈A a→⋀ g∈G g for some set of assumptions A and some set of guarantees G, each consisting of safety and non-safety LTL properties. In the classical bounded synthesis approach, ψ is transformed to a UCW which in turn is converted to its induced safety game for some given bound. Here, we propose a slightly different approach. Instead of building one single game from the specification, we split the latter into parts, build individual games for each of the parts and then take their parallel composition to obtain a composite game (see Sect. 2). This has several advantages:

  1.

    It has been observed [14] that the time to compute a UCW from an LTL formula is a significant part of the overall realisability checking time. By splitting the specification beforehand, building a monolithic UCW is avoided, resulting in a lower total computation time.

  2.

    Taking the parallel composition of multiple game structures can be done in a relatively efficient way when using BDDs for solving the composite game.

  3.

    The state spaces of games corresponding to safety properties do not need the counters that are employed in the bounded synthesis approach. Thus, by decomposing the specification into safety and non-safety parts, we can save counters, which in turn reduces the computation time further.

In order to obtain a valid decomposition scheme, the resulting game must be winning for player 1 (the system player) in the same cases as before, i.e., if and only if either a safety or non-safety assumption is violated or all guarantees are fulfilled. Additionally, winning strategies for the composite game must also be valid solutions to the synthesis problem.

The technique presented in the following does not preserve the smallest bound b such that the specification is fulfillable (as the bound depends on the syntactic structure of the UCW). However, the method proposed is still sound and complete, i.e., if and only if there exists a bound b such that the safety game induced by the UCW for the overall specification and b is winning for player 1, there exists some bound for the non-safety part of the specification and the technique presented in this section such that the resulting game is winning for player 1. Essentially, Corollary 3 establishes this fact.

In [35], the authors propose a method to solve a generalized parity game for a specification of the form ⋀ a∈A a→⋀ g∈G g as stated above successively. They first build games for the safety assumptions and guarantees (i.e., the assumptions and guarantees that happen to be safety properties), strip the non-winning parts (for the system player) from them and compose them with games for the remaining parts of the specification. For the completeness of this methodology, the non-safety assumptions however must not have any effect on the fulfillability of the safety guarantees. In general, we cannot assume this; we thus propose a different method here that is based on introducing a signalling bit into the game that links the safety guarantees and the non-safety part of the specification.

We start by splitting the specification ψ=⋀ a∈A a→⋀ g∈G g over the input atomic propositions AP I and the output atomic propositions AP O into four sets of LTL formulas: the safety assumptions A s , the safety guarantees G s , the non-safety assumptions A n , and the non-safety guarantees G n . Then, we build a reachability game \(\mathcal{G}^{1}\) for the safety assumptions that is won by player 1 if some assumption in A s is violated. For the next step, we add one bit to the output atomic proposition set of the system to be synthesized; let its name be safe g . We build a safety game \(\mathcal {G}^{2}\) from the safety guarantees G s that is won by player 1 if safe g always represents whether one of the safety guarantees has already been violated. For the non-safety part, we take the modified specification \(\psi' = (\bigwedge_{a \in A_{n}} a) \rightarrow( \bigwedge_{g \in G_{n}}g \wedge \mathsf {G}(\mathrm{safe}_{g}))\) and convert it to a UCW \(\mathcal{A}\). Given a bound b∈ℕ and having prepared \(\mathcal{G}^{1}\), \(\mathcal {G}^{2}\) and \(\mathcal{A}\), we can now build the composite game \(\mathcal {G}\) by defining \(\mathcal{G}^{3}_{b} = \mathsf {ToGame}_{1}(\mathsf {Bound}(\mathcal {A},b),2^{\mathsf {AP}_{I}},2^{\mathsf {AP}_{O}})\) and:

$$\mathcal{G} = \mathcal{G}^1 \parallel_\vee(\mathcal{G}^2\parallel_\wedge\mathcal{G}^3_b)$$
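A possible way to organise the splitting step is sketched below: assumptions and guarantees are classified by a purely syntactic safety check (a sufficient condition only; a semantic check could classify more formulas as safety), and the modified non-safety specification ψ′ is assembled with the fresh output bit safe g . Formulas are nested tuples as in the earlier LTL sketch; all function names are illustrative and not taken from the paper's implementation.

```python
from functools import reduce

# Sketch: formulas are assumed to be in negation normal form, so negations
# occur only directly in front of atomic propositions.
def is_syntactic_safety(phi):
    op = phi[0]
    if op == 'ap':
        return True
    if op == 'not':
        return phi[1][0] == 'ap'
    if op in ('and', 'or'):
        return is_syntactic_safety(phi[1]) and is_syntactic_safety(phi[2])
    if op in ('X', 'G'):
        return is_syntactic_safety(phi[1])
    return False                       # F and U are treated as non-safety

def split_specification(assumptions, guarantees):
    conj = lambda fs: reduce(lambda x, y: ('and', x, y), fs) if fs else ('ap', 'true')
    A_s = [a for a in assumptions if is_syntactic_safety(a)]
    A_n = [a for a in assumptions if not is_syntactic_safety(a)]
    G_s = [g for g in guarantees if is_syntactic_safety(g)]
    G_n = [g for g in guarantees if not is_syntactic_safety(g)]
    # psi' = (and of A_n) -> ((and of G_n) and G safe_g), written with not/or
    psi_prime = ('or', ('not', conj(A_n)),
                 ('and', conj(G_n), ('G', ('ap', 'safe_g'))))
    return A_s, G_s, A_n, G_n, psi_prime
```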

Figure 2 visualizes the construction of the composite game. We obtain the following result:

Fig. 2

Graphical representation of the specification splitting approach proposed in Sect. 5 on an example specification consisting of non-safety and safety assumptions and guarantees

Theorem 5

For every LTL specification ψ=⋀ a∈A a→⋀ g∈G g, there exists some bound b∈ℕ such that the composite game \(\mathcal{G}\) built from ψ and b as defined above is won by the system player (player 1) if and only if there exists some bound b′∈ℕ such that the (classical) safety synthesis game built from the UCW corresponding to ψ and b′, using the constructions from Definition 1 and Definition 3, is winning for player 1.

Proof

Assume that ψ is realisable over AP I /AP O . Then, there exists a Mealy automaton/strategy for player 1 realizing it with k states for some k∈ℕ. Thus, the construction of the game makes Corollary 3 applicable.

On the other hand, if there exists no winning strategy, then by the construction of the game, player 0 can prevent player 1 from playing a winning strategy in \(\mathcal{G}\) for every value of b. □

Since \(\mathcal{G}\) is the outcome of taking the parallel composition between the games as defined above, its winning condition is of the form \(\mathcal{F}(\mathcal{G}) = V' \vee(\neg V'' \wedge\neg V''') \) with \(V' = V_{1} \times V(\mathcal{G}^{2}) \times V(\mathcal{G}^{3}_{b})\) for \(\mathcal{F}(\mathcal{G}^{1}) = V_{1}\), \(V'' = V(\mathcal{G}^{1}) \times V_{2}\times V(\mathcal{G}^{3}_{b})\) for \(\mathcal{F}(\mathcal{G}^{2}) = \neg V_{2}\), and \(V''' = V(\mathcal{G}^{1}) \times V(\mathcal{G}^{2}) \times V_{3}\) for \(\mathcal{F}(\mathcal{G}^{3}_{b}) = \neg V_{3}\).

Solving the composite game \(\mathcal{G}\) is simple using the procedure presented in Sect. 3. We first simplify the winning condition of the game to \(\mathcal{F}(\mathcal{G}) = V' \vee (\neg(V'' \cup V'''))\), and then compute T({V′,V″∪V‴}), T({V′}), T({V″∪V‴}) and T(∅). If the last of these sets contains \(v_{\mathit{init}}(\mathcal{G})\), then the game is winning for player 1 and thus the original specification is realisable.

In the context of bounded synthesis, there is however a way to simplify the computation. Let \(B^{F}_{1}\), \(B^{F}_{2}\), and \(B^{F}_{3}\) be BDDs over \(\mathcal{V}_{\mathit{pre}}\) that encode the position sets V″, V‴, and V′, respectively, and let EnfPre be the enforceable predecessor operator that uses the edge function of the composite game. We can obtain the set of winning positions of player 1 in \(\mathcal{G}\) by computing the following BDD:

$$W = \nu X.\; X \wedge\bigl(B^F_3 \vee \mathsf {EnfPre}\bigl(X \wedge(\neg B^F_2) \wedge (\neg B^F_1)\bigr)\bigr)$$

This equation represents operations on BDDs whose application results in a BDD that encodes Win_1(V″∪V‴,V′). We have \(v_{\mathit{init}}(\mathcal{G}) \in \mathsf {Win}_{1}(V'' \cup V''',V')\) if player 1 has a strategy to either eventually reach one of the vertices in V′ without visiting V″∪V‴ before, or to stay away from V″∪V‴ forever. Clearly, Win_1(V″∪V‴,V′) is an under-approximation of the set of vertices in \(\mathcal {G}\) from which player 1 can win. However, Win_1(V″∪V‴,V′) does not classify as winning those vertices from which player 1 cannot avoid visiting V″∪V‴, but can ensure that V′ is eventually visited afterwards (from such vertices, player 1 also wins the game). Since \(\mathcal{G}\) is a finite game, there exists an upper bound u on the number of moves that player 1 may require to reach V′ in such cases. At the same time, the only way to visit V″ is for player 1 not to choose the right valuation for the safe_g output bit (which can always be avoided), and the only way to visit V‴ is that some counter in \(\mathcal{G}^{3}_{b}\) exceeds the bound.

Thus, whenever \(v_{\mathit{init}}(\mathcal{G}) \notin \mathsf {Win}_{1}(V''\cup V''',V')\) although v_init is winning for player 1 in \(\mathcal{G}\), increasing the bound value b used to construct the game \(\mathcal{G}^{3}_{b}\) by u ensures that, in the resulting game \(\mathcal{G}\), player 1 can reach V′ before any vertex in V″∪V‴ is reached from all positions that Win_1(V″∪V‴,V′) misclassified as losing for player 1 in the game with the smaller bound. Consequently, we have v_init ∈ Win_1(V″∪V‴,V′) for the increased bound value. It thus suffices to increase the bound value in order to use the equation above to compute a set of winning states in \(\mathcal{G}\) that eventually contains \(v_{\mathit{init}}(\mathcal{G})\). Since we successively increase the bound value in the bounded synthesis process anyway, we use this simplification in our implementation, described in Sect. 10, as it facilitates the extraction of winning strategies from the game.
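For illustration, the same fixpoint can be computed over explicit vertex sets instead of BDDs; in the following Python sketch, enf_pre is assumed to map a set of vertices to the set of vertices from which player 1 can enforce that the successor lies in that set, mirroring the EnfPre operator on the edge function.

def win1(vertices, reach, avoid, enf_pre):
    # Greatest fixpoint nu X. X /\ (reach \/ EnfPre(X /\ not avoid)):
    # the positions from which player 1 can reach `reach` without touching
    # `avoid` before, or can stay outside `avoid` forever.  With reach = V'
    # and avoid = V'' united with V''', this mirrors the BDD equation above.
    x = set(vertices)
    while True:
        pre = enf_pre(x - avoid)
        new_x = {v for v in x if v in reach or v in pre}
        if new_x == x:
            return x
        x = new_x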

6 Observations on the bounded synthesis approach

In this section, we describe the second optimisation technique proposed in this paper for performing bounded synthesis efficiently using binary decision diagrams (BDDs).

Consider a UCW as depicted in Fig. 3. For a bound of b=2, the corresponding safety automaton \(\mathsf {Bound}(\mathcal{A},b)\) has $5^4+1$ states (including the ones that are not reachable from the initial state). While the LTL formula corresponding to the UCW can be decomposed, we describe here a technique that even works without this decomposition, and can be used orthogonally.

Fig. 3 Example UCW for the LTL formula FG¬a ∧ G((¬a ∧ X a) → XX GF¬b) over the set of atomic propositions {a,b}. Rejecting states are doubly-circled. The part of the automaton consisting of q_0 and q_1 checks that eventually a stops occurring in the input word. Whenever a letter is read that does not contain a, but the subsequent letter contains a, the automaton branches universally from q_1 into q_0 and q_2 at the same time. A suffix run starting from state q_2 then checks if we have GF¬b for the following suffix word

In the bounded synthesis approach, any other procedure Bound′ that converts a UCW and a bound value b to a safety automaton can be used, as long as it ensures that for every universal co-Büchi word automaton \(\mathcal{A}\) and bound value b∈ℕ, we have \(\mathcal{L}(\mathsf {Bound}(\mathcal{A},b)) \subseteq\mathcal{L}(\mathsf {Bound}'(\mathcal{A},b))\subseteq\mathcal{L}(\mathcal{A})\). In such a case, the soundness and completeness arguments given in the previous sections still hold. We make use of this fact by deriving such a modified procedure from the following observation: when decomposing the UCW into maximal strongly connected components (SCCs), any run through the UCW can, by the definition of SCCs, never return to a maximal SCC once it has left it. Thus, it is possible to interpret the bound value in a modified way: instead of bounding the number of allowed visits to rejecting states along an entire run, we bound this number for every maximal SCC separately, i.e., we reset the counter whenever a maximal SCC is left. As no maximal SCC can be re-entered, we do not need to keep track of the counters for SCCs already left in a run. At the same time, whenever a maximal SCC does not contain rejecting states, the counter values for its states can only be 0 or −∞ with this modification. We formally define:
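To make this observation concrete, the following Python sketch computes the SCC information on which the modified construction (Definition 4 below) is based: a map assigning to every UCW state its maximal SCC, which induces the relation D, and the set K of states whose SCC contains no rejecting state. The explicit graph representation is an assumption made purely for illustration.

def scc_info(states, successors, rejecting):
    """For a UCW with state set `states`, a function successors(q) returning
    all one-step successors of q (under any letter), and the set `rejecting`
    of rejecting states, compute:
      scc_of: maps each state to its maximal SCC (as a frozenset); two states
              are D-equivalent iff they are mapped to the same set,
      K:      the states not in the same maximal SCC as a rejecting state."""
    rejecting = set(rejecting)

    def reachable(q, succ):
        seen, stack = {q}, [q]
        while stack:
            for p in succ(stack.pop()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    # predecessor function derived from the successor function
    preds = {q: set() for q in states}
    for q in states:
        for p in successors(q):
            preds[p].add(q)

    scc_of = {}
    for q in states:
        forward = reachable(q, successors)            # states reachable from q
        backward = reachable(q, lambda p: preds[p])   # states that can reach q
        scc_of[q] = frozenset(forward & backward)

    K = {q for q in states if not (scc_of[q] & rejecting)}
    return scc_of, K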

Definition 4

Let \(\mathcal{A} = (Q,\varSigma ,\delta,Q_{\mathit{init}},F)\) be a UCW. We define D to be the equivalence relation over Q such that for every q,q′∈Q, we have (q,q′)∈D if and only if q and q′ are in the same maximal SCC. We furthermore define K⊆Q to be the set of states that are not in the same maximal SCC as a rejecting state.

The deterministic safety automaton \(\mathsf {Bound}'(\mathcal{A},b) = (Q',\varSigma ,\delta',\{q'_{\mathit{init}}\},F')\) for some b∈ℕ is defined by:

  • Q′=(((Q∖K)→{−∞,0,…,b})×(K→{−∞,0}))∪{⊥}

  • For all f∈(((Q∖K)→{−∞,0,…,b})×(K→{−∞,0})) and x∈Σ, we have δ′(f,x)∈(((Q∖K)→{−∞,0,…,b})×(K→{−∞,0})) and

    $$\delta'(f,x)(q) = \max\bigl(\{f(\tilde{q}) + |F \cap\{q\}| \mid\tilde{q} \in Q,\ q \in\delta(\tilde{q},x),\ (\tilde{q},q) \in D\} \cup\{|F \cap\{q\}| \mid\tilde{q} \in Q,\ q \in\delta(\tilde{q},x),\ f(\tilde{q}) \neq-\infty,\ (\tilde{q},q) \notin D\}\bigr)$$

    if for all q∈Q, we have

    $$\max\bigl(\{f(\tilde{q}) + |F \cap\{q\}| \mid\tilde{q} \in Q,\ q \in\delta(\tilde{q},x),\ (\tilde{q},q) \in D\} \cup\{|F \cap\{q\}| \mid\tilde{q} \in Q,\ q \in\delta(\tilde{q},x),\ f(\tilde{q}) \neq-\infty,\ (\tilde{q},q) \notin D\}\bigr) \leq b,$$

    and δ′(f,x)=⊥ otherwise. In both equations, we assume that max(∅)=−∞ and that −∞+1=−∞.

  • For all x∈Σ, δ′(⊥,x)=⊥

  • \(q'_{\mathit{init}} = \{Q_{\mathit{init}} \mapsto0, Q \setminus Q_{\mathit{init}} \mapsto-\infty\}\)

  • F′={Q′∖{⊥}↦0,{⊥}↦1}
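A direct, explicit-state reading of Definition 4 is sketched below. The data layout (dictionaries for counter functions, a function delta(q, x) returning the x-successors in the UCW) and the name bound_prime_step are illustrative assumptions; scc_of is the SCC map from the sketch above.

NEG_INF = float("-inf")   # plays the role of the counter value -infinity

def bound_prime_step(f, x, states, delta, rejecting, scc_of, b):
    # One transition of Bound'(A, b): f maps every UCW state to a counter in
    # {-inf, 0, ..., b}.  Returns the successor counter function, or None
    # (standing for the sink state) if some counter would exceed the bound b.
    f_next = {}
    for q in states:
        candidates = []
        for q_src in states:
            if q not in delta(q_src, x):
                continue
            step = 1 if q in rejecting else 0
            if scc_of[q_src] == scc_of[q]:
                # same maximal SCC: carry the counter over (-inf + 1 == -inf)
                candidates.append(f[q_src] + step)
            elif f[q_src] != NEG_INF:
                # a maximal SCC is left: the counter is reset
                candidates.append(step)
        f_next[q] = max(candidates, default=NEG_INF)   # max over the empty set is -inf
        if f_next[q] > b:
            return None   # bound exceeded: move to the rejecting sink
    return f_next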

7 Encoding bounded synthesis in BDDs

After the discussion of the main ideas presented in this paper, we turn towards filling the remaining blanks in the BDD-based approach to synthesis. In particular, it needs to be discussed how to efficiently symbolically encode the counters in the game \(\mathsf {ToGame}_{1}(\mathsf {Bound}(\mathcal {A},b),2^{\mathsf {AP}_{I}},2^{\mathsf {AP}_{O}})\) and how to deal with \(\mathcal{G}^{1}\) and \(\mathcal{G}^{2}\).

The efficiency of solving games using BDDs heavily depends on a smart encoding of the state space into the BDD bits. As already stated, for a symbolic solution of a safety game, four groups of BDD variables are needed: two groups for the predecessor and successor game positions in its edge function (\(\mathcal{V}_{\mathit{pre}}\) and \(\mathcal{V}_{\mathit{post}}\)), one for the input to the system (\(\mathcal{V}_{0}\)) and one for the output (\(\mathcal{V}_{1}\)). As we defined the input as \(I = 2^{\mathrm {AP}_{I}}\) and the output as \(O = 2^{\mathrm{AP}_{O}}\) for the scope of this paper, a straightforward Boolean encoding of I and O for usage in the BDDs exists: we allocate one BDD bit for each element of AP_I and AP_O. It remains to find a suitable encoding for the state space of the game.

Since our overall game \(\mathcal{G}\) is the product of some smaller state spaces, we parallelise the problem and search for state space encodings of \(\mathcal{G}^{1}\), \(\mathcal{G}^{2}\) and \(\mathcal{G}^{3}_{b} =\mathsf {ToGame}_{1}(\mathsf {Bound}(\mathcal{A},b)\), \(2^{\mathsf {AP}_{I}},2^{\mathsf {AP}_{O}})\) separately.

7.1 The non-safety part

Recall that in the context of bounded synthesis, the safety game induced by a UCW for a given bound b has a certain property: the state space consists of all functions mapping the states of the UCW onto {−∞,0,1,…,b} (or {−∞,0} for some states when using the optimisation technique from the previous section). For each state, we can encode the value the function maps it to individually. For the scope of this paper, we define the following encoding for the counter set {−∞,0,1,…,b}: we use ⌈log_2(b+1)⌉+1 bits. One bit represents whether the value equals −∞; the remaining bits hold the standard binary encoding of the number (if the value differs from −∞). Taking an extra bit for the −∞ value has the advantage of obtaining smaller BDDs in most cases, as this value appears very often in the definition of the transition function. Encoding the range {−∞,0}, on the other hand, is trivial, as we only need one bit for doing so.
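As a minimal sketch of this bit layout in plain Python (the BDD variables themselves are not modelled here, and the representation of −∞ by float('-inf') is an assumption of the sketch):

import math

def encode_counter(value, b):
    # Encode a counter value from {-inf, 0, ..., b} into ceil(log2(b+1)) + 1
    # bits: the first bit flags -inf, the remaining bits hold the binary
    # representation of the value (arbitrary, here 0, if the flag is set).
    width = math.ceil(math.log2(b + 1))
    if value == float("-inf"):
        return [1] + [0] * width
    assert 0 <= value <= b
    return [0] + [(value >> i) & 1 for i in range(width)]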

We also propose and use one additional trick. The games defined in the previous section are built in a way such that they permit one type of non-determinism: we can allow the system player to choose a successor state from a set of possible ones. If the system player can do this in a greedy way, i.e., the non-determinism can be resolved after each input/output cycle while ensuring that the decision sequence for the play is still winning in the unmodified game, the game semantics remain unchanged. For bounded synthesis, we can thus relax the transition relation (encoded by the BDD B E ) slightly: we allow the system player to increase her counters in addition to the counter increases imposed by visits to rejecting states. We also allow her to set some counters from −∞ to some arbitrary other value. This non-minimality [3] of the transition relation typically decreases the size of its symbolic encoding.

7.2 The safety part

For the encoding of the game components that correspond to safety assumptions and safety guarantees, we state two different, straight-forward methods, which we explain in the following. The first method only works for locally checkable properties and is usually more efficient than the second one in this case, whereas the latter method is capable of handling arbitrary safety properties.

7.2.1 Smart encoding of locally checkable properties

If an LTL property is of the form ψ=G(ϕ) for a formula ϕ in which the only temporal operator occurring is X, then ψ is a locally checkable property [26]. Let k be the maximal nesting depth of the X operator in ϕ. For checking the satisfaction of such a property when observing a trace, it suffices to store, in every round, whether the property has already been violated, the last k inputs/outputs (also called the history), and the current round number (with the domain {0,1,…,k−1,≥k}). Then, in every round with a number ≥k, we update the violation flag using the inputs and outputs of the last k rounds and the current round. For encoding the round number in a symbolic way, we use a binary representation.
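To make the bookkeeping concrete, the following Python sketch monitors a locally checkable property G(ϕ) with X-nesting depth k; the predicate violates, which decides whether ϕ is falsified at the oldest position of a window of k+1 valuations, stands in for the concrete ϕ, and the window length plays the role of the round counter described above.

from collections import deque

def make_local_monitor(k, violates):
    # Monitor for a locally checkable property G(phi) whose only temporal
    # operator is X, nested at most k times.  `violates` receives the last
    # k+1 input/output valuations (oldest first) and returns True iff phi is
    # falsified at the oldest stored position.
    window = deque(maxlen=k + 1)      # the last k valuations plus the current one
    state = {"violated": False}

    def step(valuation):
        window.append(valuation)
        if len(window) == k + 1 and violates(list(window)):
            state["violated"] = True
        return not state["violated"]  # True as long as G(phi) has not been falsified

    return step

For instance, for a (hypothetical) property G(r → X g), we would have k=1, and violates(w) would return True iff r holds in w[0] but g does not hold in w[1].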

Encoding such a property in this way has some advantages: First of all, the encoding proposed is canonical. Furthermore, multiple properties can share the information stored in the game state space this way, so we can recycle the stored information for all such locally checkable safety properties. Note that it is possible to reduce the number of bits necessary for storage by leaving out the history bits not needed for checking the given properties.

7.2.2 The general method

Safety properties have equivalent syntactically safe UCWs, i.e., UCWs in which all rejecting states are absorbing. In this case, the UCW can be determinised by the power set construction. Thus, we can assign to each state in the universal automaton a state bit which is set to 1 whenever there is a run of the automaton on the input/output given by the players during the game so far that ends in the respective state.

This method is applicable to all safety properties but requires the computation of a universal co-Büchi automaton having the property stated above. While it has been observed that checking if a property is safety is not harder than building an equivalent universal co-Büchi automaton [24], it is not guaranteed that typical procedures for constructing UCW from LTL properties yield automata that have this property. We use a simplified approach in our actual implementation. If the procedure employed for converting an LTL formula into a UCW yields a UCW for which all rejecting states are absorbing or transient, we declare the property as being safety and otherwise treat it as a non-safety property. While we may miss safety properties this way, the soundness of the overall approach is preserved.
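The check used in the implementation, namely whether every rejecting state of the UCW is absorbing or transient, can be sketched as follows; the successor function abstracts from the transition labels and, like all names here, is an assumption of the sketch.

def is_syntactically_safe(states, successors, rejecting):
    # True if every rejecting state is absorbing (all outgoing transitions are
    # self-loops) or transient (it does not lie on any cycle).
    def on_cycle(q):
        seen, stack = set(), list(successors(q))
        while stack:
            p = stack.pop()
            if p == q:
                return True
            if p not in seen:
                seen.add(p)
                stack.extend(successors(p))
        return False

    for q in rejecting:
        absorbing = set(successors(q)) <= {q}
        if not absorbing and on_cycle(q):
            return False
    return True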

8 Checking unrealisability

So far, we have only dealt with the case that we want to prove the realisability of a specification. If a specification is unrealisable, then for no bound b∈ℕ is the safety game induced by the bound and the specification won by the system player. Thus, an implementation of the approach presented in this paper, which would typically increase the bound successively until the induced safety game is winning for the system player, would have to increase the bound all the way up to the worst-case bound established in Corollary 3. In [14, 21], it is described how the bounded synthesis approach can nevertheless be used for detecting unrealisability quickly: we simply run the synthesis procedure in parallel on the original specification and on the negated specification with swapped inputs and outputs. While for the original realisability question we search for a Mealy automaton satisfying the specification, for the negated specification we search for a Moore automaton for the environment that witnesses the unrealisability of the specification. One of these runs is guaranteed to terminate. Whenever this happens, we can abort the other run. This results in a decision procedure for the overall problem.
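Schematically, the combined check can be organised as follows; check_real and check_unreal are placeholders for solving the synthesis game for ψ and the dualised game, respectively, with a given bound, and the interleaved loop stands in for the truly parallel execution (the cap max_bound is an artefact of the sketch only).

def decide_realisability(check_real, check_unreal, max_bound=64):
    # Interleave bounded realisability and unrealisability checks with
    # increasing bound values; one of the two succeeds for a sufficiently
    # large bound, so in principle no cap is needed.
    b = 1
    while b <= max_bound:
        if check_real(b):
            return "realisable"
        if check_unreal(b):
            return "unrealisable"
        b += 1
    raise RuntimeError("bound limit reached without a verdict")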

When applying the optimisations from this paper, this idea is not directly usable, as when negating the specification, the result is not again of the form ⋀_{a∈A} a → ⋀_{g∈G} g for some sets of assumptions A and guarantees G. Instead, checking if the environment player wins can be done by swapping input and output, negating only the modified specification, and making the final states of \(\mathcal{G}^{1}\) losing for player 1 instead of winning. Then, player 1 (which is now the environment player) wins only if the safety assumptions are fulfilled, the safe_g bit always represents if a safety guarantee has already been violated, and the negated modified specification is fulfilled (with respect to the given bound). Note that this makes the resulting game a safety game (again), as in this case, we build the composite game by taking \(\mathcal{G}' = \mathcal{G}^{1} \parallel_{\wedge}\mathcal{G}^{2} \parallel_{\wedge}\mathcal{G}^{3}_{b}\). We can thus easily solve \(\mathcal{G}'\) symbolically using the procedure from Sect. 3.

9 Extracting an implementation in case of realizability

Before we discuss the experimental results in the next section, it remains to be described how implementations that realize a given specification can be extracted in case the game \(\mathcal{G}\) is found to be winning for the system player during realizability checking.

From a theoretical point of view, extracting a winning strategy from a safety game is not difficult. Recall that at the end of Sect. 5, we have seen that the composite game built in the synthesis approach of this paper can be seen as a safety game. For such games, we can extract a winning strategy by using the set of winning positions of the game as the state set of the Mealy automaton and choosing, for every position/input combination, one output and successor position that do not lead to leaving this set of positions. From a technical point of view, doing so in a fully symbolic manner, i.e., without enumerating all reachable positions explicitly, is however difficult.
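Over an explicit game representation, this extraction scheme can be sketched as follows; the function moves and the data layout are assumptions of the sketch, and the fully symbolic variant actually used is discussed next.

def extract_mealy(winning, initial, inputs, moves):
    # `winning` is the set of winning positions, `inputs` the input alphabet,
    # and moves(position, x) yields pairs (output, successor position).
    # For every reachable winning position and every input we fix one output
    # whose successor stays inside the winning set; the positions visited this
    # way become the states of the Mealy machine.
    assert initial in winning
    strategy, todo, visited = {}, [initial], {initial}
    while todo:
        pos = todo.pop()
        for x in inputs:
            choice = next(((y, succ) for (y, succ) in moves(pos, x)
                           if succ in winning), None)
            assert choice is not None, "winning positions admit a safe move"
            strategy[(pos, x)] = choice
            if choice[1] not in visited:
                visited.add(choice[1])
                todo.append(choice[1])
    return strategy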

In the preceding part of this paper, the concrete definition of BDDs was not of relevance, i.e., it sufficed to view them as a data structure for Boolean functions. For the task of extracting an implementation, we need to deviate from this. Assume that V′ is the set of winning positions in the safety game \(\mathcal{G}\) and B_E is a BDD representing the edge function of \(\mathcal{G}\). Then, a BDD representation of the set of transitions that the Mealy automaton may perform can be obtained by computing \(B'_{E} = B_{E} \wedge\lceil V' \rceil_{\mathit{pre}} \wedge\lceil V' \rceil_{\mathit{post}}\), where \(\lceil V' \rceil_{\mathit{pre}}\) denotes a BDD encoding V′ over the variables \(\mathcal{V}_{\mathit{pre}}\) and \(\lceil V' \rceil_{\mathit{post}}\) one over \(\mathcal {V}_{\mathit{post}}\). If in the order of the BDD, the variables in \(\mathcal{V}_{\mathit{pre}}\) and \(\mathcal{V}_{0}\) occur first, then the BDD can be thought of as a decision tree with collapsed branches. Such a tree can easily be converted to a switching circuit, using one gate per node in the tree, and one flip-flop per state bit.

However, in practice, the variables can be ordered arbitrarily, as a large share of the power of modern BDD libraries is rooted in the fact that they can perform dynamic variable reordering to keep the sizes of the BDDs small. As a remedy, the algorithm proposed by Kukula and Shiple [23] can be used instead. It is directly applicable to the BDD \(B'_{E}\), and it generates a circuit whose size is linear in the number of nodes in the BDD. Intuitively, the algorithm works as follows: for every node in the BDD, some switching logic unit is synthesized. These units are connected by the same edges as the BDD nodes. Whenever a new input bit valuation is fed into the Mealy machine, the switching logic starts to propagate which node in the BDD is reachable from the root node for some output. After the true node has been reached, along such a path, a token is back-propagated in the BDD to the root node, while every switching logic part corresponding to an output variable feeds the chosen value to the output bits and every switching logic part corresponding to a \(\mathcal{V}_{\mathit{post}}\) bit feeds the output to a set of flip-flops that preserve the state for the next computation cycle.

In the actual implementation of the approach in this paper, whenever there are multiple possible valuations for an output or post-state bit in the back-propagation phase, the bit is set to false.

10 Experimental results

We implemented the symbolic bounded synthesis approach presented in this paper in C++ with the BDD library CuDD v.2.4.2 [36], using dynamic variable reordering. The resulting tool Unbeast (v.0.6) assumes that the individual guarantees and assumptions are given separately. The first step in the computation is to split non-safety properties from safety ones. For this, the tool calls the LTL-to-Büchi converter ltl2ba v.1.1 [17] on the negations of the properties to obtain equivalent universal co-Büchi word automata. As described in Sect. 7.2.2, we then check if the automata obtained are syntactically safe. Locally checkable properties are converted to games using the procedure specialised in this case, all other safety properties are treated by the general procedure given. The UCW corresponding to the modified non-safety part of the specification (as described in Sect. 5) is again computed by calling ltl2ba on it. The last step for realisability checking is to solve the composite games built for a successively increasing number of counter bits per state in the UCW until the game is winning for the system player. We always start with two bits.

We check for realisability and unrealisability of the given specification simultaneously, as described in Sect. 8. In case of realisability, we extract an implementation that fulfils the specification. We do this in a fully symbolic way, as described in the previous section. The remaining game graph is, together with the specification, converted to a NuSMV [10] model. This allows running NuSMV to verify the correctness of the implementations produced.

All computation times given in the following are obtained on a Sun XFire computer with 2.6 GHz AMD Opteron processors running an x64-version of Linux. All tools considered are single-threaded. We restricted the memory usage to 2 GB and set a timeout of 3600 seconds. The running times for our tool always include the computation times of ltl2ba.

10.1 Performance comparison on the examples from [14, 21]

We compare Unbeast with the only other currently publicly available tools for full LTL synthesis, namely Lily v.1.0.2 [21] and Acacia 2010 [14, 15]. In the following, for Acacia as well as Unbeast, we report the running time of the unrealisability check if the property is unrealisable, and the running time of the realisability check plus model synthesis if the property is realisable.

The 23 mutex variations used as examples in [14, 21] are a natural starting point for our investigation. For usage with our tool, we adapted these examples to the Mealy-type computation model used in this work by prefixing all references to input variables with a next-time operator. For these 23 examples, Lily needed 54.35 seconds of computation time (of which 44.25 seconds were devoted to computing the automata from the given specifications). Acacia in turn finished the task in 52.43 seconds (including 40.79 seconds for building the automata). The version of Unbeast with the features described in this paper had a total running time of about 41.24 seconds. As computing the automata from the specification parts is not a pure preprocessing step in Unbeast, we do not split up the total running time here.

Interestingly, Unbeast spends 39.5 out of the 41.24 seconds of overall computation time on showing the unrealisability of a single specification, namely specification no. 4 from [21]. Lily spent only 1.95 seconds here, whereas Acacia needed 2.11 seconds.

In addition to what has been described in this paper, Unbeast v.0.6 uses one special trick that is common in BDD-based model checking: by grouping the variables in \(\mathcal{V}_{\mathit{pre}}\) together with their respective copies in \(\mathcal{V}_{\mathit{post}}\), the search space for suitable variable orderings can easily be pruned in a reasonable way. For the fourth specification from [21], however, this optimisation is very harmful: it increases the computation time of Unbeast to 198.76 seconds, and the computation time for all specifications together to 204.6 seconds. We currently have no explanation for this huge difference, but use this variable grouping feature for the benchmarks given in the next sub-section anyway, as this optimisation typically does no harm.

In summary, the implementation of the approach presented in this paper is typically much faster on the benchmarks from this suite, with the exception of specification number 4, for which the BDD-based approach is not competitive.

10.2 A load balancing system

For evaluating the techniques presented in this paper in a more practical context, we present an example concerning a load balancing unit that distributes requests to a fixed number of servers. Such a unit typically occurs as a component of a bigger system, which in turn utilises it for scheduling internal requests. We demonstrate how a synthesis procedure can be used in the early development process of the bigger system in order to systematically engineer the requirements of the load balancer. Using a synthesis tool in this context makes it possible to detect errors in the specification that result in unrealisability as early as possible. We start by stating the fundamental properties of the load balancing system and finally tune it towards serving requests to the first server in a prioritised way. After each added specification/assumption, we run our implementation in order to check if the specification is still realisable.

The following list contains the parts of the specification. Table 1 gives the running times of our tool and Acacia for the respective sets of assumptions and guarantees and some numbers of servers n∈{2,…,9}. We did not use the compositional techniques introduced in the 2010 version of Acacia as they are targeted towards specifications that consist of a conjunction of guarantees (and at the same time have no assumptions, or only assumptions that are to be interpreted locally to some guarantees).

Table 1 Running times of Acacia (“A”) and Unbeast (“U”) for the sub-problems defined in Sect. 10.2 for n∈{2,…,9}. For each combination of assumptions and guarantees, it is reported whether the specification was realisable (+/−), how many counter bits per state in the UCW were involved at the end of the computation (only for Unbeast), and how long the computation took (in seconds). For realisable specifications and Unbeast, we consider the case that an implementation is to be extracted (“+S”) and the case that implementation extraction is turned off (“−S”). It can easily be seen that implementation extraction appears to be very costly in this approach. We left out the Lily tool as it is not competitive on the load balancing example

The system to be synthesized uses the input bits r_0,…,r_{n−1} for receiving the information whether some server is sufficiently under-utilised to accommodate another task, and the output bits g_0,…,g_{n−1} for the task assignments. An additional input job reports on an incoming job to be assigned. For usage with Acacia, all occurrences of output variables in the specification have been prefixed with a next-time operator to take into account the different underlying computation model.

  1. Guarantee: Non-ready servers are never bothered: ⋀_{0≤i<n} G(g_i → r_i)

  2. Guarantee: A task is only assigned to one server: ⋀_{0≤i<n} G(g_i → ⋀_{j∈{0,…,n−1}∖{i}} ¬g_j)

  3. Guarantee: Every server is used infinitely often: ⋀_{0≤i<n} GF(g_i)

Note that the guarantees 1, 2 and 3 cannot be fulfilled at the same time, as some server might never report that it is ready. Therefore, we replace the third part of the specification and continue:

  4. Guarantee: Liveness of the system: ⋀_{0≤i<n} (GF(r_i) → GF(g_i))

  5. Guarantee: Only jobs that actually exist are assigned: G((⋁_{0≤i<n} g_i) → job)

Again, the guarantees 1, 2, 4 and 5 are unrealisable in conjunction as the job signal might never be given. We add the assumption that this is not the case:

  6. Assumption: There are always incoming jobs: GF(job)

At this point, the system designer learns that this added requirement does not fix the unrealisability problem either. The reason is that the clock cycles in which job is set and the cycles in which some server is ready might occur in an interleaved way. We therefore add:

  7. Assumption: The job signal stays set until the job has been assigned: G((job ∧ ⋀_{0≤i<n} ¬g_i) → X(job))

Note that the specification is still not realisable. The reason is that the ready signal of some server i might always be given only after a job has just been assigned to another server j (for some i≠j). If server i then always immediately withdraws its ready signal, the controller can never schedule a job to server i, contradicting guarantee 4 if both servers i and j are ready infinitely often. We therefore modify guarantee 4 to not consider these cases:

  8. Guarantee: Every ready signal is either withdrawn or eventually handled: ⋀_{0≤i<n} ¬FG(r_i ∧ ¬g_i)

We continue by adding a priority to the first server. Note that this breaks realisability again, as server 0 can block the others. As an example, we solve this problem by adding the assumption that server 0 works sufficiently long after it obtains a new job before signalling ready again.

  9. Guarantee: Server 0 gets a job whenever a job is given and it is ready: G((⋁_{1≤i<n} g_i) → ¬r_0)

  10. Assumption: After server 0 gets a task, it does not report being ready again until the next incoming job has been reported: G(g_0 → ((¬job ∧ ¬r_0) U (job ∧ ¬r_0)))

10.3 The effect of the two main techniques proposed in this paper

To show the effect of the specification splitting technique and the counter reduction technique proposed in this paper (Sects. 5 and 6, respectively), we also give benchmark results for Unbeast and the load balancing benchmark with these optimisations switched off. Table 2 contains the results with specification splitting turned off, but non-safety counter reduction switched on. Table 3 contains the results of switching the non-safety counter reduction off, but keeping the specification splitting turned on. Finally, for Table 4, both features are switched off. In all cases, the computation times for realisable specifications include the implementation extraction time. It can be seen that the optimisation techniques proposed are most effective in conjunction.

Table 2 Benchmark results for Unbeast and the load balancing benchmark with specification splitting turned off and non-safety counter reduction switched on
Table 3 Benchmark results for Unbeast and the load balancing benchmark with specification splitting turned on and non-safety counter reduction switched off
Table 4 Benchmark results for Unbeast and the load balancing benchmark with specification splitting turned off and non-safety counter reduction switched off

11 Conclusion & outlook

In this paper, we described the steps necessary to make the bounded synthesis approach work well with symbolic data structures such as BDDs. The key requirement was to reduce the number of counters in the safety games that occur in this approach as much as possible. We performed this task by splitting the specification into safety and non-safety parts and presented a counter number reduction technique for the game component corresponding to the non-safety specification conjuncts. We also discussed efficient encodings of the safety part of the specification into games. The experimental results show a huge speed-up compared to previous works.

We only briefly discussed the problem of extracting small implementations for the case that the specification is realisable. Similarly to the observations made in the context of generalised reactivity(1) synthesis, where the expressivity of full LTL is traded against the possibility to use more efficient algorithms for performing the synthesis process, the models produced are often non-optimal [4], i.e., unnecessarily large. Thus, further work will deal with the more effective extraction of winning strategies. While the techniques presented here are already suitable for requirements engineering and prototype extraction, the problem of how to obtain small implementations which can directly be converted to suitable hardware circuits is still open.