1 Introduction

First introduced by Maynard Smith and Price in [27, 28], a central concept emerging from evolutionary game theory is that of an evolutionary stable strategy (ESS) in a symmetric two-player game in strategic form. Each pure strategy of the game is viewed as a type of individual in a population. A mixed strategy of the game then describes the proportion of each type of individual in the population, which, as a simplifying assumption, is taken to be infinite. The population is engaged in a pairwise conflict where two individuals are selected at random and receive payoffs depending on their respective types. The population is expected to evolve in such a way that strategies achieving a higher payoff than others spread in the population. A strategy \(\sigma \) is an ESS if it outperforms any “mutant” strategy \(\tau \ne \sigma \) adopted by a small fraction of the population. Otherwise we say that \(\sigma \) may be invaded. An ESS is in particular a symmetric Nash equilibrium (SNE), but, unlike a SNE, it is not guaranteed to exist.

Fig. 1. Hawk-Dove game (payoffs to the row player: \(-1\) for Hawk against Hawk, 2 for Hawk against Dove, 0 for Dove against Hawk, and 1 for Dove against Dove)

The Hawk-Dove game [28], presented with concrete payoffs in Fig. 1, is a classic example where an ESS may explain the proportion of the population tending to engage in aggressive behavior. The game has a unique SNE \(\sigma \), where the players choose Hawk with probability \(\frac{1}{2}\), and this is in fact an ESS. Note first that \(u(\sigma ,\sigma )= (-1)\left( \frac{1}{2} \right) ^2 + 2\left( \frac{1}{2} \right) ^2 + 0\left( \frac{1}{2} \right) ^2 + 1\left( \frac{1}{2} \right) ^2 = \frac{1}{2}\). Consider now any strategy \(\tau \) that chooses Hawk with probability p. Then \(u(\tau ,\sigma ) = (-1 + 2) p/2 +(1+0)(1-p)/2 = \frac{1}{2}\) as well. However, \(u(\sigma ,\tau )=\frac{3}{2}-2p\) and \(u(\tau ,\tau )=1-2p^2\), and thus \(u(\sigma ,\tau ) - u(\tau ,\tau ) = 2(p-\frac{1}{2})^2\), which means that \(\sigma \) outperforms \(\tau \) if \(p \ne \frac{1}{2}\).
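
The calculation above is easy to verify numerically. The following sketch is our own illustration (the payoff array U encodes the row player's payoffs of Fig. 1); it evaluates the relevant expected payoffs and checks that \(\sigma \) outperforms every \(\tau \) with \(p \ne \frac{1}{2}\).

```python
# Payoffs of the Hawk-Dove game of Fig. 1 for the row player; actions: 0 = Hawk, 1 = Dove.
U = [[-1, 2],
     [0, 1]]

def u(x, y):
    """Expected payoff u(x, y) when the row player mixes x and the column player mixes y."""
    return sum(x[a] * y[b] * U[a][b] for a in range(2) for b in range(2))

sigma = [0.5, 0.5]                         # the unique SNE: Hawk with probability 1/2
for p in [0.0, 0.1, 0.3, 0.5, 0.7, 1.0]:
    tau = [p, 1 - p]
    assert abs(u(tau, sigma) - 0.5) < 1e-12                               # u(tau, sigma) = 1/2
    assert abs(u(sigma, tau) - u(tau, tau) - 2 * (p - 0.5) ** 2) < 1e-12  # gap = 2(p - 1/2)^2
print("sigma = (1/2, 1/2) outperforms every tau with p != 1/2")
```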

While the two-player setting is the typical setting to study ESS, the concept may in a natural way be generalized to the setting of multi-player games, as established by Palm [33] and Broom, Cannings, and Vickers [11]. This allows one to model populations that engage in conflicts involving more than two individuals. Many of the two-player games typically studied in the context of ESS readily generalize to multi-player games, including the Hawk-Dove and Stag Hunt games (cf. [12]). For a naturally occurring example, Broom and Rychtář [12, Example 9.1] argue that the cooperative hunting method of carousel feeding by killer whales may be modeled as a multi-player Stag Hunt game.

The computational complexity of computing an ESS was first studied by Etessami and Lochbihler [19]. We shall denote the problem of deciding whether a given symmetric game in strategic form has an ESS as \(\exists \mathrm {ESS}\) and similarly the problem of deciding whether a given strategy is an ESS of the given game as \(\mathrm {IsESS}\). Previous work has been concerned only with two-player symmetric games in strategic form. Etessami and Lochbihler proved that \(\exists \mathrm {ESS}\) is hard both for \(\mathrm {NP}\) and \(\mathrm {coNP}\) and is contained in \(\mathrm {\Sigma }^\mathrm {p}_2\). Nisan [32] showed that \(\exists \mathrm {ESS}\) is hard for the class \(\mathrm {coDP}\), which is the class of unions of languages from \(\mathrm {NP}\) and \(\mathrm {coNP}\). From both works it also follows that the problem \(\mathrm {IsESS}\) is \(\mathrm {coNP}\)-complete. Finally Conitzer [14] showed \(\mathrm {\Sigma }^\mathrm {p}_2\)-completeness for \(\exists \mathrm {ESS}\). The direct but important consequence of these results is that any algorithm for computing an ESS in a general game can be used to solve \(\mathrm {\Sigma }^\mathrm {p}_2\)-complete problems. For instance, we cannot expect to be able to compute an ESS in a simple way using a SAT solver.

One may observe that the above hardness results for two-player games also generalize to m-player games, for any fixed \(m\ge 3\). Note that, since a reduction showing \(\mathrm {\Sigma }^\mathrm {p}_2\)-hardness must produce an m-player symmetric game, this is not a trivial observation (in particular adding “dummy” players, each having a single strategy, to a nontrivial symmetric game would result in a non-symmetric game). One would however suspect that the problems \(\exists \mathrm {ESS}\) and \(\mathrm {IsESS}\) become significantly harder for m-player games, when \(m \ge 3\). Namely, starting with the work of Schaefer and Štefankovič [36], several works have shown that many natural decision problems concerning Nash equilibrium (NE) in 3-player strategic form games are \(\exists \mathbb {R}\)-complete [4,5,6, 20, 24]. These results stand in contrast to the two-player setting, where the same decision problems are \(\mathrm {NP}\)-complete [15, 21]. The class \(\exists \mathbb {R}\) is the complexity class that captures the decision problem for the existential theory of the reals [36], or alternatively, it is the constant free Boolean part of \(\mathrm {NP}_\mathbb {R}\), the real analogue of \(\mathrm {NP}\) in the Blum-Shub-Smale model of computation [10]. Clearly we have \(\mathrm {NP}\subseteq \exists \mathbb {R}\), and from the decision procedure for the existential theory of the reals by Canny [13] it follows that \(\exists \mathbb {R}\subseteq \mathrm {PSPACE}\). We consider it likely that \(\mathrm {NP}\) is a strict subset of \(\exists \mathbb {R}\), which would mean that the above mentioned decision problems concerning NE become strictly harder as the number of players increases beyond two.

We confirm that the problems \(\exists \mathrm {ESS}\) and \(\mathrm {IsESS}\) indeed are likely to become harder for multi-player games by proving hardness of the problems for discrete complexity classes defined in terms of real complexity classes that we consider likely to be stronger than \(\mathrm {\Sigma }^\mathrm {p}_2\) and \(\mathrm {NP}\). Our results are perhaps most easily stated in terms of the decision problem for the first-order theory of the reals \(\mathrm {Th}(\mathbb {R})\). Just like the class \(\exists \mathbb {R}\) corresponds to the existential fragment \(\mathrm {Th}_\exists (\mathbb {R})\) of \(\mathrm {Th}(\mathbb {R})\), we can consider classes \(\forall \mathbb {R}\) and \(\exists \forall \mathbb {R}\) corresponding to the universal fragment \(\mathrm {Th}_\forall (\mathbb {R})\) and the existential-universal fragment \(\mathrm {Th}_{\exists \forall }(\mathbb {R})\) of \(\mathrm {Th}(\mathbb {R})\), respectively. It is easy to see that the problem \(\exists \mathrm {ESS}\) belongs to \(\exists \forall \mathbb {R}\) and that \(\mathrm {IsESS}\) belongs to \(\forall \mathbb {R}\). We show that for 5-player games, the problem \(\exists \mathrm {ESS}\) is hard for the subclass of \(\exists \forall \mathbb {R}\) where the block of existential quantifiers is restricted to range over Boolean variables. For the problem \(\mathrm {IsESS}\) we completely characterize its complexity for 5-player games by proving that the problem is also hard for \(\forall \mathbb {R}\). Our hardness results thus imply that any algorithm for computing an ESS in a 5-player game can be used to solve quite general problems involving real polynomials. In particular, this indicates that computing an ESS is significantly more difficult than deciding whether a system of real polynomials has no solution, which is a basic problem complete for \(\forall \mathbb {R}\).

Our proof of hardness for \(\exists \mathrm {ESS}\) combines ideas of the \(\mathrm {\Pi }^\mathrm {p}_2\)-completeness proof of the problem MinmaxClique by Ko and Lin [26], the reduction from the complement of MinmaxClique to \(\exists \mathrm {ESS}\) for two-player games by Conitzer [14], and the direct translation of solutions of a polynomial system to strategies of a game by Hansen [24], in addition to new ideas.

We leave the problem of determining the precise computational complexity of \(\exists \mathrm {ESS}\) as an interesting open problem. The class \(\exists \forall \mathbb {R}\) is the natural real complexity class generalization of \(\mathrm {\Sigma }^\mathrm {p}_2\). Together with the \(\mathrm {\Sigma }^\mathrm {p}_2\)-completeness of \(\exists \mathrm {ESS}\) in the setting of two-player games, this might lead one to expect that \(\exists \mathrm {ESS}\) should be \(\exists \forall \mathbb {R}\)-hard for multi-player games. However, a basic property of the set of evolutionary stable strategies is that any ESS is an isolated point in the space of strategies [2, Proposition 3], which means that the set of evolutionary stable strategies is always a discrete set. Expressing \(\exists \mathrm {ESS}\) in \(\mathrm {Th}_{\exists \forall }(\mathbb {R})\), the existential quantifier ranges over all potential ESS and the universal quantifier over potential invading strategies. The fact that the set of ESS is a discrete set could possibly mean that the existential quantifier could be made discrete. We also note that we do not even know whether \(\exists \mathrm {ESS}\) is hard for \(\exists \mathbb {R}\), which is clearly a prerequisite for \(\exists \forall \mathbb {R}\)-hardness.

1.1 Other Related Work

Starting with the universality theorem of Mnëv [30], which in particular implies that deciding whether an arrangement of pseudolines is stretchable is complete for \(\exists \mathbb {R}\), a large number of problems are by now known to be complete for \(\exists \mathbb {R}\). A crucial insight used for the first \(\exists \mathbb {R}\)-completeness result concerning games by Schaefer and Štefankovič [36] was that the \(\exists \mathbb {R}\)-complete \(\textsc {Quad}\) remains complete when asking for a solution of the polynomial system in the unit ball. This was also used by Schaefer [35] to prove that deciding rigidity of linkages is \(\forall \mathbb {R}\)-complete, and similar insights were used by Abrahamsen, Adamaszek, and Miltzow in their proof of \(\exists \mathbb {R}\)-completeness of the classic art gallery problem [1].

So far, far fewer results are known concerning larger fragments of the first-order theory of the reals. Bürgisser and Cucker [10] study decision problems about general semialgebraic sets and show that the problem of deciding whether such a set contains an isolated point is hard for \(\forall \mathbb {R}\) and contained in \(\exists \forall \mathbb {R}\). Dobbins, Kleist, Miltzow, and Rzążewski [18] prove \(\forall \exists \mathbb {R}\)-completeness for certain problems concerned with embedding graphs in the plane. For problems concerning games, Gimbert, Paul, and Srivathsan [22] show that deciding whether a player in a two-player extensive form game with imperfect recall has a behavior strategy with positive payoff is hard both for \(\exists \mathbb {R}\) and \(\forall \mathbb {R}\), while being contained in \(\exists \forall \mathbb {R}\).

2 Preliminaries

2.1 Strategic Form Games

We present here basic definitions concerning strategic form games, mainly to establish our notations. A finite m-player strategic form game \(\mathcal {G}\) is given by finite sets \(S_1,\dots ,S_m\) of actions (pure strategies) together with utility functions \(u_1,\dots ,u_m : S_1 \times \dots \times S_m \rightarrow \mathbb {R}\). A choice of an action \(a_i\in S_i\) for each player together forms a pure strategy profile \(a=(a_1,\dots ,a_m)\). Let \(\varDelta (S_i)\) denote the set of probability distributions on \(S_i\). A (mixed) strategy for player i is then an element \(x_i \in \varDelta (S_i)\). We may conveniently identify an action \(a_i\) with the strategy that assigns probability 1 to \(a_i\). A strategy \(x_i\) for each player i together form a strategy profile \(x=(x_1,\dots ,x_m)\). For fixed i we denote by \(x_{-i}\) the partial strategy profile \((x_1,\dots ,x_{i-1},x_{i+1},\dots ,x_m)\) for all players except player i, and if \(x'_i \in \varDelta (S_i)\) we denote by \((x'_i;x_{-i})\) the strategy profile \((x_1,\dots ,x_{i-1},x'_i,x_{i+1},\dots ,x_m)\). The utility functions extend to strategy profiles by letting \(u_i(x)=\mathrm {E}_{a \sim x}\,u_i(a_1,\dots ,a_m)\). We shall also refer to \(u_i(x)\) as the payoff of player i. A strategy profile x is a Nash equilibrium (NE) if \(u_i(x) \ge u_i(x'_i;x_{-i})\) for all i and all \(x'_i \in \varDelta (S_i)\). Every finite strategic form game \(\mathcal {G}\) has an NE [31].
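
As a small illustration of these definitions, the expected utility \(u_i(x)\) of a mixed strategy profile can be computed by summing over all pure strategy profiles. The sketch below is our own; the payoff is given as a function on pure profiles, which is just one of several possible representations.

```python
import itertools
from math import prod

def expected_utility(u_i, x):
    """E_{a ~ x} u_i(a): expected utility of player i under the mixed profile x.

    u_i maps a pure profile (a_1, ..., a_m) to a real number;
    x is a list of probability vectors, one per player."""
    ranges = (range(len(x_j)) for x_j in x)
    return sum(prod(x[j][a[j]] for j in range(len(x))) * u_i(a)
               for a in itertools.product(*ranges))

# Example: Hawk-Dove payoffs of Fig. 1 for player 1, both players mixing uniformly.
U = [[-1, 2], [0, 1]]
print(expected_utility(lambda a: U[a[0]][a[1]], [[0.5, 0.5], [0.5, 0.5]]))  # 0.5
```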

In this paper we shall only consider symmetric games. The game \(\mathcal {G}\) is symmetric if all players have the same set S of actions and where the utility function of a given player depends only on the action of that player (and not the identity of the player) together with the multiset of actions of the other players. More precisely we say that \(\mathcal {G}\) is symmetric if there is a finite set S such that \(S_i=S\), for every \(i \in [m]\), and such that for every permutation \(\pi \) on [m], every \(i \in [m]\) and every \((a_1,\dots ,a_m) \in S^m\) it holds that \(u_i(a_1,\dots ,a_m)=u_{\pi ^{-1}(i)}(a_{\pi (1)},\dots ,a_{\pi (m)})\). It follows that a symmetric game \(\mathcal {G}\) is fully specified by S and \(u_1\); for notational simplicity we let \(u=u_1\). A strategy profile \(x=(x_1,\dots ,x_m)\) is symmetric if \(x_1=\dots =x_m\). If a symmetric strategy profile x is an NE it is called a symmetric NE (SNE). Every finite strategic form symmetric game \(\mathcal {G}\) has a SNE [31].

A single strategy \(\sigma \in \varDelta (S)\) defines the symmetric strategy profile \(\sigma ^m\). More generally, given \(\sigma , \sigma _1,\dots ,\sigma _r\in \varDelta (S)\) and \(m_1,\dots ,m_r\ge 1\) with \(m_1+\dots +m_r=m-1\), we denote by \((\sigma ; \sigma _1^{m_1},\dots ,\sigma _r^{m_r})\) a strategy profile where player 1 is playing using strategy \(\sigma \) and \(m_i\) of the remaining players are playing using strategy \(\sigma _i\), for \(i=1,\dots ,r\). By the assumptions of symmetry, the payoff \(u(\sigma ;\sigma _1^{m_1},\dots ,\sigma _{r}^{m_r})\) is well defined.

2.2 Evolutionary Stable Strategies

Our main object of study is the notion of evolutionary stable strategies as defined by Maynard Smith and Price [28] for 2-player games and generalized to multi-player games by Palm [33] and Broom, Cannings, and Vickers [11]. We follow below the definition given by Broom et al.

Definition 1

Let \(\mathcal {G}\) be a symmetric game given by S and u. Let \(\sigma ,\tau \in \varDelta (S)\). We say that \(\sigma \) is evolutionary stable (ES) against \(\tau \) if there is \(\varepsilon _{\tau }>0\) such that for all \(0<\varepsilon < \varepsilon _{\tau }\) we have

$$\begin{aligned} u(\sigma ; \tau _\varepsilon ^{m-1}) > u(\tau ; \tau _\varepsilon ^{m-1}) , \end{aligned}$$
(1)

where \(\tau _\varepsilon = \varepsilon \tau + (1-\varepsilon )\sigma \) is the strategy that plays according to \(\tau \) with probability \(\varepsilon \) and according to \(\sigma \) with probability \(1-\varepsilon \). We say that \(\sigma \) is an evolutionary stable strategy (ESS) if \(\sigma \) is ES against every \(\tau \ne \sigma \). If \(\sigma \) is not ES against \(\tau \) we also say that \(\tau \) invades \(\sigma \).
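
Definition 1 can be tested numerically for a single fixed small \(\varepsilon \). The sketch below is our own illustration for a symmetric m-player game whose payoff u is given on pure profiles; since only one value of \(\varepsilon \) is tried, it is a heuristic check of condition (1) rather than a proof.

```python
import itertools
from math import prod

def payoff(u, strategies):
    """Expected payoff of player 1 when player j plays the mixed strategy strategies[j]."""
    return sum(prod(x[a] for x, a in zip(strategies, profile)) * u(profile)
               for profile in itertools.product(*(range(len(x)) for x in strategies)))

def es_against(u, m, sigma, tau, eps=1e-3):
    """Check condition (1) for one fixed eps: u(sigma; tau_eps^{m-1}) > u(tau; tau_eps^{m-1})."""
    tau_eps = [eps * t + (1 - eps) * s for s, t in zip(sigma, tau)]
    others = [tau_eps] * (m - 1)
    return payoff(u, [sigma] + others) > payoff(u, [tau] + others)

# Example: in the Hawk-Dove game of Fig. 1, sigma = (1/2, 1/2) is ES against tau = Hawk.
U = [[-1, 2], [0, 1]]
print(es_against(lambda a: U[a[0]][a[1]], 2, [0.5, 0.5], [1.0, 0.0]))  # True
```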

The supremum over \(\varepsilon _{\tau }\) for which Eq. 1 holds is called the invasion barrier for \(\tau \). If \(\sigma \) is an ESS and there exists \(\varepsilon _\sigma >0\) such that for all \(\tau \ne \sigma \) the invasion barrier \(\varepsilon _\tau \) for \(\tau \) satisfies \(\varepsilon _\tau \ge \varepsilon _\sigma \), we say that \(\sigma \) is an ESS with uniform invasion barrier \(\varepsilon _\sigma \). For 2-player games any ESS has a uniform invasion barrier [25]. Milchtaich [29] gives a simple example of an ESS in a 4-player game without a uniform invasion barrier.

The following simple lemma due to Broom et al. [11] provides a useful alternative characterization of an ESS.

Lemma 1

A strategy \(\sigma \) is ES against \(\tau \) if and only if there exists \(0\le j <m\) such that \(u(\sigma ; \tau ^j, \sigma ^{m-1-j}) > u(\tau ; \tau ^j, \sigma ^{m-1-j})\) and that for all \(0\le i<j\), \(u(\sigma ; \tau ^i, \sigma ^{m-1-i}) = u(\tau ; \tau ^i, \sigma ^{m-1-i})\).

For the case of 2-player games, this alternative characterization is actually the original definition of an ESS given by Maynard Smith and Price [28], and the definition of an ESS we use was stated for the case of 2-player games by Taylor and Jonker [37]. A straightforward corollary of the characterization is that if \(\sigma \) is an ESS then \(\sigma ^m\) is a SNE.

By the support of an ESS \(\sigma \), \({\text {Supp}}(\sigma )\), we refer to the set of pure strategies that are played with non-zero probability under the strategy \(\sigma \).
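
Lemma 1 also suggests a direct way of testing whether \(\sigma \) is ES against a given \(\tau \): compare the payoffs level by level and check that \(\sigma \) is strictly better at the first level where they differ. The sketch below is our own illustration (with the payoff u given on pure profiles and comparisons made up to a numerical tolerance), together with the support of a mixed strategy as just defined.

```python
import itertools
from math import prod

def payoff(u, strategies):
    """Expected payoff of player 1 when player j plays the mixed strategy strategies[j]."""
    return sum(prod(x[a] for x, a in zip(strategies, profile)) * u(profile)
               for profile in itertools.product(*(range(len(x)) for x in strategies)))

def es_against_lemma1(u, m, sigma, tau, tol=1e-12):
    """Characterization of Lemma 1: sigma is ES against tau iff sigma is strictly better
    at the first level j where u(sigma; tau^j, sigma^{m-1-j}) and u(tau; tau^j, sigma^{m-1-j}) differ."""
    for j in range(m):
        others = [tau] * j + [sigma] * (m - 1 - j)
        diff = payoff(u, [sigma] + others) - payoff(u, [tau] + others)
        if diff > tol:
            return True    # sigma strictly better at the first differing level
        if diff < -tol:
            return False   # tau matches sigma up to here and is better now
    return False           # all levels equal: sigma is not ES against tau

def support(sigma, tol=1e-12):
    """Support of a mixed strategy: the pure strategies played with non-zero probability."""
    return {i for i, p in enumerate(sigma) if p > tol}
```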

2.3 Real Computational Complexity

While we are mainly interested in the computational complexity of discrete problems, it is useful to discuss a model of computation operating on real-valued input. We use this to define the complexity class \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\), used to formulate our main result. Alternatively we may simply define this class in terms of a restriction of the decision problem for the first-order theory of the reals, as explained in the next subsection. The reader may thus defer reading this subsection.

A standard model for studying computational complexity in the setting of reals is that of Blum-Shub-Smale (BSS) machines [7]. A BSS machine takes a vector \(x \in \mathbb {R}^n\) as an input and performs arithmetic operations and comparisons at unit cost. In addition the machine may be equipped with a finite set of real-valued machine constants. In this way a BSS machine accepts a real language \(L \subseteq \mathbb {R}^\infty \), where \(\mathbb {R}^\infty = \bigcup _{n\ge 0} \mathbb {R}^n\). Imposing polynomial time bounds we obtain the complexity classes \(\mathrm {P}_\mathbb {R}\) and \(\mathrm {NP}_\mathbb {R}\) for deterministic and nondeterministic BSS machines, respectively, forming real-valued analogues of \(\mathrm {P}\) and \(\mathrm {NP}\). Cucker [16] defined the real analogue \(\mathrm {PH}_\mathbb {R}\) of the polynomial time hierarchy formed by the classes \(\mathrm {\Sigma }^\mathbb {R}_k\) and \(\mathrm {\Pi }^\mathbb {R}_k\), for \(k\ge 1\). The class \(\mathrm {\Sigma }^\mathbb {R}_{k+1}\) may be defined as real languages accepted by a nondeterministic oracle BSS machine in polynomial time using an oracle language from \(\mathrm {\Sigma }^\mathbb {R}_k\) with \(\mathrm {\Sigma }^\mathbb {R}_1=\mathrm {NP}_\mathbb {R}\), and \(\mathrm {\Pi }^\mathbb {R}_k\) is simply the class of complements of languages of \(\mathrm {\Sigma }^\mathbb {R}_k\). For natural problems such as TSP or Knapsack with real-valued input the search space remains discrete. Goode [23] introduced the notion of digital nondeterminism (cf. [17]), restricting nondeterministic guesses to the set \(\{0,1\}\), which when imposing polynomial time bounds defines the class \(\mathrm {DNP}_\mathbb {R}\). One may also define a polynomial hierarchy based on digital nondeterminism, giving rise to classes \(\mathrm {D\Sigma }^\mathbb {R}_k\) and \(\mathrm {D\Pi }^\mathbb {R}_k\), for \(k\ge 1\).

Another convenient way to define the classes described above is by means of complexity class operators (cf. [9, 38]). Here we shall consider existential or universal quantifiers over either real-valued or Boolean variables whose number is bounded by a polynomial. For a real complexity class \(\mathcal {C}\), define \(\exists ^\mathbb {R}\cdot \mathcal {C}\) as the class of real languages L for which there exists \(L' \in \mathcal {C}\) and a polynomial p such that \(x \in L\) if and only if \(\exists y \in \mathbb {R}^{\le p(\left| x \right| )} : \langle x , y \rangle \in L'\). For a real (or discrete) complexity class \(\mathcal {C}\), define \(\exists ^\mathrm {D}\cdot \mathcal {C}\) as the class of real (or discrete) languages L for which there exists \(L' \in \mathcal {C}\) and a polynomial p such that \(x \in L\) if and only if \(\exists y \in \{0,1\}^{\le p(\left| x \right| )} : \langle x , y \rangle \in L'\). Replacing existential quantifiers with universal quantifiers we analogously obtain definitions of classes \(\forall ^\mathbb {R}\cdot \mathcal {C}\) and \(\forall ^\mathrm {D}\cdot \mathcal {C}\). We now have that \(\mathrm {\Sigma }^\mathbb {R}_{k+1}=\exists ^\mathbb {R}\cdot \mathrm {\Pi }^\mathbb {R}_{k}\), \(\mathrm {D\Sigma }^\mathbb {R}_{k+1}=\exists ^\mathrm {D}\cdot \mathrm {D\Pi }^\mathbb {R}_{k}\), as well as \(\mathrm {\Sigma }^\mathrm {p}_{k+1}=\exists ^\mathrm {D}\cdot \mathrm {\Pi }^\mathrm {p}_k\), for \(k\ge 1\). We shall also consider mixing real and discrete operators. In such cases one may not always have an equivalent definition in terms of oracle machines. For instance, while \(\exists ^\mathbb {R}\cdot \mathrm {coDNP}= \mathrm {NP}_\mathbb {R}^{\mathrm {DNP}_\mathbb {R}}\) we can only prove the inclusion \(\exists ^\mathrm {D}\cdot \mathrm {coNP}_\mathbb {R}\subseteq \mathrm {DNP}_\mathbb {R}^{\mathrm {NP}_\mathbb {R}}\) and in particular we do not know if \(\mathrm {NP}_\mathbb {R}\subseteq \exists ^\mathrm {D}\cdot \mathrm {coNP}_\mathbb {R}\).

To study discrete problems we define the Boolean part of a real language \(L \subseteq \mathbb {R}^\infty \) as \({\text {BP}}(L) = L \cap \{0,1\}^*\) and of real complexity classes \(\mathcal {C}\) as \({\text {BP}}(\mathcal {C})=\{{\text {BP}}(L) \mid L \in \mathcal {C}\}\). The Boolean part of a real complexity class is thus a discrete complexity class and may be compared with other discrete complexity classes defined for instance using Turing machines. Furthermore, since we are interested in uniform discrete complexity we shall disallow machine constants. Indeed, a single real number may encode an infinite sequence of discrete advice strings, which for instance implies that \(\mathrm {P}/\mathrm {poly}\subseteq {\text {BP}}(\mathrm {P}_\mathbb {R})\). For a class \(\mathcal {C}\) defined above we denote by \(\mathcal {C}^0\) the analogously defined class without machine constants. Several classes given by Boolean parts of constant free real complexity classes have been defined specifically in the literature. Most prominent is the class \({\text {BP}}(\mathrm {NP}^0_\mathbb {R})\), which also captures the complexity of the existential theory of the reals. It has been named \(\exists \mathbb {R}\) by Schaefer and Štefankovič [36] as well as \(\mathrm {NPR}\) by Bürgisser and Cucker [10]; we shall use the former notation \(\exists \mathbb {R}\). We further let \(\forall \mathbb {R}={\text {BP}}(\mathrm {coNP}^0_\mathbb {R})\) as well as \(\exists \forall \mathbb {R}={\text {BP}}(\mathrm {\Sigma }^{\mathbb {R},0}_2)=\exists ^\mathbb {R}\cdot \forall \mathbb {R}\) and \(\forall \exists \mathbb {R}={\text {BP}}(\mathrm {\Pi }^{\mathbb {R},0}_2)=\forall ^\mathbb {R}\cdot \exists \mathbb {R}\). We shall in particular be interested in the class \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\). Clearly, from the definitions above we have that this class contains both the familiar classes \(\forall \mathbb {R}\) and \(\mathrm {\Sigma }^\mathrm {p}_2\) and is itself contained in \(\exists \forall \mathbb {R}\). In fact \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\) contains the class \((\mathrm {\Sigma }^\mathrm {p}_2)^\text {PosSLP}\), where \(\text {PosSLP}\) is the problem of deciding whether an integer given by a division free arithmetic circuit is positive, as introduced by Allender et al. [3]. This follows since \(\mathrm {P}^\text {PosSLP}={\text {BP}}(\mathrm {P}^0_\mathbb {R})\) [3, Proposition 1.1], and thus

$$ \begin{aligned} (\mathrm {\Sigma }^\mathrm {p}_2)^\text {PosSLP}&= \exists ^\mathrm {D}\cdot \forall ^\mathrm {D}\cdot \mathrm {P}^\text {PosSLP}= \exists ^\mathrm {D}\cdot \forall ^\mathrm {D}\cdot {\text {BP}}(\mathrm {P}^0_\mathbb {R}) \\ {}&\subseteq \exists ^\mathrm {D}\cdot {\text {BP}}( \forall ^\mathbb {R}\cdot \mathrm {P}^0_\mathbb {R}) = \exists ^\mathrm {D}\cdot {\text {BP}}(\mathrm {coNP}^0_\mathbb {R}) = \exists ^\mathrm {D}\cdot \forall \mathbb {R}. \end{aligned} $$

2.4 The First-Order Theory of the Reals

The discrete complexity classes \({\text {BP}}(\mathrm {\Sigma }^{\mathbb {R},0}_k)\) and \({\text {BP}}(\mathrm {\Pi }^{\mathbb {R},0}_k)\) may alternatively be characterized using the decision problem for the first-order theory of the reals. We denote by \(\mathrm {Th}(\mathbb {R})\) the set of all true first-order sentences over the reals. We shall consider the restriction to sentences in prenex normal form

$$\begin{aligned} \left( Q_1 x_1 \in \mathbb {R}^{n_1}\right) \cdots \left( Q_k x_k \in \mathbb {R}^{n_k}\right) \varphi (x_1,\dots ,x_k) , \end{aligned}$$
(2)

where \(\varphi \) is a quantifier free Boolean formula of equalities and inequalities of polynomials with integer coefficients, and where each \(Q_i\) is one of the quantifiers \(\exists \) or \(\forall \), typically alternating, giving rise to k blocks of quantified variables. The restriction of \(\mathrm {Th}(\mathbb {R})\) to formulas in prenex normal form where k is a fixed constant and \(Q_1=\exists \) is complete for \({\text {BP}}(\mathrm {\Sigma }^{\mathbb {R},0}_k)\); when instead \(Q_1=\forall \) it is complete for \({\text {BP}}(\mathrm {\Pi }^{\mathbb {R},0}_k)\). In particular, the existential theory of the reals \(\mathrm {Th}_\exists (\mathbb {R})\), where \(k=1\) and \(Q_1=\exists \), is complete for \(\exists \mathbb {R}\). Similarly \(\mathrm {Th}_{\forall \exists }(\mathbb {R})\) where \(k=2\) and \(Q_1=\forall \) is complete for \(\forall \exists \mathbb {R}\); when we furthermore restrict the first quantifier block to Boolean variables the problem becomes complete for \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\).
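
As an illustration, anticipating Definition 2 below, a typical sentence of this last restricted fragment has the form

$$ \forall y \in \{0,1\}^n \, \exists z \in \mathbb {R}^{m} : F(y,z) = 0 , $$

for a polynomial F with integer coefficients; its negation \(\exists y \in \{0,1\}^n \, \forall z \in \mathbb {R}^{m} : F(y,z) \ne 0\) is then a typical sentence for the complementary class \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\).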

2.5 Real Polynomials with Discrete Quantification

In this section we shall prove that the following problem, \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\), is complete for the complexity class \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\). In Sect. 3 we use the complement of this problem to prove our main result of \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\)-hardness of \(\exists \mathrm {ESS}\).

Denote by \(\varDelta ^n \subseteq \mathbb {R}^{n+1}\) the n-simplex \(\{x \in \mathbb {R}^{n+1} \mid x\ge 0 \wedge \sum _{i=1}^{n+1} x_i = 1\}\) and similarly by \(\varDelta _\mathrm {c}^n \subseteq \mathbb {R}^n\) the corner n-simplex \(\{x \in \mathbb {R}^n \mid x\ge 0 \wedge \sum _{i=1}^n x_i \le 1\}\).

Definition 2

(\(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\)). For the problem \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\) we are given as input rational coefficients \(a_{i,\alpha }\), where \(i \in \{0,\dots ,n\}\) and \(\alpha \in [m]^4\) forming the polynomial

$$ F(y,z) = F_0(z) + \sum _{i=1}^n y_i F_i(z) , $$

where

$$ F_i(z) = \sum _{\alpha \in [m]^4} a_{i,\alpha } \prod _{j=1}^4 z_{\alpha _j} \text { , for } i=0,\dots ,n . $$

We are to decide whether for all \(y \in \{0,1\}^n\) there exists \(z \in \varDelta ^{m-1}\) such that \(F(y,z)=0\).
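
For concreteness, the polynomial of Definition 2 is easy to evaluate once the coefficients are given. The sketch below is our own illustration, storing the coefficients \(a_{i,\alpha }\) in a dictionary keyed by pairs \((i,\alpha )\); this representation is not part of the definition.

```python
from math import prod

def evaluate_F(a, n, y, z):
    """Evaluate F(y, z) = F_0(z) + sum_{i=1}^n y_i F_i(z) of Definition 2.

    a maps pairs (i, alpha), with i in {0, ..., n} and alpha a 4-tuple over
    {1, ..., m}, to coefficients a_{i,alpha} (missing entries are zero);
    z is a dictionary so that z[j] is the j-th coordinate, indexed from 1."""
    def F_i(i):
        return sum(c * prod(z[j] for j in alpha)
                   for (k, alpha), c in a.items() if k == i)
    return F_i(0) + sum(y[i - 1] * F_i(i) for i in range(1, n + 1))

# Tiny example with n = 1 and m = 2: F(y, z) = z_1^4 + y_1 * z_1^2 * z_2^2.
a = {(0, (1, 1, 1, 1)): 1, (1, (1, 1, 2, 2)): 1}
print(evaluate_F(a, 1, [1], {1: 0.5, 2: 0.5}))  # 0.0625 + 0.0625 = 0.125
```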

The proof of \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\)-hardness of \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\) given below is mainly a combination of existing ideas and proofs, and the reader may thus defer reading it.

Theorem 1

The problem \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\) is complete for \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\), and remains \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\)-hard even with the promise that for all \(y \in \{0,1\}^n\) and \(z \in \mathbb {R}^m\) it holds that \(F(y,z)\ge 0\).

Proof

We shall prove hardness of \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\) by describing a general reduction from a language L in \(\forall ^\mathrm {D}\cdot \exists \mathbb {R}\) in several steps, making use of reductions that prove several problems involving real polynomials \(\exists \mathbb {R}\)-hard. Consider first the standard complete problem \(\textsc {Quad}\) for \(\exists \mathbb {R}\), which is that of deciding if a system of multivariate quadratic polynomials has a common root [8, 36]. The general reduction from a language L in \(\exists \mathbb {R}\) to \(\textsc {Quad}\) works by treating the input x as variables and computing, based only on \(\left| x \right| \) and not the actual value of x, a system of quadratic polynomials \(q_i(x,y)\), \(i=1,\dots ,\ell \), where \(y \in \mathbb {R}^{p(\left| x \right| )}\) for some polynomial p. The system has the property that for all x it holds that \(x \in L\) if and only if there exists y such that \(q_i(x,y)=0\), for all i.

Suppose now that \(L \in \forall ^\mathrm {D}\cdot \exists \mathbb {R}\). Then there is \(L'\) in \(\exists \mathbb {R}\) and a polynomial p such that \(x \in L\) if and only if \(\forall y \in \{0,1\}^{p(\left| x \right| )} : \langle x , y \rangle \in L'\). On input x we may apply the reduction from \(L'\) to \(\textsc {Quad}\) and in this way obtain a system of quadratic equations \(q_i(x,y,z)\), \(i=1,\dots ,\ell _1\) where \(z \in \mathbb {R}^{p_1(\left| x \right| )}\) such that \(\langle x , y \rangle \in L'\) if and only if there exists \(z \in \mathbb {R}^{p_1(\left| x \right| )}\) such that \(q_i(x,y,z)=0\) for all i. At this point we may just treat x as fixed constants, and we view the system as polynomials in variables (yz), suppressing the dependence on x in the notation. Define \(n=p(\left| x \right| )\). We next introduce additional existentially quantified variables \(w \in \mathbb {R}^n\), substitute \(w_i\) for \(y_i\) in all polynomials, and then add new polynomials \(w_i-y_i\), for \(i \in [n]\). Renaming polynomials and bundling the existentially quantified variables we now have a system of polynomials \(q_i(y,z)\), \(i\in [\ell _2]\) where \(z \in \mathbb {R}^{m_2}\), where \(m_2 \le p_2(\left| x \right| )\) for some polynomial \(p_2\), such that \(x \in L\) if and only if

$$ \forall y \in \{0,1\}^n \exists z \in \mathbb {R}^{m_2} \forall i \in [\ell _2] : q_i(y,z)=0 , $$

and where each polynomial \(q_i\) depends on at most 1 coordinate of y.

For the next step we use that \(\textsc {Quad}\) remains \(\exists \mathbb {R}\)-hard when asking for a solution in the unit ball [34], or analogously in the corner simplex [24]. Applying the reduction of [24, Proposition 2] we first rewrite each variable \(z_i\) as a difference \(z_i=z^+_i-z^-_i\) of two non-negative real variables \(z^+_i\) and \(z^-_i\) and then introduce additional existentially quantified variables \(w_0,\dots ,w_t\) for suitable \(t=O(\log \tau +m_2)\), where \(\tau \) is the maximum bitlength of the coefficients of the given system. Then polynomials are added that together implement t steps of repeated squaring of \(\frac{1}{2}\), i.e. we add polynomials \(w_t-\frac{1}{2}\), and \(w_{j-1}-w_j^2\), for \(j \in [t]\), which means that any solution must then have \(w_0=2^{-2^t}\). In the given polynomial system we now substitute \(z_i\) by \((z^+_i-z^-_i)/w_0\) in each of the polynomials and then multiply them by \(w_0^2\) to clear \(w_0\) from the denominators. For suitable t this means that if for fixed y, the given system of polynomials has a solution \(z \in \mathbb {R}^{m_2}\), then the transformed system has a solution \((z^+,z^-,w)\) in \(\varDelta _\mathrm {c}^{2m_2+t+1}\). Note also, that since the variables \(y_i\) are not divided by \(w_0\), multiplying by \(w_0^2\) causes an increase in the degree of the polynomials, but the degree in the other variables remains at most 2. Again, renaming polynomials and bundling the existentially quantified variables we now have a system of polynomials \(q_i(y,z)\), \(i\in [\ell _3]\) where \(z \in \mathbb {R}^{m_3}\), where \(m_3 \le p_3(\left| x \right| )\) for some polynomial \(p_3\), such that \(x \in L\) if and only if

$$ \forall y \in \{0,1\}^n \exists z \in \varDelta _\mathrm {c}^{m_3} \forall i \in [\ell _3] : q_i(y,z)=0 , $$

and where each polynomial \(q_i\) depends on at most 1 coordinate of y.
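
The effect of the repeated squaring constraints is easy to illustrate: the polynomials \(w_t-\frac{1}{2}\) and \(w_{j-1}-w_j^2\) force \(w_0=2^{-2^t}\) in any solution, so that a doubly exponentially small scaling factor becomes available using only t additional variables. A small sketch of ours computing this forced value exactly:

```python
from fractions import Fraction

def forced_w0(t):
    """The value of w_0 forced by the constraints w_t = 1/2 and w_{j-1} = w_j^2, j in [t]."""
    w = Fraction(1, 2)
    for _ in range(t):
        w = w * w            # one step of repeated squaring
    return w

t = 5
assert forced_w0(t) == Fraction(1, 2 ** (2 ** t))
print(forced_w0(t))          # 1/4294967296, i.e. 2^(-2^5)
```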

The next step simply consists of homogenizing the polynomials in the existentially quantified variables z. For this we simply introduce a slack variable \(z_{m_3+1}=1-\sum _{i=1}^{m_3}z_i\) and homogenize by multiplying terms by \(\sum _{i=1}^{m_3+1}z_i\) or \(\sum _{i=1}^{m_3+1}\sum _{j=1}^{m_3+1}z_iz_j\) as needed. Letting \(q'_i\) be the homogenization of \(q_i\) we now have that \(x \in L\) if and only if

$$ \forall y \in \{0,1\}^n \exists z \in \varDelta ^{m_3} \forall i \in [\ell _3] : q'_i(y,z)=0 , $$

and where each polynomial \(q'_i\) depends on at most 1 coordinate of y and is homogeneous of degree 2 in the variables z.

For the final step we reuse the idea of the reduction from \(\textsc {Quad}\) to \(\textsc {4Feas}\), which merely takes the sum of the squares of every given polynomial. Thus we let

$$ F(y,z)=\sum _{i=1}^{\ell _3} (q'_i(y,z))^2 . $$

We note that \((q'_i(y,z))^2\ge 0\) for all y and z and is homogeneous of degree 4 in the variables z. Further, since \(y_j^2=y_j\) for any \(y_j\in \{0,1\}\) we may replace all occurrences of \(y_j^2\) by \(y_j\), thereby obtaining an equivalent polynomial (when \(y\in \{0,1\}^n\)) of the form of Definition 2. We have for every fixed \(y\in \{0,1\}^n\) and all \(z\in \mathbb {R}^m\) that \(F(y,z)=0\) if and only if \(q'_i(y,z)=0\) for all i. Thus \(x \in L\) if and only if

$$ \forall y \in \{0,1\}^n \exists z \in \varDelta ^{m_3} F(y,z)=0 , $$

which completes the proof of hardness. Let us also note that the definition of F guarantees that \(F(y,z)\ge 0\) for all \(y \in \{0,1\}^n\) and \(z \in \mathbb {R}^m\). Since on the other hand clearly \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta ) \in \forall ^\mathrm {D}\cdot \exists \mathbb {R}\), the result follows.    \(\square \)

As a special case (when there are no universally quantified variables), the proof gives a reduction from the \(\exists \mathbb {R}\)-complete problem \(\textsc {Quad}\) to the problem \(\textsc {Hom4Feas}(\varDelta )\), where we are given as input a homogeneous degree 4 polynomial F(z) in m variables with rational coefficients and are to decide whether there exists \(z \in \varDelta ^{m-1}\) such that \(F(z)=0\). Also, we clearly have that \(\textsc {Hom4Feas}(\varDelta )\) is a member of \(\exists \mathbb {R}\), and we therefore have the following result.

Theorem 2

The problem \(\textsc {Hom4Feas}(\varDelta )\) is complete for \(\exists \mathbb {R}\), and remains \(\exists \mathbb {R}\)-hard even when assuming that for all \(z \in \mathbb {R}^m\) it holds that \(F(z)\ge 0\).

3 Complexity of ESS

In this section we shall prove our results for deciding existence of an ESS. In the proof we will re-use a trick used by Conitzer [14] for the case of 2-player games, where by duplicating a subset of the actions of a game we ensure that no ESS can be supported by any of the duplicated actions, as shown in the following lemma. Here, by duplicating an action we mean that the utilities assigned to any pure strategy profile involving the duplicated action are defined to be equal to the utilities for the pure strategy profile obtained by replacing occurrences of the duplicated action by the original action. The precise property is as follows.

Lemma 2

Let \(\mathcal {G}\) be an m-player symmetric game given by S and u. Suppose that \(s,s' \in S\) are such that for all strategies \(\tau \) we have \(u(s;\tau ^{m-1})=u(s';\tau ^{m-1})\). Then s cannot be in the support of an ESS \(\sigma \).

Proof

Suppose \(\sigma \) is a strategy with \(s \in {\text {Supp}}(\sigma )\). Let \(\sigma '\) be obtained from \(\sigma \) by moving the probability mass of s to \(s'\). From our assumption we then have \(u(\sigma ;\tau ^{m-1})=u(\sigma ';\tau ^{m-1})\) for all \(\tau \). In particular we have \(u(\sigma ;\sigma _\varepsilon ^{m-1})=u(\sigma ';\sigma _\varepsilon ^{m-1})\), for all \(\varepsilon >0\), where we have \(\sigma _\varepsilon =\varepsilon \sigma '+(1-\varepsilon )\sigma \). This means that \(\sigma '\) invades \(\sigma \) and \(\sigma \) is therefore not an ESS.    \(\square \)
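
The duplication of actions used below is straightforward to carry out on a concrete representation of a game. The following sketch is our own illustration: given player 1's utility function u on pure profiles of a symmetric game, it defines a new utility function in which an added action behaves exactly like an existing one, so that the hypothesis of Lemma 2 holds for the pair by construction.

```python
def duplicate_action(u, s, s_dup):
    """Utility function of the game in which the new action s_dup duplicates action s.

    u(a_1, ..., a_m) is player 1's utility in the original symmetric game; every
    occurrence of s_dup is treated exactly as an occurrence of s, so in particular
    the duplicated action satisfies the hypothesis of Lemma 2 with respect to s."""
    def u_dup(profile):
        return u(tuple(s if a == s_dup else a for a in profile))
    return u_dup

# Example: duplicate action 1 of the Hawk-Dove game of Fig. 1 as a new action 2.
U = [[-1, 2], [0, 1]]
u2 = duplicate_action(lambda a: U[a[0]][a[1]], 1, 2)
print(all(u2((2, b)) == u2((1, b)) and u2((b, 2)) == u2((b, 1)) for b in range(3)))  # True
```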

We now state and prove the main result of this paper.

Theorem 3

\(\exists \mathrm {ESS}\) is \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\)-hard for 5-player games.

Proof

We prove our result by giving a reduction from the complement of the problem \(\forall ^\mathrm {D}\textsc {Hom4Feas}(\varDelta )\) to \(\exists \mathrm {ESS}\). It follows from Theorem 1 that the former problem is complete for \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\). Thus let \(a_{i,\alpha }\) be given rational coefficients, with \(i=0,\dots ,n\) and \(\alpha \in [m]^4\), forming the polynomials \(F(y,z)\) and \(F_i(z)\), for \(i=0,\dots ,n\), as in Definition 2. We may assume that for all \(y\in \{0,1\}^n\) and all \(z\in \mathbb {R}^m\) it holds that \(F(y,z)\ge 0\). Also, without loss of generality we may assume that each \(F_i\) is symmetrized, i.e. that for all i and \(\alpha \), if \(\pi \) is a permutation on [4], then defining \(\pi \cdot \alpha \in [m]^4\) by \((\pi \cdot \alpha )_j=\alpha _{\pi (j)}\), we have that \(a_{i,\alpha }=a_{i,\pi \cdot \alpha }\). Namely, we may simply replace each coefficient \(a_{i,\alpha }\) by the average of all coefficients of the form \(a_{i,\pi \cdot \alpha }\). This leaves the functions given by the expressions for \(F_i\) unchanged, but crucially ensures that the game defined below is symmetric.

We next define a 5-player game \(\mathcal {G}\) based on F. The strategy set is naturally divided in three parts \(S = S_1 \cup S_2 \cup S_3\). These are defined as follows.

$$\begin{aligned} \begin{aligned} S_1&= \{(i,\alpha ,b) \mid i \in \{0,\dots ,n\},~\alpha \in [m]^4,~b \in \{0,1\}\}\\ S_2&= \{\gamma \}\\ S_3&= \{1,\dots ,m\} \end{aligned} \end{aligned}$$
(3)

An action \((i,\alpha ,b)\) of \(S_1\) thus identifies a term of \(F_i\) together with \(b \in \{0,1\}\), which is supposed to be equal to \(y_i\). When convenient we may describe the actions of \(S_1\) by pairs \((t,b)\), where \(t=(i,\alpha )\) for some i and \(\alpha \). The single action \(\gamma \) is used for rewarding inconsistencies in the choices of b among strategies of \(S_1\). Finally, a probability distribution on \(S_3\) will define an input z. Let \(M=(n+1)m^4\) be the total number of terms of F. Thus \(\left| S_1 \right| =2M\).

We shall duplicate all actions of \(S_2 \cup S_3\) and let duplicates behave exactly the same regarding the utility function defined below. By Lemma 2 it then follows that any ESS \(\sigma \) of \(\mathcal {G}\) must have \({\text {Supp}}(\sigma ) \subseteq S_1\). For simplicity we describe the utilities of \(\mathcal {G}\) without the duplicated actions.

When all players are playing an action of \(S_1\) we define

$$\begin{aligned} u((t_1,b_1),\dots ,(t_5,b_5)) = {\left\{ \begin{array}{ll} 2 &{} \text {if } t_1 \notin \{t_2,\dots ,t_5\}\\ 1 &{} \text {if } t_1 \in \{t_2,\dots ,t_5\} \text { and } t_1 = t_j \Rightarrow b_1 = b_j\\ 0 &{} \text {otherwise} \end{array}\right. } . \end{aligned}$$
(4)
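
For reference, this part of the utility function is simple to write out explicitly. The sketch below is our own illustration, with actions of \(S_1\) represented as pairs (t, b) where t identifies a term of F; it implements exactly the three cases of (4) for player 1.

```python
def u_S1(actions):
    """Utility (4) of player 1 when all five players play actions of S_1.

    Each action is a pair (t, b) with t a term identifier and b a bit."""
    (t1, b1), rest = actions[0], actions[1:]
    if all(t != t1 for t, _ in rest):
        return 2                                    # t_1 not chosen by any other player
    if all(b == b1 for t, b in rest if t == t1):
        return 1                                    # t_1 repeated, but with a consistent bit
    return 0                                        # t_1 repeated with an inconsistent bit

print(u_S1([("s", 0), ("t", 0), ("u", 1), ("v", 0), ("w", 1)]))  # 2
print(u_S1([("t", 0), ("t", 0), ("u", 1), ("v", 0), ("w", 1)]))  # 1
print(u_S1([("t", 0), ("t", 1), ("u", 1), ("v", 0), ("w", 1)]))  # 0
```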

Before defining the remaining utilities, we consider the payoff of strategies that play uniformly on the set of terms and according to a fixed assignment y. Define the number T by

$$\begin{aligned} T = 2-\frac{4}{M}+\frac{6}{M^2}-\frac{4}{M^3}+\frac{1}{M^4} . \end{aligned}$$
(5)
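
Note that the right hand side of (5) is just a binomial expansion, so that equivalently

$$ T = 1 + \left( 1-\frac{1}{M} \right) ^4 . $$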

Lemma 3

Let \(y \in \{0,1\}^n\), let \(y_0\in \{0,1\}\) be arbitrary, and define \(\sigma _y\) to be the strategy that plays \((i,\alpha ,y_i)\) with probability \(\frac{1}{M}\) for every \(i \in \{0,\dots ,n\}\) and \(\alpha \in [m]^4\), and the remaining actions with probability 0. Then \(u(\sigma _y^5)=T\).

Proof

Note that \(2-u(\sigma _y^5)\) is precisely the probability of the union of the events \(t_1=t_j\), where \(j=2,\dots ,5\), and \(t_j\) is the term chosen by player j. For fixed \(t_1\), these events are independent and each occurs with probability \(\frac{1}{M}\). By the principle of inclusion-exclusion we thus have

$$ 2-u(\sigma _y^5) = \frac{4}{M}-\frac{6}{M^2}+\frac{4}{M^3}-\frac{1}{M^4} . $$

We will construct the game \(\mathcal {G}\) in such a way that any ESS \(\sigma \) will have \(u(\sigma ^5)=T\). Making use of Lemma 3, we now define utilities when at least one player is playing the action \(\gamma \). In case at least two players are playing \(\gamma \), these players receive utility 0 while the remaining players receive utility T. In case exactly one player is playing \(\gamma \), the player receives utility \(T+1\) in case there are two players that play actions \((i,\alpha ,b)\) and \((i,\alpha ',b')\) with \(b\ne b'\); otherwise the player receives utility T. In either case, when exactly one player is playing \(\gamma \), the remaining players receive utility T.

We finally define utilities when one player is playing an action from \(S_1\) and the remaining four players are playing an action from \(S_3\). Suppose for simplicity of notation that player j is playing action \(\beta _j \in S_3\), for \(j=1,\dots ,4\), while player 5 is playing action \((i,\alpha ,b)\). We let player 5 receive utility T. In case \(\alpha =\beta \) and furthermore \(i=0\) or \(b=1\) (so that the term is active under the assignment indicated by b), the first four players receive utility \(T-a_{i,\alpha }\); otherwise they receive utility T. Here we use that \(a_{i,\alpha }=a_{i,\pi \cdot \alpha }\) for any permutation \(\pi \) on [4], to ensure that \(\mathcal {G}\) is symmetric.

At this point we have only partially specified the utilities of the game \(\mathcal {G}\); we simply let all remaining unspecified utilities equal T, thereby completing the definition of \(\mathcal {G}\).

We are now ready to prove that \(\mathcal {G}\) has an ESS if and only if there exists \(y\in \{0,1\}^n\) such that \(F(y,z)>0\) for all \(z \in \varDelta ^{m-1}\). Suppose first that \(y \in \{0,1\}^n\) exists such that \(F(y,z)>0\) for all \(z \in \varDelta ^{m-1}\). We define \(\sigma =\sigma _y\) as in Lemma 3 and show that \(\sigma \) is ES against any \(\tau \ne \sigma \) in the sense of Lemma 1, thereby proving that \(\sigma \) is an ESS of \(\mathcal {G}\). Suppose, towards a contradiction, that some \(\tau \ne \sigma \) invades \(\sigma \). Consider first playing \(\tau \) against \(\sigma ^4\). From the proof of Lemma 3 it follows that playing a strategy of the form \((i,\alpha ,b)\) against \(\sigma ^4\) gives payoff T if \(b=y_i\) and otherwise payoff strictly below T. The strategies of \(S_2 \cup S_3\) all give payoff T against \(\sigma ^4\). It follows that to invade \(\sigma \), the only actions from \(S_1\) that \(\tau \) can play are those contained in \({\text {Supp}}(\sigma )\). Let us write \(\tau = \delta _1\tau _1+\delta _2\tau _2+\delta _3\tau _3\) as a convex combination of strategies \(\tau _j\) with \({\text {Supp}}(\tau _j)\subseteq S_j\), for \(j=1,2,3\). We shall consider playing \(\tau \) against \((\tau ,\sigma ^3)\) and argue that \(\tau _1=\sigma \) if \(\delta _1>0\) and that \(\delta _2=0\). Note first that if some player plays an action of \(S_3\), all players receive utility T, so we may focus on the case when all players play using strategies from \(S_1 \cup S_2\). Suppose that \(\delta _1>0\) and let \(p_{t} = \Pr _{\tau _1}[t]\), where t is a term of F. Using the principle of inclusion-exclusion we have

$$ \begin{aligned} 2-u(\tau _1;\tau _1,\sigma ^3)&= \sum _t p_t\left[ \frac{3}{M}-\frac{3}{M^2}+\frac{1}{M^3} + p_t\left( 1-\frac{3}{M}+\frac{3}{M^2}-\frac{1}{M^3}\right) \right] \\&= \frac{3}{M}-\frac{3}{M^2}+\frac{1}{M^3} + \left( 1-\frac{3}{M}+\frac{3}{M^2}-\frac{1}{M^3}\right) \sum _t p_t^2 . \end{aligned} $$

By Jensen’s inequality, \(\sum _t p_t^2 \ge M\left( \sum _t p_t/M\right) ^2 = \frac{1}{M}\), with equality if and only if \(p_t=\frac{1}{M}\) for all t. This means that \(u(\tau _1;\tau _1,\sigma ^3) \le T\), with equality if and only if \(p_t=\frac{1}{M}\) for all t. Thus if \(\tau _1\ne \sigma \), then \(u(\tau _1;\tau _1,\sigma ^3)<u(\sigma ;\tau _1,\sigma ^3)=T\), where the last equality may be derived again using the principle of inclusion-exclusion. Now, since \({\text {Supp}}(\tau _1)\subseteq {\text {Supp}}(\sigma )\) when \(\delta _1>0\), playing \(\gamma \) can give utility at most T but gives utility 0 in case another player plays \(\gamma \) as well.

Combining these observations it follows that unless \(\delta _2=0\) and \(\tau _1=\sigma \) when \(\delta _1>0\) we have \(u(\sigma ;\tau ,\sigma ^3)>u(\tau ;\tau ,\sigma ^3)\). Thus we may now assume that this is the case, i.e., that \(\tau =\delta _1 \sigma + \delta _3 \tau _3\). From the definition of \(\mathcal {G}\) we now have that \(u(\tau ;\tau ^j,\sigma ^{4-j}) \le u(\sigma ;\tau ^j,\sigma ^{4-j})= T\), for \(j=1,2,3\). For \(\tau \) to invade \(\sigma \) it is thus required that \(u(\tau ; \tau ^4) \ge u(\sigma ; \tau ^4)\), and it follows from the definition of \(\mathcal {G}\) that this is equivalent to \(u(\tau _3; \tau _3^3,\sigma )\ge T\). Now \(\tau _3 \in \varDelta (S_3) = \varDelta ^{m-1}\) and by assumption we have \(F(y,\tau _3)>0\). Furthermore we have \(u(\tau _3; \tau _3^3,\sigma ) = T - F(y,\tau _3)/M\) and thus \(u(\tau _3; \tau _3^3,\sigma ) < T\), which means that \(\sigma \) is in fact ES against \(\tau \), a contradiction.

Suppose now on the other hand that \(\sigma \) is an ESS of \(\mathcal {G}\). First, since we duplicated the actions of \(S_2 \cup S_3\), it follows from Lemma 2 that \({\text {Supp}}(\sigma )\subseteq S_1\). We next show that if \(\sigma (t,0)>0\) and \(\sigma (t,1)>0\) for some term t, then \(\sigma \) can be invaded. Suppose thus that t is a term of F, let \(p_0=\sigma (t,0)\) and \(p_1=\sigma (t,1)\), and suppose that \(p_0>0\) and \(p_1>0\). Suppose without loss of generality that \(p_0\ge p_1\). Note now that

$$ u((t,0);\sigma ^4) - u((t,1);\sigma ^4) = (1-p_1)^4-(1-p_0)^4 \ge 0 , $$

which can be seen by noting that the left hand side of the equality does not change when replacing all utilities of 2 by 1. Similarly

$$ u((t,0);(t,0),\sigma ^3) - u((t,1);(t,1),\sigma ^3) = (1-p_1)^3-(1-p_0)^3 \ge 0 . $$

Define the strategy \(\sigma '\) from \(\sigma \) by playing the strategy (t, 0) with probability \(p=p_0+p_1\), the strategy (t, 1) with probability 0, and otherwise according to \(\sigma \). Then

$$ \begin{aligned} u(\sigma ';\sigma ^4)-u(\sigma ^5)&= (p_0+p_1)u((t,0);\sigma ^4)-p_0u((t,0);\sigma ^4)-p_1u((t,1);\sigma ^4) \\&= p_1(u((t,0);\sigma ^4) - u((t,1);\sigma ^4)) \ge 0 , \end{aligned} $$

and by definition, \(u((t,0);(t,1), \sigma ^3) = u((t,1);(t,0), \sigma ^3) = 0\). Considering the contributions to the two payoffs from profiles in which the second player plays the term t (the remaining contributions to the difference are non-negative, by the same argument as above), we thus have

$$ \begin{aligned} u&(\sigma ';\sigma ', \sigma ^3) - u(\sigma ;\sigma ', \sigma ^3) \\&\ge (p_0+p_1)^2u((t,0);(t,0), \sigma ^3) - p_0(p_0+p_1)u((t,0);(t,0), \sigma ^3) \\&\qquad - p_1(p_0+p_1)u((t,1);(t,0), \sigma ^3)\\&=(p_0p_1+p_1^2)u((t,0);(t,0), \sigma ^3) > 0 , \end{aligned} $$

which means that \(\sigma '\) invades \(\sigma \). Since \(\sigma \) is an ESS, this means that for each term t there is \(b_t\in \{0,1\}\) such that \(\sigma \) plays \((t,1-b_t)\) with probability 0. Let \(p_t=\sigma (t,b_t)\) and define the function \(h : \mathbb {R}\rightarrow \mathbb {R}\) by \(h(p) = 4p-6p^2+4p^3-p^4\), and note that \(\frac{\mathrm{d}}{\mathrm{d}p}h(p)=4(1-p)^3\). By the principle of inclusion-exclusion we have

$$ 2-u(\sigma ^5) = \sum _t p_t h(p_t) . $$

Suppose now that there exist terms t and \(t'\) such that \(p_t < p_{t'}\). Since h is strictly increasing on [0, 1] we also have \(h(p_t) < h(p_{t'})\), and therefore \(p_t h(p_t) + p_{t'} h(p_{t'}) > p_{t'} h(p_t) + p_t h(p_{t'})\). Define \(\sigma '\) to play \((t,b_t)\) with probability \(p_{t'}\), \((t',b_{t'})\) with probability \(p_t\), and otherwise according to \(\sigma \). We then have

$$ (2-u(\sigma ^5)) - (2-u(\sigma ';\sigma ^4)) = p_t(h(p_t)-h(p_{t'}))+p_{t'}(h(p_{t'})-h(p_t)) > 0 , $$

and therefore \(u(\sigma ';\sigma ^4)>u(\sigma ^5)\), which means that \(\sigma '\) invades \(\sigma \). Since \(\sigma \) is an ESS this means that \(p_t=\frac{1}{M}\) for all t. From the proof of Lemma 3 it then follows that \(u(\sigma ^5) = T\).

Suppose now that there exist \(i \in \{0,\dots ,n\}\) and \(\alpha ,\alpha '\) such that \(b_{(i,\alpha )} \ne b_{(i,\alpha ')}\). But then \(u(\gamma ;\sigma ^4)>T = u(\sigma ^5)\), which means that \(\gamma \) invades \(\sigma \). Since \(\sigma \) is an ESS there must exist \(y\in \{0,1\}^n\) (and some \(y_0\in \{0,1\}\)) such that \(\sigma =\sigma _y\), using the notation of Lemma 3.

Finally, let \(z \in \varDelta ^{m-1}=\varDelta (S_3)\). By definition of u we have \(u(z;z^j,\sigma ^{4-j})=T=u(\sigma ;z^j,\sigma ^{4-j})\), for all \(j\in \{0,1,2\}\). Next \(u(z;z^3,\sigma )=T-F(y,z)/M\) while we have \(u(\sigma ;z^3,\sigma )=T\). For \(\sigma \) to be ES against z we must thus have \(F(y,z)>0\), and this concludes the proof.    \(\square \)

The best upper bound on the complexity of \(\exists \mathrm {ESS}\) we know is membership of \(\exists \forall \mathbb {R}\), which easily follows from the definitions. For the simpler problem \(\mathrm {IsESS}\) of determining whether a given strategy is an ESS we can fully characterize its complexity.

Theorem 4

\(\mathrm {IsESS}\) is \(\forall \mathbb {R}\)-complete for 5-player games.

Proof

Clearly \(\mathrm {IsESS}\) belongs to \(\forall \mathbb {R}\). To show \(\forall \mathbb {R}\)-hardness we reduce from the complement of the problem \(\textsc {Hom4Feas}(\varDelta )\) to \(\mathrm {IsESS}\). It follows from Theorem 2 that the former problem is complete for \(\forall \mathbb {R}\). From F we construct the game \(\mathcal {G}\) as in the proof of Theorem 3 letting \(n=0\). We let \(\sigma \) be the uniform distribution on the set of actions \((0,\alpha ,0)\), where \(\alpha \in [m]^4\). It then follows from the proof of Theorem 3 that \(\sigma \) is an ESS of \(\mathcal {G}\) if and only if \(F(z)>0\) for all \(z \in \varDelta ^{m-1}\). Since we may assume that \(F(z)\ge 0\) for all \(z \in \mathbb {R}^m\) this completes the proof.    \(\square \)

4 Conclusion

We have shown the problem \(\exists \mathrm {ESS}\) to be hard for \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\) and to be a member of \(\exists \forall \mathbb {R}\). The main open problem is to characterize the precise complexity of \(\exists \mathrm {ESS}\), perhaps by improving the upper bound. Another point is that our hardness proofs construct 5-player games, whereas the recent and related \(\exists \mathbb {R}\)-completeness results for decision problems about NE in multi-player games hold already for 3-player games. This leads to the question of the complexity of \(\exists \mathrm {ESS}\) and \(\mathrm {IsESS}\) in 3-player and 4-player games. The reason that we end up with 5-player games is that we construct a degree 4 polynomial in the reduction, rather than (a system of) degree 2 polynomials as used in the related \(\exists \mathbb {R}\)-completeness results. In both cases a number of players equal to the degree is used to simulate evaluation of a monomial and a last player is used to select the monomial. For our proof we critically use that the degree 4 polynomial involved in the reduction may be assumed to be non-negative.