Abstract
An evolutionary game is usually identified by a smooth (possibly nonlinear) payoff function. We propose a model of evolutionary game in which, for any two different pure strategies, the nonlinear payoff of the more frequent pure strategy is uniformly better (or uniformly worse) than that of the other. In order to observe an evolutionary bifurcation diagram, we also control the nonlinear payoff function in two different regimes: positive and negative. One of the interesting features of the model is that if we switch the controlling parameter from the positive to the negative regime, then the set of local evolutionarily stable strategies (ESSs) changes from one set to another. We show that the Folk Theorem of Evolutionary Game Theory holds for a discrete-time replicator equation governed by the proposed nonlinear payoff function. In the long run, the following scenario can be observed: (i) in the positive regime, the active dominating pure strategies outcompete all other strategies and only they survive; (ii) in the negative regime, all active pure strategies coexist and survive together. As an application, we also show that the nonlinear payoff functions defined by discrete population models for a single species, such as the Beverton–Holt, Hassell, Maynard Smith–Slatkin, Ricker, and Skellam models, satisfy the hypothesis of the proposed model.
1 Introduction
Evolutionary game theory is the study of frequency-dependent natural selection in which the fitness of individuals is not constant but depends on the frequencies of the different phenotypes in the population. This theory was introduced by Maynard Smith and Price [28] and further developed by Maynard Smith [26, 27]. Since then, there has been a veritable explosion of interest by economists and social scientists in evolutionary game theory [4, 6, 7, 9,10,11, 17, 20, 22,23,24, 36, 43, 45, 53]. Classical non-cooperative game theory, which was invented by Von Neumann and Morgenstern [52], typically analyzes an interaction between two players and studies their behavior in strategic and economic decisions. The main problem is how the players can maximize their payoffs in a game, given that the players are aware of the structure of the game and consciously try to predict the moves of their opponents. This depends on the cognitive abilities of the players, in which the concept of rationality plays an important role. Evolutionary game theory, however, differs from classical non-cooperative game theory in focusing more on the dynamics of strategy change, and it does not rely on rationality. It is presumed that a population of players interact randomly and that the strategy of a player is fixed, biologically encoded, and heritable, controlling the player's actions. The players have no control over their strategy and need not be aware of the game. In other words, the players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy. Evolutionary game theory interprets payoff as biological fitness and success as reproductive success. Hence, the payoff represents reproductive success, and the success of a strategy is determined by how good the strategy is in the presence of competing strategies and by the frequency with which those strategies are used.
Strategies that do well reproduce faster and strategies that do poorly are outcompeted.
The Nash equilibrium, which was invented by Nash [34, 35], is the central solution concept of classical non-cooperative game theory: each strategy in a Nash equilibrium is a best response to all other strategies in that equilibrium. In other words, it is a strategy profile in which no player can do better by unilaterally changing their strategy. An evolutionarily stable strategy (ESS) is akin to the Nash equilibrium of classical non-cooperative game theory. An ESS is a strategy which, once established, cannot be successfully displaced by a mutant strategy entering the population. If it is adopted by a population in a given environment, then it is unbeatable, meaning that it cannot be invaded by any alternative (mutant) strategy that is initially rare. The ESS is an equilibrium refinement of the Nash equilibrium that is “evolutionarily” stable: once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. Therefore, an ESS must be effective against any alternative (mutant) strategy both when that alternative is initially rare and when it is eventually abundant.
The primary way to study evolutionary dynamics in games is through replicator equations. They describe the evolutionary dynamics of an entity called a replicator, which has the means of making more or less accurate copies of itself. The replicator can be a gene in population genetics, a molecule in prebiotic evolution, an organism in population ecology, or a strategy in an evolutionary game. The replicator equation, which was first introduced into models of animal behavior of a single species by Taylor and Jonker [48, 49], is the cornerstone of evolutionary game dynamics. The replicator equation expresses the growth rate of the proportion of players using a certain strategy: that rate is equal to the difference between the payoff of that strategy and the average payoff of the population as a whole. The general idea is that replicators whose fitness is larger (smaller) than the average fitness of the population will increase (decrease) in numbers. The static approach of evolutionary game theory has been complemented by a dynamic stability analysis of the rest (stationary) solutions of the replicator equations. The replicator equation satisfies the Folk Theorem of Evolutionary Game Theory (see [7, 23, 24]), which asserts the following four statements for all “reasonable” dynamics of an evolutionary game:
-
(i)
A Nash equilibrium is a rest (fixed) point;
-
(ii)
A stable rest (fixed) point is a Nash equilibrium;
-
(iii)
A strictly Nash equilibrium is asymptotically stable;
-
(iv)
Any interior convergent orbit evolves to a Nash equilibrium.
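These four statements can be explored numerically. Below is a minimal sketch of one step of a discrete-time replicator map, assuming the standard update in which the relative growth rate of a strategy equals its payoff minus the population-average payoff; the concrete map of this paper is only fixed later by (2.1), and the quadratic payoff used here is purely illustrative.

```python
def replicator_step(x, payoff):
    # One step of a discrete-time replicator map on the simplex:
    # x_k -> x_k * (1 + f_k(x) - E_F(x, x)), i.e., the relative growth
    # rate of strategy k is its payoff minus the average payoff.
    f = [payoff(xk) for xk in x]
    avg = sum(xk * fk for xk, fk in zip(x, f))          # E_F(x, x)
    return [xk * (1.0 + fk - avg) for xk, fk in zip(x, f)]

# Illustrative frequency-dependent payoff: more frequent strategies
# earn more (positive-regime flavour).
payoff = lambda t: 0.2 * t * t

x = [0.5, 0.3, 0.2]
for _ in range(200):
    x = replicator_step(x, payoff)
print(x)  # the initially most frequent strategy dominates
```

Iterating from this interior point, the orbit converges to the vertex \(\mathbf {e}_1\), which is a strict Nash equilibrium of the toy payoff, in line with statements (iii) and (iv).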
In this paper, we aim to show that the Folk Theorem of Evolutionary Game Theory is also true for a model of evolutionary game in which the nonlinear payoff functions are defined by discrete population models for a single species, such as Beverton–Holt’s model [5], Hassell’s model [18], Maynard Smith–Slatkin’s model [29], Ricker’s model [39], and Skellam’s model [46]. Indeed, we consider a general nonlinear model of evolutionary game in which, for any two different pure strategies, the biological fitness of the more frequent pure strategy is uniformly better (or uniformly worse) than that of the other. In particular, the nonlinear payoff functions defined by the discrete population models mentioned above satisfy this hypothesis. In order to observe an evolutionary bifurcation diagram, the nonlinear payoff functions are controlled in two different regimes: positive and negative. We also study the dynamics and stability of the discrete-time replicator equation governed by the proposed nonlinear payoff functions. Namely, we describe all rest (fixed) points, Nash equilibria, and ESSs of the proposed discrete-time replicator equation in both regimes. One of the interesting features of the model is that if we switch the controlling parameter from the positive to the negative regime, then the set of ESSs changes from one set to another. In the long run, the following scenario can be observed: (i) in the positive regime, the active dominating pure strategies outcompete all other strategies and only they survive; (ii) in the negative regime, all active pure strategies coexist and survive together.
The paper is organized as follows: in the next section, we provide all necessary notions and notation from evolutionary game theory which will be used throughout this paper. In Sect. 3, we state the main results (see Theorems A, B, and C) of the paper. In Sect. 4, we describe the sets of the rest points, Nash equilibria, and local ESSs of the proposed discrete-time replicator equation. In Sects. 5 and 6, we study the dynamics and stability analysis of the discrete-time replicator equation in positive and negative regimes, respectively. The applications are presented in Sect. 7. Finally, we finish this paper with some concluding remarks in Sect. 8.
2 Preliminaries
In this paper, we consider a symmetric two-player normal form game, that is, a game of a single population (with no distinction between player-1 and player-2) in which all players share the same strategy set and payoff function. Namely, we consider a unit mass of players each of whom chooses a pure strategy from the set \(\mathbf {I}_m=\{1,2,\ldots ,m\}\). A mixed strategy is a probability vector \(\mathbf {x}=(x_1,\ldots ,x_m)\), i.e., \(x_1+\cdots +x_m=1\) and \(x_i\ge 0\) for all \(i\in \mathbf {I}_m\), where \(x_i\) is the probability that the player will choose the pure strategy i. A pure strategy i can also be seen as a mixed strategy \(\mathbf {e}_i=(0,\ldots ,0,1,0,\ldots ,0)\) with 1 at the ith place, which means that the player uses the strategy i with probability 1. The set of mixed strategies of a player is the simplex \(\mathbb {S}^{m-1}=\{\mathbf {x}\in \mathbb {R}_{+}^{m}: \sum _{i=1}^{m}x_i=1\}\). Sometimes, the simplex \(\mathbb {S}^{m-1}\) can also be considered as the set of population states, which describe the densities of the population playing the pure strategies. Namely, a probability vector \(\mathbf {x}=(x_1,x_2,\ldots ,x_m)\) is a population state where \(x_i\) is the frequency of the strategy i.
A continuous (possibly nonlinear) function \(f_i(\mathbf {x})\) is the payoff to a strategy \(i\in \mathbf {I}_m\) in a population state \(\mathbf {x}\in \mathbb {S}^{m-1}\). A continuous mapping \(F(\mathbf {x}):=(f_1(\mathbf {x}),\ldots ,f_m(\mathbf {x}))\) is the payoff of the game when the population state is \(\mathbf {x}\in \mathbb {S}^{m-1}\). The payoff to the strategy \(\mathbf {y}\in \mathbb {S}^{m-1}\) when the population state is \(\mathbf {x}\in \mathbb {S}^{m-1}\) is the bivariate continuous function \(\mathcal {E}_F(\mathbf {y},\mathbf {x}):=\sum _{i=1}^{m}y_if_i(\mathbf {x})\), which is linear in the first argument. It is easy to see that \(\mathcal {E}_F(\mathbf {e}_i,\mathbf {x})=f_i(\mathbf {x})\) for any \(i\in \mathbf {I}_m\). The average fitness of the population when it is in a state \(\mathbf {x}\in \mathbb {S}^{m-1}\) is \(\mathcal {E}_F(\mathbf {x},\mathbf {x})=\sum _{i=1}^{m}x_if_i(\mathbf {x})\).
Sometimes, we call \(\mathbf {y}\) a reply strategy to a strategy \(\mathbf {x}\) in an expression like \(\mathcal {E}_F(\mathbf {y},\mathbf {x})\). With this convention, we say that the strategy \(\mathbf {y}\) is a better reply to the strategy \(\mathbf {x}\) than the strategy \(\mathbf {z}\) if one has \(\mathcal {E}_F(\mathbf {y},\mathbf {x})>\mathcal {E}_F(\mathbf {z},\mathbf {x})\). The strategy \(\mathbf {y}\) is called a best reply to the strategy \(\mathbf {x}\) if one has \(\mathcal {E}_F(\mathbf {y},\mathbf {x})\ge \mathcal {E}_F(\mathbf {z},\mathbf {x})\) for any \(\mathbf {z}\in \mathbb {S}^{m-1}\). The set of all best replies to \(\mathbf {x}\) is denoted by
Definition 2.1
(Nash equilibrium) A strategy \(\mathbf {x}\) is called a Nash equilibrium if it is a best reply to itself, i.e., one has \(\mathcal {E}_F(\mathbf {x},\mathbf {x})\ge \mathcal {E}_F(\mathbf {y},\mathbf {x})\) for any \(\mathbf {y}\in \mathbb {S}^{m-1}\). A strategy \(\mathbf {x}\) is called a strictly Nash equilibrium if it is the unique best reply to itself, i.e., one has \(\mathcal {E}_F(\mathbf {x},\mathbf {x})> \mathcal {E}_F(\mathbf {y},\mathbf {x})\) for any \(\mathbf {y}\in \mathbb {S}^{m-1}\) with \(\mathbf {y}\ne \mathbf {x}\).
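Because \(\mathcal {E}_F(\mathbf {y},\mathbf {x})\) is linear in the reply \(\mathbf {y}\), a state \(\mathbf {x}\) is a Nash equilibrium precisely when no pure strategy earns more than the average payoff \(\mathcal {E}_F(\mathbf {x},\mathbf {x})\). A small sketch of this check, using a hypothetical payoff map (not the F of this paper):

```python
def E(y, x, payoffs):
    # E_F(y, x) = sum_i y_i f_i(x): payoff to the reply y against state x.
    return sum(yi * fi for yi, fi in zip(y, payoffs(x)))

def is_nash(x, payoffs, tol=1e-12):
    # x is a best reply to itself iff no pure strategy does better,
    # i.e. f_i(x) <= E_F(x, x) for every i (by linearity in y).
    avg = E(x, x, payoffs)
    return all(fi <= avg + tol for fi in payoffs(x))

# Hypothetical nonlinear payoff map used only for illustration.
payoffs = lambda x: [xi * (1.0 - xi) for xi in x]

print(is_nash([1/3, 1/3, 1/3], payoffs))   # True: all payoffs equal
print(is_nash([0.5, 0.3, 0.2], payoffs))   # False: the top strategy beats the average
```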
It was pointed out by Pohley and Thomas [37] that the original definition of an evolutionarily stable strategy (ESS) is inappropriate for games with nonlinear payoff functions. The main reason for this is the global character of the original conditions that define an ESS. These conditions always cover the complete space of strategies, which does not allow more than one ESS in the interior of the whole space. In the series of papers [37, 50, 51] (for further developments see also [2, 47]), in order to avoid such constraints, the concept of a local ESS was introduced for games with nonlinear payoff functions, for which the conditions that define an ESS must hold only locally, i.e., within a small neighborhood of the ESS. It turns out that for linear models the local ESS is equivalent to the original global ESS (see [37]).
Definition 2.2
(Local ESS) A strategy \(\mathbf {x}\) is called a local evolutionarily stable strategy (a local ESS) if the following conditions hold true:
-
(i)
\(\mathbf {x}\) is a Nash equilibrium, i.e., \(\mathcal {E}_F(\mathbf {x},\mathbf {x})\ge \mathcal {E}_F(\mathbf {y},\mathbf {x})\) for any \(\mathbf {y}\in \mathbb {S}^{m-1}\);
-
(ii)
There is a small neighborhood \(U(\mathbf {x})\subset \mathbb {S}^{m-1}\) of \(\mathbf {x}\) such that \(\mathcal {E}_F(\mathbf {x},\mathbf {y})>\mathcal {E}_F(\mathbf {y},\mathbf {y})\) for all \(\mathbf {y}\in U(\mathbf {x}){\setminus }\{\mathbf {x}\}\).
The replicator equation which was originally developed for symmetric games with finitely many strategies (see [48, 49]) is the most important evolutionary game dynamics (see also [1, 8, 15, 19, 38]). In the continuous case, the replicator and other types of equations are thoroughly studied in the literature [6, 7, 22, 23, 36, 43,44,45, 53].
In this paper, we consider a discrete-time replicator equation \(\mathcal {R}_F:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\)
Remark 2.3
In replicator equation (2.1), the relative growth rate \(\frac{\left( \mathcal {R}_F(\mathbf {x})\right) _k-x_k}{x_k}\) of the player using a strategy k is equal to the difference between the payoff \(f_k(\mathbf {x})\) of that strategy k and the average payoff \(\mathcal {E}_F(\mathbf {x},\mathbf {x})=x_1f_1(\mathbf {x})+\cdots +x_mf_m(\mathbf {x})\) of the population as a whole. The key idea is that a replicator whose fitness is larger (smaller) than the average fitness of the population will increase (decrease) in numbers. Obviously, in order for the simplex to be an invariant set of replicator equation (2.1), we must impose some constraints on the payoff functions. For example, the following condition
is sufficient for the simplex to be invariant. Indeed, since
we obtain for all \(\mathbf {x}\in \mathbb {S}^{m-1}\) and \(k\in \mathbf {I}_m\) that
Hence, we have \(\sum _{k=1}^m\left( \mathcal {R}_F(\mathbf {x})\right) _k=\sum _{k=1}^mx_k=1\) and \(\left( \mathcal {R}_F(\mathbf {x})\right) _k\ge 0\) for all \(\mathbf {x}\in \mathbb {S}^{m-1}\) and \(k\in \mathbf {I}_m\). It is worth mentioning that the model which we are going to propose satisfies the condition given above.
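A quick numerical sanity check of this invariance, assuming the update form \(x_k\mapsto x_k\left( 1+f_k(\mathbf {x})-\mathcal {E}_F(\mathbf {x},\mathbf {x})\right) \) described in Remark 2.3 together with payoffs valued in [0, 1) (an illustrative sufficient condition, since then every factor \(1+f_k(\mathbf {x})-\mathcal {E}_F(\mathbf {x},\mathbf {x})\) stays positive):

```python
import random

def replicator(x, f):
    # x_k -> x_k * (1 + f_k - E_F(x, x)); summing over k telescopes to 1.
    avg = sum(xk * fk for xk, fk in zip(x, f))
    return [xk * (1.0 + fk - avg) for xk, fk in zip(x, f)]

def random_simplex_point(m):
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [wi / s for wi in w]

payoff = lambda x: [0.5 * xk for xk in x]   # bounded in [0, 0.5): factors positive

random.seed(0)
ok = True
for _ in range(1000):
    x = random_simplex_point(4)
    y = replicator(x, payoff(x))
    ok = ok and abs(sum(y) - 1.0) < 1e-9 and min(y) >= 0.0
print(ok)  # True: the image stays in the simplex under these constraints
```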
In order to study the stability of rest solutions (fixed points) of the replicator equation, we employ a Lyapunov function.
Definition 2.4
(Lyapunov function) A continuous function \(\varphi :\mathbb {S}^{m-1}\rightarrow \mathbb {R}\) is called a Lyapunov function if the number sequence \(\{\varphi (\mathbf {x}), \varphi (\mathcal {R}(\mathbf {x})), \ldots , \varphi (\mathcal {R}^{(n)}(\mathbf {x})), \cdots \}\) is a bounded monotone sequence for any initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\).
Definition 2.5
(Stable and attracting points) A rest (fixed) point \(\mathbf {y}\in \mathbb {S}^{m-1}\) is called stable if for every neighborhood \(U(\mathbf {y})\subset \mathbb {S}^{m-1}\) of \(\mathbf {y}\) there exists a neighborhood \(V(\mathbf {y})\subset \mathbb {S}^{m-1}\) of \(\mathbf {y}\) such that the orbit \(\{\mathbf {x}, \mathcal {R}(\mathbf {x}), \ldots , \mathcal {R}^{(n)}(\mathbf {x}), \cdots \}\) of any initial point \(\mathbf {x}\in V(\mathbf {y})\) remains inside the neighborhood \(U(\mathbf {y})\). A rest (fixed) point \(\mathbf {y}\in \mathbb {S}^{m-1}\) is called attracting if there exists a neighborhood \(V(\mathbf {y})\subset \mathbb {S}^{m-1}\) of \(\mathbf {y}\) such that the orbit \(\{\mathbf {x}, \mathcal {R}(\mathbf {x}), \ldots , \mathcal {R}^{(n)}(\mathbf {x}), \cdots \}\) of any initial point \(\mathbf {x}\in V(\mathbf {y})\) converges to \(\mathbf {y}\). A rest (fixed) point \(\mathbf {y}\in \mathbb {S}^{m-1}\) is called asymptotically stable if it is both stable and attracting.
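Definition 2.5 can be probed numerically: perturb a candidate rest point slightly and iterate. The sketch below does so for the uniform state under a hypothetical frequency-penalizing payoff (negative-regime flavour); both the payoff and the update form \(x_k\mapsto x_k(1+f_k(\mathbf {x})-\mathcal {E}_F(\mathbf {x},\mathbf {x}))\) are assumptions for illustration, not the paper's (3.2).

```python
def replicator_step(x, payoff):
    f = payoff(x)
    avg = sum(xk * fk for xk, fk in zip(x, f))          # E_F(x, x)
    return [xk * (1.0 + fk - avg) for xk, fk in zip(x, f)]

def attracts(fixed, payoff, radius=0.05, steps=500, tol=1e-3):
    # Crude probe of attraction: push the rest point off-center along a
    # direction tangent to the simplex, iterate, and test the distance.
    shift = [radius, -radius] + [0.0] * (len(fixed) - 2)
    x = [xi + d for xi, d in zip(fixed, shift)]
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return max(abs(a - b) for a, b in zip(x, fixed)) < tol

# Hypothetical payoff penalizing frequent strategies: the center of the
# simplex should attract nearby orbits.
payoff = lambda x: [0.2 * (1.0 - xk) for xk in x]

print(attracts([1/3, 1/3, 1/3], payoff))   # True: the center is attracting here
```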
We use the following notions and notations throughout this paper.
Some Notations: Let \(\mathbf {x}=(x_1,\ldots ,x_m)\in \mathbb {R}^m\) and \(\Vert \mathbf {x}\Vert _1:=\sum _{k=1}^{m}|x_k|\). We say that \(\mathbf {x}\ge 0\) (resp. \(\mathbf {x}> 0\)) if \(x_{k}\ge 0\) (resp. \(x_{k}>0\)) for all \(k\in \mathbf {I}_m\). We set \((\mathbf {x},\mathbf {y}):=\sum _{i=1}^{m}x_iy_i\) for any \(\mathbf {x},\mathbf {y}\in \mathbb {R}^m\). Let \(\mathbb {S}^{m-1}=\{\mathbf {x}\in \mathbb {R}^m: \Vert \mathbf {x}\Vert _1=1, \ \mathbf {x}\ge 0\}\) be the standard simplex. We let \(supp (\mathbf {x}):=\{i\in \mathbf {I}_m: x_i\ne 0\}\) and \(null (\mathbf {x}):=\{i\in \mathbf {I}_m: x_i=0\}\) for \(\mathbf {x}\in \mathbb {S}^{m-1}\). The vertex \(\mathbf {e}_i:=(0,\ldots ,0,1,0,\ldots ,0)\) with 1 at the ith place is the pure strategy \(i\in \mathbf {I}_m\). Let \({\mathbb {S}}^{|\alpha |-1}:=conv \{\mathbf {e}_i\}_{i\in \alpha }\) for \(\alpha \subset \mathbf {I}_m\) where \(conv (\mathbf {A})\) is the convex hull of \(\mathbf {A}\). Let \(int {\mathbb {S}}^{|\alpha |-1} :=\{\mathbf {x}\in {\mathbb {S}}^{|\alpha |-1}: supp (\mathbf {x})=\alpha \}\) and \(\partial \mathbb {S}^{|\alpha |-1}:= \mathbb {S}^{|\alpha |-1}{\setminus }int \mathbb {S}^{|\alpha |-1}\) be, respectively, an interior and boundary of the face \({\mathbb {S}}^{|\alpha |-1}\). The center \(\mathbf {c}_\alpha :=\frac{1}{|\alpha |}\sum _{i\in \alpha }\mathbf {e}_i\) of the face \({\mathbb {S}}^{|\alpha |-1}\) is the equally distributed population state of the active pure strategies \(i\in \alpha \). We define a function \(\mathcal {M}_{\alpha , k}(\mathbf {x}):=\max \limits _{i\in \alpha }\{x_{i}\}-x_k\) for \(k\in \alpha \subset \mathbf {I}_m\). Particularly, when \(\alpha =\mathbf {I}_m\), we write \(\mathcal {M}_k(\mathbf {x}):=\max \limits _{i\in \mathbf {I}_m}\{x_{i}\}-x_k\) for \(k\in \mathbf {I}_m\). 
We define the sets \(\mathsf {MaxInd}_\alpha (\mathbf {x}):=\{k\in \alpha : x_k=\max \limits _{i\in \alpha }\{x_{i}\}\}\) and \(\mathsf {{MinInd}}_\alpha (\mathbf {x}):=\{k\in \alpha : x_k=\min \limits _{i\in \alpha }\{x_{i}\}\}\) for \(\alpha \subset \mathbf {I}_m\). Particularly, we write \(\mathsf {MaxInd}(\mathbf {x})\) and \(\mathsf {MinInd}(\mathbf {x})\) for the set \(\alpha =\mathbf {I}_m\). An orbit (trajectory) of an initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\) is defined as \(\{\mathbf {x}, \mathcal {R}(\mathbf {x}), \ldots , \mathcal {R}^{(n)}(\mathbf {x}), \cdots \}\). An omega limiting set \(\omega (\mathbf {x})\) of the orbit is defined as \(\omega (\mathbf {x}):=\bigcap \limits _{n\in \mathbb {N}}\overline{\bigcup \limits _{k\ge n}\{\mathcal {R}^{(k)}(\mathbf {x})\}}\). A rest (fixed) point set of the replicator operator is \(\mathbf{Fix} (\mathcal {R})=\{\mathbf {x}\in \mathbb {S}^{m-1}: \mathcal {R}(\mathbf {x})=\mathbf {x}\}\). We denote by \(\mathbf{NE} (F)\) and \(\mathbf{ESS} (F)\) the sets of all Nash equilibria and all ESSs, respectively, of the payoff mapping F.
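For experimentation, part of this notation translates into short helpers (indices here are 0-based, unlike \(\mathbf {I}_m\)):

```python
def supp(x):
    # supp(x): indices of the active (nonzero) pure strategies.
    return {i for i, xi in enumerate(x) if xi != 0}

def max_ind(x, alpha=None):
    # MaxInd_alpha(x): indices in alpha attaining max_{i in alpha} x_i.
    alpha = range(len(x)) if alpha is None else alpha
    top = max(x[i] for i in alpha)
    return {i for i in alpha if x[i] == top}

def M(x, k, alpha=None):
    # M_{alpha, k}(x) = max_{i in alpha} x_i - x_k.
    alpha = range(len(x)) if alpha is None else alpha
    return max(x[i] for i in alpha) - x[k]

x = [0.5, 0.3, 0.2, 0.0]
print(supp(x))      # {0, 1, 2}
print(max_ind(x))   # {0}
print(M(x, 2))      # max_i x_i minus x_2
```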
3 The Main Results
In this paper, we would like to consider the following evolutionary game. Let an evolutionary game be identified by a nonlinear payoff mapping \(F(\mathbf {x}):=\left( F_1(\mathbf {x}),\ldots , F_m(\mathbf {x})\right) \) for which \(F_k(\mathbf {x})\) is a payoff (biological fitness) of a pure strategy \(k\in \mathbf {I}_m\) in a population state \(\mathbf {x}\in \mathbb {S}^{m-1}\). Then, for any two different pure strategies, say i and j, the payoff (biological fitness) \(F_i(\mathbf {x})\) of the more frequent pure strategy, say i, is uniformly better (or uniformly worse) than the payoff (biological fitness) \(F_j(\mathbf {x})\) of the pure strategy j. Mathematically, this means that for any \(\mathbf {x}\in \mathbb {S}^{m-1}\) and \(i,j\in \mathbf {I}_m\) exactly one of the following two conditions always holds:
-
\(F_i(\mathbf {x})>F_j(\mathbf {x})\) (resp. \(F_i(\mathbf {x})<F_j(\mathbf {x})\)) whenever \(x_i>x_j\) (resp. \(x_i<x_j\));
-
\(F_i(\mathbf {x})<F_j(\mathbf {x})\) (resp. \(F_i(\mathbf {x})>F_j(\mathbf {x})\)) whenever \(x_i>x_j\) (resp. \(x_i<x_j\)).
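Either branch of this dichotomy can be checked mechanically for a candidate payoff map by sampling the simplex. The sketch below uses a hypothetical increasing-case payoff, not the paper's map (3.1):

```python
from itertools import combinations
import random

def order_compatible(F, m, sign, trials=2000, seed=1):
    # sign = +1 checks the first condition (more frequent => larger payoff),
    # sign = -1 checks the second (more frequent => smaller payoff).
    random.seed(seed)
    for _ in range(trials):
        w = [random.random() for _ in range(m)]
        x = [wi / sum(w) for wi in w]
        Fx = F(x)
        for i, j in combinations(range(m), 2):
            if x[i] != x[j] and sign * (Fx[i] - Fx[j]) * (x[i] - x[j]) <= 0:
                return False
    return True

# Hypothetical payoff: the more frequent strategy always earns more,
# since t -> t * (1 + t) is strictly increasing.
F = lambda x: [xk * (1.0 + xk) for xk in x]

print(order_compatible(F, 3, +1))   # True: the first condition holds
print(order_compatible(F, 3, -1))   # False: the second condition fails
```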
We study the proposed evolutionary game when the payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) is defined as follows
where \(\varepsilon \in (-1,1)\) is a parameter, \(f:[0,1]\rightarrow [0,1]\) is a strictly increasing continuous function such that \(f(0)=0\) and the function xf(x) is continuously differentiable on [0, 1], and \(g:\mathbb {S}^{m-1}\rightarrow (0,1)\) is a smooth non-vanishing function. Since the simplex \(\mathbb {S}^{m-1}\) is compact, we then have \(C_1\le g(\mathbf {x})\le C_2\) for any \(\mathbf {x}\in \mathbb {S}^{m-1}\) for some constants \(0<C_1<C_2<1\).
We now consider the discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) defined by the nonlinear payoff mapping (3.1)
Here, \(\varepsilon \in (-1,1)\) is the controlling parameter (regime) of the replicator equation. We always assume that \(\varepsilon \ne 0\). We will see that the dynamics and the stability analysis of replicator equation (3.2) depend on the controlling regime \(\varepsilon \): if we switch \(\varepsilon \) from the positive to the negative regime, then the set of ESSs changes from one set to another. This is quite an interesting feature of replicator equation (3.2), and it can also be seen in the following case.
Remark 3.1
If g is constant, i.e., \(g(\mathbf {x})=C>0\) for all \(\mathbf {x}\in \mathbb {S}^{m-1}\) then we get
It is easy to check that
Since f is an increasing function and \(\mathbf {x},\mathbf {y}\in \mathbb {S}^{m-1}\), it is clear that \(\left( f(x_k)-f(y_k)\right) (x_k-y_k)\ge 0\) for any \(k\in \mathbf {I}_m\). Consequently, for any \(\mathbf {x},\mathbf {y}\in \mathbb {S}^{m-1}\), if \(\varepsilon >0\) then \(\left( F_0(\mathbf {x})-F_0(\mathbf {y}),\mathbf {x}-\mathbf {y}\right) \ge 0\), and if \(\varepsilon <0\) then \(\left( F_0(\mathbf {x})-F_0(\mathbf {y}),\mathbf {x}-\mathbf {y}\right) \le 0\).
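This sign dichotomy can be verified numerically. The sketch below takes the constant-g payoff in the form \(F_k(\mathbf {x})=1+\varepsilon C\bigl (f(x_k)-\sum _{i}x_if(x_i)\bigr )\), which is consistent with the computations of Sect. 4 but should be read as an assumption here; the average term cancels in the inner product with \(\mathbf {x}-\mathbf {y}\) because \(\sum _k(x_k-y_k)=0\).

```python
def F0(x, eps, C, f):
    # Constant-g payoff (assumed form): F0_k(x) = 1 + eps*C*(f(x_k) - avg),
    # where avg = sum_i x_i f(x_i).
    avg = sum(xi * f(xi) for xi in x)
    return [1.0 + eps * C * (f(xk) - avg) for xk in x]

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

f = lambda t: t * t            # strictly increasing with f(0) = 0

x = [0.6, 0.3, 0.1]
y = [0.2, 0.5, 0.3]
d = [a - b for a, b in zip(x, y)]
dF_pos = [a - b for a, b in zip(F0(x, +0.5, 1.0, f), F0(y, +0.5, 1.0, f))]
dF_neg = [a - b for a, b in zip(F0(x, -0.5, 1.0, f), F0(y, -0.5, 1.0, f))]
print(inner(dF_pos, d) >= 0)   # True: monotone direction for eps > 0
print(inner(dF_neg, d) <= 0)   # True: stable direction for eps < 0
```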
An evolutionary game is called a stable game if the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) satisfies the following condition \(\left( F(\mathbf {x})-F(\mathbf {y}),\mathbf {x}-\mathbf {y}\right) \le 0\) for any \(\mathbf {x},\mathbf {y}\in \mathbb {S}^{m-1}\). The dynamics of the continuous-time replicator equation
governed by the stable game has been studied in [21] (see also [43]).
A nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) is called monotone if one has that \(\left( F(\mathbf {x})-F(\mathbf {y}),\mathbf {x}-\mathbf {y}\right) \ge 0\) for any \(\mathbf {x},\mathbf {y}\in \mathbb {S}^{m-1}\). The dynamics of the discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\)
governed by nonlinear monotone mappings \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) subject to the constraint \(\mathcal {E}_F(\mathbf {x},\mathbf {x})=0\) for all \(\mathbf {x}\in \mathbb {S}^{m-1}\) has been studied in [12,13,14, 31,32,33, 42].
Hence, if g is constant, then the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) given by (3.1) is either monotone or stable, depending on the controlling regime \(\varepsilon \in (-1,1)\). In general, if g is not constant, then the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) given by (3.1) is neither monotone nor stable. In this paper, we aim to study the dynamics of the discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) given by (3.2) for any non-constant function g (for some examples, see Sect. 7).
We now describe the sets \(\mathbf{NE} (F)\) and \(\mathbf{ESS} (F)\) of all Nash equilibria and all local ESSs, respectively, of the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) given by (3.1) and the set \(\mathbf{Fix} (\mathcal {R})\) of fixed (rest) points of the discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) given by (3.2).
Theorem A
(Rest points, Nash equilibria, and ESS) Let \(\mathbf {e}_i\) be the vertex of the simplex \({\mathbb {S}}^{m-1}\) for \(i\in \mathbf {I}_m\), let \(\mathbf {c}_\alpha :=\frac{1}{|\alpha |}\sum _{i\in \alpha }\mathbf {e}_i\) be the center of the face \({\mathbb {S}}^{|\alpha |-1}\) for all \(\alpha \subset \mathbf {I}_m\), and let \(\mathbf {c}:=\mathbf {c}_{\mathbf {I}_m}=(\frac{1}{m},\ldots ,\frac{1}{m})\) be the center of the simplex \({\mathbb {S}}^{m-1}\). Then the following statements hold true:
-
(i)
One has \(\mathbf{Fix} (\mathcal {R})=\bigcup \limits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\) for any nonzero \(\varepsilon \in (-1,1)\);
-
(ii)
If \(\varepsilon \in (0,1)\) then \(\mathbf{NE} (F)=\mathbf{Fix} (\mathcal {R})\) and if \(\varepsilon \in (-1,0)\) then \(\mathbf{NE} (F)=\{\mathbf {c}\}\subset \mathbf{Fix} (\mathcal {R})\);
-
(iii)
If \(\varepsilon \in (0,1)\) then \(\mathbf{ESS} (F)=\{\mathbf {e}_1,\mathbf {e}_2, \ldots , \mathbf {e}_m\}\subset \mathbf{NE} (F)\) and if \(\varepsilon \in (-1,0)\) then \(\mathbf{ESS} (F)=\mathbf{NE} (F)\).
We define the following constant
We are now ready to state the main results of this paper.
Theorem B
(Positive regime) Let \(\varepsilon \in (0,1)\) and let \(\mathbf {e}_i\) be the vertex of the simplex \({\mathbb {S}}^{m-1}\) for \(i\in \mathbf {I}_m\). Then the following statements hold true:
-
(i)
A stable rest point \(\mathbf {e}_i\) for \(i\in \mathbf {I}_m\) is a Nash equilibrium;
-
(ii)
A strictly Nash equilibrium \(\mathbf {e}_i\) for \(i\in \mathbf {I}_m\) is asymptotically stable;
-
(iii)
Any interior convergent orbit evolves to a Nash equilibrium.
Theorem C
(Negative regime) Let \(\varepsilon \in (-\frac{1}{\mu },0)\cap (-1,0)\) and let \(\mathbf {c}=(\frac{1}{m},\ldots ,\frac{1}{m})\) be the center of the simplex \({\mathbb {S}}^{m-1}\). Then the following statements hold true:
-
(i)
A stable rest point \(\mathbf {c}\) is a Nash equilibrium;
-
(ii)
There is no strictly Nash equilibrium;
-
(iii)
Any interior convergent orbit evolves to a Nash equilibrium.
4 The Rest Points, Nash Equilibria, and ESS
4.1 The Proof of Theorem A
(i) We first show \(\mathbf{Fix} (\mathcal {R})=\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\). It is obvious that \(\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\subset \mathbf{Fix} (\mathcal {R})\). Let \(\mathbf {x}\in \mathbf{Fix} (\mathcal {R})\) be a fixed point. We set \(\alpha :=supp (\mathbf {x})\). Since \(\varepsilon \ne 0\) and \(g(\mathbf {x})>0\) for any \(\mathbf {x}\in \mathbb {S}^{m-1}\), it follows from (3.2)
This means that \(f(x_{k_1})=f(x_{k_2})\) for any distinct \(k_1,k_2\in \alpha \). Since f is strictly increasing, we obtain \(x_{k_1}=x_{k_2}\) for any \(k_1,k_2\in \alpha \). Hence, we get \(\mathbf {x}=\mathbf {c}_\alpha \). This shows \(\mathbf{Fix} (\mathcal {R})=\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\).
(ii) We now describe the set \(\mathbf{NE} (F)\) of all Nash equilibria of the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\).
Let \(\varepsilon >0\). We show \(\mathbf{NE} (F)=\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\). Since for any \(\mathbf {x}\in \mathbb {S}^{m-1}\) and \(\alpha \subset \mathbf {I}_m\)
we get \(\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\subset \mathbf{NE} (F)\). Let \(\mathbf {x}\in \mathbf{NE} (F)\) and \(\alpha :=supp (\mathbf {x})\). We have for \(\mathbf {z}\in \mathbb {S}^{m-1}\)
Since \(\mathbf {x}\in \mathbf{NE} (F)\), we must have
On the other hand, we have
Consequently, we obtain \(f(x_k)=\max \nolimits _{i\in \alpha }\{f(x_i)\}\) for any \(k\in \alpha \). Since f is strictly increasing, we obtain \(x_{k_1}=x_{k_2}\) for any \(k_1,k_2\in \alpha \), i.e., \(\mathbf {x}=\mathbf {c}_\alpha \). This shows \(\mathbf{NE} (F)=\bigcup \nolimits _{\alpha \subset \mathbf {I}_m}\{\mathbf {c}_\alpha \}\) for \(\varepsilon >0\).
Let \(\varepsilon <0\). We show \(\mathbf{NE} (F)=\{\mathbf {c}\}\). It is obvious that \(\mathbf {c}\in \mathbf{NE} (F)\).
Let \(\mathbf {x}\in \mathbf{NE} (F)\) and \(\alpha :=supp (\mathbf {x})\). We want to prove that \(\alpha =\mathbf {I}_m\). Assume the contrary, i.e., \(\alpha \ne \mathbf {I}_m\), and let \(k\in \mathbf {I}_m{\setminus }\alpha \) (which is possible since \(\mathbf {I}_m{\setminus }\alpha \ne \emptyset \)). Since \(\varepsilon <0\), \(g(\mathbf {x})>0\), and \(\mathbf {x}\in \mathbf{NE} (F)\), we face the contradiction
This shows \(\alpha =\mathbf {I}_m\). In this case, we have for any \(\mathbf {z}\in \mathbb {S}^{m-1}\)
Since \(\mathbf {x}\in \mathbf{NE} (F)\), we must have
On the other hand, we have
Consequently, we obtain \(f(x_k)=\min \nolimits _{i\in \mathbf {I}_m}\{f(x_i)\}\) for any \(k\in \mathbf {I}_m\). Since f is strictly increasing, we obtain \(x_{k_1}=x_{k_2}\) for any \(k_1,k_2\in \mathbf {I}_m\), i.e., \(\mathbf {x}=\mathbf {c}\). This shows \(\mathbf{NE} (F)=\{\mathbf {c}\}\) for \(\varepsilon <0\).
(iii) Finally, we describe the set \(\mathbf{ESS} (F)\) of all local ESSs of the nonlinear payoff mapping \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) given by (3.1). According to Definition 2.2 (see item (i)), we have \(\mathbf{ESS} (F)\subset \mathbf{NE} (F)\).
Let \(\varepsilon >0\). We first show \(\{\mathbf {e}_1,\mathbf {e}_2, \ldots , \mathbf {e}_m\}\subset \mathbf{ESS} (F)\). In order to accomplish this, we show that there is a sufficiently small neighborhood \(U(\mathbf {e}_k)\subset \mathbb {S}^{m-1}\) of the vertex \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) such that \(\mathcal {E}_F(\mathbf {e}_k,\mathbf {x})>\mathcal {E}_F(\mathbf {x},\mathbf {x})\) for all \(\mathbf {x}\in U(\mathbf {e}_k){\setminus }\{\mathbf {e}_k\}\). Since \(U(\mathbf {e}_k)\subset \mathbb {S}^{m-1}\) is sufficiently small, we have \(\mathsf {MaxInd}(\mathbf {x})=\{k\}\) and \(supp (\mathbf {x}){\setminus }\{k\}\ne \emptyset \) for all \(\mathbf {x}\in U(\mathbf {e}_k){\setminus }\{\mathbf {e}_k\}\). Consequently, we get for all \(\mathbf {x}\in U(\mathbf {e}_k){\setminus }\{\mathbf {e}_k\}\)
This shows \(\{\mathbf {e}_1,\mathbf {e}_2, \ldots , \mathbf {e}_m\}\subset \mathbf{ESS} (F)\). We now show \(\mathbf{ESS} (F)\cap \mathbf{NE} (F)=\{\mathbf {e}_1,\mathbf {e}_2, \ldots , \mathbf {e}_m\}\).
Let \(\mathbf {c}_\alpha \in \mathbf{NE} (F)\) with \(|\alpha |\ge 2\) be any Nash equilibrium. Let \(\alpha :=\{i_1,i_2,\ldots ,i_{|\alpha |}\}\) with \(i_1<i_2<\cdots <i_{|\alpha |}\). We want to show that for any sufficiently small neighborhood \(U(\mathbf {c}_\alpha )\subset \mathbb {S}^{m-1}\) of the point \(\mathbf {c}_\alpha \) there always exists \(\bar{\mathbf {x}}\in U(\mathbf {c}_\alpha ){\setminus }\{\mathbf {c}_\alpha \}\) such that \(\mathcal {E}_F(\mathbf {c}_\alpha ,\bar{\mathbf {x}})<\mathcal {E}_F(\bar{\mathbf {x}},\bar{\mathbf {x}})\). Let \(\delta >0\) be any sufficiently small number and let \(\bar{\mathbf {x}}\) with \(\Vert \bar{\mathbf {x}}-\mathbf {c}_\alpha \Vert _1<\delta \) be such that \(supp (\bar{\mathbf {x}})=\alpha \) and \(\bar{x}_{i_1}<\bar{x}_{i_2}<\cdots <\bar{x}_{i_{|\alpha |}}\) (it is always possible to choose such \(\bar{\mathbf {x}}\in U(\mathbf {c}_\alpha )\) for any \(\delta >0\)). Since f is strictly increasing, it follows from \(f(\bar{x}_{i_1})<f(\bar{x}_{i_2})<\cdots <f(\bar{x}_{i_{|\alpha |}})\) and Chebyshev’s sum inequality (see [30])
It means that \(\mathbf {c}_\alpha \in \mathbf{NE} (F)\) (\(|\alpha |\ge 2\)) is not a local ESS. Consequently, we obtain \(\mathbf{ESS} (F)=\{\mathbf {e}_1,\mathbf {e}_2, \ldots , \mathbf {e}_m\}\) for \(\varepsilon >0\).
Let \(\varepsilon <0\). According to Definition 2.2, we have \(\mathbf{ESS} (F)\subset \mathbf{NE} (F)=\{\mathbf {c}\}\). We show \(\mathbf {c}\in \mathbf{ESS} (F)\).
Let \(U(\mathbf {c})\subset \mathbb {S}^{m-1}\) be a sufficiently small neighborhood of the center \(\mathbf {c}\) of the simplex \({\mathbb {S}}^{m-1}\). Since f is strictly increasing, the two vectors \((x_1,\ldots ,x_m)\) and \((f(x_1),\ldots ,f(x_m))\) are identically ordered, i.e., if \(x_i>x_j\) (resp. \(x_i<x_j\)) then \(f(x_i)>f(x_j)\) (resp. \(f(x_i)<f(x_j)\)). Since \(\varepsilon <0\), it then follows from Chebyshev’s sum inequality (see [30]) that
for any \(\mathbf {x}\in U(\mathbf {c}){\setminus }\{\mathbf {c}\}\) (for our case, the equality in Chebyshev’s sum inequality holds only for \(\mathbf {x}=\mathbf {c}\)). It means \(\mathbf{ESS} (F)=\{\mathbf {c}\}\) for \(\varepsilon <0\). This completes the proof of Theorem A.
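For convenience, we record the form of Chebyshev's sum inequality used in both parts of the proof above (see [30]):

```latex
% Chebyshev's sum inequality: if a_1 \le a_2 \le \dots \le a_n and
% b_1 \le b_2 \le \dots \le b_n are similarly ordered sequences, then
\frac{1}{n}\sum_{k=1}^{n} a_k b_k
\;\ge\;
\left(\frac{1}{n}\sum_{k=1}^{n} a_k\right)
\left(\frac{1}{n}\sum_{k=1}^{n} b_k\right),
% with the reverse inequality for oppositely ordered sequences, and with
% equality if and only if at least one of the sequences is constant.
```

In the proof it is applied to the sequences \(a_k=\bar{x}_{i_k}\) and \(b_k=f(\bar{x}_{i_k})\), which are similarly ordered precisely because f is strictly increasing; the equality case explains why the center \(\mathbf {c}\) is the only point where the comparison of payoffs degenerates.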
5 The Regime \(\varepsilon \in (0,1)\)
5.1 The Lyapunov Function
We study the dynamics of the discrete-time replicator equation (3.2) by means of a Lyapunov function.
Proposition 5.1
Let \(\varepsilon \in (0,1)\). Then the following statements hold true:
(i) \(\mathcal {M}_k(\mathbf {x}):=\max \limits _{i\in \mathbf {I}_m}\{x_{i}\}-x_k\) for \(k\in \mathbf {I}_m\) is an increasing Lyapunov function for \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1};\)
(ii) \(\mathcal {M}_{\alpha , k}(\mathbf {x}):=\max \limits _{i\in \alpha }\{x_{i}\}-x_k\) for \(k\in \alpha \subset \mathbf {I}_m\) is an increasing Lyapunov function for \(\mathcal {R}:int \mathbb {S}^{|\alpha |-1}\rightarrow int \mathbb {S}^{|\alpha |-1}\).
Proof
Here, we only present the proof of part (i); part (ii) can be proved similarly, and its proof is omitted.
We show that \(\mathcal {M}_k(\mathbf {x})=\max \limits _{i\in \mathbf {I}_m}\{x_{i}\}-x_k\) for \(k\in \mathbf {I}_m\) is an increasing Lyapunov function along an orbit (trajectory) of the operator \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1}\).
Let \(\mathbf {x}\in int \mathbb {S}^{m-1}\). It follows from (3.2) for any \(k,t\in \mathbf {I}_m\) that if \(x_t=x_k\) then \((\mathcal {R}(\mathbf {x}))_t=(\mathcal {R}(\mathbf {x}))_k\) and if \(x_t\ne x_k\) then
which yields the following equality
Since \(0<\varepsilon<1\), \(0<g(\mathbf {x})<1\), \(0\le f(x_i)\le 1\) for all \(i\in \mathbf {I}_m\), and the function xf(x) is increasing on [0, 1], i.e.,
we obtain \(\mathsf {Sign}\left( (\mathcal {R}(\mathbf {x}))_t-(\mathcal {R}(\mathbf {x}))_k\right) =\mathsf {Sign}(x_t-x_k)\) for any \(k,t\in \mathbf {I}_m\) and
Moreover, if \(t\in \mathsf {MaxInd}\left( \mathcal {R}(\mathbf {x})\right) =\mathsf {MaxInd}(\mathbf {x})\) then for all \(k\in \mathbf {I}_m\)
This means \(\mathcal {M}_k\left( \mathcal {R}(\mathbf {x})\right) \ge \mathcal {M}_k(\mathbf {x})\) for all \(k\in \mathbf {I}_m\).
By repeating this process, we get for all \(k\in \mathbf {I}_m\), \(n\in \mathbb {N}\), and \(\mathbf {x}\in int \mathbb {S}^{m-1}\) that
This shows that \(\mathcal {M}_k(\mathbf {x})\) for \(k\in \mathbf {I}_m\) is a Lyapunov function over the set \(int \mathbb {S}^{m-1}\). This completes the proof. \(\square \)
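Proposition 5.1 can be probed numerically. The sketch below is built on an assumption: since the display of (3.2) is not reproduced here, we take the update rule \((\mathcal {R}(\mathbf {x}))_k=x_k\bigl (1+\varepsilon f(x_k)g(\mathbf {x})\bigr )/\bigl (1+\varepsilon g(\mathbf {x})\sum _i x_if(x_i)\bigr )\), which is consistent with the payoffs \(F_k(\mathbf {x})=\varepsilon f(x_k)g(\mathbf {x})\) mentioned in Sect. 8, together with the hypothetical choices \(f(x)=x\) and \(g\equiv 1/2\).

```python
import numpy as np

# Hypothetical ingredients: f must be strictly increasing with 0 <= f <= 1,
# and g must be a non-vanishing function into (0, 1).
f = lambda x: x            # identity: strictly increasing on [0, 1]
g = lambda x: 0.5          # constant function into (0, 1)

def replicator_step(x, eps):
    """One step of the ASSUMED replicator update
    (R(x))_k = x_k (1 + eps f(x_k) g(x)) / (1 + eps g(x) sum_i x_i f(x_i))."""
    fx, gx = f(x), g(x)
    return x * (1 + eps * fx * gx) / (1 + eps * gx * np.dot(x, fx))

rng = np.random.default_rng(0)
eps = 0.5                                  # positive regime
for _ in range(100):
    x = rng.random(5); x /= x.sum()        # random interior point of S^4
    y = replicator_step(x, eps)
    # Sign preservation: the coordinates keep their relative order.
    assert np.array_equal(np.argsort(x), np.argsort(y))
    # M_k(R(x)) >= M_k(x) for every k: the Lyapunov function increases.
    assert np.all(y.max() - y >= x.max() - x - 1e-12)
```

At every sampled interior point the coordinates keep their relative order after one step and each \(\mathcal {M}_k\) does not decrease, exactly as the proposition asserts.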
5.2 The Stability Analysis
Theorem 5.2
If \(\varepsilon \in (0,1)\) then an orbit of the replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) starting from any initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\) converges to the center of the face \(\mathbb {S}^{|\mathsf {MaxInd}(\mathbf {x})|-1}\).
Proof
Without loss of generality, we may assume that \(\mathbf {x}\in int \mathbb {S}^{m-1}\). Otherwise, we choose a suitable Lyapunov function given in the part (ii) of Proposition 5.1 depending on an initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\).
We fix \(t_0\in \mathsf {MaxInd}(\mathbf {x})\). As shown in Proposition 5.1, we obtain for any \(n\in \mathbb {N}\) that \(\mathsf {MaxInd}(\mathbf {x})=\mathsf {MaxInd}\left( \mathcal {R}^{(n)}(\mathbf {x})\right) \). Since the sequence \( \left\{ \mathcal {M}_k\left( \mathcal {R}^{(n)}(\mathbf {x})\right) \right\} _{n=0}^\infty \) where
is convergent for each \(k\in \mathbf {I}_m\), the sequence \(\left\{ \left( \mathcal {R}^{(n)}(\mathbf {x})\right) _{t_0}\right\} _{n=0}^\infty \) where
is also convergent (here m is the number of pure strategies, \(\mathbf {I}_m=\{1,2,\ldots ,m\}\)). Hence, the sequence \(\left\{ \left( \mathcal {R}^{(n)}(\mathbf {x})\right) _{k}\right\} _{n=0}^\infty \) for each \(k\in \mathbf {I}_m\) where
is also convergent. This means that the trajectory \(\left\{ \mathcal {R}^{(n)}(\mathbf {x})\right\} _{n=0}^\infty \) is convergent and its omega-limit point is some fixed (rest) point \(\mathbf {x}^{*}\). We now want to show that
Since \(\mathsf {MaxInd}(\mathbf {x})=\mathsf {MaxInd}\left( \mathcal {R}^{(n)}(\mathbf {x})\right) \) and \(\mathcal {M}_k\left( \mathcal {R}^{(n+1)}(\mathbf {x})\right) \ge \mathcal {M}_k\left( \mathcal {R}^{(n)}(\mathbf {x})\right) \) for any \(n\in \mathbb {N}\) and \(k\in \mathbf {I}_m\), we have that \(\mathsf {MaxInd}(\mathbf {x}^{*})\supset \mathsf {MaxInd}(\mathbf {x})\). We now show \(\mathsf {MaxInd}(\mathbf {x}^{*})\subset \mathsf {MaxInd}(\mathbf {x})\). Indeed, if \(k_0\in \mathsf {MaxInd}(\mathbf {x}^{*})\) then we get that
This means \(\mathcal {M}_{k_0}\left( \mathbf {x}\right) =0\) or \(k_0\in \mathsf {MaxInd}(\mathbf {x})\). Hence, \(\mathsf {MaxInd}(\mathbf {x}^{*})=\mathsf {MaxInd}(\mathbf {x})\). Consequently, by Theorem A, among the centers of all faces of the simplex, the only fixed point \(\mathbf {x}^{*}\) satisfying \(\mathsf {MaxInd}(\mathbf {x}^{*})=\mathsf {MaxInd}(\mathbf {x})\) is the center of the face \(\mathbb {S}^{|\mathsf {MaxInd}(\mathbf {x})|-1}\). This completes the proof. \(\square \)
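Theorem 5.2 can be illustrated with the same assumed update rule as before (a reconstruction consistent with \(F_k(\mathbf {x})=\varepsilon f(x_k)g(\mathbf {x})\), not a quotation of (3.2)) and the hypothetical choices \(f(x)=x\), \(g\equiv 1/2\): an orbit started where the first two coordinates tie for the maximum converges to the center of the face they span.

```python
import numpy as np

# Hypothetical ingredients; the update below is an ASSUMED form of (3.2).
f = lambda x: x            # strictly increasing on [0, 1]
g = lambda x: 0.5          # constant function into (0, 1)

def replicator_step(x, eps):
    fx, gx = f(x), g(x)
    return x * (1 + eps * fx * gx) / (1 + eps * gx * np.dot(x, fx))

# MaxInd(x) = {1, 2}: the first two coordinates tie for the maximum.
x = np.array([0.3, 0.3, 0.25, 0.15])
for _ in range(3000):
    x = replicator_step(x, eps=0.5)      # positive regime

# Theorem 5.2 predicts convergence to the center of the face spanned by
# e_1 and e_2, i.e. to (1/2, 1/2, 0, 0).
assert np.allclose(x, [0.5, 0.5, 0.0, 0.0], atol=1e-8)
```

Starting from an initial point with a unique maximal coordinate instead, the orbit converges to the corresponding vertex, in line with part (i) of Theorem B.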
5.3 The Proof of Theorem B
(i) We show that the stable rest points are only the vertices of the simplex.
On the one hand, according to Theorem 5.2, if \(|\alpha |>1\) for \(\alpha \subset \mathbf {I}_m\) then the center \(\mathbf {c}_\alpha \) of the face \(\mathbb {S}^{|\alpha |-1}\) (which is a rest (fixed) point) is not stable. Indeed, for any small neighborhood \(U(\mathbf {c}_\alpha )\) of the center \(\mathbf {c}_\alpha \) there are points \(\mathbf {x}\in U(\mathbf {c}_\alpha )\) and \(\mathbf {y}\in U(\mathbf {c}_\alpha )\) such that \(|\mathsf {MaxInd}(\mathbf {x})|=1\), \(|\mathsf {MaxInd}(\mathbf {y})|=1\), and \(\mathsf {MaxInd}(\mathbf {x})\ne \mathsf {MaxInd}(\mathbf {y})\) for which the orbits of the points \(\mathbf {x}\) and \(\mathbf {y}\) converge to two different vertices of the simplex \({\mathbb {S}}^{m-1}\). Therefore, the center \(\mathbf {c}_\alpha \) of the face \(\mathbb {S}^{|\alpha |-1}\) is not stable whenever \(|\alpha |>1\) for \(\alpha \subset \mathbf {I}_m\).
On the other hand, according to Theorem 5.2, the vertex \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) of the simplex \({\mathbb {S}}^{m-1}\) is stable. Indeed, for any small neighborhood \(U(\mathbf {e}_k)\) of the vertex \(\mathbf {e}_k\) one has \(\mathsf {MaxInd}(\mathbf {x})=\{k\}\) for any \(\mathbf {x}\in U(\mathbf {e}_k)\). As shown in Proposition 5.1, \(\mathsf {MaxInd}\left( \mathcal {R}^{(n)}(\mathbf {x})\right) =\mathsf {MaxInd}(\mathbf {x})=\{k\}\) for any \(n\in \mathbb {N}\). Since \(\varepsilon >0\), it follows from (3.2) and \(k\in \mathsf {MaxInd}(\mathbf {x})\) that
Hence, we obtain \(\Vert \mathcal {R}(\mathbf {x})-\mathbf {e}_k\Vert _1=2(1-\left( \mathcal {R}(\mathbf {x})\right) _k)\le 2(1-x_k)=\Vert \mathbf {x}-\mathbf {e}_k\Vert _1\) which implies \(\mathcal {R}(U(\mathbf {e}_k))\subset U(\mathbf {e}_k)\) and consequently \(\mathcal {R}^{(n)}(U(\mathbf {e}_k))\subset U(\mathbf {e}_k)\). This means that the vertex \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) of the simplex \({\mathbb {S}}^{m-1}\) is stable.
(ii) We first show that the vertex \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) of the simplex \({\mathbb {S}}^{m-1}\) is a strict Nash equilibrium, i.e., the strategy \(\mathbf {e}_k\) is the unique best reply to itself, \(\mathsf {BR}(\mathbf {e}_k)=\{\mathbf {e}_k\}\) for all \(k\in \mathbf {I}_m\). Indeed, for all \(\mathbf {x}\in {\mathbb {S}}^{m-1}{\setminus }\{\mathbf {e}_k\}\) we have that
We have already shown that the vertex \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) of the simplex \({\mathbb {S}}^{m-1}\) is both stable (due to part (i)) and attracting (due to Theorem 5.2). Hence, the strict Nash equilibrium \(\mathbf {e}_k\) for \(k\in \mathbf {I}_m\) is asymptotically stable.
(iii) Due to Theorem 5.2, an interior orbit starting from any initial point \(\mathbf {x}\in int \mathbb {S}^{m-1}\) converges to the center of the face \(\mathbb {S}^{|\mathsf {MaxInd}(\mathbf {x})|-1}\) which is a Nash equilibrium. This completes the proof of Theorem B.
6 The Regime \(\varepsilon \in (-1,0)\)
We first define the following constant
Since the function xf(x) is continuously differentiable on [0, 1], \(\mu \) is well-defined.
6.1 The Lyapunov Function
We study the dynamics of the discrete-time replicator equation given by (3.2) by means of a Lyapunov function.
Proposition 6.1
Let \(\varepsilon \in (-\frac{1}{\mu },0)\cap (-1,0)\). Then the following statements hold true:
(i) \(\mathcal {M}_k(\mathbf {x}):=\max \limits _{i\in \mathbf {I}_m}\{x_{i}\}-x_k\) for \(k\in \mathbf {I}_m\) is a decreasing Lyapunov function for \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1}\);
(ii) \(\mathcal {M}_{\alpha , k}(\mathbf {x}):=\max \limits _{i\in \alpha }\{x_{i}\}-x_k\) for \(k\in \alpha \subset \mathbf {I}_m\) is a decreasing Lyapunov function for \(\mathcal {R}:int \mathbb {S}^{|\alpha |-1}\rightarrow int \mathbb {S}^{|\alpha |-1}\).
Proof
Here, we only present the proof of part (i); part (ii) can be proved similarly, and its proof is omitted.
We show that \(\mathcal {M}_k(\mathbf {x})=\max \limits _{i\in \mathbf {I}_m}\{x_{i}\}-x_k\) for \(k\in \mathbf {I}_m\) is a decreasing Lyapunov function along an orbit (trajectory) of the operator \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1}\).
Let \(\mathbf {x}\in int \mathbb {S}^{m-1}\). It follows from (3.2) for any \(k,t\in \mathbf {I}_m\) that if \(x_t=x_k\) then \((\mathcal {R}(\mathbf {x}))_t=(\mathcal {R}(\mathbf {x}))_k\) and if \(x_t\ne x_k\) then
Since \(-\frac{1}{\mu }<\varepsilon <0\) and for all \(t,k\in \mathbf {I}_m\)
we obtain \(\mathsf {Sign}\left( (\mathcal {R}(\mathbf {x}))_t-(\mathcal {R}(\mathbf {x}))_k\right) =\mathsf {Sign}(x_t-x_k)\) for any \(k,t\in \mathbf {I}_m\) and
Moreover, if \(t\in \mathsf {MaxInd}\left( \mathcal {R}(\mathbf {x})\right) =\mathsf {MaxInd}(\mathbf {x})\) then for all \(k\in \mathbf {I}_m\)
Since \(\varepsilon <0\), we obtain that \(\mathcal {M}_k\left( \mathcal {R}(\mathbf {x})\right) \le \mathcal {M}_k(\mathbf {x})\) for all \(k\in \mathbf {I}_m\).
By repeating this process, we get for all \(k\in \mathbf {I}_m\), \(n\in \mathbb {N}\), and \(\mathbf {x}\in int \mathbb {S}^{m-1}\) that
This shows that \(\mathcal {M}_k(\mathbf {x})\) for \(k\in \mathbf {I}_m\) is a Lyapunov function over the set \(int \mathbb {S}^{m-1}\). This completes the proof. \(\square \)
6.2 The Stability Analysis
Theorem 6.2
If \(\varepsilon \in (-\frac{1}{\mu },0)\cap (-1,0)\) then an orbit of the replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) starting from any initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\) converges to the center of the face \(\mathbb {S}^{|supp (\mathbf {x})|-1}\).
Proof
Without loss of generality, we may assume that \(\mathbf {x}\in int \mathbb {S}^{m-1}\). Otherwise, we choose a suitable Lyapunov function given in the part (ii) of Proposition 6.1 depending on an initial point \(\mathbf {x}\in \mathbb {S}^{m-1}\).
Since \(\mathcal {M}_k(\mathbf {x})\) for \(k\in \mathbf {I}_m\) is a Lyapunov function over the set \(int \mathbb {S}^{m-1}\), one can show, by the same technique as in the proof of Theorem 5.2, that the orbit (trajectory) \(\left\{ \mathcal {R}^{(n)}(\mathbf {x})\right\} _{n=0}^\infty \) of the operator \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1}\) starting from any initial point \(\mathbf {x}\in int \mathbb {S}^{m-1}\) is convergent and its omega-limit point is some fixed (rest) point \(\mathbf {x}^{**}\). We now show that this orbit is separated from the boundary \(\partial \mathbb {S}^{m-1}\) of the simplex \(\mathbb {S}^{m-1}\), i.e., for some \(\delta _0>0\) we have
Indeed, since we already showed in part (i) of Proposition 6.1 that
for any \(k,t\in \mathbf {I}_m\), we obtain \(\mathsf {MinInd}\left( \mathcal {R}(\mathbf {x})\right) =\mathsf {MinInd}(\mathbf {x})\) for all \(\mathbf {x}\in int \mathbb {S}^{m-1}\).
Since \(\varepsilon <0\), it follows from (3.2) and \(k\in \mathsf {MinInd}(\mathbf {x})\) that
This means that \( \min \limits _{k\in \mathbf {I}_m}\left( \mathcal {R}(\mathbf {x})\right) _k\ge \delta _0\) for \(\delta _0:=\min \limits _{k\in \mathbf {I}_m}x_k>0\) (since \(\mathbf {x}\in int \mathbb {S}^{m-1}\)).
By repeating this process, we obtain that
Since the trajectory \(\left\{ \mathcal {R}^{(n)}(\mathbf {x})\right\} _{n=0}^\infty \) of the operator \(\mathcal {R}:int \mathbb {S}^{m-1}\rightarrow int \mathbb {S}^{m-1}\) is separated from the boundary \(\partial \mathbb {S}^{m-1}\) of the simplex \(\mathbb {S}^{m-1}\), its omega-limit point \(\mathbf {x}^{**}\) belongs to the interior \(int \mathbb {S}^{m-1}\) of the simplex \(\mathbb {S}^{m-1}\). Moreover, the only fixed point in the interior \(int \mathbb {S}^{m-1}\) of the simplex \(\mathbb {S}^{m-1}\) is its center. Hence, the omega-limit point of the trajectory starting from any initial point \(\mathbf {x}\in int \mathbb {S}^{m-1}\) is the center of the simplex \(\mathbb {S}^{m-1}\). This completes the proof. \(\square \)
6.3 The Proof of Theorem C
(i) We show that the only stable rest point is the center \(\mathbf {c}=(\frac{1}{m},\ldots ,\frac{1}{m})\) of the simplex \(\mathbb {S}^{m-1}\) which is, due to Theorem A, a Nash equilibrium.
On the one hand, according to Theorem 6.2, if \(|\alpha |<m\) for \(\alpha \subset \mathbf {I}_m\) (which means \(\alpha \ne \mathbf {I}_m\)) then the center \(\mathbf {c}_\alpha \) of the face \(\mathbb {S}^{|\alpha |-1}\) (which is a rest point) is not stable. Indeed, for any small neighborhood \(U(\mathbf {c}_\alpha )\subset \mathbb {S}^{m-1}\) of the center \(\mathbf {c}_\alpha \) there is a point \(\mathbf {x}\in U(\mathbf {c}_\alpha )\cap int \mathbb {S}^{m-1}\) such that the orbit of the point \(\mathbf {x}\) converges to the center \(\mathbf {c}\) of the simplex \({\mathbb {S}}^{m-1}\). Therefore, the center \(\mathbf {c}_\alpha \) of the face \(\mathbb {S}^{|\alpha |-1}\) is not stable whenever \(|\alpha |<m\).
On the other hand, according to Theorem 6.2, the center \(\mathbf {c}\) of the simplex \(\mathbb {S}^{m-1}\) is stable. Namely, for any ball \(U_R(\mathbf {c})\subset int \mathbb {S}^{m-1}\) of radius R there exists a (small) ball \(U_r(\mathbf {c})\subset int \mathbb {S}^{m-1}\) of radius r (we may choose \(r=\frac{R}{m}\)) such that \(\Vert \mathcal {R}^{(n)}(\mathbf {x})-\mathbf {c}\Vert _1\le R\) for any \(n\in \mathbb {N}\) whenever \(\Vert \mathbf {x}-\mathbf {c}\Vert _1\le r\). Indeed, it is clear that \(\max _{i\in \mathbf {I}_m}\{x_i\}-\min _{i\in \mathbf {I}_m}\{x_i\}\le \Vert \mathbf {x}-\mathbf {c}\Vert _1\le r\). Since \(\varepsilon <0\), for \(k_1\in \mathsf {MinInd}(\mathbf {x})\) and \(k_2\in \mathsf {MaxInd}(\mathbf {x})\) it follows from (3.2) that
Since \(\mathsf {MinInd}\left( \mathcal {R}(\mathbf {x})\right) =\mathsf {MinInd}(\mathbf {x})\) and \( \mathsf {MaxInd}\left( \mathcal {R}(\mathbf {x})\right) =\mathsf {MaxInd}(\mathbf {x})\), we obtain
Consequently, we obtain
By repeating this process, we obtain that \(\Vert \mathcal {R}^{(n)}(\mathbf {x})-\mathbf {c}\Vert _1\le R\) for any \(n\in \mathbb {N}\) whenever \(\Vert \mathbf {x}-\mathbf {c}\Vert _1\le r\). This means that the center \(\mathbf {c}\) of the simplex \(\mathbb {S}^{m-1}\) is the only stable rest point.
(ii) We show that the center \(\mathbf {c}\) of the simplex \(\mathbb {S}^{m-1}\), which is the unique Nash equilibrium, is not a strict Nash equilibrium. Namely, we show \(\mathsf {BR}(\mathbf {c})=\mathbb {S}^{m-1}\). Indeed, for all \(\mathbf {x}\in \mathbb {S}^{m-1}\) we have that
(iii) Due to Theorem 6.2, an interior orbit starting from any initial point \(\mathbf {x}\in int \mathbb {S}^{m-1}\) converges to the center \(\mathbf {c}\) of the simplex \(\mathbb {S}^{m-1}\) which is a Nash equilibrium. This completes the proof of Theorem C.
7 Applications
In replicator equation (3.2), the fitness function \(F:\mathbb {S}^{m-1}\rightarrow \mathbb {R}^m\) is defined by a strictly increasing continuous function \(f:[0,1]\rightarrow [0,1]\) and a smooth non-vanishing function \(g:\mathbb {S}^{m-1}\rightarrow (0,1)\).
In this section, we choose population models for a single species such as Beverton–Holt’s model [5], Hassell’s model [18], Maynard Smith–Slatkin’s model [29], Ricker’s model [39], Skellam’s model [46] as a strictly increasing continuous function \(f:[0,1]\rightarrow [0,1]\). In general, there are two types of population models for a single species: density-independent and density-dependent population models. In turn, density-dependent population models can be split into two classes:
- under-compensating models, which are given by monotone functions;
- over-compensating models, which are given by unimodal functions.
Over-compensating population models, which are given by unimodal (non-monotone) functions, may produce cycles and chaos (see [16]). However, in this paper, we only consider under-compensating population models, which are given by monotone functions.
Meanwhile, as the smooth non-vanishing function \(g:\mathbb {S}^{m-1}\rightarrow (0,1)\), we can choose, for example, one of the following functions:
- \(g(\mathbf {x})=a_1x^{1+r_1}_1+\cdots +a_mx^{1+r_m}_m\) for any \(\mathbf {x}\in \mathbb {S}^{m-1}, \ a_i\in (0,1), \ r_i\in \mathbb {R}_{+}, \ i\in \mathbf {I}_m\);
- \(g(\mathbf {x})=a_1x_1e^{-b_1x_1}+\cdots +a_mx_me^{-b_mx_m}\) for any \(\mathbf {x}\in \mathbb {S}^{m-1}, \ a_i,b_i\in (0,1), \ i\in \mathbf {I}_m\);
- \(g(\mathbf {x})=a_1x_1h_1(x_1)+\cdots +a_mx_mh_m(x_m)\) for any \(\mathbf {x}\in \mathbb {S}^{m-1}, \ a_i\in (0,1)\), where \(h_i:[0,1]\rightarrow [0,1]\) is any smooth function such that \(h_i(t)>0\) for all \(t>0, \ i\in \mathbf {I}_m\).
However, we do not specify the function \(g:\mathbb {S}^{m-1}\rightarrow (0,1)\) in the examples given below.
Example 7.1
Let
where \(0<a\le (1+b)^s\), \(b>0\), \(q>0\), \(r>0\), \(s>0\), and \(0<bsq\le (1+b)r\).
(i) If \(q=r=s=1\) then we obtain Beverton–Holt’s model [5];
(ii) If \(r=q=1\) then we obtain Hassell’s model [18];
(iii) If \(r=s=1\) then we obtain Maynard Smith–Slatkin’s model [29].
The discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) takes the following form
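The display defining f in Example 7.1 is not reproduced in this version. The stated constraints are consistent with the family \(f(x)=ax^r/(1+bx^q)^s\) (a reconstruction, not a quotation): then \(0<a\le (1+b)^s\) gives \(f(1)\le 1\) and \(0<bsq\le (1+b)r\) gives monotonicity on [0, 1]. A numerical check of these properties for that hypothetical family:

```python
import numpy as np

def f(x, a, b, q, r, s):
    """Hypothetical family consistent with the constraints of Example 7.1:
    f(x) = a x^r / (1 + b x^q)^s."""
    return a * x**r / (1 + b * x**q)**s

xs = np.linspace(0.0, 1.0, 2001)
cases = [
    dict(a=2.0, b=2.0, q=1, r=1, s=1),   # Beverton-Holt: 2x / (1 + 2x)
    dict(a=3.0, b=1.0, q=1, r=1, s=2),   # Hassell: 3x / (1 + x)^2
    dict(a=2.0, b=1.0, q=2, r=1, s=1),   # Maynard Smith-Slatkin: 2x / (1 + x^2)
]
for p in cases:
    # The stated parameter constraints of Example 7.1:
    assert 0 < p["a"] <= (1 + p["b"])**p["s"]
    assert 0 < p["b"] * p["s"] * p["q"] <= (1 + p["b"]) * p["r"]
    ys = f(xs, **p)
    assert np.all(np.diff(ys) >= -1e-12)      # monotone on [0, 1]
    assert ys.min() >= 0 and ys.max() <= 1    # maps [0, 1] into [0, 1]
```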
Example 7.2
Let
where \(0<a\le 1\) and \(0<b\le r\). If \(r=1\) then we obtain Ricker’s model [39]. The discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) takes the following form
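Again the display for f is not reproduced; the constraints of Example 7.2 are consistent with the hypothetical family \(f(x)=ax^re^{-bx}\), which gives Ricker's map \(axe^{-bx}\) for \(r=1\). Here \(0<a\le 1\) keeps \(f(1)\le 1\), and \(0<b\le r\) keeps f monotone on [0, 1], since \(f'(x)=ax^{r-1}e^{-bx}(r-bx)\ge 0\) there. A numerical check, including an over-compensating case for contrast:

```python
import numpy as np

def f(x, a, b, r):
    """Hypothetical family consistent with the constraints of Example 7.2:
    f(x) = a x^r exp(-b x); r = 1 gives Ricker's map."""
    return a * x**r * np.exp(-b * x)

xs = np.linspace(0.0, 1.0, 2001)
for a, b, r in [(1.0, 1.0, 1.0), (0.8, 0.5, 1.0), (1.0, 2.0, 2.5)]:
    assert 0 < a <= 1 and 0 < b <= r          # the stated constraints
    ys = f(xs, a, b, r)
    assert np.all(np.diff(ys) >= -1e-12)      # monotone on [0, 1]
    assert ys.min() >= 0 and ys.max() <= 1    # maps [0, 1] into [0, 1]

# For contrast: b > r violates the constraint, and the classic Ricker hump
# appears, i.e. the map becomes unimodal (over-compensating) on [0, 1].
ys = f(xs, 1.0, 3.0, 1.0)
assert np.any(np.diff(ys) < 0)
```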
Example 7.3
Let
where \(a,r>0\), \(\max \{1-\frac{1}{a},0\}< b\le 1\) and \(0<c \le \ln b^{-1}\). If \(r=0\) then we obtain Skellam’s model [46]. The discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) takes the following form
Example 7.4
Let
where \(a_1, \ \ldots , \ a_n > 0\) such that \(0<\sum \nolimits _{k=1}^{n}a_k\le 1\) and \(r_n>r_{n-1}>\cdots>r_2>r_1>0\). Some special cases have already been studied: \(n=1\) with \(r_1=1\) in [25, 40, 41] and \(n=1\) with any \(r_1\in \mathbb {N}\) in [32, 33]. The discrete-time replicator equation \(\mathcal {R}:\mathbb {S}^{m-1}\rightarrow \mathbb {S}^{m-1}\) takes the following form
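The constraints of Example 7.4 are consistent with the map \(f(x)=\sum _{k=1}^{n}a_kx^{r_k}\) (again a reconstruction, since the display is not reproduced). Each term is strictly increasing, so f is strictly increasing, and \(f(1)=\sum _k a_k\le 1\) keeps f inside [0, 1]:

```python
import numpy as np

a = np.array([0.2, 0.3, 0.4])     # a_k > 0 with sum(a) <= 1
r = np.array([0.5, 1.0, 3.0])     # 0 < r_1 < r_2 < r_3

def f(x):
    """Presumed map of Example 7.4: f(x) = sum_k a_k x^{r_k}."""
    return sum(ak * x**rk for ak, rk in zip(a, r))

xs = np.linspace(0.0, 1.0, 2001)
ys = f(xs)
assert np.all(np.diff(ys) > 0)            # strictly increasing on [0, 1]
assert ys[0] == 0 and ys[-1] <= 1         # f(0) = 0 and f(1) = sum(a) <= 1
```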
8 Conclusions
The basis for evolutionary game theory is the Folk Theorem of Evolutionary Game Theory (see [7, 23, 24]) which asserts the following three statements for all “reasonable” dynamics (particularly, the replicator equation) of an evolutionary game:
(i) A stable rest point is a Nash equilibrium;
(ii) A strict Nash equilibrium is asymptotically stable;
(iii) Any interior convergent orbit evolves to a Nash equilibrium.
In the continuous case, the replicator and other types of equations are thoroughly studied for various kinds of evolutionary games such as potential games, stable games, supermodular games, imitation dynamics, best response dynamics, and monotone dynamics in the literature [6, 7, 14, 22,23,24, 32, 33, 36, 43,44,45, 53].
In this paper, we have considered a discrete-time replicator equation. Namely, we have proposed a nonlinear model of an evolutionary game in which, for any two distinct pure strategies, the biological fitness of the more frequent strategy is better/worse than that of the less frequent one. In particular, the nonlinear payoff functions defined by discrete population models such as Beverton–Holt’s model, Hassell’s model, Maynard Smith–Slatkin’s model, Ricker’s model, and Skellam’s model satisfy this hypothesis. In order to observe an evolutionary bifurcation diagram, we also control the nonlinear payoff function in two different regimes: positive and negative. It has been shown that the Folk Theorem of Evolutionary Game Theory holds for a discrete-time replicator equation governed by the proposed nonlinear payoff function. In the long run, the following scenario is observed: (i) in the positive regime, the active dominating pure strategies outcompete the other strategies and only they survive forever; (ii) in the negative regime, all active pure strategies coexist and survive forever.
As an application, we also showed that nonlinear payoff functions defined by under-compensating population models for a single species satisfy the hypothesis of the proposed model. It is of independent interest to study the dynamics of replicator equations governed by nonlinear payoff functions of over-compensating population models for a single species. Since over-compensating population models, which are given by unimodal functions, may produce cycles and chaos (see [16]), one could expect complicated and non-convergent dynamics of the associated discrete-time replicator equations.
Finally, we would like to mention that it is also plausible to consider a payoff function which takes on negative values. For example, consider the simple highway congestion game in the economics of transportation (see [3, 43]). A pair of towns, Home and Work, are connected by m different roads \(\mathbf {I}_m=\{1,2,\ldots , m\}\). To commute from Home to Work, an agent must choose one of these roads. Obviously, driving on a road increases the delays experienced by the other drivers on that road. Therefore, highway congestion involves negative externalities, and the payoff of choosing a road is the negation of the delay on that road. Meanwhile, the delay on a road is an increasing function of the number of agents driving on it. To formalize this mathematically, let \(\mathbf {x}=(x_1,\ldots ,x_m)\) be a probability vector for which \(x_k\) is the frequency of agents driving on road k. Let \(f(x_k)\) be the delay on road k, which is an increasing function of the variable \(x_k\), and let \(g(\mathbf {x})\) be an external cost function which has an equal impact on the usage of every road. The payoff \(F_k(\mathbf {x})\) of choosing road k is then the negation (expressed by \(\varepsilon <0\)) of the product of the delay function \(f(x_k)\) on road k and the common external cost function \(g(\mathbf {x})\), i.e., \(F_k(\mathbf {x})=\varepsilon f(x_k)g(\mathbf {x})\). Hence, it is plausible to consider the negative payoff functions \(F(\mathbf {x})=\left( \varepsilon f(x_1)g(\mathbf {x}),\ \varepsilon f(x_2)g(\mathbf {x}), \ \cdots \ ,\ \varepsilon f(x_m)g(\mathbf {x})\right) \) for \(\varepsilon <0\) in the economics of transportation (for a detailed discussion, the reader may refer to [3, 43]).
References
Adams MR, Sornborger AT (2007) Analysis of a certain class of replicator equations. J Math Biol 54:357–384
Balkenborg D, Schlag K (2001) Evolutionarily stable sets. Game Theory 29:571–595
Beckmann M, McGuire CB, Winsten CB (1956) Studies in the economics of transportation. Yale University Press, New Haven
Benaim M, Hofbauer J, Sorin S (2012) Perturbations of set-valued dynamical systems, with applications to game theory. Dyn Games Appl 2(2):195–205
Beverton RJH, Holt SJ (1957) On the dynamics of exploited fish populations. Fisheries investigations, series 2, vol 19. H.M. Stationery Office, London
Cressman R (1992) The stability concept of evolutionary game theory: a dynamic approach. Springer, Berlin
Cressman R (2003) Evolutionary dynamics and extensive form games. MIT Press, Cambridge
Cooney DB (2019) The replicator dynamics for multilevel selection in evolutionary games. J Math Biol 79:101–154
Duong MH, Han TA (2020) On equilibrium properties of the replicator–mutator equation in deterministic and random games. Dyn Games Appl 10:641–663
Friedman D (1991) Evolutionary games in economics. Econometrica 59(3):637–666
Friedman D (1998) On economic applications of evolutionary game theory. J Evol Econ 8(1):15–43
Ganikhodjaev N, Saburov M, Jamilov U (2013) Mendelian and non-Mendelian quadratic operators. Appl Math Inf Sci 7:1721–1729
Ganikhodjaev N, Saburov M, Nawi AM (2014) Mutation and chaos in nonlinear models of heredity. Sci World J 2014:1–11
Ganikhodzhaev RN, Saburov M (2008) A generalized model of the nonlinear operators of Volterra type and Lyapunov functions. J Sib Fed Univ Math Phys 1(2):188–196
Garay J, Cressman R, Mori TF, Varga T (2018) The ESS and replicator equation in matrix games under time constraints. J Math Biol 76:1951–1973
Geritz SAH, Kisdi É (2004) On the mechanistic underpinning of discrete-time population models with complex dynamics. J Theor Biol 228(2):261–269
Gomes DA, Saude J (2021) A mean-field game approach to price formation. Dyn Games Appl 11:29–53
Hassell MP (1975) Density-dependence in single-species populations. J Anim Ecol 44:283–295
Hofbauer J (1996) Evolutionary dynamics for bimatrix games: a Hamiltonian system? J Math Biol 34:675–688
Hofbauer J (2018) Minmax via Replicator Dynamics. Dyn Games Appl 8:637–640
Hofbauer J, Sandholm WH (2009) Stable games and their dynamics. J Econ Theory 144(4):1665–1693
Hofbauer J, Sigmund K (1988) The theory of evolution and dynamical systems. Cambridge University Press, Cambridge
Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge
Hofbauer J, Sigmund K (2003) Evolutionary game dynamics. Bull Am Math Soc 40(4):479–519
Jamilov U, Khamraev A, Ladra M (2018) On a Volterra cubic stochastic operator. Bull Math Biol 80:319–334
Maynard Smith J (1974) The theory of games and the evolution of animal conflicts. J Theor Biol 47(1):209–221
Maynard Smith J (1982) Evolution and the theory of games. Cambridge University Press, Cambridge
Maynard Smith J, Price GR (1973) The logic of animal conflict. Nature 246(5427):15–18
Maynard Smith J, Slatkin M (1973) The stability of predator–prey systems. Ecology 54:384–391
Mitrinovic DS, Pecaric JE, Fink AM (1993) Classical and new inequalities in analysis. Mathematics and its applications, vol 61. Kluwer Academic Publishers, Dordrecht
Mukhamedov F, Saburov M (2010) On homotopy of Volterrian quadratic stochastic operator. Appl Math Inf Sci 4:47–62
Mukhamedov F, Saburov M (2014) On dynamics of Lotka-Volterra type operators. Bull Malays Math Sci Soc 37:59–64
Mukhamedov F, Saburov M (2017) Stability and monotonicity of Lotka-Volterra type operators. Qual Theory Dyn Syst 16:249–267
Nash JF (1950) Equilibrium points in n-person games. Proc Natl Acad Sci USA 36(1):48–49
Nash JF (1951) Non-cooperative games. Ann Math 54:287–295
Nowak MA (2006) Evolutionary dynamics: exploring the equations of life. Harvard University Press, Cambridge
Pohley H-J, Thomas B (1983) Non-linear ESS models and frequency dependent selection. Biosystems 16:87–100
Pontz M, Hofbauer J, Burger R (2018) Evolutionary dynamics in the two-locus two-allele model with weak selection. J Math Biol 76:151–203
Ricker WE (1954) Stock and recruitment. J Fish Res Bd Canada 11:559–623
Rozikov U, Hamraev A (2004) On a cubic operator defined in finite dimensional simplex. Ukr Math J 56:1418–1427
Rozikov U (2020) Population dynamics: algebraic and probabilistic approach. World Scientific, Singapore
Saburov M (2013) Some strange properties of quadratic stochastic Volterra operators. World Appl Sci J 21:94–97
Sandholm WH (2010) Population games and evolutionary dynamics. MIT Press, Cambridge
Schuster P, Sigmund K (1983) Replicator dynamics. J Theor Biol 100(3):533–538
Sigmund K (2010) Evolutionary game dynamics: American Mathematical Society short course. American Mathematical Society, Providence
Skellam JG (1951) Random dispersal in theoretical populations. Biometrika 38:196–218
Swinkels J (1992) Evolutionary stability with equilibrium entrants. J Econ Theory 57:306–332
Taylor PD (1979) Evolutionarily stable strategies with two types of players. J Appl Probab 16:76–83
Taylor PD, Jonker L (1978) Evolutionarily stable strategies and game dynamics. Math Biosci 40:145–156
Thomas B (1984) Evolutionary stability: states and strategies. Theor Popul Biol 26:49–67
Thomas B (1985) On evolutionarily stable sets. J Math Biol 22:105–115
von Neumann J, Morgenstern O (1944) Theory of games and economic behavior. Princeton University Press, Princeton
Weibull JW (1995) Evolutionary game theory. MIT Press, Cambridge
Acknowledgements
This work was supported by American University of the Middle East, Kuwait. The author is greatly indebted to two anonymous reviewers for carefully reading the manuscript and for providing such constructive comments and suggestions which substantially contributed to improving the quality and presentation of the paper.
Cite this article
Saburov, M. On Discrete-Time Replicator Equations with Nonlinear Payoff Functions. Dyn Games Appl 12, 643–661 (2022). https://doi.org/10.1007/s13235-021-00404-0