1 Introduction

Computing Nash equilibria (NE) in two-player games is complete for the complexity class PPAD (polynomial parity arguments on directed graphs) (Chen et al. 2009; Daskalakis et al. 2009). Chen et al. conjectured that there is an \(O(n^{k+\epsilon ^{-c}})\)-time algorithm for finding an \(\epsilon \)-NE in a two-player game for some constants c and k, where n is the number of strategies (Chen et al. 2009). Ortiz and Irfan proposed a fully polynomial-time approximation scheme that computes an \(\epsilon \)-NE in graphical multi-hypermatrix games (Ortiz and Irfan 2017). They discretized the probability space and the payoff space, and built a progressively refined probability distribution for each player; the discretization scheme depends on the number of strategies. Rubinstein proved that computing an \(\epsilon \)-NE in two-player \(n\times n\) games requires \(n^{\log ^{1-o(1)}n}\) time (Rubinstein 2016). Fearnley et al. (2012) showed that a well-supported \(\epsilon \)-NE in bimatrix games can be computed with an approximation ratio below two-thirds. Kontogiannis and Spirakis used linear programming techniques to show that an \(\epsilon \)-NE can be found with \(\epsilon =0.5\) in win-lose bimatrix games and with \(\epsilon =0.658\) in arbitrary [0, 1]-bimatrix games (Kontogiannis and Spirakis 2007). Lipton et al. used k-uniform mixed strategies to approximate randomized strategies in [0, 1] and showed that the approximation ratio exceeds \((12\ln n/k)^{1/2}\), where n is the number of strategies and \(k\le n\) is the number of nonzero values (Lipton et al. 2003). Bosse et al. proved that an approximate NE in a bimatrix game can be computed with \(\epsilon =0.36392\) (Bosse et al. 2010).

A well-known algorithm for solving bimatrix games is the Lemke–Howson (LH) algorithm. However, because the LH algorithm has no objective function, there is no way to determine how close one is to a solution during the computation (McKelvey and McLennan 1996). McKelvey and McLennan also used sets of 100 random games to measure the average computational time of the LH algorithm (McKelvey and McLennan 1996). The classical LH algorithm finds one exact NE in worst-case exponential time (Rubinstein 2016).

Fuzzy sets theory has been applied to matrix game theory (Bector and Chandra 2005; Campos et al. 1992; Nishizaki and Sakawa 2001; Vijay et al. 2005; Kacher and Larbani 2006; Larbani 2009; Li and Zhang 2011; Gao 2012; Li 2016). The fuzzy sets approach to game theory rests on the observation that players cannot estimate payoffs exactly in noncooperative games because information about the environment is inadequate (Larbani 2009; Li 2016). Such imprecision and uncertainty are appropriately modeled with fuzzy sets (Li and Zhang 2011; Li 2016; Nishizaki and Sakawa 2001). Note that the gains and losses of the players should be identical in a zero-sum matrix game. However, the aforementioned approach cannot always guarantee that the gain-floor and loss-ceiling of the players in zero-sum matrix games are identical (Li 2016).

This paper describes a novel algorithm with the following properties: (1) it transforms the NE problem in bimatrix games into a reduced form by fuzzification; (2) its solutions are saddle points of the expected payoff functions; (3) its performance is superior to that of the LH algorithm; (4) it determines the approximation deviation during the computation. This paper is organized as follows: Sect. 2 gives concepts related to game theory and fuzzy sets theory. Section 3 describes the algorithm in detail. Section 4 analyzes the algorithm and determines the approximation deviations. Section 5 gives an example that demonstrates the procedure of the algorithm, together with numerical results. Section 6 concludes.

2 Preliminaries

Definition 1

(Fuzzy number) A fuzzy number is a fuzzy set defined on the domain R. The membership function \(\lambda (t)\) of a fuzzy number is a continuous function with range [0, 1], i.e., \(0\le \lambda (t)\le 1\) (Dijkman et al. 1983).

Definition 2

(Triangular fuzzy number (TFN)) A triangular fuzzy number is a special case of a fuzzy number whose membership function forms a triangle. A TFN \(\lambda \) is usually denoted by three values, \(\lambda =(d^l,d^m,d^r)\), where \(d^m\) is called the mean value of the TFN \(\lambda \), with \(\lambda (d^m)=1\), and \(d^l\) and \(d^r\) are the lower and upper limits of the TFN, with \(\lambda (d^l)=\lambda (d^r)=0\) (Kaufmann and Gupta 1998).
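Explicitly, for \(d^l<d^m<d^r\), the membership function of a TFN \(\lambda =(d^l,d^m,d^r)\) is the piecewise-linear map

$$\begin{aligned} \lambda (t)={\left\{ \begin{array}{ll} \frac{t-d^l}{d^m-d^l} &{} t\in [d^l,d^m] \\ \frac{d^r-t}{d^r-d^m} &{} t\in [d^m,d^r] \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Degenerate cases are obtained by dropping the corresponding branch; for instance, the TFN \(\lambda =(0,1,1)\) used in Sect. 5 reduces to \(\lambda (t)=t\) on [0, 1].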

Definition 3

(Fuzzification) Fuzzification is the process of assigning the numerical inputs of a system to fuzzy sets with some degree of membership.

The concept of fuzzification is commonly used in fuzzy sets theory and fuzzy control theory (Gao 1999; Zimmermann 2011).

Definition 4

(Possibility distribution) Suppose \(x\in [0,1]^n\subseteq R^n\), where \(R^n\) is an n-dimensional Euclidean space. Each \(x_i\in x\) is mapped to a fuzzy number \(\lambda _i (i=1,2,\ldots ,n)\). A possibility distribution associated with the variable x is defined as \(\lambda =(\lambda _{1}, \lambda _{2}, \ldots , \lambda _{n})\), where the support of each \(\lambda _i\) lies in [0, 1], i.e., \(\mathrm{supp}\,\lambda _i\subseteq [0,1] (i=1,2,\ldots , n)\). When \(\sum _{j=1}^n\lambda _j(t) > 0\) for all \(t\in [0,1]\), we say that \(\lambda \) is a suitable possibility distribution.

As Zadeh described in his initial paper, the components of a possibility distribution need not sum to 1 (Zadeh 1978). However, practical problems sometimes require the normalization of possibility distributions, which has been studied by several researchers (Dubois and Prade 2015; Zimmermann 2011). The definition of a normalized possibility distribution is given as follows.

Definition 5

(The normalization of a possibility distribution) When the fuzzy numbers \(\lambda _i (i=1,2,\ldots ,n)\) in the possibility distribution \(\lambda \) are TFNs and \(\lambda \) is a suitable possibility distribution, the normalization \(\mu (t)\) of the possibility distribution is defined as follows.

$$\begin{aligned} \mu (t) =\left( \frac{\lambda _1(t)}{\sum _{j=1}^n\lambda _j(t)}, \frac{\lambda _2(t)}{\sum _{j=1}^n\lambda _j(t)}, \ldots , \frac{\lambda _n(t)}{\sum _{j=1}^n\lambda _j(t)}\right) , \quad t\in [0,1], \end{aligned}$$
(2.1)

where \(\mu _i(t)=\frac{\lambda _i(t)}{\sum _{j=1}^n\lambda _j(t)} (i=1,2,\ldots , n), t\in [0,1]\).
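As a concrete illustration, the following C++ sketch evaluates the normalized possibility distribution (2.1) at a point t for TFNs given by their breakpoints; the helper names are illustrative and not part of any published implementation.

```cpp
#include <array>
#include <cassert>
#include <vector>

using TFN = std::array<double, 3>;          // (d^l, d^m, d^r)

// Membership value of a TFN at t; the branch order also handles
// degenerate TFNs such as (0, 1, 1) or (0, 0, 1).
double tfn(const TFN& f, double t) {
    const double dl = f[0], dm = f[1], dr = f[2];
    if (t == dm) return 1.0;
    if (t > dl && t < dm) return (t - dl) / (dm - dl);
    if (t > dm && t < dr) return (dr - t) / (dr - dm);
    return 0.0;
}

// Normalized possibility distribution mu(t) of Eq. (2.1).
std::vector<double> normalize(const std::vector<TFN>& fs, double t) {
    double sum = 0.0;
    std::vector<double> mu(fs.size());
    for (size_t i = 0; i < fs.size(); ++i) { mu[i] = tfn(fs[i], t); sum += mu[i]; }
    assert(sum > 0.0);                      // lambda must be suitable (Definition 4)
    for (double& v : mu) v /= sum;
    return mu;
}
```

With the player-1 TFNs of Example 1 in Sect. 5 and \(t=0.75\), `normalize` returns (0.0909, 0.2727, 0.2727, 0.0909, 0.2727), matching Step 7 there.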

3 The algorithm

3.1 The problem

Definition 6

(Bimatrix games) A bimatrix game is described as follows.

$$\begin{aligned} G=(2, S, u), S=(S_1, S_2), u=(u_1, u_2), \end{aligned}$$
(3.1)

where \(S_1=(s_{1}^1,s_{2}^1,\ldots , s_{k}^1)\) (\(S_2=(s_{1}^2,s_{2}^2,\ldots , s_{l}^2)\)) is the set of strategies of player 1 (player 2), and \(u_i\) is the expected payoff function of player \(i (i=1,2)\), defined as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} u_1(x, y) = x^TAy &{} \\ u_2(x, y) = x^TBy &{} \end{array}\right. } \end{aligned}$$
(3.2)

where \(x^T=(x_1,x_2,\ldots , x_k)\) with \(x\in \bigtriangleup ^k\) (\(\bigtriangleup ^k\) is the k-dimensional standard simplex); \(y^T=(y_1,y_2,\ldots , y_l)\) with \(y\in \bigtriangleup ^l\); and A and B are the \(k\times l\) payoff matrices of player 1 and player 2, respectively.
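For reference, the expected payoffs in (3.2) are plain bilinear forms; a minimal C++ sketch (dense row-major matrices, illustrative names):

```cpp
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;               // k rows, l columns, row-major

// u(x, y) = x^T M y for a k x l payoff matrix M.
double expectedPayoff(const Mat& M, const Vec& x, const Vec& y) {
    double u = 0.0;
    for (size_t i = 0; i < M.size(); ++i)
        for (size_t j = 0; j < M[i].size(); ++j)
            u += x[i] * M[i][j] * y[j];
    return u;
}
// u1 = expectedPayoff(A, x, y); u2 = expectedPayoff(B, x, y).
```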

Definition 7

(Nash equilibrium (NE)) The pair of strategies \((x^0,y^0)\) is called a NE if and only if

$$\begin{aligned} {\left\{ \begin{array}{ll} u_1(x, y^0)\le u_1(x^0,y^0), \text { } \forall x\in \bigtriangleup ^k \\ u_2(x^0, y)\le u_2(x^0,y^0), \text { } \forall y\in \bigtriangleup ^l \end{array}\right. }. \end{aligned}$$
(3.3)

Definition 8

(\(\epsilon \)-NE) (Lipton et al. 2003) For any \(\epsilon > 0\), a pair of strategies \((x^*, y^*)\) is an \(\epsilon \)-NE for the bimatrix game (3.1) if

(i) \(u_1(x, y^*)\le u_1(x^*,y^*)+\epsilon , \forall x\in \bigtriangleup ^k\);

(ii) \(u_2(x^*, y)\le u_2(x^*,y^*)+\epsilon , \forall y\in \bigtriangleup ^l\).

Suppose \(t\in R\) and \(s\in R\) are two free independent variables, and suppose that one can find two transformations such that

$$\begin{aligned} x&=\mu (t)\in \bigtriangleup ^k, t\in R, \end{aligned}$$
(3.4)
$$\begin{aligned} y&=\eta (s)\in \bigtriangleup ^l, s\in R. \end{aligned}$$
(3.5)

Then, according to the parameterized methodology (Downey and Fellows 2013), (3.2) can be rewritten as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} u_1(\mu (t), \eta (s)) = (\mu (t))^TA\eta (s), (t\in R; s\in R), &{} \\ u_2(\mu (t), \eta (s)) = (\mu (t))^TB\eta (s), (t\in R; s\in R). &{} \end{array}\right. } \end{aligned}$$
(3.6)

The key question is how to construct the two transformations (3.4) and (3.5). This paper proposes a method to construct \(\mu (t)\) and \(\eta (s)\) by fuzzifying the mixed strategies (x, y).

3.2 The fuzzification

The expected payoff function of player 1 is \(u_1(x,y)\), which has two variables \(x\in \bigtriangleup ^k\) and \(y\in \bigtriangleup ^l\), where \(\bigtriangleup ^k\) (\(\bigtriangleup ^l\)) is the k(l)-dimensional standard simplex. The fuzzification assigns a fuzzy number to each \(x_i\); as a result, a set of fuzzy numbers is assigned to the variable \(x\in R^k\).

Note that \(x=(x_1,x_2,\ldots , x_k)\in R^k\) and \(x\in \bigtriangleup ^k\). That is, each \(x_i\in [0,1]\) can be fuzzified with a TFN \(\lambda _i(t)\) because \(\lambda _i(t)\in [0,1]\). Therefore, \(x\in R^k\) can be represented by a set of TFNs.

However, this fuzzification has a problem: although \(x\in \bigtriangleup ^k\), \(\lambda (t)\in \bigtriangleup ^k\) is not guaranteed. In order to guarantee that the set of TFNs stays on the standard simplex and to make the fuzzification reasonable, instead of assigning a set of TFNs \(\lambda \) directly to x, the fuzzification in this paper assigns the normalization of a possibility distribution to x such that,

$$\begin{aligned} x = \mu (t)=\frac{\lambda (t)}{\sum _{j=1}^k\lambda _j(t)} , (t\in D^1=[0,1]). \end{aligned}$$
(3.7)

where \(\lambda _i=(d_i^l,d_i^m,d_i^r)\) is a TFN and \(\lambda (t)=(\lambda _1(t), \ldots , \lambda _k(t))\) is a set of TFNs. When a set of TFNs \(\lambda (t)\) is given on the domain \(D^1=[0,1]\) (the superscript 1 refers to player 1), the mean values \(d_i^m\) of the TFNs \(\lambda _i\) lie in \(D^1=[0,1]\). Without loss of generality, we suppose that

$$\begin{aligned} 0\le d_1^m\le d_2^m\le \cdots \le d_k^m\le 1. \end{aligned}$$

These mean values decompose the domain \(D^1=[0,1]\) into M intervals such that,

$$\begin{aligned} D^1=\bigcup _{i=1}^{M}D_i^1, \end{aligned}$$
(3.8)

where \(1\le M\le k\) and \(D_1^1=[d_1^l,d_1^m], D_{M}^1 =[d_k^m,d_k^r], D_i^1=[d_{i-1}^m,d_i^m] (i=2,\ldots , M-1)\). It is obvious that we have the following proposition; its proof can be found in Gao (2017).

Proposition 1

Suppose that \(\lambda _i(t)\) is a membership function of a TFN \(\lambda _i=(d_i^l,d_i^m,d_i^r)\), then, \(\lambda _i(t)\) is a monotonic, continuous, and linear function in domain \(D_j^1(j=1,2,\ldots ,M)\).

Based on Proposition 1, \(\lambda _i(t)\) can be written as follows.

$$\begin{aligned}&\lambda _i(t)=p_it+q_i, p_i\in R, q_i\in R (i=1,2,\ldots , k), \\&\qquad t\in D_j^1 (j=1,2,\ldots , M) \end{aligned}$$
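For example, the TFN \(\lambda =(0,0.5,1)\) (chosen here only for illustration) is linear with different coefficients on the two sub-domains separated by its mean value:

$$\begin{aligned} \lambda (t)={\left\{ \begin{array}{ll} 2t &{} t\in [0,0.5], \text { i.e., } p=2, q=0, \\ 2-2t &{} t\in [0.5,1], \text { i.e., } p=-2, q=2. \end{array}\right. } \end{aligned}$$

The degenerate TFNs of Example 1 in Sect. 5, such as \(\lambda _1=(0,0,1)\) with \(\lambda _1(t)=1-t\) (\(p_1=-1, q_1=1\)), are linear on all of [0, 1].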

Equation (3.8) shows that domain \(D^1=[0,1]\) is decomposed by the mean values \(d_i^m\) into M sub-domains, where \(1\le M \le k\). The fuzzification can be denoted as follows.

$$\begin{aligned} x = \mu (t), x\in \bigtriangleup ^k, \mu (t)\in \bigtriangleup ^k, t\in D_i^1(i=1,2,\ldots ,M). \end{aligned}$$
(3.9)

where \(\mu (t)\) is defined in Eq. (3.7).

Similarly, the variable \(y\in R^l\) can be fuzzified with another normalization of a possibility distribution \(\eta (s)\) such that,

$$\begin{aligned} y = \eta (s)= \frac{\theta (s)}{\sum _{j=1}^l\theta _j(s)}, (s\in D^2=[0,1]). \end{aligned}$$
(3.10)

where \(\theta _j=(g_j^l,g_j^m,g_j^r)\) is a TFN and \(\theta (s)=(\theta _1(s), \ldots , \theta _l(s))\) is a set of TFNs. When a set of TFNs \(\theta (s)\) is given on the domain \(D^2=[0,1]\) (the superscript 2 refers to player 2), the mean values \(g_j^m\) of the TFNs \(\theta _j\) lie in \(D^2=[0,1]\). Without loss of generality, we suppose that

$$\begin{aligned} 0\le g_1^m\le g_2^m \le \cdots \le g_l^m\le 1. \end{aligned}$$

These mean values of the TFNs \(\theta _j\) decompose the domain \(D^2=[0,1]\) into L intervals such that,

$$\begin{aligned} D^2=\bigcup _{j=1}^{L}D_j^2, \end{aligned}$$
(3.11)

where \(1\le L\le l\) and \(D_1^2=[g_1^l,g_1^m], D_{L}^2 =[g_l^m,g_l^r], D_j^2=[g_{j-1}^m,g_j^m] (j=2,\ldots , L-1)\). Similarly, the membership function \(\theta _j(s)\) can be written as follows.

$$\begin{aligned}&\theta _j(s)=e_js+h_j, e_j\in R, h_j\in R (j=1,2,\ldots , l), \\&\qquad s\in D_i^2 (i=1,2,\ldots , L). \end{aligned}$$

Equation (3.11) shows that domain \(D^2=[0,1]\) is decomposed by the mean values \(g_j^m\) into L sub-domains, where \(1\le L\le l\). The fuzzification can be denoted as follows.

$$\begin{aligned} y = \eta (s), y\in \bigtriangleup ^l, \eta (s)\in \bigtriangleup ^l, s\in D_j^2(j=1,2,\ldots , L). \end{aligned}$$
(3.12)

where \(\eta (s)\) is defined in Eq. (3.10).

By substituting x and y in Eq. (3.2) with \(\mu (t)\) and \(\eta (s)\), respectively, the expected payoff function \(u_1(x,y)\) (\(u_2(x,y)\)) in (3.2) can be transformed as follows in each domain \(D_i^1\times D_j^2\).

$$\begin{aligned} {\left\{ \begin{array}{ll} u_1(\mu (t),\eta (s))=(\mu (t))^TA\eta (s), t\in D_i^1, s\in D_j^2. \\ u_2(\mu (t), \eta (s))=(\mu (t))^TB\eta (s), t\in D_i^1, s\in D_j^2. \end{array}\right. } \end{aligned}$$
(3.13)

Equations (3.6) and (3.13) are identical except for their domains: the domain in (3.6) is \(R\times R\), while the domain in (3.13) is \(D_i^1\times D_j^2 (i=1,2,\ldots ,M; j=1,2,\ldots ,L)\).

In this paper, we use a technique that adds the two payoff functions \(u_1(x,y)\) and \(u_2(x,y)\) together to constitute a single payoff function u(x, y) such that,

$$\begin{aligned} u(\mu (t),\eta (s))&= u_1(\mu (t),\eta (s)) +u_2(\mu (t),\eta (s))\nonumber \\&=(\mu (t))^T(A+B)\eta (s), t\in D_i^1, s\in D_j^2. \end{aligned}$$
(3.14)

NE points are critical points that satisfy the following equations in domain \(D_i^1\times D_j^2\)(\(i=1,2,\ldots , M; j=1,2,\ldots , L)\).

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial u(\mu (t),\eta (s))}{\partial t}=\frac{\partial u_1(\mu (t),\eta (s))}{\partial t}+\frac{\partial u_2(\mu (t),\eta (s))}{\partial t}= 0, &{} t\in D_i^1, s\in D_j^2\\ \frac{\partial u(\mu (t),\eta (s))}{\partial s}=\frac{\partial u_1(\mu (t),\eta (s))}{\partial s}+\frac{\partial u_2(\mu (t),\eta (s))}{\partial s}= 0, &{} t\in D_i^1, s\in D_j^2. \end{array}\right. } \end{aligned}$$
(3.15)

Based on Eqs. (3.7) and (3.10), one can simplify Eq. (3.15) as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\frac{\partial u(\mu (t),\eta (s))}{\partial t}= K_1(t,s)V_1^{T}(A+B)\theta (s) = 0, t\in D_i^1, s\in D_j^2 \\ &{}\frac{\partial u(\mu (t),\eta (s))}{\partial s}= K_2(t,s)V_2^{T}(A^T+B^T)\lambda (t) = 0, t\in D_i^1, s\in D_j^2 \end{array}\right. } \end{aligned}$$
(3.16)

where \(K_i(t,s)(i=1,2)\) and \(V_i(i=1,2)\) are denoted as follows.

$$\begin{aligned} K_1(t,s)&= \frac{1}{(\sum _{i=1}^k\lambda _i(t))^2\sum _{j=1}^l\theta _j(s)}, t\in D_i^1; s\in D_j^2 \end{aligned}$$
(3.17)
$$\begin{aligned} K_2(t,s)&= \frac{1}{(\sum _{i=1}^l\theta _i(s))^2\sum _{j=1}^k\lambda _j(t)}, t\in D_i^1; s\in D_j^2 \end{aligned}$$
(3.18)
$$\begin{aligned} V_1^{T}&=\left( p_1\sum _{j=1}^kq_j-q_1\sum _{j=1}^kp_j, \ldots , p_k\sum _{j=1}^kq_j-q_k\sum _{j=1}^kp_j\right) \end{aligned}$$
(3.19)
$$\begin{aligned} V_2^{T}&=\left( e_1\sum _{j=1}^lh_j-h_1\sum _{j=1}^le_j, \ldots , e_l\sum _{j=1}^lh_j-h_l\sum _{j=1}^le_j\right) . \end{aligned}$$
(3.20)
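The factors \(K_1\) and \(V_1\) arise directly from differentiating the normalized distribution (3.7). By the quotient rule, since \(\lambda _i(t)=p_it+q_i\) and \(\sum _{j=1}^k\lambda _j(t)=t\sum _{j=1}^kp_j+\sum _{j=1}^kq_j\),

$$\begin{aligned} \frac{\partial \mu _i(t)}{\partial t} =\frac{p_i\sum _{j=1}^k\lambda _j(t)-\lambda _i(t)\sum _{j=1}^kp_j}{\left( \sum _{j=1}^k\lambda _j(t)\right) ^2} =\frac{p_i\sum _{j=1}^kq_j-q_i\sum _{j=1}^kp_j}{\left( \sum _{j=1}^k\lambda _j(t)\right) ^2}, \end{aligned}$$

whose numerator is the i-th component of \(V_1\) in Eq. (3.19); together with the factor \(1/\sum _{j=1}^l\theta _j(s)\) contributed by \(\eta (s)\), this yields \(K_1(t,s)\) in Eq. (3.17). The derivation of \(V_2\) and \(K_2(t,s)\) is analogous.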

Based on the definition of the membership functions of TFNs, \(\lambda _i(t)\ge 0\) for all \(t\in R\) and \(\theta _j(s)\ge 0\) for all \(s\in R\). If the fuzzifications of the variables x and y are reasonable, then the sums of the membership functions are strictly positive, i.e., \(\sum _{i=1}^k \lambda _i(t)> 0\) and \(\sum _{j=1}^l\theta _j(s) >0\), so that \(K_i(t,s) > 0 (i=1,2)\) and the right-hand side of Eq. (3.16) can be simplified as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} V_1^{T}(A+B)\theta (s) = 0, t\in D_i^1, s\in D_j^2, \\ V_2^{T}(A^T+B^T)\lambda (t) = 0, t\in D_i^1, s\in D_j^2. \end{array}\right. } \end{aligned}$$
(3.21)

When the possibility distributions \(\lambda (t)=(\lambda _1(t),\ldots , \lambda _k(t))\in R^k\) and \(\theta (s)=(\theta _1(s),\ldots ,\theta _l(s))\in R^l\) are given, where \(\lambda _i(t)=p_it+q_i (i=1,2,\ldots , k)\) and \(\theta _j(s)=e_js+h_j (j=1,2,\ldots , l)\), the coefficients \(p_i\), \(q_i (i=1,2,\ldots , k)\) and \(e_j\), \(h_j (j=1,2,\ldots , l)\) are known. The elements of \(V_i (i=1,2)\) are constants, the elements of \(\lambda (t)\) are linear functions of t, and the elements of \(\theta (s)\) are linear functions of s. Therefore, the solution of Eq. (3.21) can be written explicitly as follows.

$$\begin{aligned} t^*=\frac{V_2^T(A^T+B^T)Q}{V_2^T(A^T+B^T)P}, \quad s^*=\frac{V_1^T(A+B)H}{V_1^T(A+B)E}. \end{aligned}$$
(3.22)

where E, H, P, and Q are as follows.

$$\begin{aligned} E^T&=(e_1,e_2,\ldots , e_l),\quad H^T=(-h_1,-h_2,\ldots , -h_l), \end{aligned}$$
(3.23)
$$\begin{aligned} P^T&=(p_1,p_2,\ldots , p_k), \quad Q^T=(-q_1,-q_2,\ldots , -q_k). \end{aligned}$$
(3.24)
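To make Eqs. (3.19)-(3.24) and the closed form (3.22) concrete, the following self-contained C++ sketch computes \((t^*,s^*)\) for the data of Example 1 in Sect. 5 (helper names are illustrative); it reproduces \((t^{*},s^{*})=(0.75, 0.4)\) from Step 5 of that example.

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;                    // row-major, k rows, l columns

static double dot(const Vec& a, const Vec& b) {
    double r = 0.0;
    for (size_t i = 0; i < a.size(); ++i) r += a[i] * b[i];
    return r;
}
static Vec matVec(const Mat& M, const Vec& x) {  // M x, x of length l
    Vec y(M.size(), 0.0);
    for (size_t i = 0; i < M.size(); ++i) y[i] = dot(M[i], x);
    return y;
}
static Vec matTVec(const Mat& M, const Vec& x) { // M^T x, x of length k
    Vec y(M[0].size(), 0.0);
    for (size_t i = 0; i < M.size(); ++i)
        for (size_t j = 0; j < M[i].size(); ++j) y[j] += M[i][j] * x[i];
    return y;
}
// V_i = slope_i * sum(intercepts) - intercept_i * sum(slopes), Eqs. (3.19)-(3.20).
static Vec buildV(const Vec& slope, const Vec& intercept) {
    double ss = std::accumulate(slope.begin(), slope.end(), 0.0);
    double si = std::accumulate(intercept.begin(), intercept.end(), 0.0);
    Vec v(slope.size());
    for (size_t i = 0; i < v.size(); ++i) v[i] = slope[i] * si - intercept[i] * ss;
    return v;
}

int main() {
    // Payoff matrices A and B of Example 1 (Sect. 5).
    Mat A = {{1,0,0,0,0},{0,1,0,0,0},{1,0,1,0,0},{0,1,0,1,0},{0,0,0,0,1}};
    Mat B = {{0,1,0,0,0},{0,0,1,0,0},{0,1,0,1,0},{1,0,0,0,1},{1,0,0,0,0}};
    // lambda_i(t) = p_i t + q_i and theta_j(s) = e_j s + h_j (Example 1).
    Vec p = {-1, 1, 1, -1, 1}, q = {1, 0, 0, 1, 0};
    Vec e = {1, 1, -1, 1, 1},  h = {0, 0, 1, 0, 0};

    Mat C(A.size(), Vec(A[0].size()));           // C = A + B
    for (size_t i = 0; i < C.size(); ++i)
        for (size_t j = 0; j < C[i].size(); ++j) C[i][j] = A[i][j] + B[i][j];

    Vec V1 = buildV(p, q), V2 = buildV(e, h);    // Eqs. (3.19)-(3.20)
    Vec P = p, E = e, Q(q.size()), H(h.size());  // Eqs. (3.23)-(3.24)
    for (size_t i = 0; i < q.size(); ++i) Q[i] = -q[i];
    for (size_t j = 0; j < h.size(); ++j) H[j] = -h[j];

    // Eq. (3.22): t* and s* as ratios of bilinear forms.
    double tStar = dot(V2, matTVec(C, Q)) / dot(V2, matTVec(C, P));
    double sStar = dot(V1, matVec(C, H)) / dot(V1, matVec(C, E));
    std::printf("t* = %.4f, s* = %.4f\n", tStar, sStar);  // t* = 0.7500, s* = 0.4000
    return 0;
}
```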

The sub-domain \(D_i^1\) (\(D_j^2\)) is delimited by the mean values \(d_i^m\) (\(g_j^m\)) of the TFNs \(\lambda _i\) (\(\theta _j\)). The necessary and sufficient condition for Eq. (3.22) to be a solution of Eq. (3.21) is as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} t^*\in D_i^1, \text { such that } d_{i-1}^m\le t^* \le d_i^m, \\ s^*\in D_j^2, \text { such that } g_{j-1}^m\le s^* \le g_j^m. \end{array}\right. } \end{aligned}$$
(3.25)

That is, \((t^*,s^*)\in D_i^1\times D_j^2 (i=1,2,\ldots ,M; j=1,2,\ldots , L)\). If either \(t^*\) or \(s^*\) does not satisfy the inequalities in (3.25), then Eq. (3.21) has no solution in \(D_i^1\times D_j^2\) and the algorithm tries the next sub-domain (Fig. 2). If \((t^*, s^*)\) satisfies the inequalities in (3.25), then \((t^*, s^*)\) is a solution of Eq. (3.16).

The diagram of the algorithm is depicted in Fig. 1. First, the mixed strategies (x, y) are fuzzified with the normalized possibility distributions defined in Eqs. (3.7) and (3.10). Secondly, the algorithm calculates the potential critical point \((t^*,s^*)\) with Eq. (3.22). Finally, the algorithm verifies whether \((t^*,s^*)\) satisfies the inequalities in (3.25) to determine whether \((t^*,s^*)\) is a solution of Eq. (3.16). If no solution is found in any of the sub-domains \(D_i^1\times D_j^2 (i=1,2,\ldots , M; j=1,2,\ldots ,L)\), the algorithm adjusts the fuzzification and repeats the procedure until an approximate NE is found.

The details of the algorithm are described in Algorithm 1.

Algorithm 1
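The published listing of Algorithm 1 appears as an image in the original; the following C++ sketch reconstructs only its control flow from the description above, with callbacks standing in for Eq. (3.22), the domain check (3.25), and the sign conditions (4.3) and (4.4) of Sect. 4 (all names are hypothetical):

```cpp
#include <functional>
#include <optional>
#include <utility>

// A reconstruction of Algorithm 1's loop structure (not the authors' listing).
using Point = std::pair<double, double>;         // (t*, s*)

std::optional<Point> searchSubdomains(
    int M, int L,
    const std::function<Point(int, int)>& criticalPoint,            // Eq. (3.22)
    const std::function<bool(int, int, const Point&)>& inSubdomain, // inequalities (3.25)
    const std::function<bool(const Point&)>& derivativeSigns)       // inequalities (4.3)-(4.4)
{
    for (int i = 1; i <= M; ++i)                 // sub-domains D_i^1
        for (int j = 1; j <= L; ++j) {           // sub-domains D_j^2
            Point ts = criticalPoint(i, j);
            if (inSubdomain(i, j, ts) && derivativeSigns(ts))
                return ts;                       // (mu(t*), eta(s*)) approximates an NE
        }
    return std::nullopt;                         // no solution in any sub-domain
}
```

A caller that receives `std::nullopt` adjusts the fuzzification (as in Fig. 1) and calls the search again.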

3.3 The analysis of the fuzzification

From a conceptual point of view, the proposed algorithm might be considered a type of divide-and-conquer algorithm. However, it differs from divide-and-conquer algorithms because it requires neither a base case for subproblems nor recursive operations.

Fig. 1 The diagram of the algorithm

The algorithm decomposes the domain \(D^1\times D^2=[0,1]\times [0,1]\) into \(M\times L\) sub-domains and then seeks a critical point in each sub-domain \(D_i^1\times D_j^2\) (Fig. 2). The purpose of the decomposition is to take advantage of the piecewise linearity of the possibility distributions. The algorithm calculates \((t^*,s^*)\) with Eq. (3.22) in each sub-domain and then verifies whether \((t^*,s^*)\) satisfies the inequalities in (3.25) and the inequalities (4.3) and (4.4) in Sect. 4. If the verification passes, the algorithm has found a saddle point of the expected payoff functions in the reduced forms in the sub-domain \(D_i^1\times D_j^2 (i=1,\ldots ,M; j=1,\ldots ,L)\).

The advantages of the fuzzification are the simplification of the NE problem of bimatrix games and the encapsulation of the standard simplices of the strategies x and y in the possibility distributions.

4 The analysis of the algorithm

First, we prove that if \((t^*,s^*)\) is a solution of Eq. (3.16), then \((\mu (t^*),\eta (s^*))\) can be a saddle point of function \(u(\mu (t),\eta (s))\) in domain \(D_i^1\times D_j^2\). Secondly, we prove that a saddle point of function \(u(\mu (t),\eta (s))\) can also be a saddle point of function \(u_1(\mu (t),\eta (s))\) and function \(u_2(\mu (t),\eta (s))\). Finally, we discuss how to determine the approximation parameter \(\epsilon \).

Theorem 1

If \((t^{*},s^{*})\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\) and \(\frac{\partial ^2 u}{\partial t\partial s}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}} \ne 0\), then \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of function \(u(\mu (t),\eta (s))\) that is denoted in Eq. (3.14).

Proof

Since \((t^{*},s^{*})\) is a solution of Eq. (3.16), we have the following.

$$\begin{aligned} \frac{\partial u(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0 \text { and } \frac{\partial u(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0. \end{aligned}$$

The Taylor expansion of \(u(\mu (t),\eta (s))\) at \((t^{*},s^{*})\) in domain \(D_i^1\times D_j^2\) is as follows.

$$\begin{aligned}&u(\mu (t),\eta (s))=u(\mu (t^{*}),\eta (s^{*}))+\frac{\partial u}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})\nonumber \\&\qquad +\,\frac{\partial u}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*}) \nonumber \\&\qquad +\,\frac{1}{2}\frac{\partial ^2 u}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})^2+\frac{1}{2}\frac{\partial ^2 u}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*})^2 \nonumber \\&\qquad +\,\frac{\partial ^2 u}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})(s-s^{*})\nonumber \\&\qquad +\, H.O.T., t\in D_i^1, s\in D_j^2. \nonumber \\&\quad =u(\mu (t^{*}),\eta (s^{*}))+ \frac{1}{2}\frac{\partial ^2 u}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})^2 \nonumber \\&\qquad +\,\frac{1}{2}\frac{\partial ^2 u}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*})^2\nonumber \\&\qquad +\, \frac{\partial ^2 u}{\partial t \partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})(s-s^{*}), t\in D_i^1, s\in D_j^2 \nonumber \\&\quad =u(\mu (t^{*}),\eta (s^{*}))\nonumber \\&\qquad +\,\frac{1}{2}(t-t^{*}, s-s^{*}) \begin{pmatrix} \frac{\partial ^2 u}{\partial t^2} &{} \frac{\partial ^2 u}{\partial t\partial s} \\ \frac{\partial ^2 u}{\partial t\partial s} &{} \frac{\partial ^2 u}{\partial s^2} \\ \end{pmatrix}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \begin{pmatrix} t-t^{*} \\ s-s^{*} \\ \end{pmatrix}, \nonumber \\&\qquad t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.1)

If we calculate the second-order partial derivative of the first equation in (3.16), we have the following.

$$\begin{aligned}&\frac{\partial ^2 u(\mu (t),\eta (s))}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \nonumber \\&\quad =\frac{\partial K_1(t,s)}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} V_1^T(A+B)\theta (s^{*}) =0. \end{aligned}$$
(4.2)

The determinant of the Hessian matrix is as follows.

$$\begin{aligned} \mathrm{DetH}= & {} \frac{\partial ^2 u}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \frac{\partial ^2 u}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} -\left( \frac{\partial ^2 u}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 \\= & {} -\left( \frac{\partial ^2 u}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2. \end{aligned}$$

If \(\frac{\partial ^2 u(\mu (t),\eta (s))}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \ne 0\), then \(DetH <0\). That is, \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of \(u(\mu (t), \eta (s)).\) \(\square \)

Fig. 2 An example of sub-domain \(D_i^1\times D_j^2\)

According to Theorem 1, if \((t^*,s^*)\) is a solution of Eq. (3.16) in \(D_i^1\times D_j^2\) and the mixed second-order partial derivative at \((t^*,s^*)\) is not zero, then \((\mu (t^*),\eta (s^*))\) is a saddle point of function \(u(\mu (t),\eta (s))\). It is interesting to know under what conditions the solution \((t^*, s^*)\) is a critical point of the payoff functions \(u_1(\mu (t),\eta (s))\) and \(u_2(\mu (t),\eta (s))\). We have the following proposition.

Proposition 2

If \((t^{*},s^{*})\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\), and the following inequalities are satisfied,

$$\begin{aligned}&\frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\frac{\partial u_2(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\ge 0 \end{aligned}$$
(4.3)
$$\begin{aligned}&\frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\frac{\partial u_2(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\ge 0. \end{aligned}$$
(4.4)

Then, \((t^{*},s^{*})\) is a critical point of \(u_1(\mu (t),\eta (s))\) and \(u_2(\mu (t),\eta (s))\) such that

$$\begin{aligned} \begin{matrix} {\left\{ \begin{array}{ll} \frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0 \\ \frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0. \end{array}\right. } \quad &{} {\left\{ \begin{array}{ll} \frac{\partial u_2(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0 \\ \frac{\partial u_2(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0. \end{array}\right. } \end{matrix} \end{aligned}$$

Proof

Since \((t^*,s^*)\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\), based on Eq. (3.15), we have the following.

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( \frac{\partial u}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 =\left( \frac{\partial u_1}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 +\left( \frac{\partial u_2}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2+2\frac{\partial u_1}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\frac{\partial u_2}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0. \\ \left( \frac{\partial u}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 =\left( \frac{\partial u_1}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 +\left( \frac{\partial u_2}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2+2\frac{\partial u_1}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\frac{\partial u_2}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0. \end{array}\right. } \end{aligned}$$
(4.5)

By inequalities (4.3) and (4.4), every term on the right-hand sides of Eq. (4.5) is nonnegative; hence Eq. (4.5) holds if and only if

$$\begin{aligned} \begin{matrix} {\left\{ \begin{array}{ll} \frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0 \\ \frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0, \end{array}\right. } \quad &{} {\left\{ \begin{array}{ll} \frac{\partial u_2(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0 \\ \frac{\partial u_2(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = 0, \end{array}\right. }, \end{matrix} \end{aligned}$$

That is, the solution \((t^*,s^*)\) is a critical point of function \(u_1(\mu (t),\eta (s))\) and function \(u_2(\mu (t),\eta (s))\). \(\square \)

The following theorem proves that \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of functions \(u_1(\mu (t),\eta (s))\) and \(u_2(\mu (t),\eta (s))\) if \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of function \(u(\mu (t),\eta (s))\) and the inequalities (4.3) and (4.4) are satisfied.

Theorem 2

Suppose that \((t^{*},s^{*})\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\) and inequalities (4.3) and (4.4) are satisfied. If \(\frac{\partial ^2 u_i}{\partial t\partial s}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}} \ne 0 (i=1,2)\), then \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of the functions \(u_1(\mu (t),\eta (s))\) and \(u_2(\mu (t),\eta (s))\).

Proof

First, we prove that \((\mu (t^{*}),\eta (s^{*}))\) is a saddle point of \(u_1(\mu (t),\eta (s))\). The Taylor expansion of \(u_1(\mu (t),\eta (s))\) at \((t^{*},s^{*})\) in domain \(D_i^1\times D_j^2\) is as follows.

$$\begin{aligned} u_1(\mu (t),\eta (s))= & {} u_1(\mu (t^{*}),\eta (s^{*}))\nonumber \\&+\frac{\partial u_1}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})+\frac{\partial u_1}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*}) \nonumber \\&+\frac{1}{2}\frac{\partial ^2 u_1}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})^2\nonumber \\&+\frac{1}{2}\frac{\partial ^2 u_1}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*})^2\nonumber \\&+\frac{\partial ^2 u_1}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})(s-s^{*})\nonumber \\&+ H.O.T., t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.6)

Since inequalities (4.3) and (4.4) are satisfied and according to Proposition 2, \((t^*,s^*)\) is a critical point of \(u_1(\mu (t),\eta (s))\) such that,

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_1(t,s)V_1^TA\theta (s)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0 \\ \frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_2(t,s)V_2^TA^T\lambda (t)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0. \end{array}\right. } \end{aligned}$$
(4.7)

If we apply Eq. (4.7) to Eq. (4.6), then Eq. (4.6) is simplified as follows.

$$\begin{aligned}&u_1(\mu (t),\eta (s))=u_1(\mu (t^{*}),\eta (s^{*}))+ \frac{1}{2}\frac{\partial ^2 u_1}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})^2 \nonumber \\&\qquad +\frac{1}{2}\frac{\partial ^2 u_1}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(s-s^{*})^2 + \frac{\partial ^2 u_1}{\partial t \partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})(s-s^{*}), \nonumber \\&\quad =u_1(\mu (t^{*}),\eta (s^{*}))\nonumber \\&\qquad +\frac{1}{2}(t-t^{*}, s-s^{*}) \begin{pmatrix} \frac{\partial ^2 u_1}{\partial t^2} &{} \frac{\partial ^2 u_1}{\partial t\partial s} \\ \frac{\partial ^2 u_1}{\partial t\partial s} &{} \frac{\partial ^2 u_1}{\partial s^2} \\ \end{pmatrix}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \begin{pmatrix} t-t^{*} \\ s-s^{*} \\ \end{pmatrix},\nonumber \\&\quad \qquad t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.8)

If we calculate the second-order partial derivative of Eq. (4.7), we get the following.

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = \frac{\partial K_1(t,s)}{\partial t} V_1^TA\theta (s)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0 \\ \frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =\frac{\partial K_2(t,s)}{\partial s} V_2^TA^T\lambda (t)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0. \end{array}\right. } \end{aligned}$$
(4.9)

If we apply Eq. (4.9) to the Hessian matrix in Eq. (4.8), then the determinant of the Hessian matrix is as follows.

$$\begin{aligned} \mathrm{DetH}= & {} \frac{\partial ^2 u_1}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \frac{\partial ^2 u_1}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} -\left( \frac{\partial ^2 u_1}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2 \\= & {} -\left( \frac{\partial ^2 u_1}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\right) ^2. \end{aligned}$$

If \(\frac{\partial ^2 u_1}{\partial t\partial s}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}} \ne 0\), then \(Det H <0\). That is, \((\mu (t^*),\eta (s^*))\) is a saddle point of \(u_1(\mu (t),\eta (s))\).

Similarly, we can prove that \((\mu (t^*),\eta (s^*))\) is a saddle point of function \(u_2(\mu (t),\eta (s))\) if the conditions in Theorem 2 are satisfied. \(\square \)

4.1 The analysis of approximation ratio

Without loss of generality, suppose that the domain \(D^1=[0,1]\) (\(D^2=[0,1]\)) is equally decomposed by the possibility distribution \(\lambda (t)\) (\(\theta (s)\)) so that \(\mid t-t^{*}\mid \le 1/M\) for \(t,t^{*}\in D_i^1\) (\(\mid s-s^{*}\mid \le 1/L\) for \(s,s^{*}\in D_j^2\)). We have the following theorem.

Theorem 3

If \((t^*,s^*)\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\), and inequalities (4.3) and (4.4) are satisfied, then there exist \(\epsilon _1 > 0\) and \(\epsilon _2 > 0\) such that,

$$\begin{aligned} \epsilon _1= \max \left\{ \frac{1}{ML}K_1(t^{*},s^{*})\mid V_1^TAE \mid , \frac{1}{ML}K_2(t^{*},s^{*})\mid V_2^TA^TP\mid \right\} \end{aligned}$$
(4.10)
$$\begin{aligned} \epsilon _2= \max \left\{ \frac{1}{ML}K_1(t^{*},s^{*})\mid V_1^TBE \mid , \frac{1}{ML}K_2(t^{*},s^{*})\mid V_2^TB^TP\mid \right\} \end{aligned}$$
(4.11)

and

$$\begin{aligned} \mid u_1(\mu (t),\eta (s))-u_1(\mu (t^{*}),\eta (s^{*}))\mid \le \epsilon _1, \forall t\in D_i^1; \forall s\in D_j^2. \end{aligned}$$
(4.12)
$$\begin{aligned} \mid u_2(\mu (t),\eta (s))-u_2(\mu (t^{*}),\eta (s^{*}))\mid \le \epsilon _2, \forall t\in D_i^1; \forall s\in D_j^2. \end{aligned}$$
(4.13)

where \(K_1(t,s)\) (\(K_2(t,s)\)) is defined in Eq. (3.17) (Eq. (3.18)), \(V_1\) (\(V_2\)) is defined in Eq. (3.19) (Eq. (3.20)), and E and P are defined in Eqs. (3.23) and (3.24), respectively.

Proof

First, we prove that there exists \(\epsilon _1\) as in (4.10) such that \(u_1(\mu (t),\eta (s))\) satisfies (4.12). Since \((t^*, s^*)\) is a solution of Eq. (3.16) in domain \(D_i^1\times D_j^2\) and inequalities (4.3) and (4.4) are satisfied, based on Proposition 2, we have the following.

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =K_1(t,s)V_1^TA\theta (s)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}= 0, t\in D_i^1, s\in D_j^2, \\ \frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =K_2(t,s)V_2^TA^T\lambda (t)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}= 0, t\in D_i^1, s\in D_j^2 \end{array}\right. } \end{aligned}$$
(4.14)

The Taylor expansion of \(u_1(\mu (t),\eta (s))\) at \((t^*,s^*)\) in domain \(D_i^1\times D_j^2\) is as follows.

$$\begin{aligned}&u_1(\mu (t),\eta (s))=u_1(\mu (t^*),\eta (s^*))+\frac{1}{2}\frac{\partial ^2 u_1}{\partial t^2}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}}(t-t^*)^2 \nonumber \\&\quad +\frac{1}{2}\frac{\partial ^2 u_1}{\partial s^2}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}}(s-s^*)^2 + \frac{\partial ^2 u_1}{\partial t \partial s}\mid _{\begin{array}{c} t=t^*\\ s=s^* \end{array}}(t-t^*)(s-s^*)\nonumber \\&\quad + H.O.T., t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.15)

If we calculate the second-order partial derivative of Eq. (4.14), then we have the following.

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial t^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}=\frac{\partial K_1(t,s)}{\partial t} V_1^TA\theta (s)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0, t\in D_i^1, s\in D_j^2. \\ &{}\frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial s^2}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}=\frac{\partial K_2(t,s)}{\partial s} V_2^TA^T\lambda (t)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} =0, t\in D_i^1, s\in D_j^2. \end{array}\right. } \end{aligned}$$

Therefore, Eq. (4.15) is simplified as follows.

$$\begin{aligned}&u_1(\mu (t),\eta (s))=u_1(\mu (t^{*}),\eta (s^{*}))\nonumber \\&\quad +\frac{\partial ^2 u_1}{\partial t \partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}(t-t^{*})(s-s^{*}), t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.16)

If we calculate the mixed second-order partial derivative of the first equation in (4.14), then we have the following.

$$\begin{aligned} \frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial t\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}&= \frac{\partial K_1(t,s)}{\partial s}V_1^TA\theta (s)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\nonumber \\&\quad +K_1(t,s)V_1^TA\frac{d\theta (s)}{ds}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \nonumber \\&= K_1(t,s)V_1^TA\frac{d\theta (s)}{ds}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\nonumber \\&=K_1(t^*,s^*)V_1^TAE. \end{aligned}$$
(4.17)

If we calculate the mixed second-order partial derivative of the second equation in (4.14), then we have the following.

$$\begin{aligned} \frac{\partial ^2 u_1(\mu (t),\eta (s))}{\partial s\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}&= \frac{\partial K_2(t,s)}{\partial t}V_2^TA^T\lambda (t)\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\nonumber \\&\qquad +K_2(t,s)V_2^TA^T\frac{d\lambda (t)}{dt}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} \nonumber \\&= K_2(t,s)V_2^TA^T\frac{d\lambda (t)}{dt}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\nonumber \\&\quad =K_2(t^*,s^*)V_2^TA^TP. \end{aligned}$$
(4.18)

Since \(\mid t-t^*\mid \le 1/M\) and \(\mid s-s^*\mid \le 1/L\), based on Eq. (4.16), the following inequality is derived.

$$\begin{aligned}&\mid u_1(\mu (t),\eta (s))- u_1(\mu (t^{*}),\eta (s^{*}))\mid \nonumber \\&\quad \le \frac{1}{ML}\mid \frac{\partial ^2 u_1}{\partial t \partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}}\mid , t\in D_i^1, s\in D_j^2. \end{aligned}$$
(4.19)

Generally speaking, the right-hand sides of Eqs. (4.17) and (4.18) should be identical. However, because they are approximations of the mixed second-order partial derivative, they may not be equal to each other. Therefore, we choose \(\epsilon _1\) as the maximum of the two such that,

$$\begin{aligned}&\epsilon _1= \max \left\{ \frac{1}{ML}K_1(t^{*},s^{*})\mid V_1^TAE \mid ,\right. \\&\qquad \left. \frac{1}{ML}K_2(t^{*},s^{*})\mid V_2^TA^TP\mid \right\} . \end{aligned}$$

We have the following inequality.

$$\begin{aligned} \mid u_1(\mu (t),\eta (s))- u_1(\mu (t^{*}),\eta (s^{*}))\mid \le \epsilon _1, t\in D_i^1, s\in D_j^2. \end{aligned}$$

Similarly, we can prove that there exists \(\epsilon _2\) as in (4.11) such that \(u_2(\mu (t),\eta (s))\) satisfies inequality (4.13). \(\square \)

Based on the definition of \(\epsilon \)-NE (Lipton et al. 2003), Theorem 3 indicates that \((\mu (t^{*}),\eta (s^{*}))\) is an \(\epsilon \)-NE with \(\epsilon =\max \{\epsilon _1,\epsilon _2\}\). For a given fuzzification, the values of \(\epsilon _i (i=1,2)\) are determined. Theoretically, \(\epsilon _i (i=1,2)\) can be any value in the interval \([0,+\infty )\). Theorem 3 thus provides a measurement of the approximation deviation. Numerical results in Sect. 5.1 show that the value of \(\epsilon _i (i=1,2)\) can be as small as 0.1.

5 Example and numerical results

First, an example of a \(5\times 5\) bimatrix game is given to demonstrate the procedure of the novel algorithm. Secondly, we compare the new algorithm with the LH algorithm in numerical experiments.

Example 1

Find an \(\epsilon \)-NE in the following bimatrix game.

$$\begin{aligned} A =\begin{pmatrix} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 1&{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ \end{pmatrix}, B= \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 1&{}\quad 0 &{}\quad 1 &{}\quad 0\\ 1 &{}\quad 0&{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \end{pmatrix}. \end{aligned}$$

Step 1. Fuzzify strategies x and y. A possibility distribution for player 1 is given as follows.

$$\begin{aligned} \lambda _1= & {} (0,0,1), \lambda _2=(0,1,1), \lambda _3=(0,1,1),\\ \lambda _4= & {} (0,0,1), \lambda _5=(0,1,1). \end{aligned}$$

The membership functions \(\lambda _i(t)\) of TFNs \(\lambda _i(i=1,\ldots ,5)\) are as follows.

$$\begin{aligned} \lambda _1(t)&={\left\{ \begin{array}{ll} 1-t &{}t\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. }, \lambda _2(t)={\left\{ \begin{array}{ll} t &{}t\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. },\\ \lambda _3(t)&={\left\{ \begin{array}{ll} t &{} t\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. },\\ \lambda _4(t)&={\left\{ \begin{array}{ll} 1-t &{} t \in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. }, \lambda _5(t) ={\left\{ \begin{array}{ll} t &{} t\in [0,1] \\ 0 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$

A possibility distribution for player 2 is given as follows.

$$\begin{aligned} \theta _1= & {} (0,1,1), \theta _2=(0,1,1), \theta _3=(0,0,1),\\ \theta _4= & {} (0,1,1), \theta _5=(0,1,1). \end{aligned}$$

The membership functions \(\theta _j(s)\) of the TFNs \(\theta _j (j=1,\ldots ,5)\) are as follows.

$$\begin{aligned} \theta _1(s)=&{\left\{ \begin{array}{ll} s &{} s\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. }, \theta _2(s)={\left\{ \begin{array}{ll} s &{} s\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. },\\ \theta _3(s)=&{\left\{ \begin{array}{ll} 1-s &{} s\in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. }, \\ \theta _4(s)=&{\left\{ \begin{array}{ll} s &{} s \in [0, 1] \\ 0 &{} \text {otherwise} \end{array}\right. }, \theta _5(s) ={\left\{ \begin{array}{ll} s &{} s\in [0,1] \\ 0 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$

Step 2. Verify \(K_i(t,s)>0(i=1,2)\).

$$\begin{aligned} K_1(t,s)&=\frac{1}{(2+t)^2\times (1+3s)}>0, t\in [0,1], s\in [0,1] \\ K_2(t,s)&=\frac{1}{(1+3s)^2\times (2+t)}>0, t\in [0,1], s\in [0,1]. \end{aligned}$$

Step 3. Calculate \(V_1\) and \(V_2\). Based on equations (3.19) and (3.20), we can calculate \(V_i(i=1,2)\) as follows.

$$\begin{aligned} V_1^T=(-3,2,2,-3,2), \quad V_2^T=(1,1,-4,1,1). \end{aligned}$$

Step 4. Calculate E, H, P and Q with Eqs. (3.23) and (3.24).

$$\begin{aligned} P^T&=(-1, 1, 1, -1, 1), Q^T=(-1, 0, 0, -1, 0) \\ E^T&= (1, 1, -1, 1, 1), H^T=(0, 0, -1, 0, 0). \end{aligned}$$

Step 5. Calculate \((t^*,s^*)\) with Eq. (3.22): \((t^{*},s^{*})=(0.75, 0.4)\). The inequalities in (3.25) are satisfied, since \(0.75\in [0,1]\) and \(0.4\in [0,1]\) (the TFN mean values 0 and 1 leave the domains undivided, so \(M=L=1\)).

Step 6. Verify that inequalities (4.3) and (4.4) are satisfied.

$$\begin{aligned}&\frac{\partial u_1(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_1(t^*,s^*)V_1^TA\theta (s^*) = 0 \\&\frac{\partial u_2(\mu (t),\eta (s))}{\partial t}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_1(t^*,s^*)V_1^TB\theta (s^*) = 0 \\&\frac{\partial u_1(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_2(t^*,s^*)V_2^TA^T\lambda (t^*) \approx -0.0939 \\&\frac{\partial u_2(\mu (t),\eta (s))}{\partial s}\mid _{\begin{array}{c} t=t^{*}\\ s=s^{*} \end{array}} = K_2(t^*,s^*)V_2^TB^T\lambda (t^*) \approx -0.0939. \end{aligned}$$

Both products are therefore nonnegative, so inequalities (4.3) and (4.4) hold.

Step 7. Calculate \((x^{*},y^{*})\). Based on Eqs. (3.7) and (3.10), we get the following.

$$\begin{aligned} x^{*}&=(0.0909,0.2727,0.2727,0.0909,0.2727)\\ y^{*}&=(0.1818,0.1818,0.2727,0.1818,0.1818). \end{aligned}$$

Step 8. Calculate the parameter \(\epsilon _i(i=1,2)\). Based on Eqs. (4.10) and (4.11), we can get \(\epsilon _1\approx 0.3005\) and \(\epsilon _2 \approx 0.3005\).
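For instance, with \(M=L=1\) here, the first term of Eq. (4.10) evaluates to

$$\begin{aligned} K_1(t^{*},s^{*})\mid V_1^TAE\mid =\frac{\mid -5\mid }{(2+0.75)^2(1+3\times 0.4)}\approx 0.0601\times 5\approx 0.3005, \end{aligned}$$

and the second term gives the same value, \(K_2(t^{*},s^{*})\mid V_2^TA^TP\mid \approx 0.0751\times 4\approx 0.3005\).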

Step 9. Calculate \(u_i(\mu (t^*),\eta (s^*)) (i=1,2)\): \(u_1(\mu (t^*),\eta (s^*))\approx 0.2727\) and \(u_2(\mu (t^*),\eta (s^*))\approx 0.2727.\)

5.1 Numerical results

The algorithm is implemented in C++ with Microsoft Visual Studio 2015 and run on a computer with one Intel Core i5-7200U CPU (2.50 GHz) and 8 GB RAM.

First, we compare the new algorithm with the LH algorithm on one instance for each size 12, 16, 24, 32, 48, 64, and 96 of bimatrix games. Each instance is generated with the C++ generator of random numbers uniformly distributed in the interval [0, 1]; the matrices \(A=(a_{ij})\) and \(B=(b_{ij})\) are generated independently for each size. The data of the LH algorithm in Table 1 are obtained with the Windows version gambit-lcp.exe of Gambit (McKelvey et al. 2014). The data of the new algorithm in Table 1 are obtained with \(\epsilon _i=0.1 (i=1,2)\).
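A minimal sketch of such instance generation with standard C++ facilities (the experiments specify only uniformity on [0, 1], so the particular engine below is an assumption):

```cpp
#include <random>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// An n x n payoff matrix with entries drawn uniformly from [0, 1].
Mat randomPayoff(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    Mat M(n, std::vector<double>(n));
    for (auto& row : M)
        for (double& v : row) v = unif(rng);
    return M;
}
// Usage: std::mt19937 rng(seed); Mat A = randomPayoff(96, rng), B = randomPayoff(96, rng);
```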

Table 1 The comparison of the new algorithm with the LH algorithm

Table 1 shows that the new algorithm is superior to the LH algorithm (gambit-lcp.exe) for every game size from the perspective of computational time. However, it may not be prudent to conclude that one algorithm is better than another from a single instance, because computational times differ from one instance to another.

To obtain an unbiased comparison, we use sets of random bimatrix games to measure the average computational time (McKelvey and McLennan 1996). We generate sets of 100 random games by using the C++ generator of random numbers uniformly distributed in the interval [0, 1]. In all cases, the computational time is measured until one approximate NE is found. The average computational times for each game size are shown in Table 2, and the data are visualized in Fig. 3.

Table 2 Average times on 100 random games in seconds
Fig. 3 Computational time comparison with the LH algorithm

When the game size is at most 24, the computational time of the new algorithm is similar to that of the classical LH algorithm. However, as the game size increases, the computational time of the classical LH algorithm surges sharply, while the growth rate of the new algorithm's computational time stays within a factor of three. For example, the computational time of the classical LH algorithm is more than sixfold that of the new algorithm when the game size is 96.

6 Conclusion

This paper proposed a new algorithm for computing approximate mixed NE in bimatrix games. The advantages of the algorithm are as follows: it encapsulates the standard simplices in normalized possibility distributions via fuzzification; it transforms the NE problem in bimatrix games into a reduced form by taking advantage of the piecewise linearity of possibility distributions; its solutions are saddle points of the payoff functions; and it provides a method of determining how close an approximate NE is to a solution, with the numerical results showing that the approximation deviation can be as small as 0.1.

The example in Sect. 5 demonstrates that the algorithm is efficient and easy to use when proper possibility distributions are defined. The numerical results show that the new algorithm is superior to the LH algorithm from the perspective of computational time. The essential and vital step of the algorithm is fuzzifying the mixed strategies (x, y) appropriately; appropriate fuzzifications result in both better performance and better accuracy in calculating mixed NE.

This work leaves much potential for future research, for instance, comprehensive comparisons with other algorithms such as semidefinite programming and heuristic algorithms, and the theoretical analysis of the new algorithm from the perspective of computational complexity theory. For example, it is necessary to categorize the complexity class of the new algorithm.