1 Introduction

Throughout the paper, we suppose that \(\mathcal {H}\) is a real Hilbert space endowed with the inner product 〈⋅,⋅〉 and its induced norm ∥⋅∥. Let \(C\subseteq \mathcal {H}\) be a nonempty closed convex subset and \(f: C\times C \to \mathbb {R}\) be a bifunction satisfying f(x,x)=0 for every x∈C. As usual, we call such a bifunction an equilibrium bifunction. We consider the following equilibrium problem:

$$ \text{Find }~ x^{\ast} \in C:~ f(x^{\ast},x) \geq 0 \quad \forall x\in C. $$
(EP)

This problem is also often called Ky Fan’s inequality due to his contribution to the subject.

Equilibrium problem (EP) provides a unified formulation for a number of problems, such as optimization problems, saddle point problems, variational inequalities, fixed point problems, and Nash equilibrium problems, in the sense that it includes these problems as particular cases (see, for instance, [4, 6, 22]).

An important approach to solving equilibrium problem (EP) is the subgradient projection method, which can be regarded as an extension of the steepest descent projection method in smooth optimization. It is well known that when the bifunction f is convex and subdifferentiable with respect to the second argument and is Lipschitz continuous and strongly monotone on C, one can choose the regularization parameters so that this method is linearly convergent (see, e.g., [23]). However, when f is merely monotone, the method may fail to converge (see Example 12.1.3 in [9] for monotone variational inequalities). In recent years, the extragradient (or double projection) method developed by Korpelevich in [17] has been extended to obtain convergent algorithms for pseudomonotone equilibrium problems [26]. The extragradient method requires two projections onto the strategy set C, which in some cases is computationally costly. Recently, in [8, 28], inexact subgradient algorithms using only one projection have been proposed for solving equilibrium problems with paramonotone equilibrium bifunctions. Other approaches, such as the auxiliary problem principle [18, 24], penalization techniques [5, 22], gap and merit functions [19, 25], and the Tikhonov and proximal point regularization methods [12–14, 16, 20], are also commonly used for equilibrium problems. Results on the solution existence for equilibrium problems can be found in, e.g., [2–4, 6].

In this paper, we study equilibrium problem (EP) with strongly pseudomonotone bifunctions. We show the existence of a unique solution of the problem. We then propose a generalized projection method for strongly pseudomonotone equilibrium problems. Three main features of the proposed method are:

  • It uses only one projection per iteration and does not require Lipschitz continuity, while still guaranteeing strong convergence;

  • It allows the moving directions to be chosen in a rather general way, taking both the cost bifunction and the feasible set into account;

  • It does not require that the bifunction is subdifferentiable with respect to the second argument everywhere.

2 Solution Existence

As usual, by P_C we denote the projection operator onto the closed convex set C with respect to the norm ∥⋅∥, that is,

$$P_{C}(x) \in C:~ \|x - P_{C}(x)\| \leq \|x - y\| \quad \forall y \in C. $$

The following well-known results on the projection operator will be used in the sequel.

Lemma 1

([1]) Suppose that C is a nonempty closed convex set in \(\mathcal {H}\) . Then,

  1. (i)

P_C(x) is well defined and single valued for every x;

  2. (ii)

π = P_C(x) if and only if 〈x−π, y−π〉 ≤ 0 for all y∈C;

  3. (iii)

$$\|P_{C}(x)-P_{C}(y)\|^{2} \leq \|x-y\|^{2} - \|P_{C}(x)-x+y-P_{C}(y)\|^{2} \quad \forall x, y \in C. $$
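As a quick numerical illustration (not part of the paper), properties (ii) and (iii) can be checked for the projection onto a box in \(\mathbb{R}^n\), where P_C reduces to componentwise clipping; the box C and the sampled test points below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0              # the box C = [-1, 1]^n, our own choice


def proj_box(x):
    # projection onto the box C: componentwise clipping
    return np.clip(x, lo, hi)


n = 5
for _ in range(1000):
    x = rng.normal(scale=3.0, size=n)
    y = rng.uniform(lo, hi, size=n)      # an arbitrary point of C
    pi = proj_box(x)
    # Lemma 1 (ii): <x - pi, y - pi> <= 0 for every y in C
    assert np.dot(x - pi, y - pi) <= 1e-12
    # Lemma 1 (iii), checked for a second point z
    z = rng.normal(scale=3.0, size=n)
    pz = proj_box(z)
    lhs = np.sum((pi - pz) ** 2)
    rhs = np.sum((x - z) ** 2) - np.sum((pi - x + z - pz) ** 2)
    assert lhs <= rhs + 1e-12
print("projection properties (ii) and (iii) verified on random samples")
```

Property (iii) is the firm nonexpansiveness of P_C; the sampling above merely illustrates it, it is of course no substitute for the proof in [1].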

We recall some well-known definitions on monotonicity (see, e.g., [2,3,6]).

Definition 1

A bifunction \(\phi : C\times C\to \mathbb {R}\) is said to be

  1. (a)

    strongly monotone on C with modulus β>0 (shortly β-strongly monotone) if

    $$\phi(x,y) + \phi(y,x)\leq -\beta \|x - y\|^{2} \quad \forall x, y \in C; $$
  2. (b)

    monotone on C, if

    $$\phi(x,y) + \phi(y,x) \leq 0 \quad \forall x, y \in C; $$
  3. (c)

    strongly pseudomonotone on C with modulus β>0 (shortly β-strongly pseudomonotone), if

    $$\phi(x,y) \geq 0\quad \Longrightarrow\quad \phi(y,x) \leq - \beta\|x-y\|^{2} \quad \forall x, y\in C; $$
  4. (d)

    pseudomonotone on C, if

    $$\phi(x, y) \geq 0\quad \Longrightarrow\quad \phi(y,x) \leq 0 \quad \forall x, y\in C. $$

From the definitions, it follows that \({\mathrm (a)} \Rightarrow {\mathrm (b)}\Rightarrow {\mathrm (d)}\) and \({\mathrm (a)}\Rightarrow {\mathrm (c)} \Rightarrow {\mathrm (d)}\), while in general there is no implication between (b) and (c). Furthermore, if f is strongly monotone (respectively, strongly pseudomonotone) with modulus β>0, then it is strongly monotone (respectively, strongly pseudomonotone) with modulus \(\beta ^{\prime }\) for every \(0 < \beta ^{\prime } \leq \beta \).

The following example of a strongly pseudomonotone bifunction generalizes the example of a strongly pseudomonotone operator given in [15]. Let

$$f(x,y):= (R-\|x\|) g(x,y), \quad B_{r}:= \{x \in \mathcal{H}: \|x\| \leq r\}, $$

where g is strongly monotone on B_r with modulus β>0, for instance g(x,y)=〈x, y−x〉, and R>r>0. We claim that f is strongly pseudomonotone on B_r. Indeed, suppose that f(x,y)≥0. Since x∈B_r and hence R−∥x∥>0, we have g(x,y)≥0. Then, by the β-strong monotonicity of g on B_r, g(y,x)≤−β∥x−y∥² for every x,y∈B_r. From the definition of f and y∈B_r, it follows that

$$f(y,x) = (R - \|y\|) g(y,x) \leq -\beta (R - \|y\|) \|x-y\|^{2}\leq -\beta (R - r) \|x-y\|^{2}. $$

Thus, f is strongly pseudomonotone on B r with modulus β(Rr).
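This modulus can also be checked numerically. The sketch below (our own illustration, with the choices R=2, r=1, and g(x,y)=〈x, y−x〉, for which β=1) samples random pairs in B_r and tests the implication defining strong pseudomonotonicity with modulus β(R−r).

```python
import numpy as np

rng = np.random.default_rng(1)
R, r, beta = 2.0, 1.0, 1.0       # our own choices; g(x,y) = <x, y-x> is 1-strongly monotone


def g(x, y):
    return np.dot(x, y - x)


def f(x, y):
    # the bifunction f(x,y) = (R - ||x||) g(x,y) from the example
    return (R - np.linalg.norm(x)) * g(x, y)


def sample_ball(dim=3):
    # uniform sample from the ball B_r in R^dim
    v = rng.normal(size=dim)
    return rng.uniform() ** (1.0 / dim) * r * v / np.linalg.norm(v)


violations = 0
for _ in range(5000):
    x, y = sample_ball(), sample_ball()
    if f(x, y) >= 0:
        # strong pseudomonotonicity with modulus beta * (R - r)
        if not f(y, x) <= -beta * (R - r) * np.linalg.norm(x - y) ** 2 + 1e-10:
            violations += 1
print("violations:", violations)   # expect 0, as guaranteed by the argument above
```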

The following lemma, which will be used to prove Proposition 1 below, is a direct consequence of Theorem 3.2 in [3].

Lemma 2

Let \(\phi : C\times C \to \mathbb {R}\) be an equilibrium bifunction such that ϕ(⋅,y) is hemicontinuous for each y∈C and ϕ(x,⋅) is lower semicontinuous convex for each x∈C. Suppose that the following coercivity condition holds

$$\exists ~ \text{ compact set } ~ W:~ (\forall x \in C\setminus W,~ \exists y\in C: \phi(x,y) < 0). $$

Then, the equilibrium problem (EP) has a solution.

In what follows, we need the following blanket assumptions on the bifunction f:

  1. (A1)

For each x∈C, the function f(x,⋅) is lower semicontinuous and convex (not necessarily subdifferentiable everywhere), and for each y∈C, the function f(⋅,y) is hemicontinuous on C;

  2. (A2)

    f is β-strongly pseudomonotone on C.

It is well known that if f is strongly monotone on C, then under assumption (A1), equilibrium problem (EP) has a unique solution. The following lemma extends this result to (EP) with strongly pseudomonotone bifunctions.

Proposition 1

Suppose that f is β-strongly pseudomonotone on C. Then, under assumption (A1), Problem (EP) has a unique solution.

Proof

First, suppose that C is unbounded. Then, by Lemma 2, it is sufficient to prove the following coercivity condition:

$$ \exists \text{ closed ball }~ B:~ (\forall x \in C\setminus B,~ \exists y\in C\cap B: f(x,y) < 0). $$
(C0)

Indeed, otherwise, for every closed ball B_r around 0 with radius r, there would exist x^r∈C∖B_r such that \(f(x^{r},y) \geq 0~ \forall y\in C\cap B_{r}\).

Fix r_0>0 and a point \(y^{0} \in C\cap B_{r_{0}}\). Then for every r>r_0, there exists x^r∈C∖B_r such that f(x^r,y^0)≥0. Thus, since f is β-strongly pseudomonotone, we have

$$ f(y^{0}, x^{r} ) + \beta\|x^{r}-y^{0}\|^{2}\leq 0 \quad \forall r. $$
(1)

On the other hand, since C is convex and f(y^0,⋅) is convex on C, it is well known from convex analysis that there exists x^0∈C such that ∂_2 f(y^0,x^0)≠∅, where ∂_2 f(y^0,x^0) stands for the subdifferential of the convex function f(y^0,⋅) at x^0. Take w^∗∈∂_2 f(y^0,x^0); by the definition of the subgradient, one has

$$\langle w^{\ast}, x - x^{0}\rangle + f(y^{0},x^{0}) \leq f(y^{0},x) \quad \forall x. $$

Taking x=x^r yields

$$\begin{array}{@{}rcl@{}} f(y^{0}, x^{r} ) + \beta\|x^{r}-y^{0}\|^{2} &\geq& f(y^{0},x^{0}) + \langle w^{\ast}, x^{r} - x^{0}\rangle +\beta\|x^{r}-y^{0}\|^{2}\\ &\geq& f(y^{0},x^{0}) -\|w^{\ast}\| \|x^{r}- x^{0}\| + \beta\|x^{r} - y^{0}\|^{2}. \end{array} $$

Letting \(r \to \infty \), since \(\|x^{r} \| \to \infty \), we obtain \(f(y^{0}, x^{r} ) + \beta \|x^{r}-y^{0}\|^{2} \to \infty \), which contradicts (1). Thus, the coercivity condition (C0) must hold, and by virtue of Lemma 2, equilibrium problem (EP) admits a solution.

In the case where C is bounded, the proposition is a consequence of Ky Fan’s theorem [10].

The uniqueness of the solution is immediate from the strong pseudomonotonicity of f: if x^∗ and x̄ were two solutions, then f(x^∗,x̄)≥0 and f(x̄,x^∗)≥0, while strong pseudomonotonicity would yield f(x̄,x^∗)≤−β∥x^∗−x̄∥², whence x^∗=x̄. □

We recall [11] that an operator \(F : C \to \mathcal {H}\) is said to be strongly pseudomonotone on C with modulus β>0, shortly β-strongly pseudomonotone, if

$$\langle F(x), y-x\rangle \geq 0\quad \Longrightarrow \quad\langle F(y),y-x\rangle \geq \beta\|y-x\|^{2} \quad \forall x, y \in C. $$

In order to apply the above proposition to the variational inequality problem

$$ \text{ Find }~ x^{\ast}\in C: \langle F(x^{\ast}), y- x^{\ast}\rangle \geq 0 \quad \forall y\in C, $$
(VI)

where F is a strongly pseudomonotone operator on C, we define the bifunction f by taking

$$ f(x,y):= \langle F(x), y-x\rangle. $$
(2)

It is obvious that x^∗ is a solution of (VI) if and only if it is a solution of equilibrium problem (EP) with f defined by (2). Moreover, it is easy to see that F is β-strongly pseudomonotone and hemicontinuous on C if and only if so is f. The following solution existence result, which is an immediate consequence of Proposition 1, seems not to have appeared in the literature.

Corollary 1

Suppose that F is hemicontinuous and strongly pseudomonotone on C. Then, the variational inequality problem (VI) has a unique solution.

3 A Generalized Projection Method for Strongly Pseudomonotone EPs

For ε≥0, we call a point x_ε∈C an ε-solution of Problem (EP) if f(x_ε,y)≥−ε for every y∈C. The following well-known lemma will be used in the proof of the convergence theorem below.

Lemma 3

[21] Suppose that \(\{\alpha _{k}\}_{0}^{\infty }\) is an infinite sequence of positive numbers satisfying

$$\alpha_{k+1} \leq \alpha_{k} + \xi_{k} \quad \forall k $$

with \({\sum }_{k=0}^{\infty }\xi _{k} < \infty \) . Then, the sequence {α k } is convergent.

[Algorithm 1 appears here as a figure in the original. As can be recovered from the convergence analysis below, its iteration reads: choose positive stepsizes {ρ_k} with Σ_k ρ_k = ∞ and Σ_k ρ_k² < ∞ (3); at iteration k, find a direction g^k such that f(x^k,y) + 〈g^k, x^k−y〉 ≥ −ρ_k for all y∈C (4); terminate if the stopping criteria of Steps 2 and 3 are met, and otherwise set x^{k+1} := P_C(x^k − ρ_k g^k).]

Theorem 1

Suppose that assumptions (A1) and (A2) are satisfied. Then,

  1. (i)

    if the algorithm terminates at iteration k, then x k is an ε-solution;

  2. (ii)

    it holds that

    $$ \|x^{k+1} - x^{\ast}\|^{2} \leq (1-2\beta\rho_{k})\|x^{k}-x^{\ast}\|^{2}+2{\rho^{2}_{k}} +{\rho^{2}_{k}}\|g^{k}\|^{2}\quad \forall k, $$
    (5)

    where x is the unique solution of (EP). Furthermore, if the algorithm does not terminate, then the sequence {x k } strongly converges to the solution x whenever the sequence {g k } is bounded.

Proof

  1. (i)

If the algorithm terminates at Step 2, then g^k=0 and ρ_k≤ε. Then, by (4), f(x^k,y)≥−ρ_k≥−ε for every y∈C. Hence, x^k is an ε-solution. If the algorithm terminates at Step 3, then x^k is an ε-solution by Lemma 1 (ii) and (4).

  2. (ii)

    Since x k+1=P C (x kρ k g k), one has

    $$\begin{array}{@{}rcl@{}} \|x^{k+1} - x^{\ast}\|^{2} &\leq& \|x^{k} - \rho_{k} g^{k} - x^{\ast}\|^{2} \\ &=&\|x^{k} -x^{\ast}\|^{2} - 2\rho_{k}\langle g^{k}, x^{k}-x^{\ast}\rangle +{\rho^{2}_{k}}\|g^{k}\|^{2}. \end{array} $$
    (6)

Applying (4) with y=x , we obtain

$$f(x^{k}, x^{\ast}) + \langle g^{k}, x^{k} - x^{\ast}\rangle \geq -\rho_{k}, $$

which implies

$$ - \langle g^{k}, x^{k}-x^{\ast}\rangle \leq f(x^{k},x^{\ast}) + \rho_{k}. $$
(7)

Then, it follows from (6) and (7) that

$$ \|x^{k+1} - x^{\ast}\|^{2} \leq \|x^{k} -x^{\ast}\|^{2} +2\rho_{k} \left( f(x^{k},x^{\ast}) + \rho_{k} \right) + {\rho^{2}_{k}}\|g^{k}\|^{2}. $$
(8)

Since x is a solution, f(x ,x k)≥0, it follows from β-strong pseudomonotonicity of f that

$$f(x^{k},x^{\ast}) \leq-\beta\|x^{k}-x^{\ast}\|^{2}. $$

Combining the last inequality with (8), we obtain

$$\begin{array}{@{}rcl@{}} \|x^{k+1} - x^{\ast}\|^{2} &\leq& \|x^{k} -x^{\ast}\|^{2}-2\beta\rho_{k}\|x^{k}-x^{\ast}\|^{2}+2{\rho^{2}_{k}} + {\rho^{2}_{k}} \|g^{k}\|^{2}\\ &=& (1-2\beta\rho_{k})\|x^{k}-x^{\ast}\|^{2}+2{\rho^{2}_{k}} +{\rho^{2}_{k}}\|g^{k}\|^{2}. \end{array} $$
(9)

Now suppose that the algorithm does not terminate, and that ∥g k∥≤C for every k. Then, it follows from (9) that

$$\begin{array}{@{}rcl@{}} \|x^{k+1} - x^{\ast}\|^{2}&\leq& (1-2\beta\rho_{k})\|x^{k}-x^{\ast}\|^{2} + (2+C^{2}){\rho^{2}_{k}}\\ &=& \|x^{k} - x^{\ast}\|^{2} -\lambda_{k}\|x^{k} - x^{\ast}\|^{2} + (2+ C^{2}) {\rho^{2}_{k}}, \end{array} $$
(10)

where λ_k := 2βρ_k. Since \({\sum }_{k=1}^{\infty } {\rho ^{2}_{k}} < \infty \), by virtue of Lemma 3 we can conclude that the sequence {∥x^k−x^∗∥²} is convergent. In order to prove that the limit of this sequence is 0, we sum inequality (10) from 1 to k to obtain

$$\|x^{k+1}- x^{\ast}\|^{2} \leq\|x^{1}-x^{\ast}\|^{2} - \sum\limits_{j=1}^{k}\lambda_{j}\|x^{j}-x^{\ast}\|^{2} + (2+ C^{2})\sum\limits_{j=1}^{k}{\rho^{2}_{j}}, $$

which implies

$$ \|x^{k+1}- x^{\ast}\|^{2} +\sum\limits_{j=1}^{k}\lambda_{j}\|x^{j}-x^{\ast}\|^{2} \leq \|x^{1}-x^{\ast}\|^{2} + (2+ C^{2}) \sum\limits_{j=1}^{k}{\rho^{2}_{j}}. $$
(11)

Since λ j :=2β ρ j , we have

$$ \sum\limits_{j=1}^{\infty} \lambda_{j} = 2\beta \sum\limits_{j=1}^{\infty} \rho_{j} = \infty. $$
(12)

Note that {x^k} is bounded and that \({\sum }_{j=0}^{\infty } {\rho ^{2}_{j}} < \infty \). From (11), \({\sum }_{j=1}^{\infty } \lambda _{j}\|x^{j}-x^{\ast }\|^{2} < \infty \), which together with (12) implies \(\liminf _{k}\|x^{k}-x^{\ast }\|^{2} = 0\); since {∥x^k−x^∗∥²} is convergent, we deduce that ∥x^k−x^∗∥²→0 as \(k \to \infty \). □
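As an illustration (not part of the paper), the iteration analyzed in Theorem 1 can be sketched for the variational inequality case (2), taking g^k := F(x^k), which satisfies (4) since then f(x^k,y) + 〈g^k, x^k−y〉 = 0, and ρ_k = 1/(k+1), so that Σρ_k = ∞ and Σρ_k² < ∞. The operator F(x) = x − c, the box C, and the point c are our own choices; for this F, which is 1-strongly monotone, the unique solution is P_C(c).

```python
import numpy as np

c = np.array([2.0, -1.0])           # our own choice; F(x) = x - c is 1-strongly monotone
lo, hi = 0.0, 1.0                   # the feasible box C = [0, 1]^2


def F(x):
    return x - c


def proj_C(x):
    # projection onto the box C
    return np.clip(x, lo, hi)


x = np.array([0.5, 0.5])            # starting point
for k in range(2000):
    rho = 1.0 / (k + 1)             # sum rho_k = inf, sum rho_k^2 < inf
    g = F(x)                        # g^k := F(x^k) satisfies condition (4) in the VI case
    x = proj_C(x - rho * g)

x_star = proj_C(c)                  # known solution of this VI: the projection of c onto C
print(x, x_star)                    # the iterates converge to x_star = [1, 0]
```

Only one projection per iteration is needed, in contrast with extragradient-type schemes, which is exactly the first feature highlighted in the introduction.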

The algorithm described above can be regarded as an extension of the one in [28] to a Hilbert space setting. The main difference lies in the determination of g^k given by formula (4). This formula is motivated by the projection-descent method in optimization, where a moving direction must be both descent and feasible. Such a direction thus involves both the objective function and the feasible domain. In fact, moving directions defined by (4) rely not only upon the gradient or a subgradient, as in [28] and other projection algorithms for equilibrium problems, but also upon the feasible set. Points (i), (ii), and (iii) in Remark 1 below discuss formula (4) in more detail.

Remark 1

  1. (i)

A subproblem in this algorithm is to find a moving direction g^k satisfying (4). Clearly, if g^k is a ρ_k-subgradient of the convex function f(x^k,⋅) at x^k, then g^k satisfies (4).

    When \(m_{k}:=\inf _{y\in C} f(x^{k},y) > -\infty \), it is easy to see that if g k is any vector satisfying

    $$\langle g^{k}, y- x^{k} \rangle \leq m_{k} + \rho_{k} := t_{k} \quad \forall y\in C, $$

i.e., g^k is a vector in the t_k-normal set N_C^{t_k}(x^k) of C at x^k, then (4) holds true.

  2. (ii)

    For variational inequality (VI) with f(x,y) defined by (2), formula (4) takes the form

    $$ \langle F(x^{k}), y- x^{k} \rangle + \langle g^{k}, x^{k}- y \rangle \geq -\rho_{k} \quad \forall y \in C, $$
    (13)

which means that g^k−F(x^k)∈N_C^{ρ_k}(x^k), where N_C^{ρ_k}(x^k) denotes the ρ_k-normal set of C at x^k, that is,

    $$N^{\rho_{k}}_{C}(x^{k}):= \{w^{k}: \langle w^{k}, y-x^{k}\rangle \leq \rho_{k}~ \forall y\in C\}. $$

In the common case when C is given by \(C := \{x \in \mathcal {H}: g(x) \leq 0\}\) with g being a subdifferentiable continuous convex function, one can take g^k=F(x^k) when g(x^k)<0, while g^k may be any vector such that g^k−F(x^k)∈∂g(x^k) when g(x^k)=0. Since

    $$N_{C}(x^{k}) = \{0\}~ \text { if }~ g(x^{k}) < 0,\quad \text{and} \quad N_{C}(x^{k}) \supseteq \partial g(x^{k})~\text{ if }~ g(x^{k}) = 0, $$

    in both cases \(g^{k} - F(x^{k}) \in N_{C}(x^{k})\subset N^{\rho _{k}}_{C}(x^{k})\) for any ρ k ≥0.

  3. (iii)

The direction g^k satisfying g^k−F(x^k)∈N_C^{ρ_k}(x^k) takes not only the cost operator F but also the constraint set C into account. This is helpful in certain cases; for example, it may allow avoiding the projection onto C. Indeed, it may happen that −F(x^k) is not a feasible direction of C at x^k while −g^k is.

  4. (iv)

Note that since f(x^k,⋅) is convex, for every ρ_k>0, a ρ_k-subgradient of the function f(x^k,⋅) always exists at every point of dom f(x^k,⋅), but it may fail to exist when ρ_k=0. Thus, the proposed algorithm does not require the convex function f(x^k,⋅) to be subdifferentiable at every point of its domain.

  5. (v)

For implementing the algorithm, we suggest taking \(\rho _{k} := \varepsilon \rho ^{\prime }_{k}\), where \(\{\rho ^{\prime }_{k}\}\) is a monotonically decreasing positive sequence satisfying (3).
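To illustrate point (ii) of Remark 1 with a sketch under our own choices (not the paper's data): take C the closed unit ball written as C = {x : g(x) ≤ 0} with g(x) = ∥x∥ − 1, so that ∂g(x) = {x/∥x∥} for x ≠ 0. At a boundary point x^k, any g^k with g^k − F(x^k) ∈ ∂g(x^k) then satisfies the normal-cone condition 〈g^k − F(x^k), y − x^k〉 ≤ 0 for all y ∈ C.

```python
import numpy as np

rng = np.random.default_rng(2)


def F(x):
    # our own choice of cost operator (strongly monotone, but that is not needed here)
    return 2.0 * x + np.array([1.0, -1.0, 0.5])


xk = np.array([1.0, 0.0, 0.0])     # a boundary point of the unit ball: g(xk) = 0
w = xk / np.linalg.norm(xk)        # the subgradient of g(x) = ||x|| - 1 at xk
gk = F(xk) + w                     # direction with gk - F(xk) in the normal cone N_C(xk)

# check the normal-cone property: <gk - F(xk), y - xk> <= 0 for all y in C
for _ in range(2000):
    v = rng.normal(size=3)
    y = rng.uniform() ** (1.0 / 3) * v / np.linalg.norm(v)   # random point of the unit ball
    assert np.dot(gk - F(xk), y - xk) <= 1e-12
print("gk - F(xk) lies in the normal cone of C at xk")
```

Here −g^k tilts the step inward relative to −F(x^k), which is how such directions can help keep iterates feasible, in the spirit of point (iii).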

Remark 2

If f is jointly continuous on an open set Δ×Δ containing C×C, then {g^k} is bounded whenever ρ_k→0 (see, e.g., Proposition 3.4 in [29]). In the case of variational inequality (VI) with f(x,y) defined by (2), if g^k=F(x^k) and F is continuous, then {g^k} is bounded because {x^k} is bounded.

4 A Numerical Example

We consider an oligopolistic equilibrium model of electricity markets (see, e.g., [7, 27]). In this model, there are n^c companies, and each company i possesses a set I_i of generating units. Let x denote the vector whose entry x_i stands for the power generated by unit i. Following [7], we suppose that the price p is a decreasing affine function of σ, with \(\sigma = {\sum }_{i=1}^{n^g} x_{i}\), where n^g is the number of all generating units, that is,

$$p(x) = a_{0} - 2\sum\limits_{i=1}^{n^{g}} x_{i}= p(\sigma), $$

where a_0>0 is a (typically large) constant. Then, the profit made by company i is given by

$$f_{i}(x) = p(\sigma) \sum\limits_{j\in I_{i}} x_{j} - \sum\limits_{j \in I_{i}}c_{j}(x_{j}), $$

where c_j(x_j) is the cost of generating the power x_j. Unlike [7], we do not suppose that the cost c_j(x_j) is differentiable; instead, we take

$$c_{j}(x_{j}) := \max\{ {c^{0}_{j}}(x_{j}), {c^{1}_{j}}(x_{j})\} $$

with

$${c^{0}_{j}}(x_{j}) := \frac{{\alpha^{0}_{j}}}{2}{x_{j}^{2}} +{\beta^{0}_{j}}x_{j} + {\gamma^{0}_{j}}, \quad {c^{1}_{j}}(x_{j}) := {\alpha^{1}_{j}}x_{j} +\frac{{\beta^{1}_{j}}}{{\beta^{1}_{j}} + 1} \gamma_{j}^{-1/{\beta^{1}_{j}}}(x_{j})^{({\beta^{1}_{j}}+1)/{\beta^{1}_{j}}}, $$

where \({\alpha ^{k}_{j}}, {\beta ^{k}_{j}}, {\gamma ^{k}_{j}}~(k=0, 1)\) are given parameters.
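With hypothetical parameter values (the paper's actual values are given in Table 2), the nonsmooth generating cost c_j = max{c_j^0, c_j^1} can be evaluated as follows; a subgradient at x_j is simply the derivative of whichever branch attains the maximum.

```python
# hypothetical parameters for one generating unit (the paper's values are in Table 2)
a0q, b0, g0 = 0.04, 2.0, 0.0     # alpha_j^0, beta_j^0, gamma_j^0
a1, b1, g1 = 1.5, 1.0, 0.02      # alpha_j^1, beta_j^1, gamma_j


def c0(x):
    # quadratic branch c_j^0
    return 0.5 * a0q * x ** 2 + b0 * x + g0


def c1(x):
    # power-function branch c_j^1
    return a1 * x + (b1 / (b1 + 1.0)) * g1 ** (-1.0 / b1) * x ** ((b1 + 1.0) / b1)


def cost(x):
    # c_j(x_j) = max{c_j^0(x_j), c_j^1(x_j)}: convex, but not differentiable
    # where the two branches cross
    return max(c0(x), c1(x))


def cost_subgrad(x):
    # a subgradient: the derivative of the active branch
    if c0(x) >= c1(x):
        return a0q * x + b0
    return a1 + g1 ** (-1.0 / b1) * x ** (1.0 / b1)
```

The derivative of c_j^1 used above is \(\alpha ^{1}_{j} + \gamma _{j}^{-1/\beta ^{1}_{j}}\, x^{1/\beta ^{1}_{j}}\), obtained by differentiating the power term.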

Let \(x^{\min }_{j}\) and \(x^{\max }_{j}\) be the lower and upper bounds for the power generated by unit j. Then the strategy set of the model takes the form

$$C:=\{ x=(x_{1},\dots,x_{n^{g}})^{T}:~ x^{\min}_{j}\leq x_{j} \leq x^{\max}_{j}~ \forall j\}. $$

Define the matrices A, B by taking

$$ A:= 2\sum\limits_{i=1}^{n^{c}}(1- q^{i})(q^{i})^{T}, \quad B:= 2\sum\limits_{i=1}^{n^{c}}q^{i}(q^{i})^{T}, $$
(14)

where \(q^{i}:=({q^{i}_{1}},\dots ,q^{i}_{n^{g}})^{T}\) with

$$ {q^{i}_{j}}:= \left\{\begin{array}{cc} 1 & \text{ if }~ j\in I_{i}, \\ 0 & \text{ otherwise}, \end{array}\right. $$
(15)

and let

$$ a:= -a_{0} \sum\limits_{i=1}^{n^{c}} q^{i}, \quad c(x) := \sum\limits_{j=1}^{n^{g}}c_{j}(x_{j}). $$
(16)

Then, by Lemma 7 in [27], the equilibrium problem being solved can be formulated as

$$ x \in C:~ f(x,y):= \left(\left(A+\frac{3}{2}B\right) x + \frac{1}{2}By + a\right)^{T}(y-x)+c(y)-c(x)\geq 0 \quad \forall y\in C. $$
(EP)

Note that f(x,y)+f(y,x)=−(y−x)^T(A+B)(y−x). Thus, since A+B is not positive semidefinite, f may not be monotone on C. However, if we replace f by f_1 defined as

$$f_{1}(x,y):= f(x,y) - \frac{1}{2} (y-x)^{T} B(y-x), $$

then f 1 is strongly pseudomonotone on C. In fact, one has

$$f_{1}(x,y) + f_{1}(y,x) = -(y-x)^{T} (A+ 2B)(y-x). $$

Thus, if f 1(x,y)≥0, then

$$f_{1}(y,x) \leq -(y-x)^{T} (A+2B)(y-x)\leq - \lambda \|y-x\|^{2} $$

for some λ>0. The following lemma is an immediate consequence of the auxiliary problem principle (see, e.g., [23, 24]).

Lemma 4

The problem

$$\text{Find }~ x^{\ast} \in C: f_{1}(x^{\ast},y) \geq 0\quad \forall y\in C $$

is equivalent to the one

$$ \text{Find }~ x^{\ast} \in C: f_{1}(x^{\ast},y) + \frac{1}{2} (y-x^{\ast})^{T} B (y-x^{\ast}) \geq 0\quad \forall y\in C $$
(EP1)

in the sense that their solution sets coincide.

By virtue of this lemma, we can apply the proposed algorithm to the model by solving equilibrium problem (EP1), which, in turn, is just equilibrium problem (EP).
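To make the construction concrete, the sketch below assembles A, B, and a from (14)–(16) for a hypothetical assignment of six units to the three companies (the paper's actual data are in Tables 1 and 2) and verifies numerically the identity f(x,y)+f(y,x) = −(y−x)^T(A+B)(y−x), in which the cost terms cancel.

```python
import numpy as np

rng = np.random.default_rng(3)
n_g, a0 = 6, 387.0
units = [[0, 1], [2, 3], [4, 5]]   # hypothetical assignment of units to the 3 companies

q = np.zeros((3, n_g))
for i, I in enumerate(units):
    q[i, I] = 1.0                  # q^i_j = 1 if unit j belongs to company i, per (15)

ones = np.ones(n_g)
A = 2.0 * sum(np.outer(ones - qi, qi) for qi in q)   # per (14)
B = 2.0 * sum(np.outer(qi, qi) for qi in q)
a = -a0 * q.sum(axis=0)                               # per (16)


def c(x):
    # any convex cost works here; it cancels in the identity checked below
    return np.sum(x ** 2 + np.abs(x))


def f(x, y):
    # the bifunction of the equilibrium formulation (EP) for this model
    return (A @ x + 1.5 * (B @ x) + 0.5 * (B @ y) + a) @ (y - x) + c(y) - c(x)


x, y = rng.uniform(0, 5, n_g), rng.uniform(0, 5, n_g)
lhs = f(x, y) + f(y, x)
rhs = -(y - x) @ (A + B) @ (y - x)
print(abs(lhs - rhs))              # the identity holds up to rounding error
```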

We test the proposed algorithm on this problem, which corresponds to the first model in [7], where three companies (n^c=3) are considered with a_0:=387; the parameters are given in Tables 1 and 2.

Table 1 The lower and upper bounds for the power generation and companies
Table 2 The parameters of the generating unit cost functions

We implement Algorithm 1 in Matlab R2008a running on a laptop with an Intel(R) Core(TM) i3 CPU M330 2.13 GHz and 2 GB RAM. We choose \(\varepsilon = 10^{-3}\) and \(\rho _{k}:= \frac {\varepsilon }{k}\) for every k. The computational results are reported in Table 3 with the starting point x^1=0.

Table 3 The power generated by the three companies