
1 Introduction

Throughout this chapter, we assume that H is a real Hilbert space with inner product (⋅ , ⋅ ) and norm ∥ ⋅ ∥. The symbol ⇀ denotes weak convergence.

Let C be a nonempty closed convex subset of H and \(F: C \times C \rightarrow \mathbb{R}\) be a bifunction with F(x, x) = 0 for all x ∈ C. Consider the following equilibrium problem in the sense of Blum and Oettli [12]:

$$\displaystyle{ \mbox{ find}\ \ x \in C\,\,\,\mbox{ such that}\quad F(x,y) \geq 0\quad \forall \,y \in C. }$$
(1)

The equilibrium problem (1) (also called the problem of equilibrium programming, or the Ky Fan inequality) is very general in the sense that it includes, as special cases, many applied mathematical models: variational inequalities, fixed point problems, optimization problems, saddle point problems, Nash equilibrium problems in non-cooperative games, complementarity problems; see [3, 5, 6, 10, 12, 14, 17, 25] and the references therein. This problem is interesting because it allows one to unify all these particular problems in a convenient way. In recent years, many methods have been proposed for solving equilibrium and related problems [2–10, 14, 16, 26, 28, 32, 35–37]. Solution approximation methods for the equilibrium problem are often based on the resolvent of the equilibrium bifunction (see, for instance, [14]), where at each iterative step a strongly monotone regularized equilibrium problem is solved. This approach is also called the proximal point method [16, 18, 20, 26, 37].

The variational inequality problem is a special case of the equilibrium problem. For solving variational inequalities in Euclidean space, Korpelevich [21] introduced the extragradient method, in which two metric projections onto the feasible set must be computed at each iterative step. This method was extended to Hilbert spaces by Nadezhkina and Takahashi [27]. Some extragradient-like algorithms for solving variational inequality problems can be found in [19, 33, 34, 38]. In 2011, the authors of [13, 22] replaced the second projection onto the closed convex set in the extragradient method with a projection onto a half-space, and proposed the subgradient extragradient method for variational inequalities in Hilbert spaces; see also [15, 39].

In recent years, the extragradient method has been extended to equilibrium problems for monotone (more generally, pseudomonotone) and Lipschitz-type continuous bifunctions and studied both theoretically and algorithmically [1, 31, 40]. In these methods one must solve two strongly convex minimization problems over a closed convex constraint set at each iterative step. We note that similar methods were previously proposed and studied by Antipin [24].

In 1980, the Russian mathematician Popov [30] introduced a very interesting modification of the Arrow–Hurwicz scheme for approximating saddle points of convex-concave functions in Euclidean space. Let X and Y be closed convex subsets of Euclidean spaces \(\mathbb{R}^{d}\) and \(\mathbb{R}^{p}\), respectively, and let \(L: X \times Y \rightarrow \mathbb{R}\) be a differentiable convex-concave function. Then the method of [30] for approximating saddle points of L on X × Y can be written as

$$\displaystyle{\left \{\begin{array}{l} x_{1},\bar{x}_{1} \in X,\,\,\,y_{1},\bar{y}_{1} \in Y,\,\,\,\lambda > 0, \\ x_{n+1} = P_{X}\left (x_{n} -\lambda L'_{1}(\bar{x}_{n},\bar{y}_{n})\right ),\,\,y_{n+1} = P_{Y }\left (y_{n} +\lambda L'_{2}(\bar{x}_{n},\bar{y}_{n})\right ), \\ \bar{x}_{n+1} = P_{X}\left (x_{n+1} -\lambda L'_{1}(\bar{x}_{n},\bar{y}_{n})\right ),\,\,\bar{y}_{n+1} = P_{Y }\left (y_{n+1} +\lambda L'_{2}(\bar{x}_{n},\bar{y}_{n})\right ), \end{array} \right.}$$

where P_X and P_Y are the metric projections onto X and Y, respectively, and L'_1 and L'_2 are the partial derivatives of L with respect to its first and second arguments. Under suitable assumptions, Popov proved the convergence of this method.
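As an illustration, the scheme above can be sketched numerically for the bilinear toy problem L(x, y) = ⟨x, y⟩ on X = Y = [−1, 1], whose unique saddle point is (0, 0). The following Python sketch is only illustrative: the function names, the boxes, and the step size λ = 0.1 are our own choices, not part of [30].

```python
import numpy as np

def proj_box(v, lo, hi):
    """Metric projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

def popov_saddle(grad_x, grad_y, x, y, lam=0.1, iters=2000):
    """Popov's two-step scheme for a saddle point of a convex-concave L.

    grad_x, grad_y play the roles of the partial derivatives L'_1, L'_2.
    The leading points (xb, yb) carry the gradient information from the
    previous iteration; each iteration evaluates the gradients only once.
    """
    xb, yb = x.copy(), y.copy()
    for _ in range(iters):
        gx, gy = grad_x(xb, yb), grad_y(xb, yb)
        x = proj_box(x - lam * gx, -1.0, 1.0)   # x_{n+1}
        y = proj_box(y + lam * gy, -1.0, 1.0)   # y_{n+1}
        xb = proj_box(x - lam * gx, -1.0, 1.0)  # leading point, same gradient
        yb = proj_box(y + lam * gy, -1.0, 1.0)
    return x, y

# Bilinear toy problem L(x, y) = <x, y>: L'_1 = y, L'_2 = x,
# and the unique saddle point on [-1, 1] x [-1, 1] is (0, 0).
x, y = popov_saddle(lambda x, y: y, lambda x, y: x,
                    np.array([0.9]), np.array([-0.7]))
```

Each iteration reuses a single pair of partial derivatives, evaluated at the leading points, for both updates; this is the characteristic feature of the scheme.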

In this chapter, motivated and inspired by the results of [30, 31], we propose a new two-step proximal algorithm for solving equilibrium problems. This algorithm is an extension of Popov's method [30].

The set of solutions of the equilibrium problem (1) is denoted by EP(F, C). Throughout, we assume that the solution set EP(F, C) is nonempty.

Here, for solving equilibrium problem (1), we assume that the bifunction F satisfies the following conditions:

(A1):

F(x, x) = 0 for all x ∈ C;

(A2):

for all x, y ∈ C, F(x, y) ≥ 0 implies F(y, x) ≤ 0 (pseudomonotonicity);

(A3):

for all x ∈ C the function F(x, ⋅ ) is convex and lower semicontinuous on C;

(A4):

for all y ∈ C the function F(⋅ , y) is weakly upper semicontinuous on C;

(A5):

for all x, y, z ∈ C the following inequality holds

$$\displaystyle{F(x,y) \leq F(x,z) + F(z,y) + a\left \|x - z\right \|^{2} + b\left \|z - y\right \|^{2},}$$

where a, b are positive constants (Lipschitz-type continuity);

(A6):

for all bounded sequences (x_n), (y_n) from C we have

$$\displaystyle{\left \|x_{n} - y_{n}\right \| \rightarrow 0\,\, \Rightarrow \,\,\, F(x_{n},y_{n}) \rightarrow 0.}$$

It is easy to show that under the assumptions (A1)–(A4), we have

$$\displaystyle{x \in EP(F,C)\quad \Leftrightarrow \quad x \in C:\ F(y,x) \leq 0\,\,\,\,\forall \,y \in C.}$$

In particular, the set EP(F, C) is convex and closed (see, for instance [31]).

The hypothesis (A5) was introduced by Mastroeni [25]. It is needed to prove the convergence of the auxiliary principle method for equilibrium problems. For example, the bifunction F(x, y) = (Ax, y − x) with a k-Lipschitz operator A: C → H satisfies (A5). Indeed,

$$\displaystyle\begin{array}{rcl} F(x,y) - F(x,z) - F(z,y) = (Ax,y - x) - (Ax,z - x) - (Az,y - z) = & & {}\\ = (Ax - Az,y - z) \leq \left \|Ax - Az\right \|\left \|y - z\right \| \leq k\left \|x - z\right \|\left \|y - z\right \| \leq & & {}\\ \leq \frac{k} {2}\left \|x - z\right \|^{2} + \frac{k} {2}\left \|y - z\right \|^{2}.& & {}\\ \end{array}$$

This implies that F satisfies the condition (A5) with a = b = k∕2.
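This computation can be checked empirically. The following sketch (our own illustration, with a random linear operator whose Lipschitz constant is its spectral norm) tests inequality (A5) with a = b = k∕2 on random triples in \(\mathbb{R}^5\):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
k = np.linalg.norm(M, 2)            # spectral norm = Lipschitz constant of A
A = lambda v: M @ v                  # a k-Lipschitz (linear) operator on R^5
F = lambda x, y: A(x) @ (y - x)      # the bifunction F(x, y) = (Ax, y - x)

# Empirically check the Lipschitz-type inequality (A5) with a = b = k/2.
violations = 0
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 5))
    bound = (F(x, z) + F(z, y)
             + 0.5 * k * (np.linalg.norm(x - z) ** 2
                          + np.linalg.norm(z - y) ** 2))
    if F(x, y) > bound + 1e-9:
        violations += 1
```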

The condition (A6) is also satisfied by the bifunction F(x, y) = (Ax, y − x) with a Lipschitz operator A: C → H.

2 The Algorithm

Let \(g: H \rightarrow \mathbb{R} \cup \{ +\infty \}\) be a convex, lower semicontinuous, proper function. The proximity operator of g is the operator prox_g : H → dom g ⊆ H (dom g denotes the effective domain of g) which maps every x ∈ H to the unique minimizer of the function g + ∥ ⋅ − x∥²∕2, i.e.,

$$\displaystyle{\forall x \in H\quad \mathrm{prox}_{g}x = \mathrm{argmin}_{y\in \mathrm{dom}\,g}\left \{g(y) + \frac{1} {2}\|y - x\|^{2}\right \}.}$$

We have

$$\displaystyle{z = \mathrm{prox}_{g}x\quad \Leftrightarrow \quad g(y) - g(z) + (z - x,y - z) \geq 0\quad \forall y \in \mathrm{dom}\,g.}$$

Proximity operators have attractive properties that make them particularly well suited for iterative minimization algorithms. For instance, prox_g is firmly nonexpansive and its fixed point set is precisely the set of minimizers of g. For a detailed account of the theory of proximity operators, see [11].
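As a concrete sanity check (in \(H = \mathbb{R}^6\), using the well-known closed form of the prox of the ℓ₁-norm; the names are illustrative), the variational characterization above can be tested numerically:

```python
import numpy as np

def prox_l1(x, t):
    """Closed-form prox of g = t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal(6)
z = prox_l1(x, 1.0)                  # z = prox_g(x) for g = ||.||_1
g = lambda v: np.sum(np.abs(v))

# Variational characterization: g(y) - g(z) + <z - x, y - z> >= 0
# must hold for every y in dom g = H; check on random points y.
worst = min(g(y) - g(z) + (z - x) @ (y - z)
            for y in rng.standard_normal((1000, 6)))
```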

We now extend the Popov method [30] to the equilibrium problem (1). In Algorithm 1, described below, the parameter λ must satisfy a certain condition in order to guarantee convergence (see Theorem 1).

Algorithm 1.

For x_1, y_1 ∈ C, generate the sequences (x_n), (y_n) ⊂ C with the iterative scheme

$$\displaystyle{\left \{\begin{array}{l} x_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n} = \mathrm{argmin}_{y\in C}\left \{\lambda F(y_{n},y) + \frac{1} {2}\|y - x_{n}\|^{2}\right \}, \\ y_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n+1} = \mathrm{argmin}_{y\in C}\left \{\lambda F(y_{n},y) + \frac{1} {2}\|y - x_{n+1}\|^{2}\right \}, \end{array} \right.}$$

where λ > 0.

The extragradient method for the equilibrium problem (1) has the form

$$\displaystyle{\left \{\begin{array}{l} y_{n} = \mathrm{prox}_{\lambda F(x_{n},\cdot )}x_{n}, \\ x_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n},\end{array} \right.}$$

where λ > 0 [31]. A distinctive and attractive feature of Algorithm 1 is that each iterative step uses only one function, F(y_n, ⋅ ).

Remark 1.

If F(x, y) = (Ax, y − x), then Algorithm 1 takes the form:

$$\displaystyle{\left \{\begin{array}{lr} x_{1} \in C,\,\,y_{1} \in C, \\ x_{n+1} = P_{C}(x_{n} -\lambda Ay_{n}), \\ y_{n+1} = P_{C}(x_{n+1} -\lambda Ay_{n}), \end{array} \right.}$$

where P_C is the operator of metric projection onto the set C.
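A minimal numerical sketch of this projection form, for the toy monotone operator Ax = Mx with a skew-symmetric matrix M (so A is monotone and 1-Lipschitz) on C = [−1, 1]², is given below. This instance is our own illustration; the unique solution of the corresponding variational inequality is x* = 0, and λ = 0.2 respects the bound λ < 1∕(3k) implied by Theorem 1 with a = b = k∕2.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: A is monotone
A = lambda v: M @ v                        # 1-Lipschitz operator (k = 1)
proj_C = lambda v: np.clip(v, -1.0, 1.0)   # metric projection onto [-1, 1]^2

lam = 0.2                                  # < 1/(3k) since a = b = k/2
x = np.array([0.8, -0.5])
y = np.array([0.3, 0.9])
for _ in range(500):
    Ay = A(y)                              # A is evaluated once per step
    x_new = proj_C(x - lam * Ay)           # x_{n+1} = P_C(x_n - lam*A y_n)
    y = proj_C(x_new - lam * Ay)           # y_{n+1} = P_C(x_{n+1} - lam*A y_n)
    x = x_new
# x and y approach the unique solution x* = 0 of the variational inequality.
```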

A particular case of the scheme in Remark 1 was proposed by Popov [30] for finding saddle points of convex-concave functions defined on finite-dimensional Euclidean spaces. In recent works, Malitsky and Semenov [23, 24] proved the convergence of this algorithm for variational inequalities with monotone Lipschitz operators in infinite-dimensional Hilbert spaces and proposed some modifications of it.

To justify the iterative Algorithm 1, we first note that if for some \(n \in \mathbb{N}\) the following equalities are satisfied

$$\displaystyle{ x_{n+1} = x_{n} = y_{n} }$$
(2)

then y_n ∈ EP(F, C) and the following stationarity condition holds

$$\displaystyle{y_{k} = x_{k} = y_{n}\quad \forall \,k \geq n.}$$

Indeed, the equality

$$\displaystyle{x_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n}}$$

means that

$$\displaystyle{F(y_{n},y) - F(y_{n},x_{n+1}) + \frac{(x_{n+1} - x_{n},y - x_{n+1})} {\lambda } \geq 0\quad \forall y \in C.}$$

From (2) it follows that

$$\displaystyle{F(y_{n},y) \geq 0\,\,\,\forall y \in C,}$$

i.e., y_n ∈ EP(F, C).

Taking this into account, a practical variant of Algorithm 1 can be written as follows.

Algorithm 2.

Choose x_1 ∈ C, y_1 ∈ C, λ > 0, and ε > 0.

Step 1.:

For x_n and y_n compute

$$\displaystyle{x_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n}.}$$
Step 2.:

If \(\mathop{\mathrm{max}}\limits \left \{\|x_{n+1} - x_{n}\|,\|y_{n} - x_{n}\|\right \} \leq \varepsilon\), then STOP, else compute

$$\displaystyle{y_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}x_{n+1}.}$$
Step 3.:

Set n: = n + 1 and go to Step 1.
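As an illustration of Algorithm 2, consider the toy instance F(x, y) = g(y) − g(x) with a convex function g, so that EP(F, C) is the set of minimizers of g over C, and \(\mathrm{prox}_{\lambda F(y_{n},\cdot )}\) reduces to the ordinary prox of λg, independent of y_n. The following sketch (with the illustrative choice C = \(\mathbb{R}^2\) and g(v) = ∥v − p∥₁, whose prox is shifted soft-thresholding) implements Steps 1–3 with the stopping test:

```python
import numpy as np

# Toy instance: minimizing a convex g over C = R^2, recast as an equilibrium
# problem via F(x, y) = g(y) - g(x); then prox_{lam F(y_n, .)} = prox_{lam g}.
p = np.array([2.0, -1.0])
g = lambda v: np.sum(np.abs(v - p))                  # minimized exactly at p
prox_g = lambda v, t: p + np.sign(v - p) * np.maximum(np.abs(v - p) - t, 0.0)

lam, eps = 0.5, 1e-6
x = np.array([5.0, 4.0])
y = np.array([0.0, 0.0])
for n in range(10_000):
    x_new = prox_g(x, lam)                           # Step 1
    if max(np.linalg.norm(x_new - x),
           np.linalg.norm(y - x)) <= eps:            # Step 2: STOP test
        break
    y = prox_g(x_new, lam)                           # Step 2: update y
    x = x_new                                        # Step 3
```

Since F(y_n, ⋅ ) here does not depend on y_n, both prox steps coincide and the scheme reduces to the proximal point method; the example is meant only to show the stopping rule at work.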

Further, we assume that condition (2) does not hold for any \(n \in \mathbb{N}\). In the following section, the weak convergence of the sequences (x_n), (y_n) generated by Algorithm 1 is proved.

3 Convergence Results

To prove the convergence we need the following facts.

Lemma 1.

Let (a_n), (b_n) be non-negative sequences such that

$$\displaystyle{a_{n+1} \leq a_{n} - b_{n}.}$$

Then the limit \(\lim _{n\rightarrow \infty }a_{n} \in \mathbb{R}\) exists and \(\sum _{n=1}^{\infty }b_{n} < +\infty \).

Lemma 2 (Opial [29]).

Let the sequence (x_n) of elements of a Hilbert space H converge weakly to x ∈ H. Then for all y ∈ H∖{x} we have

$$\displaystyle{\liminf _{n\rightarrow \infty }\|x_{n} - x\| <\liminf _{n\rightarrow \infty }\|x_{n} - y\|.}$$

We start the analysis of the convergence with the proof of an important inequality for the sequences (x_n) and (y_n) generated by Algorithm 1.

Lemma 3.

Let the sequences (x_n), (y_n) be generated by Algorithm 1, and let z ∈ EP(F,C). Then we have

$$\displaystyle\begin{array}{rcl} \left \|x_{n+1} - z\right \|^{2} \leq \left \|x_{ n} - z\right \|^{2} -\left (1 - 2\lambda b\right )\left \|x_{ n+1} - y_{n}\right \|^{2} -& & \\ -\left (1 - 4\lambda a\right )\left \|y_{n} - x_{n}\right \|^{2} + 4\lambda a\left \|x_{ n} - y_{n-1}\right \|^{2}.& &{}\end{array}$$
(3)

Proof.

We have

$$\displaystyle\begin{array}{rcl} \left \|x_{n+1} - z\right \|^{2} = \left \|x_{ n} - z\right \|^{2} -\left \|x_{ n} - x_{n+1}\right \|^{2} + 2\left (x_{ n+1} - x_{n},x_{n+1} - z\right ) = & & \\ = \left \|x_{n} - z\right \|^{2} -\left \|x_{ n} - y_{n}\right \|^{2} -\left \|y_{ n} - x_{n+1}\right \|^{2} -& & \\ -2\left (x_{n} - y_{n},y_{n} - x_{n+1}\right ) + 2\left (x_{n+1} - x_{n},x_{n+1} - z\right ).& &{}\end{array}$$
(4)

From the definition of the points x_{n+1} and y_n it follows that

$$\displaystyle{ \lambda F(y_{n},z) -\lambda F(y_{n},x_{n+1}) \geq (x_{n+1} - x_{n},x_{n+1} - z), }$$
(5)
$$\displaystyle{ \lambda F(y_{n-1},x_{n+1}) -\lambda F(y_{n-1},y_{n}) \geq -(x_{n} - y_{n},y_{n} - x_{n+1}). }$$
(6)

Using inequalities (5) and (6) to estimate the inner products in (4), we get

$$\displaystyle\begin{array}{rcl} \left \|x_{n+1} - z\right \|^{2} \leq \left \|x_{ n} - z\right \|^{2} -\left \|x_{ n} - y_{n}\right \|^{2} -\left \|y_{ n} - x_{n+1}\right \|^{2} + & & \\ +2\lambda \left \{F(y_{n},z) - F(y_{n},x_{n+1}) + F(y_{n-1},x_{n+1}) - F(y_{n-1},y_{n})\right \}.& &{}\end{array}$$
(7)

From the pseudomonotonicity of the bifunction F and z ∈ EP(F, C) it follows that

$$\displaystyle{F(y_{n},z) \leq 0,}$$

and the Lipschitz-type continuity of F guarantees the inequality

$$\displaystyle\begin{array}{rcl} -F(y_{n},x_{n+1}) + F(y_{n-1},x_{n+1}) - F(y_{n-1},y_{n}) \leq & & {}\\ \leq a\left \|y_{n-1} - y_{n}\right \|^{2} + b\left \|y_{ n} - x_{n+1}\right \|^{2}.& & {}\\ \end{array}$$

Using the above estimates in (7), we get

$$\displaystyle\begin{array}{rcl} \left \|x_{n+1} - z\right \|^{2} \leq \left \|x_{ n} - z\right \|^{2} -\left \|x_{ n} - y_{n}\right \|^{2} -\left \|y_{ n} - x_{n+1}\right \|^{2} + & & \\ +2\lambda a\left \|y_{n-1} - y_{n}\right \|^{2} + 2\lambda b\left \|y_{ n} - x_{n+1}\right \|^{2}.& &{}\end{array}$$
(8)

We estimate the term \(\left \|y_{n-1} - y_{n}\right \|^{2}\) as follows:

$$\displaystyle{\left \|y_{n-1} - y_{n}\right \|^{2} \leq 2\left \|y_{ n-1} - x_{n}\right \|^{2} + 2\left \|y_{ n} - x_{n}\right \|^{2}.}$$

Substituting this estimate into (8), we get the inequality

$$\displaystyle\begin{array}{rcl} \left \|x_{n+1} - z\right \|^{2} \leq \left \|x_{ n} - z\right \|^{2} -\left \|x_{ n} - y_{n}\right \|^{2} -\left \|y_{ n} - x_{n+1}\right \|^{2} + & & {}\\ +4\lambda a\left \|y_{n-1} - x_{n}\right \|^{2} + 4\lambda a\left \|y_{ n} - x_{n}\right \|^{2} + 2\lambda b\left \|y_{ n} - x_{n+1}\right \|^{2},& & {}\\ \end{array}$$

i.e., inequality (3). □ 

We now proceed directly to the proof of convergence of the algorithm. Let z ∈ EP(F, C) and set

$$\displaystyle{\begin{array}{l} a_{n} = \left \|x_{n} - z\right \|^{2} + 4\lambda a\left \|y_{n-1} - x_{n}\right \|^{2}, \\ b_{n} = \left (1 - 4\lambda a\right )\left \|y_{n} - x_{n}\right \|^{2} + \left (1 - 4\lambda a - 2\lambda b\right )\left \|y_{n} - x_{n+1}\right \|^{2}.\end{array} }$$

Then inequality (3) takes the form

$$\displaystyle{a_{n+1} \leq a_{n} - b_{n}.}$$

We require the following condition on λ:

$$\displaystyle{0 <\lambda < \frac{1} {2(2a + b)}.}$$

Then Lemma 1 implies the existence of the limit

$$\displaystyle{\lim _{n\rightarrow \infty }\left (\left \|x_{n} - z\right \|^{2} + 4\lambda a\left \|y_{ n-1} - x_{n}\right \|^{2}\right )}$$

and

$$\displaystyle{\sum _{n=1}^{\infty }\left (\left (1 - 4\lambda a\right )\left \|y_{ n} - x_{n}\right \|^{2} + \left (1 - 4\lambda a - 2\lambda b\right )\left \|y_{ n} - x_{n+1}\right \|^{2}\right ) < +\infty.}$$

Whence we obtain

$$\displaystyle{ \lim _{n\rightarrow \infty }\left \|y_{n} - x_{n}\right \| =\lim _{n\rightarrow \infty }\left \|y_{n} - x_{n+1}\right \| =\lim _{n\rightarrow \infty }\left \|x_{n} - x_{n+1}\right \| = 0 }$$
(9)

and the convergence of the sequence \(\left (\left \|x_{n} - z\right \|\right )\) for all z ∈ EP(F, C). In particular, the sequences (x_n), (y_n) are bounded.

Now consider a subsequence \((x_{n_{k}})\) converging weakly to a point \(\bar{z} \in C\). From (9) it follows that \(y_{n_{k}} \rightharpoonup \bar{ z}\). We show that \(\bar{z} \in EP(F,C)\). We have

$$\displaystyle{ F(y_{n},y) \geq F(y_{n},x_{n+1}) + \frac{(x_{n+1} - x_{n},x_{n+1} - y)} {\lambda } \quad \forall y \in C. }$$
(10)

Passing to the limit in (10) and taking into account (9) and conditions (A4), (A6), we get

$$\displaystyle\begin{array}{rcl} F(\bar{z},y) \geq \limsup _{k\rightarrow \infty }F(y_{n_{k}},y) \geq \lim _{k\rightarrow \infty }\left \{F(y_{n_{k}},x_{n_{k}+1})+\right.& & {}\\ +\left.\frac{(x_{n_{k}+1} - x_{n_{k}},x_{n_{k}+1} - y)} {\lambda } \right \} = 0\quad \forall y \in C,& & {}\\ \end{array}$$

i.e. \(\bar{z} \in EP(F,C)\).

Now we show that \(x_{n} \rightharpoonup \bar{ z}\) (then from (9) it follows that \(y_{n} \rightharpoonup \bar{ z}\)). Assume the contrary: let there exist a subsequence \((x_{m_{k}})\) such that \(x_{m_{k}} \rightharpoonup \tilde{ z}\) with \(\tilde{z}\neq \bar{z}\). It is clear that \(\tilde{z} \in EP(F,C)\). Applying Lemma 2 twice, we have

$$\displaystyle\begin{array}{rcl} \lim _{n\rightarrow \infty }\|x_{n} -\bar{ z}\| =\lim _{k\rightarrow \infty }\|x_{n_{k}} -\bar{ z}\| <\lim _{k\rightarrow \infty }\|x_{n_{k}} -\tilde{ z}\| =\lim _{n\rightarrow \infty }\|x_{n} -\tilde{ z}\| = & & {}\\ =\lim _{k\rightarrow \infty }\|x_{m_{k}} -\tilde{ z}\| <\lim _{k\rightarrow \infty }\|x_{m_{k}} -\bar{ z}\| =\lim _{n\rightarrow \infty }\|x_{n} -\bar{ z}\|,& & {}\\ \end{array}$$

which is impossible. So the sequence (x_n) converges weakly to \(\bar{z} \in EP(F,C)\).

Thus, we obtain the following result.

Theorem 1.

Let H be a Hilbert space, let C ⊆ H be a nonempty closed convex set, let the bifunction \(F: C \times C \rightarrow \mathbb{R}\) satisfy conditions (A1)–(A6), and let EP(F,C) ≠ ∅. Assume that \(\lambda \in \left (0, \frac{1} {2(2a+b)}\right )\) . Then the sequences (x_n), (y_n) generated by Algorithm 1 converge weakly to a solution \(\bar{z} \in EP(F,C)\) of the equilibrium problem (1), and lim_{n→∞} ∥x_n − y_n∥ = 0.
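As a worked specialization of the step size bound: for the variational inequality case F(x, y) = (Ax, y − x) with a k-Lipschitz operator A, the computation following (A5) gives a = b = k∕2, so the admissible range of Theorem 1 becomes

$$\displaystyle{0 <\lambda < \frac{1} {2(2a + b)} = \frac{1} {2\left (k + \frac{k} {2}\right )} = \frac{1} {3k}.}$$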

Remark 2.

The asymptotics lim_{n→∞} ∥x_n − y_n∥ = 0 can be sharpened as follows:

$$\displaystyle{ \liminf _{n\rightarrow \infty }\sqrt{n}\|x_{n} - y_{n}\| = 0. }$$
(11)

Indeed, if (11) does not hold, then ∥x_n − y_n∥ ≥ μ n^{−1∕2} for some μ > 0 and all sufficiently large n. Hence the series \(\sum _{n=1}^{\infty }\|x_{n} - y_{n}\|^{2}\) diverges, which contradicts the convergence of this series established in the proof of Theorem 1.

4 Conclusion and Future Work

In this work we have proposed a new iterative two-step proximal algorithm for solving the equilibrium programming problem in a Hilbert space. The method extends Popov's modification [30] of the Arrow–Hurwicz scheme for finding saddle points of convex-concave functions. The convergence of the algorithm is proved under the assumption that a solution exists and that the bifunction is pseudomonotone and Lipschitz-type continuous.

In a forthcoming work, we will consider the following regularized variant of the algorithm, which converges strongly:

$$\displaystyle{\left \{\begin{array}{l} x_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}\left (1 -\alpha _{n}\right )x_{n}, \\ y_{n+1} = \mathrm{prox}_{\lambda F(y_{n},\cdot )}\left (1 -\alpha _{n+1}\right )x_{n+1},\end{array} \right.}$$

where λ > 0 and (α_n) is a sequence of positive numbers tending to zero. We also plan to study a variant of the method that uses the Bregman distance instead of the Euclidean one.

An interesting question is the justification of Algorithm 1 as a building block of an iterative method for equilibrium problems with a priori information, given in the form of inclusion in the fixed point set of a quasi-nonexpansive operator.

Another promising direction is the development of variants of Algorithm 1 for solving stochastic equilibrium problems.