Abstract
The paper proposes two new iterative methods for solving pseudomonotone equilibrium problems involving fixed point problems for quasi-\(\phi \)-nonexpansive mappings in Banach spaces. The proposed algorithms combine either the extended extragradient method or the linesearch method with the Halpern iteration. Strong convergence theorems are established under standard assumptions on the equilibrium bifunctions and mappings. The results generalize and enrich existing algorithms for equilibrium problems in Banach spaces.
1 Introduction
Let C be a nonempty closed convex subset of a real Banach space E. Let \(f:C\times C\rightarrow \mathfrak {R}\) be a bifunction with \(f(x,x)=0\) for all \(x\in C\). The equilibrium problem (shortly, EP) for f on C is stated as follows:
$$\begin{aligned} \text{ Find } x^*\in C \text{ such that } f(x^*,y)\ge 0 \text{ for all } y\in C. \end{aligned}$$(1)
Let us denote by EP(f, C) the solution set of problem EP. Mathematically, problem EP is a generalization of many mathematical models including variational inequality problems, fixed point problems, optimization problems and Nash equilibrium problems [5, 34]. In recent years, problem EP has received a lot of attention by many authors and some notable methods have been proposed for solving problem EP such as: gap function methods [31], auxiliary problem principle methods [30] and proximal point methods [33, 35].
One of the most popular methods for solving problem EP is the proximal-like method [15]. This method is based on the auxiliary problem principle introduced by Cohen [13] and extended by Mastroeni to problem EP in a real Banach space [30]. Recently, the convergence of the proximal point method has been further investigated and extended in [37] under the assumptions that the function f is pseudomonotone and satisfies a Lipschitz-type condition on C. Let us recall here that f is pseudomonotone on C if
$$\begin{aligned} f(x,y)\ge 0 \Longrightarrow f(y,x)\le 0,\quad \forall x,y\in C, \end{aligned}$$
and that f satisfies the Lipschitz-type condition on C if there exist two constants \(c_1,c_2>0\) such that, for all \(x,y,z\in C\),
$$\begin{aligned} f(x,y)+f(y,z)\ge f(x,z)-c_1\left\| x-y\right\| ^2-c_2\left\| y-z\right\| ^2. \end{aligned}$$
The methods studied in [37] are also called the extragradient methods due to the results obtained by Korpelevich in [27]. In Euclidean spaces, the extragradient method [15, 37] generates two iterative sequences \(\left\{ x_n\right\} \) and \(\left\{ y_n\right\} \) defined for each integer n by
$$\begin{aligned} \left\{ \begin{array}{ll} y_n=\mathrm{argmin}\left\{ \rho f(x_n,y)+G(x_n,y):y\in C\right\} ,\\ x_{n+1}=\mathrm{argmin}\left\{ \rho f(y_n,y)+G(x_n,y):y\in C\right\} , \end{array}\right. \end{aligned}$$
where \(x_0\in C\), \(\rho \) is a suitable parameter and G(x, y) is the Bregman distance function defined on \(C \times C\). In recent years, the extragradient method has been widely and intensively investigated in Hilbert spaces by several authors, see for instance [3, 10, 18,19,20,21,22,23, 36, 43, 44, 48, 49]. In Banach spaces, approximations of solutions to problem EP are frequently based on the resolvent \(J_{rf}\) of the monotone bifunction f. More precisely, given \(x_0 \in C\), a sequence \(\{x_n\}\) is generated by the iteration \(x_{n+1}=J_{rf}(x_n)\) where \(J_{rf}(x)\), with \(x\in E\) and \(r>0\), is the unique solution of the following regularized equilibrium problem (REP):
$$\begin{aligned} f(J_{rf}(x),y)+\frac{1}{r}\left\langle y-J_{rf}(x),JJ_{rf}(x)-Jx\right\rangle \ge 0,\quad \forall y\in C. \end{aligned}$$(2)
Here, \(J:E\rightarrow 2^{E^*}\) is the normalized duality mapping on E. A disadvantage of using the resolvent \(J_{rf}\) is that the non-linear inequality (2) seems to be more difficult to solve numerically than the extragradient iteration. Another advantage of the extragradient method is that it can also be used for the more general class of pseudomonotone bifunctions.
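In a Euclidean setting the two argmin steps of the extragradient scheme become explicit. The following sketch is our own illustration, not taken from the paper: for the variational-inequality special case \(f(x,y)=\langle F(x),y-x\rangle \) with \(G(x,y)=\frac{1}{2}||x-y||^2\), each subproblem reduces to a projection, and we pick an illustrative skew-symmetric (hence monotone, thus pseudomonotone) operator F and a box constraint C.

```python
import numpy as np

# Extragradient method for the VI special case f(x, y) = <F(x), y - x>
# with the Euclidean Bregman distance G(x, y) = 0.5 * ||x - y||^2.
# Then argmin_y { rho*f(x, y) + G(x, y) : y in C } = P_C(x - rho * F(x)).
# Illustrative data (our choice): F(x) = A x with A skew-symmetric, C = [-1, 1]^2,
# so the unique solution of the problem is x* = 0.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)     # projection onto the box C

def extragradient(x0, rho=0.5, iters=200):
    x = x0.astype(float)
    for _ in range(iters):
        y = proj_C(x - rho * F(x))           # y_n: prediction step
        x = proj_C(x - rho * F(y))           # x_{n+1}: correction step at y_n
    return x

x_star = extragradient(np.array([1.0, -0.8]))
print(x_star)   # approaches the unique solution x* = 0
```

The stepsize obeys the classical requirement \(\rho <1/L\), where L is the Lipschitz constant of F (here \(L=||A||=1\)).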
Let \(S:C\rightarrow C\) be a mapping whose fixed point set is denoted by Fix(S), i.e., Fix\((S)=\left\{ x\in C: S(x)=x\right\} \). The problem of finding a common solution of an equilibrium problem and a fixed point problem is a task arising in various fields of applicable mathematics, sciences, engineering and economy, see for example [14, 26]. This happens, in particular, in practical models when the constraint set of the problem is expressed as the fixed point set of a mapping, see for instance [24, 28, 29]. Several methods for solving this problem in Banach spaces can be found, for instance, in [9, 38, 41, 45, 46, 50, 53]. Most of them use a hybrid method for solving simultaneously a regularized equilibrium problem and a fixed point problem. The aim of these methods is to construct closed convex subsets of the feasible set and to compute the sequence of iterates by projecting the starting point \(x_0\) onto intersections of these subsets. For example, in uniformly smooth and uniformly convex Banach spaces, Takahashi and Zembayashi [45] introduced the following iterative scheme which is called the shrinking projection method:
$$\begin{aligned} \left\{ \begin{array}{ll} x_0\in C,\quad C_0=C,\\ y_n=J^{-1}(\alpha _nJx_n+(1-\alpha _n)JSx_n),\\ u_n\in C \text{ such } \text{ that } f(u_n,y)+\frac{1}{r_n}\left\langle y-u_n,Ju_n-Jy_n\right\rangle \ge 0,\quad \forall y\in C,\\ C_{n+1}=\left\{ z\in C_n:\phi (z,u_n)\le \phi (z,x_n)\right\} ,\\ x_{n+1}=\varPi _{C_{n+1}}(x_0), \end{array}\right. \end{aligned}$$(3)
where S is a relatively nonexpansive mapping, f is monotone, \(\phi \) is the Lyapunov functional and \(\varPi _{C_{n+1}}\) denotes the generalized projection onto the set \(C_{n+1}\). Under suitable assumptions on the sequences \(\left\{ \alpha _n\right\} \) and \(\left\{ r_n\right\} \), the authors proved that the sequence \(\left\{ x_n\right\} \) generated by (3) converges strongly to the projection of \(x_0\) onto the set \({{\text {EP}}(f,C)\cap {\text {Fix}}(S)}\). On the other hand, a class of more general mappings than the class of relatively nonexpansive mappings is the class of quasi-\(\phi \)-nonexpansive (or relatively quasi-nonexpansive) mappings. This class has been widely studied in recent years, see, e.g., [28, 29, 38, 53]. Hence, we have naturally the following question:
Question: Is it possible to construct strongly convergent algorithms in Banach spaces for solving pseudomonotone equilibrium problems and fixed point problems for quasi-\(\phi \)-nonexpansive mappings without using the resolvent of the bifunction and the hybrid (outer approximation) or shrinking projection methods?
To answer this question, motivated by the results obtained in [37, 40, 49] and the Halpern iteration in [17], we propose two new iterative methods for finding a common solution of an equilibrium problem for pseudomonotone bifunctions and a fixed point problem for quasi-\(\phi \)-nonexpansive mappings in Banach spaces. The first algorithm combines the extended extragradient method with the Halpern iteration. The strong convergence of the iterates is proved under the \(\phi \)-Lipschitz-type assumption on the equilibrium bifunction. To avoid this rather strong assumption, we use a linesearch technique to design the second algorithm, which enjoys the same strong convergence properties as the first one.
This paper is organized as follows: In Sect. 2, we recall some definitions and preliminary results used in the next sections. Section 3 presents the Halpern extragradient method and proves its convergence. In Sect. 4, we establish the convergence of the Halpern linesearch algorithm. Finally, in Sect. 5, we study and develop several numerical experiments to illustrate the convergence of the proposed algorithms following the choice of stepsizes and parameters.
2 Preliminaries
In this section, we recall some definitions and results which will be used later. We begin with several concepts and properties of a Banach space, see [2, 16] for more details.
Definition 2.1
A Banach space E is called
-
(i)
strictly convex if the inequality \(||x+y|| <2\) holds for all \(x, y \in S_1(0)=\{x \in E: ||x||= 1\}\) with \(x \ne y ;\)
-
(ii)
uniformly convex if for any given \(\epsilon >0\) there exists \(\delta = \delta (\epsilon ) >0\) such that for all \(x,y \in E\) with \(\left\| x\right\| \le 1, \left\| y\right\| \le 1,\left\| x-y\right\| = \epsilon \) the inequality \(\left\| x+y\right\| \le 2(1-\delta )\) holds;
-
(iii)
smooth if the limit
$$\begin{aligned} \lim \limits _{t \rightarrow 0} \frac{\left\| x+ty\right\| -\left\| x\right\| }{t} \end{aligned}$$(4)exists for all \(x,y \in S_1(0)\);
-
(iv)
uniformly smooth if the limit (4) exists uniformly for all \(x,y\in S_1(0)\).
The modulus of convexity of E is the function \(\delta _E:[0,2]\rightarrow [0,1]\) defined by
$$\begin{aligned} \delta _E(\epsilon )=\inf \left\{ 1-\left\| \frac{x+y}{2}\right\| :\left\| x\right\| \le 1,~\left\| y\right\| \le 1,~\left\| x-y\right\| \ge \epsilon \right\} \end{aligned}$$
for all \(\epsilon \in [0,2]\). Note that E is uniformly convex if and only if \(\delta _{E}(\epsilon )>0\) for all \(0<\epsilon \le 2\) and \(\delta _{E}(0)=0\).
Let \(p>1\). A uniformly convex Banach space E is said to be p-uniformly convex if there exists some constant \(c>0\) such that \(\delta _E(\epsilon )\ge c\epsilon ^p\) for all \(\epsilon \in [0,2]\).
It is well known that the spaces \(L^p\), \(l^p\) and \(W_m^p\) are p-uniformly convex when \(p>2\) and 2-uniformly convex when \(1< p \le 2\). Furthermore, any Hilbert space H is uniformly smooth and 2-uniformly convex.
Let E be a real Banach space with its dual \(E^*\). The duality pairing of \(f\in E^*\) and \(x\in E\) is denoted by \(\left\langle x, f\right\rangle \) or \(\left\langle f, x\right\rangle \). For the sake of simplicity, the norms of E and \(E^*\) are denoted by the same symbol \(\Vert \cdot \Vert \). The normalized duality mapping \(J:E\rightarrow 2^{E^*}\) is defined by
$$\begin{aligned} J(x)=\left\{ f\in E^*:\left\langle x,f\right\rangle =\left\| x\right\| ^2=\left\| f\right\| ^2\right\} . \end{aligned}$$
We have the following properties, see for instance [12, 39]:
-
(i)
If E is a smooth, strictly convex, and reflexive Banach space, then the normalized duality mapping \(J:E\rightarrow 2^{E^*}\) is single-valued, one-to-one, and onto;
-
(ii)
If E is a reflexive and strictly convex Banach space, then \(J^{-1}\) is norm to weak \(^*\) continuous;
-
(iii)
If E is a uniformly smooth Banach space, then J is uniformly continuous on each bounded subset of E;
-
(iv)
A Banach space E is uniformly smooth if and only if \(E^*\) is uniformly convex.
Next, we assume that E is a smooth, strictly convex, and reflexive Banach space. In the sequel, we always use \(\phi : E\times E \rightarrow [0, \infty )\) to denote the Lyapunov functional defined by
$$\begin{aligned} \phi (x,y)=\left\| x\right\| ^2-2\left\langle x,Jy\right\rangle +\left\| y\right\| ^2,\quad \forall x,y\in E. \end{aligned}$$
From the definition of \(\phi \), it is easy to show that \(\phi \) has the following properties:
-
(i)
For all \(x,y\in E\),
$$\begin{aligned} \left( \left\| x\right\| -\left\| y\right\| \right) ^2\le \phi (x,y)\le \left( \left\| x\right\| +\left\| y\right\| \right) ^2. \end{aligned}$$(5) -
(ii)
For all \(x,y,z\in E\),
$$\begin{aligned} \phi (x,y)=\phi (x,z)+\phi (z,y)+2\left\langle z-x,Jy-Jz\right\rangle . \end{aligned}$$(6)Particularly, when \(x=y\), we have \(\left\langle y-z,Jy-Jz\right\rangle =\frac{1}{2}\phi (y,z)+\frac{1}{2}\phi (z,y)\).
-
(iii)
For all \(x,y,z\in E\) and \(\lambda \in [0,1]\),
$$\begin{aligned} \phi (x,J^{-1}(\lambda Jy+(1-\lambda )Jz))\le \lambda \phi (x,y)+(1-\lambda )\phi (x,z). \end{aligned}$$(7)
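Since in a Hilbert space J is the identity and \(\phi (x,y)=||x-y||^2\), the three properties above can be verified numerically. The following sanity check, with randomly drawn vectors, is our own illustration:

```python
import numpy as np

# In a Hilbert space J = I and phi(x, y) = ||x - y||^2, so properties
# (5)-(7) of the Lyapunov functional can be checked directly.
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 4))
lam = 0.3

phi = lambda a, b: np.linalg.norm(a - b) ** 2

# (5): (||x|| - ||y||)^2 <= phi(x, y) <= (||x|| + ||y||)^2
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
assert (nx - ny) ** 2 <= phi(x, y) <= (nx + ny) ** 2

# (6): three-point identity phi(x,y) = phi(x,z) + phi(z,y) + 2<z - x, Jy - Jz>
lhs = phi(x, y)
rhs = phi(x, z) + phi(z, y) + 2 * np.dot(z - x, y - z)
assert abs(lhs - rhs) < 1e-9

# (7): convexity of phi(x, .) along segments (J = identity here)
assert phi(x, lam * y + (1 - lam) * z) <= lam * phi(x, y) + (1 - lam) * phi(x, z) + 1e-9
```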
Throughout this paper, we assume that C is a nonempty closed convex subset of E.
Let \(S:C\rightarrow C\) be a mapping with the fixed point set Fix(S). A point \(p\in C\) is called an asymptotic fixed point [11] of S if there exists a sequence \(\left\{ x_n\right\} \subset C\) converging weakly to p and such that \(||x_n-S(x_n)||\rightarrow 0\). The set of asymptotic fixed points of S is denoted by \(\hat{F}(S)\). A mapping \(S:C\rightarrow C\) is called:
(i) nonexpansive if \(||S(x)-S(y)||\le ||x-y||\) for all \(x,y\in C\);
(ii) relatively nonexpansive [6] if Fix\((S)\ne \emptyset \), Fix\((S)=\hat{F}(S)\) and
(iii) quasi-\(\phi \)-nonexpansive if Fix\((S)\ne \emptyset \) and
(iv) demiclosed at zero if for any sequence \(\left\{ x_n\right\} \subset C\) converging weakly to x such that \(\left\{ x_n-S(x_n)\right\} \) converges strongly to 0, one has \(x\in C\) and \(x\in {\text {Fix}}(S)\).
The fixed point set Fix(S) of a relatively nonexpansive mapping S in a strictly convex and smooth Banach space is closed and convex [32, Proposition 2.4]. Since the proof in [32, Proposition 2.4] does not use the assumption Fix\((S)=\hat{F}(S)\), this conclusion is still true for the class of quasi-\(\phi \)-nonexpansive mappings.
We have the following results.
Lemma 2.1
[51] Let E be a uniformly convex Banach space and \(r>0\). Then, there exists a strictly increasing, continuous and convex function \(g:[0,2r] \rightarrow [0,+\infty )\) such that \(g(0) = 0\) and
$$\begin{aligned} \left\| \lambda x+(1-\lambda )y\right\| ^2\le \lambda \left\| x\right\| ^2+(1-\lambda )\left\| y\right\| ^2-\lambda (1-\lambda )g(\left\| x-y\right\| ) \end{aligned}$$
for all \(\lambda \in [0,1]\) and \(x,y\in B_r=\left\{ z\in E:||z||\le r\right\} \).
Lemma 2.2
[25] Let E be a uniformly convex Banach space and \(\bar{r}>0\). Then, there exists a strictly increasing, continuous and convex function \(\bar{g}:[0,2\bar{r}] \rightarrow [0,+\infty )\) such that \(\bar{g}(0) = 0\) and \( \bar{g}(||x-y||)\le \phi (x,y) \) for all \(x,y\in B_{\bar{r}}=\left\{ z\in E:||z||\le \bar{r}\right\} \).
Lemma 2.3
[1] Let \(\left\{ x_n\right\} \) and \(\left\{ y_n\right\} \) be two sequences in a uniformly convex and uniformly smooth real Banach space E. If \(\phi (x_n,y_n)\rightarrow 0\) and either \(\left\{ x_n\right\} \) or \(\left\{ y_n\right\} \) is bounded, then \(\left\| x_n-y_n\right\| \rightarrow 0\) as \(n\rightarrow \infty \).
Lemma 2.4
[7, 51] Let E be a 2-uniformly convex and smooth Banach space. Then, for all \(x,y\in E\), we have
where \(\frac{1}{c}~(0<c\le 1)\) is the 2-uniformly convex constant of E.
Lemma 2.5
[7, 8] Let E be a 2-uniformly convex and smooth Banach space. Then, there exists \(\tau >0\) such that
The generalized projection \(\varPi _C:E\rightarrow C\) is defined by
$$\begin{aligned} \varPi _C(x)=\mathrm{argmin}\left\{ \phi (y,x):y\in C\right\} ,\quad x\in E. \end{aligned}$$
If E is a Hilbert space, then \(\phi (x,y)=||x-y||^2\). Thus, the generalized projection \(\varPi _C\) is the metric projection \(P_C\). In what follows, we need the following properties of the functional \(\phi \) and the generalized projection \(\varPi _C\).
Lemma 2.6
[1] Let E be a smooth, strictly convex, and reflexive Banach space and C a nonempty closed convex subset of E. Then, the following conclusions hold:
(i) \(\phi (x,\varPi _C (y))+\phi (\varPi _C (y),y) \le \phi (x,y), \forall x \in C, y \in E\);
(ii) if \(x\in E, z \in C\), then \(z= \varPi _C(x)\) iff \(\left\langle y-z,Jx-Jz\right\rangle \le 0, \forall y \in C\);
(iii) \(\phi (x,y)=0\) iff \(x=y\).
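In the Hilbert case \(\varPi _C=P_C\), and Lemma 2.6(i)–(ii) reduce to the classical properties of the metric projection. A quick numerical check for the box \(C=[0,1]^3\) (our own illustrative choice) is:

```python
import numpy as np

# In a Hilbert space Pi_C = P_C and Lemma 2.6(ii) becomes the classical
# variational characterization <y - z, x - z> <= 0 for all y in C.
# Illustrative constraint set (ours): C = [0, 1]^3.
proj = lambda x: np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(1)
x = rng.normal(scale=2.0, size=3)
z = proj(x)
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=3)      # arbitrary point of C
    assert np.dot(y - z, x - z) <= 1e-12   # Lemma 2.6(ii) with J = I

# Lemma 2.6(i): phi(y, P_C(x)) + phi(P_C(x), x) <= phi(y, x) for y in C
phi = lambda a, b: np.linalg.norm(a - b) ** 2
y = rng.uniform(0.0, 1.0, size=3)
assert phi(y, z) + phi(z, x) <= phi(y, x) + 1e-12
```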
Let E be a real Banach space. In [1], Alber studied the function \(V:E\times E^*\rightarrow \mathfrak {R}\) defined by
$$\begin{aligned} V(x,x^*)=\left\| x\right\| ^2-2\left\langle x,x^*\right\rangle +\left\| x^*\right\| ^2,\quad \forall x\in E,~x^*\in E^*. \end{aligned}$$
Then, from the definition of the Lyapunov functional \(\phi (x,y)\), it follows that \(V(x,x^*)=\phi (x,J^{-1}(x^*))\) for all \(x\in E\) and \(x^*\in E^*\).
Lemma 2.7
[1] Let E be a reflexive, strictly convex and smooth Banach space with its dual \(E^*\). Then
$$\begin{aligned} V(x,x^*)+2\left\langle J^{-1}(x^*)-x,y^*\right\rangle \le V(x,x^*+y^*) \end{aligned}$$
for all \(x\in E\) and \(x^*,y^*\in E^*\).
In the sequel, we use the following definitions:
The normal cone \(N_C\) to a set C at the point \( x \in C\) is defined by
$$\begin{aligned} N_C(x)=\left\{ x^*\in E^*:\left\langle y-x,x^*\right\rangle \le 0,~\forall y\in C\right\} . \end{aligned}$$
Let \(g:C\rightarrow \mathfrak {R}\) be a function. The subdifferential of g at x is defined by
$$\begin{aligned} \partial g(x)=\left\{ x^*\in E^*:g(y)-g(x)\ge \left\langle y-x,x^*\right\rangle ,~\forall y\in C\right\} . \end{aligned}$$
Remark 2.1
It follows easily from the definitions of the subdifferential and \(\phi (x,y)\) that, for each fixed \(x\in E\), the function \(y\mapsto \frac{1}{2}\phi (y,x)\) is convex and \(\frac{1}{2}\partial _1\phi (y,x)=\left\{ Jy-Jx\right\} \).
We have the following result, see [47, Section 7.2, Chapter 7] for more details.
Lemma 2.8
Let C be a nonempty convex subset of a Banach space E and \(g:C\rightarrow \mathfrak {R}\cup \left\{ +\infty \right\} \) be a convex, subdifferentiable and lower semicontinuous function. Furthermore, assume that g satisfies the following regularity condition: either \({\text {int}}(C)\ne \emptyset \) or g is continuous at a point of C.
Then, \(x^*\) is a solution to the following convex optimization problem \(\min \left\{ g(x):x\in C\right\} \) if and only if \(0\in \partial g(x^*)+N_C(x^*)\), where \(\partial g(.)\) denotes the subdifferential of g and \(N_C(x^*)\) is the normal cone to C at \(x^*\).
We also need the following two technical lemmas to prove the strong convergence of the proposed algorithms.
Lemma 2.9
[42, 52] Let \(\left\{ a_n\right\} \) be a sequence of nonnegative real numbers. Suppose that
$$\begin{aligned} a_{n+1}\le (1-\gamma _n)a_n+\gamma _n\delta _n \end{aligned}$$
for all \(n \ge 0\), where the sequences \(\left\{ \gamma _n\right\} \) in (0, 1) and \(\left\{ \delta _n\right\} \) in \(\mathfrak {R}\) satisfy the conditions: \(\lim _{n\rightarrow \infty }\gamma _n=0\), \(\sum _{n=1}^\infty \gamma _n=\infty \) and \(\lim \sup _{n\rightarrow \infty }\delta _n\le 0\). Then \(\lim _{n\rightarrow \infty }a_n=0\).
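Lemma 2.9 is the standard tool behind Halpern-type strong convergence proofs, and its conclusion is easy to see numerically. The concrete sequences \(\gamma _n\) and \(\delta _n\) below are our own illustrative choices:

```python
# Numerical illustration of Lemma 2.9: if a_{n+1} <= (1 - gamma_n) a_n
# + gamma_n * delta_n with gamma_n -> 0, sum gamma_n = infinity and
# limsup delta_n <= 0, then a_n -> 0.
# Illustrative choices (ours): gamma_n = 1/(n+2), delta_n = 1/(n+1).

def driven_sequence(a0=5.0, n_iters=100_000):
    a = a0
    for n in range(n_iters):
        gamma = 1.0 / (n + 2)      # gamma_n -> 0 and sum gamma_n = infinity
        delta = 1.0 / (n + 1)      # tends to 0, so limsup delta_n <= 0
        # take the recursion with equality: the worst case the lemma allows
        a = (1 - gamma) * a + gamma * delta
    return a

print(driven_sequence())   # close to 0
```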
Lemma 2.10
[28, Remark 4.4] Let \(\left\{ \epsilon _n\right\} \) be a sequence of non-negative real numbers. Suppose that for any integer m, there exists an integer p such that \(p\ge m\) and \(\epsilon _p\le \epsilon _{p+1}\). Let \(n_0\) be an integer such that \(\epsilon _{n_0}\le \epsilon _{n_0+1}\) and define, for every integer \(n\ge n_0\),
$$\begin{aligned} \tau (n)=\max \left\{ k\in \mathbb {N}:n_0\le k\le n,~\epsilon _k\le \epsilon _{k+1}\right\} . \end{aligned}$$
Then, \(0\le \epsilon _n \le \epsilon _{\tau (n)+1}\) for all \(n\ge n_0\). Furthermore, the sequence \(\left\{ \tau (n)\right\} _{n\ge n_0}\) is non-decreasing and tends to \(+\infty \) as \(n\rightarrow \infty \).
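The index \(\tau (n)\) of Lemma 2.10 (Maingé's technique) can be computed explicitly on a toy sequence. The data below is our own finite illustration; the lemma itself concerns infinite sequences with infinitely many indices p satisfying \(\epsilon _p\le \epsilon _{p+1}\):

```python
# tau(n) picks the latest index k <= n at which the sequence fails to
# decrease, i.e. eps_k <= eps_{k+1}.  Toy data (our own choice):
eps = [5.0, 3.0, 3.5, 2.0, 2.5, 2.5, 1.0, 0.5]
n0 = 1                              # eps_1 <= eps_2 holds

def tau(n, n0=1):
    return max(k for k in range(n0, n + 1) if eps[k] <= eps[k + 1])

# Conclusions of the lemma on this window:
for n in range(n0, len(eps) - 1):
    assert eps[n] <= eps[tau(n) + 1]          # eps_n <= eps_{tau(n)+1}
assert all(tau(n) <= tau(n + 1) for n in range(n0, len(eps) - 2))  # tau non-decreasing
```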
3 Halpern extragradient method
In this section, we introduce an algorithm for finding a solution of an equilibrium problem which is also a solution of a fixed point problem. The algorithm combines the extragradient method with the Halpern iteration. The algorithm is designed as follows:
Algorithm 3.1
(Halpern extragradient method).
Initialization. Choose \(x_0,~u\in C\). The sequences \(\left\{ \lambda _n\right\} \), \(\left\{ \alpha _n\right\} \), \(\left\{ \beta _n\right\} \) satisfy Condition C below.
Step 1. Solve successively the two optimization problems
$$\begin{aligned} \left\{ \begin{array}{ll} y_n=\mathrm{argmin}\left\{ \lambda _nf(x_n,y)+\frac{1}{2}\phi (y,x_n):y\in C\right\} ,\\ z_n=\mathrm{argmin}\left\{ \lambda _nf(y_n,y)+\frac{1}{2}\phi (y,x_n):y\in C\right\} . \end{array}\right. \end{aligned}$$
Step 2. Compute \(x_{n+1}=\varPi _C\left( J^{-1}(\alpha _n Ju+(1-\alpha _n)(\beta _nJz_n+(1-\beta _n)JSz_n)) \right) \).
Some explicit formulas for computing J and \(J^{-1}\) in the Banach spaces \(l^p\), \(L^p\) and \(W_m^p\) can be found in [2, page 36]. In order to establish the convergence of Algorithm 3.1, we consider the following conditions for the bifunction f, the mapping S and the parameter sequences \(\left\{ \lambda _n\right\} \), \(\left\{ \alpha _n\right\} \), \(\left\{ \beta _n\right\} \).
Condition A:
A0. Either int\((C)\ne \emptyset \) or for each \(x\in C\), the function \(f(x,\cdot )\) is continuous at a point in C.
A1. f is pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).
A2. f is \(\phi \)-Lipschitz-type continuous on C, i.e., there exist two positive constants \(c_1,~c_2\) such that
$$\begin{aligned} f(x,y)+f(y,z)\ge f(x,z)-c_1\phi (y,x)-c_2\phi (z,y),\quad \forall x,y,z\in C. \end{aligned}$$
A3. \(\lim \sup _{n\rightarrow \infty }f(x_n,y)\le f(x,y)\) for each sequence \(\left\{ x_n\right\} \) converging weakly to \(x\in C\) and for all \(y \in C\).
A4. \(f(x,\cdot )\) is convex and subdifferentiable on C for every \(x\in C\).
Condition B:
B1. S is a quasi-\(\phi \)-nonexpansive mapping.
B2. \(I-S\) is demiclosed at zero.
Condition C:
C1. \(0<\underline{\lambda }\le \lambda _n\le \overline{\lambda }<\min \left\{ \frac{1}{2c_1},\frac{1}{2c_2}\right\} \).
C2. \(\left\{ \alpha _n\right\} \subset (0,1)\), \(\lim _{n\rightarrow \infty }\alpha _n=0\), \(\sum _{n=1}^\infty \alpha _n=\infty \).
C3. \(\left\{ \beta _n\right\} \subset (0,1)\), \(0<\lim \inf _{n\rightarrow \infty }\beta _n\le \lim \sup _{n\rightarrow \infty }\beta _n<1.\)
It is easy to show that the solution set EP(f, C) is closed and convex when f satisfies conditions A1, A3 and A4. Since the fixed point set Fix(S) of S is closed and convex under condition B1, the solution set \(F:={\text {EP}}(f,C)\cap {\text {Fix}}(S)\) is also closed and convex.
In this paper, we assume that the solution set \(F:={\text {EP}}(f,C)\cap {\text {Fix}}(S)\) is nonempty, with the consequence that the generalized projection \(\varPi _F(u)\) exists and is unique for all \(u\in E\).
Let us mention here that when E is a Hilbert space, the \(\phi \)-Lipschitz-type condition of f becomes the Lipschitz-type condition introduced by Mastroeni in [30].
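In that Hilbert-space setting (\(J=I\), \(\phi (x,y)=||x-y||^2\), \(\varPi _C=P_C\)), Algorithm 3.1 can be sketched numerically. The Python sketch below is our own illustration under these simplifications: we use the variational-inequality bifunction \(f(x,y)=\langle F(x),y-x\rangle \), for which both argmin subproblems of Step 1 reduce to projections, and all concrete data (F, C, S and the parameter sequences) are illustrative choices, not from the paper.

```python
import numpy as np

# Hilbert-space sketch of Algorithm 3.1 (J = I, phi(x,y) = ||x-y||^2,
# Pi_C = P_C) for f(x, y) = <F(x), y - x>.
# Illustrative data: F(x) = A x with A skew (pseudomonotone f), C = [-1,1]^2,
# and S the projection onto the diagonal {x1 = x2} (quasi-nonexpansive).

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
P_C = lambda x: np.clip(x, -1.0, 1.0)

def S(x):                                    # projection onto {x1 = x2}
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

def halpern_extragradient(u, x0, lam=0.4, n_iters=2000):
    x = x0.astype(float)
    for n in range(n_iters):
        alpha = 1.0 / (n + 2)                # C2: alpha_n -> 0, sum = infinity
        beta = 0.5                           # C3
        y = P_C(x - lam * F(x))              # Step 1, first argmin
        z = P_C(x - lam * F(y))              # Step 1, second argmin
        t = beta * z + (1 - beta) * S(z)
        x = P_C(alpha * u + (1 - alpha) * t) # Step 2 (Halpern step)
    return x

# EP(f, C) = {0} and Fix(S) is the diagonal, so F = {0} and the iterates
# should approach Pi_F(u) = 0 for any anchor u.
print(halpern_extragradient(u=np.array([0.9, -0.5]), x0=np.array([1.0, 1.0])))
```

As is typical for Halpern schemes, the convergence is slow but strong; the anchor term \(\alpha _nJu\) is what forces convergence to the specific solution \(\varPi _F(u)\).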
In the following three lemmas, we assume that the nonempty closed convex set C is a subset of a uniformly smooth and uniformly convex Banach space E.
Lemma 3.1
Let \(\left\{ x_n\right\} \), \(\left\{ y_n\right\} \) and \(\left\{ z_n\right\} \) be the sequences generated by Algorithm 3.1. Then, the following assertions hold for all \(x^*\in {\text {EP}}(f,C)\), \(y\in C\) and \(n\ge 0\).
-
(i)
\(\lambda _n f(y_n,y)\ge \left\langle Jy_n-Jx_n,y_n-z_n\right\rangle \, +\, \left\langle Jz_n-Jx_n,z_n-y\right\rangle -c_1\lambda _n \phi (y_n,x_n)-c_2\lambda _n\phi (z_n,y_n)\).
-
(ii)
\(\phi (x^*,z_n)\le \phi (x^*,x_n)-(1-2\lambda _n c_1)\phi (y_n,x_n)-(1-2\lambda _n c_2)\phi (z_n,y_n)\).
Proof
Let n be fixed.
(i) From the definition of \(y_n\) and Lemma 2.8, we have
Thus, there exist \(w\in \partial _2 f(x_n,y_{n})\) and \(\bar{w}\in N_C(y_{n})\) such that
Hence,
This together with the definition of \(N_C\) implies that
Since \(w\in \partial _2 f(x_n,y_{n})\), we have
From the last two inequalities, we obtain
Substituting \(y=z_n\) into the last inequality, we obtain
From the \(\phi \)-Lipschitz-type condition of f, we have
The last two inequalities imply that
Similarly to inequality (8), from the definition of \(z_n\), we obtain
Thus,
This together with inequality (9) leads to the desired conclusion.
(ii) Substituting \(y=x^*\) into (10), we obtain
Since \(x^*\in {\text {EP}}(f,C)\), \(f(x^*,y_n)\ge 0\). It follows from the pseudomonotonicity of f that \(f(y_n,x^*)\le 0\). This together with the last inequality implies that
It follows from relations (9) and (11) that
This together with relation (6) leads to the desired conclusion. Lemma 3.1 is proved. \(\square \)
Next, we will prove the boundedness of all the sequences generated by Algorithm 3.1.
Lemma 3.2
The sequences \(\left\{ x_n\right\} \), \(\left\{ y_n\right\} \), \(\left\{ z_n\right\} \) and \(\left\{ Sz_n\right\} \) are bounded.
Proof
Put \(t_n=J^{-1}(\beta _nJz_n+(1-\beta _n)JSz_n)\). Thus, \(x_{n+1}=\varPi _C\big (J^{-1}(\alpha _n Ju+(1-\alpha _n)Jt_n) \big )\). It follows from Lemma 2.6(i) and relation (7) that, for all \(x^*\in F:={\text {EP}}(f,C)\cap {\text {Fix}}(S)\),
From relation (7), the property of S and Lemma 3.1(ii),
It follows from (12) and (13) that
This implies that the sequence \(\left\{ \phi (x^*,x_n)\right\} \) is bounded. Thus, from (13), we also obtain the boundedness of the sequences \(\left\{ \phi (x^*,z_n)\right\} \) and \(\left\{ \phi (x^*,t_n)\right\} \). Since \(\phi (x^*,Sz_n)\le \phi (x^*,z_n)\) for all n, the sequence \(\left\{ \phi (x^*,Sz_n)\right\} \) is bounded. This together with relation (5) implies that the sequences \(\left\{ x_n\right\} \), \(\left\{ z_n\right\} \), \(\left\{ Sz_n\right\} \) are bounded. The boundedness of the sequence \(\left\{ y_n\right\} \) follows from Lemma 3.1(ii), the hypothesis of \(\lambda _n\) and relation (5). This completes the proof of Lemma 3.2. \(\square \)
It follows from Lemma 3.2 that there exists \(r>0\) such that \(\left\{ x_n\right\} ,\left\{ y_n\right\} ,\left\{ z_n\right\} ,\left\{ Sz_n\right\} \subset B_r=\left\{ z\in E:||z||\le r\right\} \) for all \(n\ge 0\). From Lemma 2.1 and the definition of J, there exists a strictly increasing, continuous and convex function \(g:[0,2r] \rightarrow [0,+\infty )\) such that \(g(0) = 0\) and for all \(n\ge 0\),
Put \(w_n=J^{-1}(\alpha _nJu+(1-\alpha _n)Jt_n)\) and \(x^\dagger =\varPi _F (u)\), where \(t_n=J^{-1}(\beta _nJz_n+(1-\beta _n)JSz_n)\), and
From Lemma 3.2 and the conditions on \(\{\lambda _n\}\) and \(\{\beta _n\}\), we deduce that the sequence \(\left\{ T_n\right\} \) is bounded. Thus, there exists \(M>0\) such that
We have the following result which plays an important role in proving the strong convergence of Algorithm 3.1.
Lemma 3.3
-
(i)
\(T_n\le \phi (x^\dagger ,x_n)-\phi (x^\dagger ,x_{n+1})+\alpha _n M\) for all n.
-
(ii)
\(\phi (x^\dagger ,x_{n+1})\le (1-\alpha _n)\phi (x^\dagger ,x_{n})+2\alpha _n \left\langle w_n-x^\dagger ,Ju-Jx^\dagger \right\rangle \) for all n.
Proof
From the definitions of \(t_n\), \(\phi \), relation (14) and the property of S, we have
This together with relation (12) with \(x^*=x^\dagger \) and Lemma 3.1 (ii) with \(x^*=x^\dagger \) implies that
Hence, we obtain the desired conclusion.
(ii) Note that \(x_{n+1}=\varPi _Cw_n\) where \(w_n=J^{-1}(\alpha _nJu+(1-\alpha _n)Jt_n)\). From Lemmas 2.6(i), 2.7, the definitions of \(\phi \) and V, and relation (7), we obtain
Now, we prove the first main result. \(\square \)
Theorem 3.1
Let C be a nonempty closed convex subset of a uniformly smooth and uniformly convex Banach space E. Assume that Conditions A, B and C are satisfied and that the solution set \(F:=\mathrm{{EP}}(f,C)\cap \mathrm{{Fix}}(S)\) is nonempty. Then, the sequences \(\left\{ x_n\right\} \), \(\left\{ y_n\right\} \), \(\left\{ z_n\right\} \) generated by Algorithm 3.1 converge strongly to \(\varPi _F(u)\).
Proof
We use the two inequalities of Lemma 3.3 to prove Theorem 3.1. Recalling the first inequality in that lemma, we have, for all n, that
where \(T_n\) and M are defined, respectively, by (15) and (16), and \(x^\dagger =\varPi _F(u)\).
Then, we consider two cases.
Case 1. There exists \(n_0\ge 0\) such that the sequence \(\left\{ \phi (x^\dagger ,x_n)\right\} \) is nonincreasing.
In this case, the limit of \(\left\{ \phi (x^\dagger ,x_n)\right\} \) exists. Thus, from (17) and \(\alpha _n\rightarrow 0\), we have \(\lim _{n\rightarrow \infty }T_n=0.\) From the definition of \(T_n\) and the hypotheses on \(\lambda _n\) and \(\beta _n\), we obtain
Thus, from Lemmas 2.1, 2.3 and 3.2 we have, when \(n \rightarrow \infty \), that
and so \(||x_n-z_n||\rightarrow 0\) by the triangle inequality. Since J and \(J^{-1}\) are uniformly continuous on each bounded subset of E and \(E^*\) (resp.), we obtain
Since \(\left\{ z_n\right\} \subset C\) is bounded, there exists a subsequence \(\left\{ z_m\right\} \) of \(\left\{ z_n\right\} \) converging weakly to \(p\in C\) such that
From \(||z_n-Sz_n||\rightarrow 0\) as \(n \rightarrow \infty \), and the demi-closedness at zero of S, we have that \(p\in {\text {Fix}}(S)\). From Lemma 3.1(i), relations (18), (20), the hypothesis on \(\lambda _n\) and (A3) and noting that \(y_n\rightharpoonup p\) as \(n \rightarrow \infty \), we obtain that
Thus, \(p\in {\text {EP}}(f,C)\) and \(p\in F:={\text {EP}}(f,C)\cap {\text {Fix}}(S)\). From (21), \(x^\dagger =\varPi _Fu\) and Lemma 2.6 (ii), we have
From \(w_n=J^{-1}(\alpha _nJu+(1-\alpha _n)Jt_n)\), \(\alpha _n\rightarrow 0\) and the boundedness of \(\left\{ Ju-Jt_n\right\} \), we obtain
Since \(J^{-1}\) is uniformly continuous on each bounded subset of \(E^*\), we deduce that \(||w_n-t_n||\rightarrow 0\) as \(n \rightarrow \infty \). From \(t_n=J^{-1}(\beta _nJz_n+(1-\beta _n)JSz_n)\) and (20), we have
Thus, \(||t_n-Sz_n||\rightarrow 0\) and, from (19), it follows that \(||t_n-z_n||\rightarrow 0\). Hence, \(||w_n-z_n||\rightarrow 0\) because \(||w_n-t_n||\rightarrow 0\) . This together with (22) implies that
Thus, from Lemmas 2.9 and 3.3 (ii), we obtain \(\phi (x^\dagger ,x_n)\rightarrow 0\) or \(x_n\rightarrow x^\dagger \).
Case 2. There exists a subsequence \(\left\{ \phi (x^\dagger ,x_{n_i})\right\} \) of \(\left\{ \phi (x^\dagger ,x_{n})\right\} \) such that \(\phi (x^\dagger ,x_{n_i})\le \phi (x^\dagger ,x_{n_i+1})\) for all \(i\ge 0\).
In that case, it follows from Lemma 2.10 that
where \(\tau (n)=\max \left\{ k\in N:n_0\le k\le n, ~\phi (x^\dagger ,x_k)\le \phi (x^\dagger ,x_{k+1})\right\} \). Furthermore, the sequence \(\left\{ \tau (n)\right\} _{n\ge n_0}\) is non-decreasing and \(\tau (n)\rightarrow +\infty \) as \(n\rightarrow \infty \). From relations (17), (24), and \(\alpha _{\tau (n)}\rightarrow 0\), we have
From the definition of \(T_{\tau (n)}\) and the hypotheses on \(\lambda _{\tau (n)}\) and \(\beta _{\tau (n)}\), we obtain
Thus, from Lemmas 2.1, 2.3 and 3.2 we have, when \(n \rightarrow \infty \), that
and so \(||x_{\tau (n)}-z_{\tau (n)}||\rightarrow 0\). Since J and \(J^{-1}\) are uniformly continuous on each bounded subset of E and \(E^*\) (resp.),
Since \(\left\{ z_{\tau (n)}\right\} \subset C\) is bounded, there exists a subsequence \(\left\{ z_{\tau (m)}\right\} \) of \(\left\{ z_{\tau (n)}\right\} \) converging weakly to \(p\in C\) such that
By arguing as in Case 1, we have \(p\in F\) and
From Lemma 3.3(ii), one has
which, from relation (24), implies that
Thus, it follows from \(\alpha _{\tau (n)}>0\) that \(\phi (x^\dagger ,x_{\tau (n)})\le 2\left\langle w_{\tau (n)}-x^\dagger ,Ju-Jx^\dagger \right\rangle \). This together with (29) implies that \(\lim \sup _{n\rightarrow \infty }\phi (x^\dagger ,x_{\tau (n)})\le 0\) or \(\lim _{n\rightarrow \infty }\phi (x^\dagger ,x_{\tau (n)})= 0\). Thus, one obtains from (30) and \(\alpha _{\tau (n)}\rightarrow 0\) that \(\phi (x^\dagger ,x_{{\tau (n)}+1})\rightarrow 0\). Since \(\phi (x^\dagger ,x_{n}) \le \phi (x^\dagger ,x_{\tau (n)+1})\) for all \(n\ge n_0\) by relation (24), one has that \(\phi (x^\dagger ,x_{n})\rightarrow 0\) and thus that \(x_n\rightarrow x^\dagger \) as \(n \rightarrow \infty \). This completes the proof of Theorem 3.1. \(\square \)
4 Halpern linesearch method
The convergence of Algorithm 3.1 is established under hypothesis A2, the \(\phi \)-Lipschitz-type condition on the bifunction f, which depends on the two constants \(c_1\) and \(c_2\). In some cases, these constants are unknown or difficult to estimate. In this section, we use a linesearch technique to avoid this assumption. We also assume that the Banach space E is uniformly smooth and 2-uniformly convex with the constant c defined as in Lemma 2.4. Without hypothesis A2, however, we have to slightly strengthen hypothesis A3. More precisely, we assume that the mapping S satisfies Condition B and the bifunction f satisfies the following condition.
Condition D: Assumptions A0, A1, A4 hold and
A3a. f is jointly weakly continuous on the product \(\varDelta \times \varDelta \) where \(\varDelta \) is an open convex set containing C, in the sense that if \(x,~y\in \varDelta \) and \(\left\{ x_n\right\} ,~\left\{ y_n\right\} \) are two sequences in \( \varDelta \) converging weakly to x and y, respectively, then \(f(x_n,y_n)\rightarrow f(x,y)\).
It is easy to see that condition A3a implies condition A3. So the solution set \({\text {EP}}(f,C)\) is closed and convex. Furthermore, the fixed point set Fix(S) of S being closed and convex, the solution set \(F={\text {EP}}(f,C) \cap {\text {Fix}}(S)\) is also closed and convex. As in Section 3, we assume that the solution set F is nonempty. Consequently, the generalized projection \(\varPi _F(u)\) exists and is unique for all \(u \in E\).
On the other hand, we need three additional parameters \(\nu \), \(\gamma \), \(\alpha \) and, since f is no longer assumed to satisfy condition A2, another hypothesis on the sequence \(\left\{ \lambda _n\right\} \). More precisely, we introduce the following condition:
Condition E: The sequences \(\left\{ \alpha _n\right\} \), \(\left\{ \beta _n\right\} \) satisfy C2, C3 and
C1a. \(\left\{ \lambda _n\right\} \subset [\lambda ,1]\), where \(\lambda \in (0,1]\).
C4. \(\nu \in (0,\frac{c^2}{2})\), \(\gamma \in (0,1)\), \(\alpha \in (0,1)\).
Now, we are in a position to formulate the second algorithm:
Algorithm 4.1
(Halpern linesearch method).
Initialization. Choose \(x_0,~u\in C\), the parameters \(\nu \), \(\gamma \), \(\alpha \) and the sequences \(\left\{ \alpha _n\right\} \), \(\left\{ \beta _n\right\} \) and \(\left\{ \lambda _n\right\} \) such that Condition E above is satisfied.
Step 1. Solve the optimization problem
$$\begin{aligned} y_n=\mathrm{argmin}\left\{ \lambda _nf(x_n,y)+\frac{1}{2}\phi (y,x_n):y\in C\right\} . \end{aligned}$$
Step 2. If \(y_n=x_n\), then set \(z_n=x_n\). Otherwise
Step 2.1. Find the smallest nonnegative integer m such that
$$\begin{aligned} \left\{ \begin{array}{ll} z_{m,n}=(1-\gamma ^m)x_n+\gamma ^my_n,\\ f(z_{m,n},x_n)\ge \frac{\alpha }{2\lambda _n}\phi (y_n,x_n). \end{array}\right. \end{aligned}$$
Step 2.2. Set \(\rho _n=\gamma ^m\) and \(z_n=z_{m,n}\).
Step 3. Choose \(g_n\in \partial _2 f(z_n,x_n)\) and compute \(w_n=\varPi _CJ^{-1}(Jx_n-\sigma _n g_n)\), where \(\sigma _n=\frac{\nu f(z_n,x_n)}{||g_n||^2}\) if \(y_n\ne x_n\) and \(\sigma _n=0\) otherwise.
Step 4. Compute \(x_{n+1}=\varPi _CJ^{-1}\left( \alpha _n Ju+(1-\alpha _n)(\beta _nJw_n+(1-\beta _n)JSw_n) \right) \).
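In the Hilbert-space specialization (\(J=I\), \(\phi (x,y)=||x-y||^2\)), Algorithm 4.1 can also be sketched numerically. The sketch below is our own illustration: we use the variational-inequality bifunction \(f(x,y)=\langle F(x),y-x\rangle \), for which \(F(z_n)\) plays the role of the subgradient \(g_n\) in Step 3, and the Armijo-type acceptance test, like all concrete data, is an illustrative assumption consistent with Lemma 4.1, not a statement of the paper's exact linesearch.

```python
import numpy as np

# Hilbert-space sketch of Algorithm 4.1 (J = I, phi(x,y) = ||x-y||^2)
# for f(x, y) = <F(x), y - x>.  Illustrative data as before: skew A,
# C = [-1, 1]^2, S the projection onto the diagonal, so F = {0}.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
f = lambda x, y: F(x) @ (y - x)
P_C = lambda x: np.clip(x, -1.0, 1.0)

def S(x):
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

def halpern_linesearch(u, x0, lam=1.0, nu=0.45, gamma=0.5, alpha_ls=0.5,
                       n_iters=2000):
    x = x0.astype(float)
    for n in range(n_iters):
        alpha_n, beta_n = 1.0 / (n + 2), 0.5
        y = P_C(x - lam * F(x))             # Step 1 (argmin = projection here)
        if np.allclose(y, x, atol=1e-12):   # Step 2: y_n = x_n => z_n = x_n
            w = x
        else:
            m = 0                           # Step 2.1: backtracking linesearch
            thresh = alpha_ls / (2 * lam) * np.sum((y - x) ** 2)
            while m < 60:
                z = (1 - gamma ** m) * x + gamma ** m * y
                if f(z, x) >= thresh:       # Armijo-type test (our assumption)
                    break
                m += 1
            g = F(z)                        # g_n: gradient of f(z_n, .) at x_n
            sigma = nu * f(z, x) / max(g @ g, 1e-16)   # Step 3
            w = P_C(x - sigma * g)
        t = beta_n * w + (1 - beta_n) * S(w)
        x = P_C(alpha_n * u + (1 - alpha_n) * t)       # Step 4 (Halpern step)
    return x

print(halpern_linesearch(u=np.array([0.7, -0.2]), x0=np.array([1.0, -1.0])))
```

Unlike Algorithm 3.1, no Lipschitz constants appear: the stepsize \(\sigma _n\) is built from the linesearch output, which is the point of Section 4.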
To prove the strong convergence of this algorithm, we need the following lemmas where we assume that the nonempty closed convex set C is a subset of a uniformly smooth and 2-uniformly convex Banach space E.
Lemma 4.1
Suppose that \(y_n\ne x_n\) for some \(n\ge 0\). Then, for each such n,
-
(i)
The linesearch corresponding to \(x_n\) and \(y_n\) (Step 2.1) is well defined.
-
(ii)
\(f(z_n,x_n)>0\).
-
(iii)
\(0 \notin \partial _2f(z_n,x_n)\).
Proof
Repeating the proof of [48, Proposition 4.1] and replacing \(||x_n-y_n||^2\) by \(\phi (y_n,x_n)\), we obtain the desired conclusion. \(\square \)
Lemma 4.2
Let \(f:\varDelta \times \varDelta \rightarrow \mathfrak {R}\) be a bifunction satisfying conditions \({\mathrm{A3a}}\) and \({\mathrm{A4}}\). Let \(x,~z\in \varDelta \) and let \(\left\{ x_n\right\} \) and \(\left\{ z_n\right\} \) be two sequences in \(\varDelta \) converging weakly to x and z, respectively. Then, for any \(\epsilon >0\), there exist \(\eta >0\) and \(n_\epsilon >0\) such that
for every \(n\ge n_\epsilon \), where B denotes the closed unit ball in E.
Proof
See, e.g., [48, Proposition 4.3]. \(\square \)
In the sequel, let us denote \(t_n=J^{-1}(\beta _nJw_n+(1-\beta _n)JSw_n)\) where \(w_n\) is defined as in Step 3 of Algorithm 4.1. Then, it follows from the definition of \(x_{n+1}\) that
Next, we prove the boundedness of the sequences generated by Algorithm 4.1.
Lemma 4.3
(i) The sequences \(\left\{ x_n\right\} \), \(\left\{ w_n\right\} \), \(\left\{ t_n\right\} \) and \(\left\{ Sw_n\right\} \) generated by Algorithm 4.1 are bounded.
(ii) If \(\left\{ x_{n_k}\right\} \) is a subsequence of \(\left\{ x_n\right\} \) converging weakly to \(x\in C\), then the subsequences \(\left\{ y_{n_k}\right\} \), \(\left\{ z_{n_k}\right\} \) and \(\left\{ g_{n_k}\right\} \) are bounded.
Proof
(i) Let n be fixed. By arguing as in relations (12) and (13), we have
From Lemmas 2.6(i), 2.7 and the definitions of \(\phi \) and V, we obtain successively that
Since \(x^*\in {\text {EP}}(f,C)\), we have that \(f(x^*,z_n)\ge 0\). Then, it follows from the pseudomonotonicity of f that \(f(z_n,x^*)\le 0\). Consequently, from the definition of \(g_n\) and \(\sigma _n\), we have the following inequalities:
From Lemma 2.4, we can write
Combining relations (33)–(35) and the hypothesis on \(\nu \), we obtain that
Consequently, from relations (31), (32) and (36), we deduce that
Thus, the sequence \(\left\{ \phi (x^*,x_{n})\right\} \) is bounded and it follows from relations (36) and (32) that the sequences \(\left\{ \phi (x^*,w_{n})\right\} \) and \(\left\{ \phi (x^*,t_{n})\right\} \) are also bounded. Moreover, from the property of S, we have that \(\phi (x^*,Sw_{n})\le \phi (x^*,w_{n})\) for all \(n\ge 0\). Thus, the sequence \(\left\{ \phi (x^*,Sw_{n})\right\} \) is bounded. This together with relation (5) gives the desired conclusion.
(ii) Firstly, we show that the sequence \(\left\{ y_{n_k}\right\} \) is bounded. Without loss of generality, we can suppose that \(y_{n_k}\ne x_{n_k}\) for all k. Now let k be fixed and define the function \(T_k\) by
The subdifferential of \(T_k\) at y is given by
Note that \(\frac{1}{2}\partial _1\phi (y,x_{n_k})=\left\{ Jy-Jx_{n_k}\right\} \). Furthermore, for all \(y_1,y_2\in C\) and \(s(y_1)\in \partial T_k(y_1)\), \(s(y_2)\in \partial T_k(y_2)\), it is easy to see that there exist \(w_1\in \partial _2f(x_{n_k},y_1)\) and \(w_2\in \partial _2f(x_{n_k},y_2)\) such that \(s(y_1)=\lambda _{n_k}w_1+Jy_1-Jx_{n_k}\), \(s(y_2)=\lambda _{n_k}w_2+Jy_2-Jx_{n_k}\) and
From \(w_1\in \partial _2f(x_{n_k},y_1)\) and \(w_2\in \partial _2f(x_{n_k},y_2)\), we have
Adding both sides of the last two inequalities, we obtain
From relations (37), (38) and Lemma 2.5, there exists \(\tau >0\) such that
Note that \(0\in \partial T_k(y_{n_k})+N_C(y_{n_k})\) by definition of \(y_{n_k}\) and Lemma 2.8. Thus, there exists \(s(y_{n_k})\in \partial T_k(y_{n_k})\) such that \(-s(y_{n_k})\in N_C(y_{n_k})\), i.e.,
Substituting \(y=x_{n_k}\) into the last inequality, we have \(\left\langle s(y_{n_k}), x_{n_k}-y_{n_k}\right\rangle \ge 0\). Hence, it follows from (39) with \(y_2=x_{n_k}\) and \(y_1=y_{n_k}\) that
Thus
Since \(\frac{1}{2}\partial _1\phi (x_{n_k},x_{n_k})=0\), \(\partial T_k(x_{n_k})= \lambda _{n_k} \partial _2f(x_{n_k},x_{n_k})\). Moreover, it follows from \(x_{n_k}\rightharpoonup x\) and Lemma 4.2 that, for any \(\epsilon >0\), there exist \(\eta >0\) and \(k_0>0\) such that
for every \(k\ge k_0\), where B denotes the closed unit ball in E. Thus, we can conclude from the boundedness of \(\partial _2 f(x,x)\) and the hypothesis on \(\lambda _{n_k}\) and (41) that the sequence \(\left\{ ||x_{n_k}-y_{n_k}||\right\} \) is bounded. Since the sequence \(\left\{ x_{n_k}\right\} \) is bounded, the sequence \(\left\{ y_{n_k}\right\} \) is also bounded. Thus, it follows from the definition of \(z_{n_k}\) that the sequence \(\left\{ z_{n_k}\right\} \) is also bounded. Hence, there exists a subsequence of \(\left\{ z_{n_k}\right\} \), again denoted by \(\left\{ z_{n_k}\right\} \), converging weakly to z. It also follows from \(x_{n_k}\rightharpoonup x\), \(z_{n_k}\rightharpoonup z\) and Lemma 4.2 that, for any \(\epsilon >0\), there exist \(\eta >0\) and \(k_0>0\) such that
for every \(k\ge k_0\), where B denotes the closed unit ball in E. Since \(g_{n_k}\in \partial _2 f(z_{n_k},x_{n_k})\) and since B and \(\partial _2 f(z,x)\) are bounded, we also obtain that the sequence \(\left\{ g_{n_k}\right\} \) is bounded. This completes the proof of Lemma 4.3. \(\square \)
It follows from Lemma 4.3(i) that there exists \(r>0\) such that the sequences \(\left\{ x_n\right\} \), \(\left\{ w_n\right\} \), \(\left\{ t_n\right\} \) and \(\left\{ Sw_n\right\} \) are all contained in the ball \(B_r=\left\{ z\in E:||z||\le r\right\} \) for all \(n \ge 0\). From Lemma 2.1 and the definition of J, there exists a strictly increasing, continuous and convex function \(g:[0,2r] \rightarrow [0,+\infty )\) such that \(g(0) = 0\), and the following inequality holds for all \(n\ge 0\)
Lemma 4.4
Let \(\left\{ x_{n_k}\right\} \) be a subsequence of the sequence \(\left\{ x_{n}\right\} \) converging weakly to \(\bar{x}\in C\). Assume that
as \(k \rightarrow \infty \). Then, \(\bar{x}\in F:= \mathrm{{EP}}(f,C)\cap \mathrm{{Fix}}(S)\).
Proof
From the property of function g and hypothesis (43), we have that \(||Jw_{n_k}-JSw_{n_k}||\rightarrow 0\), and since \(J^{-1}\) is uniformly continuous on each bounded set, we can write
Then, it follows from (43) and Lemma 4.3(i) that the sequences \(\left\{ \sigma _{n_k}g_{n_k}\right\} \), \(\left\{ x_{n_k}\right\} \), \(\left\{ w_{n_k}\right\} \) are bounded. Since J and \(J^{-1}\) are uniformly continuous on every bounded set of E and \(E^*\), respectively, the sequence \(\left\{ J^{-1}(Jx_{n_k}-\sigma _{n_k} g_{n_k})\right\} \) is also bounded. Thus, there exists \(\bar{r}>0\) such that for all k,
By Lemma 2.2, there exists a strictly increasing, continuous and convex function \(\bar{g}:[0,2\bar{r}] \rightarrow [0,+\infty )\) such that \(\bar{g}(0) = 0\) and
Thus, from (43), we obtain \(\bar{g}(||w_{n_k}-J^{-1}(Jx_{n_k}-\sigma _{n_k} g_{n_k})||)\rightarrow 0\) as \(k \rightarrow \infty \). It follows from the property of \(\bar{g}\) that \(||w_{n_k}-J^{-1}(Jx_{n_k}-\sigma _{n_k} g_{n_k})||\rightarrow 0\). Since J is uniformly continuous on every bounded set, \(||Jw_{n_k}-JJ^{-1}(Jx_{n_k}-\sigma _{n_k} g_{n_k})||=||Jw_{n_k}-(Jx_{n_k}-\sigma _{n_k} g_{n_k})||\rightarrow 0\). This together with \(\sigma _{n_k}||g_{n_k}||\rightarrow 0\) implies that \(||Jw_{n_k}-Jx_{n_k}||\rightarrow 0\). Thus,
because J is uniformly continuous on every bounded set. Since \(x_{n_k}\rightharpoonup \bar{x}\), we have that \(w_{n_k}\rightharpoonup \bar{x} \in C\). This together with the demi-closedness at zero of S and (44) implies that \(\bar{x}\in {\text {Fix}}(S)\). Now, we show that \(\bar{x}\in {\text {EP}}(f,C)\). Since \(x_{n_k}\rightharpoonup \bar{x} \in C\), we obtain from Lemma 4.3 that the sequences \(\left\{ y_{n_k}\right\} \), \(\left\{ z_{n_k}\right\} \) and \(\left\{ g_{n_k}\right\} \) are bounded. Next, we show that \(||x_{n_k}-y_{n_k}||\rightarrow 0\). Without loss of generality, we can assume that \(x_{n_k}\ne y_{n_k}\) for all \(k\ge 0\). Thus, it follows from Lemma 4.1 that \(g_{n_k}\ne 0\) and \(f(z_{n_k},x_{n_k})>0\). From the definition of \(\sigma _{n_k}\), we have
where we have used, for each k, hypothesis (43) and the boundedness of the sequence \(\left\{ g_{n_k}\right\} \). Thus, from \(\nu >0\),
Since \(f(z_{n_k},\cdot )\) is convex for each k, we can write
Thus, \(\rho _{n_k}[f(z_{n_k},x_{n_k})-f(z_{n_k},y_{n_k})]\le f(z_{n_k},x_{n_k})\) for each k and it follows from the linesearch step (Step 2.1) that \( \frac{\rho _{n_k}\alpha }{2\lambda _{n_k}}\phi (y_{n_k},x_{n_k})\le f(z_{n_k},x_{n_k}).\) Hence, from (46) and the hypotheses on \(\alpha \) and \(\lambda _{n_k}\), we obtain
In the case when \(\limsup _{k\rightarrow \infty }\rho _{n_k}>0\), we obtain from (47) that \(\phi (y_{n_k},x_{n_k})\rightarrow 0\). Thus, from Lemma 2.3, \(||y_{n_k}-x_{n_k}||\rightarrow 0\). In the case when \(\lim _{k\rightarrow \infty }\rho _{n_k}=0\), let \(\left\{ m_k\right\} \) be the sequence of the smallest positive integers such that
where \(z_{n_k}=(1-\gamma ^{m_k})x_{n_k}+\gamma ^{m_k} y_{n_k}\). From \(\rho _{n_k}=\gamma ^{m_k}\rightarrow 0\), we can conclude that \(m_k>1\). From the linesearch inequality, we have
where \(\bar{z}_{n_k}=(1-\gamma ^{m_k-1})x_{n_k}+\gamma ^{m_k-1} y_{n_k}\). So, it follows from the definition of \(y_{n_k}\) that (see (8))
Hence, choosing \(y=x_{n_k}\), we obtain that
where the equality follows from relation (6). Then, combining relations (49) and (51), we obtain for each k
Since the sequence \(\left\{ y_{n_k}\right\} \) is bounded, without loss of generality, we can assume that \(y_{n_k}\rightharpoonup \bar{y}\). Then, it follows from \(x_{n_k}\rightharpoonup \bar{x}\), the definition of \(\bar{z}_{n_k}\) and \(\gamma ^{m_k-1}\rightarrow 0\) that \(\bar{z}_{n_k}\rightharpoonup \bar{x}\). Passing to the limit in (52) as \(k\rightarrow \infty \) and using conditions A3a and A1, we obtain
which implies that \(f(\bar{x},\bar{y})\ge 0\) because \(\alpha \in (0,1)\). Taking the limit in (51) and using the hypothesis on \(\lambda _{n_k}\), we get
Hence, \(\lim _{k\rightarrow \infty }\phi (y_{n_k},x_{n_k})=0\) which thanks to Lemma 2.3 implies that \(||x_{n_k}-y_{n_k}||\rightarrow 0\) as \(k \rightarrow \infty \). Consequently, \(\bar{x}=\bar{y}\). Now, passing to the limit in (50), noting that \(x_{n_k},y_{n_k},z_{n_k}\rightharpoonup \bar{x}\) and using condition A1 and the hypothesis on \(\lambda _{n_k}\), we get \(f(\bar{x},y)\ge 0\) for all \(y\in C\), i.e., \(\bar{x}\in \mathrm{{EP}}(f,C)\). Lemma 4.4 is proved. \(\square \)
Let n be fixed and consider \(u_n=J^{-1}\left( \alpha _n Ju+(1-\alpha _n)Jt_n\right) \) where \(t_n=J^{-1}(\beta _nJw_n+(1-\beta _n)JSw_n)\). From the definition of \(x_{n+1}\), we have \(x_{n+1}=\varPi _C(u_n)\). Let \(x^\dagger =\varPi _F(u)\) and
Using the same development as in Lemma 3.3, we have the following result.
Lemma 4.5
The following inequalities hold for each n:
\(\mathrm{{(i)}}\) \((1-\alpha _n)\bar{T}_n\le \phi (x^\dagger ,x_n)-\phi (x^\dagger ,x_{n+1})+\alpha _n \phi (x^\dagger ,u)\)
\(\mathrm{{(ii)}}\) \(\phi (x^\dagger ,x_{n+1})\le (1-\alpha _n)\phi (x^\dagger ,x_{n})+2\alpha _n \left\langle u_n-x^\dagger ,Ju-Jx^\dagger \right\rangle \).
Proof
Let n be fixed. From the definition of \(t_n\), \(\phi \) and relation (42), we have
This together with relation (36) implies that
Thus, it follows from (31) and the definition of \(\bar{T}_n\) that
So conclusion (i) is true. Conclusion (ii) can be proven as in Lemma 3.3(ii). \(\square \)
Now, we prove the second main result.
Theorem 4.1
Let C be a nonempty closed convex subset of a uniformly smooth and 2-uniformly convex Banach space E. Assume that Conditions B, D and E hold and that the solution set \(F:=\mathrm{{EP}}(f,C)\cap \mathrm{{Fix}}(S)\) is nonempty. Then, the sequence \(\left\{ x_n\right\} \) generated by Algorithm 4.1 converges strongly to \(\varPi _F(u)\).
Proof
Recall the first inequality in Lemma 4.5 as:
where \(x^\dagger =\varPi _F(u)\) and \(\bar{T}_n\) is defined by (54).
We consider two cases.
Case 1. There exists \(n_0\ge 0\) such that the sequence \(\left\{ \phi (x^\dagger ,x_n)\right\} \) is nonincreasing.
In that case, the limit of \(\left\{ \phi (x^\dagger ,x_n)\right\} \) exists. Thus, from Lemma 4.5 (i) and \(\alpha _n\rightarrow 0\), we have that \(\lim _{n\rightarrow \infty }\bar{T}_n=0.\) From the definition of \(\bar{T}_n\) and the hypotheses on \(\nu \) and \(\beta _n\), we obtain when \(n \rightarrow \infty \), that
Since \(\left\{ x_n\right\} \) is bounded, there exists a subsequence \(\left\{ x_{n_k}\right\} \) of \(\left\{ x_{n}\right\} \) converging weakly to p such that
Since C is closed and convex in the uniformly smooth and uniformly convex Banach space E, the set C is weakly closed. Thus, from \(\left\{ x_{n_k}\right\} \subset C\) and \(x_{n_k}\rightharpoonup p\), we obtain \(p\in C\). It follows from (56) with \(n=n_{k}\) and Lemma 4.4 that \(p\in \mathrm{{EP}}(f,C)\cap \mathrm{{Fix}}(S)\). From (56), by arguing as in Lemma 4.4 (see (45)), we also obtain
Thus, from (57), \(x^\dagger =\varPi _Fu\) and Lemma 2.6 (ii), we can write
Since \(u_n=J^{-1}(\alpha _nJu+(1-\alpha _n)Jt_n)\) for each n, we obtain that
because \(\alpha _n\rightarrow 0\) and the sequence \(\left\{ ||Ju-Jt_n||\right\} \) is bounded. Since \(J^{-1}\) is uniformly continuous on each bounded subset of \(E^*\), we deduce that
From \(t_n=J^{-1}(\beta _nJw_n+(1-\beta _n)JSw_n)\), we have that \(Jt_n-JSw_n =\beta _n(Jw_n-JSw_n)\). Thus, it follows from relation (56) that
when \(n \rightarrow \infty \). Consequently, \(||t_n-w_n||\rightarrow 0\) and from (58) and (60), it follows that
This together with (59) implies that
Thus, from Lemma 2.9 and Lemma 4.5(ii) we obtain \(\phi (x^\dagger ,x_n)\rightarrow 0\) and \(x_n\rightarrow x^\dagger \) as \(n \rightarrow \infty \).
Case 2. There exists a subsequence \(\left\{ \phi (x^\dagger ,x_{n_i})\right\} \) of \(\left\{ \phi (x^\dagger ,x_{n})\right\} \) such that \(\phi (x^\dagger ,x_{n_i})\le \phi (x^\dagger ,x_{n_i+1})\) for all \(i\ge 0\).
In that case, it follows from Lemma 2.10 that
where \(\tau (n)=\max \left\{ k\in \mathbb {N}:n_0\le k\le n, ~\phi (x^\dagger ,x_k)\le \phi (x^\dagger ,x_{k+1})\right\} \). Furthermore, the sequence \(\left\{ \tau (n)\right\} _{n\ge n_0}\) is non-decreasing and \(\tau (n)\rightarrow +\infty \) as \(n\rightarrow \infty \). From (55) and Lemma 4.5(i), we can write
when \(n \rightarrow \infty \), which implies that \(\bar{T}_{\tau (n)}\rightarrow 0\) because \(\alpha _{\tau (n)}\rightarrow 0\). From the definition of \(\bar{T}_{\tau (n)}\) and the hypotheses on \(\nu \) and \(\beta _{\tau (n)}\), we obtain, when \(n \rightarrow \infty \),
From (64) and by arguing as in Lemma 4.4 (see 45), we obtain
Since \(\left\{ x_{\tau (n)}\right\} \subset C\) is bounded, there exists a subsequence \(\left\{ x_{\tau (m)}\right\} \) of \(\left\{ x_{\tau (n)}\right\} \) converging weakly to \(p\in C\) such that
From (64) with \(n=m\), Lemma 4.4 and \(x_{\tau (m)}\rightharpoonup p \in C\), we obtain \(p\in F\). Moreover, from (66), we have
where the last inequality follows from \(x^\dagger =\varPi _F(u)\) and Lemma 2.6 (ii). From (64), (65) and by arguing as in Case 1, we also obtain that \(||u_{\tau (n)}-x_{\tau (n)}||\rightarrow 0\) when \(n \rightarrow \infty \) (see (61)). This together with (67) implies that
From Lemma 4.5 (ii), one has for each n that
which, from relation (63), implies that
Thus, it follows from \(\alpha _{\tau (n)}>0\) that \(\phi (x^\dagger ,x_{\tau (n)})\le 2\left\langle u_{\tau (n)}-x^\dagger ,Ju-Jx^\dagger \right\rangle \) for each n. This together with (68) implies that \(\limsup _{n\rightarrow \infty }\phi (x^\dagger ,x_{\tau (n)})\le 0\), i.e., \(\lim _{n\rightarrow \infty }\phi (x^\dagger ,x_{\tau (n)})= 0\). Thus, it follows from (69) and \(\alpha _{\tau (n)}\rightarrow 0\) that \(\phi (x^\dagger ,x_{{\tau (n)}+1})\rightarrow 0\) as \(n \rightarrow \infty \). Since \(\phi (x^\dagger ,x_{n}) \le \phi (x^\dagger ,x_{\tau (n)+1})\) for all \(n\ge n_0\), we have that \(\phi (x^\dagger ,x_{n})\rightarrow 0\) and \(x_n\rightarrow x^\dagger \) as \(n \rightarrow \infty \). This completes the proof of Theorem 4.1. \(\square \)
5 Computational experiments
In this section, we consider two numerical examples, one in finite dimensional space and another in infinite dimensional space, to illustrate the convergence of Algorithms 3.1 and 4.1.
Example 1
Let \(E=\mathfrak {R}^m\) and C be a box defined by \(C=\big \{x\in \mathfrak {R}^m:-1\le x_i\le 1, i=1,\ldots ,m\big \}\). Define the bifunction f on \(C \times C\) by
where \(q\in \mathfrak {R}^m\) and P, Q are two matrices of order m such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. Since \(\phi (x,y)=||x-y||^2\), it follows from the properties of P and Q that the bifunction f satisfies conditions A and D with two \(\phi \)-Lipschitz-type constants \(c_1=c_2=||P-Q||/2\), see, e.g., [19, Section 5].
Let \(g:\mathfrak {R}^m\rightarrow \mathfrak {R}\) be a convex function such that \(\text{ lev }_{\le g}:=\left\{ x\in \mathfrak {R}^m:g(x)\le 0\right\} \ne \emptyset \). Define the subgradient projection relative to g as follows:
where \(z(x)\in \partial g(x)\). Since \(\phi (x,y)=||x-y||^2\), the mapping S is quasi-\(\phi \)-nonexpansive and demiclosed at zero, see, e.g., [4, Lemma 3.1]. Moreover, \(\mathrm{{Fix}}(S)=\text{ lev }_{\le g}\). From Algorithms 3.1 and 4.1, \(\phi (x,y)=||x-y||^2\) and the definition of f, we need to solve the following optimization problem
where \(H_n=2\lambda _n Q+I\) and \(b_n=\lambda _n(Px_n-Qx_n+q)-x_n\) or \(b_n=\lambda _n(Py_n-Qy_n+q)-x_n\). Problem (70) can be solved effectively by the Matlab Optimization Toolbox. Since J is the identity mapping I, all the projections onto C can be rewritten equivalently as quadratic convex optimization problems over the polyhedral convex set C, and they are solved similarly to problem (70). All the programs are written in Matlab 7.0 and run on a PC Desktop Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz, RAM 2.00 GB.
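Outside Matlab, subproblem (70) can also be solved with a plain projected-gradient loop, since \(H_n=2\lambda _n Q+I\) is symmetric positive definite and the projection onto the box C is a componentwise clip. The sketch below is an illustrative alternative, not the authors' implementation.

```python
import numpy as np

def solve_box_qp(H, b, lo=-1.0, hi=1.0, iters=2000):
    """Minimize 0.5*<Hy,y> + <b,y> over the box [lo,hi]^m by projected gradient.

    H is assumed symmetric positive definite (as H_n = 2*lam_n*Q + I is).
    """
    step = 1.0 / np.linalg.eigvalsh(H)[-1]  # stepsize 1/L, L = largest eigenvalue of H
    y = np.zeros(len(b))
    for _ in range(iters):
        # gradient step on the quadratic, then clip back into the box
        y = np.clip(y - step * (H @ y + b), lo, hi)
    return y

# Illustrative data: for diagonal H the box-constrained minimizer is the
# componentwise clip of the unconstrained solution -H^{-1} b.
y_star = solve_box_qp(np.diag([2.0, 2.0]), np.array([-4.0, 4.0]))
```

For the diagonal example the unconstrained minimizer is \((2,-2)\), so the solver returns its clip \((1,-1)\) onto the box.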
In all the experiments below, we work in the Euclidean space \(\mathfrak {R}^{10}~(m=10)\) and choose \(g(x)=\max \left\{ 0,\left\langle c,x\right\rangle +d\right\} \), where \(d\in \mathfrak {R}_{-}\) is generated randomly and uniformly in \([-m,0]\), c is a vector in \(\mathfrak {R}^m\) with entries generated randomly and uniformly in [1, m], and the subgradient \(\bar{c}\in \partial g(x)\) is chosen randomly when \(\left\langle c,x\right\rangle =-d\). The vector q is the zero vector and the two matrices P, Q are generated randomly such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite (see the Notes section). The starting point is \(x_0=(1,1,\ldots ,1)^{\text {T}}\) and the anchor in the Halpern iteration is \(u=x_0\). Note that the normalized duality mapping is \(J=I\) and the constant c in Lemma 2.4 is 1. The fixed parameters are chosen in all the experiments as \(\nu =0.25\), \(\beta _n=0.5\) and \(\gamma =\alpha =0.5\). We study the numerical behavior of the proposed algorithms on a test problem for different parameters \(\alpha _n\) and stepsizes \(\lambda _n\).
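For the function g just described, the subgradient projection S admits the standard closed form \(S(x)=x\) if \(g(x)\le 0\) and \(S(x)=x-\frac{g(x)}{||z(x)||^2}z(x)\) otherwise; this formula and the data below are our illustrative assumptions.

```python
import numpy as np

c = np.array([1.0, 2.0])  # illustrative data for g(x) = max(0, <c,x> + d)
d = -3.0

def g(x):
    return max(0.0, float(c @ x) + d)

def subgrad_projection(x):
    # Standard subgradient projection relative to g (assumed form):
    # identity on the level set lev_{<=0} g, otherwise a step along a subgradient z(x).
    gx = g(x)
    if gx <= 0.0:
        return x
    z = c  # z(x) in the subdifferential of g at x when <c,x> + d > 0
    return x - (gx / float(z @ z)) * z

x_out = subgrad_projection(np.array([2.0, 2.0]))  # infeasible point
x_fix = subgrad_projection(np.array([0.0, 0.0]))  # already in lev_{<=0} g
```

For this affine g, the map sends an infeasible point onto the hyperplane \(\left\langle c,x\right\rangle +d=0\), i.e., it coincides with the metric projection onto the halfspace.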
We use the sequence \(D_n=||x_{n+1}-x_n||^2,~n=0,1,2,\ldots \), to illustrate the convergence of the algorithms. Figures 1 and 2 describe the results for Algorithm 3.1, and Figs. 3 and 4 those for Algorithm 4.1. Only the first 5000 iterations have been taken into account. In these figures, the x-axes represent the number of iterations (# iterations) while the y-axes represent the values of \(D_n\).
In Fig. 1, the performance of Algorithm 3.1 is reported for different choices of the stepsize \(\lambda _n\). From this figure, we see that Algorithm 3.1 behaves best with \(\lambda _n=\frac{1}{2.01c_1}\). It seems that the smaller the stepsize \(\lambda _n\) is, the slower the convergence of Algorithm 3.1 is.
Figure 2 shows the results for Algorithm 3.1 for different choices of the parameter \(\alpha _n\). The convergence appears to be fastest with \(\alpha _n=\frac{1}{n+1}\).
Figure 3 reports the results obtained by Algorithm 4.1 with different stepsizes \(\lambda _n\). In this algorithm, we do not need any information on the two \(\phi \)-Lipschitz-type constants \(c_1\) and \(c_2\). The convergence of Algorithm 4.1 is the same for the stepsizes \(\lambda _n=1\) and \(\lambda _n=0.1\). The sequence \(D_n\) with \(\lambda _n=0.5\) is not monotonically decreasing, while it is for the other values of \(\lambda _n\).
Figure 4 shows the results of Algorithm 4.1 for different values of the parameter \(\alpha _n\). From this figure, we see that the convergence of \(D_n\) with \(\alpha _n=\frac{1}{(n+1)^{0.01}}\) is the best one when \(n \rightarrow \infty \).
Example 2
Let \(E=L^2[0,1]\) with the inner product \(\left\langle x,y\right\rangle =\int _0^1 x(t)y(t){\text {d}}t\) and the induced norm \(||x||^2=\int _0^1 x(t)^2{\text {d}}t\) for all \(x,y\in L^2[0,1]\). Let C be the unit ball \(\text{ B }[0,1]\) and let the equilibrium bifunction f be of the form \(f(x,y)=\left\langle Ax,y-x\right\rangle \) with the operator \(A:E\rightarrow E\) defined by
and the mapping \(S:E\rightarrow E\) is given by
where
We choose here \(g_1(t)\) and \(g_2(t)\) such that \(x^*=0\) is the solution of the problem. We also have here that \(\phi (x,y)=||x-y||^2\) and \(J=I\). Set \(S_i(x)(t)=\int _{0}^1 F_i(t,s)f_i(x(s)){\text {d}}s,~i=1,~2\). The mappings \(S_i,~i=1,2\), are Fréchet differentiable and \(||S_i'(x)h||\le ||x||||h||\) for all \(x,~h\in E\). Thus, a straightforward computation implies that f is pseudomonotone and satisfies the \(\phi \)-Lipschitz-type condition with \(c_1=c_2=1\), and that S is quasi-\(\phi \)-nonexpansive. All the optimization problems in the algorithms reduce to projections onto C, which are explicit. All the integrals in (71), (72) and elsewhere are computed by the trapezoidal rule with stepsize \(\tau =0.001\). Since the solution of the problem is known, we use the sequence \(F_n=||x_n-x^*||^2,~n=0,1,2,\ldots \), to show the behavior of the algorithms. The starting point is \(x_0(t)=t+0.5\cos t\). The numerical results for some different parameters are described in Figs. 5 and 6 for Algorithms 3.1 and 4.1, respectively.
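The \(L^2[0,1]\) computations can be mimicked on a uniform grid. The sketch below (our illustrative discretization, with the trapezoidal rule and stepsize \(\tau =0.001\) as in the text) shows the explicit metric projection onto the unit ball \(C=B[0,1]\) and the starting point \(x_0(t)=t+0.5\cos t\).

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)  # uniform grid on [0,1] with stepsize tau = 0.001
tau = t[1] - t[0]

def l2_norm(x):
    # trapezoidal approximation of (int_0^1 x(t)^2 dt)^(1/2)
    w = np.full_like(x, tau)
    w[0] = w[-1] = tau / 2.0
    return float(np.sqrt(np.sum(w * x * x)))

def proj_ball(x):
    # metric projection onto the unit ball of L^2[0,1] (J = I here):
    # leave x unchanged if ||x|| <= 1, otherwise rescale to the sphere
    nx = l2_norm(x)
    return x if nx <= 1.0 else x / nx

x0 = t + 0.5 * np.cos(t)  # starting point of the experiments
```

Here \(||x_0||\approx 0.95<1\), so \(x_0\) already lies in C and the projection leaves it unchanged, while any scaled-up copy is radially projected back onto the unit sphere.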
From the above results, we see that the execution times of the Halpern linesearch algorithm are greater than those of the Halpern extragradient algorithm. This is not surprising because, at each iteration of the linesearch algorithm, we have to find the smallest integer steplength, which is time consuming. Of course, the advantage of Algorithm 4.1 is that it does not require the \(\phi \)-Lipschitz-type condition on the cost bifunction.
6 Conclusions
In this paper, we have proposed two strongly convergent algorithms for solving a pseudomonotone equilibrium problem involving a fixed point problem for a quasi-\(\phi \)-nonexpansive mapping in Banach spaces, namely the Halpern extragradient method and the Halpern linesearch method. The convergence theorems have been established with or without the \(\phi \)-Lipschitz-type condition imposed on the bifunction. These results can support the design and analysis of practical algorithms and the improvement of existing methods for solving equilibrium problems in Banach spaces. Several numerical experiments on two test problems with different stepsizes and parameters have also been performed to illustrate the convergence of the proposed algorithms.
Notes
We randomly choose \(\lambda _{1k}\in [-m,0],~\lambda _{2k}\in [0,m],~ k=1,\ldots ,m\). We set \(\widehat{Q}_1\), \(\widehat{Q}_2\) as two diagonal matrices with eigenvalues \(\left\{ \lambda _{1k}\right\} _{k=1}^m\) and \(\left\{ \lambda _{2k}\right\} _{k=1}^m\), respectively. Then, we construct a positive semidefinite matrix Q and a negative semidefinite matrix T using random orthogonal matrices together with \(\widehat{Q}_2\) and \(\widehat{Q}_1\), respectively. Finally, we set \(P=Q-T\).
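This construction can be sketched as follows; the seed and the use of QR factorizations of Gaussian matrices to obtain random orthogonal matrices are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10
lam1 = rng.uniform(-m, 0.0, m)  # eigenvalues of the negative semidefinite part
lam2 = rng.uniform(0.0, m, m)   # eigenvalues of the positive semidefinite part

# Random orthogonal matrices from QR factorizations of Gaussian matrices
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))

Q = U @ np.diag(lam2) @ U.T  # symmetric positive semidefinite
T = V @ np.diag(lam1) @ V.T  # symmetric negative semidefinite
P = Q - T                    # then Q - P = T is negative semidefinite, as required
```

By construction Q is symmetric with nonnegative spectrum and \(Q-P=T\) has nonpositive spectrum, which is exactly the condition imposed on the test problem.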
References
Alber, Ya.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Applied Mathematics, vol. 178, pp. 15–50. Dekker, New York (1996)
Alber, Ya.I., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht (2006)
Anh, P.N., Hieu, D.V.: Multi-step algorithms for solving EPs. Math. Model. Anal. 23, 453–472 (2018)
Bauschke, H.H., Chen, J.: A projection method for approximating fixed points of quasi-nonexpansive mappings without the usual demiclosedness condition. J. Nonlinear Convex Anal. 15, 129–135 (2014)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)
Butnariu, D., Reich, S., Zaslavski, A.J.: Asymptotic behavior of relatively nonexpansive operators in Banach spaces. J. Appl. Anal. 7, 151–174 (2001)
Bruck, R.E., Reich, S.: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 3, 459–470 (1977)
Chidume, C.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Springer, London (2009)
Ceng, L.C., Yao, J.C.: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 214, 186–201 (2008)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Reich, S.: Iterations of paracontractions and firmly nonexpansive operators with applications to feasibility and optimization. Optimization 37, 323–339 (1996)
Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Mathematics and Its Applications, vol. 62. Kluwer Academic Publishers, Dordrecht (1990)
Cohen, G.: Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. 32, 277–305 (1980)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2002)
Flam, S.D., Antipin, A.S.: Equilibrium programming and proximal-like algorithms. Math. Program. 78, 29–41 (1997)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Halpern, B.: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73(6), 957–961 (1967)
Hieu, D.V.: An inertial-like proximal algorithm for equilibrium problems. Math. Method Oper. Res. (2018). https://doi.org/10.1007/s00186-018-0640-6
Hieu, D.V.: Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 21(4), 478–501 (2016)
Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
Hieu, D.V.: An extension of hybrid method without extrapolation step to equilibrium problems. J. Ind. Manag. Optim. 13, 1723–1741 (2017)
Hieu, D.V.: Hybrid projection methods for equilibrium problems with non-Lipschitz type bifunctions. Math. Methods Appl. Sci. 40, 4065–4079 (2017)
Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 58, 251–261 (2009)
Kamimura, S., Takahashi, W.: Strong convergence of proximal-type algorithm in Banach space. SIAM J. Optim. 13, 938–945 (2002)
Konnov, I.V.: Equilibrium Models and Variational Inequalities. Elsevier, Amsterdam (2007)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 12, 747–756 (1976)
Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Maingé, P.E.: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 205, 501–506 (2010)
Mastroeni, G.: On Auxiliary Principle for Equilibrium Problems. Vol. 68 of the Series Nonconvex Optimization and Its Applications, pp. 289–298 (2003)
Mastroeni, G.: Gap function for equilibrium problems. J. Glob. Optim. 27(4), 411–426 (2003)
Matsushita, S.-Y., Takahashi, W.: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 134(2), 257–266 (2005)
Moudafi, A.: Proximal point algorithm extended to equilibrum problem. J. Nat. Geometry 15, 91–100 (1999)
Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. TMA 18(12), 1159–1166 (1992)
Muu, L.D., Quoc, T.D.: Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash–Cournot equilibrium model. J. Optim. Theory Appl. 142(1), 185–204 (2009)
Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 160, 809–831 (2014)
Quoc, T.D., Muu, L.D., Hien, N.V.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Qin, X., Cho, Y.J., Kang, S.M.: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 225, 20–30 (2009)
Reich, S.: The book review of Geometry of Banach spaces, duality mappings and nonlinear problems by Ioana Cioranescu. Bull. Am. Math. Soc. 26, 367–370 (1992)
Reich, S.: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 75, 287–292 (1980)
Reich, S., Sabach, S.: Three strong convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 568, 225–240 (2012)
Reich, S.: Constructive techniques for accretive and monotone operators. In: Applied Nonlinear Analysis, pp. 335–345. Academic Press, New York (1979)
Strodiot, J.J., Nguyen, T.T.V., Nguyen, V.H.: A new class of hybrid extragradient algorithms for solving quasi-equilibrium problems. J. Glob. Optim. 56(2), 373–397 (2013)
Strodiot, J.J., Vuong, P.T., Nguyen, T.T.V.: A class of shrinking projection extragradient methods for solving non-monotone equilibrium problems in Hilbert spaces. J. Glob. Optim. 64, 159–178 (2016)
Takahashi, W., Zembayashi, K.: Strong and weak convergence theorems for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 45–57 (2009)
Takahashi, W., Zembayashi, K.: Strong convergence theorem by a new hybrid method for equilibrium problems and relatively nonexpansive mappings. Fixed Point Theory Appl. 2008, Article ID 528476 (2008). https://doi.org/10.1155/2008/528476
Tiel, J.V.: Convex Analysis: An Introductory Text. Wiley, New York (1984)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605–627 (2012)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 64, 429–451 (2015)
Wang, Y.Q., Zeng, L.C.: Hybrid projection method for generalized mixed equilibrium problems, variational inequality problems, and fixed point problems in Banach spaces. Appl. Math. Mech. Engl. Ed. 32(2), 251–264 (2011)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)
Xu, H.K.: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 65, 109–113 (2002)
Zegeye, H., Shahzad, N.: A hybrid scheme for finite families of equilibrium, variational inequality and fixed point problems. Nonlinear Anal. 74, 263–272 (2011)
Acknowledgements
The authors would like to thank the Associate Editor and two anonymous referees for their valuable comments and suggestions which helped us very much in improving the original version of this paper. The first author is supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under the project: 101.01-2017.315
Cite this article
Van Hieu, D., Strodiot, J.J. Strong convergence theorems for equilibrium problems and fixed point problems in Banach spaces. J. Fixed Point Theory Appl. 20, 131 (2018). https://doi.org/10.1007/s11784-018-0608-4
Keywords
- Equilibrium problem
- fixed point problem
- extragradient method
- linesearch method
- monotone bifunction
- pseudomonotone bifunction
- quasi nonexpansive mapping