Abstract
In this article, using a hybrid extragradient method, we introduce a new iterative process for approximating a common element of the set of solutions of an equilibrium problem and a common zero of a finite family of monotone operators in Hadamard spaces. We also give a numerical example to solve a nonconvex optimization problem in an Hadamard space to support our main result.
1 Preliminaries
Let (X, d) be a metric space. A geodesic path joining x to y in X is a mapping c from a closed interval \( [0,l] \subseteq \mathbb {R}\) to X such that c(0) = x, \( c(l) = y\) and \(d(c(s), c(t)) = |s - t|\) for all \(s, t \in [0,l]\). The image of c is called a geodesic segment joining x and y and is denoted by [x, y]. We denote the unique point \(z \in [x, y]\) such that \(d(x, z) = t d(x, y)\) and \(d(y, z) = (1 -t)d(x, y)\) by \((1 -t)x \oplus t y\), where \(0 \le t \le 1.\) The metric space (X, d) is called a geodesic space if any two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic segment joining x and y for each \(x, y \in X\). A subset K of a uniquely geodesic space X is said to be convex when for any two points \(x, y \in K,\) the geodesic joining x and y is contained in K. A geodesic space (X, d) is a CAT(0) space if it satisfies the (CN) inequality:
$$d^{2}((1-t)x \oplus ty, z) \le (1-t)d^{2}(x,z)+td^{2}(y,z)-t(1-t)d^{2}(x,y),$$
for all \(x,y,z\in X\) and \(t\in [0,1].\) In particular, if x, y, z, w are points in X and \(t\in [0,1],\) then we have
$$d((1-t)x \oplus ty, (1-t)z \oplus tw) \le (1-t)d(x,z)+td(y,w).$$
It is well known that a CAT(0) space is a uniquely geodesic space. A complete CAT(0) space is called an Hadamard space. The class of Hadamard spaces comprises Hilbert spaces, complete simply connected Riemannian manifolds of nonpositive sectional curvature (for instance, classic hyperbolic spaces and the manifold of positive definite matrices), Euclidean buildings, CAT(0) complexes, nonlinear Lebesgue spaces, the Hilbert ball and many other spaces (see [1, 4, 18]).
Let K be a nonempty closed convex subset of an Hadamard space X. It is known [8, Proposition 2.4] that for any \(x\in X\) there exists a unique point \(x_{0}\in K\) such that \(d(x, x_{0}) =\min _{y\in K}d(x,y)\). The mapping \(P_{K}: X \rightarrow K\) defined by \(P_{K}x=x_{0}\) is called the metric projection from X onto K. The following theorem summarizes the basic properties of the projection.
Theorem 1
[1] Let X be an Hadamard space and \(K\subset X\) be a closed convex set. Then:
(i) For every \(x\in X\), there exists a unique point \(P_{K}(x)\in K\) such that
$$d(x,P_{K}(x))=\min _{y\in K}d(x,y).$$
(ii) If \(x\in X \) and \(y\in K\), then
$$d^{2}(x,P_{K}(x))+d^{2}(P_{K}(x),y)\le d^{2}(x,y).$$
(iii) The mapping \(P_{K}\) is a nonexpansive mapping from X onto K, that is,
$$d(P_{K}(x),P_{K}(y))\le d(x,y)$$
for all \(x,y\in X\).
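To make the projection concrete, here is a minimal numerical sketch in the Euclidean plane (a flat Hadamard space), checking the nonexpansivity property (iii) for the closed unit ball; the function names are ours, not from the paper:

```python
import math
import random

def proj_ball(x, r=1.0):
    # Metric projection of x in R^2 onto the closed ball of radius r,
    # a closed convex subset of the (flat) Hadamard space R^2.
    n = math.hypot(x[0], x[1])
    return x if n <= r else (x[0] * r / n, x[1] * r / n)

def d(x, y):
    return math.hypot(x[0] - y[0], x[1] - y[1])

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 3), random.uniform(-3, 3))
    # Property (iii): P_K is nonexpansive
    assert d(proj_ball(x), proj_ball(y)) <= d(x, y) + 1e-12
```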
Definition 1
A function \(f :X \rightarrow (-\infty , +\infty ]\) is called
(1) convex iff
$$f((1-t)x \oplus ty) \le (1-t)f(x) + tf(y), \,\,\,\,\forall x,y \in X\,\, and\,\, t \in (0,1);$$
(2) strictly convex iff
$$f((1-t)x \oplus ty) < (1-t)f(x) + tf(y),\,\,\,\,\forall x,y \in X,x\ne y\,\, and\,\,t \in (0,1).$$
It is easy to see that each strictly convex function has at most one minimizer on X.
Let \(\{x_{n}\}\) be a bounded sequence in an Hadamard space X. For \(x\in X\), we set
$$r(x,\{x_{n}\})=\limsup _{n\rightarrow \infty }d(x,x_{n}).$$
The asymptotic radius \(r(\{x_{n}\})\) of \(\{x_{n}\}\) is defined by:
$$r(\{x_{n}\})=\inf \{r(x,\{x_{n}\}):\, x\in X\},$$
and the asymptotic center \(A(\{x_{n}\})\) of \(\{x_{n}\}\) is the set
$$A(\{x_{n}\})=\{x\in X:\, r(x,\{x_{n}\})=r(\{x_{n}\})\}.$$
It is known that in an Hadamard space, \(A(\{x_{n}\})\) consists of exactly one point [12]. A sequence \(\{x_{n}\}\) in an Hadamard space X is said to be \(\Delta \)-convergent to \(x \in X\) if x is the unique asymptotic center of every subsequence of \(\{x_{n}\}\). We denote \(\Delta \)-convergence in X by \(\xrightarrow {\Delta }\) and the metric convergence by \(\rightarrow \). It is well known that every bounded sequence in an Hadamard space X has a \(\Delta \)-convergent subsequence (see [23]).
Lemma 1
[13] Let K be a closed and convex subset of an Hadamard space X, \(T : K \rightarrow K\) be a nonexpansive mapping and \(\{x_{n}\}\) be a bounded sequence in K such that \(\displaystyle \lim _{n\rightarrow \infty } d(x_{n}, Tx_{n}) = 0\) and \(x_{n} \xrightarrow {\Delta } x\). Then \(x =Tx\).
The following lemma is a generalization of the Opial lemma to Hadamard spaces (see [33]).
Lemma 2
Let (X, d) be an Hadamard space and \(\{x_{n}\}\) be a sequence in X. If there exists a nonempty subset F of X satisfying:
(i) For every \(z \in F\), \(\displaystyle \lim _{n\rightarrow \infty }d(x_{n},z) \, exists.\)
(ii) If a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\)\(\Delta \)-converges to \(x\in X\), then \(x\in F.\)
Then, there exists \(p\in F\) such that \(\{x_{n}\}\)\(\Delta \)-converges to p in X.
Berg and Nikolaev in [2] introduced the concept of quasilinearization in a metric space X. Let us formally denote a pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Then quasilinearization is a map \(\langle \cdot ,\cdot \rangle : (X \times X) \times (X \times X) \rightarrow \mathbb {R}\) defined by:
$$\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2}\left( d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d)\right) ,$$
for all \(a, b,c,d \in X.\) It is easily seen that \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\langle \overrightarrow{cd},\overrightarrow{ab}\rangle \), \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =-\langle \overrightarrow{ba},\overrightarrow{cd}\rangle \) and
\(\langle \overrightarrow{ax},\overrightarrow{cd}\rangle +\langle \overrightarrow{xb},\overrightarrow{cd}\rangle =\langle \overrightarrow{ab},\overrightarrow{cd}\rangle ,\) for all \( a, b,c,d,x \in X \). We say that X satisfies the Cauchy–Schwarz inequality if
$$\langle \overrightarrow{ab},\overrightarrow{cd}\rangle \le d(a,b)\,d(c,d)$$
for all \( a, b, c, d \in X \). It is known [2] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality.
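In the Euclidean plane, quasilinearization reduces to the usual inner product \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\langle b-a,\, d-c\rangle \). A short numerical check of this identity and of the Cauchy–Schwarz inequality (function names are ours):

```python
import math
import random

def d2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def quasi(a, b, c, e):
    # <ab, ce> := (d^2(a,e) + d^2(b,c) - d^2(a,c) - d^2(b,e)) / 2
    return 0.5 * (d2(a, e) + d2(b, c) - d2(a, c) - d2(b, e))

random.seed(1)
for _ in range(1000):
    a, b, c, e = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    dot = (b[0] - a[0]) * (e[0] - c[0]) + (b[1] - a[1]) * (e[1] - c[1])
    assert abs(quasi(a, b, c, e) - dot) < 1e-9   # agrees with <b - a, e - c>
    # Cauchy-Schwarz: <ab, ce> <= d(a,b) d(c,e)
    assert quasi(a, b, c, e) <= math.sqrt(d2(a, b) * d2(c, e)) + 1e-9
```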
Lemma 3
[11] Let C be a nonempty closed convex subset of a CAT(0) space X, \(x\in X\) and \(u\in C\). Then \(u=P_{C}x\) if and only if \(\langle \overrightarrow{xu},\overrightarrow{uy}\rangle \ge 0 \) for all \(y\in C.\)
Kakavandi and Amini [19] have introduced the concept of dual space of an Hadamard space X, based on a work of Berg and Nikolaev [2], as follows.
Consider the map \(\Theta : \mathbb {R}\times X \times X \rightarrow C(X,\mathbb {R})\) defined by:
$$\Theta (t,a,b)(x)=t\langle \overrightarrow{ab},\overrightarrow{ax}\rangle \,\,\,\,(t\in \mathbb {R},\,\, a,b,x\in X),$$
where \(C(X,\mathbb {R})\) is the space of all continuous real-valued functions on X. Then the Cauchy–Schwarz inequality implies that \(\Theta (t,a,b)\) is a Lipschitz function with Lipschitz semi-norm \(L(\Theta (t,a,b)) = |t|d(a, b)\), for all \(t\in \mathbb {R}\) and \(a,b\in X,\) where \(L(\varphi )=\sup \{\frac{\varphi (x)-\varphi (y)}{d(x,y)}:\, x, y \in X,\, x\ne y\}\) is the Lipschitz semi-norm of any function \(\varphi : X\rightarrow \mathbb {R}. \) A pseudometric D on \(\mathbb {R}\times X \times X \) is defined by
$$D((t,a,b),(s,c,d))=L(\Theta (t,a,b)-\Theta (s,c,d)).$$
For an Hadamard space (X, d), the pseudometric space \((\mathbb {R}\times X \times X, D)\) can be considered as a subspace of the pseudometric space of all real-valued Lipschitz functions \((\mathrm{Lip}(X,\mathbb {R}), L)\). By [19, Lemma 2.1], \(D((t, a, b), (s,c, d)) = 0\) if and only if \(t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle =s\langle \overrightarrow{cd},\overrightarrow{xy}\rangle \) for all \(x, y \in X.\) Thus, D induces an equivalence relation on \(\mathbb {R} \times X \times X \), where the equivalence class of (t, a, b) is
$$[t\overrightarrow{ab}]:=\{(s,c,d):\, t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle =s\langle \overrightarrow{cd},\overrightarrow{xy}\rangle ,\,\,\forall x,y \in X\}.$$
The set \(X^{*} := \{[t\overrightarrow{ab}]:\, (t,a,b)\in \mathbb {R}\times X \times X\}\) is a metric space with the metric \(D([t\overrightarrow{ab}], [s\overrightarrow{cd}]) := D((t, a, b), (s, c, d))\), which is called the dual space of (X, d). It is clear that \([\overrightarrow{aa}]=[\overrightarrow{bb}]\) for all \(a,b\in X.\) Fixing \(o\in X,\) we write \(\mathbf 0 =[\overrightarrow{oo}]\) for the zero of the dual space. Note that \(X^{*}\) acts on \(X\times X\) by:
$$\langle [t\overrightarrow{ab}],\overrightarrow{xy}\rangle :=t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle \,\,\,\,(t\in \mathbb {R},\,\, a,b,x,y\in X).$$
Let X be an Hadamard space with dual \(X^{*}\) and let \(A : X\rightrightarrows X^{*} \) be a multivalued operator with domain \(D(A) := \{x\in X, Ax\ne \emptyset \}\), range \(R(A) :=\bigcup _{x\in X}Ax,\, A^{-1}(x^{*})=\{x\in X, x^{*}\in Ax\} \) and graph \(gra(A) := \{(x, x^{*})\in X\times X^{*}, x\in D(A), x^{*}\in Ax\}\) .
Definition 2
[22] Let X be an Hadamard space with dual \(X^{*}\). The multivalued operator \(A:X\rightrightarrows X^{*}\) is said to be monotone if the inequality \(\langle x^{*}-y^{*},\overrightarrow{yx}\rangle \ge 0\) holds for every \((x,x^{*})\), \((y,y^{*})\in gra(A).\)
A monotone operator \(A:X\rightrightarrows X^{*}\) is maximal if there exists no monotone operator \(B:X\rightrightarrows X^{*}\) such that gra(B) properly contains gra(A) (that is, for any \((y,y^{*}) \in X \times X^{*},\) the inequality \(\langle x^{*}-y^{*}, \overrightarrow{yx } \rangle \ge 0\) for all \((x, x^{*}) \in \textit{gra}(A)\) implies that \(y^{*} \in Ay\) ).
Definition 3
[22] Let X be an Hadamard space with dual \(X^{*},\) let \(\lambda >0\) and let \(A: X\rightrightarrows X^{*}\) be a multivalued operator. The resolvent of A of order \(\lambda \) is the multivalued mapping \(J_{\lambda }^{A} :X\rightrightarrows X\) defined by \(J_{\lambda }^{A}(x):=\{z\in X:\, [\frac{1}{\lambda }\overrightarrow{zx}]\in Az\}.\) Indeed,
$$J_{\lambda }^{A}=\left( \overrightarrow{oI}+\lambda A\right) ^{-1}\circ \overrightarrow{oI},$$
where o is an arbitrary member of X and \(\overrightarrow{oI}(x):=[\overrightarrow{ox}]\). It is obvious that this definition is independent of the choice of o.
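For orientation, in the Hadamard space \(\mathbb {R}\) the resolvent of \(A=\partial f\) reduces to the classical proximity operator \((I+\lambda \partial f)^{-1}\). A minimal sketch for \(f(t)=|t|\), whose resolvent is soft-thresholding; this example is ours, not from the paper:

```python
def prox_abs(x, lam):
    # Resolvent J_lam of A = subdifferential of f(t) = |t| on R:
    # z = J_lam(x) is characterized by (x - z)/lam being a subgradient of f at z.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

z = prox_abs(2.5, 1.0)   # z = 1.5, and (2.5 - 1.5)/1.0 = 1 is the subgradient at 1.5
```

Note that the fixed points of the resolvent are exactly the zeros of A: here \(prox\_abs(0,\lambda )=0\) and \(0\in \partial |\cdot |(0)=[-1,1]\).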
Let K be a nonempty subset of an Hadamard space X and \(T:K \rightarrow K\) be a mapping. The fixed point set of T is denoted by F(T), that is, \(F(T)=\{x \in K: x=Tx\}.\)
Theorem 2
[22] Let X be a CAT(0) space with dual \(X^{*}\) and let \(A: X\rightrightarrows X^{*}\) be a multivalued mapping. Then:
(i) For any \(\lambda >0\), \(R(J_{\lambda }^{A})\subset D(A)\) and \(F(J_{\lambda }^{A})= A^{-1}(\mathbf 0 ).\)
(ii) If A is monotone, then \(J_{\lambda }^{A}\) is single-valued on its domain and
$$d^{2}(J_{\lambda }^{A}x,J_{\lambda }^{A}y)\le \langle \overrightarrow{J_{\lambda }^{A}x\,J_{\lambda }^{A}y},\overrightarrow{xy}\rangle ,$$
in particular \(J_{\lambda }^{A}\) is a nonexpansive mapping.
(iii) If A is monotone and \(0<\lambda \le \mu ,\) then \(d^{2}(J_{\lambda }^{A}x,J_{\mu }^{A}x)\le \frac{\mu -\lambda }{\mu +\lambda } d^{2}(x,J_{\mu }^{A}x),\) which implies that \(d(x,J_{\lambda }^{A}x)\le 2d(x,J_{\mu }^{A}x).\)
It is well known that if T is a nonexpansive mapping on a subset K of a CAT(0) space X, then F(T) is closed and convex. Thus, if A is a monotone operator on a CAT(0) space X, then, by parts (i) and (ii) of Theorem 2, \(A^{-1}(\mathbf 0 )\) is closed and convex. Also, by using part (ii) of this theorem, for all \(u\in F(J_{\lambda }^{A})\) and \(x\in D (J_{\lambda }^{A})\), we have
$$d^{2}(u,J_{\lambda }^{A}x)+d^{2}(J_{\lambda }^{A}x,x)\le d^{2}(u,x).$$
We say that \(A: X\rightrightarrows X^{*}\) satisfies the range condition if \(D(J_{\lambda }^{A}) =X\) for every \(\lambda >0\). It is known that if A is a maximal monotone operator on a Hilbert space H, then \(R(I + \lambda A) = H\) for all \(\lambda > 0\); thus, every maximal monotone operator on a Hilbert space satisfies the range condition. Also, as shown in [25], if A is a maximal monotone operator on an Hadamard manifold, then A satisfies the range condition. Some examples of monotone operators in Hadamard spaces satisfying the range condition are presented in [22].
Definition 4
A bifunction \(f: X\times X \rightarrow \mathbb {R}\) is said to be:
(1) monotone if
$$f(x,y)\,+\,f(y,x)\,\le 0, \,\,\,\,\forall x,y \, \in \,X;$$
(2) pseudo-monotone if for every \(x,y \in X,\) \(f(x,y)\,\ge \,0\) implies \(f(y,x)\,\le \,0.\)
The following conditions on the bifunction f are essential and we will need them in the next section:
\(B_{1}\): \(f(x,\cdot ): X \rightarrow \mathbb {R}\) is convex and lower semicontinuous for all \(x \in X.\)
\(B_{2}\): \(f(\cdot ,y)\) is \(\Delta \)-upper semicontinuous for all \(y \in X.\)
\(B_{3}\): f is Lipschitz-type continuous, i.e. there exist two positive constants \(c_{1}\) and \(c_{2}\) such that
$$f(x,y)+f(y,z)\ge f(x,z)-c_{1}d^{2}(x,y)-c_{2}d^{2}(y,z), \,\,\,\,\forall x,y,z \in X.$$
\(B_{4}\): f is pseudo-monotone.
Let (X, d) be an Hadamard space. Equilibrium problems were originally studied in [3] as a unifying class of variational problems. Let K be a nonempty closed convex subset of X and \(f:K\times K\rightarrow \mathbb {R}\) be a bifunction. The equilibrium problem is to find \( x \in K\) such that
$$f(x,y)\ge 0, \,\,\,\,\forall y \in K.$$
Denote the set of solutions of problem (6) by EP(f, K). Associated with the primal form EP(f, K), its dual form is to find \(x\in K\) such that
$$f(y,x)\le 0, \,\,\,\,\forall y \in K.$$
Let us denote by DEP(f, K) the solutions set of problem (7).
Lemma 4
Let f be a bifunction satisfying conditions \(B_{1}, B_{2}\) and \(B_{4}.\) Then EP(f, K) is closed and convex.
Proof
Take \(x^{*} \in {DEP(f, K)}.\) Let
Using (2), we have
Applying (8), we get \(y_{n}\rightarrow x^{*}.\) Using \(B_{3}\), we get \(f(y_{n}, y_{n})\ge 0\). Since f is pseudo-monotone, we have \(f(y_{n}, y_{n})= 0.\) Therefore, we have
because \(f(y_{n},x^{*})\le 0\) \((x^{*} \in DEP(f,K))\). Therefore \(f(y_{n}, y)\ge 0.\) Letting n tend to infinity and using \(B_{2}\), we get \(x^{*} \in EP(f,K).\) Thus \(DEP(f ,K)\subseteq EP(f, K).\) Using \(B_{4},\) we get \(EP(f ,K) = DEP(f, K).\) Since \(f(x,\cdot )\) is convex on K, DEP(f, K) is convex and hence EP(f, K) is convex. The closedness of EP(f, K) follows from \(B_{2}\).
Equilibrium problems and their generalizations have been important tools for solving problems arising in linear and nonlinear programming, variational inequalities, complementarity problems, optimization problems and fixed point problems, and have been widely applied in physics, structural analysis, management science, economics, etc. (see, for example, [7, 15, 29, 30]). An extragradient method for equilibrium problems in a Hilbert space has been studied in [31]. It has the following form:
$$\left\{ \begin{array}{l} y_{n}=\mathop {\mathrm{argmin}}\limits _{y\in K}\left\{ \lambda f(x_{n},y)+\frac{1}{2}\Vert x_{n}-y\Vert ^{2}\right\} ,\\ x_{n+1}=\mathop {\mathrm{argmin}}\limits _{y\in K}\left\{ \lambda f(y_{n},y)+\frac{1}{2}\Vert x_{n}-y\Vert ^{2}\right\} . \end{array}\right. $$
Under certain assumptions, the weak convergence of the sequence \(\{x_{n}\}\) to a solution of the equilibrium problem has been established. In recent years, algorithms designed to solve equilibrium problems, variational inequalities and minimization problems have been extended from the Hilbert space framework to the more general setting of Riemannian manifolds, especially Hadamard manifolds and the Hilbert unit ball (see, for example, [5, 9, 10, 16, 25, 28, 31, 32]). This extension is motivated by the fact that several nonconvex problems become convex in this more general setting. Equilibrium problems in Hadamard spaces were recently investigated in [18, 20, 21, 24]. In [20], the authors studied \(\Delta \)-convergence and strong convergence of the sequence generated by the extragradient method for pseudo-monotone equilibrium problems in Hadamard spaces.
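As a toy illustration of the two-step extragradient iteration of [31], consider the Hilbert space \(\mathbb {R}^{2}\), \(K=\mathbb {R}^{2}\) and the bifunction \(f(x,y)=\langle F(x), y-x\rangle \) for a monotone map F (our choice, not from the paper); the two argmin steps then reduce to explicit gradient steps:

```python
def F(x):
    # A monotone (in fact strongly monotone) linear map on R^2.
    return (x[0] + x[1], -x[0] + x[1])

def extragradient_step(x, lam=0.3):
    # y_n     = argmin_y { lam*f(x_n, y) + (1/2)|x_n - y|^2 } = x_n - lam*F(x_n)
    fx = F(x)
    y = (x[0] - lam * fx[0], x[1] - lam * fx[1])
    # x_{n+1} = argmin_y { lam*f(y_n, y) + (1/2)|x_n - y|^2 } = x_n - lam*F(y_n)
    fy = F(y)
    return (x[0] - lam * fy[0], x[1] - lam * fy[1])

x = (1.0, -2.0)
for _ in range(200):
    x = extragradient_step(x)
# x approaches (0, 0), the unique solution of this equilibrium problem
```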
Kumam and Chaipunya [24] established the existence of an equilibrium point of a bifunction satisfying some convexity, continuity, and coercivity assumptions, and they also established some fundamental properties of the resolvent of the bifunction. Furthermore, they proved that the proximal point algorithm \(\Delta \)-converges to an equilibrium point of a monotone bifunction in an Hadamard space.
Very recently Iusem and Mohebbi [18] proposed Extragradient Method with Linesearch (EML) for solving equilibrium problems of pseudo-monotone type in Hadamard spaces. They proved \(\Delta \)-convergence of the generated sequence to a solution of the equilibrium problem under standard assumptions on the bifunction. Also, they performed a minor modification on the EML algorithm which ensures strong convergence of the generated sequence to a solution of EP(f, K). Khatibzadeh and Mohebbi [21] studied the existence of solutions of equilibrium problems associated with pseudo-monotone bifunctions with suitable conditions on the bifunctions in Hadamard spaces and introduced the resolvent of a bifunction in Hadamard spaces. They also proved \(\Delta \)-convergence of the sequence generated by the proximal point algorithm to an equilibrium point of the pseudo-monotone bifunction and the strong convergence under additional assumptions on the bifunction in Hadamard spaces.
One of the most important problems in monotone operator theory is approximating a zero of a monotone operator. Martinet [27] introduced one of the most popular methods for approximating a zero of a monotone operator in Hilbert spaces, which is called the proximal point algorithm. Very recently, Khatibzadeh and Ranjbar [22] generalized monotone operators and their resolvents to Hadamard spaces by using duality theory (see also [17, 34]).
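For intuition, Martinet's proximal point algorithm iterates the resolvent, \(x_{n+1}=J_{\lambda }^{A}x_{n}\), and converges to a zero of A. A one-line sketch for the monotone operator \(A(x)=x\) on \(\mathbb {R}\) (our toy choice):

```python
def J(x, lam=1.0):
    # Resolvent (I + lam*A)^(-1) of the monotone operator A(x) = x on R.
    return x / (1.0 + lam)

x = 10.0
for _ in range(100):
    x = J(x)   # proximal point algorithm: x_{n+1} = J_lam(x_n)
# x converges to 0, the unique zero of A
```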
Reich and Salinas [36] established metric convergence theorems for infinite products of possibly discontinuous operators defined on Hadamard spaces.
In this article, motivated and inspired by the above results (see also [35]), we propose an iterative algorithm for finding a common element of the set of solutions of an equilibrium problem and a common zero of a finite family of monotone operators in Hadamard spaces. The \(\Delta \)-convergence and strong convergence theorems are established under suitable assumptions. We also give a numerical example, solving a nonconvex optimization problem in an Hadamard space, to support our main result.
2 \(\Delta \)-convergence
In this section, in order to approximate a common zero of a finite family of monotone operators and a point of EP(f, K) in an Hadamard space X, we introduce Algorithm (10). Let K be a nonempty closed convex subset of X, let f be a bifunction satisfying \(B_{1},B_{2},B_{3},B_{4}\) and let \(A_{i} : X \rightrightarrows X^{*}\) \((1 \le i \le N)\) be N multivalued monotone operators. Let \(\{x_{n}\}\) be the sequence generated by:
where \(x_{0}\in K, 0<\, \alpha \le \lambda _{k}\,\le \,\beta <\min \{\dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\},\)\(\{\gamma _{n}^{i}\} \subset (0,\infty ), \,\displaystyle \liminf _{n\rightarrow \infty }\, \gamma _{n}^{i} >0.\) The proof of the following lemma is similar to that of [20, Lemma 2.1] and thus omitted.
Lemma 5
Let \(\{x_{n}\}, \{y_{n}\}\) and \(\{z_{n}\}\) be the sequences generated by Algorithm (10) and let \(x^{*}\in EP(f,K)\cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\). Then
Remark 1
Similar to the proof of [20, Lemma 2.1], using Lemma 5, we get
and
Theorem 3
Let f be a bifunction satisfying \(B_{i}\) \((1\le i\le 4)\) and let \(A_{1},A_{2}, \ldots ,A_{N} : X \rightrightarrows X^{*}\) be N multivalued monotone operators that satisfy the range condition. In addition, if \(\Omega :=EP(f,K)\cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\ne \emptyset \), then the sequence \(\{x_{n}\}\) produced by (10) \(\Delta \)-converges to a point of \(\Omega \).
Proof
Let \(x^{*} \in \Omega .\) Since \(J_{\lambda }^{A}\) is a nonexpansive mapping, we have
By Lemma 5, we have \(d(x_{n+1},x^{*})\,\le \,d(z_{n},x^{*})\). So
Therefore \(\lim _{n\rightarrow \infty } d(x_{n},x^{*})\) exists and, as a result, \(\{x_{n}\}\) is bounded. For all \(1\le i \le N\), we define
$$S_{n}^{i}=J_{\gamma _{n}^{i}}^{A_{i}}\circ \cdots \circ J_{\gamma _{n}^{1}}^{A_{1}}.$$
So \(z_{n}=S_{n}^{N}x_{n}\), and we set \(S_{n}^{0}= I\), where I is the identity operator. Therefore
for all \(1\le \,i \,\le \,N\). It follows from \(d^{2}(x_{n+1},x^{*})\, \le \,d^{2}(z_{n},x^{*})\) that
This implies
Using the inequalities (14) and (15), for all \(1\,\le \,i\,\le \,N\), we have
Using the inequality (5), we have
so
using (16), we have
Now for every \(i\,=1,2, \ldots ,N,\) we have
Since \(\liminf _{n\rightarrow \infty }\gamma _{n}^{i}>0,\) there exists \(\gamma \in \mathbb {R}\) such that for all \(n\in \mathbb {N}\) and \(1 \, \le \,i\, \le \,N,\)\(\gamma _{n}^{i}\ge \,\gamma \,>0.\)
Now using the inequality (5) and Theorem 2, we have
Therefore
Now for every \(1\,\le i \, \le \,N,\) we have
So
Let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) such that \(x_{n_{j}} \xrightarrow {\Delta }p \in K\). Using Lemma 1 and (17), we get \(p \in A_{i}^{-1}(0)\) for any \(i=1,2,...,N\). So \(p \in \bigcap _{i=1}^{N}A_{i}^{-1}(0)\).
Now we prove that \(p \in EP(f,K)\). We assume that \(u=\epsilon x_{n+1} \oplus (1-\epsilon )y\) where \(\epsilon \in [0,1)\) and \(y \in K\). So we have
So
Now, if \(\epsilon \rightarrow 1^{-}\), we have
It is easy to see that
Since \(\liminf _{n\rightarrow \infty }(1-2c_{i}\lambda _{n}) >0\) for \(i=1,2\), using Lemma 5 and inequality (13), we have
It follows from (19) that \(y_{n_{j}}\xrightarrow {\Delta } p.\) Using (11), (12) and (19), we have \(\lim _{n\rightarrow \infty }f(y_{n},x_{n+1})=0\). Replacing n with \(n_{j}\) in (18), taking \(\limsup \) and using (19), we have
Therefore, \(p\in EP(f,K)\) and so \(p\in \Omega \). Finally, using Lemma 2, the sequence \(\{x_{n}\}\) \(\Delta \)-converges to a point of \(\Omega \), and this completes the proof.
Definition 5
Let X be an Hadamard space with dual \(X^{*}\) and let \(f: X\rightarrow (-\infty , +\infty ]\) be a proper function with effective domain \(D(f) :=\{x : f(x) < +\infty \}.\) Then the subdifferential of f is the multivalued mapping \(\partial f : X\rightrightarrows X^{*}\) defined by:
$$\partial f(x)=\{x^{*}\in X^{*}:\, f(z)-f(x)\ge \langle x^{*},\overrightarrow{xz}\rangle ,\,\,\forall z\in X\}$$
when \(x \in D(f)\), and \(\partial f (x) = \emptyset \) otherwise.
It has been proved in [19] that the subdifferential \(\partial f\) of a proper, convex and lower semicontinuous function f satisfies the range condition. So, using Theorem 3, we can obtain the following corollary:
Corollary 1
Let K be a convex and closed subset of an Hadamard space X, let f be a bifunction satisfying \(B_{1}\), \(B_{2}\), \(B_{3}\) and \(B_{4},\) and let \(g_{i}: K \rightarrow (-\infty ,+\infty ]\) \((i=1,\ldots ,N)\) be N proper convex and lower semicontinuous functions with \(\Omega :=EP(f,K)\cap \bigcap _{i=1}^{N}\mathop {\mathrm{argmin}}_{y \in K}g_{i}(y) \ne \emptyset .\) For \(x_{0} \in K,\) let \(\{x_{n}\}\) be the sequence produced by:
where \(0<\, \alpha \le \lambda _{k}\,\le \,\beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\{\gamma _{n}^{i}\}\,\, \subset \,\,(0,\infty )\) and \(\liminf _{n \rightarrow \infty }\, \gamma _{n}^{i}\,\, >0\). Then \(\{x_{n}\}\) is \(\Delta \)-convergent to a point of \(\Omega .\)
3 Strong convergence
In this section, using the Halpern regularization method, we study the strong convergence of the sequence generated by \(u, x_{0} \in K\) and
to a common zero of a finite family of monotone operators \(A_{1}, A_{2}, \ldots , A_{N}\) and an element of EP(f, K) in an Hadamard space X, where
\(0<\, \alpha \le \lambda _{k}\,\le \,\beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\alpha _{k}\, \in (0,1)\), \(\lim _{k \rightarrow \infty } \alpha _{k}\,=0\), \(\sum _{k=0}^{\infty }\alpha _{k}=\infty \), \(\{\gamma _{n}^{i}\}\,\, \subset \,\,(0,\infty )\) and \(\liminf _{n \rightarrow \infty }\, \gamma _{n}^{i}\,\, >0\).
The proof of the following lemma is similar to that of [20, Lemma 3.1] and thus omitted.
Lemma 6
Let \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{t_{n}\}\) and \(\{z_{n}\}\) be the sequences generated by Algorithm (21) and let \(x^{*}\in EP(f,K) \cap \bigcap _{i=1}^{N}A_{i}^{-1}(0).\) Then
Remark 2
Similar to the proof of [20, Lemma 3.1], using Lemma 6, we get
and
To establish strong convergence of the sequence \(\{x_{n}\}\) produced by Algorithm (21), we need an intermediate result which establishes an elementary property of real sequences.
Lemma 7
[37] Let \(\{s_{n} \}\) be a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) be a sequence of real numbers in (0, 1) with \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \) and \(\{t_{n}\}\) be a sequence of real numbers. Suppose that
$$s_{n+1}\le (1-\alpha _{n})s_{n}+\alpha _{n}t_{n},\,\,\,\,\forall n\ge 0.$$
If \(\limsup _{k\rightarrow \infty }t_{n_{k}}\le 0\) for every subsequence \(\{s_{n_{k}}\}\) of \(\{s_{n}\}\) satisfying \(\liminf _{k\rightarrow \infty }\,(s_{n_{k}+1}-s_{n_{k}})\,\ge 0\), then \(\lim _{n\rightarrow \infty }s_{n}=0.\)
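A quick numerical illustration of this lemma, taking equality in the recurrence with \(\alpha _{n}=1/(n+2)\) and \(t_{n}=1/(n+1)\rightarrow 0\) (parameter choices are ours):

```python
s = 5.0                      # s_0 >= 0
for n in range(100_000):
    a_n = 1.0 / (n + 2)      # a_n in (0, 1), sum of a_n diverges
    t_n = 1.0 / (n + 1)      # t_n -> 0, so limsup t_{n_k} <= 0 on every subsequence
    s = (1 - a_n) * s + a_n * t_n
# s_n -> 0, as the lemma predicts
```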
Theorem 4
Let K be a convex and closed subset of X and let f be a bifunction satisfying \(B_{1},B_{2},B_{3}\) and \(B_{4}.\) Let \(A_{1},A_{2}, \ldots ,A_{N} : X \rightrightarrows X^{*}\) be N multi-valued monotone operators that satisfy the range condition. If \(\Omega = EP(f,K) \cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\ne \emptyset \), then the sequence \(\{x_{n}\}\) produced by Algorithm (21) converges strongly to \(x^{*}=P_{\Omega }u\).
Proof
First, we show that the sequence \(\{x_{n}\}\) is bounded. Let \(x^{*}=P_{\Omega }u.\) From the nonexpansivity of \(J_{\gamma _{n}^{i}}^{A_{i}}\), we have
Using Lemma 6, we have
So
Using induction, we have
So the sequence \(\{x_{n}\}\) is bounded. Consequently, we conclude that \(\{z_{n}\}\) and \(\{t_{n}\}\) are bounded. On the other hand, using (24), we have
Now we show that \(d^2(x_{n},x^{*})\rightarrow 0.\) To do this using Lemma 7, it is sufficient to show that:
for every subsequence \(\{d^2(x_{n_{k}},x^{*})\}\) of \(\{d^2(x_{n},x^{*})\}\) that satisfies,
Since \(\{t_{n_{k}}\}\) is bounded, we have
In conclusion, \(\lim _{k \rightarrow \infty }(d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*})) =0.\) There exists a subsequence \(\{t_{n_{k_{\epsilon }}}\}\) of \(\{t_{n_{k}}\}\) such that \(t_{n_{k_{\epsilon }}} \xrightarrow {\Delta } p \in K;\) therefore, we have
Since \(d^2(u,.)\) is \(\Delta \)-lower semicontinuous, we have
It remains to prove that
Let \(S_{n}^{i} =J_{\gamma _{n}^{i}}^{A_{i}}\circ \cdots \circ J_{\gamma _{n}^{1}}^{A_{1}}\), for \(1\le i \le N\) and \(n\in \mathbb {N}\). Thus \(z_{n}=S_{n}^{N}x_{n},\) and we set \(S_{n}^{0}=I,\) where I is the identity operator. Therefore
hence
We can write
So
Since \(\lim _{n \rightarrow \infty } \alpha _{n}=0\), using (25) and (28), for \(1 \le i \le N\), we have
Applying (5), we obtain
Using (30), we have
We have
hence
Since \(\liminf _{n\rightarrow \infty }\gamma _{n}^{i}>0,\) there exists \(\gamma \in \mathbb {R}\) such that \(\gamma _{n}^{i}\ge \,\gamma \,>0\) for all \(n\in \mathbb {N}\) and \(1 \, \le \,i\, \le \,N.\) Now using inequality (5) and Theorem 2, we have
Therefore
Now for every \(1\,\le i \, \le \,N,\) we have
So
Let \(\{x_{n_{k_{\epsilon }}}\}\) be a subsequence of \(\{x_{n_{k}}\}\) such that \(x_{n_{k_{\epsilon }}}\xrightarrow {\Delta }p \in K\). By Lemma 1 and (31), we get \(p \in A_{i}^{-1}(0)\). So \(p \in \bigcap _{i=1}^{N}A_{i}^{-1}(0)\). Since \(\liminf _{n \rightarrow \infty }(1-2c_{i}\lambda _{n})>0\), for \(i=1,2\), using Lemma 6, we have
Using (22), (23) and (32), we have
Now assume that \(z=\epsilon t_{n} \oplus (1-\epsilon )y,\) where \(0<\epsilon <1\) and \(y \in K\). We have
Therefore
So
Now, if \(\epsilon \rightarrow 1^{-},\) we obtain
It is easy to see that
Now replacing n with \(n_{k_{\epsilon }}\) in (34), taking \(\limsup \) and using (32) and (33), since \(y_{n_{k_{\epsilon }}} \xrightarrow {\Delta } p\), we have
Therefore \(p \in EP(f,K)\) and as a result, \(p \in \Omega \). Since \(x^{*}=P_{\Omega }u,\) we have
Using (26), we get
Now using Lemma 7, we get \(x_{n}\rightarrow x^{*}.\)
Using Theorem 4, we can obtain the following corollary:
Corollary 2
Let K be a convex and closed subset of an Hadamard space X and let f be a bifunction satisfying \(B_{1}\), \(B_{2}\), \(B_{3}\) and \(B_{4}.\) Let \(g_{i}: K \rightarrow (-\infty ,+\infty ]\) \((i=1,\ldots ,N)\) be N proper convex and lower semicontinuous functions with \(\Omega = EP(f,K)\cap \bigcap _{i=1}^{N}\mathop {\mathrm{argmin}}_{y \in K}g_{i}(y) \ne \emptyset .\) For \(u, x_{0}\in K\), let \(\{x_{n}\}\) be the sequence produced by:
where \(0<\, \alpha \le \lambda _{k}\,\le \,\beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\alpha _{k}\, \in (0,1)\), \(\lim _{k \rightarrow \infty } \alpha _{k}\,=0\), \(\sum _{k=0}^{\infty }\alpha _{k}=\infty \), \(\{\gamma _{n}^{i}\}\,\, \subset \,\,(0,\infty )\) and \(\liminf _{n \rightarrow \infty }\, \gamma _{n}^{i}\,\, >0\). Then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\Omega } u.\)
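The effect of the Halpern term \(\alpha _{n}u\) can be seen in a one-dimensional sketch (ours, not the paper's algorithm): take \(\Omega =[-1,1]\subset \mathbb {R}\), replace the inner step by the metric projection onto \(\Omega \), and observe strong convergence to \(P_{\Omega }u\):

```python
def proj_interval(x, lo=-1.0, hi=1.0):
    # Metric projection of R onto Omega = [lo, hi].
    return max(lo, min(hi, x))

u, x = 3.0, -5.0
for n in range(100_000):
    a_n = 1.0 / (n + 2)                         # a_n -> 0, sum of a_n diverges
    x = a_n * u + (1 - a_n) * proj_interval(x)  # Halpern-regularized step
# x converges to P_Omega(u) = 1, the point of Omega nearest to u = 3
```

Without the Halpern term, the iteration would stop at the first point of \(\Omega \) it reaches; the anchor u steers the limit to the projection of u onto \(\Omega \).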
4 Numerical example
In this section, we provide a numerical experiment to validate our obtained results in an Hadamard space.
Example 1
Let \(f_{1}:\mathbb {R}^{2} \rightarrow \mathbb {R}\) and \(f_{2}:\mathbb {R}^{2} \rightarrow \mathbb {R}\) be two functions defined by:
and \(X=\mathbb {R}^{2}\) be endowed with a metric defined by:
where \(x=(x_{1},x_{2})\) and \(y=(y_{1},y_{2})\). So \((\mathbb {R}^{2},d_{H})\) is an Hadamard space (see [14, Example 5.2]) with the geodesic joining x to y given by:
It follows from [14, Example 5.2] that \(f_{1}\) is a proper convex and lower semicontinuous function in \((\mathbb {R}^{2},d_{H})\) but not convex in the classical sense. Let \(f:X \times X \rightarrow \mathbb {R}\) be a function defined by:
It is obvious that f satisfies \(B_{1},B_{2},B_{3}\) and \(B_{4}\). Letting \(N=2\), \(A_{1}=\partial f_{1}\) and \(A_{2}=\partial f_{2},\) Algorithm (21) takes the following form:
Now, take \(\alpha _{n}=\dfrac{1}{n+2}\), \(\gamma _{n}^{1}=\gamma _{n}^{2}=2n\), \(u=(u_{1},u_{2})=(0.7,0.7)\), \(\lambda _{n}=\dfrac{1}{n+2} + \dfrac{1}{2}\) for every \(n \in \mathbb {N},\) and the initial point \(x_{1}=(0.3,0.2)\). It can be observed that all the assumptions of Theorem 4 hold and \(\Omega =EP(f,\mathbb {R}^{2}) \cap (\bigcap _{i=1}^{2}\mathop {\mathrm{argmin}}_{x \in X}f_{i}(x))=\{0\}.\) Now, using Algorithm (36), we obtain the numerical results reported in Table 1 and Fig. 1.
References
Bačák, M.: Convex Analysis and Optimization in Hadamard Spaces. De Gruyter Series in Nonlinear Analysis and Applications, vol. 22. De Gruyter, Berlin (2014)
Berg, I.D., Nikolaev, I.G.: Quasilinearization and curvature of Alexandrov spaces. Geom. Dedicata 133, 195–218 (2008)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Bridson, M., Haefliger, A.: Metric Spaces of Nonpositive Curvature. Springer, Berlin (1999)
Chang, S., Wang, L., Wang, X.R.: Common solution for a finite family of minimization problem and fixed point problem for a pair of demicontractive mappings in Hadamard spaces. RACSAM 114(2), 12 (2020)
Chaoha, P., Phon-on, A.: A note on fixed point sets in CAT(0) spaces. J. Math. Anal. Appl. 320, 983–987 (2006)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)
Da Cruz Neto, J.X., Ferreira, O.P., Lucambio Pérez, L.R.: Contributions to the study of monotone vector fields. Acta Math. Hungar. 94, 307–320 (2002)
Da Cruz Neto, J.X., Ferreira, O.P., Lucambio Pérez, L.R.: Monotone point-to-set vector fields. Balkan J. Geom. Appl. 5, 69–79 (2000)
Dehghan, H., Rooin, J.: A characterization of metric projection in CAT(0) spaces. In: International Conference on Functional Equation. Geometric Functions and Applications (ICFGA 2012) 11–12 th May 2012, pp. 41–43. Payame University, Tabriz (2012)
Dhompongsa, S., Kirk, W.A., Sims, B.: Fixed points of uniformly Lipschitzian mappings. Nonlinear Anal. 65, 762–772 (2006)
Dhompongsa, S., Panyanak, B.: On \(\Delta \)-convergence theorems in CAT(0) spaces. Comput. Math. Appl. 56, 2572–2579 (2008)
Eskandani, G.Z., Raeisi, M.: On the zero point problem of monotone operators in Hadamard spaces. Numer. Algorithm 80, 1155–1179 (2019)
Eskandani, G.Z., Raeisi, M., Rassias, T.M.: A hybrid extragradient method for solving pseudomonotone equilibrium problems using Bregman distance. J. Fixed Point Theory Appl. 20, 132 (2018)
Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization. 51, 257–270 (2002)
Heydari, M.T., Khadem, A., Ranjbar, S.: Approximating a common zero of finite family of monotone operators in Hadamard spaces. Optimization. 66, 2233–2244 (2017)
Iusem, A.N., Mohebbi, V.: Convergence analysis of the extragradient method for equilibrium problems in Hadamard spaces. Comput. Appl. Math. 39, 44 (2020)
Kakavandi, B.A., Amini, A.: Duality and subdifferential for convex functions on complete CAT(0) metric spaces. Nonlinear Anal. 73, 3450–3455 (2010)
Khatibzadeh, H., Mohebbi, V.: Approximating solutions of equilibrium problems in Hadamard spaces. Miskolc Math. Notes. 20, 281–297 (2019)
Khatibzadeh, H., Mohebbi, V.: Monotone and pseudo-monotone equilibrium problems in Hadamard spaces. J. Aust. Math. Soc. (2019). https://doi.org/10.1017/S1446788719000041
Khatibzadeh, H., Ranjbar, S.: Monotone operators and the proximal point algorithm in complete CAT(0) metric spaces. J. Aust. Math. Soc. 103, 70–90 (2017)
Kirk, W.A., Panyanak, B.: A concept of convergence in geodesic spaces. Nonlinear Anal. 68, 3689–3696 (2008)
Kumam, P., Chaipunya, P.: Equilibrium problems and proximal algorithms in Hadamard spaces. Optimization. 8, 155–172 (2017)
Li, G., López, C., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79, 663–683 (2009)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Revue Fr. Inform. Rech. Oper. 4, 154–159 (1970)
Németh, S.Z.: Monotone vector fields. Publ. Math. Debrecen 54, 437–449 (1999)
Qin, X., Cho, Y.J., Kang, S.M.: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 225, 20–30 (2009)
Qin, X., Kang, S.M., Cho, Y.J.: Convergence theorems on generalized equilibrium problems and fixed point problems with applications. Proc. Estonian Acad. Sci. 58, 170–318 (2009)
Quoc, T.D., Muu, L.D., Nguyen, V.H.: Extragradient methods extended to equilibrium problems. Optimization. 57, 749–776 (2008)
Rahimi, R., Farajzadeh, A.P., Vaezpour, S.M.: Study of equilibrium problem in Hadamard manifolds by extending some concepts of nonlinear analysis. RACSAM 112(4), 1521–1537 (2018)
Ranjbar, S., Khatibzadeh, H.: \(\Delta \)-convergence and \(W\)-convergence of the modified Mann iteration for a family of asymptotically nonexpansive type mappings in complete CAT(0) spaces. Fixed Point Theory 17, 151–158 (2016)
Ranjbar, S., Khatibzadeh, H.: Strong and \(\Delta \)-convergence to a zero of a monotone operator in CAT(0) Spaces. Mediterr. J. Math. 14(2), 15 (2017)
Reich, S., Salinas, Z.: Infinite products of discontinuous operators in Banach and metric spaces. Linear Nonlinear Anal. 1, 169–200 (2015)
Reich, S., Salinas, Z.: Metric convergence of infinite products of operators in Hadamard spaces. J. Nonlinear Convex Anal. 18, 331–345 (2017)
Xu, H.K.: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 65, 109–113 (2002)
Moharami, R., Eskandani, G.Z. An extragradient algorithm for solving equilibrium problem and zero point problem in Hadamard spaces. RACSAM 114, 152 (2020). https://doi.org/10.1007/s13398-020-00885-5