1 Preliminaries

Let (X, d) be a metric space. A geodesic path joining x to y in X is a mapping c from a closed interval \( [0,l] \subseteq \mathbb {R}\) to X such that c(0) = x, \( c(l) = y\) and \(d(c(s), c(t)) = |s - t|\) for all \(s, t \in [0,l]\). The image of c is called a geodesic segment joining x and y and is denoted by [x, y]. We denote by \((1 -t)x \oplus t y\) the unique point \(z \in [x, y]\) such that \(d(x, z) = t d(x, y)\) and \(d(y, z) = (1 -t)d(x, y)\), where \(0 \le t \le 1.\) The metric space (X, d) is called a geodesic space if any two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic segment joining x and y for each \(x, y \in X\). A subset K of a uniquely geodesic space X is said to be convex if, for any two points \(x, y \in K,\) the geodesic joining x and y is contained in K. A geodesic space (X, d) is a CAT(0) space if it satisfies the (CN) inequality:

$$\begin{aligned} d^{2}((1- t) x \oplus ty,z) \le (1-t) d^{2} (x, z)+ t d^{2}(y, z) - t (1-t) d^{2}(x,y), \end{aligned}$$
(1)

for all \(x,y,z\in X\) and \(t\in [0,1].\) In particular, if xyzw are points in X and \(t\in [0,1],\) then we have

$$\begin{aligned} d((1- t) x \oplus t y,z) \le (1-t) d (x, z) +t d(y, z), \end{aligned}$$
(2)
$$\begin{aligned} d((1-t)x \oplus ty,(1-t)z \oplus tw) \le (1-t)d(x,z) + td(y,w). \end{aligned}$$
(3)

It is well known that a CAT(0) space is uniquely geodesic. A complete CAT(0) space is called an Hadamard space. The class of Hadamard spaces comprises Hilbert spaces, complete simply connected Riemannian manifolds of nonpositive sectional curvature (for instance, classical hyperbolic spaces and the manifold of positive definite matrices), Euclidean buildings, CAT(0) complexes, nonlinear Lebesgue spaces, the Hilbert ball and many other spaces (see [1, 4, 18]).
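For a first sanity check, recall that the Euclidean plane is a CAT(0) space in which \(\oplus \) is ordinary linear interpolation and (1) holds with equality. The following Python sketch (purely illustrative) verifies inequality (1) numerically in this flat case.

```python
import numpy as np

# Numerical spot check of the (CN) inequality (1) in the Euclidean plane,
# a CAT(0) space where (1-t)x (+) ty is linear interpolation and (1)
# in fact holds with equality.
rng = np.random.default_rng(0)

def combine(x, y, t):
    """The point (1-t)x (+) ty on the geodesic segment [x, y]."""
    return (1 - t) * x + t * y

for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2))
    t = rng.uniform()
    lhs = np.sum((combine(x, y, t) - z) ** 2)
    rhs = ((1 - t) * np.sum((x - z) ** 2) + t * np.sum((y - z) ** 2)
           - t * (1 - t) * np.sum((x - y) ** 2))
    assert lhs <= rhs + 1e-9  # equality up to rounding in the flat case
```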

Let K be a nonempty closed convex subset of an Hadamard space X. It is known [8, Proposition 2.4] that for any \(x\in X\) there exists a unique point \(x_{0}\in K\) such that \(d(x, x_{0}) =\min _{y\in K}d(x,y)\). The mapping \(P_{K}: X \rightarrow K\) defined by \(P_{K}x=x_{0}\) is called the metric projection from X onto K. The following theorem summarizes the basic properties of the projection.

Theorem 1

[1] Let X be an Hadamard space and \(K\subset X\) be a closed convex set. Then:

(i) For every \(x\in X\), there exists a unique point \(P_{K}(x)\in K\) such that

$$ d(x,P_{K}(x))=d(x,K). $$

(ii) If \(x\in X \) and \(y\in K\), then

$$d^{2}(x,P_{K}x)+d^{2}(P_{K}x,y)\le d^{2}(x,y).$$

(iii) The mapping \(P_{K}\) is a nonexpansive mapping from X onto K, that is, we have

$$d(P_{K}x,P_{K}y)\le d(x,y),$$

for all \(x,y\in X\).
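As a concrete, purely illustrative check of Theorem 1, the following Python sketch verifies (ii) and (iii) numerically in the Hilbert space \(\mathbb {R}^{2}\) (an Hadamard space), with K the closed unit ball and the usual radial projection formula.

```python
import numpy as np

# Numerical check of Theorem 1 (ii) and (iii) in R^2 with K the closed
# unit ball, whose metric projection has the radial closed form below.
rng = np.random.default_rng(1)

def proj_ball(x):
    """Metric projection P_K of x onto the closed unit ball K."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

d = np.linalg.norm
for _ in range(1000):
    x, w = rng.normal(size=(2, 2)) * 3.0
    y = proj_ball(w)                # an arbitrary point of K
    px, py = proj_ball(x), proj_ball(y)
    assert d(x - px) ** 2 + d(px - y) ** 2 <= d(x - y) ** 2 + 1e-9  # (ii)
    assert d(px - py) <= d(x - y) + 1e-9                            # (iii)
```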

Definition 1

A function \(f :X \rightarrow (-\infty , +\infty ]\) is called

(1) convex iff

$$f((1-t)x \oplus ty) \le (1-t)f(x) + tf(y), \quad \forall x,y \in X \text { and } t \in (0,1);$$

(2) strictly convex iff

$$f((1-t)x \oplus ty) < (1-t)f(x) + tf(y), \quad \forall x,y \in X, x\ne y \text { and } t \in (0,1).$$

It is easy to see that each strictly convex function has at most one minimizer on X.

Let \(\{x_{n}\}\) be a bounded sequence in an Hadamard space X. For \(x\in X\), we set

$$\begin{aligned} r(x, \{x_{n}\}) = \limsup _{ n\rightarrow \infty } d(x_{n}, x). \end{aligned}$$

The asymptotic radius \(r(\{x_{n}\})\) of \(\{x_{n}\}\) is defined by:

$$\begin{aligned} r(\{x_{n}\}) = \inf \{r(x, \{x_{n}\}) : x \in X\}, \end{aligned}$$

and the asymptotic center \(A(\{x_{n}\})\) of \(\{x_{n}\}\) is the set

$$\begin{aligned} A(\{x_{n}\}) = \{x \in X : r(x, \{x_{n}\}) = r(\{x_{n}\})\}. \end{aligned}$$

It is known that in an Hadamard space, \(A(\{x_{n}\})\) consists of exactly one point [12]. A sequence \(\{x_{n}\}\) in an Hadamard space X is said to be \(\Delta \)-convergent to \(x \in X\) if x is the unique asymptotic center of every subsequence of \(\{x_{n}\}\). We denote \(\Delta \)-convergence in X by \(\xrightarrow {\Delta }\) and metric convergence by \(\rightarrow \). It is well known that every bounded sequence in an Hadamard space X has a \(\Delta \)-convergent subsequence (see [23]).
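A one-dimensional toy example may help fix ideas: for \(x_{n}=(-1)^{n}+1/n\) in the Hadamard space \(\mathbb {R}\), one has \(r(x,\{x_{n}\})=\max \{|x-1|,|x+1|\}\), so the asymptotic center is the midpoint 0. The following Python sketch recovers this numerically; the grid and tail sizes are arbitrary choices of ours.

```python
import numpy as np

# Approximate the asymptotic center of x_n = (-1)^n + 1/n in R by
# minimizing the (tail-approximated) asymptotic radius over a grid.
n = np.arange(1, 10001)
xs = (-1.0) ** n + 1.0 / n

def r(x, tail=9000):
    # limsup_n |x_n - x|, approximated on a late tail of the sequence
    return np.abs(xs[tail:] - x).max()

grid = np.linspace(-2.0, 2.0, 4001)
center = grid[np.argmin([r(x) for x in grid])]
print(center)  # approximately 0, the unique asymptotic center
```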

Lemma 1

[13] Let K be a closed and convex subset of an Hadamard space X, \(T : K \rightarrow K\) be a nonexpansive mapping and \(\{x_{n}\}\) be a bounded sequence in K such that \(\displaystyle \lim _{n\rightarrow \infty } d(x_{n}, Tx_{n}) = 0\) and \(x_{n} \xrightarrow {\Delta } x\). Then \(x =Tx\).

The following lemma is a generalization of the Opial lemma to Hadamard spaces (see [33]).

Lemma 2

Let (X, d) be an Hadamard space and \(\{x_{n}\}\) be a sequence in X. If there exists a nonempty subset F of X satisfying:

(i) For every \(z \in F\), \(\displaystyle \lim _{n\rightarrow \infty }d(x_{n},z)\) exists.

(ii) If a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) \(\Delta \)-converges to \(x\in X\), then \(x\in F.\)

Then there exists \(p\in F\) such that \(\{x_{n}\}\) \(\Delta \)-converges to p in X.

Berg and Nikolaev in [2] introduced the concept of quasilinearization in a metric space X. Let us formally denote a pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Then quasilinearization is a map \(\langle \cdot ,\cdot \rangle : (X \times X) \times (X \times X) \rightarrow \mathbb {R}\) defined by:

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2}\big [d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d)\big ], \end{aligned}$$
(4)

for all \(a, b,c,d \in X.\) It is easily seen that \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\langle \overrightarrow{cd},\overrightarrow{ab}\rangle \), \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =-\langle \overrightarrow{ba},\overrightarrow{cd}\rangle \) and

\(\langle \overrightarrow{ax},\overrightarrow{cd}\rangle +\langle \overrightarrow{xb},\overrightarrow{cd}\rangle =\langle \overrightarrow{ab},\overrightarrow{cd}\rangle ,\) for all \( a, b,c,d,x \in X \). We say that X satisfies the Cauchy–Schwarz inequality if

$$\langle \overrightarrow{ab},\overrightarrow{cd}\rangle \le d(a,b)d(c,d),$$

for all \( a, b, c, d \in X \). It is known [2] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality.
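In the Euclidean plane the quasilinearization (4) reduces to the ordinary inner product \(\langle b-a,d-c\rangle \), which gives a quick numerical sanity check of (4) and of the Cauchy–Schwarz inequality; the sketch below covers this flat case only.

```python
import numpy as np

# Quasilinearization (4) computed from the four distances, checked
# against the flat-case identity <ab, cd> = <b - a, d - c> and against
# the Cauchy-Schwarz inequality.
rng = np.random.default_rng(2)

def quasi(a, b, c, d):
    s = lambda u, v: np.sum((u - v) ** 2)   # squared distance d^2(u, v)
    return 0.5 * (s(a, d) + s(b, c) - s(a, c) - s(b, d))

for _ in range(1000):
    a, b, c, d = rng.normal(size=(4, 2))
    q = quasi(a, b, c, d)
    assert np.isclose(q, np.dot(b - a, d - c))
    assert q <= np.linalg.norm(b - a) * np.linalg.norm(d - c) + 1e-9
```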

Lemma 3

[11] Let C be a nonempty closed convex subset of a CAT(0) space X, \(x\in X\) and \(u\in C\). Then \(u=P_{C}x\) if and only if \(\langle \overrightarrow{xu},\overrightarrow{uy}\rangle \ge 0 \) for all \(y\in C.\)

Kakavandi and Amini [19] have introduced the concept of dual space of an Hadamard space X, based on a work of Berg and Nikolaev [2], as follows.

Consider the map \(\Theta : \mathbb {R} \times X \times X \rightarrow C(X,\mathbb {R})\) defined by:

$$\Theta (t,a,b)(x)=t\langle \overrightarrow{ab},\overrightarrow{ax}\rangle ,\quad ( a,b,x\in X, t\in \mathbb {R}),$$

where \(C(X,\mathbb {R})\) is the space of all continuous real-valued functions on X. Then the Cauchy–Schwarz inequality implies that \(\Theta (t,a,b)\) is a Lipschitz function with Lipschitz semi-norm \(L(\Theta (t,a,b)) = |t|d(a, b)\) for all \(t\in \mathbb {R}\) and \(a,b\in X,\) where \(L(\varphi )=\sup \{\frac{\varphi (x)-\varphi (y)}{d(x,y)}; x, y \in X, x\ne y\}\) is the Lipschitz semi-norm of a function \(\varphi : X\rightarrow \mathbb {R}.\) A pseudometric D on \(\mathbb {R}\times X \times X \) is defined by

$$D((t, a, b), (s, c, d)) = L(\Theta (t,a,b) - \Theta (s,c,d)),\quad ( a, b, c, d \in X, t, s\in \mathbb {R}).$$

For an Hadamard space (X, d), the pseudometric space \((\mathbb {R}\times X \times X, D)\) can be considered as a subspace of the pseudometric space \((\mathrm{Lip}(X,\mathbb {R}), L)\) of all real-valued Lipschitz functions on X. By [19, Lemma 2.1], \(D((t, a, b), (s,c, d)) = 0\) if and only if \(t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle =s\langle \overrightarrow{cd},\overrightarrow{xy}\rangle \) for all \(x, y \in X.\) Thus, D induces an equivalence relation on \(\mathbb {R} \times X \times X \), where the equivalence class of (t, a, b) is

$$[t \overrightarrow{ab}]= \{s\overrightarrow{cd}; t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle =s\langle \overrightarrow{cd},\overrightarrow{xy}\rangle , \quad \forall x,y\in X \}.$$

The set \(X^{*} := \{[t\overrightarrow{ab}]; (t,a,b)\in \mathbb {R}\times X \times X\}\) is a metric space with the metric \(D([t\overrightarrow{ab}], [s\overrightarrow{cd}]) := D((t, a, b), (s, c, d))\), which is called the dual space of (X, d). It is clear that \([\overrightarrow{aa}]=[\overrightarrow{bb}]\) for all \(a,b\in X.\) Fixing \(o\in X,\) we write \(\mathbf 0 =[\overrightarrow{oo}]\) for the zero of the dual space. Note that \(X^{*}\) acts on \(X\times X\) by:

$$\langle x^{*},\overrightarrow{xy}\rangle =t\langle \overrightarrow{ab} ,\overrightarrow{xy}\rangle , \quad (x^{*}=[t \overrightarrow{ab}]\in X^{*},x,y\in X).$$

Let X be an Hadamard space with dual \(X^{*}\) and let \(A : X\rightrightarrows X^{*} \) be a multivalued operator with domain \(D(A) := \{x\in X: Ax\ne \emptyset \}\), range \(R(A) :=\bigcup _{x\in X}Ax\), inverse \(A^{-1}(x^{*}):=\{x\in X: x^{*}\in Ax\} \) and graph \(gra(A) := \{(x, x^{*})\in X\times X^{*}: x\in D(A), x^{*}\in Ax\}\).

Definition 2

[22] Let X be an Hadamard space with dual \(X^{*}\). The multivalued operator \(A:X\rightrightarrows X^{*}\) is said to be monotone if the inequality \(\langle x^{*}-y^{*},\overrightarrow{yx}\rangle \ge 0\) holds for every \((x,x^{*})\), \((y,y^{*})\in gra(A).\)

A monotone operator \(A:X\rightrightarrows X^{*}\) is maximal if there exists no monotone operator \(B:X\rightrightarrows X^{*}\) such that gra(B) properly contains gra(A) (that is, for any \((y,y^{*}) \in X \times X^{*},\) the inequality \(\langle x^{*}-y^{*}, \overrightarrow{yx } \rangle \ge 0\) for all \((x, x^{*}) \in \textit{gra}(A)\) implies that \(y^{*} \in Ay\) ).

Definition 3

[22] Let X be an Hadamard space with dual \(X^{*}\), \(\lambda >0\), and let \(A: X\rightrightarrows X^{*}\) be a multivalued operator. The resolvent of A of order \(\lambda \) is the multivalued mapping \(J_{\lambda }^{A} :X\rightrightarrows X\) defined by \(J_{\lambda }^{A}(x):=\{z\in X: [\frac{1}{\lambda }\overrightarrow{zx}]\in Az\}.\) Indeed,

$$J_{\lambda }^{A}=(\overrightarrow{oI}+\lambda A)^{-1}\circ \overrightarrow{oI},$$

where o is an arbitrary member of X and \(\overrightarrow{oI}(x):=[\overrightarrow{ox}]\). It is obvious that this definition is independent of the choice of o.
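For orientation, in a Hilbert space (viewed as an Hadamard space, where the dual pairing reduces to the inner product) the definition gives \(z=J_{\lambda }^{A}(x)\) iff \((x-z)/\lambda \in Az\), i.e., \(J_{\lambda }^{A}=(I+\lambda A)^{-1}\). A minimal Python sketch for \(A=\partial |\cdot |\) on \(\mathbb {R}\), whose resolvent is the well-known soft-thresholding map:

```python
import numpy as np

# Resolvent of A = subdifferential of f(x) = |x| on the real line:
# J_lam^A = (I + lam * A)^{-1} is soft-thresholding.
def resolvent_abs(x, lam):
    """argmin_z |z| + (z - x)^2 / (2 * lam), i.e. (I + lam d|.|)^{-1}(x)."""
    return np.sign(x) * max(abs(x) - lam, 0.0)

# Check the defining inclusion: z = J_lam(x) iff (x - z)/lam is a
# subgradient of |.| at z.
x, lam = 2.5, 1.0
z = resolvent_abs(x, lam)
print(z, (x - z) / lam)  # 1.5 and 1.0 = sign(z), a valid subgradient
```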

Let K be a nonempty subset of an Hadamard space X and \(T:K \rightarrow K\) be a mapping. The fixed point set of T is denoted by F(T),  that is, \(F(T)=\{x \in K: x=Tx\}.\)

Theorem 2

[22] Let X be a CAT(0) space with dual \(X^{*}\) and let \(A: X\rightrightarrows X^{*}\) be a multivalued mapping. Then

(i) For any \(\lambda >0\), \(R(J_{\lambda }^{A})\subset D(A)\) and \(F(J_{\lambda }^{A})= A^{-1}(\mathbf 0 )\);

(ii) If A is monotone, then \(J_{\lambda }^{A}\) is single-valued on its domain and

$$d^{2}(J_{\lambda }^{A}x, J_{\lambda }^{A} y )\le \langle \overrightarrow{J_{\lambda }^{A}xJ_{\lambda } ^{A} y},\overrightarrow{xy}\rangle ,\quad \forall x,y\in D(J_{\lambda }^{A}),$$

in particular, \(J_{\lambda }^{A}\) is a nonexpansive mapping.

(iii) If A is monotone and \(0<\lambda \le \mu ,\) then \(d^{2}(J_{\lambda }^{A}x,J_{\mu }^{A}x)\le \frac{\mu -\lambda }{\mu +\lambda } d^{2}(x,J_{\mu }^{A}x),\) which implies that \(d(x,J_{\lambda }^{A}x)\le 2d(x,J_{\mu }^{A}x).\)

It is well known that if T is a nonexpansive mapping on a subset K of a CAT(0) space X, then F(T) is closed and convex. Thus, if A is a monotone operator on a CAT(0) space X, then, by parts (i) and (ii) of Theorem 2, \(A^{-1}(\mathbf 0 )\) is closed and convex. Also, using part (ii) of this theorem, for all \(u\in F(J_{\lambda }^{A})\) and \(x\in D (J_{\lambda }^{A})\) we have

$$\begin{aligned} d^{2} ( J_{\lambda }^{A}x,x)\le d^{2} (u, x) - d^{2} (u, J_{\lambda }^{A}x). \end{aligned}$$
(5)

We say that \(A: X\rightrightarrows X^{*}\) satisfies the range condition if \(D(J_{\lambda }^{A}) =X\) for every \(\lambda >0\). It is known that if A is a maximal monotone operator on a Hilbert space H, then \(R(I + \lambda A) = H\) for all \(\lambda > 0\); thus, every maximal monotone operator on a Hilbert space satisfies the range condition. Also, as shown in [25], if A is a maximal monotone operator on an Hadamard manifold, then A satisfies the range condition. Some examples of monotone operators on Hadamard spaces satisfying the range condition are presented in [22].

Definition 4

A bifunction \(f: X \times X \rightarrow \mathbb {R}\) is said to be:

(1) monotone if

$$f(x,y) + f(y,x) \le 0, \quad \forall x,y \in X;$$

(2) pseudo-monotone if, for every \(x,y \in X,\) \(f(x,y) \ge 0\) implies \(f(y,x) \le 0.\)

The following conditions on the bifunction f are essential in what follows:

\(B_{1}\): \(f(x,\cdot ) : X \rightarrow \mathbb {R}\) is convex and lower semicontinuous for all \(x \in X.\)

\(B_{2}\): \(f(\cdot ,y)\) is \(\Delta \)-upper semicontinuous for all \(y \in X.\)

\(B_{3}\): f is Lipschitz-type continuous, i.e., there exist two positive constants \(c_{1}\) and \(c_{2}\) such that

$$f(x,y) + f(y,z) \ge f(x,z) - c_{1}d^{2}(x,y) - c_{2}d^{2}(y,z), \quad \forall x,y,z \in X.$$

\(B_{4}\): f is pseudo-monotone.
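For a concrete instance of \(B_{3}\) (a sketch using the bifunction that reappears in Example 1 below, transplanted here to the Euclidean plane): since \(f(x,y)+f(y,z)=f(x,z)\) holds exactly for \(f(x,y)=d^{2}(y,0)-d^{2}(x,0)\), the Lipschitz-type inequality is satisfied for every choice of \(c_{1},c_{2}>0\).

```python
import numpy as np

# B3 for f(x, y) = d^2(y, 0) - d^2(x, 0) on the Euclidean plane:
# f(x, y) + f(y, z) = f(x, z) exactly, so B3 holds for all c1, c2 > 0.
rng = np.random.default_rng(3)
f = lambda x, y: np.sum(y ** 2) - np.sum(x ** 2)

for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2))
    assert np.isclose(f(x, y) + f(y, z), f(x, z))
```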

Let (X, d) be an Hadamard space. Equilibrium problems were originally studied in [3] as a unifying class of variational problems. Let K be a nonempty closed convex subset of X and let \(f:K\times K\rightarrow \mathbb {R}\) be a bifunction. The equilibrium problem is to find \( x \in K\) such that

$$\begin{aligned} f(x, y) \ge 0, \quad \text {for all } y \in K. \end{aligned}$$
(6)

Denote the set of solutions of problem (6) by EP(f, K). Associated with the primal form EP(f, K), its dual form is defined as follows:

$$\begin{aligned} \text {Find}\,\,\, x^{*} \in K \,\,\,\text {such}\,\text { that}\,\,\, f (x, x^{*}) \le 0, \,\,\, \forall x \in K. \end{aligned}$$
(7)

We denote by DEP(f, K) the solution set of problem (7).

Lemma 4

If a bifunction f satisfies conditions \(B_{1}, B_{2}, B_{3}\) and \(B_{4},\) then EP(f, K) is closed and convex.

Proof

Take \(x^{*} \in DEP(f, K)\) and fix \(y \in K\). Let

$$y_{n}=\frac{1}{n}y \oplus \left( 1-\frac{1}{n}\right) x^{*},\quad n \in \mathbb {N}.$$

Using (2), we have

$$\begin{aligned} d(y_{n}, x^{*})&= d\left( \frac{1}{n}y \oplus \left( 1-\frac{1}{n}\right) x^{*}, x^{*}\right) \nonumber \\&\le \frac{1}{n}d(y, x^{*}) + \left( 1-\frac{1}{n}\right) d(x^{*}, x^{*}) \nonumber \\&\le \frac{1}{n}d(y, x^{*}). \end{aligned}$$
(8)

Applying (8), we get \(y_{n}\rightarrow x^{*}.\) Using \(B_{3}\) with \(x=y=z=y_{n}\), we get \(f(y_{n}, y_{n})\ge 0\). Since f is pseudo-monotone, this yields \(f(y_{n}, y_{n})= 0.\) Therefore, we have

$$\begin{aligned} 0= f(y_{n}, y_{n})&=f\left( y_{n}, \frac{1}{n}y \oplus \left( 1-\frac{1}{n}\right) x^{*}\right) \nonumber \\&\le \frac{1}{n} f(y_{n}, y) + \left( 1-\frac{1}{n}\right) f(y_{n},x^{*})\nonumber \\&\le \frac{1}{n}f(y_{n}, y), \end{aligned}$$
(9)

because \(f(y_{n},x^{*})\le 0\) \((x^{*} \in DEP(f,K))\). Therefore \(f(y_{n}, y)\ge 0.\) Letting n tend to infinity and using \(B_{2}\), we get \(x^{*} \in EP(f,K).\) Thus \(DEP(f ,K)\subseteq EP(f, K).\) Using \(B_{4},\) we get \(EP(f ,K) = DEP(f, K).\) Since \(f(x,\cdot )\) is convex on K, DEP(f, K) is convex and hence EP(f, K) is convex. The closedness of EP(f, K) follows from \(B_{2}\).

Equilibrium problems and their generalizations have been important tools for solving problems arising in linear and nonlinear programming, variational inequalities, complementarity problems, optimization and fixed point problems, and have been widely applied in physics, structural analysis, management sciences and economics (see, for example, [7, 15, 29, 30]). An extragradient method for equilibrium problems in a Hilbert space has been studied in [31]. It has the following form:

$$\begin{aligned} y_{n}&\in \text {Argmin}_{y \in K} \bigg \{f(x_{n},y) +\dfrac{1}{2\lambda _{n}}\parallel y-x_{n}\parallel ^{2}\bigg \},\\ x_{n+1}&\in \text {Argmin}_{y \in K} \bigg \{f(y_{n},y)+\dfrac{1}{2\lambda _{n}}\parallel y-x_{n}\parallel ^{2}\bigg \}. \end{aligned}$$

Under suitable assumptions, the weak convergence of the sequence \(\{x_{n}\}\) to a solution of the equilibrium problem was established. In recent years, some algorithms for solving equilibrium problems, variational inequalities and minimization problems have been extended from the Hilbert space framework to the more general setting of Riemannian manifolds, especially Hadamard manifolds and the Hilbert unit ball (see, for example, [5, 9, 10, 16, 25, 28, 31, 32]). This extension is motivated by the fact that several nonconvex problems become convex from this perspective. Equilibrium problems in Hadamard spaces were recently investigated in [18, 20, 21, 24]. In [20], the authors studied \(\Delta \)-convergence and strong convergence of the sequence generated by the extragradient method for pseudo-monotone equilibrium problems in Hadamard spaces.
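To make the displayed two-step scheme concrete, here is a minimal Python sketch for the variational-inequality bifunction \(f(x,y)=\langle F(x),y-x\rangle \) on a box \(K\subset \mathbb {R}^{2}\); each argmin step then reduces to a projected step. The operator F and all names below are illustrative assumptions of ours, not part of [31].

```python
import numpy as np

# Extragradient iteration for f(x, y) = <F(x), y - x> with the monotone
# (skew-symmetric) linear map F(x) = Mx; each argmin step is a projected
# gradient step onto K = [-1, 1]^2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation: monotone, solution x* = 0
F = lambda x: M @ x
proj_K = lambda x: np.clip(x, -1.0, 1.0)

x, lam = np.array([1.0, 0.8]), 0.4
for n in range(200):
    y = proj_K(x - lam * F(x))   # y_n     = argmin_y f(x_n, y) + |y - x_n|^2 / (2 lam)
    x = proj_K(x - lam * F(y))   # x_{n+1} = argmin_y f(y_n, y) + |y - x_n|^2 / (2 lam)
print(x)  # approaches the solution (0, 0); plain gradient steps would not
```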

Kumam and Chaipunya [24] established the existence of an equilibrium point of a bifunction satisfying some convexity, continuity, and coercivity assumptions, and they also established some fundamental properties of the resolvent of the bifunction. Furthermore, they proved that the proximal point algorithm \(\Delta \)-converges to an equilibrium point of a monotone bifunction in an Hadamard space.

Very recently, Iusem and Mohebbi [18] proposed an extragradient method with linesearch (EML) for solving equilibrium problems of pseudo-monotone type in Hadamard spaces. They proved \(\Delta \)-convergence of the generated sequence to a solution of the equilibrium problem under standard assumptions on the bifunction. They also performed a minor modification of the EML algorithm which ensures strong convergence of the generated sequence to a solution of EP(f, K). Khatibzadeh and Mohebbi [21] studied the existence of solutions of equilibrium problems associated with pseudo-monotone bifunctions under suitable conditions on the bifunctions in Hadamard spaces and introduced the resolvent of a bifunction in Hadamard spaces. They also proved \(\Delta \)-convergence of the sequence generated by the proximal point algorithm to an equilibrium point of a pseudo-monotone bifunction, and strong convergence under additional assumptions on the bifunction.

One of the most important problems in monotone operator theory is approximating a zero of a monotone operator. Martinet [27] introduced one of the most popular methods for approximating such a zero in Hilbert spaces, the proximal point algorithm. Very recently, Khatibzadeh and Ranjbar [22] generalized monotone operators and their resolvents to Hadamard spaces by using the duality theory (see also [17, 34]).

Reich and Salinas [36] established metric convergence theorems for infinite products of possibly discontinuous operators defined on Hadamard spaces.

In this article, motivated and inspired by the above results (see also [35]), we propose an iterative algorithm for finding a common element of the set of solutions of an equilibrium problem and the set of common zeros of a finite family of monotone operators in Hadamard spaces. \(\Delta \)-convergence and strong convergence theorems are established under suitable assumptions. We also give a numerical example, solving a nonconvex optimization problem in an Hadamard space, to support our main results.

2 \(\Delta \)-convergence

In this section, in order to approximate a common zero of a finite family of monotone operators and a point of EP(f, K) in an Hadamard space X, we introduce algorithm (10). Let K be a nonempty closed convex subset of X, let f be a bifunction satisfying \(B_{1},B_{2},B_{3},B_{4}\) and let \(A_{i} : X \rightrightarrows X^{*}\) \((1 \le i \le N)\) be N multivalued monotone operators. Let \(\{x_{n}\}\) be a sequence generated by:

$$\begin{aligned} \left\{ \begin{array}{rl} z_{n}&{}=J_{\gamma _{n}^{N}}^{A_{N}}\circ J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},\\ y_{n}&{} = \text {argmin}_{y \in K}^{} \{f(z_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ x_{n+1}&{} = \text {argmin}_{y \in K} \{f(y_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\}, \end{array}\right. \end{aligned}$$
(10)

where \(x_{0}\in K\), \(0< \alpha \le \lambda _{n}\le \beta <\min \{\dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\}\), \(\{\gamma _{n}^{i}\} \subset (0,\infty )\) and \(\displaystyle \liminf _{n\rightarrow \infty } \gamma _{n}^{i} >0\). The proof of the following lemma is similar to that of [20, Lemma 2.1] and is thus omitted.

Lemma 5

Let \(\{x_{n}\}, \{y_{n}\}\) and \(\{z_{n}\}\) be the sequences generated by Algorithm (10) and let \(x^{*}\in EP(f,K)\cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\). Then

$$d^{2}(x_{n+1},x^{*})\le d^{2}(z_{n},x^{*}) - (1-2c_{1}\lambda _{n})d^{2}(z_{n},y_{n}) - (1-2c_{2}\lambda _{n})d^{2}(y_{n},x_{n+1}).$$

Remark 1

Similar to the proof of [20, Lemma 2.1], using Lemma 5, we get

$$\begin{aligned} f(y_{n},x_{n+1}) \le \dfrac{1}{2\lambda _{n}} \{d^{2}(z_{n},x^{*}) - d^{2}(z_{n},x_{n+1}) - d^{2}(x_{n+1},x^{*})\}, \end{aligned}$$
(11)

and

$$\begin{aligned} \left( \frac{1}{2\lambda _{n}} - c_{1}\right) d^{2}(z_{n},y_{n})+\left( \frac{1}{2\lambda _{n}} - c_{2}\right) d^{2}(y_{n},x_{n+1}) - \frac{1}{2\lambda _{n}}d^{2}(z_{n},x_{n+1}) \le f(y_{n},x_{n+1}). \end{aligned}$$
(12)
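Before turning to the convergence analysis, the following Python sketch runs iteration (10) in the simplest flat setting: \(X=K=\mathbb {R}^{2}\), \(N=2\), \(A_{i}\) the gradients of the smooth convex functions indicated in the comments (so each resolvent is an explicit proximal map), and the monotone bifunction \(f(x,y)=\langle Mx,y-x\rangle \) with M skew-symmetric. All of these choices are illustrative assumptions, not the general Hadamard-space setting.

```python
import numpy as np

# One possible Euclidean instance of iteration (10): res1, res2 are the
# resolvents J_gamma of A_1 = grad(|x|^2 / 2) and A_2 = grad(|x|^2); the
# two argmin steps for f(x, y) = <Mx, y - x> with K = R^2 are explicit,
# both using z_n as the prox center, as in (10).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew-symmetric: f is monotone
res1 = lambda x, g: x / (1.0 + g)
res2 = lambda x, g: x / (1.0 + 2.0 * g)

x, lam, gam = np.array([1.0, -0.7]), 0.3, 1.0
for n in range(300):
    z = res2(res1(x, gam), gam)              # z_n: composition of resolvents
    y = z - lam * (M @ z)                    # y_n
    x = z - lam * (M @ y)                    # x_{n+1}
print(x)  # converges to (0, 0), the unique point of Omega in this instance
```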

Theorem 3

Let f be a bifunction satisfying \(B_{i}\) \((1\le i\le 4)\) and let \(A_{1},A_{2}, \ldots ,A_{N} : X \rightrightarrows X^{*}\) be N multivalued monotone operators that satisfy the range condition. If \(\Omega :=EP(f,K)\cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\ne \emptyset \), then the sequence \(\{x_{n}\}\) produced by (10) \(\Delta \)-converges to a point of \(\Omega \).

Proof

Let \(x^{*} \in \Omega .\) Since \(J_{\lambda }^{A}\) is a nonexpansive mapping, we have

$$\begin{aligned} d(z_{n},x^{*})&=d(J_{\gamma _{n}^{N}}^{A_{N}}\circ J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},\, x^{*})\nonumber \\&\le d( J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},\, x^{*})\nonumber \\&\,\,\vdots \nonumber \\&\le d(J_{\gamma _{n}^{1}}^{A_{1}}x_{n},\, x^{*})\nonumber \\&\le d(x_{n},x^{*}). \end{aligned}$$
(13)

By Lemma 5, we have \(d(x_{n+1},x^{*})\,\le \,d(z_{n},x^{*})\). So

$$d(x_{n+1},x^{*})\,\le \,d(z_{n},x^{*})\,\le \,d(x_{n},x^{*}).$$

Therefore \(\lim _{n\rightarrow \infty } d(x_{n},x^{*})\) exists and as a result \(\{x_{n}\}\) is bounded. We define for all \(1\le \,i \,\le \,N\),

$$S_{n}^{i}=:J_{\gamma _{n}^{i}}^{A_{i}} \circ ... \circ J_{\gamma _{n}^{1}}^{A_{1}}.$$

So \(z_{n}=S_{n}^{N}x_{n}\); we also set \(S_{n}^{0}= I\), where I is the identity operator. Therefore, by (13),

$$\begin{aligned} \limsup _{n\rightarrow \infty }(d^{2}(S_{n}^{i}x_{n},x^{*})\,-\, d^{2}(x_{n},x^{*})) \, \le 0, \end{aligned}$$
(14)

for all \(1\le \,i \,\le \,N\). It follows from \(d^{2}(x_{n+1},x^{*})\, \le \,d^{2}(z_{n},x^{*})\) that

$$d^{2}(x_{n+1},x^{*})\,-\, d^{2}(x_{n},x^{*}) \le \,d^{2}(S_{n}^{i}x_{n},x^{*})\, - \,d^{2}(x_{n},x^{*}).$$

This implies

$$\begin{aligned} 0\le \, \liminf _{n\rightarrow \infty }\,(d^{2}(S_{n}^{i}x_{n},x^{*})\, - \,d^{2}(x_{n},x^{*})). \end{aligned}$$
(15)

Using the inequalities (14) and (15), for all \(1\,\le \,i\,\le \,N\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }(d^{2}(S_{n}^{i}x_{n},x^{*})\, - \,d^{2}(x_{n},x^{*})\,)\,=0. \end{aligned}$$
(16)

Using the inequality (5), we have

$$d^{2}(J_{\gamma _{n}^{i}}^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i-1}x_{n})\, \le \,d^{2}(x^{*},S_{n}^{i-1}x_{n})\,-\,d^{2}(x^{*},S_{n}^{i}x_{n}),$$

so

$$d^{2}(S_{n}^{i}x_{n},S_{n}^{i-1}x_{n})\, \le \,d^{2}(x^{*},x_{n})\,-\,d^{2}(x^{*},S_{n}^{i}x_{n}),$$

Using (16), we obtain

$$\lim _{n\rightarrow \infty }d^{2}(S_{n}^{i}x_{n},S_{n}^{i-1}x_{n})\,=0.$$

Now for every \(i\,=1,2, \ldots ,N,\) we have

$$d(x_{n},S_{n}^{i}x_{n})\, \le \,d(x_{n},S_{n}^{1}x_{n})\, +\, \cdots \, +d(S_{n}^{i-1}x_{n},S_{n}^{i}x_{n})\rightarrow 0.$$

Since \(\liminf _{n\rightarrow \infty }\gamma _{n}^{i}>0,\) there exists \(\gamma >0\) such that \(\gamma _{n}^{i}\ge \gamma \) for all \(n\in \mathbb {N}\) and \(1 \le i \le N.\)

Now using the inequality (5) and Theorem 2, we have

$$\begin{aligned} d(J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i}x_{n})&\le d(J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i-1}x_{n})+d(S_{n}^{i-1}x_{n},S_{n}^{i}x_{n})\\&\le 2d(J_{\gamma _{n}^{i}}^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i-1}x_{n})+ d(S_{n}^{i-1}x_{n},S_{n}^{i}x_{n})\\&=3d(S_{n}^{i}x_{n},S_{n}^{i-1}x_{n}). \end{aligned}$$

Therefore

$$d(J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i}x_{n})\,\rightarrow \, 0.$$

Now for every \(1\,\le i \, \le \,N,\) we have

$$\begin{aligned} d(x_{n},J_{\gamma }^{A_{i}}x_{n})&\le d(J_{\gamma }^{A_{i}}x_{n},J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n})) +d(J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i}x_{n}) +d(S_{n}^{i}x_{n},x_{n})\\&\le d(x_{n},S_{n}^{i-1}x_{n}) +d(J_{\gamma }^{A_{i}}(S_{n}^{i-1}x_{n}),S_{n}^{i}x_{n}) +d(S_{n}^{i}x_{n},x_{n}). \end{aligned}$$

So

$$\begin{aligned} d(x_{n},J_{\gamma }^{A_{i}}x_{n}) \, \rightarrow \,0. \end{aligned}$$
(17)

Let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) such that \(x_{n_{j}} \xrightarrow {\Delta }p \in K\). Using Lemma 1 and (17), we get \(p \in A_{i}^{-1}(0)\) for any \(i=1,2,...,N\). So \(p \in \bigcap _{i=1}^{N}A_{i}^{-1}(0)\).

Now we prove that \(p \in EP(f,K)\). Set \(u=\epsilon x_{n+1} \oplus (1-\epsilon )y\), where \(\epsilon \in [0,1)\) and \(y \in K\). Then we have

$$\begin{aligned} f(y_{n},x_{n+1}) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},x_{n+1})&\le f(y_{n},u) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},u)\\&=f(y_{n},\epsilon x_{n+1} \oplus (1-\epsilon )y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},\epsilon x_{n+1} \oplus (1-\epsilon )y)\\&\le \epsilon f(y_{n},x_{n+1}) +(1-\epsilon )f(y_{n},y) \\&\quad +\dfrac{1}{2\lambda _{n}}\{\epsilon d^{2}(z_{n},x_{n+1}) +(1-\epsilon ) d^{2}(z_{n},y) \\&\quad - \epsilon (1-\epsilon )d^{2}(x_{n+1},y)\}. \end{aligned}$$

So

$$f(y_{n},x_{n+1}) - f(y_{n},y) \le \dfrac{1}{2\lambda _{n}}\{ d^{2}(z_{n},y) - d^{2}(z_{n},x_{n+1}) - \epsilon d^{2}(x_{n+1},y)\}.$$

Now, if \(\epsilon \rightarrow 1^{-}\), we have

$$\dfrac{1}{2\lambda _{n}}\{ d^{2}(z_{n},x_{n+1}) + d^{2}(x_{n+1},y) - d^{2}(z_{n},y)\} \le f(y_{n},y) - f(y_{n},x_{n+1}).$$

It is easy to see that

$$\begin{aligned} \dfrac{-1}{2\lambda _{n}}d(z_{n},x_{n+1})\{d(x_{n+1},y) + d(z_{n},y)\} \le f(y_{n},y) - f(y_{n},x_{n+1}). \end{aligned}$$
(18)

Since \(\liminf _{n\rightarrow \infty }(1-2c_{i}\lambda _{n}) >0\) for \(i=1,2\), using Lemma 5 and inequality (13), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }d(z_{n},y_{n}) =\lim _{n\rightarrow \infty }d(x_{n+1},y_{n})=\lim _{n\rightarrow \infty }d(x_{n+1},z_{n})=0. \end{aligned}$$
(19)

It follows from (19) that \(y_{n_{j}}\xrightarrow {\Delta } p.\) Using (11), (12) and (19), we have \(\lim _{n\rightarrow \infty }f(y_{n},x_{n+1})=0\). Replacing n with \(n_{j}\) in (18), taking \(\limsup \) and using (19), we have

$$0\le \limsup _{j\rightarrow \infty }f(y_{n_{j}},y) \le f(p,y),\,\,\,\,\forall y\in K. $$

Therefore, \(p\in EP(f,K)\) and so \(p\in \Omega \). Finally, by Lemma 2, the sequence \(\{x_{n}\}\) \(\Delta \)-converges to a point of \(\Omega \), which completes the proof.

Definition 5

Let X be an Hadamard space with dual \(X^{*}\) and let \(f: X\rightarrow (-\infty , +\infty ]\) be a proper function with effective domain \(D(f) :=\{x : f(x) < +\infty \}.\) Then, the subdifferential of f is the multivalued mapping \(\partial f : X\rightrightarrows X^{*}\) defined by:

$$\partial f(x) =\{x^{*} \in X^{*} : f(z) - f(x) \ge \langle x^{*} , \overrightarrow{xz}\rangle \,\,\, (z \in X) \},$$

for \(x \in D(f)\), and \(\partial f (x) = \emptyset \) otherwise.

It has been proved in [19] that the subdifferential \(\partial f\) of a proper, convex and lower semicontinuous function f satisfies the range condition. So, using Theorem 3, we obtain the following corollary:

Corollary 1

Let K be a closed convex subset of an Hadamard space X, let f be a bifunction satisfying \(B_{1}\), \(B_{2}\), \(B_{3}\) and \(B_{4},\) and let \(g_{i}: K \rightarrow (-\infty ,+\infty ]\) \((i=1,\ldots ,N)\) be N proper, convex and lower semicontinuous functions with \(\Omega :=EP(f,K)\cap \bigcap _{i=1}^{N}\mathrm{argmin}_{y \in K}\,g_{i}(y) \ne \emptyset .\) For \(x_{0} \in K,\) let \(\{x_{n}\}\) be the sequence produced by:

$$\begin{aligned} \left\{ \begin{array}{rl} z_{n}&{}=J_{\gamma _{n}^{N}}^{\partial g_{N}}\circ J_{\gamma _{n}^{N-1}}^{\partial g_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{\partial g_{1}}x_{n},\\ y_{n}&{} = \text {argmin}_{y \in K}^{} \{f(z_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ x_{n+1}&{} = \text {argmin}_{y \in K}^{} \{f(y_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\}.\\ \end{array}\right. \end{aligned}$$
(20)

where \(0< \alpha \le \lambda _{n}\le \beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\{\gamma _{n}^{i}\} \subset (0,\infty )\) and \(\liminf _{n \rightarrow \infty } \gamma _{n}^{i} >0\). Then \(\{x_{n}\}\) is \(\Delta \)-convergent to a point of \(\Omega .\)

3 Strong convergence

In this section, using the Halpern regularization method, we study the strong convergence of the sequence generated by \(u, x_{0} \in K\) and

$$\begin{aligned} \left\{ \begin{array}{rl} z_{n}&{}=J_{\gamma _{n}^{N}}^{A_{N}}\circ J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},\\ y_{n}&{} = \text {argmin}_{y \in K}^{} \{f(z_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ t_{n}&{} = \text {argmin}_{y \in K}^{} \{f(y_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ x_{n+1}&{}=\alpha _{n}u\,\oplus \,(1-\alpha _{n})t_{n}, \end{array}\right. \end{aligned}$$
(21)

to a common zero of a finite family of monotone operators \(A_{1}, A_{2}, \ldots , A_{N}\) and an element of EP(fK) in an Hadamard space X, where

\(0< \alpha \le \lambda _{n}\le \beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\alpha _{n} \in (0,1)\), \(\lim _{n \rightarrow \infty } \alpha _{n}=0\), \(\sum _{n=0}^{\infty }\alpha _{n}=\infty \), \(\{\gamma _{n}^{i}\} \subset (0,\infty )\) and \(\liminf _{n \rightarrow \infty } \gamma _{n}^{i} >0\).
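As a toy illustration of the Halpern step in (21), the following Python sketch replaces the inner extragradient-plus-resolvent part of the iteration by a single nonexpansive map T (a planar rotation, an assumption of ours); averaging toward the anchor u with the vanishing, non-summable weights \(\alpha _{n}\) drives the iterates strongly to the fixed point of T nearest to u.

```python
import numpy as np

# Halpern iteration x_{n+1} = alpha_n u (+) (1 - alpha_n) T x_n for the
# nonexpansive rotation T with Fix(T) = {0}; the iterates converge
# strongly to P_{Fix(T)} u = 0, although T itself is not contractive.
T = lambda x: np.array([x[1], -x[0]])      # rotation by -90 degrees
u = np.array([0.7, 0.7])
x = np.array([0.3, 0.2])
for n in range(20000):
    alpha = 1.0 / (n + 2)
    x = alpha * u + (1.0 - alpha) * T(x)
print(x)  # close to (0, 0); convergence is slow, at the Halpern rate
```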

The proof of the following lemma is similar to that of [20, Lemma 3.1] and thus omitted.

Lemma 6

Let \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{t_{n}\}\) and \(\{z_{n}\}\) be the sequences generated by Algorithm (21) and let \(x^{*}\in EP(f,K) \cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\). Then

$$d^{2}(t_{n},x^{*})\le d^{2}(z_{n},x^{*}) - (1-2c_{1}\lambda _{n})d^{2}(z_{n},y_{n}) - (1-2c_{2}\lambda _{n})d^{2}(y_{n},t_{n}).$$

Remark 2

Similar to the proof of [20, Lemma 3.1], using Lemma 6, we get

$$\begin{aligned} f(y_{n},t_{n}) \le \dfrac{1}{2\lambda _{n}} \{d^{2}(z_{n},x^{*}) - d^{2}(z_{n},t_{n}) - d^{2}(t_{n},x^{*})\}, \end{aligned}$$
(22)

and

$$\begin{aligned} \left( \dfrac{1}{2\lambda _{n}} - c_{1}\right) d^{2}(z_{n},y_{n}) +\left( \dfrac{1}{2\lambda _{n}} - c_{2}\right) d^{2}(y_{n},t_{n}) - \frac{1}{2\lambda _{n}}d^{2}(z_{n},t_{n}) \le f(y_{n},t_{n}). \end{aligned}$$
(23)

To establish strong convergence of the sequence \(\{x_{n}\}\) produced by Algorithm (21), we need an intermediate result which establishes an elementary property of real sequences.

Lemma 7

[37] Let \(\{s_{n} \}\) be a sequence of nonnegative real numbers, \(\{\alpha _{n}\}\) be a sequence of real numbers in (0, 1) with \(\sum _{n=0}^{\infty } \alpha _{n}=\infty \) and \(\{t_{n}\}\) be a sequence of real numbers. Suppose that

$$s_{n+1}\le (1-\alpha _{n})s_{n} + \alpha _{n} t_{n}, \quad \forall n\ge 0.$$

If \(\limsup _{k\rightarrow \infty }t_{n_{k}}\le 0\) for every subsequence \(\{s_{n_{k}}\}\) of \(\{s_{n}\}\) satisfying \(\liminf _{k\rightarrow \infty }(s_{n_{k}+1}-s_{n_{k}})\ge 0\), then \(\lim _{n\rightarrow \infty }s_{n}=0.\)

Theorem 4

Let K be a convex and closed subset of X and let f be a bifunction satisfying \(B_{1},B_{2},B_{3}\) and \(B_{4}.\) Let \(A_{1},A_{2}, \ldots ,A_{N} : X \rightrightarrows X^{*}\) be N multi-valued monotone operators that satisfy the range condition. If \(\Omega = EP(f,K) \cap \bigcap _{i=1}^{N}A_{i}^{-1}(0)\ne \emptyset \), then the sequence \(\{x_{n}\}\) produced by Algorithm (21) converges strongly to \(x^{*}=P_{\Omega }u\).

Proof

First we show that the sequence \(\{x_{n}\}\) is bounded. Let \(x^{*}=P_{\Omega }u.\) By the nonexpansivity of \(J_{\gamma _{n}^{i}}^{A_{i}}\), we have

$$\begin{aligned} d(z_{n},x^{*})&=d(J_{\gamma _{n}^{N}}^{A_{N}}\circ J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},x^{*})\\&\le d( J_{\gamma _{n}^{N-1}}^{A_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}x_{n},x^{*})\\&\,\, \vdots \\&\le d(J_{\gamma _{n}^{1}}^{A_{1}}x_{n},x^{*}) \le d(x_{n},x^{*}). \end{aligned}$$

Using Lemma 6, we have

$$\begin{aligned} d(t_{n},x^{*}) \le d(z_{n},x^{*}) \le d(x_{n},x^{*}). \end{aligned}$$
(24)

So

$$\begin{aligned} d(x_{n+1},x^{*})&=d(\alpha _{n}u\oplus (1-\alpha _{n})t_{n},x^{*})\\&\le \alpha _{n} d(u,x^{*}) + (1-\alpha _{n})d(t_{n},x^{*})\\&\le \alpha _{n} d(u,x^{*}) + (1-\alpha _{n})d(x_{n},x^{*})\\&\le \max \{d(u,x^{*}),d(x_{n},x^{*})\}. \end{aligned}$$

By induction, we have

$$d(x_{n},x^{*}) \le \max \{d(u,x^{*}),d(x_{0},x^{*})\}.$$

So the sequence \(\{x_{n}\}\) is bounded. Consequently, we conclude that \(\{z_{n}\}\) and \(\{t_{n}\}\) are bounded. On the other hand, using (24), we have

$$\begin{aligned} d^2(x_{n+1},x^{*})&=d^2(\alpha _{n}u\oplus (1-\alpha _{n})t_{n},x^{*})\\&\le \alpha _{n} d^2(u,x^{*}) +(1-\alpha _{_{n}}) d^2(t_{n},x^{*}) -\alpha _{n}(1-\alpha _{n}) d^2(u,t_{n})\\&\le (1-\alpha _{n})d^2(x_{n},x^{*}) +\alpha _{n}[d^2(u,x^{*})-(1-\alpha _{_{n}})d^2(u,t_{n})]. \end{aligned}$$

Now we show that \(d^2(x_{n},x^{*})\rightarrow 0.\) By Lemma 7, it is sufficient to show that

$$\limsup _{k\rightarrow \infty }(d^2 (u,x^{*}) - (1-\alpha _{n_{k}}) d^2 (u,t_{n_{k}})) \le 0,$$

for every subsequence \(\{d^2(x_{n_{k}},x^{*})\}\) of \(\{d^2(x_{n},x^{*})\}\) satisfying

$$\begin{aligned} \liminf _{k \rightarrow \infty } (d^2 (x_{n_{k}+1},x^{*}) - d^2 (x_{n_{k}},x^{*})) \ge 0. \end{aligned}$$
(25)

Since \(\{t_{n_{k}}\}\) is bounded, we have

$$\begin{aligned} 0&\le \liminf _{k \rightarrow \infty } (d^2 (x_{n_{k}+1},x^{*}) - d^2(x_{n_{k}},x^{*}))\\&\le \liminf _{k \rightarrow \infty }(\alpha _{n_{k}}d^2(u,x^{*}) + (1-\alpha _{n_{k}})d^2(t_{n_{k}},x^{*}) - \alpha _{n_{k}}(1-\alpha _{n_{k}})d^2(u,t_{n_{k}}) - d^2(x_{n_{k}},x^{*}))\\&\le \liminf _{k \rightarrow \infty }(\alpha _{n_{k}}d^2(u,x^{*}) + (1-\alpha _{n_{k}})d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*}))\\&= \liminf _{k \rightarrow \infty }(\alpha _{n_{k}}(d^2(u,x^{*}) - d^2(t_{n_{k}},x^{*})) + d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*}))\\&\le \limsup _{k \rightarrow \infty }\alpha _{n_{k}}(d^2(u,x^{*}) - d^2(t_{n_{k}},x^{*})) + \liminf _{k \rightarrow \infty }(d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*}))\\&=\liminf _{k \rightarrow \infty }(d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*})) \\&\le \limsup _{k \rightarrow \infty }(d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*})) \le 0. \end{aligned}$$

In conclusion, \(\lim _{k \rightarrow \infty }(d^2(t_{n_{k}},x^{*}) - d^2(x_{n_{k}},x^{*})) =0.\) Since \(\{t_{n_{k}}\}\) is bounded, there exists a subsequence \(\{t_{n_{k_{\epsilon }}}\}\) of \(\{t_{n_{k}}\}\) such that \(t_{n_{k_{\epsilon }}} \xrightarrow {\Delta } p \in K\); therefore we have

$$\begin{aligned} \limsup _{k\rightarrow \infty }(d^2 (u,x^{*}) - (1-\alpha _{n_{k}}) d^2 (u,t_{n_{k}}))=\lim _{\epsilon \rightarrow \infty } (d^2 (u,x^{*}) - (1-\alpha _{n_{k_{\epsilon }}}) d^2 (u,t_{n_{k_{\epsilon }}})). \end{aligned}$$

Since \(d^2(u,.)\) is \(\Delta \)-lower semicontinuous, we have

$$\begin{aligned} \limsup _{k\rightarrow \infty }(d^2 (u,x^{*}) - (1-\alpha _{n_{k}}) d^2 (u,t_{n_{k}}))&=\lim _{\epsilon \rightarrow \infty } (d^2 (u,x^{*}) - (1-\alpha _{n_{k_{\epsilon }}}) d^2 (u,t_{n_{k_{\epsilon }}}))\nonumber \\&\le d^2(u,x^{*}) - d^2(u,p). \end{aligned}$$
(26)

It remains to prove that

$$d(u,x^{*}) \le d(u,p).$$

Let \(S_{n}^{i} =J_{\gamma _{n}^{i}}^{A_{i}}\circ ...\circ J_{\gamma _{n}^{1}}^{A_{1}}\) for \(1\le i \le N\) and \(n\in \mathbb {N}\). Thus \(z_{n}=S_{n}^{N}x_{n}\); we also set \(S_{n}^{0}=I,\) where I is the identity operator. Then, by the nonexpansivity of the resolvents (as in (24)),

$$\begin{aligned} d^{2}(S_{n}^{i}x_{n},x^{*}) - d^{2}(x_{n},x^{*})&\le 0, \end{aligned}$$

hence

$$\begin{aligned} \limsup _{n \rightarrow \infty }(d^{2}(S_{n}^{i}x_{n},x^{*}) - d^{2}(x_{n},x^{*}))&\le 0. \end{aligned}$$
(27)

We can write

$$\begin{aligned} d^{2}(x_{n+1},x^{*})&\le d^{2}(\alpha _{n}u \oplus (1-\alpha _{n})t_{n},x^{*})\\&\le \alpha _{n}d^{2}(u,x^{*}) + (1-\alpha _{n})d^{2}(t_{n},x^{*}) - \alpha _{n} (1-\alpha _{n})d^{2}(t_{n},u)\\&\le \alpha _{n}d^{2}(u,x^{*}) - \alpha _{n}d^{2}(t_{n},x^{*}) - \alpha _{n} (1-\alpha _{n})d^{2}(t_{n},u) + d^{2}(t_{n}, x^{*}). \end{aligned}$$

So

$$\begin{aligned}&d^{2}(x_{n+1},x^{*}) - d^{2}(x_{n},x^{*}) \le \alpha _{n}(d^{2}(u,x^{*})-d^{2}(t_{n},x^{*})\nonumber \\ {}&\quad -(1-\alpha _{n})d^{2}(t_{n},u)) + d^{2}(z_{n},x^{*}) - d^{2}(x_{n},x^{*}). \end{aligned}$$
(28)

Since \(\lim _{n \rightarrow \infty } \alpha _{n}=0\), using (25) and (28), for \(1 \le i \le N\), we have

$$\begin{aligned} 0\le \liminf _{k \rightarrow \infty }(d^{2}(S_{n_{k}}^{i}x_{n_{k}},x^{*}) - d^{2}(x_{n_{k}},x^{*})). \end{aligned}$$
(29)

Using (27) and (29), we get

$$\begin{aligned} \lim _{k \rightarrow \infty }(d^{2}(S_{n_{k}}^{i}x_{n_{k}},x^{*}) - d^{2}(x_{n_{k}},x^{*}))=0. \end{aligned}$$
(30)

Applying (5), we obtain

$$\begin{aligned} d^2(J_{\gamma _{n_{k}}^{i}}^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i-1}x_{n_{k}})&\le d^2(x^{*},S_{n_{k}}^{i-1}x_{n_{k}}) - d^{2}(x^{*},S_{n_{k}}^{i}x_{n_{k}})\\&\le d^2(x^{*},x_{n_{k}}) -d^{2} (x^{*},S_{n_{k}}^{i}x_{n_{k}}). \end{aligned}$$

Using (30), we have

$$\lim _{k \rightarrow \infty } d^2(S_{n_{k}}^{i}x_{n_{k}},S_{n_{k}}^{i-1}x_{n_{k}})=0.$$

We have

$$d(x_{n_{k}},S_{n_{k}}^{i}x_{n_{k}}) \le d(x_{n_{k}},S_{n_{k}}^{1}x_{n_{k}})+ \cdots +d(S_{n_{k}}^{i-1}x_{n_{k}},S_{n_{k}}^{i}x_{n_{k}}),$$

hence

$$\lim _{k \rightarrow \infty }d(x_{n_{k}},S_{n_{k}}^{i}x_{n_{k}})=0. $$

Since \(\liminf _{n\rightarrow \infty }\gamma _{n}^{i}>0,\) there exists \(\gamma \in \mathbb {R}\) such that \(\gamma _{n}^{i}\ge \,\gamma \,>0\) for all \(n\in \mathbb {N}\) and \(1 \, \le \,i\, \le \,N.\) Now using inequality (5) and Theorem 2, we have

$$\begin{aligned} d(J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i}x_{n_{k}})&\le d(J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i-1}x_{n_{k}})+d(S_{n_{k}}^{i-1}x_{n_{k}},S_{n_{k}}^{i}x_{n_{k}}),\\&\le 2d(J_{\gamma _{n_{k}}^{i}}^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i-1}x_{n_{k}})+ d(S_{n_{k}}^{i-1}x_{n_{k}},S_{n_{k}}^{i}x_{n_{k}}),\\&=3d(S_{n_{k}}^{i}x_{n_{k}},S_{n_{k}}^{i-1}x_{n_{k}}). \end{aligned}$$

Therefore

$$d(J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i}x_{n_{k}})\, \rightarrow \, 0.$$

Now for every \(1\,\le i \, \le \,N,\) we have

$$\begin{aligned} d(x_{n_{k}},J_{\gamma }^{A_{i}}x_{n_{k}})&\le d(J_{\gamma }^{A_{i}}x_{n_{k}},J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}})) +d(J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i}x_{n_{k}}) +d(S_{n_{k}}^{i}x_{n_{k}},x_{n_{k}})\\&\le d(x_{n_{k}},S_{n_{k}}^{i-1}x_{n_{k}}) +d(J_{\gamma }^{A_{i}}(S_{n_{k}}^{i-1}x_{n_{k}}),S_{n_{k}}^{i}x_{n_{k}}) +d(S_{n_{k}}^{i}x_{n_{k}},x_{n_{k}}). \end{aligned}$$

So

$$\begin{aligned} d(x_{n_{k}},J_{\gamma }^{A_{i}}x_{n_{k}}) \, \rightarrow \,0. \end{aligned}$$
(31)

Let \(\{x_{n_{k_{\epsilon }}}\}\) be a subsequence of \(\{x_{n_{k}}\}\) such that \(x_{n_{k_{\epsilon }}}\xrightarrow {\Delta }p \in K\). By Lemma 1 and (31), we get \(p \in A_{i}^{-1}(0)\) for every \(i=1,\ldots ,N\), so \(p \in \bigcap _{i=1}^{N}A_{i}^{-1}(0)\). Since \(\liminf _{n \rightarrow \infty }(1-2c_{i}\lambda _{n})>0\) for \(i=1,2\), using Lemma 6, we have

$$\begin{aligned} \lim _{k \rightarrow \infty }d^2(z_{n_{k}},y_{n_{k}}) = \lim _{k \rightarrow \infty }d^2(y_{n_{k}},t_{n_{k}}) = \lim _{k \rightarrow \infty }d^2(z_{n_{k}},t_{n_{k}}) = 0. \end{aligned}$$
(32)

Using (22), (23) and (32), we have

$$\begin{aligned} \lim _{k \rightarrow \infty }f(y_{n_{k}},t_{n_{k}}) =0. \end{aligned}$$
(33)

Now assume that \(z=\epsilon t_{n} \oplus (1-\epsilon )y,\) where \(0<\epsilon <1\) and \(y \in K\). We have

$$\begin{aligned} f(y_{n},t_{n}) +\dfrac{1}{2\lambda _{n}}d^2(z_{n},t_{n})&\le f(y_{n},z) + \dfrac{1}{2\lambda _{n}}d^2(z_{n},z)\\&=f(y_{n},\epsilon t_{n} \oplus (1-\epsilon )y) + \dfrac{1}{2\lambda _{n}}d^2(z_{n},\epsilon t_{n} \oplus (1-\epsilon )y)\\&\le \epsilon f(y_{n},t_{n}) +(1-\epsilon )f(y_{n},y)\\&\quad +\dfrac{1}{2\lambda _{n}}\{\epsilon d^2(z_{n},t_{n})+(1-\epsilon )d^2(z_{n},y)-\epsilon (1-\epsilon )d^2(t_{n},y)\}. \end{aligned}$$

Therefore

$$\begin{aligned} (1-\epsilon )f(y_{n},t_{n}) -(1-\epsilon )f(y_{n},y)&\le \dfrac{1}{2\lambda _{n}}\{(1-\epsilon )d^2(z_{n},y)\\ {}&\quad -(1-\epsilon ) d^2(z_{n},t_{n})-\epsilon (1-\epsilon )d^2(t_{n},y)\}. \end{aligned}$$

So

$$\begin{aligned} f(y_{n},t_{n}) -f(y_{n},y)&\le \dfrac{1}{2\lambda _{n}}\{d^2(z_{n},y) - d^2(z_{n},t_{n})-\epsilon d^2(t_{n},y)\}. \end{aligned}$$

Now, if \(\epsilon \rightarrow 1^{-},\) we obtain

$$\dfrac{1}{2\lambda _{n}} \{d^2(z_{n},t_{n}) +d^2(t_{n},y) - d^2(z_{n},y)\} \le f(y_{n},y) -f(y_{n},t_{n}).$$

It is easy to see that

$$\begin{aligned} \dfrac{-1}{2 \lambda _{n}}d(z_{n},t_{n}) \{d(t_{n},y) +d(z_{n},y)\} \le f(y_{n},y) - f(y_{n},t_{n}). \end{aligned}$$
(34)

Now, replacing n with \(n_{k_{\epsilon }}\) in (34), taking \(\limsup \) and using (32) and (33), since \(y_{n_{k_{\epsilon }}} \xrightarrow {\Delta } p\), we have

$$0 \le \limsup _{\epsilon \rightarrow \infty } f(y_{n_{k_{\epsilon }}},y) \le f(p,y), \quad \forall y \in K.$$

Therefore \(p \in EP(f,K)\) and as a result, \(p \in \Omega \). Since \(x^{*}=P_{\Omega }u,\) we have

$$d(u,x^{*}) \le d(u,p).$$

Using (26), we get

$$\limsup _{k\rightarrow \infty }(d^{2} (u,x^{*}) - (1-\alpha _{n_{k}}) d^{2} (u,t_{n_{k}}))\le 0.$$

Now using Lemma 7, we get \(x_{n}\rightarrow x^{*}.\)

Using Theorem 4, we can obtain the following corollary:

Corollary 2

Let K be a closed convex subset of an Hadamard space X and let f be a bifunction satisfying \(B_{1}\), \(B_{2}\), \(B_{3}\) and \(B_{4}.\) Let \(g_{i}: K \rightarrow (-\infty ,+\infty ]\) \((i=1,\ldots ,N)\) be N proper, convex and lower semicontinuous functions with \(\Omega = EP(f,K)\cap \bigcap _{i=1}^{N}\mathrm{argmin}_{y \in K}\,g_{i}(y) \ne \emptyset .\) For \(u, x_{0}\in K\), let \(\{x_{n}\}\) be the sequence produced by:

$$\begin{aligned} \left\{ \begin{array}{rl} z_{n}&{}=J_{\gamma _{n}^{N}}^{\partial g_{N}}\circ J_{\gamma _{n}^{N-1}}^{\partial g_{N-1}}\circ ...\circ J_{\gamma _{n}^{1}}^{\partial g_{1}}x_{n},\\ y_{n}&{} = \text {argmin}_{y \in K}^{} \{f(z_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ t_{n}&{} = \text {argmin}_{y \in K}^{} \{f(y_{n},y) +\dfrac{1}{2\lambda _{n}}d^{2}(z_{n},y)\},\\ x_{n+1}&{}=\alpha _{n}u\,\oplus \,(1-\alpha _{n})t_{n}. \end{array}\right. \end{aligned}$$
(35)

where \(0< \alpha \le \lambda _{n}\le \beta <\min \left\{ \dfrac{1}{2c_{1}},\dfrac{1}{2c_{2}}\right\} \), \(\alpha _{n} \in (0,1)\), \(\lim _{n \rightarrow \infty } \alpha _{n}=0\), \(\sum _{n=0}^{\infty }\alpha _{n}=\infty \), \(\{\gamma _{n}^{i}\} \subset (0,\infty )\) and \(\liminf _{n \rightarrow \infty } \gamma _{n}^{i} >0\). Then \(\{x_{n}\}\) converges strongly to \(x^{*}=P_{\Omega } u.\)

Table 1: Numerical results of Example 1

Fig. 1: Plot of \(d_{H}(x_{n},0)\) for the iterates reported in Table 1

4 Numerical example

In this section, we provide a numerical experiment illustrating our results in an Hadamard space.

Example 1

Let \(f_{1}:\mathbb {R}^{2} \rightarrow \mathbb {R}\) and \(f_{2}:\mathbb {R}^{2} \rightarrow \mathbb {R}\) be two functions defined by:

$$f_{1}(x_{1},x_{2})=100((x_{2}+1) -(x_{1}+1)^{2})^{2}+x_{1}^{2}, \quad f_{2}(x_{1},x_{2})=100x_{1}^{2},$$

and \(X=\mathbb {R}^{2}\) be endowed with a metric defined by:

$$d_{H}(x,y)=\sqrt{(x_{1} - y_{1})^{2} +(x_{1}^{2}-x_{2} - y_{1}^{2} + y_{2})^{2}},$$

where \(x=(x_{1},x_{2})\) and \(y=(y_{1},y_{2})\). So \((\mathbb {R}^{2},d_{H})\) is an Hadamard space (see [14, Example 5.2]) with the geodesic joining x to y given by:

$$\gamma _{x,y}(t)=((1-t)x_{1} + ty_{1},((1-t)x_{1} + ty_{1})^{2} - (1-t)(x_{1}^{2}- x_{2}) - t(y_{1}^{2}-y_{2})).$$
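The following Python sketch (illustrative only, with \(f_{1}\) in the form displayed above) implements \(d_{H}\) and the geodesic, and spot-checks numerically both the geodesic identity \(d_{H}(\gamma _{x,y}(t),x)=t\,d_{H}(x,y)\) and the geodesic convexity of \(f_{1}\) asserted next.

```python
import numpy as np

# d_H, the geodesic gamma_{x,y}(t), and a spot check that f_1 is convex
# along geodesics of (R^2, d_H) even though it is not convex along
# straight lines.
def d_H(x, y):
    return np.hypot(x[0] - y[0], x[0] ** 2 - x[1] - y[0] ** 2 + y[1])

def geo(x, y, t):
    a = (1 - t) * x[0] + t * y[0]
    b = a ** 2 - (1 - t) * (x[0] ** 2 - x[1]) - t * (y[0] ** 2 - y[1])
    return np.array([a, b])

def f1(p):
    return 100.0 * ((p[1] + 1) - (p[0] + 1) ** 2) ** 2 + p[0] ** 2

rng = np.random.default_rng(4)
for _ in range(500):
    x, y = rng.normal(size=(2, 2))
    t = rng.uniform()
    assert np.isclose(d_H(geo(x, y, t), x), t * d_H(x, y))  # geodesic property
    conv = (1 - t) * f1(x) + t * f1(y)
    assert f1(geo(x, y, t)) <= conv + 1e-8 * (1.0 + conv)   # geodesic convexity
```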

It follows from [14, Example 5.2] that \(f_{1}\) is a proper, convex and lower semicontinuous function on \((\mathbb {R}^{2},d_{H})\), although it is not convex in the classical sense. Let \(f:X \times X \rightarrow \mathbb {R}\) be the bifunction defined by:

$$f(x,y) =d_{H}^2(y,0) - d_{H}^2(x,0).$$

It is straightforward to verify that f satisfies \(B_{1},B_{2},B_{3}\) and \(B_{4}\). Taking \(N=2\), \(A_{1}=\partial f_{1}\) and \(A_{2}=\partial f_{2},\) Algorithm (21) takes the following form:

$$\begin{aligned} \left\{ \begin{array}{rl} w_{n}&{}=\text {argmin}_{y \in \mathbb {R}^{2}} \{f_{1}(y) +\dfrac{1}{2\gamma _{n}^{1}}d_{H}^{2}(y,x_{n})\},\\ z_{n}&{}=\text {argmin}_{y \in \mathbb {R}^{2}} \{f_{2}(y) +\dfrac{1}{2\gamma _{n}^{2}}d_{H}^{2}(y,w_{n})\},\\ y_{n}&{}=\text {argmin}_{y \in \mathbb {R}^{2}} \{f(z_{n},y) +\dfrac{1}{2\lambda _{n}}d_{H}^{2}(z_{n},y)\},\\ t_{n}&{}=\text {argmin}_{y \in \mathbb {R}^{2}} \{f(y_{n},y) +\dfrac{1}{2\lambda _{n}}d_{H}^{2}(z_{n},y)\},\\ x_{n+1}&{}=\alpha _{n}u\,\oplus \,(1-\alpha _{n})t_{n}. \end{array}\right. \end{aligned}$$
(36)

Now take \(\alpha _{n}=\dfrac{1}{n+2}\), \(\gamma _{n}^{1}=\gamma _{n}^{2}=2n\), \(u=(u_{1},u_{2})=(0.7,0.7)\), \(\lambda _{n}=\dfrac{1}{n+2} + \dfrac{1}{2}\) for every \(n \in \mathbb {N}\), and the initial point \(x_{1}=(0.3,0.2)\). It can be verified that all the assumptions of Theorem 4 are satisfied and \(\Omega =EP(f,\mathbb {R}^{2}) \cap \big (\bigcap _{i=1}^{2}\mathrm{argmin}_{x \in X}f_{i}(x)\big )=\{0\}.\) Applying Algorithm (36), we obtain the numerical results reported in Table 1 and Fig. 1.
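For reproducibility, here is one possible Python implementation of Algorithm (36), a sketch under our reading of the example. It uses the fact that \(\phi (x_{1},x_{2})=(x_{1},x_{1}^{2}-x_{2})\) is an isometry of \((\mathbb {R}^{2},d_{H})\) onto the Euclidean plane, so every step of (36) has a closed form; all helper names are ours, and the printed values only indicate the qualitative behaviour reported in Table 1.

```python
import numpy as np

# Algorithm (36) in phi-coordinates, where d_H becomes the Euclidean
# metric: f_i o phi^{-1} are the convex quadratics (1/2) v^T H_i v, so
# the resolvents are linear solves; for this bifunction f the two
# extragradient steps coincide (y_n = t_n), and the Halpern step is
# linear in phi-coordinates because phi maps geodesics to segments.
phi = lambda x: np.array([x[0], x[0] ** 2 - x[1]])
phi_inv = lambda v: np.array([v[0], v[0] ** 2 - v[1]])

H1 = np.array([[802.0, 400.0], [400.0, 200.0]])  # Hessian of f_1 o phi^{-1}
H2 = np.array([[200.0, 0.0], [0.0, 0.0]])        # Hessian of f_2 o phi^{-1}

def prox_quad(H, gamma, a):
    """argmin_v (1/2) v^T H v + |v - a|^2 / (2 gamma): a 2x2 linear solve."""
    return np.linalg.solve(H + np.eye(2) / gamma, a / gamma)

u = phi(np.array([0.7, 0.7]))
x = np.array([0.3, 0.2])
for n in range(1, 51):
    alpha, gam, lam = 1.0 / (n + 2), 2.0 * n, 1.0 / (n + 2) + 0.5
    w = prox_quad(H1, gam, phi(x))               # w_n
    z = prox_quad(H2, gam, w)                    # z_n
    t = z / (1.0 + 2.0 * lam)                    # y_n = t_n
    x = phi_inv(alpha * u + (1.0 - alpha) * t)   # Halpern step
    if n % 10 == 0:
        print(n, np.linalg.norm(phi(x)))         # d_H(x_n, 0), decaying to 0
```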