1 Introduction

Let C be a nonempty closed convex subset of a Hilbert space H, let \( T : C \rightarrow C \) be a nonexpansive mapping (i.e., \( \Vert Tx - Ty\Vert \le \Vert x - y\Vert \) for all \( x, y \in C \)), and let \( f : C \rightarrow C \) be a contraction (i.e., \( \Vert f(x) - f(y) \Vert \le \alpha \Vert x - y \Vert \) for all \( x, y \in C \) and some \( \alpha \in [0,1)\)). The viscosity method for nonexpansive mappings, introduced by Moudafi [1], generates a sequence \( \{x_n\} \) through the iteration process:

$$\begin{aligned}&x_0\in C \text { arbitrarily chosen} \nonumber \\&x_{n+1} = \alpha _n f(x_n) + (1-\alpha _n)Tx_n, \quad \ n\ge 0, \end{aligned}$$
(1)

where \( \{\alpha _n\} \) is a sequence in (0, 1). Moudafi proved that, under certain conditions, the sequence \( \{x_n\} \) converges strongly to a fixed point q of T which solves the variational inequality:

$$\begin{aligned} \langle (I-f)q,p-q \rangle \ge 0, \quad \ p\in F(T), \end{aligned}$$
(2)

where F(T) is the set of fixed points of T, namely \( F(T) = \{ x \in C : Tx = x \} \). Moudafi's viscosity approximation method is one of the most popular techniques for approximating fixed points because of its applications to convex optimization, linear programming, etc. It has therefore been studied and developed by many authors in various directions. In 2004, Xu [2] extended this result to uniformly smooth Banach spaces. In 2009, Takahashi [3] obtained the corresponding result for a countable family of nonexpansive mappings in reflexive Banach spaces with a uniformly Gâteaux differentiable norm. Recently, in CAT(0) spaces, Shi and Chen [4] introduced the following Moudafi-type viscosity iterations for a nonexpansive mapping T: for a contraction f on C and \(t \in (0,1)\), let \(x_t\in C\) be the unique fixed point of the contraction \(x \mapsto tf(x) \oplus (1-t)Tx\); i.e.,

$$\begin{aligned} x_t = tf(x_t) \oplus (1-t)Tx_t, \end{aligned}$$
(3)

and \(x_0\in C\) is arbitrarily chosen and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) \oplus (1-\alpha _n)T x_n,\quad n\ge 0, \end{aligned}$$
(4)

where \(\{\alpha _n\}\subset (0,1)\). They proved that \(\{x_t\}\) and \( \{x_n\} \) defined by (3) and (4) converge strongly to \(q\in F(T)\) such that \(q = P_{F(T)}f(q)\) in the framework of CAT(0) spaces satisfying property \(\mathcal {P}\), where q solves the variational inequality:

$$\begin{aligned} \langle \overrightarrow{q f(q)},\overrightarrow{p q}\rangle \ge 0, \quad \ p\in F(T). \end{aligned}$$
(5)

By using the concept of quasi-linearization, Wangkeeree and Preechasilp [5] obtained the strong convergence of both \(\{x_t\}\) and \(\{x_n\}\) defined by (3) and (4) respectively, in the framework of CAT(0) space without the property \(\mathcal {P}\).
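To make the basic viscosity scheme (1) concrete, the following sketch runs it in the Hilbert space \( \mathbb {R}^2 \) with purely illustrative choices (not taken from the papers above): T is the metric projection onto the closed unit ball, which is nonexpansive with \( F(T) \) equal to the ball, f is a contraction with coefficient 1/2, and \( \alpha _n = 1/(n+1) \). For these choices the limit predicted by the theory is the unique q with \( q = P_{F(T)}f(q) \), here \( q = (1,0) \).

```python
import numpy as np

def project_unit_ball(x):
    """Metric projection onto the closed unit ball; nonexpansive, and its
    fixed-point set F(T) is the ball itself."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def moudafi_viscosity(T, f, x0, n_iters):
    """Iteration (1): x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n with a_n = 1/(n+1)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        a = 1.0 / (n + 1)
        x = a * f(x) + (1.0 - a) * T(x)
    return x

# Illustrative contraction with coefficient 1/2; the limit solves
# q = P_{F(T)} f(q), which for these choices is q = (1, 0).
f = lambda x: 0.5 * x + np.array([2.0, 0.0])
q = moudafi_viscosity(project_unit_ball, f, [3.0, 0.0], 20000)
print(q)  # close to [1. 0.]
```

Note that the choice \( \alpha _n = 1/(n+1) \) satisfies the usual conditions \( \alpha _n\rightarrow 0 \), \( \sum \alpha _n = \infty \), and \( \alpha _{n+1}/\alpha _n \rightarrow 1 \).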

On the other hand, the implicit midpoint rule is one of the numerical techniques used to solve ordinary differential equations. The implicit midpoint rule, also known as the second-order Runge-Kutta method or midpoint method, improves on the Euler method by adding a midpoint in the step, which increases the accuracy by one order; see [6–10] and the references therein. For instance, consider the initial value problem for the differential equation

$$\begin{aligned} y'(t) = f (y(t)) \text { given the initial condition } y(0) = y_0, \end{aligned}$$
(6)

where f is a continuous function from \( \mathbb {R}^N \) to \( \mathbb {R}^N \). The midpoint method is an implicit method that generates a sequence \( \{y_n\} \) by

$$\begin{aligned} y_{n+1} = y_n + h f\left( \dfrac{y_n+y_{n+1}}{2}\right) , \quad \ n\ge 0, \end{aligned}$$

where \( h>0 \) is a stepsize. It is known that if \( f : \mathbb {R}^N \rightarrow \mathbb {R}^N \) is Lipschitz continuous and sufficiently smooth, then the sequence \( \{y_n\} \) converges to the exact solution of (6), as \( h \rightarrow 0 \), uniformly over \( t \in [0,\bar{t}] \) for any fixed \( \bar{t}>0 \). The first extension of the midpoint method to Hilbert spaces was studied by Alghamdi et al. [6]. They considered the following iterative scheme:

$$\begin{aligned} x_{n+1} = (1-t_n)x_n + t_n T \left( \dfrac{x_n + x_{n+1}}{2} \right) , \quad n\ge 0, \end{aligned}$$
(7)

where the initial guess \( x_0 \in H \) is arbitrarily chosen and \( t_n \in (0,1) \) for all n.
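The implicit midpoint rule for the ODE (6) can itself be illustrated numerically. The sketch below (with the hypothetical test problem \( y' = -y \), \( y(0)=1 \), whose exact solution is \( y(t)=e^{-t} \)) resolves the implicit equation at each step by fixed-point iteration, which converges for h small relative to the Lipschitz constant of f.

```python
import numpy as np

def implicit_midpoint_step(f, y, h, tol=1e-12, max_inner=100):
    """One step of y_{n+1} = y_n + h f((y_n + y_{n+1}) / 2), solving the
    implicit equation by fixed-point iteration (valid when h times the
    Lipschitz constant of f is small)."""
    y_next = y + h * f(y)          # explicit Euler predictor
    for _ in range(max_inner):
        y_new = y + h * f(0.5 * (y + y_next))
        if np.linalg.norm(y_new - y_next) < tol:
            break
        y_next = y_new
    return y_new

# Hypothetical test problem y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
f = lambda y: -y
h, y = 0.01, np.array([1.0])
for _ in range(100):               # integrate up to t = 1
    y = implicit_midpoint_step(f, y, h)
print(y[0])  # close to exp(-1) ≈ 0.367879
```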

Recently, Xu et al. [11] combined the viscosity technique and the implicit midpoint rule for nonexpansive mappings. They introduced the following semi-implicit algorithm, the so-called viscosity implicit midpoint rule:

$$\begin{aligned} x_{n+1} = \alpha _nf(x_n) + (1-\alpha _n) T \left( \dfrac{x_n + x_{n+1}}{2} \right) , \quad n\ge 0. \end{aligned}$$
(8)

They proved that the sequence \( \{x_n\} \) defined by (8) converges strongly to a fixed point of T which also solves the variational inequality (2).
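A minimal sketch of the viscosity implicit midpoint rule (8) in \( \mathbb {R}^2 \), under the same illustrative (hypothetical) choices as before: T the metric projection onto the closed unit ball, f a 1/2-contraction, and \( \alpha _n = 1/(n+1) \). The implicit equation at each step is resolved by an inner fixed-point loop, which converges because the map \( x \mapsto \alpha _n f(x_n) + (1-\alpha _n)T((x_n+x)/2) \) is a \( (1-\alpha _n)/2 \)-contraction.

```python
import numpy as np

def project_unit_ball(x):
    """Metric projection onto the closed unit ball (nonexpansive)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def viscosity_implicit_midpoint(T, f, x0, n_iters, inner=50):
    """Scheme (8): x_{n+1} = a_n f(x_n) + (1 - a_n) T((x_n + x_{n+1}) / 2),
    with a_n = 1/(n+1); the implicit step is solved by fixed-point iteration."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        a = 1.0 / (n + 1)
        x_next = x.copy()
        for _ in range(inner):
            x_next = a * f(x) + (1.0 - a) * T(0.5 * (x + x_next))
        x = x_next
    return x

# Illustrative 1/2-contraction; the limit again satisfies q = P_{F(T)} f(q) = (1, 0).
f = lambda x: 0.5 * x + np.array([2.0, 0.0])
q = viscosity_implicit_midpoint(project_unit_ball, f, [3.0, 0.0], 20000)
print(q)  # close to [1. 0.]
```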

All of the above leads us to the following question.

Question 1.1

Could we obtain the strong convergence for the viscosity implicit midpoint method in the framework of CAT(0) space?

The purpose of this paper is to study the following iterative scheme in a complete CAT(0) space:

$$\begin{aligned}&x_0 \in C \text { arbitrarily chosen} \nonumber \\&x_{n+1} = \alpha _n f(x_n) \oplus (1-\alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) , \quad n\ge 0. \end{aligned}$$
(9)

We prove that the iterative scheme (9) converges strongly to q such that \(q=P_{F(T)}f(q)\), which is the unique solution of the variational inequality:

$$\begin{aligned} \langle \overrightarrow{q f(q)},\overrightarrow{p q}\rangle \ge 0, \quad \ p\in F(T). \end{aligned}$$
(10)

The structure of the paper is as follows. Section 2 introduces some basic facts about inequalities and convergence types in CAT(0) spaces. In Sect. 3 we obtain the strong convergence theorem for the viscosity implicit midpoint rule for nonexpansive mappings in a complete CAT(0) space. Applications to minimization problems are given in Sect. 4.

2 Preliminaries

Let (X, d) be a metric space. A geodesic path joining \(x \in X\) to \(y \in X\) (or, more briefly, a geodesic from x to y) is a map c from a closed interval \([0, l] \subset \mathbb {R}\) to X such that \(c(0) = x\), \(c(l) = y\), and \(d(c(t), c(t')) = |t - t'|\) for all \(t, t' \in [0, l]\). In particular, c is an isometry and \(d(x, y) = l\). The image of c is called a geodesic (or metric) segment joining x and y. When it is unique, this geodesic segment is denoted by [x, y]. The space (X, d) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each \(x, y \in X\). A geodesic triangle \(\triangle (x_1, x_2, x_3)\) in a geodesic metric space (X, d) consists of three points \(x_1, x_2, x_3\) in X (the vertices of \(\triangle \)) and a geodesic segment between each pair of vertices (the edges of \(\triangle \)). A comparison triangle for the geodesic triangle \(\triangle (x_1, x_2, x_3)\) in (X, d) is a triangle \(\overline{\triangle }(x_1, x_2, x_3) := \triangle (\overline{x}_1, \overline{x}_2, \overline{x}_3)\) in the Euclidean plane \(\mathbb {E}^2\) such that \(d_{\mathbb {E}^2}\left( \overline{x}_i,\overline{x}_j\right) =d(x_i,x_j)\) for all \(i,j \in \{1,2,3\}\).

A geodesic space is said to be a CAT(0) space if all of its geodesic triangles satisfy the following comparison axiom.

CAT(0) : Let \(\triangle \) be a geodesic triangle in X and let \(\overline{\triangle }\) be a comparison triangle for \(\triangle \). Then \(\triangle \) is said to satisfy the CAT(0) inequality if for all \(x, y \in \triangle \) and all comparison points \(\overline{x},\overline{y} \in \overline{\triangle }\),

$$\begin{aligned} d(x,y) \le d_{\mathbb {E}^2}(\overline{x},\overline{y}). \end{aligned}$$

It is proved in Lemma 2.1 of [12] that for each \( x,y\in X \) and \( t\in [0,1] \) there exists a unique point z in the geodesic segment joining x to y such that

$$\begin{aligned} d(z, x) = td(x, y) \hbox { and } d(z, y) = (1- t)d(x, y). \end{aligned}$$

We also denote by [x, y] the geodesic segment joining x to y, that is, \([x, y] = \{(1- t)x \oplus ty : t\in [0, 1]\}\). A subset C of a CAT(0) space is convex if \([x, y]\subseteq C\) for all \(x, y\in C\). The following lemmas play an important role in our paper.

Lemma 2.1

[12, Lemma 2.4] Let X be a CAT(0) space, \(x,y,z\in X\) and \(\lambda \in [0,1]\). Then

$$\begin{aligned} d(\lambda x\oplus (1-\lambda ) y, z)\le \lambda d(x,z)+(1-\lambda )d(y,z). \end{aligned}$$

Lemma 2.2

[12, Lemma 2.5] Let X be a CAT(0) space, \(x,y,z\in X\) and \(\lambda \in [0,1]\). Then

$$\begin{aligned} d^2(\lambda x\oplus (1-\lambda ) y, z)\le \lambda d^2(x,z)+(1-\lambda )d^2(y,z)-\lambda (1-\lambda )d^2(x,y). \end{aligned}$$

Lemma 2.3

[13, Proposition 2.2] Let X be a CAT(0) space, \(p, q, r, s\in X\) and \(\lambda \in [0,1]\). Then

$$\begin{aligned} d(\lambda p\oplus (1-\lambda ) q, \lambda r\oplus (1-\lambda ) s)\le \lambda d(p,r)+(1-\lambda )d(q,s). \end{aligned}$$
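Since the Euclidean plane is a model CAT(0) space in which \( \lambda x\oplus (1-\lambda )y \) is the usual convex combination, the inequalities of Lemmas 2.1–2.3 can be sanity-checked numerically; the sketch below does so with random points (in \( \mathbb {R}^2 \) the (CN) inequality of Lemma 2.2 is in fact an equality).

```python
import numpy as np

rng = np.random.default_rng(0)
d = lambda a, b: np.linalg.norm(a - b)

viol_21 = viol_22 = viol_23 = 0.0
for _ in range(1000):
    x, y, z, w = (rng.standard_normal(2) for _ in range(4))
    lam = rng.uniform()
    m = lam * x + (1 - lam) * y          # lam x (+) (1 - lam) y in the plane
    m2 = lam * z + (1 - lam) * w
    # Lemma 2.1: convexity of the metric
    viol_21 = max(viol_21, d(m, z) - (lam * d(x, z) + (1 - lam) * d(y, z)))
    # Lemma 2.2: the (CN) inequality (an equality in Hilbert space)
    viol_22 = max(viol_22, d(m, z)**2 - (lam * d(x, z)**2 + (1 - lam) * d(y, z)**2
                                         - lam * (1 - lam) * d(x, y)**2))
    # Lemma 2.3: joint convexity
    viol_23 = max(viol_23, d(m, m2) - (lam * d(x, z) + (1 - lam) * d(y, w)))

print(viol_21, viol_22, viol_23)  # all three are ~0 up to rounding
```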

In 2008, Berg and Nikolaev [14] introduced the concept of quasilinearization as follows:

Let us formally denote a pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Then quasilinearization is defined as a map \(\langle \cdot , \cdot \rangle : (X\times X)\times (X\times X)\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle = \frac{1}{2}\left( d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d)\right) , \quad (a,b,c,d\in X).\nonumber \\ \end{aligned}$$
(11)

It is easily seen that \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\langle \overrightarrow{cd},\overrightarrow{ab}\rangle \), \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle = -\langle \overrightarrow{ba},\overrightarrow{cd}\rangle \) and \(\langle \overrightarrow{ax},\overrightarrow{cd}\rangle + \langle \overrightarrow{xb},\overrightarrow{cd}\rangle = \langle \overrightarrow{ab},\overrightarrow{cd}\rangle \) for all \(a,b,c,d, x\in X\). We say that X satisfies the Cauchy-Schwarz inequality if

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle \le d(a,b) d(c,d) \end{aligned}$$
(12)

for all \(a,b,c,d\in X\). It is known [14, Corollary 3] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality.
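In a Hilbert space, the quasilinearization (11) reduces to the ordinary inner product \( \langle b-a, d-c\rangle \); this identity, together with the Cauchy-Schwarz inequality (12), can be verified numerically. The sketch below uses random points of \( \mathbb {R}^3 \) purely for illustration.

```python
import numpy as np

def quasi(a, b, c, d):
    """Quasilinearization (11): <ab, cd> = (d(a,d)^2 + d(b,c)^2 - d(a,c)^2 - d(b,d)^2) / 2."""
    sq = lambda u, v: float(np.dot(u - v, u - v))
    return 0.5 * (sq(a, d) + sq(b, c) - sq(a, c) - sq(b, d))

rng = np.random.default_rng(1)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))

# In a Hilbert space, quasilinearization is the inner product <b - a, d - c>.
gap = abs(quasi(a, b, c, d) - np.dot(b - a, d - c))
# Cauchy-Schwarz (12): <ab, cd> <= d(a,b) d(c,d).
slack = np.linalg.norm(a - b) * np.linalg.norm(c - d) - quasi(a, b, c, d)
print(gap, slack)  # gap ~ 0; slack >= 0
```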

The following two lemmas can be found in [5].

Lemma 2.4

[5, Lemma 2.9] Let X be a complete CAT(0) space. Then for all \(u,x,y\in X\), the following inequality holds

$$\begin{aligned} d^2(x,u) \le d^2(y,u) + 2\langle \overrightarrow{xy},\overrightarrow{xu} \rangle . \end{aligned}$$

Lemma 2.5

[5, Lemma 2.10] Let X be a CAT(0) space. For any \(u,v\in X\) and \(t\in [0,1]\), let \(u_t = tu \oplus (1-t)v\). Then, for all \(x,y\in X\),

  1. (i)

    \(\langle \overrightarrow{u_t x},\overrightarrow{u_t y} \rangle \le t\langle \overrightarrow{u x},\overrightarrow{u_t y} \rangle + (1-t)\langle \overrightarrow{v x},\overrightarrow{u_t y} \rangle \);

  2. (ii)

    \(\langle \overrightarrow{u_t x},\overrightarrow{u y} \rangle \le t\langle \overrightarrow{u x},\overrightarrow{u y} \rangle + (1-t)\langle \overrightarrow{v x},\overrightarrow{u y} \rangle \) and \(\langle \overrightarrow{u_t x},\overrightarrow{u_t y} \rangle \le t\langle \overrightarrow{u x},\overrightarrow{v y} \rangle + (1-t)\langle \overrightarrow{v x},\overrightarrow{v y} \rangle \).

Let C be a nonempty complete convex subset of a CAT(0) space X. It is well known that for any \( x \in X \) there exists a unique point \( u \in C \) such that

$$\begin{aligned} d(x,u) = \min _{y\in C} d(x,y). \end{aligned}$$

The mapping \( P_C : X \rightarrow C \) defined by \( P_C(x) = u \) is called the metric projection from X onto C. Using the concept of quasilinearization, Dehghan and Rooin [15] introduced the duality mapping in CAT(0) spaces and studied its relation with the subdifferential. They then presented a characterization of the metric projection in CAT(0) spaces as follows:

Theorem 2.6

[15, Theorem 2.4] Let C be a nonempty convex subset of a complete CAT(0) space X, \(x\in X\) and \(u\in C\). Then

$$\begin{aligned} u = P_C x \ \ \ \text { if and only if } \ \ \ \langle \overrightarrow{yu} , \overrightarrow{ux} \rangle \ge 0, \quad \text { for all } y\in C. \end{aligned}$$
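In the Euclidean plane, the characterization of Theorem 2.6 reads \( \langle u-y, x-u\rangle \ge 0 \) for all \( y\in C \). The following sketch checks this for the illustrative choice of C as the closed unit ball, whose metric projection is radial.

```python
import numpy as np

rng = np.random.default_rng(2)

def P_C(x):
    """Metric projection onto the closed unit ball in the plane (radial)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([3.0, 4.0])
u = P_C(x)  # (0.6, 0.8)

# Theorem 2.6 in the Euclidean plane: <u - y, x - u> >= 0 for every y in C.
samples = [y for y in rng.uniform(-1, 1, size=(2000, 2)) if np.linalg.norm(y) <= 1.0]
worst = min(np.dot(u - y, x - u) for y in samples)
print(u, worst)  # worst stays nonnegative
```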

Now, we collect two types of convergence of sequences in CAT(0) spaces.

\( \varDelta \)-convergence: The concept of \(\varDelta \)-convergence, introduced by Lim [16] in 1976, was applied by Kirk and Panyanak [17] to CAT(0) spaces as follows.

Let \(\{x_n\}\) be a bounded sequence in a CAT(0) space X. For \(x\in X\), we set

$$\begin{aligned} r(x, \{x_n\})=\limsup _{n\rightarrow \infty }d(x, x_n). \end{aligned}$$

The asymptotic radius \(r(\{x_n\})\) of \(\{x_n\}\) is given by

$$\begin{aligned} r(\{x_n\})=\inf \{r(x, \{x_n\}): x\in X\}, \end{aligned}$$

and the asymptotic center \(A(\{x_n\})\) of \(\{x_n\}\) is the set

$$\begin{aligned} A(\{x_n\})=\{x\in X : r(x, \{x_n\})=r(\{x_n\})\}. \end{aligned}$$

It is known from Proposition 7 of [18] that in a CAT(0) space, \(A(\{x_n\})\) consists of exactly one point.

A sequence \(\{x_n\}\subset X\) is said to \(\varDelta \)-converge to \(x\in X\) if \(A(\{x_{n_k}\})=\{x\}\) for every subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\). Uniqueness of the asymptotic center implies that a CAT(0) space X satisfies Opial's property; i.e., given \(\{x_n\}\subset X\) such that \(\{x_n\}\) \(\varDelta \)-converges to x and given \(y\in X\) with \(y\not =x\),

$$\begin{aligned} \limsup _{n\rightarrow \infty }d(x_{n},x) < \limsup _{n\rightarrow \infty }d(x_{n}, y). \end{aligned}$$

Lemma 2.7

([19], p. 3690) Every bounded sequence in a complete CAT(0) space always has a \(\varDelta \)-convergent subsequence.

Lemma 2.8

[20] If C is a closed convex subset of a complete CAT(0) space and if \(\{x_n\}\) is a bounded sequence in C, then the asymptotic center of \(\{x_n\}\) is in C.

Since it is not possible to formulate the concept of demiclosedness in a CAT(0) setting, as stated in linear spaces, let us formally say that “\(I-T\) is demiclosed at zero”  if the conditions, \(\{x_n\}\subseteq C\) \(\varDelta \)-converges to x and \(d(x_n, Tx_n)\rightarrow 0\) imply \(x\in F(T)\).

Lemma 2.9

[20] If C is a closed convex subset of X and \(T:C\rightarrow X\) is a nonexpansive mapping, then the conditions \(\{x_n\}\) \(\varDelta \)-converges to x and \(d(x_n,Tx_n)\rightarrow 0\) imply \(x\in C\) and \(Tx=x\).

w -convergence: Having the notion of quasilinearization, Kakavandi and Amini [21] introduced the following notion of convergence.

A sequence \(\{x_n\}\) in a complete CAT(0) space (X, d) w-converges to \(x \in X\) if \(\lim _{n\rightarrow \infty }\langle \overrightarrow{x x_n},\overrightarrow{x y} \rangle = 0\), i.e., \(\lim _{n\rightarrow \infty }(d^2(x_n,x) - d^2(x_n, y) + d^2(x, y)) = 0\) for all \(y \in X\).

It is obvious that convergence in the metric implies w-convergence, and it is easy to check that w-convergence implies \(\varDelta \)-convergence [21, Proposition 2.5], but it is shown in [22, Example 4.7] that the converse is not valid. However, the following lemma gives another characterization of \(\varDelta \)-convergence as well as, more explicitly, a relation between w-convergence and \(\varDelta \)-convergence.

Lemma 2.10

[22, Theorem 2.6] Let X be a complete CAT(0) space, \(\{x_n\}\) be a sequence in X and \(x \in X\). Then \(\{x_n\}\) \(\varDelta \)-converges to x if and only if \(\limsup _{n\rightarrow \infty }\langle \overrightarrow{x x_n},\overrightarrow{x y} \rangle \le 0\) for all \(y\in X\).

Recall that a continuous linear functional \( \mu \) on \( l_{\infty } \), the Banach space of bounded real sequences, is called a Banach limit if \( \Vert \mu \Vert = \mu (1,1,\ldots )\) and \( \mu _n(a_n) = \mu _n(a_{n+1}) \) for all \( \{a_n\}\in l_{\infty } \).

The following lemma is an important tool for proving the strong convergence of a sequence \( \{d^2(x_n,q)\} \).

Lemma 2.11

[23, Lemma 2.1] Let \(\{a_n\}\) be a sequence of non-negative real numbers satisfying the property

\(a_{n+1} \le (1-\alpha _n)a_n + \alpha _n \beta _n, \phantom {xx} n\ge 0\),

where \(\{\alpha _n\}\subseteq (0,1)\) and \(\{\beta _n\}\subseteq \mathbb {R}\) such that

  1. (i)

    \(\sum _{n=0}^{\infty }\alpha _n = \infty \);

  2. (ii)

    \(\limsup _{n\rightarrow \infty }\beta _n \le 0\) or \(\sum _{n=0}^\infty |\alpha _n\beta _n| < \infty \).

Then \(\{a_n\}\) converges to zero as \(n\rightarrow \infty \).
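A quick numerical illustration of Lemma 2.11, with the hypothetical choices \( \alpha _n = \beta _n = 1/(n+1) \), which satisfy conditions (i) and (ii):

```python
import numpy as np

# Lemma 2.11 with alpha_n = 1/(n+1) (so sum alpha_n diverges) and
# beta_n = 1/(n+1) -> 0 (so limsup beta_n <= 0); here a_n = (H_n - 1)/n,
# which indeed tends to 0.
a = 5.0
for n in range(100000):
    alpha = 1.0 / (n + 1)
    beta = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * beta
print(a)  # tends to 0 as the number of iterations grows
```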

3 Main Results

Let X be a CAT(0) space, C a nonempty, closed, and convex subset of X, and \( T : C \rightarrow C \) a nonexpansive mapping such that \( F(T)\ne \emptyset \). Moreover, let \( f : C \rightarrow C \) be a contraction with coefficient \( \alpha \in [0,1) \). The viscosity method for nonexpansive mappings is essentially a regularization of nonexpansive mappings by contractions. In this section we consider the viscosity technique for the implicit midpoint rule of nonexpansive mappings, which generates a sequence \( \{x_n\} \) in the semi-implicit manner: \( x_0\in C \) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) \oplus (1-\alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) , \quad n\ge 0, \end{aligned}$$
(13)

where \( \alpha _n \in (0,1) \) for all \( n\ge 0 \). Note that the scheme (13) is well defined for all \( n\ge 0 \). Indeed, for an initial value \( x_0\in C \) we define the map \( G_f \) by

$$\begin{aligned} G_f(x) = \alpha _0 f(x_0) \oplus (1-\alpha _0)T\left( \frac{x_0\oplus x}{2} \right) . \end{aligned}$$

Then \( G_f \) is a contraction on C with coefficient \( \dfrac{1-\alpha _0}{2} \). Indeed, for any \( x,y\in C \), we get that

$$\begin{aligned} d\left( G_f(x),G_f(y) \right)= & {} d\left( \alpha _0 f(x_0) \oplus (1-\alpha _0)T\left( \frac{x_0\oplus x}{2} \right) ,\alpha _0 f(x_0)\right. \\&\left. \oplus (1-\alpha _0)T\left( \frac{x_0\oplus y}{2} \right) \right) \\\le & {} (1-\alpha _0) d\left( T\left( \frac{x_0\oplus x}{2}\right) ,T\left( \frac{x_0\oplus y}{2} \right) \right) \\\le & {} \dfrac{1-\alpha _0}{2}d(x,y). \end{aligned}$$

Hence, there exists \( x_1 \in C \) such that \(\displaystyle x_1 = \alpha _0 f(x_0) \oplus (1-\alpha _0)T\left( \frac{x_0\oplus x_1}{2} \right) \). By induction, the sequence \( \{x_n\} \) is well defined.
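The well-definedness argument also suggests how (13) can be computed in practice: each step is the fixed point of a \( (1-\alpha _n)/2 \)-contraction, so Banach iteration resolves it with geometric accuracy. A sketch of one implicit step in the Euclidean plane, with purely illustrative T, f, and parameters (T the projection onto the unit ball, f a 1/2-contraction, \( \alpha _0 = 0.1 \)):

```python
import numpy as np

def project_unit_ball(x):
    """Metric projection onto the closed unit ball (nonexpansive)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# One implicit step of (13): solve x = G_f(x) = a0 f(x0) + (1 - a0) T((x0 + x)/2).
# G_f is a (1 - a0)/2-contraction, so Banach iteration converges geometrically.
f = lambda x: 0.5 * x + np.array([0.2, 0.0])   # illustrative 1/2-contraction
x0, a0 = np.array([0.5, 0.0]), 0.1
G = lambda x: a0 * f(x0) + (1 - a0) * project_unit_ball(0.5 * (x0 + x))

x, gaps = x0.copy(), []
for _ in range(60):
    x_new = G(x)
    gaps.append(np.linalg.norm(x_new - x))
    x = x_new
ratio = gaps[10] / gaps[9]
print(x, ratio)  # successive gaps shrink by about (1 - a0)/2 = 0.45
```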

Theorem 3.1

Let X be a complete CAT(0) space, C a closed convex subset of X , \( T : C \rightarrow C \) a nonexpansive mapping with \( F(T)\ne \emptyset \), and \( f : C \rightarrow C \) a contraction with coefficient \( \alpha \in [0,1) \). Let \( \{x_n\} \) be generated by (13). Assume the following conditions hold:

  1. (i)

    \( \lim _{n\rightarrow \infty } \alpha _n = 0 \);

  2. (ii)

\( \sum _{n=0}^{\infty } \alpha _n = \infty \);

  3. (iii)

either \( \sum _{n=0}^{\infty } |\alpha _{n+1} - \alpha _n| < \infty \) or \( \lim _{n\rightarrow \infty }\dfrac{\alpha _{n+1}}{\alpha _n} = 1 \).

Then \(\{x_n\}\) converges strongly as \(n\rightarrow \infty \) to q such that \(q=P_{F(T)}f(q)\), which is equivalent to q being a solution of the following variational inequality:

$$\begin{aligned} \langle \overrightarrow{q f(q)},\overrightarrow{p q}\rangle \ge 0, \quad \ p\in F(T). \end{aligned}$$
(14)

Proof

We first show that the sequence \(\{x_n\}\) is bounded. For any \(p\in F(T)\), we have that

$$\begin{aligned} d(x_{n+1},p)= & {} d\left( \alpha _n f(x_n) \oplus (1-\alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,p\right) \\\le & {} \alpha _n d( f(x_n),p) + (1 - \alpha _n)d\left( T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,p\right) \\\le & {} \alpha _n \left( d( f(x_n),f(p)) + d(f(p),p)\right) + (1 - \alpha _n)d\left( \frac{x_n\oplus x_{n+1}}{2},p\right) \\\le & {} \alpha _n\alpha d( x_n,p)\\&+\,\alpha _nd(f(p),p) + (1 - \alpha _n)\left( \dfrac{1}{2}d(x_n,p) +\dfrac{1}{2}d(x_{n+1},p)\right) . \end{aligned}$$

Thus,

$$\begin{aligned} \dfrac{1+\alpha _n}{2}d(x_{n+1},p) \le \dfrac{1+(2\alpha - 1)\alpha _n}{2} d(x_n,p) + \alpha _n d(f(p),p). \end{aligned}$$

Hence,

$$\begin{aligned} d(x_{n+1},p)\le & {} \dfrac{1+(2\alpha - 1)\alpha _n}{1+\alpha _n} d(x_n,p) + \dfrac{2\alpha _n}{1+\alpha _n} d(f(p),p) \\\le & {} \left( 1 - \dfrac{2(1-\alpha )\alpha _n}{1+\alpha _n}\right) d(x_n,p) + \dfrac{2(1-\alpha )\alpha _n}{1+\alpha _n}\left( \dfrac{1}{1-\alpha }\right) d(f(p),p). \end{aligned}$$

Therefore,

$$\begin{aligned} d(x_{n+1},p) \le \max \left\{ d(x_n,p),\dfrac{1}{1-\alpha }d(f(p),p) \right\} . \end{aligned}$$

By induction, we have

$$\begin{aligned} d(x_n,p) \le \max \left\{ d(x_0,p),\frac{1}{1-\alpha }d(f(p),p) \right\} , \end{aligned}$$

for all \(n\in \mathbb {N}\). Hence \(\{x_n\}\) is bounded, and so are \(\{T(x_n)\}\) and \(\{f(x_n)\}\).

We now show that \( d\left( x_{n+1},x_n \right) \rightarrow 0 \) as \( n\rightarrow \infty \). From (13), we have that

$$\begin{aligned} d(x_{n+1},x_n)= & {} d\left( \alpha _n f(x_n) \oplus (1 - \alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) , \alpha _{n-1} f(x_{n-1}) \right. \\&\left. \oplus (1 - \alpha _{n-1})T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\\le & {} d\left( \alpha _n f(x_n) \oplus (1 - \alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,\alpha _{n} f(x_{n})\right. \\&\left. \oplus (1 - \alpha _{n})T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\&+ d\left( \alpha _n f(x_n) \oplus (1 - \alpha _n)T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) ,\alpha _{n} f(x_{n-1})\right. \\&\left. \oplus (1 - \alpha _{n})T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\&+ d\left( \alpha _n f(x_{n-1}) \oplus (1 - \alpha _n)T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) ,\alpha _{n-1} f(x_{n-1})\right. \\&\left. \oplus (1 - \alpha _{n-1})T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\\le & {} (1-\alpha _n)d\left( T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\&\quad +\,\alpha _n d(f(x_n),f(x_{n-1})) \\&\quad +\,| \alpha _n - \alpha _{n-1} |d\left( f(x_{n-1}),T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\\le & {} (1-\alpha _n)d\left( \frac{x_n\oplus x_{n+1}}{2},\frac{x_{n-1}\oplus x_{n}}{2}\right) + \alpha _n \alpha d(x_n,x_{n-1}) \\&+ | \alpha _n - \alpha _{n-1} |d\left( f(x_{n-1}),T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \\\le & {} \dfrac{(1-\alpha _n)}{2}\left( d(x_{n+1},x_{n}) + d(x_n,x_{n-1})\right) + \alpha _n \alpha d(x_n,x_{n-1}) \\&\quad +\,| \alpha _n - \alpha _{n-1} |M, \end{aligned}$$

where \( M\ge \sup _{n\ge 1}d\left( f(x_{n-1}),T\left( \frac{x_{n-1}\oplus x_{n}}{2} \right) \right) \). It then follows that

$$\begin{aligned} \dfrac{1+\alpha _n}{2}d(x_{n+1},x_n) \le \left( \dfrac{1}{2}(1-\alpha _n) + \alpha \alpha _n \right) d(x_n,x_{n-1}) + |\alpha _n - \alpha _{n-1}|M. \end{aligned}$$

Thus

$$\begin{aligned} d(x_{n+1},x_n)\le & {} \left( \dfrac{1+(2\alpha -1)\alpha _n}{1+\alpha _n} \right) d(x_n,x_{n-1}) + |\alpha _n - \alpha _{n-1}|\dfrac{2M}{1+\alpha _n} \\= & {} \left( 1 - \dfrac{2(1-\alpha )\alpha _n}{1+\alpha _n} \right) d(x_n,x_{n-1}) + |\alpha _n - \alpha _{n-1}|\dfrac{2M}{1+\alpha _n}. \end{aligned}$$

By the conditions (ii) and (iii) and Lemma 2.11, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }d(x_{n+1},x_n) = 0. \end{aligned}$$
(15)

Consequently, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }d(x_n,T(x_n)) = 0. \end{aligned}$$
(16)

Indeed, we consider

$$\begin{aligned} d(x_n,T(x_n))\le & {} d(x_n,x_{n+1})\\&+\,d\left( x_{n+1},T\left( \dfrac{x_n\oplus x_{n+1}}{2}\right) \right) + d\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2}\right) ,T(x_n) \right) \\\le & {} d(x_n,x_{n+1}) + \alpha _nd\left( f(x_{n}),T\left( \dfrac{x_n\oplus x_{n+1}}{2}\right) \right) + \dfrac{1}{2}d(x_n,x_{n+1}) \\\le & {} \dfrac{3}{2}d(x_n,x_{n+1}) + \alpha _n M, \end{aligned}$$

where \( M\ge \sup _{n\ge 0} d\left( f(x_{n}),T\left( \dfrac{x_n\oplus x_{n+1}}{2}\right) \right) \) (enlarging the constant M above if necessary). By virtue of condition (i) and (15), we obtain the result.

We will show that \( \{x_n\} \) converges strongly to q such that \( q = P_{F(T)} f(q) \), which is equivalent to the variational inequality (14).

As a matter of fact, since \( \{x_n\} \) is bounded, by Lemma 2.7 we can find a subsequence \( \{x_{n_j}\} \) of \( \{x_n\} \) that \( \varDelta \)-converges to a point q and, moreover, satisfies

$$\begin{aligned} \limsup _{n\rightarrow \infty } \langle \overrightarrow{qf(q)},\overrightarrow{qx_n}\rangle = \lim _{j\rightarrow \infty }\langle \overrightarrow{qf(q)},\overrightarrow{qx_{n_j}}\rangle . \end{aligned}$$
(17)

It follows from (16) and Lemma 2.9 that \( q\in F(T) \). We claim that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle \overrightarrow{qf(q)},\overrightarrow{qx_n} \rangle \le 0. \end{aligned}$$
(18)

By Lemma 2.10, we get that

$$\begin{aligned} \limsup _{j\rightarrow \infty }\langle \overrightarrow{qf(q)},\overrightarrow{qx_{n_j}} \rangle \le 0. \end{aligned}$$

Combining the last inequality with (17), we obtain the claim.

We now prove that \( \{x_n\} \) converges strongly to q. For any \(n\in \mathbb {N}\), set \(y_n=\alpha _nq \oplus (1-\alpha _n)T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) \). It follows from Lemma 2.4 and Lemma 2.5 (i), (ii) that

$$\begin{aligned} d^2(x_{n+1},q)\le & {} d^2(y_{n},q) + 2\langle \overrightarrow{x_{n+1}y_{n}} , \overrightarrow{x_{n+1}q} \rangle \\\le & {} \left( \alpha _n d(q,q) + (1-\alpha _n)d\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) ,q\right) \right) ^2 \\&+\,2\left[ \alpha _n\langle \overrightarrow{f(x_{n})y_{n}} , \overrightarrow{x_{n+1}q} \rangle + (1-\alpha _n)\left\langle \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) y_{n}} , \overrightarrow{x_{n+1}q} \right\rangle \right] \\\le & {} (1-\alpha _n)^2d^2\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) ,q\right) + 2\left[ \alpha _n\alpha _n\langle \overrightarrow{f(x_{n})q} , \overrightarrow{x_{n+1}q} \rangle \right. \\&+ \alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(x_{n})T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) } , \overrightarrow{x_{n+1}q} \right\rangle \\&+ (1-\alpha _n)\alpha _n\left\langle \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} , \overrightarrow{x_{n+1}q} \right\rangle \\&+ (1-\alpha _n)(1-\alpha _n)\left\langle \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) } , \overrightarrow{x_{n+1}q} \right\rangle \right] \\= & {} (1-\alpha _n)^2d^2\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) ,q\right) + 2\alpha _n\langle \overrightarrow{f(x_{n})q} , \overrightarrow{x_{n+1}q} \rangle \\\le & {} (1-\alpha _n)^2d^2\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \\&+\, 2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(x_{n})q} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle \\\le & {} (1-\alpha _n)^2d^2\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \\&+\,2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(x_{n})f(q)} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle \\&+\, 2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(q)q} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle \\\le & {} (1-\alpha _n)^2d^2\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \\&+\,2\alpha \alpha _n(1-\alpha _n) d(x_{n},q)\cdot d\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) ,q\right) \\&+\,2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(q)q} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle \\\le & {} (1-\alpha _n)^2d^2\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \\&+\,2\alpha \alpha _n(1-\alpha _n) d(x_{n},q)\cdot d\left( \dfrac{x_n\oplus x_{n+1}}{2} ,q\right) \\&+\,2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(q)q} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle . \end{aligned}$$

Put

$$\begin{aligned} \beta _n = 2\alpha _n(1-\alpha _n)\left\langle \overrightarrow{f(q)q} , \overrightarrow{T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) q} \right\rangle . \end{aligned}$$
(19)

Then the last inequality above implies that

$$\begin{aligned} 0\le & {} (1-\alpha _n)^2d^2\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) + 2\alpha \alpha _n(1-\alpha _n) d(x_{n},q)\cdot d\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \\&+ \beta _n - d^2(x_{n+1},q). \end{aligned}$$

Using the quadratic formula to solve the above quadratic inequality for \( d\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \), we get that

$$\begin{aligned} d\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right)\ge & {} \dfrac{1}{2(1-\alpha _n)^2} \Big [ -2\alpha \alpha _n(1-\alpha _n)d(x_n,q) \\&+ \sqrt{4\alpha ^2\alpha ^2_n(1-\alpha _n)^2d^2(x_n,q)\!-\!4(1\!-\!\alpha _n)^2(\beta _n \!-\! d^2(x_{n+1},q))} \Big ] \\= & {} \dfrac{-\alpha \alpha _nd(x_n,q) + \sqrt{\alpha ^2\alpha ^2_nd^2(x_n,q)+d^2(x_{n+1},q)-\beta _n}}{1-\alpha _n}. \end{aligned}$$

This, combined with \( d\left( \dfrac{x_n\oplus x_{n+1}}{2},q\right) \le \dfrac{1}{2}d(x_{n+1},q) + \dfrac{1}{2}d(x_{n},q) \) (by Lemma 2.1), implies that

$$\begin{aligned}&\dfrac{1}{2}d(x_{n+1},q) + \dfrac{1}{2}d(x_{n},q) \\&\quad \ge \,\dfrac{-\alpha \alpha _nd(x_n,q) + \sqrt{\alpha ^2\alpha ^2_nd^2(x_n,q)+d^2(x_{n+1},q)-\beta _n}}{1-\alpha _n}. \end{aligned}$$

Hence,

$$\begin{aligned}&\dfrac{1}{4}\left( (1-\alpha _n) d(x_{n+1},q) + (1+(2\alpha -1)\alpha _n)d(x_n,q) \right) ^2\\&\quad \ge \,\alpha ^2 \alpha ^2_nd^2(x_n,q) + d^2(x_{n+1},q) -\beta _n, \end{aligned}$$

which expands to the inequality

$$\begin{aligned}&\dfrac{1}{4}(1-\alpha _n)^2 d^2(x_{n+1},q) + \dfrac{1}{4}(1+(2\alpha -1)\alpha _n)^2d^2(x_n,q)\\&\qquad +\dfrac{1}{2}(1-\alpha _n)(1+(2\alpha -1)\alpha _n)d(x_n,q)d(x_{n+1},q) \\&\quad \ge \,\alpha ^2 \alpha ^2_nd^2(x_n,q) + d^2(x_{n+1},q) -\beta _n. \end{aligned}$$

By using the standard inequality

$$\begin{aligned} 2d(x_n,q)d(x_{n+1},q) \le d^2(x_n,q) + d^2(x_{n+1},q), \end{aligned}$$

we can get that

$$\begin{aligned}&\left( 1-\dfrac{1}{4}(1-\alpha _n)^2 -\dfrac{1}{4}(1-\alpha _n)(1+(2\alpha -1)\alpha _n) \right) d^2(x_{n+1},q) \nonumber \\&\quad \le \,\left( \dfrac{1}{4}(1+(2\alpha - 1)\alpha _n)^2 + \dfrac{1}{4}(1-\alpha _n)(1 + (2\alpha -1)\alpha _n) - \alpha ^2\alpha ^2_n \right) d^2(x_n,q)\nonumber \\&\qquad + \beta _n. \end{aligned}$$
(20)

Therefore,

$$\begin{aligned}&d^2(x_{n+1},q) \nonumber \\&\quad \le \,\dfrac{\dfrac{1}{4}(1+(2\alpha - 1)\alpha _n)^2 + \dfrac{1}{4}(1-\alpha _n)(1 + (2\alpha -1)\alpha _n) - \alpha ^2\alpha ^2_n}{1-\dfrac{1}{4}(1-\alpha _n)^2 -\dfrac{1}{4}(1-\alpha _n)(1+(2\alpha -1)\alpha _n)}d^2(x_n,q)\nonumber \\&\qquad +\gamma _n, \end{aligned}$$
(21)

where

$$\begin{aligned} \gamma _n = \dfrac{\beta _n}{1-\dfrac{1}{4}(1-\alpha _n)^2 -\dfrac{1}{4}(1-\alpha _n)(1+(2\alpha -1)\alpha _n)}. \end{aligned}$$
(22)

Note that

$$\begin{aligned} 1-\dfrac{1}{4}(1-\alpha _n)^2 -\dfrac{1}{4}(1-\alpha _n)(1+(2\alpha -1)\alpha _n) = 1-\dfrac{1}{2}(1-\alpha _n)(1-(1-\alpha )\alpha _n) \end{aligned}$$

and

$$\begin{aligned}&\dfrac{1}{4}(1+(2\alpha - 1)\alpha _n)^2 + \dfrac{1}{4}(1-\alpha _n)(1 + (2\alpha -1)\alpha _n) - \alpha ^2\alpha ^2_n \\&\quad =\,\dfrac{1}{2}(1+(2\alpha - 1)\alpha _n)(1-(1-\alpha )\alpha _n) - \alpha ^2\alpha ^2_n. \end{aligned}$$

We can rewrite (21) as

$$\begin{aligned} d^2(x_{n+1},q) \le \dfrac{\dfrac{1}{2}(1+(2\alpha - 1)\alpha _n)(1-(1-\alpha )\alpha _n) - \alpha ^2\alpha ^2_n}{1-\dfrac{1}{2}(1-\alpha _n)(1-(1-\alpha )\alpha _n)}d^2(x_n,q) + \gamma _n. \end{aligned}$$
(23)

We consider the function

$$\begin{aligned} h(t) := \dfrac{1}{t}\left\{ 1- \dfrac{\dfrac{1}{2}(1+(2\alpha - 1)t)(1-(1-\alpha )t) - \alpha ^2t^2}{1-\dfrac{1}{2}(1-t)(1-(1-\alpha )t)} \right\} , \quad t>0. \end{aligned}$$

After some calculation, we can rewrite h(t) as

$$\begin{aligned} h(t) = \dfrac{2(1-\alpha )-(1-\alpha )^2t + \alpha ^2t}{1-\dfrac{1}{2}(1-t)(1-(1-\alpha )t)}. \end{aligned}$$

It turns out that \( \lim \limits _{t\rightarrow 0^+} h(t) = 4(1-\alpha ) > 0 \). Hence there exists \( \delta _0 > 0 \) such that

$$\begin{aligned} h(t)> \varepsilon _0 := 3(1-\alpha ) > 0, \quad 0<t<\delta _0. \end{aligned}$$

In other words, we have

$$\begin{aligned} \dfrac{\dfrac{1}{2}(1+(2\alpha - 1)t)(1-(1-\alpha )t) - \alpha ^2t^2}{1-\dfrac{1}{2}(1-t)(1-(1-\alpha )t)}< 1 - \varepsilon _0 t, \quad 0<t<\delta _0. \end{aligned}$$

Since \( \alpha _n \rightarrow 0 \) as \( n\rightarrow \infty \), there exists \( N_0\in \mathbb {N} \) such that \( \alpha _n < \delta _0 \) for all \( n \ge N_0 \). It then follows from (23) that, for all \( n \ge N_0 \),

$$\begin{aligned} d^2(x_{n+1},q) \le (1-\varepsilon _0\alpha _n)d^2(x_n,q) + \gamma _n \end{aligned}$$
(24)

Notice that

$$\begin{aligned} d^2\left( T\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) , x_n \right) \rightarrow 0 \text { as } n\rightarrow \infty . \end{aligned}$$

It then follows from the definition (19) of \( \beta _n \) and (18) that

$$\begin{aligned} \limsup _{n\rightarrow \infty } \dfrac{\beta _n}{\alpha _n} \le 0, \end{aligned}$$

which implies that

$$\begin{aligned} \limsup _{n\rightarrow \infty } \dfrac{\gamma _n}{\alpha _n} \le 0. \end{aligned}$$
(25)

Inequality (25) and the conditions (C1) and (C2) enable us to apply Lemma 2.11 to the inequality (24) to conclude that \( d^2(x_n,q) \rightarrow 0 \) as \( n\rightarrow \infty \).
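To make the application explicit: inequality (24) is of the form

$$\begin{aligned} a_{n+1} \le (1-c_n)a_n + c_n\sigma _n, \quad a_n = d^2(x_n,q),\ c_n = \varepsilon _0\alpha _n,\ \sigma _n = \dfrac{\gamma _n}{\varepsilon _0\alpha _n}, \end{aligned}$$

where \( \sum _{n} c_n = \infty \) by (C2) and \( \limsup _{n\rightarrow \infty }\sigma _n \le 0 \) by (25); recursion lemmas of this type (such as Lemma 2.11) then give \( a_n \rightarrow 0 \).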

Finally, we show that q is a solution of (14). Applying Lemma 2.2, for any \(p\in F(T)\),

$$\begin{aligned} d^2(x_{n+1},p)= & {} d^2\left( \alpha _n f(x_n) \oplus (1-\alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,p\right) \\\le & {} \alpha _n d^2(f(x_n),p) + (1-\alpha _n)d^2\left( T\left( \frac{x_n\oplus x_{n+1}}{2} \right) ,p\right) \\&-\, \alpha _n(1-\alpha _n)d^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) \\\le & {} \alpha _n d^2(f(x_n),p) + (1-\alpha _n)d^2\left( \frac{x_n\oplus x_{n+1}}{2},p\right) \\&-\, \alpha _n(1-\alpha _n)d^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) \\\le & {} \alpha _n d^2(f(x_n),p)\\&\quad +\,(1-\alpha _n)\left( \dfrac{1}{2}d^2(x_{n+1},p) + \dfrac{1}{2}d^2(x_{n},p) - \dfrac{1}{4}d^2(x_n,x_{n+1}) \right) \\&-\, \alpha _n(1-\alpha _n)d^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) \\\le & {} \alpha _n d^2(f(x_n),p) + \dfrac{(1-\alpha _n)}{2} d^2(x_{n+1},p) + \dfrac{(1-\alpha _n)}{2}d^2(x_{n},p)\\&-\, \alpha _n(1-\alpha _n)d^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) . \end{aligned}$$

Let \( \mu \) be a Banach limit. Then

$$\begin{aligned} \mu _n d^2(x_{n+1},p)\le & {} \alpha _n \mu _n d^2(f(x_n),p) + \dfrac{(1-\alpha _n)}{2} \mu _nd^2(x_{n+1},p)\\&+\,\dfrac{(1-\alpha _n)}{2} \mu _nd^2(x_{n},p) \\&-\,\alpha _n(1-\alpha _n)\mu _nd^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) . \end{aligned}$$

It follows that

$$\begin{aligned} \mu _n d^2(x_{n+1},p) \le \mu _n d^2(f(x_n),p) - (1-\alpha _n)\mu _nd^2\left( f(x_n),T\left( \frac{x_n\oplus x_{n+1}}{2} \right) \right) . \end{aligned}$$

Since \( x_n \rightarrow q \), we obtain that

$$\begin{aligned} d^2(q,p) \le d^2(f(q),p) - d^2\left( f(q),q \right) . \end{aligned}$$

Hence

$$\begin{aligned} 0\le & {} \frac{1}{2}\left[ d^2(q,q) + d^2(f(q),p) - d^2(q,p) - d^2(f(q),q)\right] \\= & {} \langle \overrightarrow{qf(q)} , \overrightarrow{pq} \rangle , \quad \forall p\in F(T). \end{aligned}$$
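The last equality is the quasilinearization identity (in the sense of Berg and Nikolaev),

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd} \rangle = \dfrac{1}{2}\left[ d^2(a,d) + d^2(b,c) - d^2(a,c) - d^2(b,d) \right] , \end{aligned}$$

applied with \( a=q \), \( b=f(q) \), \( c=p \), \( d=q \).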

That is, q solves the inequality (14). The proof is complete. \(\square \)

If we set \( f(x)=u \) for all \( x\in C \) in Theorem 3.1, we have the following corollary.

Corollary 3.2

Let X be a complete CAT(0) space, C a closed convex subset of X , and \( T : C \rightarrow C \) a nonexpansive mapping with \( F(T)\ne \emptyset \). Let \( \{x_n\} \) be generated by \( u,x_0\in C \) and

$$\begin{aligned} x_{n+1} = \alpha _n u \oplus (1-\alpha _n)T\left( \frac{x_n\oplus x_{n+1}}{2} \right) , \quad n\ge 0. \end{aligned}$$

Assume the following conditions hold:

  1. (i)

    \( \lim _{n\rightarrow \infty } \alpha _n = 0 \);

  2. (ii)

\( \sum _{n=0}^{\infty } \alpha _n = \infty \);

  3. (iii)

either \( \sum _{n=0}^{\infty } |\alpha _{n+1} - \alpha _n| < \infty \) or \( \lim _{n\rightarrow \infty }\dfrac{\alpha _{n+1}}{\alpha _n} = 1 \).

Then \(\{x_n\}\) converges strongly as \(n\rightarrow \infty \) to \( q=P_{F(T)}u \), which is equivalent to q solving the following variational inequality:

$$\begin{aligned} \langle \overrightarrow{q u},\overrightarrow{p q}\rangle \ge 0, \quad p\in F(T). \end{aligned}$$
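As a numerical sketch of Corollary 3.2 (the map `T`, the choice \( \alpha _n = 1/(n+2) \), and the helper `solve_implicit_step` are our own illustrative assumptions, not part of the corollary): the real line, with \( x \oplus y \) realized as linear interpolation, is a complete CAT(0) space, so the implicit scheme can be run with T the metric projection onto [0, 1], a nonexpansive map with \( F(T) = [0,1] \) and \( P_{F(T)}u \) the clamp of u.

```python
def T(x):
    """Metric projection of the real line onto [0, 1] (nonexpansive)."""
    return min(max(x, 0.0), 1.0)

def solve_implicit_step(x_n, alpha, u, tol=1e-12):
    """Solve y = alpha*u + (1 - alpha)*T((x_n + y)/2) by fixed-point
    iteration; the right-hand side is a contraction in y with factor
    at most (1 - alpha)/2 < 1, so the solution exists and is unique."""
    y = x_n
    while True:
        y_new = alpha * u + (1.0 - alpha) * T((x_n + y) / 2.0)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

def iterate(u, x0, n_steps):
    """Run the implicit Halpern-type scheme of Corollary 3.2."""
    x = x0
    for n in range(n_steps):
        alpha = 1.0 / (n + 2)  # satisfies conditions (i)-(iii)
        x = solve_implicit_step(x, alpha, u)
    return x

q = iterate(u=3.0, x0=0.0, n_steps=2000)
# q approaches P_{F(T)}(3) = 1, the projection of u onto F(T) = [0, 1]
```

The inner loop is one simple way to resolve the implicit step; any method that solves the one-dimensional fixed-point equation would do.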

4 Applications

Let (X, d) be a metric space and \( g : X \rightarrow (-\infty ,\infty ] \) be a proper and convex function. One of the major problems in optimization is to find \( x \in X \) such that

$$\begin{aligned} g(x) = \min _{y\in X} g(y). \end{aligned}$$

We denote by \( \arg \min _{y\in X} g (y) \) the set of minimizers of g . Recall that a function \( g : C \rightarrow (-\infty ,\infty ] \) defined on a convex subset C of a CAT(0) space is convex if \( g(\lambda x \oplus (1-\lambda )y) \le \lambda g(x) + (1-\lambda )g(y) \) for all \( x, y \in C \) and \( \lambda \in (0,1) \). For \( r > 0 \), define the Moreau-Yosida resolvent of g in CAT(0) spaces as

$$\begin{aligned} J_r(x) = \arg \min _{y\in X}\left( g(y) + \dfrac{1}{2r}d^2(y,x) \right) \end{aligned}$$
(26)

for all \( x \in X \) (see [24]). The mapping \( J_r \) is well defined for all \( r > 0 \) (see [24, 25]).
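As a concrete instance of (26) (the function \( g(y) = |y| \) and both helpers below are our own example, not from the text): on the real line, the resolvent of \( g(y) = |y| \) is the classical soft-thresholding map, which can be checked against a direct grid minimization of the objective in (26).

```python
def resolvent_grid(x, r, lo=-10.0, hi=10.0, steps=200001):
    """Approximate J_r(x) = argmin_y ( |y| + (y - x)^2 / (2r) )
    by brute-force minimization over a fine grid on [lo, hi]."""
    best_y, best_val = lo, float("inf")
    for i in range(steps):
        y = lo + (hi - lo) * i / (steps - 1)
        val = abs(y) + (y - x) ** 2 / (2.0 * r)
        if val < best_val:
            best_y, best_val = y, val
    return best_y

def soft_threshold(x, r):
    """Closed form of J_r for g(y) = |y|: shrink x toward 0 by r."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

# Both computations agree up to the grid resolution; e.g. for x = 2, r = 0.5
# they give J_r(2) = 1.5.  A point x is fixed by J_r exactly when it already
# minimizes g, in line with Lemma 4.1 (ii).
```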

Lemma 4.1

Let (Xd) be a complete CAT(0) space, and \( g : X \rightarrow (-\infty ,\infty ] \) be a proper, convex and lower semicontinuous function. Then, for every \( r > 0 \),

  1. (i)

    the resolvent \( J_r \) is firmly nonexpansive, that is,

    $$\begin{aligned} d(J_r(x),J_r(y)) \le d( (1-\lambda )x\oplus \lambda J_r(x),(1-\lambda )y\oplus \lambda J_r(y) ) \end{aligned}$$

    for all \( x, y \in X \) and for all \( \lambda \in (0, 1) \);

  2. (ii)

    the set \( F(J_r) \) of fixed points of the resolvent associated with g coincides with the set \( \arg \min _{y \in X} g (y) \) of minimizers of g .

Remark 4.2

Every firmly nonexpansive mapping is nonexpansive.

The following two corollaries follow from Theorem 3.1 and Corollary 3.2.

Corollary 4.3

Let X be a complete CAT(0) space, C a closed convex subset of X , and \( g : X \rightarrow (-\infty ,\infty ] \) a proper, convex and lower semicontinuous function. Suppose that g has a minimizer. Let \( f : C \rightarrow C \) be a contraction with coefficient \( \alpha \in [0,1) \). Let \( r > 0 \) and define the sequence \( \{x_n\} \) as follows: \( x_0\in C \) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) \oplus (1-\alpha _n) J_r\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) , \quad n\ge 0. \end{aligned}$$

Assume the following conditions hold:

  1. (i)

    \( \lim _{n\rightarrow \infty } \alpha _n = 0 \);

  2. (ii)

\( \sum _{n=0}^{\infty } \alpha _n = \infty \);

  3. (iii)

either \( \sum _{n=0}^{\infty } |\alpha _{n+1} - \alpha _n| < \infty \) or \( \lim _{n\rightarrow \infty }\dfrac{\alpha _{n+1}}{\alpha _n} = 1 \).

Then \(\{x_n\}\) converges strongly as \(n\rightarrow \infty \) to a minimizer q of g with \(q=P_{F(J_r)}f(q)\), which is equivalent to q solving the following variational inequality:

$$\begin{aligned} \langle \overrightarrow{q f(q)},\overrightarrow{p q}\rangle \ge 0, \quad \ p\in F(J_r). \end{aligned}$$
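The proximal viscosity scheme of Corollary 4.3 can also be run numerically on the real line (the quadratic g, the contraction f, and the step counts below are our own illustrative assumptions): for \( g(y) = (y-5)^2 \), solving \( 2(y-5) + (y-x)/r = 0 \) gives the closed-form resolvent \( J_r(x) = (10r + x)/(2r+1) \), whose unique fixed point is the minimizer \( y = 5 \).

```python
def J_r(x, r):
    """Moreau-Yosida resolvent of g(y) = (y - 5)^2 on the real line."""
    return (10.0 * r + x) / (2.0 * r + 1.0)

def f(x):
    """A contraction with coefficient 1/2 (arbitrary illustrative choice)."""
    return 0.5 * x

def proximal_viscosity(x0, r, n_steps):
    """Run x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) J_r((x_n + x_{n+1})/2)."""
    x = x0
    for n in range(n_steps):
        alpha = 1.0 / (n + 2)  # satisfies conditions (i)-(iii)
        # solve the implicit step by fixed-point iteration; the right-hand
        # side is a contraction in y, so the inner loop converges
        y = x
        for _ in range(200):
            y = alpha * f(x) + (1.0 - alpha) * J_r((x + y) / 2.0, r)
        x = y
    return x

# The iterates approach the minimizer q = 5 of g, as Corollary 4.3 predicts.
```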

Corollary 4.4

Let X be a complete CAT(0) space, C a closed convex subset of X , and \( g : X \rightarrow (-\infty ,\infty ] \) a proper, convex and lower semicontinuous function. Suppose that g has a minimizer. Let \( r > 0 \) and define the sequence \( \{x_n\} \) as follows: \( u, x_0\in C \) and

$$\begin{aligned} x_{n+1} = \alpha _n u \oplus (1-\alpha _n) J_r\left( \dfrac{x_n\oplus x_{n+1}}{2} \right) , \quad n\ge 0. \end{aligned}$$

Assume the following conditions hold:

  1. (i)

    \( \lim _{n\rightarrow \infty } \alpha _n = 0 \);

  2. (ii)

\( \sum _{n=0}^{\infty } \alpha _n = \infty \);

  3. (iii)

either \( \sum _{n=0}^{\infty } |\alpha _{n+1} - \alpha _n| < \infty \) or \( \lim _{n\rightarrow \infty }\dfrac{\alpha _{n+1}}{\alpha _n} = 1 \).

Then \(\{x_n\}\) converges strongly as \(n\rightarrow \infty \) to a minimizer q of g with \(q=P_{F(J_r)}u\), which is equivalent to q solving the following variational inequality:

$$\begin{aligned} \langle \overrightarrow{q u},\overrightarrow{p q}\rangle \ge 0, \quad p\in F(J_r). \end{aligned}$$