1 Introduction

Let \((X,d)\) be a metric space, \(x,y \in X\) and \(I=[0,d(x,y)]\) be an interval. A curve c joining x to y (or simply a geodesic path) is a map \(c:I \rightarrow X\) such that \(c(0)=x\), \(c(d(x,y))=y\) and \(d(c(t),c(t'))=|t-t'|\) for all \(t,t'\in I\); in other words, c is an isometry of I onto its image. The image of a geodesic path is called a geodesic segment, and it is denoted by [x, y] whenever it is unique. We say that a metric space X is a geodesic space if every pair of points \(x,y \in X\) is joined by a geodesic path. A geodesic triangle \(\Delta (x_1,x_2,x_3)\) in a geodesic metric space \((X,d)\) consists of three vertices \(x_1,x_2,x_3\in X\) together with a geodesic segment between each pair of vertices. For any geodesic triangle \(\Delta \), there is a comparison (Alexandrov) triangle \(\bar{\Delta }\subset \mathbb {R}^2\) with vertices \(\bar{x}_1,\bar{x}_2,\bar{x}_3\) such that \(d(x_i,x_j)=d_{\mathbb {R}^2}(\bar{x}_i,\bar{x}_j)\) for \(i,j\in \{1,2,3\}\). A geodesic space X is a CAT(0) space if the distance between any pair of points on a geodesic triangle \(\Delta \) does not exceed the distance between the corresponding pair of points on its comparison triangle \(\bar{\Delta }\). More precisely, a geodesic triangle \(\Delta \) with comparison triangle \(\bar{\Delta }\subset \mathbb {R}^2\) is said to satisfy the CAT(0) inequality if, for all points x, y of \(\Delta \) and their comparison points \(\bar{x},\bar{y}\) of \(\bar{\Delta }\),

$$\begin{aligned} d(x,y)\le d_{\mathbb {R}^2}(\bar{x},\bar{y}). \end{aligned}$$
(1.1)

Let \(x,y,z\) be points in X and \(y_0\) be the midpoint of the segment [y, z]; then the CAT(0) inequality implies

$$\begin{aligned} d^2(x,y_0)\le \frac{1}{2}d^2(x,y)+\frac{1}{2}d^2(x,z)-\frac{1}{4}d^2(y,z). \end{aligned}$$
(1.2)

Inequality (1.2) is known as the CN inequality of Bruhat and Tits [9]. In fact, a geodesic space X is a CAT(0) space if and only if all of its geodesic triangles satisfy the CN inequality. Examples of CAT(0) spaces include Hadamard manifolds, \(\mathbb {R}\)-trees [27], pre-Hilbert spaces [7], hyperbolic metric spaces [42], Euclidean buildings [8] and the Hilbert ball [19].
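For orientation, in the Euclidean plane the CN inequality holds with equality: writing \(y_0=\frac{1}{2}(y+z)\) for the midpoint, the median (Apollonius) identity gives

$$\begin{aligned} \Vert x-y_0\Vert ^2=\Big \Vert \frac{1}{2}(x-y)+\frac{1}{2}(x-z)\Big \Vert ^2 =\frac{1}{2}\Vert x-y\Vert ^2+\frac{1}{2}\Vert x-z\Vert ^2-\frac{1}{4}\Vert y-z\Vert ^2. \end{aligned}$$

The comparison-triangle definition transfers this model identity, in the form of the inequality (1.2), to a general CAT(0) space.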

Let X be a CAT(0) space and D be a nonempty closed and convex subset of X. Suppose CB(X) is the family of nonempty closed and bounded subsets of X. Then the Hausdorff metric H on CB(X) is defined by

$$\begin{aligned} H(A, B) = \max \{\sup _{a \in A}~dist(a,B),~\sup _{b\in B}~dist(b,A)\}, ~\forall ~A,B~\in CB(X), \end{aligned}$$

where \(dist(a, B)=\inf \{d(a,b):b \in B \}.\)
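For illustration, the Hausdorff metric between two finite subsets of the real line can be computed directly from this definition; a minimal Python sketch (the sets below are illustrative choices of ours):

```python
# Minimal sketch: Hausdorff distance between two finite subsets of the real line,
# computed directly from H(A,B) = max{ sup_{a in A} dist(a,B), sup_{b in B} dist(b,A) }.

def dist(point, target_set):
    """dist(a, B) = inf{ |a - b| : b in B } for a finite set B."""
    return min(abs(point - b) for b in target_set)

def hausdorff(A, B):
    """H(A, B) for finite subsets A, B of the real line."""
    return max(max(dist(a, B) for a in A), max(dist(b, A) for b in B))

if __name__ == "__main__":
    A = [0.0, 1.0]
    B = [0.0, 3.0]
    # sup_{a in A} dist(a, B) = 1.0 (attained at a = 1.0),
    # sup_{b in B} dist(b, A) = 2.0 (attained at b = 3.0), so H(A, B) = 2.0.
    print(hausdorff(A, B))
```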

Suppose for each \(x\in X,\) there exists \(u\in D\) such that

$$\begin{aligned} d(x,u)=dist(x,D)=\inf \{d(x,y):y\in D\}, \end{aligned}$$

then D is called proximal.

Let \(T:D\rightarrow CB(X)\) be a multivalued mapping. A point \(x\in D\) is called a fixed point of T if \(x\in Tx,\) and an endpoint of T if x is a fixed point of T and \(Tx=\{x\}.\) The sets of all fixed points and endpoints of T are denoted by F(T) and E(T), respectively. A multivalued mapping \(T:X\rightarrow 2^X\) is called

  1. (1)

    L-Lipschitz, if there exists \(L > 0\) such that

    $$\begin{aligned} H(Tx,Ty)\le Ld(x,y),~\forall ~x,y\in X, \end{aligned}$$

    if \(L=1,\) then T is called nonexpansive,

  2. (2)

    quasi nonexpansive, if \(F(T)\ne \emptyset \) and

    $$\begin{aligned} H(Tx,Tp)\le d(x,p)~\forall ~x\in X \text{ and } p\in F(T), \end{aligned}$$
  3. (3)

    k-demicontractive, if \(F(T)\ne \emptyset \) and there exists \(k\in [0,1)\) such that

    $$\begin{aligned} H^2(Tx,Tp)\le d^2(x,p)+k~dist^2(x,Tx)~\forall ~x\in X~\text{ and }~p\in F(T). \end{aligned}$$

Clearly, nonexpansive multivalued mappings with nonempty fixed point sets are quasi nonexpansive, and quasi nonexpansive multivalued mappings are k-demicontractive. The following example shows that the converse is not always true: a k-demicontractive multivalued mapping need not be quasi nonexpansive.

Example 1.1

Let \(X=\mathbb {R}\) be the set of real numbers with the usual metric. Let \(T_j:X\rightarrow 2^X,\) where \(j\in \mathbb {N}\) be defined by

$$\begin{aligned} T_jx={\left\{ \begin{array}{ll} {}\big [\frac{-(5j+1)}{5}x, -(j+1)x\big ],~~~\text{ if }~~x\le 0,\\ {}\big [-(j+1)x, \frac{-(5j+1)}{5}x\big ]~~~\text{ if }~~x> 0. \end{array}\right. } \end{aligned}$$

Clearly, \(F(T_j)=\{0\}\) and \(H(T_jx,T_j0)=|-(j+1)x-0|=(j+1)|x-0|.\) Hence, \(T_j\) is L-Lipschitzian with \(L=j+1\) for each \(j\in \mathbb {N},\) and since \(H(T_jx,T_j0)>d(x,0)\) for all \(x\ne 0,\) \(T_j\) is not quasi nonexpansive. Also,

$$\begin{aligned} d(x,T_jx)^2&= \Big |x+\frac{(5j+1)}{5}x\Big |^2\\&= \Big |\frac{(5j+6)}{5}x\Big |^2\\&= \Big (\frac{(25j^2+60j+36)}{25}\Big )|x|^2 \end{aligned}$$

and

$$\begin{aligned} H^2(T_jx,T_j0)&= (j+1)^2|x-0|^2\\&= |x-0|^2+(j^2+2j)|x-0|^2\\&= |x-0|^2+\frac{25(j^2+2j)}{25j^2+60j+36}d^2(x,T_jx). \end{aligned}$$

Hence, \(T_j\) is k-demicontractive with \(k= \frac{25(j^2+2j)}{25j^2+60j+36} \in (0,1)\) for \(j\in \mathbb {N}.\)
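The computations in Example 1.1 can also be checked numerically. The sketch below is ours; the interval endpoints and the constant k follow the example, while the value of j and the test points are arbitrary choices.

```python
# Numerical check of Example 1.1: T_j x is an interval of the real line,
# T_j 0 = {0}, and the Hausdorff distance to {0} is the largest endpoint modulus.

def T(j, x):
    """Return the endpoints (lo, hi) of the interval T_j x from Example 1.1."""
    a, b = -(5 * j + 1) / 5 * x, -(j + 1) * x
    return (min(a, b), max(a, b))

def dist_point_interval(p, interval):
    """dist(p, [lo, hi]) on the real line."""
    lo, hi = interval
    return max(lo - p, p - hi, 0.0)

def hausdorff_to_zero(interval):
    """H(T_j x, {0}) = largest distance of an endpoint of T_j x to 0."""
    lo, hi = interval
    return max(abs(lo), abs(hi))

j = 3
k = 25 * (j**2 + 2 * j) / (25 * j**2 + 60 * j + 36)   # constant from Example 1.1
for x in [-2.0, -0.3, 0.7, 1.5]:
    H = hausdorff_to_zero(T(j, x))
    d = dist_point_interval(x, T(j, x))               # dist(x, T_j x)
    assert H > abs(x)                                  # not quasi nonexpansive
    assert H**2 <= abs(x)**2 + k * d**2 + 1e-12        # k-demicontractive
print("Example 1.1 inequalities verified for j =", j)
```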

The applications of fixed point theorems for multivalued mappings in CAT(0) spaces are well known in differential equations, convex optimization, control theory, graph theory, computer science, biology and economics (see [4, 6, 17, 27, 47]). In 2012, Samanmit and Panyanak [46] introduced the gate condition on a multivalued mapping in \(\mathbb {R}\)-trees (which is weaker than the endpoint condition) as follows: Let \(x\notin D;\) a point \(y_x\in D\) is called the gate of x in D if

$$\begin{aligned} d(x,z)=d(x,y_x)+d(y_x,z), \end{aligned}$$

for all \(z\in D.\) A point u is called a key of T if for each \(x\in F(T),\) x is the gate of u in Tx. Note that \(E(T)\subset F(T);\) T is said to satisfy the endpoint condition if \(E(T)=F(T),\) and the gate condition if T has a key in D. They proved a strong convergence theorem of a modified Ishikawa iteration for quasi nonexpansive multivalued mappings satisfying the gate condition. Phuengrattana [39] also introduced k-strictly pseudononspreading multivalued mappings in \(\mathbb {R}\)-trees as follows: Let D be a nonempty subset of a complete \(\mathbb {R}\)-tree X. A multivalued mapping \(T:D\rightarrow CB(D)\) is called k-strictly pseudononspreading, if there exists \(k\in [0,1)\) such that

$$\begin{aligned} (2-k)H(Tx,Ty)^2&\le k~d(x,y)^2 + (1-k)~dist(y,Tx)^2 + (1-k)~dist(x,Ty)^2\nonumber \\&\quad +\, k~dist(x,Tx)^2 + k~dist(y,Ty)^2. \end{aligned}$$
(1.3)

If T is k-strictly pseudononspreading and has a fixed point \(p\in F(T)\), then (1.3) becomes

$$\begin{aligned} H(Tx,Tp)^2\le d(x,p)^2+k~dist(x,Tx)^2. \end{aligned}$$

He also proposed a new two-step Mann-type iterative algorithm to approximate a common fixed point of two k-strictly pseudononspreading mappings and established a strong convergence result using the gate condition. We remark here that the approximation of fixed points of multivalued mappings with respect to the Hausdorff metric in CAT(0) spaces has mostly been obtained under the endpoint condition, which is known to be stronger than the gate condition (see for example, [11, 31, 40, 45, 49, 51, 52]).

On the other hand, monotone operator theory remains one of the most important aspects of nonlinear and convex analysis. It plays an essential role in establishing the existence of solutions in optimization, variational inequalities, semigroup theory, partial differential equations and evolution equations (see [18, 29, 55] and other references therein). Finding solutions of the following nonlinear stationary problem

$$\begin{aligned} x\in \mathbb {D}(A) \text{ such } \text{ that } 0\in ~A(x), \end{aligned}$$
(1.4)

where A is a monotone operator (to be defined in Sect. 2), remains a problem of interest in monotone operator theory, since many mathematical problems can be modeled as problem (1.4). Problem (1.4) is called the Monotone Inclusion Problem (MIP), and its solution set, which we denote by \(A^{-1}(0)\), is closed and convex [26]. The MIP can be applied to solve some well known problems such as minimization problems, equilibrium problems, variational inequality problems and convex feasibility problems, among others (see [13, 41]). These problems are better studied through the concept of monotonicity via the subdifferential, which is itself a monotone operator (see [12]). For example, the minimizers of a proper, convex and lower semicontinuous functional can be characterized by the monotonicity of its subdifferential (see [37, 44]). Therefore, the problem of existence and approximation of zeros of monotone operators is of great importance in mathematics. Several iterative methods have been developed to solve the MIP and other related optimization problems. The Proximal Point Algorithm (PPA) is one of the most used methods for finding solutions of the MIP. It was first studied in Hilbert spaces by Martinet [35] and was later developed by Rockafellar [43], who proved that the PPA converges weakly to a zero of a monotone operator. As a result, several authors have modified the PPA to obtain strong convergence results in Banach and Hilbert spaces (see for example, [1, 2, 10, 22,23,24, 36, 38, 48, 50, 53, 54]).

In 2016, Khatibzadeh and Ranjbar [26] generalized and studied the monotone operators, their resolvents and the PPA in the framework of CAT(0) spaces. They established some fundamental properties of the resolvent and studied the following PPA to approximate the solutions of (1.4) in CAT(0) spaces:

$$\begin{aligned} {\left\{ \begin{array}{ll} {}\big [\frac{1}{\lambda }\overrightarrow{x_nx_{n-1}}\big ]\in A(x_n)\\ x_0\in X. \end{array}\right. } \end{aligned}$$
(1.5)

They proved that (1.5) \(\Delta \)-converges to a zero of the monotone operator A in a complete CAT(0) space. This development led Heydari et al. [20] to employ the PPA in approximating a common zero of a finite family of monotone operators in a complete CAT(0) space. In the same vein, Ranjbar and Khatibzadeh [41] established strong and \(\Delta \)-convergence results using Halpern-type and Mann-type PPAs in complete CAT(0) spaces under some mild conditions. Other authors have also studied the PPA in CAT(0) spaces (see [13, 21, 30]).

Motivated by the works of Samanmit and Panyanak [46] and Khatibzadeh and Ranjbar [26], we use the gate condition on two multivalued k-demicontractive mappings to approximate a common solution of a finite family of MIPs and a fixed point problem in CAT(0) spaces. More precisely, we propose and study a Halpern-type PPA, and prove its strong convergence to a common solution of a finite family of monotone inclusion problems and a common fixed point of two multivalued k-demicontractive mappings in a complete CAT(0) space. We also apply our result to the problem of finding a common solution of a finite family of minimization problems and a fixed point problem in CAT(0) spaces. Finally, numerical experiments are presented to further show the applicability of our results.

The rest of the paper is organized as follows: Sect. 2 is devoted to some preliminaries and lemmas that are important to our result. In Sect. 3, we give the main theorem and some consequences of the main theorem. Lastly, in Sect. 4, we give the application of the main theorem and also present numerical illustrations of our main theorem.

2 Preliminaries

In this section, we state some known and useful results which will be needed in the proof of our main theorem. In the sequel, we denote strong and \(\Delta \)-convergence by “\(\rightarrow \)” and “\(\rightharpoonup \)” respectively. We begin with the following definitions.

Definition 2.1

[5] Let X be a CAT(0) space and \((a,b)\in X\times X\) be denoted by \(\overrightarrow{ab}\) and called a vector in \(X\times X\). A quasilinearization map \(\langle .,.\rangle :(X\times X)\times (X\times X)\rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2}(d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d)), \quad \forall ~ a,b,c,d \in X. \end{aligned}$$
(2.1)

It is easy to see that \(\langle \overrightarrow{ba}, \overrightarrow{cd}\rangle =-\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle ,~\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{ae}, \overrightarrow{cd}\rangle +\langle \overrightarrow{eb}, \overrightarrow{cd}\rangle \) and \(\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{cd}, \overrightarrow{ab}\rangle \) for all \(a,b,c,d,e\in X\).
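In a Hilbert space (a model CAT(0) space), the quasilinearization (2.1) reduces to the usual inner product, \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\langle b-a,d-c\rangle \). The short Python check below (ours, on random vectors) verifies this identity together with the Cauchy–Schwarz inequality recalled next.

```python
# Check: in R^n, the quasilinearization (2.1) equals <b - a, d - c>,
# and |<ab, cd>| <= d(a,b) d(c,d) (Cauchy-Schwarz).
import numpy as np

def quasilinearization(a, b, c, d):
    """0.5 * (d(a,d)^2 + d(b,c)^2 - d(a,c)^2 - d(b,d)^2), with d the Euclidean distance."""
    sq = lambda u, v: np.sum((u - v) ** 2)
    return 0.5 * (sq(a, d) + sq(b, c) - sq(a, c) - sq(b, d))

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c, d = rng.normal(size=(4, 3))
    q = quasilinearization(a, b, c, d)
    assert np.isclose(q, np.dot(b - a, d - c))                              # inner-product form
    assert abs(q) <= np.linalg.norm(b - a) * np.linalg.norm(d - c) + 1e-12  # Cauchy-Schwarz
print("quasilinearization identity and Cauchy-Schwarz verified")
```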

Recall that a geodesic space X is said to satisfy the Cauchy-Schwarz inequality if

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle \le d(a,b)d(c,d) \end{aligned}$$

for all \(a,b,c,d \in X\). Moreover, a geodesically connected space X is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality (see [16]).

Definition 2.2

(See [25]) Let \((X,d)\) be a complete CAT(0) space and \(\Theta :\mathbb {R}\times X\times X\rightarrow C(X, \mathbb {R})\) be defined by

$$\begin{aligned} \Theta (t,a,b)(x)&=t\langle \overrightarrow{ab},\overrightarrow{ax}\rangle ~\quad \forall ~t\in \mathbb {R},~a,b,x\in X, \end{aligned}$$
(2.2)

where \(C(X,\mathbb {R})\) is the space of all continuous real-valued functions on X. The map \(\Theta \) induces a pseudometric \(\mathcal {D}\) on \(\mathbb {R}\times X\times X\) through the Lipschitz seminorm, namely \(\mathcal {D}((t,a,b),(s,c,d))=L(\Theta (t,a,b)-\Theta (s,c,d))\), where \(L(\varphi )=\sup \big \{\frac{\varphi (x)-\varphi (y)}{d(x,y)}:x,y\in X,~x\ne y\big \}\) is the Lipschitz seminorm on the space \(Lip(X, \mathbb {R})\) of all real-valued Lipschitz functions on X. Note that \(\mathcal {D}((t,a,b),(s,c,d))=0\) defines an equivalence relation on \(\mathbb {R}\times X\times X\), where the equivalence class of (t, a, b) is

$$\begin{aligned} {}[t\overrightarrow{ab}]&=\{s\overrightarrow{cd}:t\langle \overrightarrow{ab},\overrightarrow{xy}\rangle =s\langle \overrightarrow{cd},\overrightarrow{xy}\rangle ~\forall ~x,y\in X\}. \end{aligned}$$
(2.3)

Then the set \(X^*=\{[t\overrightarrow{ab}]:(t,a,b)\in \mathbb {R}\times X\times X \}\), equipped with the metric \(\mathcal {D}([t\overrightarrow{ab}],[s\overrightarrow{cd}]):=\mathcal {D}((t,a,b),(s,c,d))\), is a metric space, and the pair \((X^*,\mathcal {D})\) is called the dual space of X.

Definition 2.3

Let X be a complete CAT(0) space and \(X^*\) be its dual space. A multivalued operator \(A:X\rightarrow 2^{X^*}\) with domain \(\mathbb {D}(A)=\{x\in X:Ax\ne \emptyset \}\) is monotone, if for all \(x,y\in \mathbb {D}(A)\) with \(x\ne y,\) we have

$$\begin{aligned} \langle x^*-y^*,\overrightarrow{yx}\rangle&\ge 0,\quad \forall ~~x^*\in Ax,~y^*\in Ay. \end{aligned}$$
(2.4)

The graph of a monotone operator \(A:X\rightarrow 2^{X^*}\) is the set

$$\begin{aligned} Gr(A) =\{(x,x^*)\in X\times X^*: x^*\in A(x)\}. \end{aligned}$$

A monotone operator A is called a maximal monotone operator if the graph Gr(A) is not properly contained in the graph of any other monotone operator. It is easy to see that a monotone operator is maximal if and only if for each \((x,x^*)\in X\times X^*\),

$$\begin{aligned} \langle y^*-x^*,\overrightarrow{xy}\rangle&\ge 0 \quad \forall ~(y,y^*)\in Gr(A)\Rightarrow x^*\in A(x). \end{aligned}$$
(2.5)

Definition 2.4

(See [26]) Let X be a complete CAT(0) space and \(X^*\) be its dual space. The resolvent of a monotone operator A of order \(\lambda >0\) is the multivalued mapping \(J^A_{\lambda }: X\rightarrow 2^X\) defined by

$$\begin{aligned} J_{\lambda }^A(x):=\big \{z\in X:\big [\frac{1}{\lambda }\overrightarrow{zx}\big ]\in Az\big \}. \end{aligned}$$
(2.6)

The multivalued operator A is said to satisfy the range condition if \(\mathbb {D}(J_{\lambda }^A)=X\) for every \(\lambda >0.\) For examples of monotone operators and their resolvents in complete CAT(0) spaces, see [13, Section 2 and Section 4].

Definition 2.5

(See [26]) Let X be a complete CAT(0) space and \(X^*\) be its dual space. The Yosida approximation of A is the multivalued mapping \(A_{\lambda }: X\rightarrow 2^{X^*}\) defined by

$$\begin{aligned} A_{\lambda }(x):=\big \{\big [\frac{1}{\lambda }\overrightarrow{yx}\big ]:y\in J_{\lambda }^A(x)\big \}. \end{aligned}$$
(2.7)

The following result is due to [26] and it gives the connection between monotone operators, their resolvents and Yosida approximations, in the framework of CAT(0) spaces.

Theorem 2.6

Let X be a CAT(0) space and \(J^{A}_\lambda \) be the resolvent of the operator A of order \(\lambda .\) Then

  1. (i)

    for any \(\lambda > 0,\)\(R(J^{A}_\lambda ) \subset \mathbb {D}(A)\) and \(F(J^{A}_\lambda ) = A^{-1}(0),\) where \(R(J^{A}_\lambda )\) is the range of \(J^{A}_\lambda ,\)

  2. (ii)

    if \(J^{A}_\lambda \) is single-valued, then \(A_\lambda \) is single-valued and \(A_\lambda (x) \subset A(J^{A}_\lambda (x)),\quad \forall ~x \in X,\)

  3. (iii)

    if A is monotone, then \(J_\lambda ^A\) is single-valued and firmly nonexpansive.

Definition 2.7

Let D be a nonempty closed and convex subset of a CAT(0) space X. The metric projection is a mapping \(P_D:X\rightarrow D\) which assigns to each \(x\in X,\) the unique point \(P_Dx\in D\) such that

$$\begin{aligned} d(x,P_Dx)=\inf \{d(x,y):y\in D\}. \end{aligned}$$

Recall that a mapping \(T:D\rightarrow X\) is firmly nonexpansive (see [26]), if

$$\begin{aligned} d^2(Tx,Ty)\le \langle \overrightarrow{TxTy},\overrightarrow{xy}\rangle \quad \forall ~x,y\in D. \end{aligned}$$

Lemma 2.8

[14, 16] Let X be a CAT(0) space. Then for all \(x,y,z \in X\) and all \(t \in [0,1],\) we have

  1. (1)

    \(d(tx\oplus (1-t)y,z)\le td(x,z)+(1-t)d(y,z),\)

  2. (2)

    \(d^2(tx\oplus (1-t)y,z)\le td^2(x,z)+(1-t)d^2(y,z)-t(1-t)d^2(x,y),\)

  3. (3)

    \(d^2(z, t x \oplus (1-t) y)\le t^2 d^2(z, x)+(1-t)^2 d^2(z, y)+2t (1-t)\langle \overrightarrow{zx}, \overrightarrow{zy}\rangle .\)
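In \(\mathbb {R}^n\), where \(tx\oplus (1-t)y=tx+(1-t)y\) and \(\langle \overrightarrow{zx},\overrightarrow{zy}\rangle =\langle x-z,y-z\rangle \), the three inequalities of Lemma 2.8 hold (the second and third with equality); a small numerical sanity check of ours:

```python
# Sanity check of the three inequalities in Lemma 2.8 in R^n, where
# t x (+) (1-t) y = t*x + (1-t)*y and <zx, zy> = <x - z, y - z>.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 4))
    t = rng.uniform()
    m = t * x + (1 - t) * y                      # the point t x (+) (1-t) y
    d = np.linalg.norm
    assert d(m - z) <= t * d(x - z) + (1 - t) * d(y - z) + 1e-10               # (1)
    assert d(m - z) ** 2 <= (t * d(x - z) ** 2 + (1 - t) * d(y - z) ** 2
                             - t * (1 - t) * d(x - y) ** 2 + 1e-10)            # (2)
    assert d(z - m) ** 2 <= (t ** 2 * d(z - x) ** 2 + (1 - t) ** 2 * d(z - y) ** 2
                             + 2 * t * (1 - t) * np.dot(x - z, y - z) + 1e-10) # (3)
print("Lemma 2.8 inequalities verified in R^4")
```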

Lemma 2.9

[34] Let X be a complete CAT(0) space with a convex metric and let V, W be bounded gated subsets of X. Then,

$$\begin{aligned} d(P_V(u),P_W(u))\le H(V,W), \end{aligned}$$

for any \(u\in X,\) where \(P_V(u)\) and \(P_W(u)\) are respectively the unique nearest points to u in V and W.

Lemma 2.10

[32] Every bounded sequence in a complete CAT(0) space has a \(\Delta \)-convergent subsequence.

Lemma 2.11

[25] Let X be a complete CAT(0) space, \(\{x_n\}\) be a bounded sequence in X and \(x\in X\). Then \(\{x_n\}\) \(\Delta \)-converges to x if and only if \(\underset{n\rightarrow \infty }{\limsup }\langle \overrightarrow{x_n x}, \overrightarrow{yx}\rangle \le 0~\forall ~ y\in X.\)

Lemma 2.12

[14] Let D be a nonempty convex subset of a CAT(0) space X, \(x\in X\) and \(u\in D.\) Then \(u=P_{D}x\) if and only if \(\langle \overrightarrow{ux},\overrightarrow{uy}\rangle \le 0\) for all \(y\in D.\)

Lemma 2.13

[15] Let X be a complete CAT(0) space and \(T:X\rightarrow X\) be a nonexpansive mapping. Then T is \(\Delta \)-demiclosed.

Lemma 2.14

[56] Let \(\{a_{n}\}\) be a sequence of non-negative real numbers satisfying

$$\begin{aligned} a_{n+1}\le (1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n}+\gamma _n,~~n\ge 0, \end{aligned}$$

where \(\{\alpha _{n}\},~\{\delta _{n}\}\) and \(\{\gamma _{n}\}\) satisfy the following conditions:

(i) \(\{\alpha _n\}\subset [0,1],~\Sigma _{n=0}^{\infty }\alpha _{n}=\infty \),

(ii) \(\limsup _{n\rightarrow \infty }\delta _{n}\le 0\),

(iii) \(\gamma _n\ge 0 (n\ge 0),~\Sigma _{n=0}^{\infty }\gamma _{n}<\infty \).

Then \(\lim _{n\rightarrow \infty }a_{n}=0.\)
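Lemma 2.14 is the standard tool for forcing \(a_n\rightarrow 0\) in Halpern-type schemes. A toy numerical illustration of ours, with arbitrarily chosen sequences satisfying (i)–(iii):

```python
# Toy illustration of Lemma 2.14: a_{n+1} <= (1 - alpha_n) a_n + alpha_n delta_n + gamma_n
# with alpha_n = 1/(n+1) (non-summable), delta_n = 1/(n+1) (limsup <= 0),
# gamma_n = 1/(n+1)^2 (summable) drives a_n to 0.
a = 5.0
for n in range(1, 200001):
    alpha, delta, gamma = 1.0 / (n + 1), 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    a = (1 - alpha) * a + alpha * delta + gamma
print(a)  # a small number, consistent with a_n -> 0
```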

Lemma 2.15

[33] Let \(\{a_n\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_j\}\) of \(\{n\}\) with \(a_{n_j}<a_{n_j+1}\)\(\forall j\in \mathbb {N}\). Then there exists a nondecreasing sequence \(\{m_k\}\subset \mathbb {N}\) such that \(m_k\rightarrow \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb {N}\):

$$\begin{aligned} a_{m_k}\le a_{m_k+1}~ \text{ and }~ a_k\le a_{m_k+1}. \end{aligned}$$

In fact, \(m_k=\max \{i\le k :a_i<a_{i+1}\}\).

3 Main results

Lemma 3.1

Let X be a complete CAT(0) space and \(X^*\) be its dual space. Let \(A_1,A_2,\dots ,A_k:X \rightarrow 2^{X^*}\) be multivalued monotone operators satisfying the range condition and \(S,T:D \rightarrow CB(X)\) be two L-Lipschitzian demicontractive mappings with coefficients \(\lambda _1\) and \(\lambda _2\) respectively, and let \(\lambda = \max \{\lambda _1,~\lambda _2\}\). Suppose S and T satisfy the gate condition and \(h_1,~h_2\) are the keys of S and T, respectively. Assume that \(\Gamma := F(S)\cap F(T)\cap \bigcap _{i=1}^k A_i^{-1}(0) \ne \emptyset \) and for arbitrary points \(u,x_1 \in X\), the sequence \(\{x_n\}\) is generated iteratively by

$$\begin{aligned} {\left\{ \begin{array}{ll} u_n = J_{\mu _n}^{A_k}\circ J_{\mu _n}^{A_{k-1}}\circ \dots \circ J_{\mu _n}^{A_1}(\alpha _n u \oplus (1-\alpha _n) x_n),\\ y_n = (1-\beta _n) u_n \oplus \beta _n v_n,\\ x_{n+1} = (1-\delta _n) y_n \oplus \delta _n z_n,~~~n\ge 1, \end{array}\right. } \end{aligned}$$
(3.1)

where \(v_n\) is the gate of \(h_1\) in \(Su_n,\) \(z_n\) is the gate of \(h_2\) in \(Ty_n\), and \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] satisfying the following condition:

  1. (C1)

    \(0 \le \beta _n, \delta _n \le 1 - \lambda .\)

Then, \(\{x_n\}\) is bounded.

Proof

Let \(p \in \Gamma \); then \(0 \in A_i p\) for \(i \in \{1,2,\dots ,k\}\). Put \(w_n = \alpha _n u \oplus (1-\alpha _n)x_n\) and let

$$\begin{aligned} \Psi _n^i = J_{\mu _n}^{A_i} \Psi _n^{i-1}, \quad \text {for}\quad i \in \{1,2,\dots ,k\}, \end{aligned}$$

where \(\Psi _n^0 = w_n\) for all \(n \in \mathbb {N}.\) Then \(\Psi _n^{k}=u_n\) for all \(n\ge 1.\) We obtain from (2.6) that

$$\begin{aligned} \Big [ \frac{1}{\mu _n}\overrightarrow{\Psi _n^i \Psi _n^{i-1}} \Big ] \in A_i(\Psi _n^i), \quad \text {for}~~~i \in \{1,2,\dots ,k\}. \end{aligned}$$

Thus, by the monotonicity of \(A_i\), for \(i \in \{1,2,\dots ,k\}\), we have

$$\begin{aligned} 0 \le \bigg \langle \Big [ \frac{1}{\mu _n}\overrightarrow{\Psi _n^i \Psi _n^{i-1}} \Big ]-0, \overrightarrow{p \Psi _n^{i}} \bigg \rangle . \end{aligned}$$

This implies, by the quasilinearization (2.1) (noting that \(d(\Psi _n^i,\Psi _n^i)=0\)), that

$$\begin{aligned} 0 \le d^2(\Psi _n^{i-1},p) - d^2(\Psi _n^i,p) - d^2(\Psi _n^i, \Psi _n^{i-1}). \end{aligned}$$
(3.2)

By summing up the inequality in (3.2), from \(i=1\) to k, we get

$$\begin{aligned} 0 \le d^2(\Psi _n^0,p) - d^2(\Psi _n^k,p) - \sum _{i=1}^k d^2(\Psi _n^i,\Psi _n^{i-1}). \end{aligned}$$
(3.3)

Hence, we obtain from Lemma 2.8 that

$$\begin{aligned} d^2(u_n, p)\le & {} d^2(w_n,p) \nonumber \\= & {} d^2(\alpha _n u \oplus (1-\alpha _n)x_n, p) \nonumber \\\le & {} \alpha _n d^2(u,p) + (1-\alpha _n)d^2(x_n,p). \end{aligned}$$
(3.4)

Since S is \(\lambda _1\)-demicontractive, we have from Lemma 2.9 and (3.1) that

$$\begin{aligned} d^2(y_n,p)= & {} d^2((1-\beta _n)u_n \oplus \beta _n v_n, p) \nonumber \\\le & {} (1-\beta _n)d^2(u_n,p) + \beta _n d^2(v_n,p) - \beta _n(1-\beta _n)d^2(u_n,v_n) \nonumber \\\le & {} (1-\beta _n)d^2(u_n,p) + \beta _n d^2(P_{Su_n}(h_1),P_{Sp}(h_1)) - \beta _n(1-\beta _n)d^2(u_n,v_n) \nonumber \\\le & {} (1-\beta _n)d^2(u_n,p) + \beta _n H^2(Su_n, Sp) - \beta _n(1-\beta _n)d^2(u_n,v_n) \nonumber \\\le & {} (1-\beta _n)d^2(u_n,p) + \beta _n \big (d^2(u_n,p) + \lambda _1 dist^2(u_n,Su_n) \big ) \nonumber \\&-\, \beta _n(1-\beta _n)d^2(u_n,v_n) \nonumber \\= & {} d^2(u_n,p) - \beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n). \end{aligned}$$
(3.5)

Thus,

$$\begin{aligned} d^2(x_{n+1},p)= & {} d^2((1-\delta _n)y_n \oplus \delta _n z_n, p) \nonumber \\\le & {} (1-\delta _n)d^2 (y_n,p) + \delta _n d^2(z_n, p) - \delta _n(1-\delta _n)d^2(y_n,z_n) \nonumber \\\le & {} (1-\delta _n)d^2 (y_n,p) + \delta _n d^2(P_{Ty_n}(h_2),P_{Tp}(h_2)) \nonumber \\&-\, \delta _n(1-\delta _n)d^2(y_n,z_n) \nonumber \\\le & {} (1-\delta _n)d^2 (y_n,p) + \delta _n H^2(Ty_n,Tp) - \delta _n(1-\delta _n)d^2(y_n,z_n) \nonumber \\\le & {} (1-\delta _n) d^2(y_n,p) + \delta _n \big ( d^2(y_n,p) + \lambda _2 dist^2(y_n, Ty_n) \big )\nonumber \\&-\, \delta _n(1-\delta _n)d^2(y_n,z_n) \nonumber \\= & {} d^2(y_n,p) - \delta _n(1-\delta _n - \lambda _2)d^2(y_n,z_n) \nonumber \\\le & {} d^2(u_n,p) - \beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n) \nonumber \\&-\, \delta _n(1-\delta _n - \lambda _2)d^2(y_n,z_n). \end{aligned}$$
(3.6)

Thus, we have

$$\begin{aligned} d^2(x_{n+1},p)\le & {} d^2(u_n,p) \nonumber \\\le & {} \alpha _n d^2(u,p) + (1-\alpha _n) d^2(x_n,p) \nonumber \\\le & {} \max \Big \{ d^2(u,p), d^2(x_n,p) \Big \} \nonumber \\&\vdots&\nonumber \\\le & {} \max \Big \{ d^2(u,p), d^2(x_1,p) \Big \}. \end{aligned}$$
(3.7)

This implies that \(\{d^2(x_n,p)\}\) is bounded and hence \(\{x_n\}\) is bounded. Consequently, \(\{u_n\},\{y_n\},\{v_n\}\) and \(\{z_n\}\) are bounded. \(\square \)

Theorem 3.2

Let X be a complete CAT(0) space and \(X^*\) be its dual space. Let \(A_1,A_2,\dots ,A_k:X \rightarrow 2^{X^*}\) be multivalued monotone operators satisfying the range condition and \(S,T:D \rightarrow CB(X)\) be two L-Lipschitzian demicontractive mappings with coefficients \(\lambda _1\) and \(\lambda _2\) respectively, and let \(\lambda = \max \{\lambda _1,~\lambda _2\}\). Suppose S and T satisfy the gate condition and \(h_1,~h_2\) are the keys of S and T, respectively. Assume that \(\Gamma := F(S)\cap F(T)\cap \bigcap _{i=1}^k A_i^{-1}(0) \ne \emptyset \) and for arbitrary points \(u,x_1 \in X\), the sequence \(\{x_n\}\) is generated iteratively by (3.1), where \(v_n\) is the gate of \(h_1\) in \(Su_n,\) \(z_n\) is the gate of \(h_2\) in \(Ty_n\), and \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] satisfying the following conditions:

  1. (C1)

    \(0 \le \beta _n, \delta _n \le 1 - \lambda \),

  2. (C2)

    \(\liminf \nolimits _{n\rightarrow \infty }\mu _n >0\),

  3. (C3)

    \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^{\infty }\alpha _n=+\infty .\)

Then, \(\{x_n\}\) converges strongly to \(p = P_\Gamma u\), where \(P_\Gamma \) is the metric projection of X onto \(\Gamma \).

Proof

Let \(p = P_\Gamma u\), then from (3.6) and Lemma 2.8, we have

$$\begin{aligned} d^2(x_{n+1},p)\le & {} d^2(u_n,p) - \beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n) - \delta _n(1-\delta _n - \lambda _2)d^2(y_n,z_n) \nonumber \\\le & {} d^2(\alpha _n u \oplus (1-\alpha _n)x_n,p) - \beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n) \nonumber \\&-\, \delta _n(1-\delta _n - \lambda _2)d^2(y_n,z_n) \nonumber \\\le & {} \alpha _n^2 d^2(u,p) + (1-\alpha _n)d^2(x_n,p) + 2\alpha _n(1-\alpha _n)\langle \overrightarrow{up},\overrightarrow{x_np} \rangle \nonumber \\&-\, \beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n) \nonumber \\&-\, \delta _n(1-\delta _n - \lambda _2)d^2(y_n,z_n). \end{aligned}$$
(3.8)

We now divide the rest of the proof into two cases.

Case I: Assume that \(\{d^2(x_n,p)\}\) is monotonically decreasing. Then \(\{d^2(x_n,p)\}\) converges and

$$\begin{aligned} d^2(x_n,p) - d^2(x_{n+1},p) \rightarrow 0 \quad \text {as}~~~n \rightarrow \infty . \end{aligned}$$

Therefore from (3.8), we obtain that

$$\begin{aligned}&\beta _n(1-\beta _n -\lambda _1) d^2(u_n,v_n) + \delta _n(1-\delta _n -\lambda _2)d^2(y_n,z_n) \\&\quad \le \alpha _n (d^2(u,p) - d^2(x_n,p)) + d^2(x_n,p) - d^2(x_{n+1},p) \\&\qquad +\, 2\alpha _n(1-\alpha _n)\langle \overrightarrow{up},\overrightarrow{x_np} \rangle \rightarrow 0 ~~\text{ as } ~~n \rightarrow \infty . \end{aligned}$$

From \(0< a \le \beta _n,~\delta _n \le b < 1-\lambda \), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }d^2(u_n,v_n) = 0 \quad \text {and} \quad \lim _{n\rightarrow \infty }d^2(y_n,z_n) = 0. \end{aligned}$$
(3.9)

Hence

$$\begin{aligned} dist(u_n, Su_n) \le d(u_n,v_n) \rightarrow 0, \quad as~~n \rightarrow \infty . \end{aligned}$$
(3.10)

Observe that

$$\begin{aligned} d(y_n,u_n) \le (1 - \beta _n) d(u_n,u_n) + \beta _n d(v_n,u_n) \rightarrow 0, \quad as~~n \rightarrow \infty . \end{aligned}$$
(3.11)

Since T is L-Lipschitzian, we have

$$\begin{aligned} dist(u_n,Tu_n)\le & {} dist(u_n,Ty_n) + H(Ty_n,Tu_n) \\\le & {} d(u_n,z_n) + Ld(y_n,u_n) \\\le & {} d(u_n,y_n) + d(y_n,z_n) + Ld(y_n,u_n) \\= & {} (1+L)d(u_n,y_n) + d(y_n,z_n). \end{aligned}$$

Therefore, from (3.9) and (3.11), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }dist(u_n,Tu_n) =0. \end{aligned}$$
(3.12)

From (3.3) and (3.6), we obtain

$$\begin{aligned} \sum _{i=1}^k d^2(\Psi _n^i, \Psi _n^{i-1})\le & {} d^2(\Psi _n^0,p) - d^2(\Psi _n^k,p) \\= & {} d^2(w_n,p) - d^2(u_n,p) \\\le & {} \alpha _n d^2(u,p) + (1-\alpha _n) d^2(x_n,p) - d^2(x_{n+1},p) \rightarrow 0,~~~~\\&as~~n \rightarrow \infty . \end{aligned}$$

Therefore

$$\begin{aligned} \lim _{n\rightarrow \infty }d(\Psi _n^i, \Psi _n^{i-1}) = 0,~~~i\in \{1,2,\cdots ,k\}. \end{aligned}$$
(3.13)

By the triangle inequality, \(d(u_n,w_n)\le \sum _{i=1}^k d(\Psi _n^i,\Psi _n^{i-1}),\) and hence we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }d(u_n,w_n) = 0. \end{aligned}$$
(3.14)

Note that

$$\begin{aligned} d(w_n,x_n) \le \alpha _n d(u,x_n) + (1-\alpha _n)d(x_n,x_n) \rightarrow 0, ~~~as~~n \rightarrow \infty , \end{aligned}$$

then

$$\begin{aligned} d(u_n,x_n) \le d(u_n,w_n) + d(w_n,x_n) \rightarrow 0,~~as~~~n \rightarrow \infty . \end{aligned}$$
(3.15)

Also

$$\begin{aligned} d(z_n,x_n) \le d(z_n,y_n) + d(y_n,u_n) + d(u_n,x_n) \rightarrow 0,~~~~as~~~n \rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} d(x_{n+1},z_n) \le (1-\delta _n)d(y_n,z_n) + \delta _n d(z_n,z_n) \rightarrow 0,~~as~~~n \rightarrow \infty . \end{aligned}$$

Thus

$$\begin{aligned} d(x_{n+1},x_n) \le d(x_{n+1},z_n) + d(z_n,x_n) \rightarrow 0, \quad as~~~n \rightarrow \infty . \end{aligned}$$
(3.16)

Since \(\{x_n\}\) is bounded, it follows from Lemma 2.10 that there exists a subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) such that \(x_{n_j} \rightharpoonup q\). By (2.7), the Yosida approximation of \(A_i\), for each \(i \in \{1,2,\dots ,k\}\), is given by

$$\begin{aligned} A_{i,\mu _n}\Psi _n^{i-1} = \Big [ \frac{1}{\mu _n}\overrightarrow{\Psi _n^i\Psi _n^{i-1}} \Big ]. \end{aligned}$$

Since \(\liminf \limits _{n\rightarrow \infty }\mu _n>0,\) we obtain from (3.13) that \(\lim \limits _{n\rightarrow \infty }A_{i,\mu _n}\Psi _n^{i-1} = 0\).

Let \((a_1,a_2) \in Gr(A_i)\) for each \(i \in \{1,2,\dots ,k\}\). Since \(A_{i,\mu _n}\Psi _n^{i-1}\in A_i(\Psi _n^i)\) by Theorem 2.6(ii), the monotonicity of \(A_i\) gives

$$\begin{aligned} \langle a_2 - A_{i,\mu _n} \Psi _n^{i-1}, \overrightarrow{\Psi _n^ia_1} \rangle \ge 0. \end{aligned}$$
(3.17)

Replacing n by \(n_j\) in (3.17) and taking the limit as \(j \rightarrow \infty \), we obtain that

$$\begin{aligned} \langle a_2, \overrightarrow{qa_1} \rangle \ge 0. \end{aligned}$$

Hence, by maximal monotonicity of \(A_i\), we obtain that \(q \in A_{i}^{-1}(0)\) for each \(i \in \{1,2,\dots ,k\}\). Therefore, \(q \in \bigcap _{i=1}^k A_i^{-1}(0)\).

Furthermore, since \(d(u_{n_j},x_{n_j}) \rightarrow 0\) as \(j \rightarrow \infty \) and S, T are demiclosed at zero, it follows from (3.10) and (3.12) that \(q \in F(S)\) and \(q \in F(T)\) respectively. Hence, \(q \in F(S)\cap F(T) \cap \bigcap _{i=1}^k A_i^{-1}(0)=\Gamma \).

Next, we show that \(\limsup _{n\rightarrow \infty }\langle \overrightarrow{up},\overrightarrow{x_np} \rangle \le 0\). Choose a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle \overrightarrow{up},\overrightarrow{x_{n}p} \rangle = \lim _{k\rightarrow \infty }\langle \overrightarrow{up}, \overrightarrow{x_{n_k}p} \rangle . \end{aligned}$$

By Lemma 2.10, passing to a further subsequence if necessary, we may assume that \(x_{n_k} \rightharpoonup q\), where (arguing as above) \(q\in \Gamma \). It then follows from Lemma 2.12 that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle \overrightarrow{up},\overrightarrow{x_{n}p}\rangle= & {} \lim _{k\rightarrow \infty }\langle \overrightarrow{up}, \overrightarrow{x_{n_k}p} \rangle \nonumber \\= & {} \langle \overrightarrow{up}, \overrightarrow{qp} \rangle \le 0. \end{aligned}$$
(3.18)

We now show that \(\{x_n\}\) converges to p. From (3.8), we obtain

$$\begin{aligned} d^2(x_{n+1},p)\le & {} (1-\alpha _n)d^2(x_n,p) + \alpha _n\big (2(1-\alpha _n)\langle \overrightarrow{up},\overrightarrow{x_np} \rangle + \alpha _n d^2(u,p)\big ) . \qquad \quad \end{aligned}$$
(3.19)

Applying Lemma 2.14 to (3.19), together with (3.18) and condition (C3), we conclude that \(d(x_n,p) \rightarrow 0\) as \(n \rightarrow \infty \). Therefore, \(\{x_n\}\) converges strongly to \(p =P_\Gamma u\).

Case II: Suppose there exists a subsequence \(\{n_j\}\) of \(\{n\}\) such that \(d^2(x_{n_j},p) \le d^2(x_{n_j+1},p)\) for all \(j \in \mathbb {N}\). Then, by Lemma 2.15, there exists a nondecreasing sequence \(\{m_k\}\subset \mathbb {N}\) such that \(m_k \rightarrow \infty \), \(d^2(x_{m_k},p) \le d^2(x_{m_k+1},p)\) and \(d^2(x_k,p) \le d^2(x_{m_k+1},p)\) for all (sufficiently large) \(k \in \mathbb {N}\). Following a similar process as in Case I, we obtain

$$\begin{aligned} \lim _{k\rightarrow \infty }d(u_{m_k},Su_{m_k}) = \lim _{k \rightarrow \infty }d(u_{m_k},Tu_{m_k}) = \lim _{k\rightarrow \infty }d( x_{m_k+1},x_{m_k}) = 0, \end{aligned}$$

and

$$\begin{aligned} \limsup _{k\rightarrow \infty }\langle \overrightarrow{up},\overrightarrow{x_{m_k}p}\rangle \le 0. \end{aligned}$$
(3.20)

From (3.7) and Lemma 2.8, we obtain

$$\begin{aligned} d^2(x_{m_k+1},p)\le & {} d^2(u_{m_k},p) \nonumber \\\le & {} d^2(\alpha _{m_k} u \oplus (1-\alpha _{m_k})x_{m_k},p) \nonumber \\\le & {} \alpha _{m_k}^2 d^2(u,p) + (1-\alpha _{m_k})^2 d^2(x_{m_k},p) \nonumber \\&+ 2\alpha _{m_k}(1-\,\alpha _{m_k})\langle \overrightarrow{up},\overrightarrow{x_{m_k}p} \rangle . \end{aligned}$$
(3.21)

Since \(d^2(x_{m_k},p) \le d^2(x_{m_k+1},p)\), we have

$$\begin{aligned} 0\le & {} d^2(x_{m_k+1},p) - d^2(x_{m_k},p) \\\le & {} \alpha _{m_k}^2 d^2(u,p) + (1-\alpha _{m_k}) d^2(x_{m_k},p) \nonumber \\&+\, 2\alpha _{m_k}(1-\,\alpha _{m_k})\langle \overrightarrow{up},\overrightarrow{x_{m_k}p} \rangle - d^2(x_{m_k},p). \end{aligned}$$

Therefore

$$\begin{aligned} d^2(x_{m_k},p)\le & {} \alpha _{m_k}d^2(u,p) + 2(1-\alpha _{m_k})\langle \overrightarrow{up},\overrightarrow{x_{m_k}p} \rangle . \end{aligned}$$
(3.22)

Since \(\alpha _{m_k} \rightarrow 0\) as \(k \rightarrow \infty \), it follows from (3.20) and (3.22) that

$$\begin{aligned} \lim _{k\rightarrow \infty }d(x_{m_k},p) = 0. \end{aligned}$$

Consequently, we obtain

$$\begin{aligned} d(x_{m_k+1},p) \le d(x_{m_k+1},x_{m_k}) + d(x_{m_k},p) \rightarrow 0, \quad as ~~~k\rightarrow \infty . \end{aligned}$$

By Lemma 2.15, we have

$$\begin{aligned} d(x_k,p) \le d(x_{m_k+1},p) \rightarrow 0, \quad as ~~k\rightarrow \infty . \end{aligned}$$

This implies that \(\{x_n\}\) converges strongly to \(p\in \Gamma \). This completes the proof. \(\square \)

By setting \(k=1\) in Theorem 3.2, we obtain the following result:

Corollary 3.3

Let X be a complete CAT(0) space and \(X^*\) be its dual space. Let \(A :X \rightarrow 2^{X^*}\) be a multivalued monotone operator satisfying the range condition and \(S,T:D \rightarrow CB(X)\) be two L-Lipschitzian demicontractive mappings with coefficients \(\lambda _1\) and \(\lambda _2\) respectively, and let \(\lambda = \max \{\lambda _1,~\lambda _2\}.\) Suppose S and T satisfy the gate condition and \(h_1,~h_2\) are the keys of S and T, respectively. Assume that \(\Gamma := F(S)\cap F(T)\cap A^{-1}(0) \ne \emptyset \) and for arbitrary points \(u,x_1 \in X\), the sequence \(\{x_n\}\) is generated iteratively by

$$\begin{aligned} {\left\{ \begin{array}{ll} u_n = J_{\mu _n}^{A}(\alpha _n u \oplus (1-\alpha _n) x_n),\\ y_n = (1-\beta _n) u_n \oplus \beta _n v_n,\\ x_{n+1} = (1-\delta _n) y_n \oplus \delta _n z_n,~~~n\ge 1, \end{array}\right. } \end{aligned}$$
(3.23)

where \(v_n\) is the gate of \(h_1\) in \(Su_n,\) \(z_n\) is the gate of \(h_2\) in \(Ty_n\), and \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] satisfying the following conditions:

  1. (C1)

    \(0 \le \beta _n, \delta _n \le 1 - \lambda \),

  2. (C2)

    \(\liminf \limits _{n\rightarrow \infty }\mu _n >0\),

  3. (C3)

    \(\lim \limits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^{\infty }\alpha _n=+\infty .\)

Then, \(\{x_n\}\) converges strongly to \(p = P_\Gamma u\), where \(P_\Gamma \) is the metric projection onto \(\Gamma \).

By setting S and T to be quasi nonexpansive mappings in Theorem 3.2, we obtain the following result:

Corollary 3.4

Let X be a complete CAT(0) space and \(X^*\) be its dual space. Let \(A_1,A_2,\dots ,A_k:X \rightarrow 2^{X^*}\) be multivalued monotone operators satisfying the range condition and \(S,T:D \rightarrow CB(X)\) be two L-Lipschitzian quasi nonexpansive mappings. Suppose S and T satisfy the gate condition and \(h_1,~h_2\) are the keys of S and T, respectively. Assume that \(\Gamma := F(S)\cap F(T)\cap \bigcap _{i=1}^k A_i^{-1}(0) \ne \emptyset \) and for arbitrary points \(u,x_1 \in X\), the sequence \(\{x_n\}\) is generated iteratively by

$$\begin{aligned} {\left\{ \begin{array}{ll} u_n = J_{\mu _n}^{A_k}\circ J_{\mu _n}^{A_{k-1}}\circ \dots \circ J_{\mu _n}^{A_1}(\alpha _n u \oplus (1-\alpha _n) x_n),\\ y_n = (1-\beta _n) u_n \oplus \beta _n v_n,\\ x_{n+1} = (1-\delta _n) y_n \oplus \delta _n z_n,~~~n\ge 1, \end{array}\right. } \end{aligned}$$
(3.24)

where \(v_n\) is the gate of \(h_1\) in \(Su_n,\) \(z_n\) is the gate of \(h_2\) in \(Ty_n\), and \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] satisfying the following conditions:

  1. (C1)

    \(0< a \le \beta _n, \delta _n \le b < 1\),

  2. (C2)

    \(\liminf \limits _{n\rightarrow \infty }\mu _n >0\),

  3. (C3)

    \(\lim \limits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^{\infty }\alpha _n=+\infty .\)

Then, \(\{x_n\}\) converges strongly to \(p = P_\Gamma u\), where \(P_\Gamma \) is the metric projection onto \(\Gamma \).

4 Applications and numerical example

It is well known that the subdifferential of a proper, convex and lower semicontinuous function is maximal monotone in Hilbert spaces and satisfies the range condition. In the sequel, we approximate a minimizer of a proper, convex and lower semicontinuous function in complete CAT(0) spaces.

Definition 4.1

Let X be a complete CAT(0) space and \(X^*\) be its dual space. Let \(f:X\rightarrow (-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function with effective domain \(\mathbb {D}(f)=\{x\in X:f(x)<+\infty \};\) then the subdifferential of f is the multivalued function \(\partial f:X\rightarrow 2^{X^*}\) defined by

$$\begin{aligned} \partial f(x)={\left\{ \begin{array}{ll} \{x^*\in X^*:f(z)-f(x)\ge \langle x^*,\overrightarrow{xz}\rangle ,~\forall ~z\in \mathbb {D}(f)\}, &{}\text{ if } x\in \mathbb {D}(f),\\ \emptyset , &{}\text{ otherwise. } \end{array}\right. } \end{aligned}$$

Theorem 4.2

[25] Let \(f : X\rightarrow (-\infty ,+\infty ]\) be a proper, lower semicontinuous and convex function on a complete CAT(0) space X with dual \(X^*;\) then

  1. (i)

    f attains its minimum at \(x\in X\) if and only if \(0\in \partial f(x),\)

  2. (ii)

    \(\partial f:X\rightarrow 2^{X^*}\) is a monotone operator,

  3. (iii)

    for any \(y\in X\) and \(\alpha >0,\) there exists a unique point \(x\in X\) such that \([\alpha \overrightarrow{xy}]\in \partial f(x)\), that is, \(\mathbb {D}(J_{\lambda }^{\partial f}) = X, \forall ~\lambda > 0.\)

In 2013, Bačák [3] introduced the PPA in CAT(0) spaces as follows:

$$\begin{aligned} x_{n+1}=\arg \min _{u\in X}\left[ f(u)+ \frac{1}{2\lambda _n}d^2(u,x_n)\right] , \end{aligned}$$
(4.1)

for \(n\in \mathbb {N}\), where \(\lambda _n>0\) is such that \(\sum _{n=1}^{\infty }\lambda _n=\infty \). He proved that the sequence generated by (4.1) \(\Delta \)-converges to a minimizer of f.

Proposition 4.3

[26] Let \(f:X\rightarrow (-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function on complete CAT(0) space X and \(X^*\) be its dual space. Then

$$\begin{aligned} J_{\lambda }^{\partial f}(x)=\arg \min _{z\in X}\big [f(z)+\frac{1}{2\lambda }d^2(z,x)\big ],~~\forall ~\lambda >0 \text{ and } x\in X. \end{aligned}$$
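In \(\mathbb {R}\) (a complete CAT(0) space) Proposition 4.3 is the usual proximal operator. As an illustration of ours, for \(f(z)=|z|\) the resolvent \(J_\lambda ^{\partial f}\) is the well-known soft-thresholding map, which the sketch below recovers by direct numerical minimization of \(f(z)+\frac{1}{2\lambda }(z-x)^2\):

```python
# Sketch: the resolvent of the subdifferential of f(z) = |z| on the real line,
# computed by brute-force minimization of f(z) + (1/(2*lam)) (z - x)^2
# and compared with the closed-form soft-thresholding map.
import numpy as np

def prox_numeric(f, x, lam, grid):
    values = f(grid) + (grid - x) ** 2 / (2 * lam)
    return grid[np.argmin(values)]

def soft_threshold(x, lam):
    return np.sign(x) * max(abs(x) - lam, 0.0)

grid = np.linspace(-10, 10, 400001)
for x in [-3.0, -0.2, 0.0, 0.7, 4.5]:
    for lam in [0.5, 1.0, 2.0]:
        z1 = prox_numeric(np.abs, x, lam, grid)
        z2 = soft_threshold(x, lam)
        assert abs(z1 - z2) < 1e-3, (x, lam, z1, z2)
print("J_lambda^{df} for f = |.| matches soft-thresholding")
```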

The following result is an application of Theorem 3.2 to obtain a common solution of a finite family of proper, convex and lower semicontinuous functions and fixed points of two demicontractive multivalued mappings.

Theorem 4.4

Let X be a complete CAT(0) space and \(X^*\) be its dual. Let \(f_1,f_2,\dots ,f_k:X \rightarrow (-\infty ,\infty ]\) be proper, convex and lower semicontinuous functions. Let \(S,T:D \rightarrow CB(D)\) be two L-Lipschitzian demicontractive mappings with coefficients \(\lambda _1\) and \(\lambda _2\) such that \(\lambda = \max \{\lambda _1, \lambda _2\}\). Suppose S and T satisfy the gate condition. Let \(h_1,~h_2\) be keys of S and T respectively. Assume that \(\Gamma := F(S)\cap F(T)\cap \bigcap _{i=1}^k \arg \min \limits _{y\in X} f_i(y) \ne \emptyset \) and for arbitrary points \(u,x_1 \in X\), the sequence \(\{x_n\}\) is generated iteratively by

$$\begin{aligned} {\left\{ \begin{array}{ll} u_n = J_{\mu _n}^{\partial f_k}\circ J_{\mu _n}^{\partial f_{k-1}}\circ \dots \circ J_{\mu _n}^{\partial f_1}(\alpha _n u \oplus (1-\alpha _n) x_n),\\ y_n = (1-\beta _n) u_n \oplus \beta _n v_n,\\ x_{n+1} = (1-\delta _n) y_n \oplus \delta _n z_n, \end{array}\right. } \end{aligned}$$
(4.2)

where \(v_n\) is the gate of \(h_1\) in \(Su_n,\) \(z_n\) is the gate of \(h_2\) in \(Ty_n\), and \(\{\alpha _n\}\), \(\{\beta _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1]. Suppose the following conditions are satisfied:

  1. (C1)

    \(0 \le \beta _n, \delta _n \le 1 - \lambda \),

  2. (C2)

    \(\liminf \limits _{n\rightarrow \infty }\mu _n >0\),

  3. (C3)

    \(\lim \limits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^{\infty }\alpha _n=+\infty .\)

Then, \(\{x_n\}\) converges strongly to \(p = P_\Gamma u\), where \(P_\Gamma \) is the metric projection onto \(\Gamma \).

Proof

By Theorem 4.2, each \(\partial f_i\) is a monotone operator satisfying the range condition. Setting \(A_i=\partial f_i\) in Theorem 3.2, the proof follows. \(\square \)

Next, we give a numerical example in (\(\mathbb {R}^2, ||.||_2\)) (where \(\mathbb {R}^2\) is the Euclidean plane) to illustrate the applicability of our main result.

Example 4.5

Let \(k=2\) in Theorem 3.2. Then for \(i=1\), we define \(A_1:\mathbb {R}^2\rightarrow \mathbb {R}^2\) by

$$\begin{aligned} A_1(x)=(x_1+x_2, ~x_2-x_1). \end{aligned}$$

Then \(A_1\) is a monotone operator.

Recall that \([t\overrightarrow{ab}]\equiv t(b-a)\), for all \(t\in \mathbb {R}\) and \(a,b\in \mathbb {R}^2\) (see [25]). Using this, we have for each \(x\in \mathbb {R}^2\) that

$$\begin{aligned} J^{1}_{\mu _n}(x)=z~ \Longleftrightarrow ~\frac{1}{\mu _n}(x-z)= A_1z~ \Longleftrightarrow ~x=(I+\mu _n A_1)z~ \Longleftrightarrow ~z=(I+\mu _n A_1)^{-1}x. \end{aligned}$$

Hence, we compute the resolvent of \(A_1\) as follows:

$$\begin{aligned} J^{1}_{\mu _n} (x)= & {} \left( \begin{bmatrix} 1&\quad 0\\ 0&\quad 1 \end{bmatrix}+\begin{bmatrix} \mu _n&\quad \mu _n\\ - \mu _n&\quad \mu _n \end{bmatrix}\right) ^{-1}\begin{bmatrix} x_1\\ x_2 \end{bmatrix}\\\\= & {} {\begin{bmatrix} 1+\mu _n&\quad \mu _n\\ -\mu _n&\quad 1+\mu _n \end{bmatrix}}^{-1}\begin{bmatrix} x_1\\ x_2 \end{bmatrix}\\\\= & {} \frac{1}{1+2\mu _n +2\mu _n^2} \begin{bmatrix} 1+\mu _n&\quad -\mu _n \\ \mu _n&\quad 1+\mu _n \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix}\\\\= & {} \left( \frac{(1+\mu _n)x_1-\mu _n x_2}{1+2\mu _n +2\mu _n^2}, ~\frac{\mu _n x_1 +(1+\mu _n) x_2}{1+2\mu _n +2\mu _n^2}\right) . \end{aligned}$$

Thus,

$$\begin{aligned} J^{1}_{\mu _n} (x)= \left( \frac{(1+\mu _n)x_1-\mu _n x_2}{1+2\mu _n +2\mu _n^2}, ~\frac{\mu _n x_1 +(1+\mu _n) x_2}{1+2\mu _n +2\mu _n^2}\right) . \end{aligned}$$

Now, for \(i=2\), let \(A_2:\mathbb {R}^2\rightarrow \mathbb {R}^2\) be defined by

$$\begin{aligned} A_2(x)=(x_2, ~-x_1). \end{aligned}$$

So that by the same argument as in above, we obtain

$$\begin{aligned} J^{2}_{\mu _n} (x)=\left( \frac{x_1-\mu _n x_2}{1+\mu _n^2},~ \frac{x_2+\mu _n x_1}{1+\mu _n^2}\right) . \end{aligned}$$

Thus for \(i=1,2\), we obtain

$$\begin{aligned}&J_{\mu _n}^{2}(J_{\mu _n}^{1} x)\\&=\left( \frac{(1+\mu _n-\mu _n^2)x_1-(2\mu _n+\mu _n^2)x_2}{(1+\mu _n^2)(1+2\mu _n+2\mu _n^2)}, ~\frac{(2\mu _n+\mu _n^2)x_1+(1+\mu _n-\mu _n^2)x_2}{(1+\mu _n^2)(1+2\mu _n+2\mu _n^2)}\right) . \end{aligned}$$
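The closed forms of \(J^1_{\mu _n}\), \(J^2_{\mu _n}\) and their composition can be verified numerically; a short sketch of ours compares them with \(z=(I+\mu A_i)^{-1}x\):

```python
# Check the closed-form resolvents of A_1(x) = (x1 + x2, x2 - x1) and
# A_2(x) = (x2, -x1) against z = (I + mu * A_i)^{-1} x, and their composition.
import numpy as np

A1 = np.array([[1.0, 1.0], [-1.0, 1.0]])
A2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
I = np.eye(2)

def J(A, mu, x):
    return np.linalg.solve(I + mu * A, x)

def J1_closed(mu, x):
    x1, x2 = x
    D = 1 + 2 * mu + 2 * mu**2
    return np.array([((1 + mu) * x1 - mu * x2) / D, (mu * x1 + (1 + mu) * x2) / D])

def J2_closed(mu, x):
    x1, x2 = x
    D = 1 + mu**2
    return np.array([(x1 - mu * x2) / D, (x2 + mu * x1) / D])

rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.normal(size=2)
    mu = rng.uniform(0.1, 2.0)
    assert np.allclose(J(A1, mu, x), J1_closed(mu, x))
    assert np.allclose(J(A2, mu, x), J2_closed(mu, x))
    assert np.allclose(J(A2, mu, J(A1, mu, x)), J2_closed(mu, J1_closed(mu, x)))
print("resolvent formulas verified")
```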

Let \(S,~T:X\rightarrow 2^X\) be defined as in Example 1.1:

for \(j=1\), we have

$$\begin{aligned} Sx = {\left\{ \begin{array}{ll} {}\big [ -\frac{6}{5}x, -2x \big ],~~\text{ if }~ x \le 0,\\ {}\big [ -2x, -\frac{6}{5}x \big ],~~\text{ if }~ x > 0. \end{array}\right. } \end{aligned}$$

Similarly, for \(j=10\), we have

$$\begin{aligned} Tx = {\left\{ \begin{array}{ll} {}\big [ -\frac{51}{5}x, -11x \big ],~~\text{ if }~ x \le 0,\\ {}\big [ -11x, -\frac{51}{5}x \big ],~~\text{ if }~ x > 0. \end{array}\right. } \end{aligned}$$

Then S and T are \(\lambda _1\)- and \(\lambda _2\)-demicontractive mappings with \(\lambda _1 = \frac{75}{121}\) and \(\lambda _2 = \frac{3000}{3136}\), respectively. Take \(\delta _n=\frac{4n+1}{100n+9}\), \(\beta _n=\frac{1}{100(n+1)}\), \(\alpha _n=\frac{1}{n}\) and \(\mu _n=\frac{1}{2}\); then conditions (C1)–(C3) in Theorem 3.2 are satisfied.

Hence, for \(u, x_1 \in \mathbb {R}^2\), our Algorithm (3.1) becomes:

$$\begin{aligned} {\left\{ \begin{array}{ll} u_n=J_{\mu _n}^{A_2}\circ J_{\mu _n}^{A_1}(\frac{1}{n}u + \frac{n-1}{n}x_n),\\ y_n=\frac{100n+99}{100(n+1)}u_n + \frac{1}{100(n+1)}v_n,\\ x_{n+1}=\frac{96n+8}{100n+9}y_n+\frac{4n+1}{100n+9}z_n,~ n\ge 1. \end{array}\right. } \end{aligned}$$
(4.3)

Case I

  1. (a)

    Take \(x_1=(0.5,~ 1)^T\) and \(u=(-5, ~10)^T\).

  2. (b)

    Take \(x_1=(2,~ 5)^T\) and \(u=(5, ~10)^T\).

Case II

  1. (a)

    Take \(x_1=(2,~ 3)^T,~u=(1, ~4)^T\).

  2. (b)

    Take \(x_1=(-2,~ 3)^T,~u=(2, ~5)^T\).
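For completeness, here is a minimal Python sketch of iteration (4.3) (ours). Two modelling choices that the example leaves implicit are made explicit and should be read as our assumptions: the mappings S and T of Example 1.1 are applied coordinatewise on \(\mathbb {R}^2\), and the keys are taken to be \(h_1=h_2=0\), so that the gates \(v_n\), \(z_n\) are the coordinatewise nearest points of the intervals \(Su_n\), \(Ty_n\) to 0.

```python
# Sketch of iteration (4.3) in R^2 (assumptions: S, T of Example 1.1 act
# coordinatewise, keys h1 = h2 = 0, so gates are nearest-point projections of 0).
import numpy as np

A1 = np.array([[1.0, 1.0], [-1.0, 1.0]])
A2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
I = np.eye(2)
mu = 0.5

def resolvent(A, x):
    return np.linalg.solve(I + mu * A, x)

def gate_of_zero(j, s):
    """Coordinatewise gate of 0 in the interval of Example 1.1 with parameter j,
    i.e. the endpoint of [-(5j+1)/5 s, -(j+1) s] (suitably ordered) closest to 0."""
    lo = np.minimum(-(5 * j + 1) / 5 * s, -(j + 1) * s)
    hi = np.maximum(-(5 * j + 1) / 5 * s, -(j + 1) * s)
    return np.clip(0.0, lo, hi)

def run(x1, u, num_iter=50):
    x = np.array(x1, dtype=float)
    u = np.array(u, dtype=float)
    errors = []
    for n in range(1, num_iter + 1):
        alpha, beta, delta = 1.0 / n, 1.0 / (100 * (n + 1)), (4 * n + 1) / (100 * n + 9)
        w = alpha * u + (1 - alpha) * x
        un = resolvent(A2, resolvent(A1, w))        # J^{A_2} o J^{A_1}
        vn = gate_of_zero(1, un)                    # gate of h1 = 0 in S u_n  (S: j = 1)
        yn = (1 - beta) * un + beta * vn
        zn = gate_of_zero(10, yn)                   # gate of h2 = 0 in T y_n  (T: j = 10)
        x = (1 - delta) * yn + delta * zn
        errors.append(np.linalg.norm(x))            # p = (0, 0) is the common solution
    return errors

errs = run(x1=[0.5, 1.0], u=[-5.0, 10.0])           # Case I (a)
print(errs[0], errs[-1])                            # the error decreases towards 0
```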

Fig. 1 Errors versus iteration number n: Case I(a) (top left); Case I(b) (top right); Case II(a) (bottom left); Case II(b) (bottom right)

Matlab version R2017a was used to obtain the graphs of errors against the number of iterations.

Remark 4.6

Using different choices of the initial vectors \(x_1\) and u (that is, Cases I and II), we obtain the numerical results shown in Figure 1. We see that the error values converge to 0, suggesting that, for arbitrary starting points, the sequence \(\{x_n\}\) converges to the common zero of \(A_i,~i=1,2,\) which is also a common fixed point of S and T.

5 Conclusion

The gate condition, which is weaker than the endpoint condition commonly used by many authors in this direction, was used to establish the strong convergence of a Halpern-type proximal point algorithm to a common solution of a finite family of monotone inclusion problems and a common fixed point of two L-Lipschitzian demicontractive multivalued mappings in the framework of CAT(0) spaces. Furthermore, the obtained result (that is, Theorem 3.2) was applied to solve minimization problems in CAT(0) spaces, and numerical experiments were given to further show its applicability.