1 Introduction

The split feasibility problem (SFP), which was introduced by Censor and Elfving [14] in 1994, is the first instance of the split inverse problem. In general, a split problem of this type consists in finding a point of a closed convex subset of a Hilbert space whose image under a bounded linear operator belongs to a closed convex subset of another Hilbert space. The SFP has been studied by many authors (see [2, 30, 38, 46, 47]) because of its real-world applications, for instance in signal processing, image reconstruction, and intensity-modulated radiation therapy; see [3, 7, 12, 15, 30]. Recently, various split problems have been introduced and studied (see [6, 16, 17, 22, 26]), and one of the important generalizations of the SFP is the split common fixed point problem (SCFPP), first introduced by Censor and Segal [17] in 2009 for the class of directed mappings in Euclidean spaces. Since then, many researchers have studied the SCFPP for various classes of mappings in Hilbert spaces (and even in Banach spaces); see [5, 9, 18, 19, 23, 24, 27, 29, 35, 36, 37, 39, 40, 42, 48] for instance. The SCFPP asks for a common fixed point of a family of mappings in a Hilbert space whose image under a given bounded linear operator is a common fixed point of another family of mappings in the image space. Let us recall the SFP and the SCFPP and review some well-known methods for approximating their solutions; along the way, the motivation and purpose of this paper are presented.

Let \({\mathcal {H}}\) and \({\mathcal {K}}\) be two real Hilbert spaces, and let \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) be a bounded linear operator. The SFP is to find a point

$$\begin{aligned} x \in C ~~~\text {such that }~~~ Ax \in Q, \end{aligned}$$
(1.1)

where \(C \subseteq {\mathcal {H}}\) and \(Q \subseteq {\mathcal {K}}\) are nonempty closed convex subsets. Byrne [2] proposed the so-called CQ algorithm for solving the SFP (1.1) as follows:

$$\begin{aligned} x_{n+1} = P_{C}\left( x_n - \gamma A^{*}(I-P_{Q})Ax_n\right) , ~~n \ge 1, \end{aligned}$$
(1.2)

where \(\gamma \in \left( 0,\frac{2}{\Vert A\Vert ^2}\right) \), \(A^{*}\) denotes the adjoint operator of A, and \(P_{C}\), \(P_{Q}\) are the projections onto C and Q, respectively.
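
For illustration, the CQ iteration (1.2) admits the following minimal Python sketch, assuming for simplicity that C is a box and Q a Euclidean ball so that \(P_{C}\) and \(P_{Q}\) have closed forms; the operator A and all data below are invented for the example and are not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Projection onto the box C = [lo, hi] (componentwise clipping).
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):
    # Projection onto the closed ball Q = B(center, radius).
    d = y - center
    nd = np.linalg.norm(d)
    return y if nd <= radius else center + radius * d / nd

def cq_algorithm(A, x0, lo, hi, center, radius, n_iter=200):
    # CQ iteration (1.2): x_{n+1} = P_C( x_n - gamma * A^T (I - P_Q) A x_n )
    # with a fixed stepsize gamma in (0, 2 / ||A||^2).
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = A @ x - project_ball(A @ x, center, radius)   # (I - P_Q) A x_n
        x = project_box(x - gamma * (A.T @ r), lo, hi)
    return x

# Illustrative data (not from the paper).
A = np.array([[2.0, -1.0, 0.5], [0.0, 1.0, -1.0]])
print(cq_algorithm(A, np.array([5.0, -3.0, 2.0]),
                   lo=-np.ones(3), hi=np.ones(3),
                   center=np.zeros(2), radius=0.5))
```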

Let \(S_{i} : {\mathcal {H}} \rightarrow {\mathcal {H}}\) \((i = 1, 2,\ldots , s)\) and \(T_{j} : {\mathcal {K}} \rightarrow {\mathcal {K}}\) \((j = 1, 2,\ldots , t)\) be mappings with nonempty fixed point sets \(\mathop {\mathrm {Fix}}(S_{i})\) and \(\mathop {\mathrm {Fix}}(T_{j})\), respectively. The SCFPP is formulated as finding a point

$$\begin{aligned} x \in \bigcap _{i=1}^{s}\mathop {\mathrm {Fix}}(S_{i}) ~~~\text {such that}~~~ Ax \in \bigcap _{j=1}^{t}\mathop {\mathrm {Fix}}(T_{j}). \end{aligned}$$
(1.3)

In the case \(s = t = 1\), the SCFPP (1.3) reduces to finding a point

$$\begin{aligned} x \in \mathop {\mathrm {Fix}}(S) ~~~\text {such that}~~~ Ax \in \mathop {\mathrm {Fix}}(T), \end{aligned}$$
(1.4)

where \(S : {\mathcal {H}} \rightarrow {\mathcal {H}}\) and \(T : {\mathcal {K}} \rightarrow {\mathcal {K}}\) are two mappings with nonempty fixed point sets \(\mathop {\mathrm {Fix}}(S)\) and \(\mathop {\mathrm {Fix}}(T)\), respectively. Problem (1.4) is usually called the split fixed point problem (SFPP).

In order to solve the SFPP (1.4), Censor and Segal [17] introduced the following iterative method for two directed mappings S and T:

$$\begin{aligned} x_{n+1} = S\left( x_{n} - \gamma A^{*}(I - T)Ax_{n}\right) , ~~n \ge 1, \end{aligned}$$
(1.5)

where \(\gamma \in \left( 0,\frac{2}{\Vert A\Vert ^2}\right) \), and obtained a convergence theorem under the closedness principle. Moudafi [35] introduced the following relaxed algorithm for two demicontractive mappings S and T with coefficients \(\kappa _{1}\) and \(\kappa _{2}\), respectively:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n = x_n - \gamma A^{*}(I - T)Ax_n,\\ x_{n+1} = (1-\alpha _{n})y_{n} + \alpha _{n}Sy_{n}, ~~n \ge 1, \end{array}\right. } \end{aligned}$$
(1.6)

where \(\gamma \in \left( 0,\frac{1 - \kappa _{2} }{\Vert A\Vert ^2}\right) \) and \(\alpha _{n} \in (0, 1)\). He proved weak convergence of this algorithm under the demiclosedness principle and suitable control conditions. It is well known that the class of demicontractive mappings [13, 20, 21, 31] includes several common classes of mappings occurring in nonlinear analysis and optimization.

It is observed that the parameter \(\gamma \) in Algorithms (1.2), (1.5) and (1.6) depends on the norm of A. Hence these algorithms have the drawback that their implementation requires computing or estimating the operator norm \(\Vert A\Vert \), which is generally not an easy task in practice. López et al. [30] proposed a way of selecting the stepsize

$$\begin{aligned} \gamma _{n} := \frac{\lambda _{n}\Vert (I - T)Ax_{n}\Vert ^{2}}{\left\| A^{*}(I - T)Ax_{n}\right\| ^{2}}, \end{aligned}$$
(1.7)

where \(T := P_{Q}\) and \(\lambda _{n} \in (0, 2)\), to replace the parameter \(\gamma \) in Algorithm (1.2) for solving the SFP (1.1). One can see that the choice of the stepsize \(\gamma _{n}\) in (1.7) does not depend on \(\Vert A\Vert \), although it does depend on \(x_{n}\). For the SFPP (1.4), many authors have introduced self-adaptive algorithms by selecting the stepsize in a similar way to (1.7); see [5, 10, 18, 24, 37, 39, 41]. Moreover, it is illustrated in [24, Example 6.1] that an algorithm whose stepsize is defined by (1.7) requires fewer iterations than an algorithm depending on the operator norm.
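
The adaptive choice (1.7) can be evaluated at each iterate from the current residual alone, as in the following hedged sketch; here T is taken to be a Euclidean-ball projection purely for illustration, and, as in the setting of [30], the denominator \(\Vert A^{*}(I - T)Ax_{n}\Vert \) is assumed to be nonzero whenever the residual is nonzero.

```python
import numpy as np

def adaptive_stepsize(A, x, T, lam):
    # Stepsize (1.7): gamma_n = lam * ||(I - T)Ax||^2 / ||A^T (I - T)Ax||^2,
    # with the convention gamma_n = 0 when the residual vanishes.
    r = A @ x - T(A @ x)          # (I - T) A x_n
    g = A.T @ r                   # A^* (I - T) A x_n (assumed nonzero when r != 0)
    return lam * np.dot(r, r) / np.dot(g, g) if np.any(r) else 0.0

# Illustrative data: T = projection onto the closed unit ball.
T = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)
A = np.array([[1.0, 2.0], [0.0, 3.0]])
print(adaptive_stepsize(A, np.array([2.0, -1.0]), T, lam=1.0))
```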

In 2016, the SCFPP for multi-valued mappings was first considered by Latif and Eslamian [29]. They proposed a viscosity-type algorithm for solving the SCFPP for finite families of quasi-nonexpansive multi-valued mappings and proved a strong convergence result. In 2019, Jailoka and Suantai [23] introduced an algorithm for solving the SCFPP for two infinite families of demicontractive multi-valued mappings and proved a strong convergence theorem. However, the algorithms in [23, 29] still depend on the operator norm.

It is worth mentioning that fixed point theory [4, 8, 25, 43] plays an important role in nonlinear analysis and can be applied to a variety of problems, since the solutions of such problems can be expressed as fixed points of suitable mappings. We also note that the SCFPP includes other convex optimization problems as special cases, such as the split variational inequality problem (SVIP) [16], the split common null point problem (SCNPP) [6], the split equilibrium problem (SEP) [26], the proximal split feasibility problem (PSMP), and the convex feasibility problem (CFP).

In this paper, motivated by the above-mentioned works, we are interested in studying the split fixed point problem (SFPP) for multi-valued mappings in Hilbert spaces. Our main objective is to devise a strongly convergent algorithm, requiring no prior knowledge of the operator norm, for solving the SFPP for the class of demicontractive mappings. The paper is organized as follows. In Sect. 2, basic definitions, notations, and some useful lemmas for proving our main result are given. Our main result is in Sect. 3: we introduce a self-adaptive algorithm based on the viscosity approximation method, with the stepsize selected adaptively as in (1.7), for solving the SFPP for two demicontractive multi-valued mappings, and we establish a strong convergence theorem for the proposed algorithm under suitable conditions. Some consequences of our main result are given in Sect. 4. Finally, in Sect. 5, we provide a numerical example to support our main result and to demonstrate the convergence behavior of our algorithm.

2 Preliminaries

Throughout this paper, we assume that \({\mathcal {H}}\) and \({\mathcal {K}}\) are real Hilbert spaces with inner products \(\langle \cdot , \cdot \rangle \) and the induced norms \(\Vert \cdot \Vert \). The following notations are adopted:

  • \({\mathbb {R}}\) : the set of real numbers,

  • \({\mathbb {N}}\) : the set of positive integers,

  • I : the identity operator on a Hilbert space,

  • \( x_{n} \rightharpoonup x\) : \(\{x_{n}\}\) converges weakly to x,

  • \( x_{n} \rightarrow x\) : \(\{x_{n}\}\) converges strongly to x.

Let \(x , y \in {\mathcal {H}}\) and \(\alpha \in [0, 1]\). Then the following properties hold on \({\mathcal {H}}\):

$$\begin{aligned}&\Vert \alpha x + (1 - \alpha ) y \Vert ^{2} = \alpha \Vert x\Vert ^{2} + (1 - \alpha )\Vert y\Vert ^{2} - \alpha (1 - \alpha )\Vert x - y\Vert ^{2}; \end{aligned}$$
(2.1)
$$\begin{aligned}&\Vert x + y \Vert ^{2} \le \Vert x\Vert ^{2} + 2\langle y, x + y\rangle . \end{aligned}$$
(2.2)

Property (2.1) is known as the strong convexity of \(\Vert \cdot \Vert ^{2}\).

Let \(D \subseteq {\mathcal {H}}\) be a nonempty closed convex set. The projection from \({\mathcal {H}}\) onto D, denoted by \(P_{D}\), assigns to each \(x \in {\mathcal {H}}\) the unique element \(P_{D}x \in D\) such that

$$\begin{aligned} \Vert x - P_{D}x \Vert = d(x, D) := \inf \{ \Vert x - y \Vert : y \in D \}. \end{aligned}$$

Let \(x \in {\mathcal {H}}\) and \(u \in D\). It is known that \(u = P_{D}x\) if and only if

$$\begin{aligned} \langle x - u, y - u \rangle \le 0, ~~\forall y \in D. \end{aligned}$$

A mapping \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) is called a \(\mu \)-contraction with respect to D, where \(\mu \in [0, 1)\), if

$$\begin{aligned} \Vert Fx - Fy\Vert \le \mu \Vert x - y \Vert , ~~\forall x \in {\mathcal {H}}, \forall y \in D. \end{aligned}$$

If F is a \(\mu \)-contraction with respect to D, then \(P_{D}F\) is also a \(\mu \)-contraction with respect to D.

A fixed point of a mapping \(S : {\mathcal {H}} \rightarrow {\mathcal {H}}\) is a point in \({\mathcal {H}}\), which is mapped to itself by S, and the set of all fixed points of S is denoted by \(\mathop {\mathrm {Fix}}(S) := \{ x \in {\mathcal {H}} : x = Sx \}\). A mapping \(S : {\mathcal {H}} \rightarrow {\mathcal {H}}\) having a fixed point is said to be

(i)

    directed if

    $$\begin{aligned} \Vert Sx - u \Vert ^{2} \le \Vert x - u \Vert ^{2} - \Vert x - Sx\Vert ^{2},~~\forall x \in {\mathcal {H}}, \forall u \in \mathop {\mathrm {Fix}}(S), \end{aligned}$$
(ii)

    quasi-nonexpansive if

    $$\begin{aligned} \Vert Sx - u \Vert \le \Vert x - u \Vert ,~~\forall x \in {\mathcal {H}}, \forall u \in \mathop {\mathrm {Fix}}(S), \end{aligned}$$
(iii)

    demicontractive ([20, 31]) if there exists \(\kappa \in [0, 1)\) such that

    $$\begin{aligned} \Vert Sx - u \Vert ^{2} \le \Vert x - u \Vert ^{2} + \kappa \Vert x - Sx\Vert ^{2},~~\forall x \in {\mathcal {H}}, \forall u \in \mathop {\mathrm {Fix}}(S). \end{aligned}$$

    It can be seen that the class of demicontractive mappings includes the class of quasi-nonexpansive mappings and the class of directed mappings.

Next, we recall some notations and definitions on multi-valued mappings. Let \(S : {\mathcal {H}} \rightarrow 2^{{\mathcal {H}}}\) be a multi-valued mapping. An element \(u \in {\mathcal {H}}\) is called a fixed point of S if \(u \in Su\). The set of all fixed points of S is also denoted by \(\mathop {\mathrm {Fix}}(S)\). We say that S satisfies the endpoint condition if \(Su = \{u\}\) for all \(u \in \mathop {\mathrm {Fix}}(S)\). The Pompeiu-Hausdorff metric on \(CB({\mathcal {H}})\) is defined by

$$\begin{aligned} H(D, E) := \max \left\{ \sup _{x \in D} d(x, E), \sup _{y \in E} d(y, D)\right\} \end{aligned}$$

for all \(D, E \in CB({\mathcal {H}})\), where \(CB({\mathcal {H}})\) denotes the family of all nonempty closed bounded subsets of \({\mathcal {H}}\).
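
To make the Pompeiu-Hausdorff metric concrete, the following sketch evaluates H(D, E) for finite point sets in \({\mathbb {R}}^{d}\) (finite sets are nonempty, closed and bounded); the sets below are illustrative only.

```python
import numpy as np

def hausdorff(D, E):
    # Pompeiu-Hausdorff distance between finite sets D, E (rows are points):
    # H(D, E) = max( sup_{x in D} d(x, E), sup_{y in E} d(y, D) ).
    dists = np.linalg.norm(D[:, None, :] - E[None, :, :], axis=2)
    return max(dists.min(axis=1).max(), dists.min(axis=0).max())

D = np.array([[0.0, 0.0], [1.0, 0.0]])
E = np.array([[0.0, 1.0], [1.0, 1.0], [3.0, 0.0]])
print(hausdorff(D, E))  # the sup-distance from E to D dominates here: 2.0
```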

A class of demicontractive multi-valued mappings was first introduced in [13, 21] in the following way:

Definition 2.1

([13, 21]). A multi-valued mapping \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) is said to be demicontractive if \(\mathop {\mathrm {Fix}}(S) \ne \emptyset \), and there exists \(\kappa \in [0,1)\) such that

$$\begin{aligned} H(Sx, Su)^{2} \le \Vert x - u \Vert ^{2} + \kappa ~d(x, Sx)^{2}, ~~~\forall x \in {\mathcal {H}}, \forall u \in \mathop {\mathrm {Fix}}(S). \end{aligned}$$
(2.3)

In particular, if \(\kappa = 0\), then S is called quasi-nonexpansive.

Example 2.2

A multi-valued mapping \(S : {\mathbb {R}} \rightarrow CB({\mathbb {R}})\) defined by \(Sx = \left[ \frac{|x|}{4}, |x| \right] \) is quasi-nonexpansive with \(\mathop {\mathrm {Fix}}(S) = [0, \infty )\).

The following example shows that the class of demicontractive multi-valued mappings properly contains the class of quasi-nonexpansive multi-valued mappings.

Example 2.3

([23]). For each \(i \in {\mathbb {N}}\), we define \(S_{i} : {\mathbb {R}} \rightarrow CB({\mathbb {R}})\) by

$$\begin{aligned} S_{i}x = {\left\{ \begin{array}{ll} \left[ -\frac{(2i + 1)x}{2}, -(i+1)x \right] , ~~ &{}\text {if}~x \le 0,\\ \left[ -(i+1)x, -\frac{(2i + 1)x}{2} \right] , &{}\text {if}~x > 0.\\ \end{array}\right. } \end{aligned}$$

Then \(\mathop {\mathrm {Fix}}(S_{i}) = \{0\}\), and \(S_{i}\) is demicontractive with coefficient \(\kappa _{i} = \frac{4i(i+2)}{(2i+3)^2} \in (0, 1)\) but not quasi-nonexpansive, for all \(i \in {\mathbb {N}}\).

The following lemma gives properties of a demicontractive multi-valued mapping at a fixed point u satisfying \(Su = \{u\}\).

Lemma 2.4

([23]). Let \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) be a \(\kappa \)-demicontractive multi-valued mapping. If \(u \in \mathop {\mathrm {Fix}}(S)\) such that \(Su = \{u\}\), then the following inequalities hold: for all \(x \in {\mathcal {H}}\), \(y \in Sx\),

(i):

\(\langle x - y, u - y \rangle \le \frac{1+\kappa }{2} \Vert x - y \Vert ^{2}\);

(ii):

\(\langle x - y, x - u \rangle \ge \frac{1-\kappa }{2} \Vert x - y \Vert ^{2}\).

For a demicontractive mapping \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\), the fixed point set \(\mathop {\mathrm {Fix}}(S)\) is always closed. It is shown in [41, Lemma 2.3] that if S satisfies the endpoint condition, then \(\mathop {\mathrm {Fix}}(S)\) is convex. The following lemma gives a sufficient condition for the convexity of the solution set of the SFPP for demicontractive multi-valued mappings.

Lemma 2.5

([23]). Let \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) be a bounded linear operator. Let \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) and \(T : {\mathcal {K}} \rightarrow CB({\mathcal {K}})\) be two demicontractive multi-valued mappings. Suppose that \(\Gamma := \mathop {\mathrm {Fix}}(S) \cap A^{-1}(\mathop {\mathrm {Fix}}(T)) \ne \emptyset \). Then, we have

(i):

\(\Gamma \) is closed;

(ii):

If \(Su = \{u\}\) and \(T(Au) = \{Au\}\) for all \(u \in \Gamma \), then \(\Gamma \) is convex.

We recall the notion of the so-called demiclosedness principle.

Definition 2.6

Let \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) be a multi-valued mapping. We say that \(I - S\) is demiclosed at 0 if, whenever a sequence \(\{x_{n}\}\) in \({\mathcal {H}}\) converges weakly to \(u \in {\mathcal {H}}\) and \(\Vert x_{n} - y_{n}\Vert \rightarrow 0\) for some \(y_{n} \in Sx_{n}\), it follows that \(u \in \mathop {\mathrm {Fix}}(S)\).

We end this section with the following two useful lemmas for proving our strong convergence theorem.

Lemma 2.7

([44]). Suppose that \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$\begin{aligned} a_{n+1} \le (1 - \beta _{n})a_{n} + \beta _{n}\sigma _{n} + \nu _{n},~~~n \in {\mathbb {N}}, \end{aligned}$$

where \(\{\beta _{n}\}, \{\sigma _{n}\}\) and \(\{\nu _{n}\}\) satisfy the following conditions:

(i):

\(\beta _{n} \in [0, 1]\), \(\sum _{n=1}^{\infty } \beta _{n} = \infty \);

(ii):

\(\mathop {\limsup }\limits _{n \rightarrow \infty }\sigma _{n} \le 0\) or \(\sum _{n=1}^{\infty } | \beta _{n} \sigma _{n} | < \infty \);

(iii):

\(\nu _{n} \ge 0\) and \(\sum _{n=1}^{\infty } \nu _{n} < \infty \).

Then, \(\mathop {\lim }\limits _{n \rightarrow \infty } a_{n} = 0\).

Lemma 2.8

([33]). Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) which satisfies \(a_{n_{i}} <a_{n_{i}+1}\) for all \(i \in {\mathbb {N}}\). Define a sequence of positive integers \(\{\tau (n)\}\) by

$$\begin{aligned} \tau (n) := \max \{ m \le n : a_{m} < a_{m+1}\} \end{aligned}$$

for all \(n \ge n_{0}\) (for some \(n_{0}\) large enough). Then \(\{\tau (n)\}\) is a nondecreasing sequence such that \(\tau (n) \rightarrow \infty \) as \(n \rightarrow \infty \), and it holds that

$$\begin{aligned} a_{\tau (n)} \le a_{\tau (n)+1}~~\text {and}~~a_{n} \le a_{\tau (n)+1}. \end{aligned}$$
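
As a small illustration of Lemma 2.8 (not part of the lemma itself), the index \(\tau (n)\) can be computed directly for a finite non-monotone sequence; the sequence below is invented for the example.

```python
def tau(a, n):
    # tau(n) := max{ m <= n : a_m < a_{m+1} }, defined once some m <= n with a_m < a_{m+1} exists.
    return max(m for m in range(1, n + 1) if a[m] < a[m + 1])

# Illustrative non-monotone sequence, 1-indexed via a leading placeholder.
a = [None, 3.0, 1.0, 2.0, 1.5, 1.7, 1.6, 1.4]
print([tau(a, n) for n in range(2, 7)])  # [2, 2, 4, 4, 4]; note a_{tau(n)} <= a_{tau(n)+1}
```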

3 Main Result

In this section, we present an iterative approximation method which is independent of the operator norm for solving the SFPP for multi-valued mappings, and prove its strong convergence theorem for two demicontractive mappings.

We now focus on the SFPP for two multi-valued mappings \(S : {\mathcal {H}} \rightarrow 2^{{\mathcal {H}}}\) and \(T : {\mathcal {K}} \rightarrow 2^{{\mathcal {K}}}\) as follows: Find a point

$$\begin{aligned} x \in \mathop {\mathrm {Fix}}(S) ~~~\text {such that}~~~ Ax \in \mathop {\mathrm {Fix}}(T), \end{aligned}$$
(3.1)

where \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) is a bounded linear operator. The solution set of (3.1) is denoted by \(\Gamma \). Assuming that \(\Gamma \) is nonempty, let \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a contraction with respect to \(\Gamma \). A self-adaptive method for solving the SFPP (3.1) is introduced as follows.

Algorithm 1 (A self-adaptive viscosity-type algorithm for the SFPP (3.1)). Step I: choose \(w_{n} \in T(Ax_{n})\), compute the stepsize \(\gamma _{n}\) by (3.2) and \(y_{n}\) by (3.3). Step II: choose \(z_{n} \in Sy_{n}\), compute \(u_{n}\) by (3.4) and \(x_{n+1}\) by (3.5).
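
The following Python sketch reconstructs, under stated assumptions, the steps (3.2)–(3.5) of Algorithm 1 as they are used in the proof of Theorem 3.1 and mirrored in (4.1); it is a sketch, not the authors' implementation. The helper names sel_T and sel_S (selections \(w_{n} \in T(Ax_{n})\) and \(z_{n} \in Sy_{n}\)) are introduced here only for illustration.

```python
import numpy as np

def algorithm1(A, x1, sel_T, sel_S, F, lam, delta, alpha, n_iter=100):
    # Hedged reconstruction of Algorithm 1 via Eqs. (3.2)-(3.5):
    #   Step I : w_n in T(A x_n); stepsize gamma_n by (3.2); y_n = x_n - gamma_n A^*(A x_n - w_n)  (3.3)
    #   Step II: z_n in S(y_n); u_n = (1 - delta_n) y_n + delta_n z_n                              (3.4)
    #            x_{n+1} = alpha_n F(x_n) + (1 - alpha_n) u_n                                      (3.5)
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        w = sel_T(A @ x)                          # selection w_n in T(A x_n)
        r = A @ x - w                             # A x_n - w_n
        g = A.T @ r                               # A^*(A x_n - w_n); nonzero when r != 0 (see the proof of Theorem 3.1)
        gamma = lam(n) * (r @ r) / (g @ g) if np.any(r) else 0.0   # stepsize (3.2)
        y = x - gamma * g                         # (3.3)
        z = sel_S(y)                              # selection z_n in S(y_n)
        u = (1 - delta(n)) * y + delta(n) * z     # (3.4)
        x = alpha(n) * F(x) + (1 - alpha(n)) * u  # (3.5)
    return x
```

A concrete instantiation of this sketch with the data of Example 5.1 is given in Sect. 5.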

A strong convergence result of Algorithm 1 is established below.

Theorem 3.1

Let \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) be a bounded linear operator. Let \(S : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) and \(T : {\mathcal {K}} \rightarrow CB({\mathcal {K}})\) be demicontractive multi-valued mappings with coefficients \(\kappa \) and \(\kappa ^{\prime }\), respectively, such that \(I - S\) and \(I - T\) are demiclosed at 0. Assume that \(\Gamma \ne \emptyset \) and \(Su = \{u\}\), \(T(Au) = \{Au\}\) for all \(u \in \Gamma \). Let \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a \(\mu \)-contraction with respect to \(\Gamma \). Then, any sequence \(\{x_n\}\) generated by Algorithm 1 converges strongly to a point \(x^{*} \in \Gamma \), where \(x^{*} = P_{\Gamma }F(x^{*})\) provided that the sequences \(\{\lambda _{n}\}, \{\delta _{n}\}\) and \(\{\alpha _{n}\}\) satisfy the following conditions:

(Ci):

\(0< a \le \lambda _{n} \le b < 1 - \kappa ^{\prime }\);

(Cii):

\(0< c \le \delta _{n} \le d < 1 - \kappa \);

(Ciii):

\(\mathop {\lim }\limits _{n \rightarrow \infty }\alpha _{n} = 0\) and \(\sum _{n=1}^{\infty } \alpha _{n} = \infty \).

Proof

By Lemma 2.5, \(\Gamma \) is closed and convex, and hence \(P_{\Gamma }F\) is a contraction on \(\Gamma \). By the Banach fixed point theorem, there is a unique point \(x^{*} \in \Gamma \) such that \(x^{*} = P_{\Gamma }F(x^{*})\). It follows from the characterization of \(P_{\Gamma }\) that

$$\begin{aligned} \langle Fx^{*} - x^{*}, u - x^{*} \rangle \le 0,~~\forall u \in \Gamma . \end{aligned}$$
(3.6)

Since \(x^{*} \in \Gamma \), we have \(x^{*} \in \mathop {\mathrm {Fix}}(S)\) and \(Ax^{*} \in \mathop {\mathrm {Fix}}(T)\). We first show that \(\{x_{n}\}\) is bounded. Suppose that \(Ax_{n} \ne w_{n}\). In this case, the stepsize \(\gamma _{n}\) defined by (3.2) is well defined. Indeed, if \(A^{*}(Ax_{n} - w_{n}) = 0\), then by Lemma 2.4(ii) we have

$$\begin{aligned}&\Vert Ax_{n} - w_{n} \Vert ^{2} \le \frac{2}{1 - \kappa ^{\prime }} \langle Ax_{n} - w_{n}, Ax_{n} - Ax^{*} \rangle \\&= \frac{2}{1 - \kappa ^{\prime }} \langle A^{*}(Ax_{n} - w_{n}), x_{n} - x^{*} \rangle = 0, \end{aligned}$$

that is, \(Ax_{n} = w_{n}\), which is a contradiction. Thus, \(A^{*}(Ax_{n} - w_{n}) \ne 0\). Now, we consider

$$\begin{aligned} \Vert y_{n} {-} x^{*}\Vert ^{2}&= \Vert x_{n} {-} x^{*} \Vert ^{2} {-} 2 \gamma _{n} \langle A^{*}(Ax_{n} {-} w_{n}), x_{n} {-} x^{*} \rangle + \gamma _{n}^{2} \Vert A^{*}(Ax_{n} - w_{n}) \Vert ^{2} \nonumber \\ {}&= \Vert x_{n} {-} x^{*} \Vert ^{2} {-} 2 \gamma _{n} \langle Ax_{n} {-} w_{n}, Ax_{n} - Ax^{*} \rangle + \gamma _{n}^{2} \Vert A^{*}(Ax_{n} - w_{n}) \Vert ^{2} \nonumber \\ {}&\le \Vert x_{n} - x^{*} \Vert ^{2} - (1 - \kappa ^{\prime }) \gamma _{n} \Vert Ax_{n} - w_{n} \Vert ^{2} + \gamma _{n}^{2} \Vert A^{*}(Ax_{n} - w_{n}) \Vert ^{2}\nonumber \\&= \Vert x_{n} - x^{*} \Vert ^{2} - (1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n} \frac{\Vert Ax_{n} - w_{n}\Vert ^{4}}{\Vert A^{*}(Ax_{n} - w_{n})\Vert ^{2}}\end{aligned}$$
(3.7)
$$\begin{aligned}&\le \Vert x_{n} - x^{*} \Vert ^{2} - \frac{(1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n}}{\Vert A\Vert ^{2}} \Vert Ax_{n} - w_{n}\Vert ^{2}. \end{aligned}$$
(3.8)

Clearly, (3.8) still holds when \(Ax_{n} = w_{n}\). By the demicontractivity of S with coefficient \(\kappa \) and by using (2.1) and (3.8), we have

$$\begin{aligned} \Vert u_{n} - x^{*}\Vert ^{2}&= \Vert (1 - \delta _{n})(y_n - x^{*}) + \delta _{n}(z_{n} - x^{*})\Vert ^{2} \nonumber \\&= (1 - \delta _{n}) \Vert y_n - x^{*}\Vert ^{2} + \delta _{n} \Vert z_{n} - x^{*}\Vert ^{2} - \delta _{n}(1 - \delta _{n})\Vert y_n - z_{n}\Vert ^{2} \nonumber \\&= (1 - \delta _{n}) \Vert y_n - x^{*}\Vert ^{2} + \delta _{n} d(z_{n}, Sx^{*})^{2} - \delta _{n}(1 - \delta _{n})\Vert y_n - z_{n}\Vert ^{2} \nonumber \\&\le (1 - \delta _{n}) \Vert y_n - x^{*}\Vert ^{2} + \delta _{n} H(Sy_{n}, Sx^{*})^{2} - \delta _{n}(1 - \delta _{n})\Vert y_n - z_{n}\Vert ^{2} \nonumber \\&\le (1 - \delta _{n}) \Vert y_n - x^{*}\Vert ^{2} + \delta _{n}\left( \Vert y_{n} - x^{*}\Vert ^{2} + \kappa ~d(y_{n}, Sy_{n})^{2} \right) \nonumber \\&\quad - \delta _{n}(1 - \delta _{n})\Vert y_n - z_{n}\Vert ^{2} \nonumber \\&\le (1 - \delta _{n}) \Vert y_n - x^{*}\Vert ^{2} + \delta _{n}\left( \Vert y_{n} - x^{*}\Vert ^{2} + \kappa \Vert y_{n} - z_{n} \Vert ^{2} \right) \nonumber \\&\quad - \delta _{n}(1 - \delta _{n})\Vert y_n - z_{n}\Vert ^{2} \nonumber \\&= \Vert y_n - x^{*}\Vert ^{2} - (1 - \delta _{n} - \kappa )\delta _{n}\Vert y_n - z_{n}\Vert ^{2} \end{aligned}$$
(3.9)
$$\begin{aligned}&\le \Vert x_{n} - x^{*} \Vert ^{2} - \frac{(1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n}}{\Vert A\Vert ^{2}} \Vert Ax_{n} - w_{n}\Vert ^{2}\nonumber \\&\quad - (1 - \delta _{n} - \kappa )\delta _{n}\Vert y_n - z_{n}\Vert ^{2}. \end{aligned}$$
(3.10)

It follows from (Ci) and (Cii) that \(\Vert u_{n} - x^{*}\Vert \le \Vert x_{n} - x^{*} \Vert \). Thus,

$$\begin{aligned} \Vert x_{n+1} - x^{*} \Vert&= \Vert \alpha _{n}(Fx_{n} - x^{*}) + (1 - \alpha _{n})(u_{n} - x^{*}) \Vert \\&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert + (1-\alpha _{n})\Vert u_{n} - x^{*}\Vert \\&\le \alpha _{n} \Vert Fx_{n} - Fx^{*} \Vert + \alpha _{n} \Vert Fx^{*} - x^{*} \Vert + (1-\alpha _{n})\Vert u_{n} - x^{*}\Vert \\&\le \alpha _{n} \mu \Vert x_{n} - x^{*} \Vert + \alpha _{n} \Vert Fx^{*} - x^{*} \Vert + (1-\alpha _{n})\Vert x_{n} - x^{*}\Vert \\&= (1 - \alpha _{n}(1-\mu )) \Vert x_{n} - x^{*} \Vert + \alpha _{n}(1-\mu ) \frac{\Vert Fx^{*} - x^{*} \Vert }{1-\mu } \\&\le \max \left\{ \Vert x_{n} - x^{*} \Vert , \frac{\Vert Fx^{*} - x^{*}\Vert }{1-\mu } \right\} . \end{aligned}$$

By mathematical induction, we obtain

$$\begin{aligned} \Vert x_{n} - x^{*} \Vert \le \max \left\{ \Vert x_{1} - x^{*} \Vert , \frac{\Vert Fx^{*} - x^{*} \Vert }{1-\mu } \right\} ,~~\forall n \in {\mathbb {N}}. \end{aligned}$$

This means that \(\{x_{n}\}\) is bounded. From (2.1) and (3.10), we have

$$\begin{aligned} \Vert x_{n+1} -x^{*} \Vert ^{2}&= \Vert \alpha _{n}(Fx_{n} - x^{*}) + (1 - \alpha _{n})(u_{n} - x^{*}) \Vert ^{2} \\&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + (1-\alpha _{n})\Vert u_{n} - x^{*}\Vert ^{2} \\&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + \Vert x_{n} - x^{*} \Vert ^{2} - \frac{(1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n}}{\Vert A\Vert ^{2}} \Vert Ax_{n} - w_{n}\Vert ^{2} \\&\quad - (1 - \delta _{n} - \kappa )\delta _{n}\Vert y_n - z_{n}\Vert ^{2}. \end{aligned}$$

The above inequality leads to the following two inequalities:

$$\begin{aligned} \frac{(1 {-} \kappa ^{\prime } {-} \lambda _{n})\lambda _{n}}{\Vert A\Vert ^{2}} \Vert Ax_{n} - w_{n}\Vert ^{2} \le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + \Vert x_{n} - x^{*} \Vert ^{2} - \Vert x_{n+1} - x^{*} \Vert ^{2}\nonumber \\ \end{aligned}$$
(3.11)

and

$$\begin{aligned} (1 {-} \delta _{n} {-} \kappa )\delta _{n}\Vert y_n - z_{n}\Vert ^{2} \le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + \Vert x_{n} - x^{*} \Vert ^{2} - \Vert x_{n+1} - x^{*} \Vert ^{2}.\nonumber \\ \end{aligned}$$
(3.12)

We divide the rest of the proof into two cases.

Case I. Suppose that there exists \(n_{0} \in {\mathbb {N}}\) such that \(\{ \Vert x_{n} - x^{*} \Vert \}_{n \ge n_{0}}\) is either nonincreasing or nondecreasing. Then, \(\{ \Vert x_{n} - x^{*} \Vert \}\) is convergent because it is bounded. This implies that \(\Vert x_{n} - x^{*}\Vert ^{2} - \Vert x_{n+1} - x^{*} \Vert ^{2} \rightarrow 0\) as \(n \rightarrow \infty \). Letting \(n \rightarrow \infty \) in (3.11) and (3.12), and using conditions (Ci)–(Ciii), yields

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert Ax_{n} - w_{n} \Vert = 0 \end{aligned}$$
(3.13)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert y_{n} - z_{n} \Vert = 0. \end{aligned}$$
(3.14)

We show that \(\Vert y_{n} - x_{n} \Vert \rightarrow 0\) as \(n \rightarrow \infty \). If \(Ax_{n} = w_{n}\), then \(\Vert y_{n} - x_{n} \Vert = 0\). Thus, we assume that \(Ax_{n} \ne w_{n}\). From (3.9), we get \(\Vert u_{n} - x^{*}\Vert \le \Vert y_{n} - x^{*}\Vert \). By using (3.7), we have

$$\begin{aligned} \Vert x_{n+1} -x^{*} \Vert ^{2}&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + (1-\alpha _{n})\Vert u_{n} - x^{*}\Vert ^{2} \\&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + \Vert y_{n} - x^{*}\Vert ^{2} \\&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2} + \Vert x_{n} - x^{*} \Vert ^{2}\\&- (1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n} \frac{\Vert Ax_{n} - w_{n}\Vert ^{4}}{\Vert A^{*}(Ax_{n} - w_{n})\Vert ^{2}}, \end{aligned}$$

which implies that

$$\begin{aligned} (1 - \kappa ^{\prime } - \lambda _{n})\lambda _{n} \frac{\Vert Ax_{n} - w_{n}\Vert ^{4}}{\Vert A^{*}(Ax_{n} - w_{n})\Vert ^{2}}&\le \alpha _{n}\Vert Fx_{n} - x^{*} \Vert ^{2}\\&+ \Vert x_{n} - x^{*} \Vert ^{2} - \Vert x_{n+1} -x^{*} \Vert ^{2}. \end{aligned}$$

Letting \(n \rightarrow \infty \) in the above inequality yields \(\frac{\Vert Ax_{n} - w_{n}\Vert ^{4}}{\Vert A^{*}(Ax_{n} - w_{n})\Vert ^{2}} \rightarrow 0\) as \(n \rightarrow \infty \). Since \(\Vert y_{n} -x_{n}\Vert ^{2} = \lambda _{n}^{2} \frac{\Vert Ax_{n} - w_{n}\Vert ^{4}}{\Vert A^{*}(Ax_{n} - w_{n})\Vert ^{2}}\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert y_{n} - x_{n} \Vert = 0. \end{aligned}$$
(3.15)

We next show that

$$\begin{aligned} \limsup _{n \rightarrow \infty } \langle Fx^{*} - x^{*}, x_{n} - x^{*} \rangle \le 0. \end{aligned}$$

To show this, let \(\{x_{n_{k}}\}\) be a subsequence of \(\{x_{n}\}\) such that

$$\begin{aligned} \lim _{k \rightarrow \infty }\langle Fx^{*} - x^{*}, x_{n_{k}} - x^{*} \rangle = \limsup _{n \rightarrow \infty }\langle Fx^{*} - x^{*}, x_{n} - x^{*} \rangle . \end{aligned}$$

Since \(\{x_{n_{k}}\}\) is bounded, there exist a subsequence \(\{x_{n_{k_{j}}}\}\) of \(\{x_{n_{k}}\}\) and \(u \in {\mathcal {H}}\) such that \(x_{n_{k_{j}}} \rightharpoonup u\). Without loss of generality, we assume that \(x_{n_{k}} \rightharpoonup u\). Then, \(\langle w, Ax_{n_{k}} - Au\rangle = \langle A^{*}w, x_{n_{k}} - u\rangle \rightarrow 0\) as \(k \rightarrow \infty \) for all \(w \in {\mathcal {K}}\), that is, \(Ax_{n_{k}} \rightharpoonup Au\). Since \(I - T\) is demiclosed at 0, (3.13) gives \(Au \in \mathop {\mathrm {Fix}}(T)\). Since \(x_{n_{k}} \rightharpoonup u\), it follows from (3.15) that \(y_{n_{k}} \rightharpoonup u\). Since \(I - S\) is demiclosed at 0, (3.14) gives \(u \in \mathop {\mathrm {Fix}}(S)\), and hence \(u \in \Gamma \). Since \(x^{*}\) solves the variational inequality (3.6), we get

$$\begin{aligned} \limsup _{n \rightarrow \infty } \langle Fx^{*} {-} x^{*}, x_{n} {-} x^{*} \rangle {=} \lim _{k \rightarrow \infty }\langle Fx^{*} {-} x^{*}, x_{n_{k}} {-} x^{*} \rangle = \langle Fx^{*} - x^{*}, u - x^{*} \rangle \le 0. \end{aligned}$$

By using (2.2), we have

$$\begin{aligned} \Vert x_{n+1} - x^{*} \Vert ^{2}&= \Vert (1 - \alpha _{n})(u_n - x^{*} ) + \alpha _{n}(Fx_n - x^{*} ) \Vert ^{2} \\&\le (1 - \alpha _{n})^{2} \Vert u_n - x^{*} \Vert ^{2} + 2\alpha _{n} \langle Fx_{n} - x^{*}, x_{n+1} - x^{*} \rangle \\&= (1 - \alpha _{n})^{2} \Vert u_n - x^{*} \Vert ^{2} + 2\alpha _{n} \langle Fx_{n} - Fx^{*}, x_{n+1} - x^{*} \rangle \\&\quad + 2\alpha _{n} \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle \\&\le (1 - \alpha _{n})^{2} \Vert x_n - x^{*} \Vert ^{2} + 2\alpha _{n}\mu \Vert x_{n} - x^{*} \Vert \Vert x_{n+1} - x^{*} \Vert \\&\quad + 2\alpha _{n} \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle \\&\le (1 - \alpha _{n})^{2} \Vert x_n - x^{*} \Vert ^{2} + \alpha _{n}\mu \left( \Vert x_{n} - x^{*} \Vert ^{2} + \Vert x_{n+1} - x^{*} \Vert ^{2}\right) \\&\quad + 2\alpha _{n} \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle , \end{aligned}$$

which implies that

$$\begin{aligned} \Vert x_{n+1} - x^{*} \Vert ^{2}&\le \frac{(1 - \alpha _{n})^{2} + \alpha _{n}\mu }{1 - \alpha _{n}\mu } \Vert x_n - x^{*} \Vert ^{2} + \frac{2\alpha _{n}}{1 - \alpha _{n}\mu } \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle \nonumber \\&= \left( 1 - \frac{(1-\mu )\alpha _{n}}{1 - \alpha _{n}\mu }\right) \Vert x_n - x^{*} \Vert ^{2} + \frac{(\alpha _{n} - (1-\mu ))\alpha _{n}}{1 - \alpha _{n}\mu }\Vert x_n - x^{*} \Vert ^{2} \nonumber \\&\quad + \frac{2\alpha _{n}}{1 - \alpha _{n}\mu } \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle \nonumber \\&\le \left( 1 - \frac{(1-\mu )\alpha _{n}}{1 - \alpha _{n}\mu }\right) \Vert x_n - x^{*} \Vert ^{2} \nonumber \\&\quad +\frac{(1-\mu )\alpha _{n}}{1 - \alpha _{n}\mu } \left\{ \left( \frac{\alpha _{n}}{1 - \mu } - 1 \right) {\mathcal {M}} + \frac{2}{1 - \mu } \langle Fx^{*} {-} x^{*}, x_{n+1} {-} x^{*} \rangle \right\} \nonumber \\&= (1-\beta _{n})\Vert x_n - x^{*} \Vert ^{2} + \beta _{n}\sigma _{n}, \end{aligned}$$
(3.16)

where \({\mathcal {M}} = \sup \{ \Vert x_n - x^{*} \Vert ^{2} : n \in {\mathbb {N}} \}\), \(\beta _{n} = \frac{(1-\mu )\alpha _{n}}{1 - \alpha _{n}\mu }\), and \(\sigma _{n} = \left( \frac{\alpha _{n}}{1 - \mu } - 1 \right) {\mathcal {M}} + \frac{2}{1 - \mu } \langle Fx^{*} - x^{*}, x_{n+1} - x^{*} \rangle \). Obviously, \(\beta _{n} \in [0, 1]\), \(\sum _{n=1}^{\infty } \beta _{n} = \infty \) and \(\mathop {\limsup }\limits _{n \rightarrow \infty }\sigma _{n} \le 0\). In view of (3.16), we can conclude by using Lemma 2.7 that \(x_{n} \rightarrow x^{*}\) as \(n \rightarrow \infty \).

Case II. Assume that \(\{ \Vert x_{n} - x^{*} \Vert \}\) is not a monotone sequence. Then, there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \( \Vert x_{n_{i}} - x^{*} \Vert < \Vert x_{n_{i}+1} - x^{*} \Vert \) for all \(i \in {\mathbb {N}}\). Let \(\{\tau (n)\}\) be a positive integer sequence defined by

$$\begin{aligned} \tau (n) := \max \left\{ m \le n : \Vert x_{m} - x^{*} \Vert < \Vert x_{m+1} - x^{*} \Vert \right\} \end{aligned}$$

for all \(n \ge n_{0}\) (for some \(n_{0}\) large enough). By Lemma 2.8, \(\{\tau (n)\}\) is a nondecreasing sequence such that \(\tau (n) \rightarrow \infty \) as \(n \rightarrow \infty \) and

$$\begin{aligned} \Vert x_{\tau (n)} - x^{*} \Vert ^{2} - \Vert x_{\tau (n)+1} -x^{*} \Vert ^{2} \le 0 \end{aligned}$$
(3.17)

for all \(n \ge n_{0}\). By (3.11) and (3.12), we get

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert Ax_{\tau (n)} - w_{\tau (n)} \Vert = 0 \end{aligned}$$
(3.18)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert y_{\tau (n)} - z_{\tau (n)} \Vert = 0. \end{aligned}$$
(3.19)

From (3.18), (3.19) and by the same proof as in Case I, we obtain

$$\begin{aligned} \limsup _{n \rightarrow \infty } \langle Fx^{*} - x^{*}, x_{\tau (n)} - x^{*} \rangle \le 0. \end{aligned}$$

By the same computation as in Case I, we also get

$$\begin{aligned} \Vert x_{\tau (n)+1} - x^{*} \Vert ^{2}&\le \left( 1-\beta _{\tau (n)}\right) \Vert x_{\tau (n)} - x^{*} \Vert ^{2} + \beta _{\tau (n)}\sigma _{\tau (n)}, \end{aligned}$$
(3.20)

where \(\beta _{\tau (n)} = \frac{(1-\mu )\alpha _{\tau (n)}}{1 - \alpha _{\tau (n)}\mu }\), \(\sigma _{\tau (n)} = \left( \frac{\alpha _{\tau (n)}}{1 - \mu } - 1 \right) \tilde{{\mathcal {M}}}+ \frac{2}{1 - \mu } \langle Fx^{*} - x^{*}, x_{\tau (n)+1} - x^{*} \rangle \) and \(\tilde{{\mathcal {M}}}= \sup \{ \Vert x_{\tau (n)} - x^{*} \Vert ^{2} : n \in {\mathbb {N}} \}\). Clearly, \(\mathop {\limsup }\limits _{n \rightarrow \infty }\sigma _{\tau (n)} \le 0\). By looking at (3.20) and using (3.17), we have \(\Vert x_{\tau (n)} - x^{*} \Vert ^{2} \le \sigma _{\tau (n)}\). This implies that \(\Vert x_{\tau (n)} - x^{*} \Vert \rightarrow 0\) as \(n \rightarrow \infty \). Thus, it follows from Lemma 2.8 and (3.20) that

$$\begin{aligned} 0 \le \Vert x_{n} - x^{*} \Vert \le \Vert x_{\tau (n)+1} - x^{*} \Vert \rightarrow 0 \end{aligned}$$

as \(n \rightarrow \infty \). Therefore, \(\{x_{n}\}\) converges strongly to \(x^{*} \in \Gamma \). The proof is complete. \(\square \)

Remark 3.2

It is worth mentioning the inspiration for designing Algorithm 1 as follows:

(i)

    In Step I, the choice of \(\gamma _{n}\) defined by (3.2) was inspired by the technique of choosing the stepsize in [30, Algorithm 3.1] for the SFP. Using the concept of the Landweber operator [2, 10, 28], we then defined Equation (3.3).

(ii)

    Step II was motivated by the so-called viscosity approximation method [32, 45] for finding a fixed point of a mapping \(U : {\mathcal {H}} \rightarrow {\mathcal {H}}\) (if such a point exists) as follows:

    $$\begin{aligned} x_{n+1}=\alpha _{n}f(x_{n})+ (1-\alpha _{n})Ux_{n},~~ n \ge 1, \end{aligned}$$
    (3.21)

    where \(f : {\mathcal {H}} \rightarrow {\mathcal {H}}\) is a contraction and \(\{\alpha _{n}\}\) is a sequence in (0, 1). It is known that if U is, for example, nonexpansive (see [45, Theorem 3.2]) or strongly quasi-nonexpansive and satisfies the demiclosedness principle (see [1, Corollary 3.5] and [42, Theorem 3.1]), then \(\{x_{n}\}\) defined by (3.21), with some mild control conditions on \(\{\alpha _{n}\}\), converges strongly to a fixed point of U. However, for larger classes of mappings, (3.21) may fail to converge strongly. Maingé [34] employed a relaxation of a quasi-nonexpansive mapping \(S : {\mathcal {H}} \rightarrow {\mathcal {H}}\), i.e., \(U := (1-\omega )I + \omega S\), where \(\omega \in (0, 1)\), to obtain strong convergence of (3.21) (see also [42, Corollary 3.2] for the case of demicontractive mappings). The reader can study the properties of relaxations of mappings in [8, 11]. Thus, Equations (3.4) and (3.5) are conceptualized from [1, 8, 11, 32, 34, 42, 45]; a small sketch of the relaxed viscosity iteration is given below.
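
The sketch below illustrates the viscosity iteration (3.21) applied to the relaxation \(U = (1-\omega )I + \omega S\); the mapping S, the contraction f, the relaxation parameter, and the sequence \(\alpha _{n}\) are illustrative assumptions, with \(\omega \) chosen smaller than \(1 - \kappa \) for the demicontractive example used here.

```python
import numpy as np

def relaxed_viscosity(S, f, x1, omega, n_iter=200):
    # Viscosity iteration (3.21) with the relaxed operator U = (1 - omega) I + omega S:
    #   x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) U x_n,  with alpha_n = 1/(n + 1).
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)
        Ux = (1 - omega) * x + omega * S(x)
        x = alpha * f(x) + (1 - alpha) * Ux
    return x

# Illustrative data: S(x) = -3x is demicontractive with kappa = 1/2 and Fix(S) = {0},
# f(x) = x/2 is a 1/2-contraction, and omega = 0.2 < 1 - kappa.
print(relaxed_viscosity(lambda x: -3.0 * x, lambda x: 0.5 * x, np.array([4.0, -2.0]), omega=0.2))
```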

Remark 3.3

We have some observations on Theorem 3.1.

(i)

    The stepsize \(\gamma _{n}\) defined by (3.2) does not depend on \(\Vert A\Vert \); however, it depends on \(x_{n}\).

(ii)

    If \(F \equiv x_{0}\) is a constant mapping, then Algorithm 1 becomes a Halpern-type algorithm. In particular, if \(x_{0} = 0\), then \(\{x_{n}\}\) converges to \(x^{*}\), where \(x^{*}\) is the minimum-norm solution in \(\Gamma \).

(iii)

    The assumption “\(Su = \{u\}\), \(T(Au) = \{Au\}\) for all \(u \in \Gamma \)” in Theorem 3.1 is weaker than the statement “both S and T satisfy the endpoint condition”.

(iv)

    Taking \(\kappa = 0 = \kappa ^{\prime }\) in Theorem 3.1, we get a strong convergence result for quasi-nonexpansive multi-valued mappings.

4 Corollaries

A subset D of \({\mathcal {H}}\) is called proximal if for each \(x \in {\mathcal {H}}\), there exists \(y \in D\) such that \(\Vert x - y \Vert = d(x, D)\). Denote by \(PB({\mathcal {H}})\) the family of all nonempty proximal bounded subsets of \({\mathcal {H}}\). For a multi-valued mapping \(S : {\mathcal {H}} \rightarrow PB({\mathcal {H}})\), the best approximation operator of S is defined by \(B_{S}(x) := \{ y \in Sx : \Vert x - y\Vert = d(x, Sx)\}\).

Using the properties of the best approximation operator, we have the following corollary.

Corollary 4.1

Let \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) be a bounded linear operator. Let \(S : {\mathcal {H}} \rightarrow PB({\mathcal {H}})\) and \(T : {\mathcal {K}} \rightarrow PB({\mathcal {K}})\) be two multi-valued mappings such that \(B_{S}\) and \(B_{T}\) are demicontractive multi-valued mappings with coefficients \(\kappa \) and \(\kappa ^{\prime }\), respectively. Suppose that \(I - B_{S}\) and \(I - B_{T}\) are demiclosed at 0. Assume that \(\Gamma \ne \emptyset \) and let \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a \(\mu \)-contraction with respect to \(\Gamma \). Let \(\{x_n\}\) be a sequence generated iteratively by \(x_{1} \in {\mathcal {H}}\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n = x_n - \gamma _{n} A^{*}(Ax_{n} - w_{n}),\\ u_n = (1 - \delta _{n})y_n + \delta _{n}z_{n},\\ x_{n+1} = \alpha _{n}F(x_n) + (1 - \alpha _{n})u_n, ~~n \ge 1, \end{array}\right. } \end{aligned}$$
(4.1)

where \(w_{n} \in B_{T}(Ax_{n})\), \(z_{n} \in B_{S}(y_{n})\), the stepsize \(\gamma _{n}\) is defined by (3.2) and the real sequences \(\{\lambda _{n}\}\), \(\{\delta _{n}\}\) and \(\{\alpha _{n}\}\) satisfy (Ci)–(Ciii) in Theorem 3.1. Then, \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Gamma \), where \(x^{*} = P_{\Gamma }F(x^{*})\).

Proof

One can show that \(B_{S}\) and \(B_{T}\) satisfy the endpoint condition, and we also have \(\mathop {\mathrm {Fix}}(S) = \mathop {\mathrm {Fix}}(B_{S})\) and \(\mathop {\mathrm {Fix}}(T) = \mathop {\mathrm {Fix}}(B_{T})\). Hence, the result follows directly from Theorem 3.1. \(\square \)

Taking \({\mathcal {H}} = {\mathcal {K}}\) and \(A = I\) in Theorem 3.1, we obtain a strong convergence result for finding a common fixed point of two demicontractive multi-valued mappings as follows.

Corollary 4.2

Let \(S, T : {\mathcal {H}} \rightarrow CB({\mathcal {H}})\) be demicontractive multi-valued mappings with coefficients \(\kappa \) and \(\kappa ^{\prime }\), respectively, such that \(I - S\) and \(I - T\) are demiclosed at 0. Assume that \(\Omega := \mathop {\mathrm {Fix}}(S) \cap \mathop {\mathrm {Fix}}(T) \ne \emptyset \) and \(Su = Tu = \{u\}\) for all \(u \in \Omega \). Let \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a \(\mu \)-contraction with respect to \(\Omega \). Let \(\{x_n\}\) be a sequence generated iteratively by \(x_{1} \in {\mathcal {H}}\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n = (1 - \lambda _{n})x_{n} + \lambda _{n}w_{n},\\ u_n = (1 - \delta _{n})y_n + \delta _{n}z_{n},\\ x_{n+1} = \alpha _{n}F(x_n) + (1 - \alpha _{n})u_n, ~~n \ge 1, \end{array}\right. } \end{aligned}$$
(4.2)

where \(w_{n} \in Tx_{n}\), \(z_{n} \in Sy_{n}\) and the real sequences \(\{\lambda _{n}\}\), \(\{\delta _{n}\}\) and \(\{\alpha _{n}\}\) satisfy (Ci)–(Ciii) in Theorem 3.1. Then, \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }F(x^{*})\).

A strong convergence result for solving the SFPP (1.4) is obtained when S and T in Theorem 3.1 are single-valued mappings as shown below.

Corollary 4.3

Let \(A : {\mathcal {H}} \rightarrow {\mathcal {K}}\) be a bounded linear operator. Let \(S : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a \(\kappa \)-demicontractive mapping and \(T : {\mathcal {K}} \rightarrow {\mathcal {K}}\) a \(\kappa ^{\prime }\)-demicontractive mapping such that both \(I-S\) and \(I-T\) are demiclosed at 0. Assume that \(\Omega := \mathop {\mathrm {Fix}}(S) \cap A^{-1}(\mathop {\mathrm {Fix}}(T)) \ne \emptyset \). Let \(F : {\mathcal {H}} \rightarrow {\mathcal {H}}\) be a \(\mu \)-contraction with respect to \(\Omega \). Suppose that \(\{x_n\}\) is a sequence generated iteratively by \(x_{1} \in {\mathcal {H}}\) and

$$\begin{aligned} y_{n}&= x_n - \gamma _{n} A^{*}(I-T)Ax_n,\nonumber \\ x_{n+1}&= \alpha _{n}F(x_{n}) + (1 - \alpha _{n})\left( (1 - \delta _{n})y_{n} + \delta _{n}Sy_{n} \right) , ~~n \ge 1, \end{aligned}$$
(4.3)

where the stepsize \(\gamma _{n}\) is selected as follows:

$$\begin{aligned} \gamma _{n} = {\left\{ \begin{array}{ll} \frac{\lambda _{n}\Vert (I - T)Ax_{n}\Vert ^{2}}{\left\| A^{*}(I - T)Ax_{n}\right\| ^{2}}, ~ &{}\text {if}~Ax_{n} \notin \mathop {\mathrm {Fix}}(T),\\ 0, &{}\text {otherwise}, \end{array}\right. } \end{aligned}$$
(4.4)

and the real sequences \(\{\lambda _{n}\}\), \(\{\delta _{n}\}\) and \(\{\alpha _{n}\}\) satisfy (Ci)–(Ciii) in Theorem 3.1. Then, \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }F(x^{*})\).

Remark 4.4

Some results in [5, 24] are consequences of Corollary 4.3 as follows:

(i)

    Taking \(F \equiv x_{0}\), \(\lambda _{n} := \frac{1 - \kappa ^{\prime }}{2}\) and \(\delta _{n} := \delta \in (0, 1 - \kappa )\) in Corollary 4.3, we obtain a result in [5, Theorem 4.1].

(ii)

    Taking \(\lambda _{n} := \lambda \in (0, 1 - \kappa ^{\prime })\) and \(\delta _{n} := \delta \in (0, 1 - \kappa )\) in Corollary 4.3, we obtain a result in [24, Theorem 4.2].

5 Numerical Example

In this section, we give a numerical example of Theorem 3.1 to show the convergence behavior of Algorithm 1.

Table 1 The numerical experiment of Algorithm 1
Table 2 A comparative result of Algorithm 1 by choosing different contractions

Example 5.1

Let \({\mathcal {H}} = {\mathbb {R}}^{3}\) and \({\mathcal {K}} = {\mathbb {R}}\) with the usual norms. Define \(S : {\mathbb {R}}^{3} \rightarrow CB({\mathbb {R}}^{3})\) by \(S(a, b, c) = \left\{ (-3a, -3b, c)\right\} \) and \(T : {\mathbb {R}} \rightarrow CB({\mathbb {R}})\) by

$$\begin{aligned} Tx = {\left\{ \begin{array}{ll} \left[ -\frac{3}{2}x, -2x \right] , \quad &{}\text {if}~x \le 0,\\ \left[ -2x, -\frac{3}{2}x \right] , \quad &{}\text {if}~x > 0.\\ \end{array}\right. } \end{aligned}$$

One can show that S is demicontractive with constant \(\kappa = \frac{1}{2}\). By Example 2.3 (the case \(i = 1\)), T is demicontractive with constant \(\kappa ^{\prime } = \frac{12}{25}\). Let \(A: {\mathbb {R}}^{3} \rightarrow {\mathbb {R}}\) be defined by \(A(a, b, c) = -2a + 3b -5c\). We consider Algorithm 1 by setting

$$\begin{aligned} w_{n} = - \frac{3}{2}Ax_{n}, ~~z_{n} = Sy_{n}, ~~\lambda _{n} = \sqrt{\frac{n+2}{4n+10}},~~ \delta _{n} = \frac{n}{5n +1}~~\text {and}~~\alpha _{n} = \frac{1}{n+3}. \end{aligned}$$

The initial point is chosen as \(x_{1} = (14, -7, 8)\), and the stopping criterion is \(E_{n} := \Vert x_{n} - x_{n-1} \Vert < 10^{-7}\), where \(x_{n} = (a_{n}, b_{n},c_{n})\). The numerical results of Algorithm 1 with \(Fx = \frac{1}{2}x\) are reported in Table 1. In Table 2, we report the number of iterations and the approximate solution obtained with different choices of the contraction F.
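
The following Python sketch is one way to carry out the experiment of Example 5.1, following the same reconstruction of steps (3.2)–(3.5) sketched after Algorithm 1 in Sect. 3; the selections \(w_{n} = -\frac{3}{2}Ax_{n} \in T(Ax_{n})\) and \(z_{n} = Sy_{n}\) are those stated above, while the loop structure itself is an assumption about Algorithm 1 and the printed values are not claimed to reproduce Tables 1 and 2 exactly.

```python
import numpy as np

A = np.array([[-2.0, 3.0, -5.0]])                            # A(a, b, c) = -2a + 3b - 5c
S = lambda v: np.array([-3.0 * v[0], -3.0 * v[1], v[2]])     # S(a, b, c) = {(-3a, -3b, c)}
F = lambda v: 0.5 * v                                        # contraction F x = x / 2

x = np.array([14.0, -7.0, 8.0])                              # x_1
n, E = 1, np.inf
while E >= 1e-7:
    lam = np.sqrt((n + 2) / (4 * n + 10))
    delta = n / (5 * n + 1)
    alpha = 1 / (n + 3)
    w = -1.5 * (A @ x)                                       # w_n = -(3/2) A x_n in T(A x_n)
    r = A @ x - w
    g = A.T @ r                                              # A^*(A x_n - w_n)
    gamma = lam * (r @ r) / (g @ g) if np.any(r) else 0.0    # stepsize (3.2)
    y = x - gamma * g                                        # (3.3)
    u = (1 - delta) * y + delta * S(y)                       # (3.4) with z_n = S y_n
    x_new = alpha * F(x) + (1 - alpha) * u                   # (3.5)
    E = np.linalg.norm(x_new - x)                            # ||x_{n+1} - x_n||
    x, n = x_new, n + 1
print(n, x)

# Consistently with Remark 5.2(i), the iterates approach the solution 0 in Gamma.
```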

Remark 5.2

By testing the convergence behavior of Algorithm 1 in Example 5.1, we note that

(i)

    The sequence \(\{x_{n}\}\) converges to a solution, i.e., \(x_{n} \rightarrow 0 \in \Gamma \);

(ii)

    Choosing a non-constant contraction F makes Algorithm 1 more efficient than using a constant one, in terms of both the number of iterations and the quality of the approximate solution. So, our algorithm is more general and desirable than the Halpern-type algorithm.

6 Conclusion

In this work, we study the split fixed point problem (SFPP) for multi-valued mappings, with emphasis on the class of demicontractive mappings in Hilbert spaces. To solve this problem, we propose a viscosity-type algorithm whose stepsize does not depend on the operator norm. We then prove that the sequence generated by our proposed algorithm converges strongly to a solution of the considered SFPP under suitable assumptions and mild control conditions. Our main result generalizes and improves many results in the literature, such as those in [5, 18, 23, 24, 27, 29, 35, 36, 37, 40, 42], in terms of the class of mappings, the type of convergence, and the stepsize of the method.