1 Introduction

Throughout this work, we assume that ℋ, \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) are real Hilbert spaces with inner products \(\langle \cdot , \cdot \rangle \) and induced norms \(\| \cdot \|\), and we let \(I\) denote the identity operator on a Hilbert space. Denote by ℕ and ℝ the set of positive integers and the set of real numbers, respectively.

The split inverse problem (SIP) concerns a model of finding a point

$$ x^{*} \in \mathcal{H}_{1} ~\text{that solves}~~\text{IP}_{1} ~~~ \text{such that}~~~ Ax^{*} \in \mathcal{H}_{2} ~\text{solves}~ \text{IP}_{2}, $$
(1.1)

where IP1 and IP2 denote inverse problems formulated in \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, and \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) is a bounded linear operator. In 1994, Censor and Elfving [7] introduced the first instance of the split inverse problem, namely the split feasibility problem (SFP). Subsequently, various split problems were introduced and studied, such as the split common fixed point problem (SCFPP) [8], the split common null point problem (SCNPP) [5], the split variational inequality problem (SVIP) [11], the split equilibrium problem (SEP) [23], the split minimization problem (SMP), etc. Recently, the split inverse problem has been widely studied by many authors (see [4, 5, 7,8,9, 11, 15, 17, 23]) because its model applies to various problems with significant real-world applications. For example, many problems in signal and image processing can be formulated as the linear system:

$$ y = Ax + \varepsilon , $$
(1.2)

where \(y \in \mathbb{R}^{M}\) is the vector of noisy observations, \(x \in \mathbb{R}^{N}\) is the vector to be recovered, \(\varepsilon \) is the noise, and \(A : \mathbb{R}^{N} \rightarrow \mathbb{R}^{M}\) is a bounded linear observation operator. It is known that (1.2) can be modeled as the constrained least-squares problem:

$$ \min _{x \in \mathbb{R}^{N}} \frac{1}{2}\|y - Ax\|_{2}^{2}~~ \text{subject to}~~\|x\|_{1} \leq \sigma , $$
(1.3)

for any nonnegative real number \(\sigma \). We can then apply the SIP model (1.1) to Problem (1.3) as follows: Find a point \(x \in \mathbb{R}^{N}\) such that \(\|x\|_{1} \leq \sigma \) and \(Ax = y\). In addition, Kotzer et al. [24] used the formulation of the split feasibility problem to model the design of a nonlinear synthetic discriminant filter for optical pattern recognition. In [13], the convex feasibility problem in image recovery was discussed. Censor et al. [9, 10] studied the inverse problem in intensity-modulated radiation therapy. López et al. [25] introduced a self-adaptive algorithm for the split feasibility problem and applied it to signal processing.
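As a concrete illustration of this split formulation of (1.3), the following sketch (our own toy instance; all data are illustrative assumptions, not from the references) builds a sparse signal and checks that it satisfies both constraints of the split model:

```python
import numpy as np

# Toy instance of the SIP model applied to (1.3): find x with ||x||_1 <= sigma
# such that Ax = y.  All data below are illustrative assumptions.
rng = np.random.default_rng(0)
M, N = 10, 20
A = rng.standard_normal((M, N))          # bounded linear observation operator
x_true = np.zeros(N)
x_true[:3] = [1.0, -0.5, 0.25]           # sparse signal to be recovered
y = A @ x_true                           # observations, here with epsilon = 0
sigma = np.abs(x_true).sum()             # l1 budget

# x_true solves IP_1 (||x||_1 <= sigma) and A x_true solves IP_2 (Ax = y)
assert np.abs(x_true).sum() <= sigma + 1e-12
assert np.allclose(A @ x_true, y)
```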

In this paper, we focus our attention on the split null point problem (SNPP) which was introduced by Byrne et al. in 2012 (see [5]). This split problem is the problem of finding a point of the null point set of a multivalued mapping in a Hilbert space such that its image under a given bounded linear operator belongs to the null point set of another multivalued mapping in the image space.

Given two multivalued mappings \(B_{1} : \mathcal{H}_{1} \rightarrow 2^{\mathcal{H}_{1}}\) and \(B_{2} : \mathcal{H}_{2} \rightarrow 2^{\mathcal{H}_{2}}\) and a bounded linear operator \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\), the SNPP is formulated as finding a point \(x^{*} \in \mathcal{H}_{1}\) such that

$$ x^{*} \in B_{1}^{-1}0 ~~~\text{and} ~~ Ax^{*} \in B_{2}^{-1}0, $$
(1.4)

where \(B_{1}^{-1}0 := \{ x \in \mathcal{H}_{1} : 0 \in B_{1}x \}\) and \(B_{2}^{-1}0\) are the null point sets of \(B_{1}\) and \(B_{2}\), respectively. To study the SNPP (1.4), one often considers the case where \(B_{1}\) and \(B_{2}\) are maximal monotone mappings (see [1, 2, 5, 31, 32]). The subdifferential of a lower semicontinuous and convex function is an important example of a maximal monotone mapping, and its resolvents are often used to construct algorithms for solving the minimization problem of the function. For example, in [14], Combettes and Pesquet introduced proximal splitting methods constructed from the resolvent operators of subdifferentials of functions and discussed applications in signal processing.

Byrne et al. [5] proposed two iterative algorithms for solving the SNPP (1.4) for two maximal monotone mappings \(B_{1}\) and \(B_{2}\) as follows:

$$ x_{n+1} = J_{\lambda }^{B_{1}}(x_{n} - \gamma A^{\ast }(I - J_{\lambda }^{B_{2}})Ax_{n}),~~~n \in \mathbb{N}, $$
(1.5)

and

$$ \textstyle\begin{cases} u \in \mathcal{H}_{1}, \\ x_{n+1} = \alpha _{n}u + (1 - \alpha _{n})J_{\lambda }^{B_{1}}(x_{n} - \gamma A^{\ast }(I - J_{\lambda }^{B_{2}})Ax_{n}),~~~n \in \mathbb{N}, \end{cases} $$
(1.6)

where \(J_{\lambda }^{B_{1}}\) and \(J_{\lambda }^{B_{2}}\) are the resolvents of \(B_{1}\) and \(B_{2}\), respectively, and \(A^{*}\) is the adjoint operator of \(A\). They also proved a weak convergence result for Algorithm (1.5) and a strong convergence result for Algorithm (1.6) when \(\gamma \in \left ( 0, 2/\|A\|^{2}\right )\), \(\alpha _{n} \in (0, 1)\), \(\lim _{n \rightarrow \infty } \alpha _{n} = 0\) and \(\sum _{n=1}^{\infty } \alpha _{n} = \infty \). We observe that the stepsize \(\gamma \) in the above-mentioned algorithms depends on the norm of the operator \(A\); however, computing \(\|A\|\) is not an easy task in general practice.
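To make the role of the stepsize restriction \(\gamma \in (0, 2/\|A\|^{2})\) concrete, the following minimal sketch (our own illustration, not from [5]) runs iteration (1.5) in the special case where \(B_{1}\) and \(B_{2}\) are normal cones of boxes, so that both resolvents reduce to metric projections (cf. Example 2.6 below):

```python
import numpy as np

# Hedged sketch of iteration (1.5) when B1 = N_C and B2 = N_Q are normal cones
# of boxes, so the resolvents reduce to metric projections.  The sets C, Q,
# the operator A and the value of gamma are illustrative choices.
def proj_box(v, lo, hi):
    return np.clip(v, lo, hi)

A = 2.0 * np.eye(2)                        # bounded linear operator, ||A||^2 = 4
gamma = 0.4                                # gamma in (0, 2/||A||^2) = (0, 0.5)
x = np.zeros(2)
for _ in range(50):
    r = A @ x - proj_box(A @ x, 2.0, 4.0)  # (I - J_lambda^{B2}) A x, J = P_Q
    x = proj_box(x - gamma * (A.T @ r), 0.0, 1.0)   # J_lambda^{B1} = P_C

# x solves the SNPP: x in C = [0,1]^2 and Ax in Q = [2,4]^2
assert np.allclose(x, [1.0, 1.0])
```

With \(C = [0,1]^{2}\), \(Q = [2,4]^{2}\) and \(A = 2I\), the unique solution of this SNPP instance is \((1,1)\), and the iteration reaches it.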

In 2015, the problem of finding a common solution of the null point problem and the fixed point problem was first considered by Takahashi et al. [34]. In 2017, Eslamian et al. [16] studied the problem of finding a common solution of the split null point problem and the fixed point problem for maximal monotone mappings and demicontractive mappings, respectively (see also [20, 21]). It is well known that the class of demicontractive mappings [18, 27] includes several common classes of mappings occurring in nonlinear analysis and optimization problems.

Inspired and motivated by these works, we study the split null point problem and the fixed point problem for multivalued mappings in Hilbert spaces. Our main objective is to construct a strongly convergent algorithm, without prior knowledge of the operator norm, for solving the considered problem for the class of maximal monotone mappings and the class of demicontractive multivalued mappings. The paper is organized as follows. Basic definitions and some useful lemmas for proving our main result are given in the preliminaries section. Section 3 contains our main result. In this section, we introduce a viscosity-type algorithm [28], selecting the stepsizes in an adaptive way similar to that of López et al. [25], for finding a common solution of the split null point problem and the fixed point problem for maximal monotone mappings and demicontractive multivalued mappings, and we prove a strong convergence result for the proposed algorithm under suitable conditions. In Sect. 4, we apply our main result to the split feasibility problem and the split minimization problem. Finally, in Sect. 5, we provide some numerical results to demonstrate the convergence behavior of our algorithm and to support our main theorem.

2 Preliminaries

We denote the weak and strong convergence of a sequence \(\{x_{n}\} \subset \mathcal{H}\) to \(x \in \mathcal{H}\) by \(x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively.

Let \(E\) be a nonempty closed convex subset of ℋ. The (metric) projection from ℋ onto \(E\), denoted by \(P_{E}\), assigns to each \(x \in \mathcal{H}\) the unique element \(P_{E}x \in E\) such that

$$ \| x - P_{E}x \| = d(x, E) := \inf \{ \| x - z \| : z \in E \}. $$

It is well known that \(u = P_{E}x\) if and only if \(\langle x - u, z - u \rangle \leq 0\) for all \(z \in E\). A mapping \(T : E \rightarrow E\) is called directed if \(F(T) := \{ x \in E : Tx = x \} \neq \emptyset \) and

$$ \|x - Tx\|^{2} \leq \langle x - Tx, x - p \rangle ~~~~ \text{for all}~x \in E, p \in F(T). $$

Let \(D\) be a nonempty subset of \(E\). A mapping \(f : E \rightarrow E\) is called a \(\beta \)-contraction with respect to \(D\), where \(\beta \in [0, 1)\) if \(\|f(x) - f(z) \| \leq \beta \| x - z\|\) for all \(x \in E, z \in D\); \(f\) is called a \(\beta \)-contraction if \(f\) is a \(\beta \)-contraction with respect to \(E\).

We now recall some notations and definitions on multivalued mappings. Let \(U : E \rightarrow 2^{E}\) be a multivalued mapping. An element \(p \in E\) is called a fixed point of \(U\) if \(p \in Up\). The set of all fixed points of \(U\) is also denoted by \(F(U)\). We say that \(U\) satisfies the endpoint condition if \(Up = \{p\}\) for all \(p \in F(U)\). We denote by CB\((E)\) the family of all nonempty closed bounded subsets of \(E\). The Pompeiu-Hausdorff metric on CB\((E)\) is defined by

$$ H(C, D) := \max \bigg\{ \sup _{x \in C} d(x, D), \sup _{z \in D} d(z, C) \bigg\} $$

for all \(C, D \in \) CB\((E)\).

Definition 2.1

Let \(E\) be a nonempty closed convex subset of ℋ. A multivalued mapping \(U : E \rightarrow \) CB\((E)\) is said to be

  1. (i)

    nonexpansive if

    $$ H(Ux, Uz) \leq \| x - z \| ~~~~\text{for all} ~ x, z \in E, $$
  2. (ii)

    quasi-nonexpansive if \(F(U) \neq \emptyset \) and

    $$ H(Ux, Up) \leq \| x - p \| ~~~~\text{for all} ~ x \in E, p \in F(U), $$
  3. (iii)

    demicontractive [12, 19] if \(F(U) \neq \emptyset \) and there exists \(\kappa \in [0,1)\) such that

    $$ H(Ux, Up)^{2} \leq \| x - p \|^{2} +\kappa d(x, Ux)^{2} ~~~~ \text{for all} ~ x \in E, p \in F(U). $$

It follows from Definition 2.1 that the class of demicontractive mappings includes the classes of quasi-nonexpansive mappings and of nonexpansive mappings with nonempty fixed point sets. We give an example of a demicontractive multivalued mapping which is not quasi-nonexpansive as follows.

Example 2.2

[22] Let \(\mathcal{H} = \mathbb{R}\). For each \(i \in \mathbb{N}\), define \(U_{i} : \mathbb{R} \rightarrow 2^{\mathbb{R}}\) by

$$ U_{i}x = \textstyle\begin{cases} [ -\frac{(2i + 1)x}{2}, -(i+1)x ], ~~ &\text{if}~x \leq 0, \\ [ -(i+1)x, -\frac{(2i + 1)x}{2} ], &\text{if}~x > 0. \\ \end{cases} $$

Then, \(U_{i}\) is demicontractive with a constant \(\kappa _{i} = \frac{4i^{2}+8i}{4i^{2}+12i + 9} \in (0, 1)\).
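The constant \(\kappa _{i}\) can be checked numerically. The sketch below (our own illustration of Example 2.2, using that \(p = 0\) is the only fixed point and \(U_{i}0 = \{0\}\)) verifies the demicontractivity inequality on a grid and also that \(U_{i}\) is not quasi-nonexpansive:

```python
import numpy as np

# Numerical check of Example 2.2: for p = 0 (the only fixed point, with
# U_i 0 = {0}), verify H(U_i x, U_i 0)^2 <= |x - 0|^2 + kappa_i d(x, U_i x)^2,
# and that H(U_i x, U_i 0) > |x - 0| for x != 0 (not quasi-nonexpansive).
def interval(i, x):
    a, b = -(2 * i + 1) * x / 2, -(i + 1) * x
    return (min(a, b), max(a, b))          # matches both cases of U_i x

for i in range(1, 6):
    kappa = (4 * i**2 + 8 * i) / (4 * i**2 + 12 * i + 9)
    for x in np.linspace(-3.0, 3.0, 13):
        lo, hi = interval(i, x)
        H = max(abs(lo), abs(hi))          # Hausdorff distance from U_i x to {0}
        d = max(lo - x, x - hi, 0.0)       # distance from x to [lo, hi]
        assert H**2 <= x**2 + kappa * d**2 + 1e-9
        if x != 0:
            assert H > abs(x)              # fails quasi-nonexpansiveness
```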

Lemma 2.3

Let \(E\) be a nonempty closed convex subset of ℋ. Let \(U : E \rightarrow \) CB\((E)\) be a \(\kappa \)-demicontractive multivalued mapping. Then we have

  1. (i)

    \(F(U)\) is closed;

  2. (ii)

    If \(U\) satisfies the endpoint condition, then \(F(U)\) is convex.

Proof

(i) Let \(\{p_{n}\} \subset F(U)\) be a sequence such that \(p_{n} \rightarrow p\) as \(n \rightarrow \infty \). Since \(U\) is \(\kappa \)-demicontractive, we have

$$\begin{aligned} d(p, Up) &\leq \|p -p_{n}\| + d(p_{n}, Up) \\ &\leq \|p -p_{n}\| + H(Up_{n}, Up) \\ &\leq 2 \|p -p_{n}\| + \sqrt{\kappa } d(p, Up). \end{aligned}$$

Letting \(n \rightarrow \infty \) in the above inequality, we have \(d(p, Up) \leq \sqrt{\kappa } d(p, Up)\). This implies that \(d(p, Up) = 0\), that is, \(p \in F(U)\). Hence \(F(U)\) is closed.

(ii) Suppose that \(Up = \{p\}\) for all \(p \in F(U)\). Let \(x , y \in F(U)\) and \(\alpha \in (0,1)\). Take \(z := \alpha x + (1-\alpha )y\), and let \(v \in Uz\). By the demicontractivity of \(U\), we have

$$\begin{aligned} d(z, Uz)^{2} &\leq \| z - v \|^{2} \\ &= \| \alpha (x-v) + (1-\alpha )(y - v) \|^{2} \\ &= \alpha \| x - v \|^{2} + (1-\alpha )\| y - v \|^{2} - \alpha (1- \alpha )\| x - y \|^{2} \\ &= \alpha d(v, Ux)^{2} + (1-\alpha )d(v, Uy)^{2} - \alpha (1-\alpha ) \| x - y \|^{2} \\ &\leq \alpha H(Uz, Ux)^{2} + (1-\alpha )H(Uz, Uy)^{2} - \alpha (1- \alpha )\| x - y \|^{2} \\ &\leq \alpha \| x - z \|^{2} + (1-\alpha )\| y - z \|^{2} - \alpha (1- \alpha )\| x - y \|^{2} + \kappa d(z, Uz)^{2} \\ &= \alpha (1 - \alpha )^{2}\| x - y \|^{2} + \alpha ^{2}(1-\alpha )\| x - y \|^{2} - \alpha (1-\alpha )\| x - y \|^{2} + \kappa d(z, Uz)^{2} \\ &= \alpha (1-\alpha )(1-\alpha + \alpha -1)\| x - y \|^{2} + \kappa d(z, Uz)^{2} = \kappa d(z, Uz)^{2}, \end{aligned}$$

which implies \(d(z, Uz)^{2} = 0\), i.e., \(z \in F(U)\). Therefore, \(F(U)\) is convex. □

Definition 2.4

Let \(E\) be a nonempty closed convex subset of ℋ and let \(U : E \rightarrow \) CB\((E)\) be a multivalued mapping. We say that \(I - U\) is demiclosed at zero if, for any sequence \(\{x_{n}\}\) in \(E\) converging weakly to \(p \in E\) such that \(\|x_{n} - z_{n}\| \rightarrow 0\) for some \(z_{n} \in Ux_{n}\), we have \(p \in F(U)\).

Let us recall the maximal monotone mapping.

Definition 2.5

A multivalued mapping \(B : \mathcal{H} \rightarrow 2^{\mathcal{H}}\) is called maximal monotone if \(B\) is monotone, i.e.,

$$ \langle x - z, u - w \rangle \geq 0 ~~~~ \text{for all} ~ x,z \in \text{dom}(B), u \in Bx, w \in Bz, $$

where dom\((B) := \{ x \in \mathcal{H} : Bx \neq \emptyset \}\), and the graph G\((B)\) of \(B\),

$$ \text{G}(B) := \{ (x, u) \in \mathcal{H} \times \mathcal{H} : u \in Bx \}, $$

is not properly contained in the graph of any other monotone mapping.

Let \(B : \mathcal{H} \rightarrow 2^{\mathcal{H}}\) be a maximal monotone mapping and \(\lambda > 0\). The resolvent of \(B\) with parameter \(\lambda \) is defined by

$$ J_{\lambda }^{B} := ( I + \lambda B)^{-1}. $$

It is well known [3] that \(J_{\lambda }^{B} : \mathcal{H} \rightarrow \text{dom}(B)\) is single-valued and firmly nonexpansive, i.e., for any \(x, z \in \mathcal{H}\),

$$ \| J_{\lambda }^{B}x - J_{\lambda }^{B}z \|^{2} \leq \langle J_{\lambda }^{B}x - J_{\lambda }^{B}z, x - z\rangle . $$

Moreover, \(F(J_{\lambda }^{B}) = B^{-1}0\) and \(I - J_{\lambda }^{B}\) is demiclosed at zero.

Let \(g : \mathcal{H} \rightarrow (- \infty , \infty ]\) be proper. The subdifferential \(\partial g\) of \(g\) at \(x \in \mathcal{H}\) is defined by

$$ \partial g(x) := \{ u \in \mathcal{H} : g(x) + \langle u, z - x \rangle \leq g(z) , ~\forall z \in \mathcal{H} \}. $$

The set of minimizers of \(g\) is defined by

$$ \text{Argmin}~\!g := \{ x \in \mathcal{H} : g(x) \leq g(z), ~\forall z \in \mathcal{H}\}. $$

Note that if \(g\) is a proper, lower semicontinuous and convex function, then \(\partial g\) is a maximal monotone mapping (see [29]). In this case, the resolvent \(J_{\lambda }^{\partial g}\) of \(\partial g\) is called the proximity operator [3] of \(\lambda g\), and is denoted by Prox\(_{\lambda g}\).
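As a concrete instance, for \(g = \| \cdot \|_{1}\) the proximity operator Prox\(_{\lambda g}\) is the componentwise soft-thresholding map (a standard closed form). The sketch below evaluates it and checks the resolvent relation \((x - p)/\lambda \in \partial g(p)\):

```python
import numpy as np

# Prox_{lambda g} = J_lambda^{dg} in a concrete case: for g = ||.||_1 the
# resolvent is componentwise soft thresholding (standard closed form).
def prox_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.0])
p = prox_l1(x, 1.0)
assert np.allclose(p, [2.0, 0.0, 0.0])

# Resolvent relation: p = (I + lam*dg)^{-1} x means (x - p)/lam in dg(p),
# i.e. (x - p)/lam is a subgradient of ||.||_1 at p: entries lie in [-1, 1]
# and equal sign(p_k) wherever p_k != 0.
g = (x - p) / 1.0
assert np.all(np.abs(g) <= 1.0 + 1e-12)
assert g[0] == np.sign(p[0])
```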

Example 2.6

Let \(E\) be a nonempty closed convex subset of ℋ. Define an indicator function \(i_{E} : \mathcal{H} \rightarrow (-\infty , \infty ]\) of \(E\) by

$$ i_{E}(x) := \textstyle\begin{cases} 0, ~~ &\text{if}~x \in E, \\ \infty , &\text{if}~x \notin E. \\ \end{cases} $$

One can see that

$$ \partial i_{E}(x) = N_{E}(x) := \textstyle\begin{cases} \{ u \in \mathcal{H} : \langle u, z - x \rangle \leq 0, ~\forall z \in E \}, ~~ &\text{if}~x \in E, \\ \emptyset , &\text{otherwise}. \\ \end{cases} $$

Since \(i_{E}\) is a proper, lower semicontinuous and convex function, \(\partial i_{E} = N_{E}\) is maximal monotone and \(J_{\lambda }^{\partial i_{E}} = J_{\lambda }^{N_{E}} = P_{E}\) for all \(\lambda > 0\). The mapping \(N_{E}\) is called the normal cone to \(E\).
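The identity \(J_{\lambda }^{N_{E}} = P_{E}\) of Example 2.6 can be illustrated numerically. The sketch below (with the box \(E = [0,1]^{2}\), an illustrative choice) checks the variational characterization \(\langle x - u, z - u \rangle \leq 0\) of \(u = P_{E}x\) at sampled points of \(E\):

```python
import numpy as np

# Check of Example 2.6 on the box E = [0,1]^2: the resolvent of N_E is P_E,
# and u = P_E x is characterized by <x - u, z - u> <= 0 for all z in E.
rng = np.random.default_rng(1)
E_lo, E_hi = 0.0, 1.0
x = np.array([2.0, -0.5])
u = np.clip(x, E_lo, E_hi)               # P_E x = J_lambda^{N_E} x, any lambda > 0

for _ in range(200):
    z = rng.uniform(E_lo, E_hi, size=2)  # random points of E
    assert np.dot(x - u, z - u) <= 1e-12
```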

We next give some significant facts and tools for proving our main result.

Lemma 2.7

Let \(x, z \in \mathcal{H}\) and \(\vartheta \in \mathbb{R}\). Then the following hold:

  1. (i)

    \(\| x + z \|^{2} \leq \|x\|^{2} + 2\langle z, x + z\rangle \);

  2. (ii)

    \(\| \vartheta x + (1 - \vartheta ) z \|^{2} = \vartheta \|x\|^{2} + (1 - \vartheta )\|z\|^{2} - \vartheta (1 - \vartheta )\|x - z\|^{2}\).

Lemma 2.8

([35])

Suppose that \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$ a_{n+1} \leq (1 - \mu _{n})a_{n} + \mu _{n}\sigma _{n} + \tau _{n},~~~~n \in \mathbb{N}, $$

where \(\{\mu _{n}\}, \{\sigma _{n}\}\) and \(\{\tau _{n}\}\) satisfy the following conditions:

  1. (i)

    \(\{\mu _{n}\} \subset [0, 1]\), \(\sum _{n=1}^{\infty } \mu _{n} = \infty \);

  2. (ii)

    \(\limsup _{n}\sigma _{n} \leq 0\) or \(\sum _{n=1}^{\infty } | \mu _{n} \sigma _{n} | < \infty \);

  3. (iii)

    \(\tau _{n} \geq 0\) for all \(n \in \mathbb{N}\), \(\sum _{n=1}^{\infty } \tau _{n} < \infty \).

Then \(\lim _{n \rightarrow \infty } a_{n} = 0\).
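A quick numerical illustration of Lemma 2.8, with the illustrative choices \(\mu _{n} = 1/n\), \(\sigma _{n} = 1/n\), \(\tau _{n} = 1/n^{2}\) (which satisfy (i)–(iii)) and the recursion taken with equality:

```python
# Numerical illustration of Lemma 2.8: with mu_n = 1/n, sigma_n = 1/n,
# tau_n = 1/n^2 (illustrative choices satisfying (i)-(iii)) and the recursion
# a_{n+1} = (1 - mu_n) a_n + mu_n sigma_n + tau_n taken with equality,
# the sequence a_n decays to 0 (roughly like 2 log(n)/n here).
a = 1.0
for n in range(1, 200001):
    mu, sigma, tau = 1.0 / n, 1.0 / n, 1.0 / n**2
    a = (1 - mu) * a + mu * sigma + tau

assert 0.0 <= a < 1e-3    # a_n -> 0
```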

Lemma 2.9

([26])

Let \(\{r_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) which satisfies \(r_{n_{i}} < r_{n_{i}+1}\) for all \(i \in \mathbb{N}\). Define a sequence of positive integers \(\{\rho (n)\}\) by

$$ \rho (n) := \max \{ m \leq n : r_{m} < r_{m+1}\} $$

for all \(n \geq n_{0}\) (for some \(n_{0}\) large enough). Then \(\{\rho (n)\}\) is a nondecreasing sequence such that \(\rho (n) \rightarrow \infty \) as \(n \rightarrow \infty \), and it holds that

$$ r_{\rho (n)} \leq r_{\rho (n)+1},~~~~~r_{n} \leq r_{\rho (n)+1}. $$
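The index sequence \(\rho (n)\) of Lemma 2.9 can be computed directly. The sketch below (with an illustrative finite sequence \(\{r_{n}\}\) and \(n_{0} = 4\); the lemma itself concerns infinite sequences) checks the stated properties on the available range:

```python
# Small check of Lemma 2.9: rho(n) = max{m <= n : r_m < r_{m+1}} is
# nondecreasing, r_{rho(n)} <= r_{rho(n)+1} and r_n <= r_{rho(n)+1}.
# The (1-indexed) sequence r below is an illustrative non-monotone choice.
r = [3, 1, 2, 0, 4, 2, 5, 3, 6, 4, 7]        # r_1, ..., r_11

def rho(n):                                  # indices are 1-based, as in the lemma
    return max(m for m in range(1, n + 1) if r[m - 1] < r[m])

prev = 0
for n in range(4, len(r)):                   # n >= n_0 = 4; n + 1 must index r
    assert rho(n) >= prev                    # {rho(n)} is nondecreasing
    prev = rho(n)
    assert r[rho(n) - 1] <= r[rho(n)]        # r_{rho(n)} <= r_{rho(n)+1}
    assert r[n - 1] <= r[rho(n)]             # r_n <= r_{rho(n)+1}
```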

3 Main Result

To prove our main result, we need the following useful lemma.

Lemma 3.1

Let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator, and let \(T : \mathcal{H}_{2} \rightarrow \mathcal{H}_{2}\) be a directed mapping with \(A^{-1}(F(T)) \neq \emptyset \). If \(x \in \mathcal{H}_{1}\) with \(Ax \neq T(Ax)\) and \(p \in A^{-1}(F(T))\), then

$$ \| x - \gamma A^{*}(I - T)Ax - p\|^{2} \leq \| x - p \|^{2} - (2 - \xi )\xi \frac{\|(I - T)Ax\|^{4}}{\|A^{*}(I - T)Ax\|^{2}}, $$
(3.1)

where

$$ \gamma := \frac{\xi \|(I - T)Ax\|^{2}}{\|A^{*}(I - T)Ax\|^{2}} $$

and \(\xi \in (0, 2)\).

Proof

Let \(x \in \mathcal{H}_{1}\) with \(Ax \neq T(Ax)\) and \(p \in A^{-1}(F(T))\). If \(A^{*}(I - T)Ax = 0\), by the property of a directed mapping \(T\), we have

$$ \| (I - T)Ax \|^{2} \leq \langle (I - T)Ax, Ax - Ap \rangle = \langle A^{*}(I - T)Ax, x - p \rangle = 0, $$

which is a contradiction. Thus, \(A^{*}(I - T)Ax \neq 0\) and hence \(\gamma \) is well defined. Since \(T\) is directed,

$$\begin{aligned} \| x - \gamma A^{*}(I - T)Ax - p\|^{2} &= \| x - p \|^{2} - 2 \gamma \langle A^{*}(I - T)Ax, x - p \rangle \\ &\quad + \gamma ^{2} \| A^{*}(I - T)Ax \|^{2} \\ &= \| x - p \|^{2} - 2 \gamma \langle (I - T)Ax, Ax - Ap \rangle \\ &\quad + \gamma ^{2} \| A^{*}(I - T)Ax \|^{2} \\ &\leq \| x - p \|^{2} - 2 \gamma \|(I - T)Ax \|^{2} + \gamma ^{2} \| A^{*}(I - T)Ax \|^{2} \\ &= \| x - p \|^{2} - (2 - \xi )\xi \frac{\|(I - T)Ax\|^{4}}{\|A^{*}(I - T)Ax\|^{2}}. \end{aligned}$$

 □
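Inequality (3.1) can also be verified numerically. In the sketch below we take \(T = P_{Q}\), the projection onto a box \(Q\) (projections are firmly nonexpansive, hence directed); the operator \(A\), the points \(x, p\) and the value \(\xi \) are illustrative choices:

```python
import numpy as np

# Numerical check of inequality (3.1) in Lemma 3.1 with T = P_Q, the
# projection onto Q = [0,1]^4 (a directed mapping).  A is a fixed diagonal
# operator; x, p and xi are illustrative choices.
A = np.diag([1.0, 2.0, 3.0, 4.0])
T = lambda v: np.clip(v, 0.0, 1.0)

q = 0.5 * np.ones(4)                  # a point of F(T) = Q
p = np.linalg.solve(A, q)             # p in A^{-1}(F(T))
x = p + np.ones(4)                    # then Ax = q + A(1,...,1) lies outside Q

r = A @ x - T(A @ x)                  # (I - T)Ax, nonzero here
xi = 1.0                              # xi in (0, 2)
gamma = xi * np.dot(r, r) / np.dot(A.T @ r, A.T @ r)

lhs = np.linalg.norm(x - gamma * (A.T @ r) - p) ** 2
rhs = (np.linalg.norm(x - p) ** 2
       - (2 - xi) * xi * np.dot(r, r) ** 2 / np.dot(A.T @ r, A.T @ r))
assert lhs <= rhs + 1e-12             # inequality (3.1)
```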

We now state and prove our main theorem.

Theorem 3.2

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and \(C\) be a nonempty closed convex subset of \(\mathcal{H}_{1}\). Let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator. Let \(U : C \rightarrow \) CB\((C)\) be a \(\kappa \)-demicontractive multivalued mapping. Let \(B_{1} : \mathcal{H}_{1} \rightarrow 2^{\mathcal{H}_{1}}\) and \(B_{2} : \mathcal{H}_{2} \rightarrow 2^{\mathcal{H}_{2}}\) be maximal monotone mappings such that dom\((B_{1})\) is included in \(C\), and let \(J_{\lambda }^{B_{1}}\) and \(J_{\lambda }^{B_{2}}\) be the resolvents of \(B_{1}\) and \(B_{2}\), respectively, for \(\lambda > 0\). Assume that \(\Omega := F(U) \cap \Gamma \neq \emptyset \), where \(\Gamma = \left \{ x \in B_{1}^{-1}0 : Ax \in B_{2}^{-1}0 \right \} \). Let \(f : C \rightarrow C\) be a \(\beta \)-contraction with respect to \(\Omega \). Suppose that \(\{x_{n}\}\) is a sequence generated iteratively by \(x_{1} \in C\) and

$$ \textstyle\begin{cases} y_{n} = J_{\lambda _{n}}^{B_{1}}(x_{n} - \gamma _{n} A^{*}(I - J_{ \lambda _{n}}^{B_{2}})Ax_{n}), \\ u_{n} = (1-\vartheta _{n})y_{n} + \vartheta _{n} z_{n}, \\ x_{n+1} = \alpha _{n}f(x_{n}) + (1 - \alpha _{n})u_{n},~~~~~~n \in \mathbb{N}, \end{cases} $$
(3.2)

where \(z_{n} \in Uy_{n}\), and the stepsize \(\gamma _{n}\) is selected as follows:

$$ \gamma _{n} := \textstyle\begin{cases} \frac{\xi _{n}\left \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n}\right \| ^{2}}{\left \| A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n}\right \| ^{2}}, ~ &\textit{if}~Ax_{n} \notin B_{2}^{-1}0, \\ 1, &\textit{otherwise}, \end{cases} $$
(3.3)

and the real sequences \(\{\alpha _{n}\}, \{\lambda _{n}\}, \{\vartheta _{n}\}\) and \(\{\xi _{n}\}\) satisfy the following conditions:

  1. (C1)

    \(\alpha _{n} \in (0, 1)\) such that \(\lim _{n \rightarrow \infty } \alpha _{n} = 0\) and \(\sum _{n=1}^{\infty } \alpha _{n} = \infty \);

  2. (C2)

    \(\lambda _{n} \in (0, \infty )\) such that \(\liminf _{n \rightarrow \infty } \lambda _{n} > 0\);

  3. (C3)

    \(0 < a \leq \vartheta _{n} \leq b < 1 - \kappa \);

  4. (C4)

    \(0 < c \leq \xi _{n} \leq d < 2\).

If \(U\) satisfies the endpoint condition and \(I - U\) is demiclosed at zero, then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
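Before turning to the proof, the following minimal sketch illustrates Algorithm (3.2) with the adaptive stepsize (3.3) on a toy instance; all data are our own illustrative choices: \(B_{1} = N_{C}\) and \(B_{2} = N_{Q}\) for the boxes \(C = [0,1]^{2}\) and \(Q = [2,4]^{2}\) (so the resolvents are projections, Example 2.6), \(U\) the identity mapping (0-demicontractive with the endpoint condition), \(f(x) = x/2\) and \(A = 2I\), for which \(\Omega = \{(1,1)\}\):

```python
import numpy as np

# Hedged toy run of Algorithm (3.2) with stepsize (3.3).  Illustrative data:
# B1 = N_C, B2 = N_Q for C = [0,1]^2, Q = [2,4]^2 (resolvents = projections),
# U = identity (0-demicontractive, endpoint condition holds), f(x) = x/2
# (a 1/2-contraction), A = 2I.  Here Omega = {(1,1)}.
P_C = lambda v: np.clip(v, 0.0, 1.0)
P_Q = lambda v: np.clip(v, 2.0, 4.0)
A = 2.0 * np.eye(2)

x = np.zeros(2)
for n in range(1, 1001):
    alpha, theta, xi = 1.0 / (n + 1), 0.5, 1.0      # satisfy (C1), (C3), (C4)
    r = A @ x - P_Q(A @ x)                          # (I - J^{B2}) A x
    if np.linalg.norm(r) > 0:                       # Ax_n not in B2^{-1}0
        gamma = xi * np.dot(r, r) / np.dot(A.T @ r, A.T @ r)
    else:
        gamma = 1.0
    y = P_C(x - gamma * (A.T @ r))                  # y_n
    z = y                                           # z_n in U y_n (U = identity)
    u = (1 - theta) * y + theta * z                 # u_n
    x = alpha * (x / 2.0) + (1 - alpha) * u         # x_{n+1}, f(x) = x/2

assert np.allclose(x, [1.0, 1.0], atol=1e-2)        # x_n -> x* = (1,1)
```

The iterates approach \(x^{*} = (1,1) = P_{\Omega }f(x^{*})\) without any knowledge of \(\|A\|\).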

Proof

By Lemma 2.3, \(F(U)\) is closed and convex, and hence \(\Omega \) is also closed and convex. One can show that \(P_{\Omega }f\) is a contraction on \(\Omega \), and it follows from the Banach fixed point theorem that \(x^{*} = P_{\Omega }f(x^{*})\) for some \(x^{*} \in \Omega \). By the characterization of the metric projection \(P_{\Omega }\), we have

$$ \langle f(x^{*}) - x^{*}\!, p - x^{*} \rangle \leq 0~~~\text{for all}~p \in \Omega . $$
(3.4)

Since \(x^{*} \in \Omega \), we have \(Ux^{*} = \{x^{*}\}\), \(J_{\lambda _{n}}^{B_{1}}x^{*} = x^{*}\) and \(J_{\lambda _{n}}^{B_{2}}(Ax^{*}) = Ax^{*}\). We first show that \(\{x_{n}\}\) is bounded. By Lemma 2.7 (ii) and the demicontractivity of \(U\) with the constant \(\kappa \), we have

$$\begin{aligned} \| u_{n} - x^{*} \|^{2} &= \left \| (1-\vartheta _{n})(y_{n} - x^{*}) + \vartheta _{n} (z_{n} - x^{*}) \right \| ^{2} \\ &= (1-\vartheta _{n}) \| y_{n} -x^{*} \|^{2} + \vartheta _{n}\| z_{n} - x^{*} \|^{2} - \vartheta _{n}(1-\vartheta _{n})\| y_{n} - z_{n} \|^{2} \\ &= (1-\vartheta _{n}) \| y_{n} -x^{*} \|^{2} + \vartheta _{n} d( z_{n}, Ux^{*} )^{2} - \vartheta _{n}(1-\vartheta _{n})\| y_{n} - z_{n} \|^{2} \\ &\leq (1-\vartheta _{n}) \| y_{n} -x^{*} \|^{2} + \vartheta _{n} H( Uy_{n}, Ux^{*})^{2} - \vartheta _{n}(1-\vartheta _{n})\| y_{n} - z_{n} \|^{2} \\ &\leq (1-\vartheta _{n}) \| y_{n} -x^{*} \|^{2} + \vartheta _{n} \left ( \| y_{n} - x^{*} \|^{2} + \kappa ~\!d(y_{n}, Uy_{n})^{2} \right ) \\ &\quad - \vartheta _{n}(1-\vartheta _{n})\| y_{n} - z_{n} \|^{2} \\ &\leq (1-\vartheta _{n}) \| y_{n} -x^{*} \|^{2} + \vartheta _{n} \| y_{n} - x^{*} \|^{2} + \vartheta _{n} \kappa \|y_{n} - z_{n}\|^{2} \\ &\quad - \vartheta _{n}(1-\vartheta _{n})\| y_{n} - z_{n} \|^{2} \\ &= \| y_{n} - x^{*} \|^{2} - \vartheta _{n}(1 - \kappa - \vartheta _{n} )\| y_{n} - z_{n}\|^{2}. \end{aligned}$$
(3.5)

Since \(J_{\lambda _{n}}^{B_{2}}\) is firmly nonexpansive, it is a directed mapping. If \(Ax_{n} \notin B_{2}^{-1}0\), then \(J_{\lambda _{n}}^{B_{2}}(Ax_{n}) \neq Ax_{n}\), and it follows from Lemma 3.1 that

$$\begin{aligned} \| y_{n} - x^{*} \|^{2} &= \| J_{\lambda _{n}}^{B_{1}}(x_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}} )Ax_{n}) - x^{*} \|^{2} \\ &\leq \| x_{n} - \gamma _{n} A^{*}(I - J_{ \lambda _{n}}^{B_{2}} )Ax_{n} - x^{*} \|^{2} \\ &\leq \| x_{n} - x^{*} \|^{2} - (2 - \xi _{n})\xi _{n} \frac{\| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{4}}{\| A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2}} \end{aligned}$$
(3.6)
$$\begin{aligned} &\leq \| x_{n} - x^{*} \|^{2} - \frac{(2 - \xi _{n})\xi _{n}}{\|A\|^{2}} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2}. \end{aligned}$$
(3.7)

Note that in the case of \(Ax_{n} \in B_{2}^{-1}0\), we have \(J_{\lambda _{n}}^{B_{2}}(Ax_{n}) = Ax_{n}\). This implies that \(\| y_{n} - x^{*} \|^{2} = \| J_{\lambda _{n}}^{B_{1}}x_{n} - J_{ \lambda _{n}}^{B_{1}}x^{*} \|^{2} \leq \| x_{n} - x^{*} \|^{2}\) and hence (3.7) still holds. By substituting (3.7) into (3.5), we have

$$\begin{aligned} \| u_{n} - x^{*} \|^{2} &\leq \| x_{n} - x^{*} \|^{2} - \frac{(2 - \xi _{n})\xi _{n}}{\|A\|^{2}} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \\ &\quad - \vartheta _{n}(1 - \kappa - \vartheta _{n} )\| y_{n} - z_{n} \|^{2}. \end{aligned}$$
(3.8)

It follows that \(\| u_{n} - x^{*} \| \leq \| x_{n} - x^{*} \|\). Thus, we have

$$\begin{aligned} \| x_{n+1} - x^{*} \| &= \| \alpha _{n}(f(x_{n}) - x^{*}) + (1 - \alpha _{n})(u_{n} - x^{*}) \| \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \| + (1-\alpha _{n})\| u_{n} - x^{*} \| \\ &\leq \alpha _{n}( \| f(x_{n}) - f(x^{*}) \| + \| f(x^{*}) - x^{*} \| ) + (1-\alpha _{n})\| x_{n} - x^{*}\| \\ &\leq \alpha _{n}( \beta \| x_{n} - x^{*} \| + \| f(x^{*}) - x^{*} \| ) + (1-\alpha _{n})\| x_{n} - x^{*}\| \\ &= (1 - \alpha _{n}(1-\beta )) \| x_{n} - x^{*} \| + \alpha _{n}(1- \beta ) \frac{\| f(x^{*}) - x^{*} \|}{1-\beta } \\ &\leq \max \left \{ \| x_{n} - x^{*} \|, \frac{\| f(x^{*}) - x^{*}\|}{1-\beta } \right \} . \end{aligned}$$

By mathematical induction, we obtain

$$\begin{aligned} \| x_{n} - x^{*} \| \leq \max \left \{ \| x_{1} - x^{*} \|, \frac{\| f(x^{*}) - x^{*} \|}{1-\beta } \right \} \end{aligned}$$

for all \(n \in \mathbb{N}\). Therefore, \(\{x_{n}\}\) is bounded. This implies that \(\{f(x_{n})\}\) and \(\{y_{n}\}\) are also bounded. Now, from (3.8), we have

$$\begin{aligned} \| x_{n+1} -x^{*} \|^{2} &= \| \alpha _{n}(f(x_{n}) - x^{*}) + (1 - \alpha _{n})(u_{n} - x^{*}) \|^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\alpha _{n})\| u_{n} - x^{*}\|^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \frac{(2 - \xi _{n})\xi _{n}}{\|A\|^{2}} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \\ &\quad - \vartheta _{n}(1 - \kappa - \vartheta _{n} )\| y_{n} - z_{n} \|^{2}. \end{aligned}$$
(3.9)

By (3.9), we obtain the following two inequalities

$$ \frac{(2 - \xi _{n})\xi _{n}}{\|A\|^{2}} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \|x_{n} - x^{*} \|^{2} - \| x_{n+1} - x^{*} \|^{2} $$
(3.10)

and

$$ \vartheta _{n}(1 - \kappa - \vartheta _{n} )\| y_{n} - z_{n}\|^{2} \leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \|x_{n} - x^{*} \|^{2} - \| x_{n+1} - x^{*} \|^{2}. $$
(3.11)

We next divide the rest of the proof into two cases.

Case 1: Assume that there exists \(n_{0} \in \mathbb{N}\) such that \(\{ \| x_{n} - x^{*} \| \}_{n \geq n_{0}}\) is either nonincreasing or nondecreasing. Since \(\{ \| x_{n} - x^{*} \| \}\) is bounded, it converges, and \(\|x_{n} - x^{*}\|^{2} - \| x_{n+1} - x^{*} \|^{2} \rightarrow 0\) as \(n \rightarrow \infty \). Inequalities (3.10), (3.11) and the control conditions (C1)–(C4) yield

$$\begin{aligned} \lim _{n \rightarrow \infty } \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \| = 0 \end{aligned}$$
(3.12)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \| y_{n} - z_{n} \| = 0. \end{aligned}$$
(3.13)

If \(Ax_{n} \in B_{2}^{-1}0\), then \(\gamma _{n} \| A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \| = 0\). Thus, we assume that \(Ax_{n} \notin B_{2}^{-1}0\). By substituting (3.6) into (3.5), we have

$$ \| u_{n} - x^{*} \|^{2} \leq \| x_{n} - x^{*} \|^{2} - (2 - \xi _{n}) \gamma _{n} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2}, $$

which implies that

$$\begin{aligned} \| x_{n+1} -x^{*} \|^{2} &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\alpha _{n})\| u_{n} - x^{*}\|^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - (2 - \xi _{n}) \gamma _{n} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \end{aligned}$$

or

$$\begin{aligned} (2 - \xi _{n}) \gamma _{n} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| x_{n+1} -x^{*} \|^{2}. \end{aligned}$$

It follows from (C1), (C2) and (C4) that \(\gamma _{n} \| (I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|^{2} \rightarrow 0\) as \(n \rightarrow \infty \). This implies that

$$\begin{aligned} \lim _{n \rightarrow \infty } \gamma _{n}\| A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \| = 0. \end{aligned}$$
(3.14)

By the firm nonexpansivity of \(J_{\lambda _{n}}^{B_{1}}\) and Lemma 3.1, we have

$$\begin{aligned} \| y_{n} - x^{*} \|^{2} &= \| J_{\lambda _{n}}^{B_{1}}(x_{n} - \gamma _{n} A^{*}(I -J_{\lambda _{n}}^{B_{2}} )Ax_{n}) - J_{\lambda _{n}}^{B_{1}}x^{*} \|^{2} \\ &\leq \langle J_{\lambda _{n}}^{B_{1}}(x_{n} - \gamma _{n} A^{*}( I - J_{ \lambda _{n}}^{B_{2}})Ax_{n}) - J_{\lambda _{n}}^{B_{1}}x^{*}\!, x_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}} )Ax_{n} - x^{*} \rangle \\ &= \langle y_{n} - x^{*}\!, x_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}} )Ax_{n} - x^{*} \rangle \\ &= \frac{1}{2} \Big( \| y_{n} - x^{*} \|^{2} + \| x_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n} - x^{*} \|^{2} \\ &\quad - \| y_{n} - x_{n} - \gamma _{n} A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n} \|^{2} \Big) \\ &\leq \frac{1}{2} \Big( \| y_{n} - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| y_{n} - x_{n} - \gamma _{n} A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n} \|^{2} \Big) \\ &= \frac{1}{2} \Big( \| y_{n} - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| y_{n} - x_{n} \|^{2} - \gamma _{n}^{2}\| A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n}\|^{2} \\ &\quad + 2\gamma _{n}\langle y_{n} - x_{n}, A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n} \rangle \Big) \\ &\leq \frac{1}{2} \Big( \| y_{n} - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| y_{n} - x_{n} \|^{2} - \gamma _{n}^{2}\| A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n}\|^{2} \\ &\quad + 2\gamma _{n} \| y_{n} - x_{n}\| \| A^{*}(J_{\lambda _{n}}^{B_{2}} - I)Ax_{n} \| \Big), \end{aligned}$$

which implies that

$$\begin{aligned} \| y_{n} - x^{*} \|^{2} &\leq \| x_{n} - x^{*} \|^{2} - \| y_{n} - x_{n} \|^{2} + 2\gamma _{n} \| y_{n} - x_{n}\| \| A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \|. \end{aligned}$$
(3.15)

Since \(U\) is \(\kappa \)-demicontractive, it follows from (3.15) that

$$\begin{aligned} \| x_{n+1} -x^{*} \|^{2} &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\alpha _{n})\| u_{n} - x^{*}\|^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\alpha _{n}) \Big( (1- \vartheta _{n})\| y_{n} - x^{*}\|^{2} + \vartheta _{n} \|z_{n} - x^{*} \|^{2} \Big) \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\vartheta _{n})\| y_{n} - x^{*}\|^{2} + \vartheta _{n} ~\!d( z_{n}, Ux^{*} )^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\vartheta _{n})\| y_{n} - x^{*}\|^{2} + \vartheta _{n} H( Uy_{n}, Ux^{*})^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + (1-\vartheta _{n})\| y_{n} - x^{*}\|^{2} + \vartheta _{n} \| y_{n} - x^{*} \|^{2} \\ & \quad + \vartheta _{n} \kappa ~\!d(y_{n}, Uy_{n})^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \| y_{n} - x^{*} \|^{2} + \vartheta _{n} \kappa \|y_{n} - z_{n} \|^{2} \\ &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| y_{n} - x_{n} \|^{2} \\ &\quad + 2\gamma _{n} \| y_{n} - x_{n}\| \| A^{*}( I - J_{\lambda _{n}}^{B_{2}})Ax_{n} \| + \vartheta _{n} \kappa \|y_{n} - z_{n} \|^{2}. \end{aligned}$$
(3.16)

By (3.12), (3.13), (3.14) and (3.16), we deduce that

$$\begin{aligned} \| y_{n} - x_{n} \|^{2} &\leq \alpha _{n}\| f(x_{n}) - x^{*} \|^{2} + 2 \gamma _{n} \| y_{n} - x_{n}\| \| A^{*}(I - J_{\lambda _{n}}^{B_{2}} )Ax_{n} \| + \vartheta _{n} \kappa \|y_{n} - z_{n} \|^{2} \\ &\quad +\| x_{n} - x^{*} \|^{2} - \| x_{n+1} -x^{*} \|^{2} \rightarrow 0 \end{aligned}$$

as \(n \rightarrow \infty \), which implies that \(\| y_{n} - x_{n} \| \rightarrow 0\) as \(n \rightarrow \infty \). We next show that

$$ \limsup _{n \rightarrow \infty } \langle f(x^{*}) - x^{*}\!, x_{n} - x^{*} \rangle \leq 0. $$

To show this, let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) such that

$$ \lim _{j \rightarrow \infty }\langle f(x^{*}) - x^{*}\!, x_{n_{j}} - x^{*} \rangle = \limsup _{n \rightarrow \infty }\langle f(x^{*}) - x^{*}\!, x_{n} - x^{*} \rangle . $$

Since \(\{x_{n_{j}}\}\) is bounded, there exist a subsequence \(\{x_{n_{j_{k}}}\}\) of \(\{x_{n_{j}}\}\) and \(p \in \mathcal{H}_{1}\) such that \(x_{n_{j_{k}}} \rightharpoonup p\). Without loss of generality, we can assume that \(x_{n_{j}} \rightharpoonup p\). Since \(A\) is a bounded linear operator, we have \(\langle z, Ax_{n_{j}} - Ap\rangle = \langle A^{*}z, x_{n_{j}} - p \rangle \rightarrow 0\) as \(j \rightarrow \infty \) for all \(z \in \mathcal{H}_{2}\), which implies that \(Ax_{n_{j}} \rightharpoonup Ap\). From (3.12) and the demiclosedness of \(I - J_{\lambda _{n}}^{B_{2}}\) at zero, we get \(Ap \in F(J_{\lambda _{n}}^{B_{2}}) = B_{2}^{-1}0\). Since \(x_{n_{j}} \rightharpoonup p\) and \(\| y_{n} - x_{n}\| \rightarrow 0\) as \(n \rightarrow \infty \), we have \(y_{n_{j}} \rightharpoonup p\). From (3.13) and the demiclosedness of \(I - U\) at zero, we obtain \(p \in F(U)\). Now let us show that \(p \in B_{1}^{-1}0\). From \(y _{n} = J_{\lambda _{n}}^{B_{1}}(x_{n} - \gamma _{n} A^{*}(I - J_{ \lambda _{n}}^{B_{2}} )Ax_{n})\), we have

$$ \frac{1}{\lambda _{n}}(x_{n} - y_{n} - \gamma _{n} A^{*}(I - J_{ \lambda _{n}}^{B_{2}})Ax_{n}) \in B_{1}y_{n}. $$
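For completeness, this inclusion is immediate from the definition of the resolvent \(J_{\lambda }^{B_{1}} = (I + \lambda B_{1})^{-1}\): writing \(w_{n} := x_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}})Ax_{n}\), we have

$$ y_{n} = (I + \lambda _{n} B_{1})^{-1}w_{n} \quad \Longleftrightarrow \quad w_{n} \in y_{n} + \lambda _{n} B_{1}y_{n} \quad \Longleftrightarrow \quad \frac{1}{\lambda _{n}}(w_{n} - y_{n}) \in B_{1}y_{n}. $$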

By the monotonicity of \(B_{1}\), we have

$$ \langle y_{n} - v, \frac{1}{\lambda _{n}}(x_{n} - y_{n} - \gamma _{n} A^{*}(I - J_{\lambda _{n}}^{B_{2}} )Ax_{n}) - w \rangle \geq 0 $$

for all \((v, w) \in \mathrm{G}(B_{1})\), the graph of \(B_{1}\). Thus, we also have

$$ \langle y_{n_{j}} - v, \frac{1}{\lambda _{n_{j}}}(x_{n_{j}} - y_{n_{j}} - \gamma _{n_{j}} A^{*}(I - J_{\lambda _{n_{j}}}^{B_{2}})Ax_{n_{j}}) - w \rangle \geq 0 $$
(3.17)

for all \((v, w) \in \mathrm{G}(B_{1})\). Since \(y_{n_{j}} \rightharpoonup p\), \(\| x_{n_{j}} - y_{n_{j}}\| \rightarrow 0\) and \(\|(I - J_{\lambda _{n_{j}}}^{B_{2}}) Ax_{n_{j}} \| \rightarrow 0\) as \(j \rightarrow \infty \), taking the limit as \(j \rightarrow \infty \) in (3.17) yields

$$ \langle p - v, -w \rangle \geq 0 $$

for all \((v, w) \in \mathrm{G}(B_{1})\). By the maximal monotonicity of \(B_{1}\), we get \(0 \in B_{1}p\), i.e., \(p \in B_{1}^{-1}0\). Therefore, \(p \in \Omega \). Since \(x^{*}\) satisfies inequality (3.4), we have

$$ \limsup _{n \rightarrow \infty } \langle f(x^{*}) - x^{*}\!, x_{n} - x^{*} \rangle = \lim _{j \rightarrow \infty }\langle f(x^{*}) - x^{*}\!, x_{n_{j}} - x^{*} \rangle = \langle f(x^{*}) - x^{*}\!, p - x^{*} \rangle \leq 0. $$

By Lemma 2.7 (i), we have

$$\begin{aligned} \| x_{n+1} - x^{*} \|^{2} &= \| (1 - \alpha _{n})(u_{n} - x^{*} ) + \alpha _{n}(f(x_{n}) - x^{*} ) \|^{2} \\ &\leq (1 - \alpha _{n})^{2} \| u_{n} - x^{*} \|^{2} + 2\alpha _{n} \langle f(x_{n}) - x^{*}\!, x_{n+1} - x^{*} \rangle \\ &= (1 - \alpha _{n})^{2} \| u_{n} - x^{*} \|^{2} + 2\alpha _{n} \langle f(x_{n}) - f(x^{*} ), x_{n+1} - x^{*} \rangle \\ &\quad + 2\alpha _{n} \langle f(x^{*} ) - x^{*}\!, x_{n+1} - x^{*} \rangle \\ &\leq (1 - \alpha _{n})^{2} \| x_{n} - x^{*} \|^{2} + 2\alpha _{n} \beta \| x_{n} - x^{*} \| \| x_{n+1} - x^{*} \| \\ &\quad + 2\alpha _{n} \langle f(x^{*} ) - x^{*}\!, x_{n+1} - x^{*} \rangle \\ &\leq (1 - \alpha _{n})^{2} \| x_{n} - x^{*} \|^{2} + \alpha _{n} \beta ( \| x_{n} - x^{*} \|^{2} + \| x_{n+1} - x^{*} \|^{2}) \\ &\quad + 2\alpha _{n} \langle f(x^{*} ) - x^{*}\!, x_{n+1} - x^{*} \rangle . \end{aligned}$$

Thus,

$$\begin{aligned} \| x_{n+1} - x^{*} \|^{2} &\leq \frac{(1 - \alpha _{n})^{2} + \alpha _{n}\beta }{1 - \alpha _{n}\beta } \| x_{n} - x^{*} \|^{2} + \frac{2\alpha _{n}}{1 - \alpha _{n}\beta } \langle f(x^{*}) - x^{*}\!, x_{n+1} - x^{*} \rangle \\ &= \left (1 - \frac{(1-\beta )\alpha _{n}}{1 - \alpha _{n}\beta } \right ) \| x_{n} - x^{*} \|^{2} + \frac{(\alpha _{n} - (1-\beta ))\alpha _{n}}{1 - \alpha _{n}\beta }\| x_{n} - x^{*} \|^{2} \\ &\quad + \frac{2\alpha _{n}}{1 - \alpha _{n}\beta } \langle f(x^{*} ) - x^{*}\!, x_{n+1} - x^{*} \rangle \\ &\leq \left (1 - \frac{(1-\beta )\alpha _{n}}{1 - \alpha _{n}\beta } \right ) \| x_{n} - x^{*} \|^{2} \\ &\quad +\frac{(1-\beta )\alpha _{n}}{1 - \alpha _{n}\beta } \left \{ \left ( \frac{\alpha _{n}}{1 - \beta } - 1 \right )M + \frac{2}{1 - \beta } \langle f(x^{*} ) - x^{*}\!, x_{n+1} - x^{*} \rangle \right \} \\ &= (1-\mu _{n})\| x_{n} - x^{*} \|^{2} + \mu _{n}\sigma _{n}, \end{aligned}$$
(3.18)

where \(M = \sup \{ \| x_{n} - x^{*} \|^{2} : n \in \mathbb{N} \}\), \(\mu _{n} = \frac{(1-\beta )\alpha _{n}}{1 - \alpha _{n}\beta }\), and \(\sigma _{n} = \left ( \frac{\alpha _{n}}{1 - \beta } - 1 \right )M + \frac{2}{1 - \beta } \langle f(x^{*}) - x^{*}\!, x_{n+1} - x^{*} \rangle \). Clearly, \(\{\mu _{n}\} \subset [0, 1]\), \(\sum _{n=1}^{\infty } \mu _{n} = \infty \) and \(\limsup _{n}\sigma _{n} \leq 0\). Hence, by applying Lemma 2.8 to (3.18), we can conclude that \(x_{n} \rightarrow x^{*}\) as \(n \rightarrow \infty \).
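Lemma 2.8 is the standard tool behind this step: any nonnegative recursion \(a_{n+1} \leq (1-\mu _{n})a_{n} + \mu _{n}\sigma _{n}\) with \(\{\mu _{n}\} \subset [0, 1]\), \(\sum _{n}\mu _{n} = \infty \) and \(\limsup _{n}\sigma _{n} \leq 0\) forces \(a_{n} \rightarrow 0\). The following quick numerical illustration (assuming Python; the choices \(\mu _{n} = 1/n\) and \(\sigma _{n} = 1/n\) are ours, purely for illustration) shows the decay:

```python
# Numerical illustration of Lemma 2.8: a_{n+1} <= (1 - mu_n) a_n + mu_n sigma_n
# with {mu_n} in [0,1], sum mu_n = infinity and limsup sigma_n <= 0 forces a_n -> 0.
# Illustrative choices (not from the paper): mu_n = 1/n, sigma_n = 1/n.
def lemma_2_8_demo(a1: float, n_max: int) -> float:
    a = a1
    for n in range(1, n_max):
        mu = 1.0 / n          # mu_n in [0, 1], sum over n diverges
        sigma = 1.0 / n       # sigma_n -> 0, so limsup sigma_n <= 0
        a = (1.0 - mu) * a + mu * sigma
    return a

print(lemma_2_8_demo(100.0, 100_000))  # decays roughly like log(n)/n
```

Even with a large initial value \(a_{1}\), the sequence collapses toward zero because the diverging sum \(\sum \mu _{n}\) repeatedly discounts \(a_{n}\).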

Case 2: Suppose that \(\{ \| x_{n} - x^{*} \|\}\) is not a monotone sequence. Then there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(\| x_{n_{i}} - x^{*} \| < \| x_{n_{i}+1} - x^{*} \|\) for all \(i \in \mathbb{N}\). We now define a positive integer sequence \(\{\rho (n)\}\) by

$$ \rho (n) := \max \{ m \leq n : \| x_{m} - x^{*} \| < \| x_{m+1} - x^{*} \|\} $$

for all \(n \geq n_{0}\) (for some \(n_{0}\) large enough). By Lemma 2.9, \(\{\rho (n)\}\) is a nondecreasing sequence such that \(\rho (n) \rightarrow \infty \) as \(n \rightarrow \infty \) and

$$ \| x_{\rho (n)} - x^{*} \|^{2} - \| x_{\rho (n)+1} -x^{*} \|^{2} \leq 0 $$

for all \(n \geq n_{0}\). From (3.10), we obtain

$$ \lim _{n \rightarrow \infty } \| (I - J_{\lambda _{\rho (n)}}^{B_{2}}) Ax_{ \rho (n)} \| = 0. $$
(3.19)

From (3.11), we have

$$ \lim _{n \rightarrow \infty } \| y_{\rho (n)} - z_{\rho (n)} \| = 0. $$
(3.20)

By (3.19), (3.20), and the same argument as in Case 1, we obtain

$$ \limsup _{n \rightarrow \infty } \langle f(x^{*}) - x^{*}\!, x_{\rho (n)} - x^{*} \rangle \leq 0. $$

By the same computation as in Case 1, we deduce that

$$\begin{aligned} \| x_{\rho (n)+1} - x^{*} \|^{2} &\leq (1-\mu _{\rho (n)})\| x_{\rho (n)} - x^{*} \|^{2} + \mu _{\rho (n)}\sigma _{\rho (n)}, \end{aligned}$$
(3.21)

where \(\mu _{\rho (n)} = \frac{(1-\beta )\alpha _{\rho (n)}}{1 - \alpha _{\rho (n)}\beta }\), \(\sigma _{\rho (n)} = \left ( \frac{\alpha _{\rho (n)}}{1 - \beta } - 1 \right )M + \frac{2}{1 - \beta } \langle f(x^{*}) - x^{*}\!, x_{\rho (n)+1} - x^{*} \rangle \) and \(M = \sup \{ \| x_{\rho (n)} - x^{*} \|^{2} : n \in \mathbb{N} \}\). Clearly, \(\limsup _{n}\sigma _{\rho (n)} \leq 0\). Since \(\| x_{\rho (n)} - x^{*} \|^{2} \leq \| x_{\rho (n)+1} -x^{*} \|^{2}\), it follows from (3.21) that \(\| x_{\rho (n)} - x^{*} \|^{2} \leq \sigma _{\rho (n)}\). This implies that \(\| x_{\rho (n)} - x^{*} \| \rightarrow 0\) as \(n \rightarrow \infty \). It follows from Lemma 2.9 and (3.21) that

$$ 0 \leq \| x_{n} - x^{*} \| \leq \| x_{\rho (n)+1} - x^{*} \| \rightarrow 0 $$

as \(n \rightarrow \infty \). Hence \(\{x_{n}\}\) converges strongly to \(x^{*}\). This completes the proof. □

Remark 3.3

We have some observations on Theorem 3.2.

  (i) The stepsize \(\gamma _{n}\) defined by (3.3) does not depend on \(\|A\|\); it is computed adaptively from the current iterate \(x_{n}\).

  (ii) Taking \(f(x) = u\) for some \(u \in C\), Algorithm (3.2) becomes a Halpern-type algorithm. In particular, if \(u = 0\) (in the case that \(0 \in C\)), then \(\{x_{n}\}\) converges to the minimum norm solution \(x^{*}\) in \(\Omega \).

4 Applications

4.1 The Split Feasibility Problem

Let \(C\) and \(Q\) be nonempty closed convex subsets of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, and let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator. Recall that the split feasibility problem (SFP) is to find a point

$$ x^{*} \in C ~~\text{such that }~~ Ax^{*} \in Q. $$
(4.1)

Applying Theorem 3.2, we obtain a strongly convergent algorithm, independent of the operator norm, for finding a common solution of the SFP (4.1) and the fixed point problem for demicontractive multivalued mappings as follows.

Theorem 4.1

Let \(C\) and \(Q\) be nonempty closed convex subsets of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, and let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator. Let \(U : C \rightarrow \) CB\((C)\) be a \(\kappa \)-demicontractive multivalued mapping. Assume that \(\Omega := F(U) \cap A^{-1}(Q) \neq \emptyset \). Let \(f :C \rightarrow C\) be a \(\beta \)-contraction with respect to \(\Omega \). Suppose that \(\{x_{n}\}\) is a sequence generated iteratively by \(x_{1} \in C\) and

$$ \textstyle\begin{cases} y_{n} = P_{C}(x_{n} - \gamma _{n} A^{*}(I - P_{Q})Ax_{n}), \\ u_{n} = (1-\vartheta _{n})y_{n} + \vartheta _{n} z_{n}, \\ x_{n+1} = \alpha _{n}f(x_{n}) + (1 - \alpha _{n})u_{n},~~~~~~n \in \mathbb{N}, \end{cases} $$
(4.2)

where \(z_{n} \in Uy_{n}\), the stepsize \(\gamma _{n}\) is selected in the same way as in [25]:

$$ \gamma _{n} := \textstyle\begin{cases} \frac{\xi _{n}\left \| (I - P_{Q})Ax_{n}\right \| ^{2}}{\left \| A^{*}(I - P_{Q})Ax_{n}\right \| ^{2}}, ~ &\textit{if}~Ax_{n} \notin Q, \\ 1, &\textit{otherwise}, \end{cases} $$
(4.3)

and the sequences \(\{\alpha _{n}\}, \{\vartheta _{n}\}\) and \(\{\xi _{n}\}\) satisfy (C1), (C3) and (C4) in Theorem 3.2. If \(U\) satisfies the endpoint condition and \(I - U\) is demiclosed at zero, then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

Proof

Setting \(B_{1} := N_{C} = \partial i_{C}\) and \(B_{2} := N_{Q} = \partial i_{Q}\), we see that \(B_{1}\) and \(B_{2}\) are maximal monotone, with \(J_{\lambda }^{B_{1}} = P_{C}\) and \(J_{\lambda }^{B_{2}} = P_{Q}\) for \(\lambda > 0\), and \(B_{1}^{-1}0 = C\) and \(B_{2}^{-1}0 = Q\). Thus, the result follows directly from Theorem 3.2. □
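To make iteration (4.2)–(4.3) concrete, here is a minimal numerical sketch (assuming Python with NumPy). The sets \(C\) (closed unit ball) and \(Q\) (nonnegative orthant), the matrix \(A\), the map \(Ux = -x/2\) (quasi-nonexpansive, hence demicontractive, with \(F(U) = \{0\}\)), the contraction \(f\) and all parameter values are illustrative assumptions of ours, not data from the paper:

```python
import numpy as np

# Toy SFP: find x in C = unit ball with Ax in Q = nonneg orthant (solution: x* = 0).
A = np.array([[2.0, 1.0], [1.0, 3.0]])

def P_C(x):                      # projection onto the closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def P_Q(y):                      # projection onto the nonnegative orthant
    return np.maximum(y, 0.0)

f = lambda x: 0.5 * x            # beta-contraction with beta = 1/2
x = np.array([0.9, -0.7])
for n in range(1, 60):
    r = A @ x - P_Q(A @ x)       # residual (I - P_Q)Ax_n
    g = A.T @ r                  # A^*(I - P_Q)Ax_n
    # adaptive stepsize (4.3) with xi_n = 1; it never uses ||A||
    gamma = np.dot(r, r) / np.dot(g, g) if np.linalg.norm(r) > 0 else 1.0
    y = P_C(x - gamma * g)
    u = 0.75 * y + 0.25 * (-0.5 * y)   # vartheta_n = 1/4, z_n = Uy_n = -y_n/2
    alpha = 1.0 / (n + 1)
    x = alpha * f(x) + (1 - alpha) * u

print(np.linalg.norm(x))          # shrinks toward 0, the unique point of Omega
```

Since \(\Omega = \{0\}\) in this toy setup, the iterates shrink geometrically to the zero vector, and \(\gamma _{n}\) is computed from the current residual only.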

4.2 The Split Minimization Problem

Let \(g_{1} : \mathcal{H}_{1} \rightarrow (- \infty , \infty ]\) and \(g_{2} : \mathcal{H}_{2} \rightarrow (- \infty , \infty ]\) be two proper, lower semicontinuous and convex functions, and let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator. The split minimization problem (SMP) is to find a point \(x^{*} \in \mathcal{H}_{1}\) such that

$$ x^{*} \in \text{Argmin}~\!g_{1} ~~ \text{and} ~~ Ax^{*} \in \text{Argmin}~\!g_{2}. $$
(4.4)

Applying Theorem 3.2, we obtain a strongly convergent algorithm, independent of the operator norm, for finding a common solution of the SMP (4.4) and the fixed point problem for demicontractive multivalued mappings as follows.

Theorem 4.2

Let \(A : \mathcal{H}_{1} \rightarrow \mathcal{H}_{2}\) be a bounded linear operator. Let \(U : \mathcal{H}_{1} \rightarrow \) CB\((\mathcal{H}_{1})\) be a \(\kappa \)-demicontractive multivalued mapping. Let \(g_{1} : \mathcal{H}_{1} \rightarrow (- \infty , \infty ]\) and \(g_{2} : \mathcal{H}_{2} \rightarrow (- \infty , \infty ]\) be two proper, lower semicontinuous and convex functions. Assume that \(\Omega := F(U) \cap \Gamma \neq \emptyset \), where \(\Gamma = \left \{ x \in \mathrm{Argmin}~\!g_{1} : Ax \in \mathrm{Argmin}~\!g_{2} \right \} \). Let \(f : \mathcal{H}_{1} \rightarrow \mathcal{H}_{1}\) be a \(\beta \)-contraction with respect to \(\Omega \). Suppose that \(\{x_{n}\}\) is a sequence generated iteratively by \(x_{1} \in \mathcal{H}_{1}\) and

$$ \textstyle\begin{cases} y_{n} = \mathrm{Prox}_{\lambda _{n} g_{1}}(x_{n} - \gamma _{n} A^{*}(I - \mathrm{Prox}_{\lambda _{n} g_{2}})Ax_{n}), \\ u_{n} = (1-\vartheta _{n})y_{n} + \vartheta _{n} z_{n}, \\ x_{n+1} = \alpha _{n}f(x_{n}) + (1 - \alpha _{n})u_{n},~~~~~~n \in \mathbb{N}, \end{cases} $$
(4.5)

where \(z_{n} \in Uy_{n}\), the stepsize \(\gamma _{n}\) is selected as follows:

$$ \gamma _{n} := \textstyle\begin{cases} \frac{\xi _{n}\left \| (I - \mathrm{Prox}_{\lambda _{n} g_{2}})Ax_{n}\right \| ^{2}}{\left \| A^{*}(I - \mathrm{Prox}_{\lambda _{n} g_{2}})Ax_{n}\right \| ^{2}}, ~ &\textit{if}~Ax_{n} \notin \mathrm{Argmin}~\!g_{2}, \\ 1, &\textit{otherwise}, \end{cases} $$
(4.6)

and the sequences \(\{\alpha _{n}\}, \{\lambda _{n}\}, \{\vartheta _{n}\}\) and \(\{\xi _{n}\}\) satisfy (C1)–(C4) in Theorem 3.2. If \(U\) satisfies the endpoint condition and \(I - U\) is demiclosed at zero, then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

Proof

Taking \(C = \mathcal{H}_{1}\), \(B_{1} := \partial g_{1}\) and \(B_{2} := \partial g_{2}\), we see that \(B_{1}\) and \(B_{2}\) are maximal monotone. One can show that \(\text{Argmin}~\!g_{1} = (\partial g_{1})^{-1}0 = B_{1}^{-1}0\) and \(\text{Argmin}~\!g_{2} = (\partial g_{2})^{-1}0 = B_{2}^{-1}0\). Therefore, the result follows immediately from Theorem 3.2. □
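As a concrete sketch of scheme (4.5)–(4.6) (assuming Python with NumPy), take \(g_{1} = \|\cdot \|_{1}\) on \(\mathbb{R}^{2}\), whose proximity operator is soft-thresholding, and \(g_{2} = \frac{1}{2}\|\cdot \|^{2}\), whose proximity operator is the shrinkage \(y \mapsto y/(1+\lambda )\); for simplicity we take \(U = I\), which is trivially demicontractive. All of these example choices are ours, not the paper's:

```python
import numpy as np

# Toy SMP: g1 = ||.||_1, g2 = 0.5||.||^2; Argmin g1 = Argmin g2 = {0}, so x* = 0.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = 1.0                        # lambda_n = 1

def prox_g1(x, l):               # soft-thresholding = Prox of l*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - l, 0.0)

def prox_g2(y, l):               # Prox of l * 0.5||.||^2 is a simple shrinkage
    return y / (1.0 + l)

f = lambda x: 0.5 * x            # contraction
x = np.array([5.0, -3.0])
for n in range(1, 60):
    r = A @ x - prox_g2(A @ x, lam)     # (I - Prox_{lam g2})Ax_n
    g = A.T @ r
    # adaptive stepsize (4.6) with xi_n = 1
    gamma = np.dot(r, r) / np.dot(g, g) if np.linalg.norm(r) > 0 else 1.0
    y = prox_g1(x - gamma * g, lam)
    alpha = 1.0 / (n + 1)
    x = alpha * f(x) + (1 - alpha) * y  # u_n = y_n since U = I

print(np.linalg.norm(x))
```

Because soft-thresholding maps small iterates to exactly zero, the sequence collapses quickly onto the unique solution \(x^{*} = 0\) of this toy instance.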

5 Numerical Examples

We first give a numerical example of Theorem 3.2 to demonstrate the convergence behavior of Algorithm (3.2).

Example 5.1

Let \(\mathcal{H}_{1} = C = \mathbb{R}\) and \(\mathcal{H}_{2} = \mathbb{R}^{3}\) with the usual norms. Define a multivalued mapping \(U : \mathbb{R} \rightarrow \) CB\((\mathbb{R})\) by

$$ Ux := \textstyle\begin{cases} \left [ -\frac{11x}{2}, -6x \right ]\!, ~~ &\text{if}~x \leq 0, \\ \left [ -6x, -\frac{11x}{2} \right ]\!, &\text{if}~x > 0. \\ \end{cases} $$

By Example 2.2, \(U\) is demicontractive with a constant \(\kappa =\frac{140}{169}\). Let \(B_{1} : \mathbb{R} \rightarrow 2^{\mathbb{R}}\) be defined by

$$ B_{1}(x) := \textstyle\begin{cases} \left \{ u \in \mathbb{R} : z^{2} + xz - 2x^{2} \geq (z - x)u, ~ \forall z \in [-9, 3] \right \} , ~~ &\text{if}~x \in [-9, 3], \\ \emptyset , &\text{otherwise}. \end{cases} $$

By [33, Theorem 4.2], \(B_{1}\) is maximal monotone. Define a maximal monotone mapping \(B_{2} : \mathbb{R}^{3} \rightarrow 2^{\mathbb{R}^{3}}\) by \(B_{2} := \partial g\), where \(g : \mathbb{R}^{3} \rightarrow \mathbb{R}\) is a function defined by

$$ g(x, y, z) = \frac{|2x -5y + 3z |^{2}}{2}. $$

Thus, the resolvents of \(B_{1}\) and \(B_{2}\) have the explicit forms \(J_{1}^{B_{1}}x = x/4\) and \(J_{1}^{B_{2}} = \text{Prox}_{g} = P^{-1}\), where

$$ P = \begin{pmatrix} 5 & -10 & 6 \\ -10 & 26 & -15 \\ 6 & -15 & 10 \\ \end{pmatrix}\!, $$

see [14, 30]. Define a bounded linear operator \(A: \mathbb{R} \rightarrow \mathbb{R}^{3}\) by \(Ax := (15x, 6x, -27x)\). Let \(\Omega := F(U) \cap \Gamma \), where \(\Gamma = \left \{ x \in B_{1}^{-1}0 : Ax \in B_{2}^{-1}0 \right \} \). Choose the control sequences \(\alpha _{n} = \frac{1}{n+3}\), \(\vartheta _{n} = \frac{n}{6n+1}\), \(\xi _{n} = \sqrt{\frac{3n+1}{n+2}}\), \(\lambda _{n} = 1\), and take \(z_{n} = -\frac{23}{4}y_{n}\) (so that \(z_{n} \in Uy_{n}\)). Thus, Algorithm (3.2) in our main theorem becomes

$$ x_{n+1} = \frac{1}{n+3}f(x_{n}) + \frac{1}{16}\left ( \frac{n + 2}{n+3} \right ) \left ( \frac{-3n + 4}{6n+1} \right ) \left ( x_{n} - \gamma _{n}A^{\top } \left ( I - P^{-1} \right ) Ax_{n} \right ), $$
(5.1)

where

$$ \gamma _{n} = \textstyle\begin{cases} \sqrt{\frac{3n+1}{n+2}} \frac{\left \| (I - P^{-1})Ax_{n}\right \| ^{2}}{\left \| A^{\top }(I - P^{-1})Ax_{n}\right \| ^{2}}, ~ &\text{if}~Ax_{n} \neq P^{-1}(Ax_{n}), \\ 1, &\text{otherwise}, \end{cases} $$

and \(f : \mathbb{R} \rightarrow \mathbb{R}\) is a contraction with respect to \(\Omega \).

We start with the initial point \(x_{1} = 10\) and set the stopping criterion to \(E_{n} := | x_{n} - x_{n-1} | < 10^{-7}\). In Table 1, a numerical experiment of Algorithm (5.1) is shown by taking \(f(x) = x/2\). In Table 2, we show the numbers of iterations of Algorithm (5.1) for different contractions of the form \(f(x) = \beta x\) \((0 < \beta < 1)\) and for constant functions.
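For reproducibility, a direct implementation of Algorithm (5.1) with \(f(x) = x/2\) can be sketched as follows (assuming Python with NumPy; the iteration cap is ours, the rest follows the example):

```python
import numpy as np

v = np.array([2.0, -5.0, 3.0])
P = np.eye(3) + np.outer(v, v)          # P = I + vv^T, so Prox_g = P^{-1}
Pinv = np.linalg.inv(P)
a = np.array([15.0, 6.0, -27.0])        # Ax = (15x, 6x, -27x)

f = lambda x: 0.5 * x                   # contraction f(x) = x/2
x = 10.0                                # initial point x_1 = 10
for n in range(1, 100):
    Ax = a * x
    r = Ax - Pinv @ Ax                  # (I - P^{-1})Ax_n
    g = a @ r                           # A^T(I - P^{-1})Ax_n (a scalar)
    if np.linalg.norm(r) > 0:
        xi = np.sqrt((3 * n + 1) / (n + 2))
        gamma = xi * np.dot(r, r) / g**2
    else:
        gamma = 1.0
    t = x - gamma * g
    x_new = f(x) / (n + 3) + (1 / 16) * ((n + 2) / (n + 3)) * ((-3 * n + 4) / (6 * n + 1)) * t
    if abs(x_new - x) < 1e-7:           # stopping criterion E_n < 10^{-7}
        x = x_new
        break
    x = x_new

print(x)                                # approaches 0, the solution in Omega
```

The iterates decrease in magnitude by roughly an order of magnitude per step, so the stopping criterion is met after only a few iterations.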

Table 1 Numerical experiment of Algorithm (5.1)
Table 2 The numbers of iterations of Algorithm (5.1) by choosing different contractions

Remark 5.2

By testing the convergence behavior of Algorithm (5.1) in Example 5.1, we observe that

  (i) Algorithm (5.1) converges to a solution; indeed, \(x_{n} \rightarrow 0 \in \Omega \).

  (ii) The contraction \(f\) influences the convergence behavior: choosing a non-constant contraction makes the algorithm more efficient than using a constant function, in terms of both the number of iterations and the accuracy of the approximate solution. In this sense, our algorithm is more general and practical than the Halpern-type method.

Next, we give an example in the infinite-dimensional space \(L^{2}\) for supporting Theorem 3.2.

Example 5.3

Let \(\mathcal{H}_{1} = C = \mathcal{H}_{2} = L^{2}([0, 1])\). Define a bounded linear operator \(A : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) by

$$ (Ax)(t) := \int _{0}^{1}(s^{2}+1)x(s)\,ds, \quad x \in L^{2}([0, 1]). $$

Define a mapping \(U : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) by

$$ (Ux)(t) := \sin (x(t)). $$

Then \(U\) is demicontractive; indeed, \(U\) is nonexpansive with \(F(U) = \{0\}\). Let

$$ E_{1} = \left \{ u \in L^{2}([0, 1]) : \int _{0}^{1} w(t)u(t) dt \leq 0 \right \} , $$

where \(0 \neq w \in L^{2}([0, 1])\), and let

$$ E_{2} = \{ u \in L^{2}([0, 1]) : u \geq 0 \}. $$

Define two maximal monotone mappings \(B_{1}, B_{2} : L^{2}([0, 1]) \rightarrow 2^{L^{2}([0, 1])}\) by \(B_{1} := N_{E_{1}}\) and \(B_{2} := N_{E_{2}}\) (see Example 2.6). We can write the explicit forms of the resolvents of \(B_{1}\) and \(B_{2}\) as follows:

$$ J_{\lambda }^{B_{1}}x = P_{E_{1}}x = \textstyle\begin{cases} x - \frac{\int _{0}^{1} w(t)x(t) dt }{\int _{0}^{1} w^{2}(t) dt}w, ~ &\text{if}~x \notin E_{1}, \\ x, &\text{if}~x \in E_{1} \end{cases} $$

and \(J_{\lambda }^{B_{2}}x = P_{E_{2}}x = x_{+}\), where \(x_{+}(t) = \max \{x(t), 0\}\) (see [6]). Now, Algorithm (3.2) has the following form:

$$ \textstyle\begin{cases} y_{n} = P_{E_{1}}(x_{n} - \gamma _{n} A^{*}(I - P_{E_{2}})Ax_{n}), \\ x_{n+1} = \alpha _{n}f(x_{n}) + (1 - \alpha _{n})((1-\vartheta _{n})y_{n} + \vartheta _{n} Uy_{n}), \end{cases} $$
(5.2)

where

$$ \gamma _{n} = \textstyle\begin{cases} \frac{\xi _{n}\left \| (I - P_{E_{2}})Ax_{n}\right \| ^{2}}{\left \| A^{*}(I - P_{E_{2}})Ax_{n}\right \| ^{2}}, ~ &\text{if}~Ax_{n} \notin E_{2}, \\ 1, &\text{otherwise}. \end{cases} $$

By choosing a contraction \(f : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) and control sequences \(\{\alpha _{n}\}\), \(\{\vartheta _{n}\}\) and \(\{\xi _{n}\}\) satisfying conditions (C1), (C3) and (C4), Theorem 3.2 guarantees that the sequence \(\{x_{n}\}\) generated by (5.2) converges strongly to \(0 \in \Omega := F(U) \cap E_{1} \cap A^{-1}(E_{2})\).
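Although (5.2) lives in the infinite-dimensional space \(L^{2}([0, 1])\), its behavior can be sketched numerically by discretizing \([0, 1]\). In the sketch below (assuming Python with NumPy), the grid size, the quadrature rule, the weight \(w(t) = t\), the contraction \(f(x) = x/2\), the initial function and all control parameters are illustrative choices of ours:

```python
import numpy as np

N = 200
t = np.linspace(0.0, 1.0, N)
h = 1.0 / (N - 1)                        # uniform grid; L2 inner product ~ h * dot
k = t**2 + 1.0                           # kernel of A: Ax is the constant <k, x>
w = t.copy()                             # illustrative weight w(t) = t defining E1

integ = lambda u: h * np.sum(u)          # crude quadrature for the integral over [0,1]

def A_op(x):  return integ(k * x) * np.ones(N)   # (Ax)(t) = \int (s^2+1)x(s) ds
def At_op(y): return k * integ(y)                # adjoint: (A^*y)(t) = (t^2+1)\int y

def P_E1(x):                             # projection onto {u : <w, u> <= 0}
    c = integ(w * x)
    return x if c <= 0 else x - (c / integ(w * w)) * w

P_E2 = lambda y: np.maximum(y, 0.0)      # projection onto the nonnegative cone

f = lambda x: 0.5 * x                    # contraction
x = np.cos(2 * np.pi * t) - 0.5          # illustrative initial function
norm0 = np.sqrt(integ(x**2))
for n in range(1, 2001):
    r = A_op(x) - P_E2(A_op(x))
    g = At_op(r)
    gr = integ(g * g)
    gamma = integ(r * r) / gr if gr > 0 else 1.0   # adaptive stepsize, xi_n = 1
    y = P_E1(x - gamma * g)
    u = 0.75 * y + 0.25 * np.sin(y)      # vartheta_n = 1/4, (Uy)(t) = sin(y(t))
    alpha = 1.0 / (n + 1)
    x = alpha * f(x) + (1 - alpha) * u

print(np.sqrt(integ(x**2)))              # discrete L2 norm shrinks toward 0
```

The discrete \(L^{2}\) norm of the iterates decays steadily, consistent with strong convergence to the zero function.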

6 Conclusion

In this paper, we consider the problem of finding a common solution of the split null point problem and the fixed point problem for multivalued mappings in Hilbert spaces. We focus on the class of maximal monotone mappings and the class of demicontractive multivalued mappings, which include several common classes of mappings arising in nonlinear analysis and optimization. Various algorithms have been introduced for solving this problem, and most of them depend on the norm of the bounded linear operator; however, computing an operator norm is not easy in general practice. We present a viscosity-type algorithm that does not require the operator norm, selecting the stepsizes in an adaptive way similar to that of López et al. [25], and obtain sufficient conditions for the strong convergence of the proposed algorithm. Moreover, our main result can be applied to the split feasibility problem and the split minimization problem. Numerical examples are also given to support our main result.