1 Introduction

Let X be a real Banach space. We study the following zero point problem: find \(x^* \in X\) such that

$$\begin{aligned} 0 \in Ax^* + Bx^*, \end{aligned}$$
(1)

where \(A : X \rightarrow X\) is an operator and \(B : X \rightarrow 2^X\) is a set-valued operator. This problem includes, as special cases, convex programming, variational inequalities, the split feasibility problem and the minimization problem. More precisely, many concrete problems in machine learning, image processing and linear inverse problems can be modeled mathematically in the form (1). For example:

Example 1

A stationary solution to the initial value problem of the evolution equation

$$\begin{aligned} 0 \in \frac{\partial u}{\partial t} + Fu, \; \; u(0) = u_0 \end{aligned}$$
(2)

can be rewritten as (1) when the governing maximal monotone operator F is of the form \(F = A + B\).

Example 2

In optimization, one often needs to solve a minimization problem of the form

$$\begin{aligned} \min _{x \in H}\{ f(x) + g(Tx)\} \end{aligned}$$
(3)

where H is a real Hilbert space, f and g are proper lower-semicontinuous convex functions from H to \((-\infty , \infty ]\), and T is a bounded linear operator on H.

Indeed, (3) is equivalent to (1) if f and \(g\circ T\) have a common point of continuity, with \(A := \partial f\) and \(B := T^* \circ \partial g \circ T\). Here \(T^*\) is the adjoint of T and \(\partial f\) is the subdifferential operator of f. It is known [1, 6, 19] that the minimization problem (3) is widely used in image recovery, signal processing and machine learning.

Example 3

If \(B = \partial \phi : H \rightarrow 2^H\), where \(\phi : H \rightarrow (-\infty , \infty ]\) is a proper, convex and lower-semicontinuous function and \(\partial \phi \) is the subdifferential of \(\phi \), then problem (1) is equivalent to finding \(x^* \in H\) such that

$$\begin{aligned} \langle Ax^*,\; v -x^*\rangle + \phi (v) - \phi (x^*) \ge 0, \; \forall v \in H \end{aligned}$$
(4)

which is said to be the mixed quasi-variational inequality.

Example 4

In Example 3, if \(\phi \) is the indicator function of C, i.e.,

$$\begin{aligned} \phi (x) = \left\{ \begin{array}{ll} 0, &{} \text {if } x \in C,\\ +\infty , &{} \text {if } x \notin C,\end{array}\right. \end{aligned}$$

then problem (4) is equivalent to the classical variational inequality problem, denoted by VI(C, A), i.e., to find \(x^* \in C\) such that

$$\begin{aligned} \langle Ax^*,\; v - x^*\rangle \ge 0, \; \forall v \in C. \end{aligned}$$
(5)

It is easy to see that (5) is equivalent to finding a point \(x^* \in C\) such that

$$\begin{aligned} 0 \in (A + B) x^*, \end{aligned}$$

where B is the subdifferential of the indicator function of C.

A classical method for solving problem (1) is the forward–backward splitting method [6, 10, 14, 21], which is defined in the following manner: for any fixed \(x_1 \in X\) and for \(r > 0\),

$$\begin{aligned} x_{n+1} = (I + rB)^{-1}(x_n - r Ax_n),\; \forall n \ge 1. \end{aligned}$$
(6)

We see that each step of the iteration involves only A in the forward step and B in the backward step, rather than the sum \(A + B\). In fact, this method includes, in particular, the proximal point algorithm [2, 7, 17] and the gradient method.
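As a concrete illustration of scheme (6), the following numeric sketch (with purely illustrative data, not taken from the paper) runs the forward–backward iteration in \(H = R^3\) with \(Ax = x - b\) (the gradient of \(\frac{1}{2}||x - b||^2\), which is 1-inverse strongly monotone) and B the subdifferential of \(\lambda ||\cdot ||_1\), whose resolvent is componentwise soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    # resolvent (I + tB)^{-1} of B = subdifferential of the l1-norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

b = np.array([3.0, -0.5, 1.2])   # hypothetical data
lam, r = 1.0, 0.5                # stepsize r within (0, 2*alpha) = (0, 2)
x = np.zeros_like(b)
for _ in range(200):
    # backward step applied to the forward step, as in (6):
    # x_{n+1} = (I + rB)^{-1}(x_n - r*A x_n)
    x = soft_threshold(x - r * (x - b), r * lam)

print(x)  # converges to soft_threshold(b, lam) = [2.0, 0.0, 0.2]
```

The limit is the unique zero of \(A + B\), i.e. the minimizer of \(\frac{1}{2}||x-b||^2 + \lambda ||x||_1\).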

In 2012, Takahashi et al. [20] proved some strong convergence theorems for the Halpern-type iteration in a Hilbert space H, which is defined in the following manner: for any \(x_1 \in H\),

$$\begin{aligned} x_{n+1} = \beta _n x_n + (1 - \beta _n)(\alpha _n u + (1 -\alpha _n)J_{r_n}^B(x_n - r_n Ax_n)),\; \forall n \ge 1, \end{aligned}$$
(7)

where \(u \in H\) is a fixed element, A is an \(\alpha \)-inverse strongly monotone mapping on H and B is a maximal monotone operator on H. Under suitable conditions, they proved that the sequence \(\{x_n\}\) generated by (7) converges strongly to a zero point of \(A + B\).

Recently, López et al. [11] introduced the following Halpern-type forward–backward method: for any \(x_1 \in X\),

$$\begin{aligned} x_{n+1} = \alpha _n u + (1 - \alpha _n)(J_{r_n}^B(x_n - r_n(Ax_n + a_n)) + b_n) \end{aligned}$$
(8)

where \(u \in X\), A is an \(\alpha \)-inverse strongly accretive mapping on X, B is an m-accretive operator on X, \(\{r_n\} \subset (0, \infty )\), \(\{\alpha _n\}\subset (0, 1]\) and \(\{a_n\}, \; \{b_n\}\) are error sequences in X. They proved that the sequence \(\{x_n\}\) generated by (8) converges strongly to a zero point of the sum of A and B under appropriate conditions.
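For intuition, here is a hedged numeric sketch of the Halpern-type method (8) on a toy problem (all data hypothetical): \(Ax = x - b\), B the subdifferential of \(\lambda ||\cdot ||_1\) with soft-thresholding as resolvent, anchor u, and summable error sequences \(a_n, b_n\):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

b = np.array([3.0, -0.5, 1.2])            # hypothetical data
lam, r = 1.0, 0.5
u = np.array([1.0, 1.0, 1.0])             # Halpern anchor
x = np.zeros_like(b)
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)                 # alpha_n -> 0 with divergent sum
    a_n = np.full(3, 1.0 / n**2)          # summable error in the forward step
    b_n = np.full(3, 1.0 / n**2)          # summable error in the backward step
    x = alpha * u + (1 - alpha) * (soft_threshold(x - r * ((x - b) + a_n), r * lam) + b_n)

print(x)  # approaches the zero [2.0, 0.0, 0.2] despite the perturbations
```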

Very recently, many works have been devoted to the problem of finding zero points of the sum of two monotone operators (in Hilbert spaces) or of two accretive operators (in Banach spaces). For more details, see, e.g., [5, 18, 20, 21, 23,24,25,26] and the references therein.

In this paper, we introduce and study a viscosity iterative forward–backward splitting method with errors for finding zeros of the sum of two accretive operators in the setting of Banach spaces. We prove the strong convergence of the method under mild conditions. We also discuss applications of the method to variational inequalities, the convex minimization problem and the convexly constrained linear inverse problem.

2 Preliminaries

In order to prove the main results of the paper, we need the following basic concepts, notations and lemmas.

In what follows, we always assume that X is a uniformly convex and q-uniformly smooth Banach space for some \(q \in (1, 2]\) (for the definitions and properties, see, e.g., [4]).

Recall that the generalized duality mapping \(J_q : X \rightarrow 2^{X^*}\) is defined by

$$\begin{aligned} J_q(x) = \{j_q(x) \in X^*: \langle j_q(x), x \rangle = ||x||^q,\; \; \; ||j_q(x)|| = ||x||^{q -1}\}, \end{aligned}$$

and the following subdifferential inequality holds: for any \(x, y\in X\),

$$\begin{aligned} ||x + y||^q \le ||x||^q + q \langle y, j_q(x + y)\rangle ,\; j_q(x + y) \in J_q(x + y). \end{aligned}$$
(9)

Recall that [11] if X is q-uniformly smooth, \(q \in (1, 2]\), then there exists a constant \(\kappa _q > 0\) such that

$$\begin{aligned} || x + y||^q \le ||x||^q + q \langle y,\; j_q(x)\rangle + \kappa _q||y||^q,\; x, y \in X. \end{aligned}$$
(10)

The best constant \(\kappa _q\) satisfying (10) will be called the q-uniform smoothness coefficient of X.
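For orientation, in a Hilbert space (q = 2) inequality (10) holds with \(\kappa _2 = 1\) and is in fact the identity \(||x+y||^2 = ||x||^2 + 2\langle y, x\rangle + ||y||^2\); a quick numeric check (purely illustrative, not part of the paper's argument):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lhs = np.linalg.norm(x + y) ** 2
    # inequality (10) with q = 2 and kappa_2 = 1; in a Hilbert space it is an equality
    rhs = np.linalg.norm(x) ** 2 + 2 * np.dot(y, x) + np.linalg.norm(y) ** 2
    assert abs(lhs - rhs) < 1e-9
print("identity (10) with kappa_2 = 1 verified on random samples")
```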

Proposition 1

([4]). Let \(1 < q \le 2\). Then the following conclusions hold:

  1. (1)

The Banach space X is smooth if and only if the duality mapping \(J_q\) is single-valued.

  2. (2)

The Banach space X is uniformly smooth if and only if the duality mapping \(J_q\) is single-valued and norm-to-norm uniformly continuous on bounded subsets of X.

Recall that a set-valued operator \(A : X \rightarrow 2^X\) with domain D(A) and range R(A) is said to be accretive if, for each \(x, y \in D(A)\), there exists \(j(x - y) \in J(x - y)\) (where \(J = J_2\) is the normalized duality mapping) such that

$$\begin{aligned} \langle u - v, \; j(x - y) \rangle \ge 0, \quad \forall u \in Ax,\; v \in Ay. \end{aligned}$$
(11)

An accretive operator A is said to be m-accretive if the range \(R(I + \lambda A) = X, \; \forall \lambda > 0\).

For any \(\alpha > 0\) and \(q \in (1, 2]\), we say that an accretive operator A is \(\alpha \)-inverse strongly accretive (briefly, \(\alpha \)-isa) of order q if, for each \(x, y \in D(A)\), there exists \(j_q(x - y) \in J_q(x - y)\) such that

$$\begin{aligned} \langle u - v,\; j_q(x - y)\rangle \ge \alpha ||u -v||^q, \quad \forall u \in Ax,\; v \in Ay. \end{aligned}$$
(12)

Let C be a nonempty closed and convex subset of a real Banach space X and K be a nonempty subset of C. A mapping \(T: C \rightarrow K\) is called a retraction of C onto K if \(Tx = x\) for all \(x \in K\). We say that T is sunny if, for each \(x \in C\) and \(t \ge 0\),

$$\begin{aligned} T(tx + (1 - t)Tx) = Tx, \end{aligned}$$
(13)

whenever \(tx + (1 - t)Tx \in C\). A sunny nonexpansive retraction is a sunny retraction which is also nonexpansive.
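In a Hilbert space, the metric projection onto a closed convex set is the sunny nonexpansive retraction onto it. The following sketch (illustrative only) checks the sunny property (13) for the projection onto the closed unit ball of \(R^3\), \(P_C x = x/\max (1, ||x||)\):

```python
import numpy as np

def P_C(x):
    # metric projection onto the closed unit ball of R^3
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(1)
for _ in range(100):
    x = 3.0 * rng.normal(size=3)      # a point of R^3 (often outside the ball)
    Tx = P_C(x)
    for t in [0.2, 1.0, 2.5]:         # t >= 0 as in (13)
        y = t * x + (1 - t) * Tx      # point on the ray from Tx through x
        assert np.linalg.norm(P_C(y) - Tx) < 1e-9   # T(t x + (1-t)Tx) = Tx
print("sunny property (13) holds for these samples")
```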

Proposition 2

([15, 28]). Let X be a uniformly smooth Banach space, \(T : C \rightarrow C\) be a nonexpansive mapping with a fixed point and \(f: C \rightarrow C\) be a contraction mapping. For each \(t \in (0, 1)\), the unique fixed point \(x_t \in C\) of the contraction mapping \(tf + (1-t)T: C \rightarrow C\) converges strongly as \(t \rightarrow 0\) to the fixed point z of T determined by \(z = Q f(z)\), where \(Q : C \rightarrow Fix(T)\) is the unique sunny nonexpansive retraction from C onto Fix(T).

Lemma 1

([12, Lemma 3.1]). Let \(\{a_n\}\), \(\{c_n\} \subset R^+\), \(\{\alpha _n\} \subset (0, 1)\) and \(\{b_n\} \subset R\) be sequences such that

$$\begin{aligned} a_{n+1} \le (1 - \alpha _n)a_n + b_n + c_n,\; \forall n \ge 1. \end{aligned}$$

Assume that \( \sum _{n=1}^\infty c_n < \infty \). Then the following results hold:

  1. (1)

    If \(b_n \le \alpha _n M\), where \(M \ge 0\), then \(\{a_n\}\) is bounded.

  2. (2)

    If \(\sum _{n =1}^\infty \alpha _n = \infty \) and \(\limsup _{n \rightarrow \infty } \frac{b_n}{\alpha _n} \le 0\), then \(\lim _{n \rightarrow \infty } a_n = 0\).

Lemma 2

([8]). Let \(\{s_n\}\) be a sequence of nonnegative real numbers such that

$$\begin{aligned} s_{n+1} \le (1 - \gamma _n) s_n + \gamma _n \tau _n \end{aligned}$$

and

$$\begin{aligned} s_{n+1}\le s_n - \eta _n + \rho _n, \forall n \ge 1, \end{aligned}$$

where \(\{\gamma _n\}\) is a sequence in (0, 1), \(\{\eta _n\}\) is a sequence of nonnegative real numbers, \(\{\tau _n\}\) and \(\{\rho _n\}\) are real sequences such that

  1. (a)

    \(\sum _{n=1}^\infty \gamma _n = \infty \);

  2. (b)

    \(\lim _{n \rightarrow \infty }\rho _n = 0\);

  3. (c)

    \(\lim _{k \rightarrow \infty } \eta _{n_k} = 0\) implies \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\) for any subsequence \(\{n_k\} \subset \{n\}\).

Then \(\lim _{n \rightarrow \infty } s_n = 0\).

It is easy to prove that the following conclusion holds.

Lemma 3

For any \(r > 0\), if

$$\begin{aligned} T_r := J_r^B (I - rA) = (I + rB)^{-1} (I - rA), \end{aligned}$$

then \(Fix(T_r) = (A + B)^{-1}(0)\).
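A numeric illustration of Lemma 3 (with hypothetical data): for \(Ax = x - b\) and B the subdifferential of \(\lambda ||\cdot ||_1\), the unique zero \(x^*\) of \(A + B\) is a fixed point of \(T_r\) for every \(r > 0\):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

b = np.array([3.0, -0.5, 1.2])             # hypothetical data
lam = 1.0
x_star = soft_threshold(b, lam)            # the unique zero of A + B here

for r in [0.1, 0.5, 1.0, 5.0]:
    # T_r = (I + rB)^{-1}(I - rA); Lemma 3 says Fix(T_r) = (A + B)^{-1}(0)
    T_r_x = soft_threshold(x_star - r * (x_star - b), r * lam)
    assert np.allclose(T_r_x, x_star)      # fixed point for every r > 0
print(x_star)
```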

Lemma 4

([11, Lemma 3.2]). For any \(s \in (0, r]\) and \(x \in X\), we have

$$\begin{aligned} ||x - T_s x|| \le 2 ||x - T_r x||. \end{aligned}$$

Lemma 5

([11, Lemma 3.3]). Let X be a uniformly convex and q-uniformly smooth Banach space with \(q \in (1, 2]\). Assume that A is a single-valued \(\alpha \)-isa of order q on X. Then, for any \(r > 0\), there exists a continuous, strictly increasing and convex function \(\phi _q : R^+ \rightarrow R^+\) with \(\phi _q(0) = 0\) such that for all \(x, y \in B_r\),

$$\begin{aligned} ||T_r x - T_r y||^q\le & {} || x-y||^q - r(\alpha q - r^{q -1} \kappa _q)||Ax - Ay||^q\nonumber \\&- \phi _q(||(I - J^B_r)(I - rA)x - (I - J^B_r)(I - rA)y||), \end{aligned}$$
(14)

where \(\kappa _q\) is the q-uniform smoothness coefficient of X.

It is easy to prove that the following inequality holds.

Proposition 3

Let \( 1 < q \le 2\) and let X be a real smooth Banach space with the generalized duality mapping \(j_q\). Let m be a fixed positive integer. Let \(\{x_i\}_{i=1}^m \subset X\) and \(t_i \ge 0\) for all \(i = 1, 2, ..., m\) with \(\sum _{i=1}^m t_i \le 1\). Then we have

$$\begin{aligned} ||\sum _{i=1}^m t_i x_i||^q \le \sum _{i=1}^m t_i ||x_i||^q. \end{aligned}$$
(15)
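A quick numeric check of inequality (15) for q = 2 in \(R^4\) (illustrative only; the inequality follows from the convexity of \(||\cdot ||^q\) after padding the weights with an extra vector \(x_0 = 0\)):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(200):
    xs = rng.normal(size=(3, 4))          # m = 3 vectors x_i in R^4
    t = rng.uniform(size=3)
    t = 0.9 * t / t.sum()                 # t_i >= 0 with sum t_i = 0.9 <= 1
    lhs = np.linalg.norm((t[:, None] * xs).sum(axis=0)) ** 2
    rhs = float(np.sum(t * np.linalg.norm(xs, axis=1) ** 2))
    assert lhs <= rhs + 1e-9              # inequality (15) with q = 2
print("inequality (15) verified on random samples")
```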

3 Main Results

We are now in a position to give the following main results.

Theorem 1

Let X be a uniformly convex and q-uniformly smooth Banach space with \(q\in (1, 2]\). Let \(A : X \rightarrow X\) be an \(\alpha \)-isa of order q and \(B : X \rightarrow 2^X\) be an m-accretive operator such that \({\varGamma }:= (A + B)^{-1}(0) \not = \emptyset \). Let \(\{e_n\}\) be a sequence in X and \(f: X\rightarrow X\) be a contraction mapping with contraction constant \(\xi \in (0, 1)\). Let \(\{x_n\}\) be a sequence generated by \(x_1 \in X\) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n J_{r_n}^B (x_n - r_n Ax_n) + e_n, \; n \ge 1, \end{aligned}$$
(16)

where \(J_{r_n}^B = (I + r_n B)^{-1}\), \(\kappa _q\) is the q-uniform smoothness coefficient of X, \(0 < r_n \le (\frac{\alpha q}{\kappa _q})^{1/(q-1)}\) and \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\). If \(\sum _{n=1}^\infty ||e_n|| < \infty \), or \(\lim _{n\rightarrow \infty }||e_n||/\alpha _n = 0\), then \(\{x_n\}\) is bounded.

Proof

For each \(n \ge 1\), we put \(T_n = J_{r_n}^B (I - r_n A)\) and let the sequence \(\{y_n\}\) be defined by \(y_1 = x_1\) and

$$\begin{aligned} y_{n+1} = \alpha _n f(y_n) + \lambda _n y_n + \delta _n T_n y_n. \end{aligned}$$
(17)

By the condition \(0 < r_n \le ( \frac{\alpha q}{\kappa _q} )^{1/(q-1)}\) and Lemma 5, we know that \(T_n \) is a nonexpansive mapping. Hence we have

$$\begin{aligned} ||x_{n+1} - y_{n+1}||= & {} ||\alpha _n( f(x_n) - f(y_n)) + \lambda _n (x_n - y_n) + \delta _n (T_n x_n - T_n y_n) + e_n|| \\\le & {} \alpha _n \xi ||x_n - y_n|| + \lambda _n ||x_n - y_n|| + \delta _n || x_n - y_n|| + ||e_n|| \\= & {} (1 - \alpha _n (1-\xi )) ||x_n - y_n|| + ||e_n||. \end{aligned}$$

By Lemma 1 (1), \(\{||x_n - y_n||\}\) is bounded; moreover, if in addition \(\sum _{n=1}^\infty \alpha _n = \infty \) (as assumed in Theorem 2 below), then by Lemma 1 (2) we conclude that \(\lim _{n \rightarrow \infty } ||x_n - y_n|| = 0\).

Next we show that \(\{y_n\}\) is bounded. Indeed, let \(z \in {\varGamma }\). By Lemma 3 this implies that \(z\in (A + B)^{-1}(0) = Fix(T_n), \forall n \ge 1\). Hence we have

$$\begin{aligned} ||y_{n+1} - z||= & {} ||\alpha _n(f(y_n)- z) + \lambda _n(y_n - z) + \delta _n(T_n y_n - z)|| \nonumber \\\le & {} \alpha _n ||f(y_n)- f(z)|| + \alpha _n||f(z)- z|| + \lambda _n ||y_n - z|| + \delta _n ||y_n - z|| \nonumber \\\le & {} (1- \alpha _n (1- \xi ))||y_n- z|| + \alpha _n||f(z)- z||. \end{aligned}$$
(18)

By Lemma 1 (1), \(\{y_n\}\) is bounded, and so is \(\{x_n\}\). This completes the proof of Theorem 1. \(\square \)

Theorem 2

Let \(X, A, B, f, q, \kappa _q, \{e_n\}, {\varGamma }\) and \(\{x_n\}\) be the same as in Theorem 1. If \({\varGamma }\not = \emptyset \) and the following conditions are satisfied:

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

    \(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le (\alpha q/\kappa _q )^{1/(q-1)}\);

  4. (iv)

\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then \(\{x_n\}\) converges strongly to \(z = Qf(z)\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).

Proof

In Theorem 1 we have proved that \(\lim _{n \rightarrow \infty }||x_n - y_n|| = 0\). In order to prove the conclusion, it suffices to show that \(\lim _{n \rightarrow \infty } y_n = z = Qf(z)\). In fact, from (9), we have

$$\begin{aligned} ||y_{n+1} - z||^q= & {} ||\alpha _n(f(y_n) - z) + \lambda _n(y_n - z) + \delta _n(T_n y_n - z)||^q \nonumber \\\le & {} ||\lambda _n(y_n - z) + \delta _n(T_n y_n - z)||^q \nonumber \\&+ \,q \alpha _n\langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle . \end{aligned}$$
(19)

Since \(z = Qf(z)\in {\varGamma }= Fix(T_n), \; \forall n \ge 1\), from Proposition 3 and Lemma 5 we have

$$\begin{aligned}&{||\lambda _n(y_n - z) + \delta _n(T_n y_n - z)||^q}\nonumber \\&\quad \le \lambda _n ||y_n - z||^q + \delta _n ||T_n y_n - z||^q \nonumber \\&\quad \le \lambda _n ||y_n - z||^q + \delta _n ||T_n y_n - T_n z||^q \nonumber \\&\quad \le \lambda _n ||y_n - z||^q + \delta _n \big \{||y_n -z||^q - r_n(\alpha q -r_n^{q-1}\kappa _q)||Ay_n - Az||^q\nonumber \\&\qquad - \,\phi _q(||y_n - r_n Ay_n - T_n y_n + r_n Az||)\big \}\nonumber \\&\quad = (1 -\alpha _n)||y_n - z||^q - \delta _n r_n(\alpha q -r_n^{q-1}\kappa _q)||Ay_n - Az||^q \nonumber \\&\qquad -\, \delta _n \phi _q(||y_n - r_n Ay_n - T_n y_n + r_n Az||). \end{aligned}$$
(20)

Substituting (20) into (19) we have

$$\begin{aligned} ||y_{n+1} - z||^q\le & {} (1 -\alpha _n)||y_n - z||^q - \delta _n r_n(\alpha q -r_n^{q-1}\kappa _q)||Ay_n - Az||^q \nonumber \\&-\, \delta _n \phi _q(||y_n - r_n Ay_n - T_n y_n + r_n Az||) \nonumber \\&+\, q \alpha _n\langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle . \end{aligned}$$
(21)

Since \(\alpha q -r_n^{q-1}\kappa _q \ge 0\), we have

$$\begin{aligned} ||y_{n+1} - z||^q \le (1 -\alpha _n)||y_n - z||^q + q \alpha _n\langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle \end{aligned}$$
(22)

and

$$\begin{aligned} ||y_{n+1} - z||^q\le & {} ||y_n - z||^q - \delta _n r_n(\alpha q -r_n^{q-1}\kappa _q)||Ay_n - Az||^q \nonumber \\&- \,\delta _n \phi _q(||y_n - r_n Ay_n - T_n y_n + r_n Az||) \nonumber \\&+ \,q \alpha _n\langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle . \end{aligned}$$
(23)

For each \(n \ge 1\), let

$$\begin{aligned}&\displaystyle s_n = ||y_n - z||^q;\; \; \gamma _n = \alpha _n; \\&\displaystyle \rho _n = q \alpha _n\langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle ; \\&\displaystyle \tau _n = q \langle f(y_n) - z,\; j_q(y_{n+1} - z)\rangle ; \\&\displaystyle \eta _n = \delta _n r_n(\alpha q -r_n^{q-1}\kappa _q)||Ay_n - Az||^q + \delta _n \phi _q(||y_n - r_n Ay_n - T_n y_n + r_n Az||). \end{aligned}$$

Then (22) and (23) can be written as:

$$\begin{aligned} s_{n+1} \le (1 - \gamma _n) s_n + \gamma _n \tau _n \end{aligned}$$
(24)

and

$$\begin{aligned} s_{n+1} \le s_n - \eta _n + \rho _n. \end{aligned}$$
(25)

Since \(\alpha _n \in (0, 1)\), \(\alpha _n \rightarrow 0\) and \(\sum _{n = 1}^\infty \alpha _n = \infty \), it follows that \(\gamma _n \in (0, 1)\), \(\sum _{n=1}^\infty \gamma _n = \infty \) and \(\lim _{n \rightarrow \infty }\rho _n = 0\). In order to prove that \(s_n \rightarrow 0\), by Lemma 2 it suffices to show that, for any subsequence \(\{n_k\} \subset \{n\}\), \(\lim _{k \rightarrow \infty } \eta _{n_k} = 0\) implies \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\).

Indeed, if \(\{n_k\}\) is a subsequence of \(\{n\}\) such that \(\lim _{k \rightarrow \infty }\eta _{n_k} = 0\), then by the assumptions and the properties of \(\phi _q\), we can deduce that

$$\begin{aligned} \left\{ \begin{array}{ll}&{}\lim _{k \rightarrow \infty } ||Ay_{n_k} - Az|| = 0;\\ &{} \lim _{k \rightarrow \infty } ||y_{n_k} - r_{n_k} Ay_{n_k} - T_{n_k} y_{n_k} + r_{n_k} Az|| = 0.\end{array}\right. \end{aligned}$$
(26)

This implies, by the triangle inequality, that

$$\begin{aligned} \lim _{k \rightarrow \infty }||T_{n_k} y_{n_k} - y_{n_k}|| = 0. \end{aligned}$$
(27)

Since \(\liminf _{n\rightarrow \infty } r_n > 0\), there is \(r > 0\) such that \(r_n \ge r\) for all \(n \ge 1\). In particular, \(r_{n_k} \ge r\) for all \(k \ge 1\). It follows from Lemma 4 and (27) that

$$\begin{aligned} \limsup _{k \rightarrow \infty }||T_r y_{n_k} - y_{n_k}|| \le 2 \limsup _{k \rightarrow \infty } ||T_{n_k} y_{n_k} - y_{n_k}|| = 0, \end{aligned}$$
(28)

which implies that

$$\begin{aligned} \lim _{k \rightarrow \infty }||T_r y_{n_k} - y_{n_k}|| = 0. \end{aligned}$$
(29)

Put

$$\begin{aligned} z_t = t f(z_t) + (1-t) T_r z_t,\; t \in (0, 1). \end{aligned}$$

By Proposition 2, \(z_t\) converges strongly as \(t \rightarrow 0\) to the unique fixed point \(z= Q f(z) \in Fix(T_r) = (A + B)^{-1}(0)\), where \(Q : X \rightarrow Fix(T_r)\) is the unique sunny nonexpansive retraction from X onto \(Fix(T_r) = (A + B)^{-1}(0)\). So we obtain

$$\begin{aligned} ||z_t - y_{n_k}||^q= & {} ||t(f(z_t) - y_{n_k}) + (1-t)( T_r z_t - y_{n_k})||^q \\\le & {} (1-t)^q || T_r z_t - y_{n_k}||^q + q t \langle f(z_t) - z_t,\; j_q(z_t - y_{n_k}) \rangle \\&+ \, q t \langle z_t - y_{n_k} ,\;j_q(z_t - y_{n_k}) \rangle \\\le & {} (1-t)^q \{|| T_r z_t - T_r y_{n_k}|| + || T_r y_{n_k} - y_{n_k} ||\}^q \\&+ \,q t \langle f(z_t) - z_t,\; j_q(z_t - y_{n_k}) \rangle + q t ||z_t - y_{n_k}||^q \\\le & {} (1-t)^q \{|| z_t - y_{n_k}|| + || T_r y_{n_k} - y_{n_k} ||\}^q \\&+ \,q t \langle f(z_t) - z_t,\; j_q(z_t - y_{n_k}) \rangle + q t ||z_t - y_{n_k}||^q. \end{aligned}$$

After simplifying we have

$$\begin{aligned}&{\langle z_t - f(z_t),\; j_q(z_t - y_{n_k}) \rangle } \nonumber \\&\quad \le \frac{1}{qt} \{(1-t)^q (|| z_t - y_{n_k}|| + || T_r y_{n_k} - y_{n_k}||)^q + (qt-1) ||z_t - y_{n_k}||^q\}.\qquad \quad \end{aligned}$$
(30)

It follows from (29) and (30) that

$$\begin{aligned} \limsup _{k \rightarrow \infty }\langle z_t - f(z_t),\; j_q(z_t - y_{n_k}) \rangle \le \frac{1}{qt}[(1-t)^q + (qt - 1)] M^q, \end{aligned}$$
(31)

where \( M = \sup _{k \ge 1,\; t \in (0,1)} ||z_t -y_{n_k}||\). Since \(\lim _{t \rightarrow 0}\frac{1}{qt}[(1-t)^q + (qt - 1)] = 0\), \(z_t \rightarrow z = Qfz\) as \(t \rightarrow 0\) and by Proposition 1 (2) \(j_q\) is norm-to-norm uniformly continuous on bounded subsets of X, we have

$$\begin{aligned} ||j_q (z_t - y_{n_k}) - j_q (z - y_{n_k} )|| \rightarrow 0\; (as \; t \rightarrow 0). \end{aligned}$$
(32)

Observe that

$$\begin{aligned}&{|\langle z_t - f(y_{n_k}), \; j_q (z_t - y_{n_k})\rangle - \langle z - f(y_{n_k}),\; j_q (z - y_{n_k})\rangle |} \nonumber \\&\quad \le |\langle z_t - z, \; j_q (z_t - y_{n_k})\rangle | + |\langle z - f(y_{n_k}),\; j_q (z_t - y_{n_k}) - j_q (z - y_{n_k})\rangle | \nonumber \\&\quad \le ||z_t - z||\, ||z_t - y_{n_k}||^{q-1} + ||z - f(y_{n_k})||\, ||j_q (z_t - y_{n_k}) - j_q (z - y_{n_k})||.\qquad \quad \end{aligned}$$
(33)

This together with (32) shows that

$$\begin{aligned}&{\limsup _{k \rightarrow \infty }\langle z - f(y_{n_k}),\; j_q (z - y_{n_k})\rangle } \nonumber \\&\quad = \limsup _{k \rightarrow \infty } \limsup _{t \rightarrow 0} \langle z_t - f(y_{n_k}), \; j_q (z_t - y_{n_k})\rangle \nonumber \\&\quad = \limsup _{k \rightarrow \infty } \limsup _{t \rightarrow 0}\langle z_t - f(z_t) + f(z_t) - f(y_{n_k}), \; j_q (z_t - y_{n_k})\rangle \nonumber \\&\quad \le \limsup _{k \rightarrow \infty } \limsup _{t \rightarrow 0}\langle f(z_t) - f(y_{n_k}), \; j_q (z_t - y_{n_k})\rangle \quad (\hbox {by } (31)) \nonumber \\&\quad = \limsup _{k \rightarrow \infty }\langle f(z) - f(y_{n_k}), \; j_q (z - y_{n_k})\rangle \nonumber \\&\quad = 0. \end{aligned}$$
(34)

On the other hand, by (17) and (27), we see that

$$\begin{aligned} ||y_{n_{k}+1} - y_{n_k}|| \le \alpha _{n_k}||f(y_{n_k}) - y_{n_k}|| + \delta _{n_k}||T_{n_k} y_{n_k} - y_{n_k}|| \rightarrow 0 \; (as \; k \rightarrow \infty ). \end{aligned}$$
(35)

Combining (34) and (35), we get that

$$\begin{aligned} \limsup _{k \rightarrow \infty }\langle z - f(y_{n_k}),\; j_q (z - y_{n_{k}+1})\rangle \le 0. \end{aligned}$$

This implies that \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\). By Lemma 2, \(y_n \rightarrow z\) as \(n \rightarrow \infty \), and so \(x_n \rightarrow z\) as \(n \rightarrow \infty \). This completes the proof of Theorem 2. \(\square \)
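To see Theorem 2 at work, the following sketch runs iteration (16) on a toy problem (all choices illustrative and hypothetical): \(Ax = x - b\), B the subdifferential of \(\lambda ||\cdot ||_1\), contraction \(f(x) = x/2\), \(\alpha _n = 1/(n+1)\), \(\lambda _n = \delta _n = (1-\alpha _n)/2\) and summable errors \(e_n\):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

b = np.array([3.0, -0.5, 1.2])            # hypothetical data
lam, r = 1.0, 0.5                         # constant stepsize, condition (iii)
f = lambda x: 0.5 * x                     # contraction with constant xi = 0.5
x = np.array([10.0, -10.0, 10.0])         # arbitrary starting point x_1
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)                 # condition (ii)
    lam_n = delta = (1 - alpha) / 2       # alpha + lam_n + delta = 1, liminf delta > 0
    e_n = np.full(3, 1.0 / n**2)          # summable error sequence
    x = alpha * f(x) + lam_n * x + delta * soft_threshold(x - r * (x - b), r * lam) + e_n

print(x)  # approaches the unique zero [2.0, 0.0, 0.2] of A + B
```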

As is well known, if X is a real Hilbert space, then it is a uniformly convex and 2-uniformly smooth Banach space with 2-uniform smoothness coefficient \(\kappa _2 = 1\). Note also that in this case the concept of monotonicity coincides with that of accretivity. Hence from Theorem 2 we obtain the following result.

Theorem 3

Let X be a real Hilbert space, \(A : X \rightarrow X\) be an \(\alpha \)-inverse strongly monotone operator of order 2 and \(B : X \rightarrow 2^X\) be a maximal monotone operator such that \({\varGamma }:= (A + B)^{-1}(0) \not = \emptyset \). Let \(f, \{e_n\}\) and \(\{x_n\}\) be the same as in Theorem 1. If the following conditions are satisfied:

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

    \(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 \alpha \);

  4. (iv)

    \(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then \(\{x_n\}\) converges strongly to \(z = Qf(z)\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).

In Theorem 2, if \(f(x) = u\) for all \(x \in X\), where u is a fixed element of X, then we have the following result.

Theorem 4

Let \(X, q, A, B, \{e_n\}\) and \({\varGamma }\) be the same as in Theorem 2. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in X\) and

$$\begin{aligned} x_{n+1} = \alpha _n u + \lambda _n x_n + \delta _n J_{r_n}^B (x_n - r_n Ax_n) + e_n, \; n \ge 1. \end{aligned}$$
(36)

If \({\varGamma }\not = \emptyset \) and the following conditions are satisfied:

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\), and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

    \(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le (\frac{\alpha q}{\kappa _q })^{1/(q-1)}\);

  4. (iv)

    \(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then \(\{x_n\}\) converges strongly to \(z = Qu\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).

Remark 1

Theorem 2 improves the corresponding results of [3], and it also generalizes the results of [9, 13, 22, 27] from Hilbert spaces to Banach spaces.

4 Applications

In this section we shall utilize the forward–backward methods mentioned above to study monotone variational inequalities, convex minimization problem and convexly constrained linear inverse problem.

Throughout this section, let C be a nonempty closed and convex subset of a real Hilbert space H. Note that in this case the concept of monotonicity coincides with the concept of accretivity.

4.1 Application to Monotone Variational Inequality Problems

A monotone variational inequality problem (VIP) is formulated as the problem of finding a point \(x^* \in C\) such that:

$$\begin{aligned} \langle Ax^*, y -x^*\rangle \ge 0 \quad \forall y \in C, \end{aligned}$$
(37)

where \(A : C \rightarrow H\) is a nonlinear monotone operator. We denote by \({\varGamma }\) the solution set of (37) and assume \({\varGamma }\not = \emptyset \). In Example 4, we pointed out that VI(C, A) (37) is equivalent to finding a point \(x^*\) such that

$$\begin{aligned} 0 \in (A + B) x^*, \end{aligned}$$
(38)

where B is the subdifferential of the indicator function of C, which is a maximal monotone operator. By [16, Theorem 3], in this case the resolvent of B is nothing but the projection operator \(P_C\). Therefore the following result can be obtained from Theorem 3 immediately.

Corollary 1

Let \(A : C \rightarrow H\) be an \(\alpha \)-inverse strongly monotone operator of order 2 and let \(f, \{e_n\}\) be the same as in Theorem 1. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in C\) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n P_C (x_n - r_n Ax_n) + e_n, n \ge 1. \end{aligned}$$
(39)

If the following conditions are satisfied:

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

    \(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 \alpha \);

  4. (iv)

    \(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then \(\{x_n\}\) converges strongly to a solution z of monotone variational inequality (37).
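A numeric sketch of scheme (39), with hypothetical data: \(C = [0,1]^3\) and \(Ax = x - b\) (1-inverse strongly monotone), so the unique solution of VI(C, A) is \(P_C(b)\):

```python
import numpy as np

b = np.array([1.5, -0.3, 0.4])             # hypothetical data
P_C = lambda x: np.clip(x, 0.0, 1.0)       # projection onto C = [0,1]^3
f = lambda x: 0.2 * x                      # contraction
r = 1.0                                    # within (0, 2*alpha] = (0, 2]
x = np.zeros(3)
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)
    lam_n = delta = (1 - alpha) / 2
    x = alpha * f(x) + lam_n * x + delta * P_C(x - r * (x - b))

print(x)  # close to the VI solution P_C(b) = [1.0, 0.0, 0.4]

# sanity check of (37) at x* = P_C(b) for several points y in C
x_star = P_C(b)
for y in [(0, 0, 0), (1, 1, 1), (1, 0, 0), (0, 1, 1)]:
    assert np.dot(x_star - b, np.array(y, dtype=float) - x_star) >= -1e-9
```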

4.2 Application to the Convex Minimization Problems

Let \(\psi : H \rightarrow R\) be a convex smooth function and \(\phi : H \rightarrow (-\infty , \infty ]\) be a proper convex and lower-semicontinuous function. We consider the following convex minimization problem: find \(x^* \in H\) such that

$$\begin{aligned} \psi (x^*) + \phi (x^*) = \min _{x \in H} \{\psi ( x) + \phi (x)\}. \end{aligned}$$
(40)

This problem (40) is equivalent, by Fermat’s rule, to the problem of finding \(x^* \in H\) such that

$$\begin{aligned} 0 \in \nabla \psi (x^*) + \partial \phi (x^*), \end{aligned}$$
(41)

where \( \nabla \psi \) is the gradient of \(\psi \) and \(\partial \phi \) is the subdifferential of \(\phi \). Set \(A = \nabla \psi \) and \(B = \partial \phi \) in Theorem 3. If \(\nabla \psi \) is (1/L)-Lipschitz continuous, then by the Baillon–Haddad theorem it is L-inverse strongly monotone. Moreover, \(\partial \phi \) is maximal monotone. Hence from Theorem 3 we have the following result.

Theorem 5

Let \(\psi : H \rightarrow R\) be a convex and differentiable function with (1/L)-Lipschitz continuous gradient \( \nabla \psi \) and \(\phi : H \rightarrow (-\infty , \infty ]\) be a proper convex and lower-semicontinuous function such that \(\psi + \phi \) attains a minimizer. Let \(f: H \rightarrow H\) be a contraction mapping with contraction coefficient \(\xi \in (0, 1)\), and \(\{e_n\}\) be a sequence in H. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in H\) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n J_{r_n} (x_n - r_n \nabla \psi (x_n)) + e_n, \forall n \ge 1, \end{aligned}$$
(42)

where \(J_{r_n} = (I + r_n \partial \phi )^{-1}\). If the following conditions are satisfied:

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 L\);

  4. (iv)

    \(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then \(\{x_n\}\) converges strongly to a minimizer of \(\psi + \phi \).
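A sketch of iteration (42) with illustrative data: \(\psi (x) = \frac{1}{2}||Kx - y||^2\) (so \(\nabla \psi (x) = K^T(Kx - y)\), with Lipschitz constant \(||K||^2\)) and \(\phi = \mu ||\cdot ||_1\), whose resolvent \(J_r\) is soft-thresholding; the run is checked against the first-order optimality condition of (40):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

K = np.array([[1.0, 0.2], [0.0, 1.0], [0.3, 0.0]])   # hypothetical data
y = np.array([1.0, -2.0, 0.5])
mu = 0.1
Lip = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of grad psi
r = 1.0 / Lip
f = lambda x: 0.3 * x                      # contraction
x = np.zeros(2)
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)
    lam_n = delta = (1 - alpha) / 2
    grad = K.T @ (K @ x - y)
    x = alpha * f(x) + lam_n * x + delta * soft_threshold(x - r * grad, r * mu)

# optimality for (40): 0 in grad psi(x) + mu * subdiff ||x||_1,
# which forces |grad psi(x)_i| <= mu componentwise
g = K.T @ (K @ x - y)
print(np.all(np.abs(g) <= mu + 1e-2))
```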

4.3 Application to the Convexly Constrained Linear Inverse Problem

Let \(H_1\) be another real Hilbert space, \(K : H \rightarrow H_1\) be a bounded linear operator and \(b \in H_1\). The constrained linear system

$$\begin{aligned} Kx = b,\; x \in C \end{aligned}$$
(43)

is called the convexly constrained linear inverse problem. Define \(\psi : H \rightarrow R^+\) by

$$\begin{aligned} \psi (x) = \frac{1}{2}||Kx - b||^2,\; x \in H. \end{aligned}$$
(44)

We have \(\nabla \psi (x) =K^*(Kx - b)\), and \(\nabla \psi \) is L-Lipschitzian with \(L= ||K||^2 \); hence \(\nabla \psi \) is 1/L-inverse strongly monotone. It is easy to see that, when (43) is consistent, \(x^* \in C\) is a solution of (43) if and only if \(x^*\) minimizes \(\psi \) over C, i.e., \(0 \in \nabla \psi (x^*) + \partial i_C(x^*)\), where \(i_C\) is the indicator function of C. Taking \(A= \nabla \psi \) and \(B= \partial i_C\) (whose resolvent is \(P_C\), as in Sect. 4.1) in Theorem 3, we have the following result.

Theorem 6

If problem (43) is consistent and the following conditions are satisfied

  1. (i)

    \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);

  2. (ii)

    \(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);

  3. (iii)

    \(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 / L\);

  4. (iv)

    \(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)

then for any given contractive mapping \(f: H \rightarrow C\), the sequence \(\{x_n\}\) generated by \(x_1 \in H\) and

$$\begin{aligned} x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n P_C (x_n - r_n K^*(K x_n -b)), \forall n \ge 1, \end{aligned}$$
(45)

converges strongly to a solution of problem (43).
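Finally, a numeric sketch of iteration (45) with hypothetical data: C = the nonnegative orthant of \(R^2\) and a consistent system \(Kx = b\) whose solution lies in C (the constant map f used here is a contraction into C):

```python
import numpy as np

K = np.array([[1.0, 1.0], [0.0, 1.0]])     # hypothetical data
x_true = np.array([0.5, 0.25])             # nonnegative solution, so (43) is consistent
b = K @ x_true
L = np.linalg.norm(K, 2) ** 2
r = 1.0 / L                                # within (0, 2/L]
P_C = lambda x: np.maximum(x, 0.0)         # projection onto the nonnegative orthant
f = lambda x: np.full(2, 0.1)              # constant (hence contraction) map into C
x = np.zeros(2)
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)
    lam_n = delta = (1 - alpha) / 2
    x = alpha * f(x) + lam_n * x + delta * P_C(x - r * K.T @ (K @ x - b))

print(np.linalg.norm(K @ x - b))  # small residual; x remains nonnegative
```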