Abstract
Splitting methods have recently received much attention because many nonlinear problems arising in applied areas such as image recovery, signal processing and machine learning are mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Most investigations of splitting methods, however, are carried out in the framework of Hilbert spaces. In this paper, we consider these methods in the setting of Banach spaces. We introduce a viscosity iterative forward–backward splitting method with errors to find zeros of the sum of two accretive operators in Banach spaces, and we prove the strong convergence of the method under mild conditions. We also discuss applications of these methods to monotone variational inequalities, the convex minimization problem and the convexly constrained linear inverse problem.
1 Introduction
Let X be a real Banach space. We study the following zero point problem: find \(x^* \in X\) such that
$$ 0 \in Ax^* + Bx^*, \qquad (1) $$
where \(A : X \rightarrow X\) is an operator and \(B : X \rightarrow 2^X\) is a set-valued operator. This problem includes, as special cases, convex programming, variational inequalities, the split feasibility problem and minimization problems. More precisely, several concrete problems in machine learning, image processing and linear inverse problems can be modeled mathematically in the form (1). For example:
Example 1
A stationary solution to the initial value problem of the evolution equation
$$ 0 \in \frac{du}{dt} + F(u), \quad u(0) = u_0, \qquad (2) $$
can be rewritten as (1) when the governing maximal monotone operator F is of the form \(F = A + B\).
Example 2
In optimization, one often needs to solve a minimization problem of the form
$$ \min _{x \in H}\, f(x) + g(Tx), \qquad (3) $$
where H is a real Hilbert space, and f, g are proper lower-semicontinuous and convex functions from H to \((-\infty , \infty ]\) and T is a bounded linear operator on H.
Indeed, (3) is equivalent to (1) if f and \(g\circ T\) have a common point of continuity, with \(A := \partial f\) and \(B := T^* \circ \partial g \circ T\). Here \(T^*\) is the adjoint of T, and \(\partial f\) is the subdifferential operator of f. It is known [1, 6, 19] that the minimization problem (3) is widely used in image recovery, signal processing and machine learning.
Example 3
If \(B = \partial \phi : H \rightarrow 2^H\), where \(\phi : H \rightarrow (-\infty , \infty ]\) is a proper convex and lower semicontinuous function and \(\partial \phi \) is the subdifferential of \(\phi \), then problem (1) is equivalent to finding \(x^* \in H\) such that
$$ \langle Ax^*, x - x^* \rangle + \phi (x) - \phi (x^*) \ge 0, \quad \forall x \in H, \qquad (4) $$
which is known as the mixed quasi-variational inequality.
Example 4
In Example 3, if \(\phi \) is the indicator function of C, i.e.,
$$ \phi (x) = i_C(x) = {\left\{ \begin{array}{ll} 0, &{} x \in C,\\ +\infty , &{} x \notin C, \end{array}\right. } $$
then problem (4) is equivalent to the classical variational inequality problem, denoted by VI(C; A), i.e., to find \(x^* \in C\) such that
$$ \langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \qquad (5) $$
It is easy to see that (5) is equivalent to finding a point \(x^* \in C\) such that
$$ 0 \in Ax^* + Bx^*, $$
where B is the subdifferential of the indicator of C.
A classical method for solving problem (1) is the forward–backward splitting method [6, 10, 14, 21], which is defined in the following manner: for any fixed \(x_1 \in X\) and for \(r > 0\),
$$ x_{n+1} = (I + rB)^{-1}(x_n - rAx_n), \quad n \ge 1. $$
Each step of the iteration involves only A in the forward step and only B in the backward step, but never the sum \(A + B\) itself. In fact, this method includes, in particular, the proximal point algorithm [2, 7, 17] and the gradient method.
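To make the splitting concrete, here is a small numerical sketch (ours, not from the paper) of the forward–backward iteration in \(H = R^d\) with \(A = \nabla (\frac{1}{2}||Kx - b||^2)\) and \(B = \partial (\lambda ||\cdot ||_1)\), whose resolvent \((I + rB)^{-1}\) is the componentwise soft-thresholding operator; all names and parameter choices below are ours.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (I + t * d||.||_1)^{-1}: the backward step for the l1 subdifferential
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(K, b, lam, r, n_iter=500):
    # x_{n+1} = (I + rB)^{-1}(x_n - r A x_n), with A x = K^T(Kx - b)
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ x - b)                   # forward step: uses A only
        x = soft_threshold(x - r * grad, r * lam)  # backward step: uses B only
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 5))
b = K @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
r = 1.0 / np.linalg.norm(K, 2) ** 2                # step size below 2/L, L = ||K||^2
x_star = forward_backward(K, b, lam=0.1, r=r)
```

At a limit point, \(x = (I + rB)^{-1}(x - rAx)\), i.e., x is a zero of \(A + B\), which is exactly the fixed-point characterization exploited throughout the paper.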
In 2012, Takahashi et al. [20] proved some strong convergence theorems for a Halpern-type iteration in a Hilbert space H, which is defined in the following manner: for any \(x_1 \in H\),
where \(u \in H\) is a fixed element, A is an \(\alpha \)-inverse strongly monotone mapping on H and B is a maximal monotone operator on H. Under suitable conditions, they proved that the sequence \(\{x_n\}\) generated by (7) converges strongly to a zero point of \(A + B\).
Recently, López et al. [11] introduced the following Halpern-type forward–backward method: for any \(x_1 \in X\),
where \(u \in X\) is fixed, A is an \(\alpha \)-inverse strongly accretive mapping on X, B is an m-accretive operator on X, \(\{r_n\} \subset (0, \infty )\), \(\{\alpha _n\}\subset (0, 1]\), and \(\{a_n\}, \{b_n\}\) are error sequences in X. They proved that the sequence \(\{x_n\}\) generated by (8) converges strongly to a zero point of the sum of A and B under appropriate conditions.
Very recently, many works have concerned the problem of finding zero points of the sum of two monotone operators (in Hilbert spaces) or two accretive operators (in Banach spaces). For more details, see, e.g., [5, 18, 20, 21, 23,24,25,26] and the references therein.
In this paper, we introduce a viscosity iterative forward–backward splitting method with errors to find zeros of the sum of two accretive operators in the setting of Banach spaces. We prove the strong convergence of the method under mild conditions and discuss applications to variational inequalities, the convex minimization problem and the convexly constrained linear inverse problem.
2 Preliminaries
In order to prove the main results of the paper, we need the following basic concepts, notations and lemmas.
In what follows, we always assume that X is a uniformly convex and q-uniformly smooth Banach space for some \(q \in (1, 2]\) (for the definitions and properties, see, for example, [4]).
Recall that the generalized duality mapping \(J_q : X \rightarrow 2^{X^*}\) is defined by
$$ J_q(x) = \{ j \in X^* : \langle x, j \rangle = ||x||^q, \; ||j|| = ||x||^{q-1} \}, $$
and the following subdifferential inequality holds: for any \(x, y\in X\),
$$ ||x + y||^q \le ||x||^q + q\langle y, j_q(x + y)\rangle , \quad j_q(x + y) \in J_q(x + y). \qquad (9) $$
Recall [11] that if X is q-uniformly smooth, \(q \in (1, 2]\), then there exists a constant \(\kappa _q > 0\) such that
$$ ||x + y||^q \le ||x||^q + q\langle y, J_q(x)\rangle + \kappa _q ||y||^q, \quad \forall x, y \in X. \qquad (10) $$
The best constant \(\kappa _q\) satisfying (10) is called the q-uniform smoothness coefficient of X.
Proposition 1
([4]). Let \(1 < q \le 2\). Then the following conclusions hold:
-
(1)
The Banach space X is smooth if and only if the duality mapping \(J_q\) is single-valued.
-
(2)
The Banach space X is uniformly smooth if and only if the duality mapping \(J_q\) is single-valued and norm-to-norm uniformly continuous on bounded subsets of X.
Recall that a set-valued operator \(A : X \rightarrow 2^X\) with domain D(A) and range R(A) is said to be accretive if, for each \(x, y \in D(A)\), there exists \(j(x - y) \in J(x - y)\) such that
$$ \langle u - v, j(x - y)\rangle \ge 0, \quad \forall u \in Ax, \; v \in Ay. $$
An accretive operator A is said to be m-accretive if the range \(R(I + \lambda A) = X, \; \forall \lambda > 0\).
For any \(\alpha > 0\) and \(q \in (1, 2]\), we say that an accretive operator A is \(\alpha \)-inverse strongly accretive (shortly, \(\alpha \)-isa) of order q if, for each \(x, y \in D(A)\), there exists \(j_q(x - y) \in J_q(x - y)\) such that
$$ \langle Ax - Ay, j_q(x - y)\rangle \ge \alpha ||Ax - Ay||^q. $$
Let C be a nonempty closed convex subset of a real Banach space X and K a nonempty subset of C. A mapping \(T: C \rightarrow K\) is called a retraction of C onto K if \(Tx = x\) for all \(x \in K\). We say that T is sunny if, for each \(x \in C\) and \(t \ge 0\),
$$ T(tx + (1 - t)Tx) = Tx $$
whenever \(tx + (1 - t)Tx \in C\). A sunny nonexpansive retraction is a sunny retraction which is also nonexpansive.
Proposition 2
([15, 28]). Let X be a uniformly smooth Banach space, \(T : C \rightarrow C\) be a nonexpansive mapping with a fixed point and \(f: C \rightarrow C\) be a contraction mapping. For each \(t \in (0, 1)\) the unique fixed point \(x_t \in C\) of the contractive mapping, \(tf + (1-t)T: C \rightarrow C\), converges strongly as \(t \rightarrow 0\) to the unique fixed point z of T with \(z = Q f(z)\), where \(Q : C \rightarrow Fix(T)\) is the unique sunny nonexpansive retraction from C onto Fix(T).
Lemma 1
([12, Lemma 3.1]). Let \(\{a_n\}\), \(\{c_n\} \subset R^+\), \(\{\alpha _n\} \subset (0, 1)\) and \(\{b_n\} \subset R\) be sequences such that
$$ a_{n+1} \le (1 - \alpha _n)a_n + b_n + c_n, \quad \forall n \ge 1. $$
Assume that \( \sum _{n=1}^\infty c_n < \infty \). Then the following results hold:
-
(1)
If \(b_n \le \alpha _n M\), where \(M \ge 0\), then \(\{a_n\}\) is bounded.
-
(2)
If \(\sum _{n =1}^\infty \alpha _n = \infty \) and \(\limsup _{n \rightarrow \infty } \frac{b_n}{\alpha _n} \le 0\), then \(\lim _{n \rightarrow \infty } a_n = 0\).
Lemma 2
([8]). Let \(\{s_n\}\) be a sequence of nonnegative real numbers such that
$$ s_{n+1} \le (1 - \gamma _n)s_n + \gamma _n \tau _n, \quad n \ge 1, $$
and
$$ s_{n+1} \le s_n - \eta _n + \rho _n, \quad n \ge 1, $$
where \(\{\gamma _n\}\) is a sequence in (0, 1), \(\{\eta _n\}\) is a sequence of nonnegative real numbers, \(\{\tau _n\}\) and \(\{\rho _n\}\) are real sequences such that
-
(a)
\(\sum _{n=1}^\infty \gamma _n = \infty \);
-
(b)
\(\lim _{n \rightarrow \infty }\rho _n = 0\);
-
(c)
\(\lim _{k \rightarrow \infty } \eta _{n_k} = 0\) implies \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\) for any subsequence \(\{n_k\} \subset \{n\}\).
Then \(\lim _{n \rightarrow \infty } s_n = 0\).
It is easy to prove that the following conclusion holds.
Lemma 3
For any \(r > 0\), if
$$ T_r := J_r^B(I - rA) = (I + rB)^{-1}(I - rA), $$
then \(Fix(T_r) = (A + B)^{-1}(0)\).
Lemma 4
([11, Lemma 3.2]). For any \(s \in (0, r]\) and \(x \in X\), we have
$$ ||x - T_s x|| \le 2||x - T_r x||. $$
Lemma 5
([11, Lemma 3.3]). Let X be a uniformly convex and q-uniformly smooth Banach space with \(q \in (1, 2]\). Assume that A is a single-valued \(\alpha \)-isa of order q on X. Then, for any \(r > 0\), there exists a continuous, strictly increasing and convex function \(\phi _q : R^+ \rightarrow R^+\) with \(\phi _q(0) = 0\) such that for all \(x, y \in B_r\),
$$ ||T_r x - T_r y||^q \le ||x - y||^q - r(\alpha q - r^{q-1}\kappa _q)||Ax - Ay||^q - \phi _q\bigl (||(I - J_r^B)(I - rA)x - (I - J_r^B)(I - rA)y||\bigr ), $$
where \(\kappa _q\) is the q-uniform smoothness coefficient of X.
It is easy to prove that the following inequality holds.
Proposition 3
Let \( 1 < q \le 2\) and let X be a real smooth Banach space with the generalized duality mapping \(j_q\). Let m be a fixed positive integer. Let \(\{x_i\}_{i=1}^m \subset X\) and \(t_i \ge 0\) for all \(i = 1, 2, ..., m\) with \(\sum _{i=1}^m t_i \le 1\). Then we have
3 Main Results
We are now in a position to give the following main results.
Theorem 1
Let X be a uniformly convex and q-uniformly smooth Banach space, \(q\in (1, 2]\). Let \(A : X \rightarrow X\) be an \(\alpha \)-isa of order q and \(B : X \rightarrow 2^X\) be an m-accretive operator such that \({\varGamma }:= (A + B)^{-1}(0) \not = \emptyset \). Let \(\{e_n\}\) be a sequence in X and \(f: X\rightarrow X\) be a contractive mapping with contractive constant \(\xi \in (0, 1)\). Let \(\{x_n\}\) be a sequence generated by \(x_1 \in X\) and
where \(J_{r_n}^B = (I + r_n B)^{-1}\), \(\kappa _q\) is the q-uniform smoothness coefficient of X, \(0 < r_n \le (\frac{\alpha q}{\kappa _q})^{1/(q-1)}\) and \(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\). If \(\sum _{n=1}^\infty ||e_n|| < \infty \), or \(\lim _{n\rightarrow \infty }||e_n||/\alpha _n = 0\), then \(\{x_n\}\) is bounded.
Proof
For each \(n \ge 1\), we put \(T_n = J_{r_n}^B (I - r_n A)\) and let the sequence \(y_{n}\) be defined by
By the condition \(0 < r_n \le ( \frac{\alpha q}{\kappa _q} )^{1/(q-1)}\) and Lemma 5, we know that \(T_n \) is a nonexpansive mapping. Hence we have
By Lemma 1 (2), we conclude that \(\lim _{n \rightarrow \infty } ||x_n - y_n|| = 0\).
Next we show that \(\{y_n\}\) is bounded. Indeed, let \(z \in {\varGamma }\). By Lemma 3 this implies that \(z\in (A + B)^{-1}(0) = Fix(T_n), \forall n \ge 1\). Hence we have
By Lemma 1 (1), \(\{y_n\}\) is bounded, so is \(\{x_n\}\). This completes the proof of Theorem 1. \(\square \)
Theorem 2
Let \(X, A, B, f, q, \kappa _q, \{e_n\}, {\varGamma }\) and \(\{x_n\}\) be the same as in Theorem 1. If \({\varGamma }\not = \emptyset \) and the following conditions are satisfied:
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le (\alpha q/\kappa _q )^{1/(q-1)}\);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then \(\{x_n\}\) converges strongly to \(z = Qf(z)\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).
Proof
In Theorem 1 we have proved that \(\lim _{n \rightarrow \infty }||x_n - y_n|| = 0\). In order to prove the conclusion, it suffices to show that \(\lim _{n \rightarrow \infty } y_n = z = Qf(z)\). In fact, from (9), we have
Since \(z = Qf(z)\in {\varGamma }= Fix(T_n), \; \forall n \ge 1\), from Proposition 3 and Lemma 5 we have
Substituting (20) into (19) we have
Since \(\alpha q -r_n^{q-1}\kappa _q > 0\), we have
and
For each \(n \ge 1\), let
Then (22) and (23) can be written as:
and
Since \(\alpha _n \in (0, 1)\), \(\alpha _n \rightarrow 0\) and \(\sum _{n = 1}^\infty \alpha _n = \infty \), it follows that \(\gamma _n \in (0, 1)\), \(\sum _{n=1}^\infty \gamma _n = \infty \) and \(\lim _{n \rightarrow \infty }\rho _n = 0\). In order to prove \(s_n \rightarrow 0\), by Lemma 2 it is sufficient to prove that for any subsequence \(\{n_k\} \subset \{n\}\), if \(\lim _{k \rightarrow \infty } \eta _{n_k} = 0\), then \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\).
Indeed, if \(\{n_k\}\) is a subsequence of \(\{n\}\) such that \(\lim _{k \rightarrow \infty }\eta _{n_k} = 0\), then by the assumptions and the properties of \(\phi _q\), we can deduce that
This implies, by the triangle inequality, that
Since \(\liminf _{n\rightarrow \infty } r_n > 0\), there exists \(r > 0\) such that \(r_n \ge r\) for all sufficiently large n; in particular, \(r_{n_k} \ge r\) for all large k. It follows from Lemma 4 and (27) that
which implies that
Put
By Proposition 2, \(z_t\) converges strongly as \(t \rightarrow 0\) to the unique fixed point \(z= Q f(z) \in Fix(T_r) = (A + B)^{-1}(0)\), where \(Q : X \rightarrow Fix(T_r)\) is the unique sunny nonexpansive retraction from X onto \(Fix(T_r) = (A + B)^{-1}(0)\). So we obtain
After simplifying we have
It follows from (29) and (30) that
where \( M = \sup _{k \ge 1,\; t \in (0,1)} ||z_t -y_{n_k}||\). Since \(\lim _{t \rightarrow 0}\frac{1}{qt}[(1-t)^q + (qt - 1)] = 0\), \(z_t \rightarrow z = Qf(z)\) as \(t \rightarrow 0\), and since by Proposition 1 (2) \(j_q\) is norm-to-norm uniformly continuous on bounded subsets of X, we have
Observe that
This together with (32) shows that
On the other hand, by (17) and (27), we see that
Combining (34) and (35), we get that
This implies that \(\limsup _{k \rightarrow \infty } \tau _{n_k} \le 0\). By Lemma 2, \(y_n \rightarrow z\) as \(n \rightarrow \infty \), and hence \(x_n \rightarrow z\) as \(n \rightarrow \infty \). This completes the proof of Theorem 2. \(\square \)
As is well known, if X is a real Hilbert space, then it is a uniformly convex and 2-uniformly smooth Banach space with 2-uniform smoothness coefficient \(\kappa _2 = 1\). Note also that in this case the concept of monotonicity coincides with the concept of accretivity. Hence from Theorem 2 we obtain the following result.
Theorem 3
Let X be a real Hilbert space, \(A : X \rightarrow X\) be an \(\alpha \)-inverse strongly monotone operator of order 2 and \(B : X \rightarrow 2^X\) be a maximal monotone operator such that \({\varGamma }:= (A + B)^{-1}(0) \not = \emptyset \). Let \(f, \{e_n\}\) and \(\{x_n\}\) be the same as in Theorem 1. If the following conditions are satisfied:
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 \alpha \);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then \(\{x_n\}\) converges strongly to \(z = Qf(z)\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).
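As a numerical sanity check of a Theorem-3-type scheme in \(H = R^d\) (this sketch is ours, not from the paper; the recursion below is our reading of the iteration, with the error terms \(e_n\) set to zero, and all data are hypothetical), take \(A = \nabla \psi \) with \(\psi (x) = \frac{1}{2}||Kx - b||^2\) and \(B = \partial (\lambda ||\cdot ||_1)\), whose resolvent is componentwise soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
lam = 0.05
L = np.linalg.norm(K, 2) ** 2          # grad psi is L-Lipschitz, hence (1/L)-ism: alpha = 1/L
r = 1.0 / L                            # fixed r_n = r satisfies 0 < r <= 2*alpha
grad = lambda x: K.T @ (K @ x - b)     # A = grad psi
prox = lambda v: np.sign(v) * np.maximum(np.abs(v) - r * lam, 0.0)  # resolvent J_r^B
f = lambda x: 0.5 * x                  # contraction with constant 0.5

x = np.zeros(8)
for n in range(1, 20001):
    a = 1.0 / (n + 1)                        # alpha_n -> 0, sum alpha_n = infinity
    l, d = 0.25 * (1 - a), 0.75 * (1 - a)    # alpha_n + lambda_n + delta_n = 1, liminf delta_n > 0
    x = a * f(x) + l * x + d * prox(x - r * grad(x))   # e_n = 0 in this sketch
```

The computed limit approximately satisfies \(x = J_r^B(x - r\nabla \psi (x))\), i.e., \(0 \in \nabla \psi (x) + \partial (\lambda ||\cdot ||_1)(x)\), as the theorem predicts.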
In Theorem 2, if \(f(x) = u\) for all \(x \in X\), where u is a fixed element of X, then from Theorem 2 we have the following result.
Theorem 4
Let \(X, q, A, B, \{e_n\}\) and \({\varGamma }\) be the same as in Theorem 2. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in X\) and
If \({\varGamma }\not = \emptyset \) and the following conditions are satisfied:
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\), and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le (\frac{\alpha q}{\kappa _q })^{1/(q-1)}\);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then \(\{x_n\}\) converges strongly to \(z = Qu\), where Q is a sunny nonexpansive retraction of X onto \({\varGamma }\).
Remark 1
Theorem 2 is an improvement of [3], and it is also a generalization of [9, 13, 22, 27] from Hilbert spaces to Banach spaces.
4 Applications
In this section we shall utilize the forward–backward methods mentioned above to study monotone variational inequalities, convex minimization problem and convexly constrained linear inverse problem.
Throughout this section, let C be a nonempty closed and convex subset of a real Hilbert space H. Note that in this case the concept of monotonicity coincides with the concept of accretivity.
4.1 Application to Monotone Variational Inequality Problems
A monotone variational inequality problem (VIP) is formulated as the problem of finding a point \(x^* \in C\) such that
$$ \langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C, \qquad (37) $$
where \(A : C \rightarrow H\) is a nonlinear monotone operator. We denote by \({\varGamma }\) the solution set of (37) and assume \({\varGamma }\not = \emptyset \). In Example 4, we pointed out that VI(C; A) (37) is equivalent to finding a point \(x^*\) such that
$$ 0 \in Ax^* + Bx^*, $$
where \(B: C \rightarrow H\) is the subdifferential of the indicator of C, which is a maximal monotone operator. By [16, Theorem 3], in this case the resolvent of B is nothing but the projection operator \(P_C\). Therefore the following result is obtained from Theorem 3 immediately.
Corollary 1
Let \(A : C \rightarrow H\) be an \(\alpha \)-inverse strongly monotone operator of order 2 and let \(f, \{e_n\}\) be the same as in Theorem 1. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in C\) and
If the following conditions are satisfied:
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 \alpha \);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then \(\{x_n\}\) converges strongly to a solution z of monotone variational inequality (37).
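Corollary 1 can be illustrated numerically (our own sketch; we read the recursion of Corollary 1 as \(x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n P_C(x_n - r_n Ax_n) + e_n\) with \(e_n = 0\), and all concrete choices below are ours). Take \(C = [0,1]^d\) and an affine monotone operator \(Ax = Mx + q\) with M symmetric positive definite, so that A is \((1/||M||)\)-inverse strongly monotone:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((6, 6))
M = S.T @ S + np.eye(6)                    # symmetric positive definite
q = rng.standard_normal(6)
A = lambda x: M @ x + q                    # monotone, (1/||M||)-inverse strongly monotone
alpha = 1.0 / np.linalg.norm(M, 2)
r = alpha                                  # fixed r_n = r in (0, 2*alpha]
P_C = lambda v: np.clip(v, 0.0, 1.0)       # projection onto C = [0,1]^6 (resolvent of B)
f = lambda x: 0.5 * x + 0.2                # contraction mapping C into C

x = np.full(6, 0.5)
for n in range(1, 20001):
    a = 1.0 / (n + 1)                      # alpha_n -> 0, sum alpha_n = infinity
    l, d = 0.25 * (1 - a), 0.75 * (1 - a)  # alpha_n + lambda_n + delta_n = 1
    x = a * f(x) + l * x + d * P_C(x - r * A(x))   # e_n = 0 in this sketch
```

The limit satisfies \(x^* = P_C(x^* - rAx^*)\), which is exactly the fixed-point characterization of a solution of VI(C; A).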
4.2 Application to the Convex Minimization Problems
Let \(\psi : H \rightarrow R\) be a convex smooth function and \(\phi : H \rightarrow R\) be a proper convex and lower-semicontinuous function. We consider the convex minimization problem of finding \(x^* \in H\) such that
$$ \psi (x^*) + \phi (x^*) = \min _{x \in H}\{\psi (x) + \phi (x)\}. \qquad (40) $$
By Fermat’s rule, problem (40) is equivalent to the problem of finding \(x^* \in H\) such that
$$ 0 \in \nabla \psi (x^*) + \partial \phi (x^*), $$
where \( \nabla \psi \) is the gradient of \(\psi \) and \(\partial \phi \) is the subdifferential of \(\phi \). Set \(A = \nabla \psi \) and \(B = \partial \phi \) in Theorem 3. If \(\nabla \psi \) is (1/L)-Lipschitz continuous, then it is L-inverse strongly monotone; moreover, \(\partial \phi \) is maximal monotone. Hence from Theorem 3 we have the following result.
Theorem 5
Let \(\psi : H \rightarrow R\) be a convex and differentiable function with (1/L)-Lipschitz continuous gradient \( \nabla \psi \) and \(\phi : H \rightarrow R\) be a proper convex and lower-semicontinuous function such that \(\psi + \phi \) attains a minimizer. Let \(f: H \rightarrow H\) be a contractive mapping with a contractive coefficient \(\xi \in (0, 1)\), and \(\{e_n\}\) be a sequence in H. Let \(\{x_n\}\) be the sequence generated by \(x_1 \in H\) and
where \(J_{r_n} = (I + r_n \partial \phi )^{-1}\). If the following conditions are satisfied:
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2L\);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then \(\{x_n\}\) converges strongly to a minimizer of \(\psi + \phi \).
4.3 Application to the Convexly Constrained Linear Inverse Problem
Let \(K : H \rightarrow C\) be a bounded linear operator and \(b \in C\). The constrained linear system
$$ Kx = b, \quad x \in C, \qquad (43) $$
is called the convexly constrained linear inverse problem. Define \(\psi : H \rightarrow R^+\) by
$$ \psi (x) = \frac{1}{2}||Kx - b||^2. $$
We have \(\nabla \psi (x) = K^*(Kx - b)\), and \(\nabla \psi \) is L-Lipschitzian with \(L= ||K||^2 \); hence \(\nabla \psi \) is (1/L)-inverse strongly monotone. It is easy to see that \(x^* \in C\) is a solution of (43) if and only if \( 0 = \nabla \psi (x^*) = K^*(Kx^* - b)\). Taking \(A= \nabla \psi \) and \(B= 0\) in Theorem 3, we have the following result.
Theorem 6
If problem (43) is consistent and the following conditions are satisfied
-
(i)
\(\{\alpha _n\}\), \(\{\lambda _n\}\) and \(\{\delta _n\}\) are sequences in [0, 1] with \(\alpha _n + \lambda _n + \delta _n = 1\);
-
(ii)
\(\lim _{n \rightarrow \infty }\alpha _n = 0\) and \(\sum _{n =1}^\infty \alpha _n = \infty \);
-
(iii)
\(0 <\liminf _{n \rightarrow \infty } r_n \le \limsup _{n \rightarrow \infty } r_n\le 2 / L\);
-
(iv)
\(\liminf _{n \rightarrow \infty } \delta _n > 0\), and \(\sum _{n =1}^\infty ||e_n|| < \infty \) or \(\lim _{n \rightarrow \infty }||e_n||/\alpha _n = 0,\)
then for any given contractive mapping \(f: H \rightarrow C\), the sequence \(\{x_n\}\) generated by \(x_1 \in H\) and
converges strongly to a solution of problem (43).
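As a final numerical sketch (ours, not from the paper; with \(B = 0\) the resolvent is the identity, and we read the recursion of Theorem 6 as \(x_{n+1} = \alpha _n f(x_n) + \lambda _n x_n + \delta _n(x_n - r_n K^*(Kx_n - b)) + e_n\), taking \(e_n = 0\); the data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal((15, 4))           # tall matrix, full column rank (generically)
x_true = np.array([0.5, 1.0, 0.0, 2.0])    # a solution lying in C = nonnegative orthant
b = K @ x_true                             # consistent system Kx = b
L = np.linalg.norm(K, 2) ** 2              # grad psi is L-Lipschitz, hence (1/L)-ism
r = 1.0 / L                                # fixed r_n = r in (0, 2/L]
grad = lambda x: K.T @ (K @ x - b)         # grad psi, psi(x) = 0.5 ||Kx - b||^2
f = lambda x: np.maximum(0.5 * x, 0.0)     # contraction mapping H into C

x = np.zeros(4)
for n in range(1, 20001):
    a = 1.0 / (n + 1)                      # alpha_n -> 0, sum alpha_n = infinity
    l, d = 0.25 * (1 - a), 0.75 * (1 - a)  # alpha_n + lambda_n + delta_n = 1
    x = a * f(x) + l * x + d * (x - r * grad(x))   # B = 0, so the resolvent is I; e_n = 0
```

Since K has full column rank, \((A + B)^{-1}(0)\) is the singleton containing the exact solution, and the iterates approach a point with \(Kx \approx b\).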
References
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Chen, G.H.G., Rockafellar, R.T.: Convergence rates in forward–backward splitting. SIAM J. Optim. 7, 421–444 (1997)
Cholamjiak, P.: A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms doi:10.1007/s11075-015-0030-6
Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Kluwer Academic Publishers, Dordrecht (1990)
Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 8 (2013)
Kamimura, S., Takahashi, W.: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226–240 (2000)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Forward–Backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 25 (2012)
Maingé, P.E.: Approximation method for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)
Marino, G., Xu, H.K.: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 3, 791–808 (2004)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979)
Reich, S.: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 75, 287–292 (1980)
Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Saewan, S., Kumam, P., Cho, Y.J.: Strong convergence for maximal monotone operators, relatively quasi- nonexpansive mappings, variational inequalities and equilibrium problems. J. Glob. Optim. 57, 1299–1318 (2013)
Sra, S., Nowozin, S., Wright, S.J. (eds): Optimization for Machine Learning. Neural Information Processing series. The MIT Press, Cambridge, MA (2011)
Takahashi, W., Wong, N.C., Yao, J.C.: Two generalized strong convergence theorems of Halpern’s type in Hilbert spaces and applications. Taiwan. J. Math. 16, 1151–1172 (2012)
Tseng, P.: A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Wang, F., Cui, H.: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 54, 485–491 (2012)
Yao, Y., Liou, Y.C., Yao, J.C.: Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 127, 19 (2015). doi:10.1186/s13663-015-0376-4
Yao, Y., Liou, Y.C., Yao, J.C.: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, Article ID 64363, 12 (2007). doi:10.1155/2007/64363
Yao, Y., Chen, R., Yao, J.C.: Strong convergence and certain control conditions for modified Mann iteration. Nonlinear Anal. 68, 1687–1693 (2008)
Yao, Y., Liou, Y.C., Yao, J.C.: Finding the minimum norm common element of maximal monotone operators and nonexpansive mappings without involving projection. J. Nonlinear Convex Anal. 16, 835–854 (2015)
Yao, Y., Noor, M.A.: On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 217, 46–55 (2008)
Zhou, H.Y.: Iterative Methods of Fixed Points and Zeros with Applications. National Defense Industry Press, Beijing (2016)
Acknowledgements
This study was supported by the Natural Science Foundation of China Medical University, Taichung, Taiwan, and the grant from the Research Center for Nonlinear Analysis and Optimization, Kaohsiung Medical University, Kaohsiung, Taiwan.
Communicated by Mohammad Sal Moslehian.
Chang, SS., Wen, CF. & Yao, JC. Zero Point Problem of Accretive Operators in Banach Spaces. Bull. Malays. Math. Sci. Soc. 42, 105–118 (2019). https://doi.org/10.1007/s40840-017-0470-3