Abstract
In this work, we study the split feasibility problem (SFP) in the framework of p-uniformly convex and uniformly smooth Banach spaces. We propose an iterative scheme with inertial terms for seeking a solution of the SFP and then prove a strong convergence theorem for the sequences generated by our iterative scheme under implementable conditions on the step size which do not require the computation of the norm of the bounded linear operator. We finally provide some numerical examples, which involve image restoration problems, and demonstrate the efficiency of the proposed algorithm. The result obtained in this paper complements many recent results in this direction and seems to be the first to investigate the SFP outside Hilbert spaces using the inertial technique.
1 Introduction
Let \(E_1\) and \(E_2\) be real Banach spaces. Let C and Q be nonempty, closed and convex subsets of \(E_1\) and \(E_2\), respectively. Let \(A : E_1\rightarrow E_2\) be a bounded linear operator with adjoint operator \(A^*:E_2^*\rightarrow E_1^*\) defined by \(\left\langle A^*{\bar{x}},y\right\rangle =\left\langle {\bar{x}},Ay\right\rangle \) for all \({\bar{x}}\in E_2^*\) and \(y\in E_1\),
and the equalities \(||A^*||=||A||\) and \({\mathcal {N}}(A^*) = {\mathcal {R}}(A)^\bot \) are valid, where \({\mathcal {R}}(A)^\bot :=\{x^*\in E_2^*:\left\langle x^*,u\right\rangle =0,~~\forall u \in {\mathcal {R}}(A)\}\). For more details on bounded linear operators and their duals, please see [22, 56, 57].
We shall adopt the following notations in this paper:
- \(x_n\rightarrow x\) means that \(x_n\rightarrow x\) strongly;
- \(x_n\rightharpoonup x\) means that \(x_n\rightarrow x\) weakly;
- \(\omega _w(x_n):= \{x: \exists x_{n_j}\rightharpoonup x\}\) is the weak w-limit set of the sequence \(\{x_n\}\).
The split feasibility problem (SFP) is the problem of finding a point \(x\in C\) such that
\(Ax\in Q. \qquad (1)\)
We denote the solution set by \(\Omega :=C \cap A^{-1}(Q) = \{y\in C : Ay \in Q\}\). This problem was first introduced by Censor and Elfving [15], in a finite-dimensional Hilbert space, for solving inverse problems in the context of phase retrieval, medical image reconstruction and the modeling of intensity-modulated radiation therapy. Many authors have introduced iterative methods for finding a solution to (1), both in Hilbert spaces and in certain Banach spaces (see, for example, [25, 38, 41, 50, 52, 53, 58, 59]). Censor et al. in Section 2 of [11] (see also [25]) introduced the prototypical Split Inverse Problem (SIP), in which one is given two vector spaces X and Y, a linear operator \(A: X\rightarrow Y\), and two inverse problems: the first, denoted by IP1, is formulated in the space X, and the second, denoted by IP2, is formulated in the space Y. Given these data, the Split Inverse Problem reads: find a point \(x^*\in X\) that solves IP1 and such that the point \(y^*= Ax^*\in Y\) solves IP2.
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and \(A:H_{1}\rightarrow H_{2}\) a bounded linear operator. Let C and Q be nonempty, closed and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. To solve the SFP in Hilbert spaces, Byrne [10] introduced the following CQ algorithm: \(x_{1}\in C\) and
\(x_{n+1}=P_{C}\big (x_{n}-\lambda A^{*}(I-P_{Q})Ax_{n}\big ),\quad n\ge 1, \qquad (2)\)
where \(\lambda >0\), and \(P_{C}\) and \(P_{Q}\) are the orthogonal projections onto C and Q, respectively. It was proved that the sequence \(\{x_{n}\}\) defined by (2) converges weakly to a solution of the SFP provided the step size \(\lambda \in (0,\frac{2}{\Vert A\Vert ^{2}})\). Observe that, in order to achieve convergence, one has to estimate the norm of the bounded linear operator \(\Vert A\Vert \) (or the spectral radius of the matrix \(A^TA\) in the finite-dimensional framework) to select the step size \(\lambda \), which is not always possible in practice (see also Theorem 2.3 of [28]). To avoid this computation, there have been worthwhile works in which convergence is guaranteed without any prior information on the matrix norm (see, for example, [60, 62,63,64]). Among these works, López et al. [34] introduced a new way to select the step size by replacing the parameter \(\lambda \) appearing in (2) by
\(\lambda _n=\frac{\rho _nf(x_n)}{\Vert \nabla f(x_n)\Vert ^{2}},\)
where \(\rho _n\in (0,4)\), \(f(x_n)=\frac{1}{2}\Vert (I-P_Q)Ax_n\Vert ^{2}\) and \(\nabla f(x_n)=A^{*}(I-P_Q)Ax_n\) for all \(n\ge 1\). This modification of the CQ method is often called the self-adaptive method. Several modifications of the CQ algorithm and the self-adaptive method have since been proposed for solving the SFP (see, for example, [4, 34, 40, 41, 58, 60]).
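To make the self-adaptive step-size rule above concrete, the following finite-dimensional sketch runs the CQ iteration (2) with \(\lambda _n=\rho _nf(x_n)/\Vert \nabla f(x_n)\Vert ^2\). The matrix, the box-shaped sets C and Q, and the iteration budget are illustrative assumptions, not data from the paper.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Orthogonal projection onto the box [lo, hi] (componentwise)."""
    return np.minimum(np.maximum(x, lo), hi)

def cq_self_adaptive(A, proj_C, proj_Q, x0, rho=2.0, n_iter=2000):
    """CQ iteration with the Lopez et al. step size (no need for ||A||):

        f(x)      = 0.5 * ||(I - P_Q) A x||^2
        grad f(x) = A^T (I - P_Q) A x
        lam_n     = rho * f(x_n) / ||grad f(x_n)||^2,   rho in (0, 4)
    """
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - proj_Q(Ax)              # (I - P_Q) A x
        f = 0.5 * np.dot(r, r)
        g = A.T @ r                      # gradient of f
        gn2 = np.dot(g, g)
        if gn2 == 0.0:                   # Ax is already in Q
            x = proj_C(x)
            continue
        x = proj_C(x - (rho * f / gn2) * g)
    return x
```

Note that the step size is computed from the current residual only, so the spectral norm of A is never needed.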
In optimization theory, to speed up the convergence rate, Polyak [47] first proposed the heavy-ball method, a two-step iterative method for minimizing a smooth convex function f derived from a second-order time dynamical system. To improve the convergence rate, Nesterov [44] introduced a modified heavy-ball method as follows:
\(y_n=x_{n}+\alpha _n(x_{n}-x_{n-1}),\qquad x_{n+1}=y_n-\lambda _{n}\nabla f(y_n),\)
where \(\alpha _{n}\in [0,1)\) is an extrapolation factor and \(\lambda _{n}\) is a positive sequence. Here, the inertia is represented by the term \(\alpha _n(x_{n}-x_{n-1})\). It is remarkable that the inertial methodology greatly improves the performance of the algorithm and has nice convergence properties (see [16, 19, 35]). In [3], Alvarez and Attouch applied the idea of the heavy-ball method to the setting of a general maximal monotone operator using the framework of the proximal point algorithm [9, 49]. This method, called the inertial proximal point algorithm, is of the following form:
\(x_{n+1}=(I+\lambda _{n}B)^{-1}\big (x_{n}+\alpha _{n}(x_{n}-x_{n-1})\big ), \qquad (5)\)
where B is a maximal monotone operator. It was proved that if \(\lambda _{n}\) is non-decreasing and \(\alpha _{n}\in [0, 1)\) is chosen such that
\(\sum _{n=1}^{\infty }\alpha _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}<\infty ,\)
then \(\{x_{n}\}\) generated by (5) converges to a zero point of B (see also [42]).
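For a concrete illustration of (5), one may take \(B(x)=Mx\) with M positive definite, so the unique zero of B is the origin and the resolvent \((I+\lambda B)^{-1}\) amounts to solving a linear system. The matrix, the constant inertial factor and the parameter values below are illustrative assumptions only.

```python
import numpy as np

def inertial_ppa(M, x0, x1, lam=1.0, alpha=0.3, n_iter=200):
    """Inertial proximal point iteration (5) for B(x) = M x:

        w_n     = x_n + alpha * (x_n - x_{n-1})   (inertial extrapolation)
        x_{n+1} = (I + lam * M)^{-1} w_n          (resolvent step)
    """
    n = len(x0)
    R = np.linalg.inv(np.eye(n) + lam * M)   # resolvent of B = M x
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(n_iter):
        w = x + alpha * (x - x_prev)
        x_prev, x = x, R @ w
    return x
```

With a constant \(\alpha _n=\alpha \), the summability condition (6) is satisfied here because the iterates decay geometrically.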
In subsequent work, Maingé [36] (see also [37]) introduced the inertial Mann algorithm for solving the fixed point problem of a nonexpansive mapping T (i.e., \(\Vert Tx-Ty\Vert \le \Vert x-y\Vert ,~~\forall x,y \in H_1\)) in Hilbert spaces as follows: take \(x_{0},x_{1}\in H_{1}\) and generate the sequence \(\{x_{n}\}\) by
\(y_n=x_{n}+\alpha _{n}(x_{n}-x_{n-1}),\qquad x_{n+1}=(1-\beta _{n})y_n+\beta _{n}Ty_n,\)
where \(\alpha _{n}\in [0,1)\) and \(\beta _{n}\in (0,1)\). It was shown that the sequence \(\{x_{n}\}\) converges weakly to a fixed point of T under the following conditions:
- (A) \(\alpha _{n}\in [0,\alpha )\) where \(\alpha \in [0,1)\);
- (B) \(\sum _{n=1}^{\infty }\alpha _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}<\infty \);
- (C) \(0<\inf _{n\ge 1}\beta _{n}\le \sup _{n\ge 1}\beta _{n}<1\).
To satisfy the summability condition (B), one needs to compute \(\{\alpha _n\}\) at each step. For further results on approximation methods with inertial terms for nonexpansive mappings, please see [6, 18].
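One common way to enforce condition (B) online is to cap \(\alpha _n\) by a summable sequence divided by \(\Vert x_n-x_{n-1}\Vert ^2\). The sketch below does this for the inertial Mann iteration; the choice of T (a projection onto a box, which is nonexpansive), the summable sequence \(1/n^2\) and all parameter values are illustrative assumptions.

```python
import numpy as np

def inertial_mann(T, x0, x1, alpha_max=0.5, beta=0.5, n_iter=300):
    """Inertial Mann iteration:

        w_n     = x_n + alpha_n * (x_n - x_{n-1})
        x_{n+1} = (1 - beta) * w_n + beta * T(w_n)

    with alpha_n = min(alpha_max, (1/n^2) / ||x_n - x_{n-1}||^2),
    so that sum_n alpha_n * ||x_n - x_{n-1}||^2 <= sum_n 1/n^2 < inf,
    i.e., condition (B) holds by construction.
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        d2 = np.dot(x - x_prev, x - x_prev)
        alpha = alpha_max if d2 == 0.0 else min(alpha_max, (1.0 / n**2) / d2)
        w = x + alpha * (x - x_prev)
        x_prev, x = x, (1.0 - beta) * w + beta * T(w)
    return x
```

Note that \(\alpha _n\) never exceeds the cap \(\alpha \in [0,1)\) of condition (A), and \(\beta _n\equiv \beta \) trivially satisfies (C).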
Subsequently, Dong et al. [19] introduced an inertial CQ algorithm by combining the CQ algorithm introduced by Nakajo and Takahashi [43] with the inertial extrapolation, and analyzed its convergence. They proved the following theorem.
Theorem 1.1
Let \(T:H\rightarrow H\) be a nonexpansive mapping such that \(F(T)\ne \emptyset \). Let
Set \(x_0, x_1 \in H \) arbitrarily. Define a sequence \(\{x_n\}\) by the following algorithm:
for each \(n\ge 0\). Then, the iterative sequence \(\{x_n\}\) converges in norm to \(P_{F(T )}(x_0)\).
Very recently, Dang et al. [16] proposed inertial relaxed CQ algorithms for solving the SFP in Hilbert spaces and proved weak convergence theorems for Picard-type and Mann-type iteration processes (see also [20]). Recently, there has been increasing interest in studying inertial extrapolation type algorithms; see, for example, the inertial forward–backward splitting method [5, 35, 36], the inertial Douglas–Rachford splitting method [6], the inertial ADMM [7, 14], and the inertial forward–backward–forward method [8]. These and other related results analyzed the convergence properties of inertial extrapolation type algorithms and demonstrated their performance numerically on some imaging and data analysis problems. Our contribution to solving the SFP in this paper lies within this recent trend.
For \(p>1\), the duality mapping \(J^E_p: E\rightarrow 2^{E^*}\) is defined by
\(J^E_p(x):=\{x^*\in E^*: \left\langle x^*,x\right\rangle =\Vert x\Vert ^p,~\Vert x^*\Vert =\Vert x\Vert ^{p-1}\}.\)
In this situation, it is known that the duality mapping \(J^E_p\) is one-to-one, single-valued and satisfies \(J^E_p = (J^E_q)^{-1}\), where \(J^E_q\) is the duality mapping of \(E^*\) (see [1, 13, 48]).
The duality mapping \(J^E_p\) is said to be weak-to-weak continuous if the implication
\(x_n\rightharpoonup x \Rightarrow \left\langle J^E_p x_n,y\right\rangle \rightarrow \left\langle J^E_p x,y\right\rangle \)
holds true for any \(y\in E\). It is worth noting that the \(\ell _p~(p > 1)\) spaces have this property, but the \(L_p~(p > 2)\) spaces do not (see [13, 33]).
Let C be a nonempty, closed and convex subset of a Banach space E. The metric projection
\(P_C(x):=\mathrm{argmin}_{y\in C}\Vert x-y\Vert ,\quad x\in E,\)
is the unique minimizer of the norm distance, which can be characterized by a variational inequality:
\(\left\langle J^E_p(x-P_C(x)),z-P_C(x)\right\rangle \le 0,\quad \forall z \in C.\)
More information on metric projections can be found, for example, in Section 3 of the book [27].
Given a Gâteaux differentiable convex function \(f: E\rightarrow {\mathbb {R}}\), the Bregman distance with respect to f is defined as:
\(\Delta _f(x,y):=f(y)-f(x)-\left\langle f'(x),y-x\right\rangle ,\quad x,y\in E.\)
It is worth noting that the duality mapping \(J^E_p\) is in fact the derivative of the function \(f_p(x) = (\frac{1}{p})\Vert x\Vert ^p\). The Bregman distance with respect to \(f_p\) is then given by (see, for example, [1, 12])
\(\Delta _p(x,y)=\frac{1}{q}\Vert x\Vert ^p-\left\langle J^E_p(x),y\right\rangle +\frac{1}{p}\Vert y\Vert ^p.\)
Given \(x,y,z\in E\), one can easily get
If E is a Hilbert space H, then for \(p=2\) we have \(f_2(x) = \frac{1}{2}\Vert x\Vert ^2\) and \(\Delta _2(x,y)=\frac{1}{2}\Vert x-y\Vert ^2\) for all \(x,y \in H\). Generally speaking, the Bregman distance is not a metric due to the absence of symmetry, but it has some distance-like properties, among them (i) \(\Delta _p(x,y) \ge 0\) and (ii) \(\Delta _p(x,x)=0\), together with the properties given in (8) and (9).
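As a concrete sanity check of these properties, the sketch below evaluates \(\Delta _p\) in the finite-dimensional space \((\mathbb {R}^n,\Vert \cdot \Vert _p)\), where the duality mapping of \(f_p\) acts coordinatewise as \((J_px)_i=|x_i|^{p-1}\,\mathrm{sign}(x_i)\). The space, the test vectors and the helper names are our illustrative assumptions.

```python
import numpy as np

def J_p(x, p):
    """Duality mapping of (R^n, ||.||_p): (J_p x)_i = |x_i|^(p-1) * sign(x_i).

    Then <J_p x, x> = ||x||_p^p and ||J_p x||_q = ||x||_p^(p-1), 1/p + 1/q = 1.
    """
    return np.sign(x) * np.abs(x) ** (p - 1)

def bregman_p(x, y, p):
    """Bregman distance w.r.t. f_p(x) = ||x||_p^p / p:

        Delta_p(x, y) = ||x||^p / q - <J_p x, y> + ||y||^p / p.
    """
    q = p / (p - 1.0)
    nx = np.sum(np.abs(x) ** p)       # ||x||_p^p
    ny = np.sum(np.abs(y) ** p)
    return nx / q - np.dot(J_p(x, p), y) + ny / p
```

For \(p=2\) this reduces to \(\frac{1}{2}\Vert x-y\Vert ^2\), recovering the Hilbert-space case above.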
Likewise, one can define the Bregman projection
\(\Pi _C(x):=\mathrm{argmin}_{y\in C}\Delta _p(x,y),\quad x\in E,\)
as the unique minimizer of the Bregman distance (see [54]). It has been shown (see, e.g., [12]) that the Bregman projection (the terminology is due to Censor and Lent [12]) is a good replacement for the metric projection in optimization methods and in algorithms for solving convex feasibility problems. Bregman projections are continuous and have good stability properties. This is relevant because it opens a way to study more deeply the effect of computational errors on the behavior of many iterative algorithms involving Bregman projections. The Bregman projection can also be characterized by a variational inequality:
\(\left\langle J^E_p(x)-J^E_p(\Pi _C(x)),z-\Pi _C(x)\right\rangle \le 0,\quad \forall z\in C,\)
from which one has
In Hilbert spaces, the metric projection and the Bregman projection with respect to \(f_2\) coincide, but in general they are different.
The SFP has also been studied in more general frameworks, for example, in Banach spaces. More specifically, Schöpfer et al. [53] proposed the following algorithm in Banach spaces: \(x_{1}\in E_1\) and
\(x_{n+1}=\Pi _C J^*_{E_1}\big (J_{E_1}(x_{n})-\lambda _{n}A^{*}J_{E_2}(Ax_{n}-P_{Q}(Ax_{n}))\big ),\quad n\ge 1, \qquad (3)\)
where \(\{\lambda _{n}\}\) is a positive sequence, \(\Pi _C\) denotes the generalized projection onto C, \(P_Q\) is the metric projection onto Q, \(J_{E_{1}}\) is the duality mapping on \(E_{1}\) and \(J^*_{E_{1}}\) is the duality mapping on \(E_{1}^*\). It was proved that the sequence \(\{x_{n}\}\) converges weakly to a solution of the SFP, under some mild conditions, in p-uniformly convex and uniformly smooth Banach spaces. More precisely, it is assumed in [53] that the duality mapping of \(E_1\) is sequentially weak-to-weak continuous (which excludes some important Banach spaces, such as the classical \(L_p~(2< p < \infty )\) spaces). See [50, 51] for some modifications.
Recently, since in some applied disciplines norm convergence is more desirable than weak convergence, Wang [59] modified the above algorithm (3) and proved its strong convergence for the following multiple-sets split feasibility problem (MSSFP): find \(x \in E_1\) satisfying
\(x\in \bigcap _{i=1}^{r}C_i\quad \text{and}\quad Ax\in \bigcap _{j=r+1}^{r+s}Q_j, \qquad (13)\)
where r, s are two given integers, \(C_i,~i = 1, \ldots , r\), are closed convex subsets of \(E_1\), and \(Q_j,~j = r + 1, \ldots , r + s\), are closed convex subsets of \(E_2\). He defined, for each \(n \in {\mathbb {N}}\),
where \(i:{\mathbb {N}}\rightarrow I\) is the cyclic control mapping
and \(\lambda _n\) satisfies
with \(c_q\) a uniform smoothness constant, and proposed the following algorithm: for any initial guess \(x_1\), define \(\{x_n\}\) recursively by
Using the idea in the work of Nakajo and Takahashi [43], he proved the following strong convergence theorem in p-uniformly convex Banach spaces which are also uniformly smooth.
Theorem 1.2
Let \(E_1\) be a p-uniformly convex real Banach space which is also uniformly smooth. Let C and Q be nonempty, closed and convex subsets of \(E_1\) and \(E_2\), respectively, let \(A:E_1 \rightarrow E_2\) be a bounded linear operator and \(A^*:E_2^* \rightarrow E_1^*\) the adjoint of A. Suppose that the SFP (13) has a nonempty solution set \(\Omega \). Let the sequence \(\{x_n\}\) be generated by (15). Then, \(\{x_n\}\) converges strongly to the Bregman projection of \(x_1\) onto the solution set \(\Omega \).
It is observed that the main advantage of the result of Wang [59] is that the weak-to-weak continuity of the duality mapping assumed in [53] is dispensed with, and a strong convergence result is achieved. However, the step size \(\lambda _{n}\) still depends on the computation of the norm of A, which is very hard in general.
Motivated by the previous works, our purpose is to introduce a new self-adaptive projection method involving an inertial extrapolation term for solving the SFP in p-uniformly convex and uniformly smooth Banach spaces. We then prove a strong convergence theorem under conditions that do not require a priori knowledge of the norm of the bounded linear operator. Finally, we provide some numerical experiments, which involve an image restoration problem, to illustrate the convergence behavior and the effectiveness of our proposed algorithm. The result obtained in this paper seems to be the first to investigate the SFP outside Hilbert spaces using the inertial technique.
The rest of this paper is organized as follows: Some basic concepts and lemmas are provided in Sect. 2. The strong convergence result of this paper is proved in Sect. 3. An application is presented in Sect. 4 and, finally, numerical experiments are reported in Sect. 5 to illustrate the effectiveness and the implementation of our algorithm.
2 Preliminaries and lemmas
Let \(1< q\le 2\le p<\infty \). Let E be a real Banach space. The modulus of convexity \(\delta _{E}:[0, 2]\rightarrow [0, 1]\) is defined as:
\(\delta _{E}(\epsilon ):=\inf \Big \{1-\frac{\Vert x+y\Vert }{2}: \Vert x\Vert =\Vert y\Vert =1,~\Vert x-y\Vert \ge \epsilon \Big \}.\)
E is called uniformly convex if \(\delta _{E}(\epsilon ) > 0\) for any \(\epsilon \in (0,2]\), and p-uniformly convex if there is a \(c_p > 0\) such that \(\delta _{E}(\epsilon )\ge c_p\epsilon ^p\) for any \(\epsilon \in (0,2]\). The modulus of smoothness \(\rho _E:[0,\infty )\rightarrow [0,\infty )\) is defined by
\(\rho _E(\tau ):=\sup \Big \{\frac{\Vert x+\tau y\Vert +\Vert x-\tau y\Vert }{2}-1: \Vert x\Vert =\Vert y\Vert =1\Big \}.\)
E is called uniformly smooth if \(\lim \nolimits _{\tau \rightarrow 0} \frac{\rho _E(\tau )}{\tau } = 0\), and q-uniformly smooth if there is a \(c_q > 0\) such that \(\rho _E(\tau )\le c_q\tau ^q\) for any \(\tau > 0\). The \(L_p\) space is 2-uniformly convex for \(1 < p\le 2\) and p-uniformly convex for \(p\ge 2\). It is known that E is p-uniformly convex if and only if its dual \(E^*\) is q-uniformly smooth (see [33]).
Here and hereafter, we assume that \(E_1\) is p-uniformly convex and uniformly smooth, which implies that its dual space \(E_1^*\) is q-uniformly smooth and uniformly convex, where \(1< q\le 2\le p<\infty \) with \(\frac{1}{p} + \frac{1}{q} = 1\). We further assume that \(J^{E_1}_{p}\) and \(J^{E_2}_{p}\) denote the duality mappings of \(E_1\) and \(E_2\), respectively, and \(J^{E_1}_{p} = (J^*_{E_1})^{-1}=(J^{E_1}_{q})^{-1}\), where \(J^{*}_{E_1}=J^{E_1}_{q}\) is the duality mapping of \(E_1^*\). In p-uniformly convex spaces, the metric and the Bregman distance satisfy the following relation (see [53]):
\(\tau \Vert x-y\Vert ^p\le \Delta _p(x,y)\le \left\langle x-y,J^{E_1}_p(x)-J^{E_1}_p(y)\right\rangle ,\quad x,y\in E_1, \qquad (16)\)
where \(\tau > 0\) is some fixed number.
The q-uniformly smooth spaces have the following estimate [61].
Lemma 2.1
(Xu [61]). Let \(x,y\in E_1\). If \(E_1\) is q-uniformly smooth, then there is a \(c_q > 0\) such that
\(\Vert x+y\Vert ^q\le \Vert x\Vert ^q+q\left\langle y,J^{E_1}_{q}(x)\right\rangle +c_q\Vert y\Vert ^q.\)
It is easy to see that if \(\{x_n\}\) and \(\{y_n\}\) are bounded sequences in a p-uniformly convex and uniformly smooth space \(E_1\), then \(x_n-y_n\rightarrow 0\) as \(n\rightarrow \infty \) implies that \(\Delta _p(x_n,y_n)\rightarrow 0\) as \(n\rightarrow \infty \).
3 Main results
In this section, we introduce the inertial algorithm and prove strong convergence of the sequence generated by our scheme for solving the split feasibility problem in Banach spaces.
Algorithm 3.1
Inertial Algorithm for SFP: Let \(\{\alpha _n\}\subset {\mathbb {R}}\) be a bounded sequence. Choose \(x_0,x_1\in C\). Define the sequence \(\{x_n\}\) in the following manner:
for all \(n\ge 0\), where \(f(w_n):=\frac{1}{p}\Vert (I-P_Q)Aw_n\Vert ^p\), \(f^{p-1}(w_n):=\Big (\frac{1}{p}\Vert (I-P_Q)Aw_n\Vert ^p\Big )^{p-1}\) and \(\{\rho _n\}\subset (0,\infty )\) satisfies \(\liminf _{n\rightarrow \infty }\rho _n\Big (p-c_q\frac{\rho _n^{q-1}}{q}\Big )>0\).
Remark 3.2
In Algorithm 3.1, we use the convention \(\frac{0}{0}=0\). Therefore, if \(\nabla f(w_n)=0\), then \(y_n=\Pi _C(w_n)\).
Lemma 3.3
The sequence \(\{x_n\}\) is well defined.
Proof
To show that \(\{x_n\}\) is well defined, we have to show that \(C_n\bigcap Q_n\) is nonempty, closed and convex for all \(n\ge 1\). Clearly, \(C_n\) is closed and \(Q_n\) is closed and convex. To show the convexity of \(C_n\), it suffices to observe that
so that \(C_n\) is a half-space and, therefore, convex. Hence, \(C_n \cap Q_n\) is closed and convex. We next show that \(C_n\bigcap Q_n\ne \emptyset .\) To do this, it suffices to show that, \(\forall n\ge 1\),
Observe that if (18) holds, then, since \(\Omega \ne \emptyset \), the set \(C_n\bigcap Q_n\) is nonempty as well. To verify (18), we first show that \(\Omega \subset C_n.\) Let \(z\in \Omega \) and set \(u_n=J_p^{E_1}(w_n)-\rho _n\displaystyle \frac{f^{p-1}(w_n)}{\Vert \nabla f(w_n)\Vert ^p}\nabla f(w_n)\) for all \(n\ge 1\). We see from Lemma 2.1 that
On the other hand, observe that (using (7) and the fact that \(Az \in Q\))
By using (20) and (21), we get
which implies by our assumption that
This implies that \(\Omega \subset C_n.\)
Finally, we show that \(\Omega \subset Q_n.\) For \(n=0\), we have \(Q_0=E_1\) and hence \(\Omega \subseteq Q_0.\) Suppose that \(x_k\) is given and \(\Omega \subseteq C_k\bigcap Q_k\) for some \(k\in {\mathbb {N}}.\) Then, there exists a unique element \(x_{k+1}\) such that
Using (10), we have
so that \(\Omega \subset Q_{k+1}.\) By induction, \(\Omega \subset Q_n\) for all \(n\in {\mathbb {N}}\) and, thus, the proof is complete. \(\square \)
Lemma 3.4
Let \(\{x_n\}\) be generated by Algorithm 3.1. Then
- (i) \(\displaystyle \lim _{n\rightarrow \infty }\Vert x_{n+1}-x_n\Vert =0\);
- (ii) \(\displaystyle \lim _{n\rightarrow \infty }\Vert w_n-x_n\Vert =0\);
- (iii) \(\displaystyle \lim _{n\rightarrow \infty }\Vert y_n-x_{n+1}\Vert =0\);
- (iv) \(\displaystyle \lim _{n\rightarrow \infty }\Vert y_n-w_n\Vert =0\);
- (v) \(\displaystyle \lim _{n\rightarrow \infty }\Vert A^*J_p^{E_2}(I-P_Q)Aw_n\Vert =0\).
Proof
Note that the definition of \(Q_n\) actually implies that \(x_n=\Pi _{Q_n}(x_0).\) This together with the fact that \(\Omega \subset Q_n\) further implies that
Due to \(s:=\Pi _{\Omega }(x_0)\in \Omega \), we obtain
which implies that \(\{\Delta _p(x_0,x_n)\}\) is bounded. Hence \(\{x_n\}\) is bounded by (16). On the other hand, from \(x_{n+1}\in Q_n\) and (10), we have \(\left\langle x_n-x_{n+1},\right. \left. J_p^{E_1}(x_n)-J_p^{E_1}(x_0)\right\rangle \le 0\) and hence by (11)
This implies that
Thus, \(\{\Delta _p(x_0,x_n)\}\) is monotone nondecreasing and by (23) \(\displaystyle \lim _{n\rightarrow \infty }\Delta _p(x_0,x_n)\) exists. It then follows from (24) that
Hence, we obtain from (16) that
This establishes (i).
Since \(J_p^{E_1}\) is norm-to-norm uniformly continuous on bounded subsets of \(E_1\), we get
Observe from Algorithm 3.1 that
So,
Since \(J_q^{E_1}\) is also norm-to-norm uniformly continuous on bounded subsets of \(E_1^*\), we have
This establishes (ii).
Furthermore,
It follows that
By (16), we get
Since \(x_{n+1}\in C_n\), we have that
This implies that
and this establishes (iii).
Observe that
and
This establishes (iv).
From (22) and z being in \(\Omega \), we get
This implies that
By (8), we get
which implies that
Combining (26) and (27), we get
Since \(\liminf _{n\rightarrow \infty }\rho _n\Big (p-c_q\frac{\rho _n^{q-1}}{q}\Big )>0\), we get
and hence
Since \(J_p^{E_1}\) is norm-to-norm uniformly continuous on bounded subsets of \(E_1\), we have from (25) that
Furthermore, since \(\{\nabla f(w_n)\}\) is bounded, we obtain from (29) that
for some \(M_1 >0\). Therefore
Hence
Also
Thus
This establishes (v). \(\square \)
Lemma 3.5
Let \(\{x_n\}\) be generated by Algorithm 3.1. Then, the sequence \(\{x_n\}\) has a weak cluster point and \(\omega _w(x_n)\subseteq \Omega .\)
Proof
From Lemma 3.4, we see that \(\{x_n\}\) is bounded. Since \(E_1\) is reflexive by hypothesis, the set of weak cluster points of \(\{x_n\}\) is nonempty. So it suffices to show that \(\omega _w(x_n)\subseteq \Omega .\) To do this, choose a subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) such that \(x_{n_j}\rightharpoonup z\in \omega _w(x_n).\) Since \(\Vert y_n-x_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \), we get \(y_{n_j}\rightharpoonup z\), and hence \(z \in C\). Since \(x_{n_j} \rightharpoonup z\) and \(\displaystyle \lim _{n\rightarrow \infty }\Vert x_n-w_n\Vert =0\), we also obtain \(w_{n_j}\rightharpoonup z\). Let us fix some \(x\in \Omega .\) Then \(Ax\in Q\) and, by (7), we get
where \(M_1>0\) is a sufficiently large number. It then follows that
Note that since \(w_{n_j}\rightharpoonup z\), then \(Aw_{n_j}\rightharpoonup Az.\) Letting \(j\rightarrow \infty \) in (32), we obtain
Thus, \(Az \in Q\) and hence \(z\in \Omega .\) \(\square \)
Theorem 3.6
The sequence \(\{x_n\}\) generated by Algorithm 3.1 converges strongly to the Bregman projection of \(x_0\) onto the solution set \(\Omega .\)
Proof
We see that there is a subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) such that \(x_{n_j}\rightharpoonup z.\) Hence, \(z\in \Omega \) by Lemma 3.5. Since \(x_{n+1}\in Q_n\) and \(\Pi _{Q_n}(x_0)=\mathrm{argmin}_{y \in Q_n}\Delta _p(x_0,y)\), it follows that
By Lemma 3.3, \(\Pi _{\Omega }(x_0)\in \Omega \subseteq Q_{n+1}\) and hence
Thus,
Taking limsup yields
Hence, \(\lim _{j\rightarrow \infty }\Delta _p(x_{n_j},\Pi _\Omega x_0)=0\), and so \(x_{n_j}\rightarrow \Pi _\Omega x_0\). By the arbitrariness of \(\{x_{n_j}\}\) and the uniqueness of \(\Pi _\Omega x_0,\) we have \(x_n\rightharpoonup \Pi _\Omega x_0.\) Using (16), it follows from (33) that
Taking limit as \(n\rightarrow \infty \) above, we get that \(x_n\rightarrow \Pi _\Omega x_0\).
\(\square \)
Remark 3.7
(i) Compared with the conditions on \(\{\alpha _{n}\}\) in the inertial Mann algorithm of [6], our assumptions on \(\{\alpha _{n}\}\) in the inertial CQ algorithm are clearly relaxed. To be more precise, we do not require that the inertial sequence \(\{\alpha _{n}\}\) be nonnegative and nondecreasing, nor that it not exceed 1/3. Moreover, our main result seems to be new in Banach spaces for solving the SFP with the inertial technique. As far as we know, the convergence conditions of the inertial CQ algorithm introduced in this paper are the weakest among the inertial algorithms proposed in certain Banach spaces.
(ii) Our proposed Algorithm 3.1 has connections with some recent methods in the literature. For example, the inertial extrapolation factor \(\alpha _n\) in our method has properties similar to those in the recent paper of Dong et al. [19], where
and the proposed method in [17], where \(\{\alpha _n\}\) is assumed to be bounded. In these papers, several choices of \(\{\alpha _n\}\) are considered in numerical implementations, and the authors showed that their proposed methods are efficient and implementable. It is in the light of these results that our proposed Algorithm 3.1 is introduced and studied for the SFP (1).
4 Application
Next, we give an example and an application which serve as motivation for studying the split feasibility problem (1) outside Hilbert spaces. The example concerns computing \(L^p\)-solutions of Fredholm integral equations of the first kind with \(p \ge 2\). Here, let \(E_1=E_2=L^p([\alpha ,\beta ]),~p\ge 2\). Observe that the duality mapping of the space \(L^p([\alpha ,\beta ])\) is the function \(J_p^{E_1}:L^p([\alpha ,\beta ])\rightarrow L^q([\alpha ,\beta ])\) defined by (see [2])
and the Bregman distance function \(\Delta _p(.,.)\) is given by
where |x| is the Euclidean norm of x and \(\Vert x\Vert \) is the \(L^p\) norm of x.
Example 4.1
Let us consider computing \(L^p\)-solutions of Fredholm integral equations of the first kind, as considered in [2],
involving a continuous kernel \(K:[\alpha ,\beta ]^2\rightarrow {\mathbb {R}}\) and a continuous free term \(g:[\alpha ,\beta ]\rightarrow {\mathbb {R}}\). Fredholm integral equations of the first kind appear in many physical and engineering applications, and an immense amount of work has been done on solving (34); many analytical and numerical techniques have been developed and the literature on the subject is still expanding. For more details see, for example, [65].
It can be shown that (34) is an SFP (1) with \(A:=I\), the identity, which is the problem: find \(x \in C:=\bigcap \nolimits _{i=1}^{N}C_i\), where
with \(a_i(t)=K(s_i,t)\in L^q([\alpha ,\beta ])~(q=\frac{p}{p-1})\) and \(b_i=g(s_i)\in {\mathbb {R}}\), while \(\alpha =s_1<s_2<\cdots <s_N=\beta \) (see [2, 29, 65]). Under some hypotheses, (34) has solutions (cf. [39]), and approximating an \(L^p\)-solution of (34) amounts to solving a consistent SFP (1).
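In the Hilbert case \(p=2\), each discretized set \(C_i=\{x:\left\langle a_i,x\right\rangle =b_i\}\) is a hyperplane with a closed-form projection, and a cyclic-projection (Kaczmarz-type) sweep already illustrates how a solution of a discretized version of (34) can be approximated. The small consistent system below is an illustrative assumption, not data from the paper.

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : <a, x> = b}."""
    return x - (np.dot(a, x) - b) / np.dot(a, a) * a

def cyclic_projections(A_rows, b, x0, sweeps=200):
    """Kaczmarz-type cyclic projections onto the hyperplanes <a_i, x> = b_i.

    For a consistent system, the iterates converge to a point in the
    intersection of all hyperplanes, i.e., a solution of A x = b.
    """
    x = np.asarray(x0, float)
    for _ in range(sweeps):
        for a_i, b_i in zip(A_rows, b):
            x = proj_hyperplane(x, a_i, b_i)
    return x
```

For \(p\ne 2\) the metric projections above would be replaced by Bregman projections with respect to \(f_p\), as in Algorithm 3.1.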
In this setting, our Algorithm 3.1 reduces to
Algorithm 4.2
Let \(\{\alpha _n\}\subset {\mathbb {R}}\) be a bounded set. Set \(x_0,x_1\in C\). Define a sequence \(\{x_n\}\) by the following manner:
for all \(n\ge 0\) where \([n] := n (\mathrm{mod}~~ N)\) with the mod function taking values in \(\{1,2,\ldots , N\}\) and \(\{\rho _n\}\subset (0,\infty )\) satisfies \(\displaystyle \liminf _{n\rightarrow \infty }\rho _n\Big (p-c_q\frac{\rho _n^{q-1}}{q}\Big )>0\).
By Theorem 3.6, we can show that, if (34) has solutions, then the sequence \(\{x_n\}\) generated by (36) converges strongly to a solution of (34).
5 Examples and numerical results
In this section, we provide computational experiments comparing our newly proposed method of Sect. 3 with the existing iterative method of Wang [59] for solving some problems in p-uniformly convex Banach spaces which are also uniformly smooth. All codes were written in Matlab 2017a. All computations were performed on a Linux desktop with an Intel(R) Core(TM) i5-4670S CPU at 3.10 GHz.
Example 1: image restoration problem
The image restoration problem consists in the reconstruction of an original image that has been digitized and degraded by blur and additive noise. Here, the interest is in finding the minimum p-norm solution of \(Ax = y\), with exact data \(y\in R(A)\), where \(A: E_1\rightarrow E_2\) is a continuous linear operator between two Banach spaces \(E_1\) and \(E_2\) and \(x\in E_1\). Since the solution is almost sparse, the minimum p-norm solution is usually sought with \(p \in (1,2]\) to promote sparsity in the restored solution. For more information on the role of sparsity in inverse problems, we refer to [21, 23, 31, 55]. Using Algorithm 3.1, we develop an iterative algorithm to recover the solution of the linear operator equation \(Ax = y\). Furthermore, we compare the performance of our proposed Algorithm 3.1 with Algorithm (15).
Let us choose a simple image restoration problem studied in [53]. For simplicity, we consider the case when \(p=2=q\) and \(E_1=L_2([0,1])=E_2\). Consider the matrix \(A\in {\mathbb {R}}^{1000\times 1000}\)
resulting from a discretization of the operator equation of the first kind
We are interested in solutions \(x\in \{x\in C: Ax\in Q\}\), where C is the cone of functions x(s) that are negative for \(s\in [0, 0.25]\) and positive for \(s\in [0.25, 1]\) and \(Q = [a(t), b(t)]\) is a box delimited by the functions a(t), b(t). We take \(E_1 =({\mathbb {R}}^{1000}, \Vert .\Vert _{2}), E_2=({\mathbb {R}}^{1000}, \Vert .\Vert _{2})\). We compute the metric projection \(P_Q\) with formula:
where \([a, b] := \{x\in H:a \le x\le b\}\) is a closed box with extended real-valued functions a, b, and “\(\le \)”, “min” and “max” are understood pointwise a.e. Since \(E_1,E_2\) are Hilbert spaces, the Bregman projection \(\Pi _C\) coincides with the metric projection \(P_C\), which can also be computed with (37) because \(C = [c(s), d(s)]\) is an unbounded box with \(c(s) = -\infty ,~d(s) = 0\) for \(s\in [0, 0.25]\) and \(c(s) = 0,~d(s) = +\infty \) for \(s\in [0.25, 1]\).
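Formula (37) is straightforward to implement, including for the unbounded box C via \(\pm \infty \) bounds. The sketch below (grid and helper names are our illustrative choices) mirrors the discretized setting of this example.

```python
import numpy as np

def proj_box(x, a, b):
    """Metric projection onto [a, b] via formula (37): P(x) = min(max(x, a), b),
    pointwise. Entries of a may be -inf and entries of b may be +inf."""
    return np.minimum(np.maximum(x, a), b)

def proj_C(x, s):
    """Projection onto the cone C: x(s) <= 0 on [0, 0.25], x(s) >= 0 on (0.25, 1],
    realized as the unbounded box [c(s), d(s)] described above."""
    c = np.where(s <= 0.25, -np.inf, 0.0)
    d = np.where(s <= 0.25, 0.0, np.inf)
    return proj_box(x, c, d)
```

With these two projections, the discretized problem of this example plugs directly into the iteration of Algorithm 3.1.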
To ensure the existence of a solution of the considered problem, a sparse vector \(x^*(t)\) is generated randomly in C (see Fig. 1a). Taking \(y(t)=Ax^*(t)\), \(a(t)=y(t)-0.01\) and \(b(t)=y(t)+0.01\) (Fig. 1b), we have \(Q=[a(t),b(t)]\). The problem of interest is to find \(x \in C\) such that \(Ax \in Q.\)
In Fig. 2, we report the results comparing the behavior of Algorithm 3.1 with Algorithm (15) for the initial guess \(x_0=x_1=0\). For Algorithm (17), we take \(\alpha _{n}=\alpha =3\) and \(\rho _{n}=\rho =2\); for Algorithm (15), we take \(\lambda _n=\lambda =\frac{1}{\Vert A\Vert ^2}\). We compute the error as:
It can be seen that Algorithm 3.1 is significantly faster than Algorithm (15).
Example 2
The following example was considered in [51], where \(E_1=E_2=L_2[0,1]\) with the inner product given as:
Let
where \(a=2t^2\) and \(b=0\). In this case, we have
Also, let
where \(c=t/3\) and \(d=1\). Then we obtain
We assume that
Then, A is a bounded linear operator and \(A^*=A\). Observe that the solution set
is nonempty since \(0 \in \Omega \). In the following figures, we compare Algorithm 3.1 with Algorithm (15) for the two initial guesses \(x_0(t)=x_1(t)=\exp (2t)\) and \(x_0(t)=x_1(t)=3 \sin (t)\). For Algorithm (17), we take \(\alpha _{n}=\alpha =1\) and \(\rho _{n}=\rho =2\); for Algorithm (15), we take \(\lambda _n=\lambda =\frac{1}{\Vert A\Vert ^2}\). The iteration was stopped with accuracy
where \(\epsilon =10^{-4}\) (Fig. 3).
Finally, let us summarize the comparison of Algorithm 3.1 with Algorithm (15) in Table 1.
6 Conclusion and final remarks
In this paper, we studied a modified self-adaptive projection method with an inertial technique for solving the split feasibility problem (1) in p-uniformly convex Banach spaces which are also uniformly smooth. We showed in a simple and novel way that the sequence generated by our projection method with an inertial technique converges strongly to a solution of the SFP. The effectiveness and numerical implementation of the proposed method were illustrated by solving some examples in Banach spaces. All the numerical implementations of our results were compared with Wang's algorithm [59]. On these grounds, we can summarize that our proposed method is more efficient and faster than Wang's method.
In recent years, the nonlinear split feasibility problem (NLSFP) has gained a lot of interest; see, e.g., [26, 32]. In addition, the non-convex case is also very attractive from the application point of view; see [45]. In future work, we will extend the results of this paper to the nonlinear split feasibility problem and to the non-convex case of the split feasibility problem.
References
Alber, Y.I.: Metric and generalized projection operator in Banach spaces: properties and applications. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. vol 178 of Lecture Notes in Pure and Applied Mathematics, pp. 15–50. Dekker, New York (1996)
Alber, Y., Butnariu, D.: Convergence of Bregman projection methods for solving consistent convex feasibility problems in reflexive Banach spaces. J. Optim. Theory Appl. 92(1), 33–61 (1997)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set Valued Anal. 9, 3–11 (2001)
Alsulami, S.M., Takahashi, W.: Iterative methods for the split feasibility problem in Banach spaces. J. Nonlinear Convex Anal. 16, 585–596 (2015)
Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 24, 232–256 (2014)
Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)
Bot, R.I., Csetnek, E.R.: An inertial alternating direction method of multipliers. Minimax Theory Appl. 1, 29–49 (2016)
Bot, R.I., Csetnek, E.R.: An inertial forward–backward–forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 71, 519–540 (2016)
Bruck, R.E., Reich, S.: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 3, 459–470 (1977)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Censor, Y., Lent, A.: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 34, 321–353 (1981)
Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Kluwer Academic, Dordrecht (1990)
Chen, C., Chan, R.H., Ma, S., Yang, J.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8, 2239–2267 (2015)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. https://doi.org/10.3934/jimo.2016078
Dong, Q., Jiang, D., Cholamjiak, P., Shehu, Y.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19, 3097–3118 (2017)
Dong, Q.L., Yuan, H.B.: Accelerated Mann and CQ algorithms for finding a fixed point of nonexpansive mapping. Fixed Point Theory Appl. 2015, 125 (2015)
Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 12, 87–102 (2018)
Dang, Y., Gao, Y.: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 27, 015007 (2011)
Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)
Dunford, N., Schwartz, J.T.: Linear Operators I. Wiley, New York (1958)
Estatico, C., Gratton, S., Lenti, F., Titley-Peloquin, D.: A conjugate gradient like method for p-norm minimization in functional spaces. Numer. Math. https://doi.org/10.1007/s00211-017-0893-7
Gibali, A., Liu, L.-W., Tang, Y.-C.: Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 12, 817–830 (2018)
Gibali, A.: A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2, 243–258 (2017)
Gibali, A., Küfer, K.-H., Süss, P.: Successive linear programming approach for solving the nonlinear split feasibility problem. J. Nonlinear Convex Anal. 15, 345–353 (2014)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Hendrickx, J.M., Olshevsky, A.: Matrix \(p\)-norms are NP-hard to approximate if \(p\ne 1,2,\infty \). SIAM J. Matrix Anal. Appl. 31, 2802–2812 (2010)
Kammerer, W.J., Nashed, M.Z.: A generalization of a matrix iterative method of G. Cimmino to best approximate solutions of linear integral equations of the first kind. Rendiconti della Accademia Nazionale dei Lincei, Serie 8(51), 20–25 (1971)
Kohsaka, F., Takahashi, W.: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 6, 505–523 (2005)
Lenti, F., Nunziata, F., Estatico, C., Migliaccio, M.: Analysis of reconstructions obtained solving \(\ell _p\)-penalized minimization problems. IEEE Trans. Geosci. Remote Sens. 53, 4876–4886 (2015)
Li, Z., Han, D., Zhang, W.: A self-adaptive projection-type method for nonlinear multiple-sets split feasibility problem. Inverse Probl. Sci. Eng. 21, 155–170 (2013)
Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces II. Springer, Berlin (1979)
López, G., Martin-Marquez, V., Wang, F.H., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. (2012). https://doi.org/10.1088/0266-5611/28/8/085004
Lorenz, D.A., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
Maingé, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)
Maingé, P.E.: Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set Valued Anal. 15, 67–79 (2007)
Masad, E., Reich, S.: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367–371 (2007)
Mikhlin, S.G., Smolitskiy, K.L.: Approximate Methods for Solution of Differential and Integral Equations. Elsevier, New York (1967)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 8, 2099–2110 (2014)
Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)
Nakajo, K., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372–379 (2003)
Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^2)\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
Penfold, S., Zalas, R., Casiraghi, M., Brooke, M., Censor, Y., Schulte, R.: Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy. Phys. Med. Biol. 62, 3599–3618 (2017)
Phelps, R.P.: Convex Functions, Monotone Operators, and Differentiability, 2nd Edn. Lecture Notes in Mathematics, vol. 1364. Springer, Berlin (1993)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. U. S. S. R. Comput. Math. Math. Phys. 4, 1–17 (1964)
Reich, S.: Review of “Geometry of Banach spaces, duality mappings and nonlinear problems” by Ioana Cioranescu. Bull. Am. Math. Soc. 26, 367–370 (1992)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Shehu, Y.: Iterative methods for split feasibility problems in certain Banach spaces. J. Nonlinear Convex Anal. 16, 2315–2364 (2015)
Shehu, Y., Iyiola, O.S., Enyi, C.D.: An iterative algorithm for solving split feasibility problems and fixed point problems in Banach spaces. Numer. Algorithms 72, 835–864 (2016)
Shehu, Y., Iyiola, O.S.: Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 19, 2483–2510 (2017)
Schöpfer, F., Schuster, T., Louis, A.K.: An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Probl. 24, 055008 (2008)
Schöpfer, F.: Iterative regularization method for the solution of the split feasibility problem in Banach spaces. PhD thesis, Saarbrücken (2007)
Schuster, T., Kaltenbacher, B., Hofmann, B., Kazimierski, K.: Regularization Methods in Banach Spaces. Radon Series on Computational and Applied Mathematics, vol. 10. de Gruyter, Berlin (2012)
Takahashi, W.: Nonlinear Functional Analysis-Fixed Point Theory and Applications. Yokohama Publishers Inc., Yokohama (2000). (in Japanese)
Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: A gradient projection method for solving split equality and split feasibility problems in Hilbert spaces. Optimization 64, 2321–2341 (2015)
Wang, F.: A new algorithm for solving the multiple-sets split feasibility problem in Banach spaces. Numer. Funct. Anal. Optim. 35, 99–110 (2014)
Wang, F.: On the convergence of CQ algorithm with variable steps for the split equality problem. Numer. Algorithms 74, 927–935 (2017)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)
Xu, H.K.: A variable Krasonosel’skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021–2034 (2006)
Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Yang, Q.: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166–179 (2005)
Yosida, K.: Lectures on Differential and Integral Equations. Interscience, London (1960)
Acknowledgements
We would like to thank the anonymous referees for their useful comments, which have improved the quality of the paper. The research was carried out while the first author was an Alexander von Humboldt Postdoctoral Fellow at the Institute of Mathematics, University of Würzburg, Germany. He is grateful to the Alexander von Humboldt Foundation, Bonn, for the fellowship, and to the Institute of Mathematics, Julius Maximilian University of Würzburg, Germany, for the hospitality and facilities. The research of the second author was supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) Grant 101.01-2017.315. The third author was supported by the Thailand Research Fund and the University of Phayao under Grant RSA6180084.
Shehu, Y., Vuong, P.T. & Cholamjiak, P. A self-adaptive projection method with an inertial technique for split feasibility problems in Banach spaces with applications to image restoration problems. J. Fixed Point Theory Appl. 21, 50 (2019). https://doi.org/10.1007/s11784-019-0684-0