Abstract
In this paper, we propose a CQ-type algorithm for solving the split feasibility problem (SFP) in real Hilbert spaces. The algorithm is designed such that the step-sizes are directly computed at each iteration. We will show that the sequence generated by the proposed algorithm converges in norm to the minimum-norm solution of the SFP under appropriate conditions. In addition, we give some numerical examples to verify the implementation of our method. Our result improves and complements many known related results in the literature.
1 Introduction
Let H be a real Hilbert space with the inner product \(\langle \cdot ,\cdot \rangle \) and the induced norm \(\Vert \cdot \Vert \). Let I denote the identity operator on H. Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_1\) and \(H_2\), respectively. The split feasibility problem (SFP) was first introduced by Censor and Elfving [6], and it can be formulated as follows:
$$\begin{aligned} \text {find } x^*\in C \text { such that } Ax^*\in Q, \end{aligned}$$
(1.1)
if such points exist, where \(A: H_1\rightarrow H_2\) is a bounded linear operator.
We will use \(\Omega \) to denote the solution set of (1.1), i.e., \(\Omega =\{x\in C: Ax\in Q\}\).
The problem (1.1) arises in signal processing and image reconstruction, with particular applications in intensity-modulated radiation therapy, and many iterative algorithms have been established for it (see, e.g., [3, 4, 6,7,8, 11, 15, 17, 20]).
From an optimization point of view, \(x^*\in \Omega \) if and only if \(x^*\) is a solution of the following minimization problem with zero optimal value:
$$\begin{aligned} \min _{x\in C} f(x):=\frac{1}{2}\Vert (I-P_Q)Ax\Vert ^2. \end{aligned}$$
(1.2)
Note that the function f is convex and differentiable with Lipschitz continuous gradient given by \(\nabla f(x)=A^*(I-P_Q)Ax\). Hence, \(x^*\) solves the SFP if and only if \(x^*\) solves the variational inequality problem of finding \(x\in C\) such that
$$\begin{aligned} \langle \nabla f(x), y-x\rangle \ge 0 \quad \forall y\in C. \end{aligned}$$
(1.3)
A popular method is the CQ algorithm introduced by Byrne [3, 4]:
$$\begin{aligned} x^{k+1}=P_C\bigl (x^k-\gamma A^*(I-P_Q)Ax^k\bigr ), \end{aligned}$$
(1.4)
where \(\gamma \in \left( 0,\frac{2}{\Vert A\Vert ^2}\right) \).
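For concreteness, the classical CQ iteration can be sketched numerically. In the snippet below the ball C, the box Q, the matrix A, and the starting point are illustrative assumptions, not data from the paper; both projections have closed forms, which is what makes the scheme cheap per iteration.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto C = {x : ||x|| <= r}."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_box(y, lo=-1.0, hi=1.0):
    """Metric projection onto Q = {y : lo <= y_i <= hi}."""
    return np.clip(y, lo, hi)

def cq_algorithm(A, x0, n_iter=2000):
    """Classical CQ iteration x^{k+1} = P_C(x^k - gamma * A^T (I - P_Q) A x^k)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # safely inside (0, 2/||A||^2)
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_ball(x - gamma * (A.T @ (Ax - proj_box(Ax))))
    return x

A = np.array([[2.0, 1.0], [0.0, 3.0]])
x = cq_algorithm(A, np.array([5.0, -4.0]))
```

Running this instance drives \(x^k\) into C while \(Ax^k\) approaches the box Q; note that the fixed step \(\gamma \) requires knowing \(\Vert A\Vert \) in advance.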
In fact, the CQ algorithm is the gradient projection method for the variational inequality problem (1.3). For more details on the SFP and the CQ algorithm, the interested reader is referred to [1, 3,4,5, 10, 13, 19, 22, 23] and the references therein. Xu [22] proved the weak convergence of (1.4) in the setting of Hilbert spaces. In order to obtain strong convergence, Wang and Xu [18] proposed the following algorithm:
$$\begin{aligned} x^{k+1}=P_C\bigl ((1-\alpha _k)\bigl (x^k-\gamma A^*(I-P_Q)Ax^k\bigr )\bigr ). \end{aligned}$$
(1.5)
Wang and Xu [18] proved that the above iterative sequence converges strongly to the minimum-norm solution of the SFP (1.1) provided that the sequence \(\{\alpha _k\}\) and parameter \(\gamma \) satisfy the following conditions:
-
(1)
\(\alpha _k\rightarrow 0\) and \(0<\gamma <\frac{2}{\Vert A\Vert ^2}\);
-
(2)
\(\sum _{k=0}^{\infty }\alpha _k=\infty \);
-
(3)
either \(\sum _{k=0}^{\infty }|\alpha _{k+1}-\alpha _k|<\infty \) or \(\lim _{k\rightarrow \infty }|\alpha _{k+1}-\alpha _k|/\alpha _k=0\).
In 2012, Yu et al. [20] proved the strong convergence of (1.5) without the condition (3). It is worth mentioning that the determination of the step-size in (1.5) depends on the Lipschitz constant \(L=\Vert A\Vert ^2\) of the gradient \(\nabla f\), which is in general not easy to compute in practice. This leads us to the following question.
Question Can we design a self-adaptive scheme for the algorithm (1.5) above?
In this paper, we give a positive answer to this question. Motivated and inspired by the works of López et al. [13], Tian and Zhang [16], Wang and Xu [18], Xu [22], Yao et al. [24] and Zhou et al. [25], we will introduce a self-adaptive CQ-type algorithm for finding a solution of the SFP in the setting of infinite-dimensional real Hilbert spaces. The advantage of our algorithm lies in the fact that step-sizes are dynamically chosen and do not depend on the operator norm. Moreover, we will prove that the proposed algorithm converges strongly to the minimum-norm solution of the SFP.
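Self-adaptive rules of this type, going back to López et al. [13], compute the step-size directly from quantities available at the current iterate. A minimal sketch (the function name and the sample values are illustrative):

```python
import numpy as np

def adaptive_step(f_val, grad, rho):
    """Self-adaptive step-size gamma_k = rho * f(x^k) / ||grad f(x^k)||^2.

    Only the current function value and gradient are needed; no estimate
    of the operator norm ||A|| enters the computation."""
    g2 = float(np.dot(grad, grad))
    return rho * f_val / g2 if g2 > 0 else 0.0

# e.g. f(x^k) = 12.5 and grad f(x^k) = (3, 4): gamma_k = 1.0 * 12.5 / 25 = 0.5
gamma = adaptive_step(12.5, np.array([3.0, 4.0]), rho=1.0)
```

In the convergence analysis below, the relaxation parameters are constrained so that \(\inf _k\rho _k(4-\rho _k)>0\).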
The rest of the paper is organized as follows. Some useful definitions and results are collected in Sect. 2 for the convergence analysis of the proposed algorithm. In Sect. 3, we introduce a new self-adaptive CQ-type algorithm for finding an element of the set \(\Omega \) and prove strong convergence of the method. Our result improves the corresponding results of Chuang [9], Wang and Xu [18], Xu [22] and Yao et al. [24]. We also consider the relaxation version for the proposed method in Sect. 4. Finally in Sect. 5, we provide some numerical experiments to illustrate the performance of the proposed algorithms.
2 Preliminaries
Let C be a closed convex subset of a real Hilbert space H. It is easy to see that
$$\begin{aligned} \Vert tx+(1-t)y\Vert ^2=t\Vert x\Vert ^2+(1-t)\Vert y\Vert ^2-t(1-t)\Vert x-y\Vert ^2 \end{aligned}$$
for all \(x,y\in H\) and for all \(t\in [0,1]\).
In what follows, the strong (weak) convergence of a sequence \(\{x^k\}\) to x will be denoted by \(x^k\rightarrow x\) (\(x^k\rightharpoonup x\)), respectively. For a given sequence \(\{x^k\}\subset H\), \(\omega _w(x^k)\) denotes the weak \(\omega \)-limit set of \(\{x^k\}\), that is,
$$\begin{aligned} \omega _w(x^k):=\{x\in H: x^{k_j}\rightharpoonup x \text { for some subsequence } \{x^{k_j}\} \text { of } \{x^k\}\}. \end{aligned}$$
For every element \(x\in H\), there exists a unique nearest point in C, denoted by \(P_Cx\), such that
$$\begin{aligned} \Vert x-P_Cx\Vert \le \Vert x-y\Vert \quad \forall y\in C. \end{aligned}$$
\(P_C\) is called the metric projection of H onto C.
Lemma 2.1
The metric projection \(P_C\) has the following basic properties:
-
(1)
\(\langle x-P_Cx, y-P_Cx\rangle \le 0\) for all \(x\in H\) and \(y\in C\);
-
(2)
\(\Vert P_Cx-P_Cy\Vert \le \Vert x-y\Vert \) for all \(x,y\in H\);
-
(3)
\(\Vert P_Cx-P_Cy\Vert ^2\le \langle x-y, P_Cx-P_Cy\rangle \) for every \(x,y\in H\);
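These properties are easy to check numerically for concrete sets. The sketch below verifies (1) and (2) for the projection onto a Euclidean ball; the particular points are illustrative choices.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto C = {x : ||x|| <= r}."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

x = np.array([3.0, 4.0])
p = proj_ball(x)                    # p = (0.6, 0.8)
y = np.array([0.5, -0.5])           # an arbitrary point of C

# property (1): <x - P_C x, y - P_C x> <= 0 for all y in C
assert np.dot(x - p, y - p) <= 1e-12

# property (2): nonexpansiveness of P_C
x2 = np.array([-2.0, 0.0])
assert np.linalg.norm(proj_ball(x) - proj_ball(x2)) <= np.linalg.norm(x - x2)
```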
Let C and Q be nonempty closed convex subsets of the infinite-dimensional real Hilbert spaces \(H_1\) and \(H_2\), respectively, and let \(A\in B(H_1, H_2)\), where \(B(H_1, H_2)\) denotes the family of all bounded linear operators from \(H_1\) to \(H_2\).
Lemma 2.2
(see [2]) Let \(f: H_1\rightarrow {\mathbb {R}}\) be a function defined by \(f(x):=\frac{1}{2}\Vert Ax-P_QAx\Vert ^2\). Then
-
(1)
f is convex and differentiable;
-
(2)
f is w-lsc on \(H_1\);
-
(3)
\(\nabla f(x)=A^*(I-P_Q)Ax\), \(x\in H_1\);
-
(4)
\(\nabla f\) is \(\frac{1}{\Vert A\Vert ^2}\)-inverse strongly monotone, i.e.,
$$\begin{aligned} \langle \nabla f(x)-\nabla f(y),x-y\rangle \ge \dfrac{1}{\Vert A\Vert ^2}\Vert \nabla f(x)-\nabla f(y)\Vert ^2 \quad \forall x,y\in H_1. \end{aligned}$$
Remark 2.1
From (4) of Lemma 2.2, it is easy to see that \(\nabla f\) is \(\Vert A\Vert ^2\)-Lipschitz, that is,
$$\begin{aligned} \Vert \nabla f(x)-\nabla f(y)\Vert \le \Vert A\Vert ^2\Vert x-y\Vert \quad \forall x,y\in H_1. \end{aligned}$$
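Indeed, the Lipschitz bound is a direct consequence of Lemma 2.2 (4) and the Cauchy–Schwarz inequality:
$$\begin{aligned} \frac{1}{\Vert A\Vert ^2}\Vert \nabla f(x)-\nabla f(y)\Vert ^2\le \langle \nabla f(x)-\nabla f(y),x-y\rangle \le \Vert \nabla f(x)-\nabla f(y)\Vert \Vert x-y\Vert , \end{aligned}$$
and dividing through by \(\Vert \nabla f(x)-\nabla f(y)\Vert \) (the case \(\nabla f(x)=\nabla f(y)\) being trivial) gives \(\Vert \nabla f(x)-\nabla f(y)\Vert \le \Vert A\Vert ^2\Vert x-y\Vert \).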
In the convergence analysis of the proposed algorithms, we will use the following well-known lemmas.
Lemma 2.3
(Maingé [14]) Let \(\{\Gamma _n\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\Gamma _{n_j}\}\) of \(\{\Gamma _n\}\) such that \(\Gamma _{n_j}<\Gamma _{n_j+1}\) for all \(j\ge 0\). Also consider the sequence of integers \(\{\tau (n)\}_{n\ge n_0}\) defined by
$$\begin{aligned} \tau (n):=\max \{k\le n: \Gamma _k<\Gamma _{k+1}\}. \end{aligned}$$
Then \(\{\tau (n)\}_{n\ge n_0}\) is a nondecreasing sequence verifying \(\underset{n\rightarrow \infty }{\lim }\tau (n)=\infty \) and, for all \(n\ge n_0\),
$$\begin{aligned} \Gamma _{\tau (n)}\le \Gamma _{\tau (n)+1} \quad \text {and} \quad \Gamma _{n}\le \Gamma _{\tau (n)+1}. \end{aligned}$$
Lemma 2.4
(Xu [21]) Assume that \(\{a_k\}\) is a sequence of nonnegative real numbers such that
$$\begin{aligned} a_{k+1}\le (1-\alpha _k)a_k+\alpha _k\gamma _k+b_k, \quad k\ge 0, \end{aligned}$$
where \(\{\alpha _k\}\) is a sequence in (0, 1), \(\{b_k\}\) is a sequence of nonnegative real numbers and \(\{\gamma _k\}\) is a sequence of real numbers such that
-
(1)
\(\sum \nolimits _{k=0}^{\infty }\alpha _k=\infty \),
-
(2)
\(\sum \nolimits _{k=0}^{\infty }b_k<\infty \),
-
(3)
\(\limsup _{k\rightarrow \infty }\gamma _k\le 0\).
Then \(\lim _{k\rightarrow \infty } a_k=0\).
We end this section by recalling a fundamental tool which will be helpful for proving strong convergence of our relaxed CQ algorithm.
Lemma 2.5
(He and Yang 2013 [12]) Assume that \(\{s_k\}\) is a sequence of nonnegative real numbers such that for all \(k\in {\mathbb {N}}\)
$$\begin{aligned} s_{k+1}\le (1-\alpha _k)s_k+\alpha _k\delta _k, \qquad s_{k+1}\le s_k-\eta _k+\gamma _k, \end{aligned}$$
where \(\{\alpha _k\}\) is a sequence in (0, 1), \(\{\eta _k\}\) is a sequence of nonnegative real numbers, and \(\{\delta _k\}\) and \(\{\gamma _k\}\) are two sequences in \({\mathbb {R}}\) such that
-
(1)
\(\sum \nolimits _{k=0}^{\infty }\alpha _k=\infty \),
-
(2)
\(\lim _{k\rightarrow \infty }\gamma _k=0\),
-
(3)
\(\lim _{k\rightarrow \infty }\eta _{n_k}=0\) implies that \(\limsup _{k\rightarrow \infty }\delta _{n_k}\le 0\) for any subsequence \(\{n_k\}\) of \(\{n\}\).
Then \(\lim _{k\rightarrow \infty } s_k=0\).
3 A New Modification of CQ Algorithm and Its Convergence
In this section, we introduce a CQ-type algorithm with self-adaptive step-sizes for solving the SFP (1.1) and establish its strong convergence under some mild conditions. The algorithm is designed as follows.
Algorithm 3.1
[CQ-type algorithm for the SFP (1.1)]
Initialization Take two positive sequences \(\{\beta _k\}\) and \(\{\rho _k\}\) satisfying the following conditions:
Select initial \(x^0\in H_1\) and set \(k:=0\).
Iterative Step Given \(x^k\), if \(\nabla f(x^k)=0\) then stop [\(x^k\) is a solution to the SFP (1.1)]. Otherwise, compute
and
Let \(k:=k+1\) and return to Iterative Step.
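A Python sketch of a scheme of this type is given below. The concrete update used here, \(x^{k+1}=P_C\bigl ((1-\beta _k)(x^k-\lambda _k\nabla f(x^k))\bigr )\) with \(\lambda _k=\rho _k f(x^k)/\Vert \nabla f(x^k)\Vert ^2\), together with the choice \(\beta _k=1/(k+2)\), is an assumption for illustration and need not coincide with the exact formulas (3.2) and (3.3); also, where the Iterative Step stops when \(\nabla f(x^k)=0\), the sketch instead sets \(\lambda _k=0\) and keeps the \(\beta _k\)-relaxation, purely so that the numerical iterates continue toward the minimum-norm solution.

```python
import numpy as np

def cq_self_adaptive(A, proj_C, proj_Q, x0, n_iter=1000, rho=2.0):
    """Sketch of a self-adaptive CQ-type scheme (assumed update, see text).

    lam_k   = rho * f(x^k) / ||grad f(x^k)||^2      (no ||A|| needed)
    x^{k+1} = proj_C((1 - beta_k) * (x^k - lam_k * grad f(x^k)))
    with beta_k -> 0 and sum beta_k = infinity."""
    x = x0.astype(float)
    for k in range(n_iter):
        Ax = A @ x
        r = Ax - proj_Q(Ax)                 # (I - P_Q) A x^k
        grad = A.T @ r                      # grad f(x^k) = A^*(I - P_Q) A x^k
        g2 = np.dot(grad, grad)
        # if grad f(x^k) = 0, x^k already solves the SFP; keep only the
        # beta_k relaxation, which pulls the iterates toward minimum norm
        lam = rho * 0.5 * np.dot(r, r) / g2 if g2 > 0 else 0.0
        beta = 1.0 / (k + 2)
        x = proj_C((1.0 - beta) * (x - lam * grad))
    return x

proj_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
proj_Q = lambda y: np.clip(y, -1.0, 1.0)
A = np.array([[2.0, 1.0], [0.0, 3.0]])
x = cq_self_adaptive(A, proj_C, proj_Q, np.array([5.0, -4.0]))
# for this data the minimum-norm solution is x = 0, and the iterates shrink toward it
```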
For the convergence analysis of Algorithm 3.1, we need the following results.
Lemma 3.1
Let \(\{x^k\}\) be the sequence generated by Algorithm 3.1. Then, for each \(z\in \Omega \), the following inequality holds:
Proof
By Lemma 2.1 (2) and (3.3), we have
Note that
We now estimate the second term on the right-hand side of (3.5) as follows:
From (3.5) and (3.7), we arrive at
This completes the proof. \(\square \)
Lemma 3.2
The sequence \(\{x^k\}\) generated by Algorithm 3.1 is bounded.
Proof
By Lemma 3.1 and (3.2), we have
So, we get
By induction,
which implies that the sequence \(\{x^k\}\) is bounded. \(\square \)
Lemma 3.3
Let \(\{x^k\}\) be the sequence generated by Algorithm 3.1. Then the following inequality holds for all \(z\in \Omega \) and \(k\in {\mathbb {N}}\),
Proof
Combining with (3.4) of Lemma 3.1, we obtain
The proof is complete. \(\square \)
We are now in a position to establish the strong convergence of the sequence generated by Algorithm 3.1.
Theorem 3.1
Assume that \(\inf _k\rho _k(4-\rho _k)>0\). Then the sequence \(\{x^k\}\) generated by Algorithm 3.1 converges strongly to the minimum-norm element of \(\Omega \).
Proof
Let \(z:=P_{\Omega }0\). From Lemma 3.1, we have
From (3.2) and the assumption \(\inf _k\rho _k(4-\rho _k)>0\), we can find a constant \(\sigma \) such that \((1-\beta _k)\rho _k(4-\rho _k)\ge \sigma >0\) for all \(k\in {\mathbb {N}}\). Hence
or
So, we obtain
Now, we consider two possible cases.
Case 1 Put \(\Gamma _k:=\Vert x^k-z\Vert ^2\) for all \(k\in {\mathbb {N}}\). Assume that there is a \(k_0\ge 0\) such that for each \(k\ge k_0\), \(\Gamma _{k+1}\le \Gamma _k\). In this case, \(\lim _{k\rightarrow \infty }\Gamma _k\) exists and \(\lim _{k\rightarrow \infty }(\Gamma _k-\Gamma _{k+1})=0. \)
Since \(\lim _{k\rightarrow \infty }\beta _k=0\), it follows from (3.10) that
It follows from (3.11) that
Since \(\nabla f\) is Lipschitz, we have
Hence, \(\{\nabla f(x^k)\}\) is bounded. This together with (3.11) implies that \(f(x^k)\rightarrow 0\) as \(k\rightarrow \infty \). We now show that \(\omega _w(x^k)\subset \Omega \). Let \({\bar{x}}\in \omega _w(x^k)\) be an arbitrary element. Since \(\{x^k\}\) is bounded (by Lemma 3.2), there exists a subsequence \(\{x^{k_j}\}\) of \(\{x^k\}\) such that \(x^{k_j}\rightharpoonup {\bar{x}}\). By the weak lower semicontinuity of f, we obtain
$$\begin{aligned} f({\bar{x}})\le \liminf _{j\rightarrow \infty }f(x^{k_j})=0. \end{aligned}$$
We immediately deduce that \(f({\bar{x}})=0\), i.e., \(A{\bar{x}}\in Q\). Moreover, since the iterates lie in C and C is weakly closed, \({\bar{x}}\in C\), and hence \({\bar{x}}\in \Omega \). The choice of \({\bar{x}}\) in \(\omega _w(x^k)\) was arbitrary, and so we conclude that \(\omega _w(x^k)\subset \Omega \).
Using Lemma 3.3, we have
To apply Lemma 2.4, it remains to show that \(\limsup _{k\rightarrow \infty }\langle x^{k}-z,-z\rangle \le 0\). Indeed, since \(z=P_{\Omega }0\), by using the property of the projection [Lemma 2.1 (1)], we arrive at
By applying Lemma 2.4 to (3.12) with the data:
we immediately deduce that the sequence \(\{x^k\}\) converges strongly to \(z=P_{\Omega }0\). Furthermore, it follows again from Lemma 2.1 (1) that
Hence
from which we infer that z is the minimum-norm solution of the SFP (1.1).
Case 2 Assume that there exists a subsequence \(\{\Gamma _{k_m}\}\subset \{\Gamma _{k}\}\) such that \(\Gamma _{k_m}\le \Gamma _{k_m+1}\) for all \(m\in {\mathbb {N}}\). In this case, we can define \(\tau :{\mathbb {N}}\rightarrow {\mathbb {N}}\) by
Then we have from Lemma 2.3 that \(\tau (k)\rightarrow \infty \) as \(k\rightarrow \infty \) and \(\Gamma _{\tau (k)}<\Gamma _{\tau (k)+1}\). So, we have from (3.10) that
Following the same arguments as in the proof of Case 1, we have that
and
where \(\beta _{\tau (k)}\rightarrow 0\).
Since \(\Gamma _{\tau (k)}<\Gamma _{\tau (k)+1}\), we have from (3.14) that
Combining (3.13) and (3.15) yields
and hence
From (3.14), we have
Thus
Therefore, by Lemma 2.3, we obtain
Consequently, \(\{x^k\}\) converges strongly to \(z=P_{\Omega }0\). The proof is complete. \(\square \)
Remark 3.1
One main advantage of our algorithm compared to others is that the step-sizes are directly computed at each iteration and do not depend on the norm of A. Therefore, Theorem 3.1 improves Theorem 5.5 of Chuang [9], Theorem 4.3 of Wang and Xu [18], Theorem 5.5 of Xu [22], and Theorem 3.1 of Yao et al. [24].
4 A Relaxation Algorithm
When the sets C and Q are complicated, the computation of \(P_C\) and \(P_Q\) is expensive. This may affect the applicability of Algorithm 3.1. To overcome this drawback, we will use the relaxation method of Yang [23]. Consider the split feasibility problem (1.1) in which the involved sets C and Q are given as sub-level sets of convex functions, i.e.,
where \(c:H_1\rightarrow {\mathbb {R}}\) and \(q:H_2\rightarrow {\mathbb {R}}\) are lower semicontinuous convex functions. We assume that \(\partial c\) and \(\partial q\) are bounded operators (i.e., bounded on bounded sets). Set
where \(\xi ^k\in \partial c(x^k)\), and
where \(\zeta ^k\in \partial q(Ax^k)\). Obviously, \(C_k\) and \(Q_k\) are half-spaces, and it is easy to check from the subdifferential inequality that \(C_k\supset C\) and \(Q_k\supset Q\) hold for every \(k\ge 0\). We now define
where \(Q_k\) is given as in (4.2). We have
Now we introduce the following relaxation version of Algorithm 3.1.
Algorithm 4.1
(A relaxation CQ algorithm for SFP (1.1))
Initialization Take two positive sequences \(\{\beta _k\}\) and \(\{\rho _k\}\) satisfying the following conditions:
Select initial \(x^0\in H_1\) and set \(k:=0\).
Iterative Step Given \(x^k\), if \(\nabla f_k(x^k)=0\) then stop [\(x^k\) is a solution to the SFP (1.1)]. Otherwise, compute
and
Let \(k:=k+1\) and return to Iterative Step.
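The practical gain of the relaxation is that \(C_k\) and \(Q_k\), being half-spaces, admit closed-form projections. A sketch for \(C_k=\{x: c(x^k)+\langle \xi ^k, x-x^k\rangle \le 0\}\) follows, using the illustrative choice \(c(x)=\Vert x\Vert ^2-4\) (so that \(\xi ^k=2x^k\)); the points are also illustrative.

```python
import numpy as np

def proj_Ck(y, xk, c, dc):
    """Project y onto C_k = {x : c(x^k) + <xi^k, x - x^k> <= 0}, xi^k = dc(x^k)."""
    xi = dc(xk)
    val = c(xk) + np.dot(xi, y - xk)
    if val <= 0:
        return y.copy()                         # y already lies in C_k
    return y - (val / np.dot(xi, xi)) * xi      # closed-form half-space projection

# illustrative level set: C = {x : ||x||^2 - 4 <= 0}, with subgradient 2x
c  = lambda x: np.dot(x, x) - 4.0
dc = lambda x: 2.0 * x

xk = np.array([3.0, 0.0])                       # current iterate (outside C)
p = proj_Ck(np.array([4.0, 1.0]), xk, c, dc)
# here C_k = {x : 6 x_1 <= 13}, so p = (13/6, 1)
```

Since \(c\) is convex, the subdifferential inequality guarantees \(C\subset C_k\), so projecting onto \(C_k\) never cuts off any point of C.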
The following lemma is quite helpful to analyze the convergence of Algorithm 4.1.
Lemma 4.1
If \(\nabla f_k(x^k)=0\), then \(x^k\in \Omega \).
Proof
If \(\nabla f_k(x^k)=0\) for some \(x^k\in C_k\), then
It is easy to see that \(Ax^k\in Q_k\). By (4.1) and (4.2) we have \(c(x^k)\le 0\) and \(q(Ax^k)\le 0\). So \(x^k\in C\) and \(Ax^k\in Q\) and the proof is complete. \(\square \)
The strong convergence of Algorithm 4.1 is proved below.
Theorem 4.1
Assume that \(\inf _k\rho _k(4-\rho _k)>0\). Then the sequence \(\{x^k\}\) generated by Algorithm 4.1 converges strongly to the minimum-norm element of \(\Omega \).
Proof
Let \(z:=P_{\Omega }0\). Since \(\inf _k\rho _k(4-\rho _k)>0\), we may assume without loss of generality that there exists \(\epsilon >0\) such that \(\rho _k(4-\rho _k)(1-\beta _k)\ge \epsilon \) for all \(k\in {\mathbb {N}}\). Arguing as in the proof of Theorem 3.1 and replacing f, C and Q with \(f_k\), \(C_k\) and \(Q_k\), respectively, we have
From (4.7) and (3.12), we obtain the following two inequalities:
where
In order to use Lemma 2.5 with the data \(s_k:=\Vert x^{k}-z\Vert ^2\), it remains to show that for any subsequence \(\{k_l\}\) of \(\{k\}\),
A similar argument as in the proof of Theorem 3.1 shows that
or equivalently,
Since \(\{x^{k_l}\}\) is bounded, there exists a subsequence \(\{x^{k_{l_m}}\}\) of \(\{x^{k_l}\}\) which converges weakly to \({\bar{x}}\). Without loss of generality, we can assume that \(x^{k_l}\rightharpoonup {\bar{x}}\). Since \(P_{Q_{k_{l}}}Ax^{k_{l}}\in Q_{k_{l}}\), we have
where \(\zeta ^{k_{l}}\in \partial q(Ax^{k_{l}})\). From the boundedness of \(\{\zeta ^{k_{l}}\}\) and (4.9), we have
From the weak lower semicontinuity of the convex function q(x) and since \(x^{k_l}\rightharpoonup {\bar{x}}\), it follows from (4.13) that
which means that \(A{\bar{x}}\in Q\).
We will prove that
Indeed, from (4.6) we obtain
as \(l\rightarrow \infty \).
Further, using the fact that \(x^{k_{l}+1}\in C_{k_{l}}\) and by the definition of \(C_{k_{l}}\), we get
where \(\xi ^{k_{l}}\in \partial c(x^{k_{l}})\). Due to the boundedness of \(\xi ^{k_{l}}\) and (4.12), we have
as \(l\rightarrow \infty \). Similarly, we obtain that \(c({\bar{x}})\le 0\), i.e., \({\bar{x}}\in C\).
We now deduce that
Finally, using Lemma 2.5, we have \(\Vert x^k-z\Vert \rightarrow 0\). The proof is complete. \(\square \)
5 Numerical Experiments
In this section, we provide numerical examples to illustrate the performance of Algorithm 3.1.
Example 5.1
Let \(H_{1}=H_{2}=L_{2}[0,1]\) with the inner product given by
Let \(C=\{x\in L_{2}[0,1]:\Vert x\Vert _{L_{2}}\le 1\}\) and \(Q=\{x\in L_{2}[0,1]:\langle x,\frac{t}{2}\rangle =0\}\). Find \(x\in C\) such that \(Ax\in Q\), where \((Ax)(t)=\frac{x(t)}{2}\).
Choose \(\beta _{k}=\frac{1}{k+1}\) for all \(k\in {\mathbb {N}}\). The stopping criterion is defined by
We now study its convergence in terms of the number of iterations and the CPU time for different choices of the step-sizes \(\{\rho _k\}\), as reported in Table 1.
The error plots of \(E_{k}\) for each choice of \(x^1\) are shown in Figs. 1 and 2, respectively.
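A discretized version of Example 5.1 can be run directly: below, \(L_2[0,1]\) is replaced by a uniform grid with quadrature inner products, the update is the assumed self-adaptive form \(x^{k+1}=P_C\bigl ((1-\beta _k)(x^k-\lambda _k\nabla f(x^k))\bigr )\) with \(\lambda _k=\rho _k f(x^k)/\Vert \nabla f(x^k)\Vert ^2\) (not necessarily the paper's exact formulas), and the starting function and \(\rho _k=1.5\) are illustrative choices rather than the paper's data.

```python
import numpy as np

# Discretize L2[0,1] on a uniform grid; inner products become quadrature sums.
n = 1000
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
ip  = lambda u, v: h * np.dot(u, v)             # approximates <u, v> in L2[0,1]
nrm = lambda u: np.sqrt(ip(u, u))

w = t / 2.0                                      # Q = {y : <y, t/2> = 0}
proj_C = lambda x: x if nrm(x) <= 1.0 else x / nrm(x)
proj_Q = lambda y: y - (ip(y, w) / ip(w, w)) * w
A = lambda x: 0.5 * x                            # (Ax)(t) = x(t)/2, self-adjoint

x = 2.0 * t + 1.0                                # illustrative starting function
rho = 1.5
for k in range(1, 2000):
    r = A(x) - proj_Q(A(x))                      # (I - P_Q) A x^k
    grad = A(r)                                  # A^*(I - P_Q) A x^k
    g2 = ip(grad, grad)
    if g2 == 0:                                  # x^k solves the SFP; stop
        break
    lam = rho * 0.5 * ip(r, r) / g2
    beta = 1.0 / (k + 1)
    x = proj_C((1.0 - beta) * (x - lam * grad))
# the iterates approach the minimum-norm solution, which here is x = 0
```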
We next provide a numerical example to illustrate the performance of the modified relaxed CQ method (Algorithm 4.1).
Example 5.2
Let \(H_{1}=H_{2}={\mathbb {R}}^3\), \(C=\{x=(a,b,c)^T\in {\mathbb {R}}^3: a^2+b^2-4\le 0\}\) and \(Q=\{x=(a,b,c)^T\in {\mathbb {R}}^3: a+c^2-1\le 0\}\). Find \(x\in C\) such that \(Ax\in Q\), where
$$\begin{aligned} A=\begin{pmatrix} -1 &{} 3 &{} 5 \\ 5 &{} 3 &{} 2 \\ 2 &{} 1 &{} 0 \end{pmatrix}. \end{aligned}$$
Choose \(\beta _{k}=\frac{1}{k+1}\) for all \(k\in {\mathbb {N}}\). The stopping criterion is defined by
The numerical experiments for each case of \(\rho _{k}\) are shown in Figs. 3 and 4, respectively (Table 2).
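For this example the subgradients are the ordinary gradients, \(\partial c(x)=\{(2a,2b,0)^T\}\) and \(\partial q(y)=\{(1,0,2c)^T\}\). The sketch below runs a relaxed scheme of the Algorithm 4.1 type on this data; the update form (half-space projections combined with the assumed step-size \(\lambda _k=\rho _k f_k(x^k)/\Vert \nabla f_k(x^k)\Vert ^2\) and \(\beta _k=1/(k+2)\)) and the starting point are illustrative assumptions.

```python
import numpy as np

A = np.array([[-1.0, 3.0, 5.0],
              [ 5.0, 3.0, 2.0],
              [ 2.0, 1.0, 0.0]])

c  = lambda x: x[0]**2 + x[1]**2 - 4.0           # C = {x : c(x) <= 0}
dc = lambda x: np.array([2.0 * x[0], 2.0 * x[1], 0.0])
q  = lambda y: y[0] + y[2]**2 - 1.0              # Q = {y : q(y) <= 0}
dq = lambda y: np.array([1.0, 0.0, 2.0 * y[2]])

def proj_Qk(z, yk):
    """P_{Q_k} z, where Q_k = {y : q(y^k) + <dq(y^k), y - y^k> <= 0}."""
    g = dq(yk)
    val = q(yk) + np.dot(g, z - yk)
    return z if val <= 0 else z - (val / np.dot(g, g)) * g

def proj_Ck(z, xk):
    """P_{C_k} z, where C_k = {x : c(x^k) + <dc(x^k), x - x^k> <= 0}."""
    g = dc(xk)
    val = c(xk) + np.dot(g, z - xk)
    return z if val <= 0 else z - (val / np.dot(g, g)) * g

x = np.array([1.0, 2.0, 3.0])                    # illustrative starting point
rho = 2.0
for k in range(2000):
    Ax = A @ x
    r = Ax - proj_Qk(Ax, Ax)                     # (I - P_{Q_k}) A x^k
    grad = A.T @ r                               # grad f_k(x^k)
    g2 = np.dot(grad, grad)
    if g2 == 0:                                  # x^k solves the SFP (Lemma 4.1)
        break
    lam = rho * 0.5 * np.dot(r, r) / g2
    beta = 1.0 / (k + 2)
    x = proj_Ck((1.0 - beta) * (x - lam * grad), x)
# at termination x is (numerically) feasible: c(x) <= 0 and q(Ax) <= 0
```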
Remark 5.1
From our numerical experiments, it is observed that the different choices of \(x^{1}\) have no effect on the CPU time required for the convergence of our algorithm. However, if the step-sizes \(\{\rho _{k}\}\) are taken close to 4, then the number of iterations and the CPU time decrease slightly.
References
Ansari, Q.H., Rehan, A.: Split feasibility and fixed point problems. In: Ansari, Q.H. (ed.) Nonlinear Analysis, Approximation Theory, Optimization and Applications, pp. 281–322. Springer, India (2014)
Aubin, J.P.: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin (1993)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
Chuang, S.S.: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 350 (2013)
Dang, Y., Gao, Y.: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 27, 015007 (2011)
Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13, 1383–1394 (2017)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Qu, B., Xiu, N.: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655–1665 (2005)
Tian, M., Zhang, H.-F.: The regularized CQ algorithm without a priori knowledge of operator norm for solving the split feasibility problem. J. Inequal. Appl. 2017, 207 (2017)
Wang, F.: Polyak’s gradient method for split feasibility problem constrained by level sets. Numer. Algor. 77, 925–938 (2018). https://doi.org/10.1007/s11075-017-0347-4
Wang, F., Xu, H.-K.: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 102085 (2010)
Wang, F., Xu, H.-K.: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 74, 4105–4111 (2011)
Yu, X., Shahzad, N., Yao, Y.: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 6, 1447–1462 (2012)
Xu, H.-K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Xu, H.-K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Yang, Q.: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 20, 1261–1266 (2004)
Yao, Y.H., Gang, W.J., Liou, Y.C.: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 140679 (2012)
Zhou, Y., Haiyun, Z., Wang, P.: Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces. J. Inequal. Appl. 2015, 135 (2015)
Acknowledgements
P. Cholamjiak was supported by the Thailand Research Fund and the Commission on Higher Education under Grant MRG5980248. S. Suantai was partially supported by Chiang Mai University.
Communicated by Mohammad Sal Moslehian.
Vinh, N.T., Cholamjiak, P. & Suantai, S. A New CQ Algorithm for Solving Split Feasibility Problems in Hilbert Spaces. Bull. Malays. Math. Sci. Soc. 42, 2517–2534 (2019). https://doi.org/10.1007/s40840-018-0614-0
Keywords
- Split feasibility problem
- Variational inequality
- Gradient projection method
- Weak convergence
- Strong convergence
- Minimum-norm solution