Abstract
In this paper, we introduce two iterative algorithms for the split feasibility problem in real Hilbert spaces by reformulating it as a fixed point equation. Under suitable conditions, weak and strong convergence theorems are established. As a consequence, we obtain weakly and strongly convergent iterative sequences for the split equality problem introduced by Moudafi. The efficiency of the proposed algorithms is illustrated by numerical experiments. Our results improve and extend the corresponding results announced by many others.
1 Introduction
The split feasibility problem (SFP) in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems arising in phase retrieval and medical image reconstruction [2], with notable applications in intensity-modulated radiation therapy [3,4,5].
In this paper, we work in the framework of infinite-dimensional Hilbert spaces. Let C and Q be nonempty closed convex subsets of real Hilbert spaces \(H_1\) and \(H_2\), respectively. The SFP can mathematically be formulated as the problem of finding a point \(x^*\in C\) with the property
where \(A:H_1\rightarrow H_2\) is a bounded linear operator.
Throughout this paper, we use \(\Gamma \) to denote the solution set of the SFP (1.1), i.e.,
and assume the consistency of (1.1) so that \(\Gamma \) is closed, convex and nonempty.
Note that the SFP (1.1) can be formulated as a fixed point equation by using the fact
where \(P_C\) and \(P_Q\) are the (orthogonal) projections onto C and Q, respectively, \(\gamma >0\) is any positive constant and \(A^*\) denotes the adjoint of A. That is, \(x^*\) solves the SFP (1.1) if and only if \(x^*\) solves the fixed point equation (1.2) (see [6] for the details). This implies that we can use fixed point algorithms (see [7,8,9,10,11,12,13,14,15]) to solve the SFP. Byrne [2, 16] proposed the so-called CQ algorithm, which generates a sequence \(\{x_k\}\) by
where \(\gamma \in (0,2/ \lambda )\) with \(\lambda \) being the spectral radius of the operator \(A^*A\). The CQ algorithm only involves the computations of the projections \(P_C\) and \(P_Q\) onto the sets C and Q, respectively, and is therefore implementable in the case where \(P_C\) and \(P_Q\) have closed-form expressions.
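As a concrete illustration, the CQ recursion can be sketched in a few lines of Python/NumPy. The sets C, Q and the matrix A below are toy data chosen so that both projections have closed forms; they are not taken from any example in this paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Byrne's CQ iteration: x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k),
    with gamma in (0, 2/lambda), lambda the spectral radius of A^T A."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy instance: C = nonnegative orthant in R^2, Q = box [0, 1]^3.
A = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]])
proj_C = lambda z: np.maximum(z, 0.0)
proj_Q = lambda z: np.clip(z, 0.0, 1.0)
gamma = 1.9 / np.linalg.norm(A, 2) ** 2   # ||A||_2^2 = spectral radius of A^T A
x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, -3.0]), gamma)
print(x)   # a point of C with A x in Q; here the iterates settle at [0.5, 0.]
```

Note that each iteration needs only matrix-vector products and the two closed-form projections, which is exactly the implementability remark made above.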
We can reformulate the SFP (1.1) as an optimization problem. Indeed, \(x\in \Gamma \) means that \(x\in C\) and \(Ax-q =0\) for some \(q\in Q\). This motivates us to introduce the (convex) objective function
and consider the convex minimization problem
The objective function f is continuously differentiable with gradient given by
(Here \(A^*\) is the adjoint of A.) Since \(I-P_Q\) is (firmly) nonexpansive, \(\nabla f\) is Lipschitz continuous with Lipschitz constant \(L=\Vert A\Vert ^2\). It is well known that the gradient-projection algorithm (GPA) is one of the most powerful methods for solving constrained optimization problems. Recall that the GPA, for the minimization problem (1.5), generates a sequence \(\{x_k\}\) via the recursion:
where \(\gamma \) is chosen in the interval (0, 2 / L) with L being the Lipschitz constant of \(\nabla f\). For solving (1.5), the GPA with gradient \(\nabla f\) given as in (1.6) is exactly the CQ algorithm (1.3).
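The gradient identity \(\nabla f(x)=A^*(I-P_Q)Ax\) in (1.6) can be checked numerically. The sketch below compares it with central finite differences on random data; the box Q is a hypothetical choice for which \(P_Q\) is explicit.

```python
import numpy as np

# f(x) = 0.5 * ||(I - P_Q) A x||^2 with a box Q = [0, 1]^4;
# the stated gradient is A^T (I - P_Q) A x, which is ||A||^2-Lipschitz.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
proj_Q = lambda y: np.clip(y, 0.0, 1.0)

def f(x):
    r = A @ x - proj_Q(A @ x)
    return 0.5 * float(r @ r)

def grad_f(x):
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

x = rng.standard_normal(3)
eps = 1e-6
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
print(np.max(np.abs(num - grad_f(x))))   # small: the two gradients agree
```

Since \(\frac{1}{2}\mathrm{dist}^2(\cdot ,Q)\) is continuously differentiable, the finite-difference check is valid even when \(Ax\) sits near the boundary of Q.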
By (1.5), we can see the SFP (1.1) can be written as the following convex separable minimization problem
where f(x) is defined by (1.4) and \(\iota _C(x)\) is an indicator function of the set C defined by
Chen et al. [17] designed and discussed an efficient algorithm for minimizing the sum of two proper lower semi-continuous convex functions, i.e.,
where \(f_1\in \Gamma _0(R^m)\), \(f_2\in \Gamma _0(R^n)\), \(f_2\) is differentiable on \(R^n\) with \(1/\beta \)-Lipschitz continuous gradient for some \(\beta \in (0,+\infty )\), and \(B: R^n\rightarrow R^m\) is a linear transform. When \(B = I\) and \(m=n\), the problem (1.9) becomes the following problem often considered in the literature
For any two positive numbers \(\lambda \) and \(\gamma \), define \(T: R^m \times R^n\rightarrow R^m\) as
and \(S : R^m \times R^n\rightarrow R^n\) as
Denote \(G: R^m\times R^n\rightarrow R^m\times R^n\) as
The authors in [17] obtained the following fixed point formulation for the solution of the convex separable problem (1.9).
Theorem 1.1
[17] (Theorem 3.1) Let \(\lambda \) and \(\gamma \) be two positive numbers. Suppose that \(x^*\) is a solution of (1.9). Then there exists \(v^*\in R^m\) such that
In other words, \(u^*= (v^*, x^*)\) is a fixed point of G. Conversely, if \(u^*\in R^m \times R^n\) is a fixed point of G, with \(u^*= (v^*, x^*)\), \(v^*\in R^m\), \(x^*\in R^n\), then \(x^*\) is a solution of (1.9).
They proposed the following Picard iteration \(\{u_k\}\) of G
It was shown in [17] that, under appropriate conditions, \(\{u_k\}\) converges to a fixed point of G and \(\{x_k\}\) converges to a solution of problem (1.9). Since x is the primal variable related to (1.9), it is natural to ask what role the variable v plays in the above algorithm. It was found in [17] that v is actually the dual variable of the primal-dual form related to (1.9).
Motivated by the above works, we construct two algorithms for the SFP (1.1). Note that the step-size \(\gamma \) in (1.14) is fixed. The aim of this paper is twofold: first, to propose a new algorithm with variable step-sizes in infinite-dimensional Hilbert spaces; second, to modify the proposed algorithm so that it converges strongly. As a consequence, we obtain weakly and strongly convergent sequences for the split equality problem introduced by Moudafi [18]. Finally, some numerical results are presented to show the efficiency of the proposed algorithms.
2 Preliminaries
Throughout this paper, we always assume that H is a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). Let I denote the identity operator on H. Let \(U:H\rightarrow H\) be a mapping. A point \(x\in H\) is said to be a fixed point of U provided \(Ux=x\). In this paper, we use F(U) to denote the fixed point set of U and use \(\rightarrow \) to denote strong convergence. Here and in what follows, for a real Hilbert space H, \(\Gamma _0(H)\) denotes the collection of all proper lower semi-continuous convex functions from H to \((-\infty ,+\infty ]\).
Definition 2.1
An operator \(U:H\rightarrow H\) is said to be
(i) nonexpansive if
(ii) firmly nonexpansive if
It is easy to see that U is firmly nonexpansive if and only if
More information on firmly nonexpansive mappings can be found on pages 42–44 of the book [19].
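The firm nonexpansiveness of metric projections is easy to test numerically. The sketch below checks the characterizing inequality \(\Vert Ux-Uy\Vert ^2\le \langle x-y, Ux-Uy\rangle \) for \(U=P_C\) with C a box (a hypothetical choice with an explicit projection):

```python
import numpy as np

# Check the firm nonexpansiveness inequality
#   ||Ux - Uy||^2 <= <x - y, Ux - Uy>
# for U = P_C, the projection onto the box C = [-1, 1]^5.
rng = np.random.default_rng(1)
P = lambda z: np.clip(z, -1.0, 1.0)

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = float(np.sum((P(x) - P(y)) ** 2))
    rhs = float((x - y) @ (P(x) - P(y)))
    ok = ok and (lhs <= rhs + 1e-12)   # small slack for rounding
print(ok)   # True
```

The same check applied to \(I-P_C\) also passes, in line with Lemma 2.1 below.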
Definition 2.2
An operator \(U:H\rightarrow H\) is called demi-closed at the origin if, for any sequence \(\{x_n\}\) which converges weakly to x and such that \(\{U(x_n)\}\) converges strongly to 0, we have \(Ux=0\).
Definition 2.3
An operator \(h:H\rightarrow H\) is said to be
(i) L-Lipschitzian if there exists a positive constant L such that
(ii) \(\rho \)-strongly pseudo-contractive if there exists a constant \(\rho \in [0,1)\) such that
(iii) \(\rho \)-contraction if there exists a constant \(\rho \in [0,1)\) such that
Definition 2.4
An operator \(h:C\rightarrow H\) is said to be \(\eta \)-strongly monotone, if there exists a positive constant \(\eta \) such that
It is obvious that if h is \(\rho \)-strongly pseudo-contractive, then \(I-h\) is a \((1-\rho )\)-strongly monotone mapping. Recall that the variational inequality problem [20] is to find a point \(x^*\in C\) such that
where \(F:C\rightarrow H\) is a nonlinear operator. It is well known [21] that if \(F:C\rightarrow H\) is a Lipschitzian and strongly monotone operator, then the above variational inequality problem has a unique solution.
Recall that the metric (nearest point) projection from H onto a nonempty closed convex subset C of H, denoted by \(P_C\), is defined as follows: for each \(x\in H\),
For \(f\in \Gamma _0(H)\) and \(\rho \in (0,+\infty )\), the proximal operator of f of order \(\rho \), denoted by prox\(_{\rho f}\), is defined by, for each \(x\in H\),
It is well known that
for \(f =\iota _C\) and \(\rho >0\). Conversely, if prox\(_{\rho f} = P_C\), one can take \(f=\iota _C\). Proximity operators are therefore generalizations of projection operators.
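The relation prox\(_{\rho \iota _C}=P_C\) can be verified by brute force in one dimension: minimizing \(\iota _C(u)+\Vert u-x\Vert ^2/(2\rho )\) over a fine grid recovers the projection, independently of \(\rho \). The interval \(C=[0,1]\) below is a hypothetical choice.

```python
import numpy as np

# prox_{rho f}(x) = argmin_u f(u) + ||u - x||^2 / (2 rho).
# For f = iota_C the minimizer is P_C(x), whatever rho > 0 is.
u = np.linspace(-2.0, 3.0, 500001)                 # grid step 1e-5
iota = np.where((u >= 0.0) & (u <= 1.0), 0.0, np.inf)

def prox_bruteforce(x, rho):
    return u[np.argmin(iota + (u - x) ** 2 / (2 * rho))]

for x in (-1.3, 0.4, 2.2):
    for rho in (0.1, 1.0, 10.0):
        assert abs(prox_bruteforce(x, rho) - np.clip(x, 0.0, 1.0)) < 1e-5
print("prox of the indicator equals the projection")
```

The grid resolution (1e-5) bounds the discretization error of the brute-force minimizer, which is why the comparison tolerance matches it.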
Lemma 2.1
(Lemma 2.4 of [22]). Let f be a function in \(\Gamma _0(H)\). Then prox\(_f\) and \(I-\mathrm{prox}_f\) are both firmly nonexpansive operators.
Lemma 2.2
[23, 24] Let E be a uniformly convex Banach space, K be a nonempty closed convex subset of E and \(U:K\rightarrow K\) be a nonexpansive operator. Then \(I-U\) is demi-closed at the origin.
Lemma 2.3
Let K be a nonempty closed convex subset of a Hilbert space H. Let \(\{x_k\}\) be a bounded sequence which satisfies the following properties:
-
(i)
every weak limit point of \(\{x_k\}\) lies in K;
-
(ii)
\(\lim _{k\rightarrow \infty }\Vert x_k-x\Vert \) exists for every \(x\in K\).
Then \(\{x_k\}\) converges weakly to a point in K.
Now, taking \(f_1(x)=\iota _C(x)\), \(f_2(x)=\frac{1}{2}\Vert (I-P_Q)Ax\Vert ^2\) and \(B=I\), it follows from (1.6) and (2.3) that
for any two positive numbers \(\lambda \) and \(\gamma \). We can obtain a fixed point formulation for the solution of the SFP (1.1) from Theorem 1.1.
For any two positive numbers \(\lambda \) and \(\gamma \), define \(g:H_1\rightarrow H_1\) as
\(T : H_1 \times H_1\rightarrow H_1\) as
and \(S : H_1 \times H_1\rightarrow H_1\) as
Denote \(G : H_1\times H_1\rightarrow H_1\times H_1\) as
Lemma 2.4
(Theorem 3.1 of [17]) Let \(\lambda \) and \(\gamma \) be two positive numbers. Suppose that \(x^*\) is a solution of the SFP (1.1). Then there exists \(v^*\in H_1\) such that
In other words, \(u^*= (v^*, x^*)\) is a fixed point of G. Conversely, if \(u^*\in H_1 \times H_1\) is a fixed point of G with \(u^*= (v^*, x^*)\), then \(x^*\) is a solution of the SFP (1.1).
Remark 2.1
For any two positive numbers \(\lambda \) and \(\gamma \), G is defined by (2.7). Then
So we can denote \(\Omega :=F(G)\) for any two positive numbers \(\lambda \) and \(\gamma \).
Indeed, for any two positive numbers \(\lambda \) and \(\gamma \), let \(u^*=(v^*, x^*)\in H_1\times H_1\) be a fixed point of G defined in (2.7). We have \(v^*=T(v^*,x^*)\) and \(x^*=S(v^*, x^*)\). Using Lemma 2.4, we see that \(x^*\) is a solution of (1.1) and hence \(x^*\in C\), \(Ax^*\in Q\). It follows that \(g(x^*)=x^*\) and \(x^*=S(v^*, x^*)=x^*-\lambda v^*\). Hence \(v^*=0\), which implies that \(F(G)\subseteq \{(0,x^*):x^*\in \Gamma \}\). Conversely, we easily obtain \(\{(0,x^*):x^*\in \Gamma \}\subseteq F(G)\). So, \(F(G)=\{(0,x^*):x^*\in \Gamma \}\) for any two positive numbers \(\lambda \) and \(\gamma \). \(\square \)
Lemma 2.5
[25] Assume \(\{s_k\}\) is a sequence of nonnegative real numbers such that
where \(\{\lambda _k\}\) is a sequence in (0, 1), \(\{\eta _k\}\) is a sequence of nonnegative real numbers and \(\{\delta _k\}\) and \(\{\mu _k\}\) are two sequences in \(\mathbb {R}\) such that
-
(i)
\(\Sigma _{k=1}^\infty \lambda _k=\infty \),
-
(ii)
\(\lim _{k\rightarrow \infty }\mu _k = 0\),
-
(iii)
\(\lim _{l\rightarrow \infty }\eta _{k_l}= 0\) implies \(\limsup _{l\rightarrow \infty }\delta _{k_l}\le 0\) for any subsequence \(\{k_l\}\subset \{k\}\).
Then \(\lim _{k\rightarrow \infty }s_k= 0\).
3 Weak convergence for SFP (1.1)
Now we can use fixed point equations to introduce a new iterative algorithm for the SFP (1.1) with variable step-sizes in infinite-dimensional Hilbert spaces.
Algorithm 3.1
Initialization: Let \(x_0,v_0\in H_1\) be arbitrary.
Iterative step: For \(k\in N\), let
where \(\{\gamma _k\}\subset (0,\frac{2}{\Vert A\Vert ^2})\), \(\lambda \in (0,1]\).
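The displayed recursion (3.1) is not reproduced above. The sketch below assumes it is the primal-dual fixed point scheme of Chen et al. [17] specialized to the SFP (with \(B=I\), \(f_1=\iota _C\) and \(\nabla f_2(x)=A^*(I-P_Q)Ax\)), which is consistent with Remark 3.2: for \(\lambda =1\) the scheme collapses to the CQ algorithm (1.3). This is a reconstruction under that assumption, not the paper's verbatim formula.

```python
import numpy as np

# Sketch of Algorithm 3.1 (assumed recursion, PDFP^2O specialized to the SFP):
#   g_k(x)  = x - gamma_k * A^T (I - P_Q) A x
#   v_{k+1} = (I - P_C)(g_k(x_k) + (1 - lam) v_k)
#   x_{k+1} = g_k(x_k) - lam * v_{k+1}
def algorithm_31(A, proj_C, proj_Q, x0, v0, gammas, lam):
    x, v = x0.astype(float), v0.astype(float)
    for gk in gammas:
        Ax = A @ x
        g = x - gk * (A.T @ (Ax - proj_Q(Ax)))
        w = g + (1.0 - lam) * v
        v = w - proj_C(w)
        x = g - lam * v
    return x, v

# Toy instance: C = nonnegative orthant, Q = [0, 1]^3 (hypothetical data).
A = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]])
proj_C = lambda z: np.maximum(z, 0.0)
proj_Q = lambda z: np.clip(z, 0.0, 1.0)
gamma = 1.9 / np.linalg.norm(A, 2) ** 2
x, v = algorithm_31(A, proj_C, proj_Q, np.array([5.0, -3.0]),
                    np.zeros(2), [gamma] * 1000, lam=0.7)
print(x, np.linalg.norm(v))   # x approximately solves the SFP, v -> 0
```

With \(\lambda =1\) and \(v_0=0\) the update reduces to \(x_{k+1}=P_C(g_k(x_k))\), i.e. the CQ step, and the limit of \(\{v_k\}\) is 0, matching Remark 2.1.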
For the convergence analysis of Algorithm 3.1, we will first prove a key inequality for general cases (cf. Theorem 3.1). Denote
where v, \(x\in H_1\). For an element \(u = (v, x)\in H_1\times H_1\), let
We can easily see that \(\Vert \cdot \Vert _\lambda \) is a norm over the product space \(H_1\times H_1\) whenever \(\lambda > 0\).
Theorem 3.1
For any two elements \(u_1=(v_1,x_1)\), \(u_2=(v_2,x_2)\) in \(H_1\times H_1\), there holds
Proof
By Lemma 2.1, \(I-P_C\) is a firmly nonexpansive operator. This together with (3.2) yields
It follows from (3.2) that
Observing the definitions in (3.2) and (3.3), by (3.5) and (3.6), we have
Since
we have
It follows from Lemma 2.1 and the definition of firmly nonexpansive operators that
which implies that
Recalling the definition in (3.3), we easily see that (3.4) is a direct consequence of (3.9) and (3.11). \(\square \)
Remark 3.1
If \(0<\gamma <\frac{2}{\Vert A\Vert ^2}\) and \(0 <\lambda \le 1\), then G defined in (2.7) is nonexpansive under the norm \(\Vert \cdot \Vert _\lambda \).
Theorem 3.2
Suppose \(0<\liminf _{k\rightarrow \infty }\gamma _k \le \limsup _{k\rightarrow \infty }\gamma _k<\frac{2}{\Vert A\Vert ^2}\), \(0 <\lambda \le 1\). Let \(u_k = (v_k, x_k )\) be the sequence generated by Algorithm 3.1. Then the sequence \(\{u_k\}\) weakly converges to a point in \(\Omega \) (i.e. F(G)) under the norm \(\Vert \cdot \Vert _\lambda \) and the sequence \(\{x_k\}\) weakly converges to a solution of the SFP (1.1).
Proof
From Algorithm 3.1, we have \(u_{k+1}=G_k(u_k)\). Let \(u^*\in \Omega \); then \(u^*=(v^*, x^*)\in H_1\times H_1\) is a fixed point of G defined in (2.7) for any two positive numbers \(\lambda \) and \(\gamma \). So, \(x^*\in \Gamma \), \(v^*=0\) and \(u^*=(v^*, x^*)\) is a fixed point of \(G_k\) defined in (3.2) for any \(k\ge 1\). Using Theorem 3.1, \(x^*\in C\), \(Ax^*\in Q\) and \(v^*=0\), we have
By \(0<\liminf _{k\rightarrow \infty }\gamma _k \le \limsup _{k\rightarrow \infty }\gamma _k<\frac{2}{\Vert A\Vert ^2}\) and \(0 <\lambda \le 1\), we see that the sequence \(\{\Vert u_{k}-u^*\Vert _\lambda \}\) is non-increasing and bounded below by 0. Consequently, the sequence \(\{\Vert u_{k}-u^*\Vert _\lambda \}\) converges to some finite limit and the sequence \(\{\Vert u_k\Vert _\lambda \}\) is bounded. So the sequence \(\{x_k\}\) is bounded and
which together with \(0<\liminf _{k\rightarrow \infty }\gamma _k \le \limsup _{k\rightarrow \infty }\gamma _k<\frac{2}{\Vert A\Vert ^2}\) and \(0 <\lambda \le 1\) implies
and
So, we obtain
From (3.1), we have
Now, using (3.14) and (3.15) we immediately obtain
By the definition in (3.3) and (3.16)–(3.17), we have
Since \(\{\Vert u_k\Vert _\lambda \}\) is bounded, we see that the set of weak limit points of \(\{u_k\}\) under the norm \(\Vert \cdot \Vert _\lambda \), \(\omega _w(u_k)\), is nonempty. Next, we prove that \(\omega _w(u_k)\subseteq \Omega \). Assume that \(u^\circ \in \omega _w(u_k)\) and \(\{u_{k_j}\}\) is a subsequence of \(\{u_{k}\}\) which converges weakly to \(u^\circ \) under the norm \(\Vert \cdot \Vert _\lambda \). Since the real sequence \(\{\gamma _{k}\}\) is bounded, the subsequence \(\{\gamma _{k_j}\}\) is bounded and hence has a convergent subsequence. Without loss of generality, we assume that \(\gamma _{k_j}\rightarrow \gamma ^\circ \) as \(j\rightarrow \infty \). Let g, T, S and G be defined by (2.4)–(2.7) with \(\gamma =\gamma ^\circ \). Then we have
By the boundedness of \(\{x_k\}\) and \(\gamma _{k_j}\rightarrow \gamma ^\circ \) as \(j\rightarrow \infty \), we get
Since
combining with (3.18) and (3.20), we obtain
By Lemma 2.2, (3.21) implies that \(u^\circ \in F(G)\) with \(\gamma =\gamma ^\circ \). So, \(u^\circ \in \Omega \) and \(\omega _w(u_k)\subseteq \Omega \). Lemma 2.3 ensures that \(\{u_k\}\) weakly converges to a point in \(\Omega \) under the norm \(\Vert \cdot \Vert _\lambda \). So, the sequence \(\{x_k\}\) weakly converges to a solution of problem (1.1). \(\square \)
Remark 3.2
Algorithm 3.1 generalizes the CQ algorithm (1.3) for solving the SFP (1.1). Indeed, if we take \(\lambda =1\) in Algorithm 3.1, then we obtain (1.3) with \(\gamma _k\equiv \gamma \) under the same conditions.
4 Strong convergence for SFP (1.1)
As we saw in the previous section, the sequence generated by Algorithm 3.1 converges only weakly. We therefore modify the proposed algorithm so that it converges strongly. It is known that the viscosity approximation method [26] is often used to approximate a fixed point of a nonexpansive mapping U in a Hilbert space with strong convergence, and it is defined by
where \(\{\alpha _n\}\subseteq [0,1]\) and f is a contractive operator.
In [27], Yang and He proposed the following general alternative regularization method for finding a fixed point of a firmly nonexpansive operator U
where \(\{\alpha _k\}\subseteq [0,1]\) and f is a Lipschitzian strongly pseudo-contractive operator. They obtained the strong convergence of algorithm (4.2) and gave a simple example to show that if f in algorithm (4.1) is replaced with a Lipschitzian and strongly pseudo-contractive mapping, then strong convergence (even boundedness) of the iteration sequence \(\{x_k\}\) may not be guaranteed in general. We now adapt (4.2) to modify Algorithm 3.1.
Algorithm 4.1
Let \(h:H_1\rightarrow H_1\) be an L-Lipschitzian and \(\theta \)-strongly pseudo-contractive mapping. Initialization: Let \(x_0,v_0\in H_1\) be arbitrary.
Iterative step: For \(k\in N\), let
where \(\{\alpha _k\}\subset (0,1),\) \(\{\gamma _k\}\subset (0,\frac{2}{\Vert A\Vert ^2})\), \(\lambda \in (0,1]\).
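The recursion (4.3) is likewise not reproduced above. The proof of Theorem 4.1 shows \(u_{k+1}=G_k(\overline{u}_{k+1})\), which suggests that \(\overline{u}_{k+1}\) is the Yang–He regularization (4.2) applied coordinatewise; the sketch below makes that assumption, reuses the reconstruction of \(G_k\) from the previous section, and takes \(h(z)=z/2\) as a simple Lipschitzian strongly pseudo-contractive map. For the toy instance, \(0\in \Gamma \), so the variational inequality (4.4) selects \(x^*=0\) and the iterates should drift toward it.

```python
import numpy as np

# Sketch of Algorithm 4.1 (assumed recursion):
#   x-bar_{k+1} = alpha_k h(x_k) + (1 - alpha_k) x_k   (and likewise for v),
#   u_{k+1}     = G_k(v-bar_{k+1}, x-bar_{k+1}),
# with G_k as reconstructed for Algorithm 3.1.
def algorithm_41(A, proj_C, proj_Q, x0, v0, gammas, lam, alphas, h):
    x, v = x0.astype(float), v0.astype(float)
    for gk, ak in zip(gammas, alphas):
        xb = ak * h(x) + (1.0 - ak) * x
        vb = ak * h(v) + (1.0 - ak) * v
        Axb = A @ xb
        g = xb - gk * (A.T @ (Axb - proj_Q(Axb)))
        w = g + (1.0 - lam) * vb
        v = w - proj_C(w)
        x = g - lam * v
    return x, v

A = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]])
proj_C = lambda z: np.maximum(z, 0.0)
proj_Q = lambda z: np.clip(z, 0.0, 1.0)
n_it = 3000
gamma = 1.9 / np.linalg.norm(A, 2) ** 2
alphas = [1.0 / (k + 2) for k in range(n_it)]   # alpha_k -> 0, sum = infinity
x, v = algorithm_41(A, proj_C, proj_Q, np.array([5.0, -3.0]), np.zeros(2),
                    [gamma] * n_it, 0.7, alphas, lambda z: z / 2)
print(x, np.linalg.norm(v))   # x drifts toward the VI solution x* = 0; v -> 0
```

The anchor term \(\alpha _k h\) is what upgrades the weak convergence of Algorithm 3.1 to strong convergence, at the price of a slower \(\alpha _k\)-driven tail.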
Theorem 4.1
If the sequences \(\{\alpha _k\}\), \(\{\gamma _{k}\}\) and the constant \(\lambda \) satisfy the following conditions:
then the sequence \(u_{k+1}=(v_{k+1},x_{k+1})\) generated by Algorithm 4.1 strongly converges to a point in \(\Omega \) under the norm \(\Vert \cdot \Vert _\lambda \) and the sequence \(\{x_k\}\) strongly converges to a solution \(x^*\) of the SFP (1.1). Moreover, \(x^*\) also solves the following variational inequality problem: finding a point \(\hat{x}\in \Gamma \) such that
Proof
Let \(\overline{u}_{k+1}=(\overline{v}_{k+1},\overline{x}_{k+1})\). From Algorithm 4.1, we have \(u_{k+1}=G_k(\overline{u}_{k+1})\). Since h is an L-Lipschitzian and \(\theta \)-strongly pseudo-contractive mapping, the variational inequality (4.4) has a unique solution \(x^*\). Letting \(u^*=(0,x^*)\), we have \(u^*\in \Omega \). Similar to the proof of Theorem 3.2, we obtain that
From (4.3), we have
Obviously, we have
and
where \(\beta \) is a positive constant such that \(\theta +\beta <1.\) Thus, we get
Noting the fact that \(\alpha _k\rightarrow 0\) and \(\lim _{k\rightarrow \infty }[1-\alpha _k(\frac{1}{2}+L^2)-(1-\alpha _k)(\theta +\beta )]=1-(\theta +\beta )>0\), we assert that there exists some integer \(k_0\) such that \(2\alpha _k<1\) and
hold for all \(k\ge k_0.\) So, from (4.5), (4.9) and (4.10), we see that
for all \(k\ge k_0\). Consequently,
and inductively
This means \(\{\Vert u_k\Vert _\lambda \}\) is bounded. So \(\{x_k\},\) \(\{v_k\},\) \(\{h(x_k)\}\) and \(\{h(v_k)\}\) are bounded, too. Moreover, from (4.6), we have
Set
and
Then (4.12) can be rewritten in the form
On the other hand, there exists some integer \(k_1\) such that \(2\alpha _k<1\) and
hold for all \(k\ge k_1\). From (4.12), we have
for all \(k\ge k_1\). So, it follows from (4.5) that
Set
and
We can rewrite (4.15) in the form
Since \(\lim _{k\rightarrow \infty }(2-\alpha _k(1+2L^2)-2(1-\alpha _k)\theta )=2(1-\theta )>0\), from conditions (i) and (ii), we get \(\lambda _k\rightarrow 0,\) \(\sum _{k=0}^\infty \lambda _k=\infty \) and \(\mu _k\rightarrow 0\). So to complete the proof using Lemma 2.5, it suffices to verify that \(\eta _{k_l}\rightarrow 0\) as \(l\rightarrow \infty \) implies \(\limsup _{l\rightarrow \infty }\delta _{k_l}\le 0\) for any subsequence \(\{k_l\}\subset \{k\}.\) Indeed, since \(\eta _{k_l}\rightarrow 0\) as \(l\rightarrow \infty \) and condition (iii) holds, we obtain
So, we have
and
Furthermore, due to condition (i), we have
From (4.3) and (4.18), we have
Combining with (4.19) and (4.20), we can obtain
It follows from (4.17) and (4.20) that
By (4.22), (4.23) and the definition in (3.3), we have
Since \(G_{k}\) is nonexpansive, from (4.20), we have
as \(l\rightarrow \infty \). From (4.24) and (4.25), we get
Since \(\{x_{k_{l}}\}\) is bounded, we see that the set \(\omega _w(x_{k_{l}})\) of weak limit points of \(\{x_{k_{l}}\}\) is nonempty. Next, we prove that \(\omega _w(x_{k_{l}})\subseteq \Gamma \). Assume that \(\hat{x}\in \omega _w(x_{k_{l}})\) and \(\{x_{k_{l_{j}}}\}\) is a subsequence of \(\{x_{k_{l}}\}\) which converges weakly to \(\hat{x}\). Since \(\{v_{k_{l_j}}\}\) is also bounded, it has a weakly convergent subsequence. Without loss of generality, we can assume that \(\{v_{k_{l_{j}}}\}\) converges weakly to \(\hat{v}\) as \(j\rightarrow \infty \). Then \(\{u_{k_{l_j}}\}=\{(v_{k_{l_{j}}},x_{k_{l_{j}}})\}\) converges weakly to \(\hat{u}=(\hat{v},\hat{x})\) under the norm \(\Vert \cdot \Vert _\lambda \). We prove that \(\hat{u}\in \Omega \). Since the real sequence \(\{\gamma _{k_{l}}\}\) is bounded, without loss of generality we can assume that \(\gamma _{k_{l_{j}}}\rightarrow \hat{\gamma }\) as \(j\rightarrow \infty \). Let g, T, S and G be defined by (2.4)–(2.7) with \(\gamma =\hat{\gamma }\); similar to (3.19), we have
By the boundedness of \(\{\overline{x}_{k_{l}}\}\) and \(\gamma _{k_{l_{j}}}\rightarrow \hat{\gamma }\) as \(j\rightarrow \infty \), we get
we obtain
By Lemma 2.2 and Remark 3.1, (4.29) implies that \(\hat{u}=(\hat{v},\hat{x})\in F(G)\) with \(\gamma =\hat{\gamma }.\) So \((\hat{v},\hat{x})\in \Omega \) and \(\hat{x}\in \Gamma \). Hence, we have \(\omega _w(x_{k_{l}})\subseteq \Gamma \). To get \(\limsup _{l\rightarrow \infty }\delta _{k_l}\le 0\), from \(\alpha _k\rightarrow 0\), (4.18) and (4.23) we only need to verify
We can take a subsequence \(\{x_{k_{l_j}}\}\) of \(\{x_{k_l}\}\) such that \(\{x_{k_{l_j}}\}\) converges weakly to \(\hat{x}\) as \(j\rightarrow \infty \) and
Since \(\omega _w(x_{k_l})\subset \Gamma \) and \(x^*\) is the solution of the variational inequality problem (4.4), from (4.30), we obtain
From Lemma 2.5, it follows that
which implies that \(x_k\rightarrow x^*\) as \(k\rightarrow \infty \). \(\square \)
5 Split equality problem
Let \(H_1\), \(H_2\), \(H_3\) be real Hilbert spaces, let \(C\subset H_1\), \(Q\subset H_2\) be two nonempty closed convex sets, let \(A:\ H_1\rightarrow H_3\), \(B:\ H_2\rightarrow H_3\) be two bounded linear operators. The split equality problem introduced by Moudafi in [18] is to find
In [28], Moudafi and Al-Shemas proposed the following simultaneous iterative algorithm for the split equality problem (5.1):
where \(\varepsilon \le \gamma _k\le \frac{2}{\lambda _A+\lambda _B}-\varepsilon \), and \(\lambda _A\) and \(\lambda _B\) stand for the spectral radii of \(A^*A\) and \(B^*B\), respectively.
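The simultaneous scheme (5.2) can be sketched directly: at each step it moves x and y toward each other through the coupling residual \(Ax_k-By_k\) and then projects. The small matrices and orthant sets below are hypothetical toy data.

```python
import numpy as np

# Moudafi-Al-Shemas simultaneous iteration (5.2):
#   x_{k+1} = P_C(x_k - gamma_k A^T (A x_k - B y_k)),
#   y_{k+1} = P_Q(y_k + gamma_k B^T (A x_k - B y_k)).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])
proj_C = lambda z: np.maximum(z, 0.0)   # C = nonnegative orthant
proj_Q = lambda z: np.maximum(z, 0.0)   # Q = nonnegative orthant
lam_A = np.linalg.norm(A, 2) ** 2       # spectral radius of A^T A
lam_B = np.linalg.norm(B, 2) ** 2       # spectral radius of B^T B
gamma = 1.9 / (lam_A + lam_B)           # fixed step inside the allowed range

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
for _ in range(5000):
    r = A @ x - B @ y
    x = proj_C(x - gamma * (A.T @ r))
    y = proj_Q(y + gamma * (B.T @ r))
print(np.linalg.norm(A @ x - B @ y))    # -> 0 when (5.1) is consistent
```

This instance is consistent (x = y = 0 already satisfies (5.1)), so the coupling residual is driven to zero while both iterates stay feasible.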
We will transform the split equality problem (5.1) into the original SFP. Let us denote
Denote a linear operator \(\mathbf {A}:\mathbf {H_1}\rightarrow \mathbf {H_2}\) by
for all \((x,y)\in \mathbf {H_1}\). If the set \({\mathbf {\Gamma }}:=\{(x,y)\in {\mathbf {C}}:{\mathbf {A}}(x,y)\in \mathbf {Q}\}\) is nonempty, then \((x,y)\in \mathbf {H_1}\) solves (5.1) if and only if
Note that:
As a consequence of our Theorem 3.2, the following iterative sequence \(\{(x_k,y_k)\}\) defined by
converges weakly to \((x^*,y^*)\), which solves the split equality problem (5.1), where \(0 <\lambda \le 1\) and
We remark that (5.3) generalizes (5.2); indeed, if we take \(\lambda =1\) in (5.3), then we obtain (5.2).
Similarly, as a consequence of our Theorem 4.1, the following iterative sequence \(\{(x_k,y_k)\}\) defined by
converges strongly to \((x^*,y^*)\), which solves the split equality problem (5.1), where \(h_1:H_1\rightarrow H_1\) and \(h_2:H_2\rightarrow H_2\) are L-Lipschitzian and \(\theta \)-strongly pseudo-contractive mappings.
6 Numerical results
To verify the theoretical assertions, we present some numerical results in this section for the SFP (1.1). To give some insight into the behavior of Algorithm 3.1, we implemented it in MATLAB to solve an example. In the implementation, we take \(p(x)<\varepsilon =10^{-6}\) as the stopping criterion, where
Example
[29]. The SFP (1.1) is to find \(x\in C=\{x\in R^5|\ \Vert x\Vert \le 0.25\}\) such that \(Ax\in Q=\{y=(y_1,y_2,y_3,y_4)^T\in R^4|\ 0.6\le y_j\le 1,j=1,2,3,4\}\), where the matrix
We take parameters \(\gamma _k\equiv \gamma \in (0, \frac{2}{\Vert A\Vert ^2})\) and \(0<\lambda \le 1\) in Algorithm 3.1. Since a larger step size is more efficient than a smaller one (as illustrated in [29]), we take \(\gamma =\frac{1.95}{\Vert A\Vert ^2}\) in the experiment. Since the parameter \(\lambda \in (0,1]\) is important in Algorithm 3.1, we tried different values of \(\lambda \) when solving this example. When \(\lambda =1\), Algorithm 3.1 reduces exactly to the CQ algorithm (1.3).
We take different \(x_0\), \(v_0\) as initial points. In case 1, we take \(x_0=(1,1,1,1,1)\) and \(v_0=(2,-2,2,-2,2)\). In case 2, we take \(x_0=(1,1,1,1,1)\) and \(v_0=(0,0,0,0,0)\). In case 3, we take \(x_0=v_0=(1,5,1,5,1)\). In case 4, we take \(x_0=(1,-5,1,-5,1)\) and \(v_0=(1,-2,1,-2,1)\). We report the numerical results in Fig. 1.
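A reproduction sketch of this experiment in Python follows, using the CQ variant (\(\lambda =1\)) of the scheme. The test matrix of [29] is not reproduced above, so a hypothetical A is built with its first column fixed so that a known point \(x^*\) is feasible, and \(p(x)=\frac{1}{2}\Vert x-P_Cx\Vert ^2+\frac{1}{2}\Vert Ax-P_Q(Ax)\Vert ^2\) is taken as one natural choice for the unspecified residual in the stopping rule.

```python
import numpy as np

# C = {x in R^5 : ||x|| <= 0.25}, Q = [0.6, 1]^4, as in the example.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 5))
A[:, 0] = 4.0                    # then x* = (0.2, 0, 0, 0, 0) gives A x* = 0.8,
x_star = np.zeros(5)             # strictly inside Q, with ||x*|| = 0.2 < 0.25
x_star[0] = 0.2

def proj_C(x):                   # projection onto the ball of radius 0.25
    nx = np.linalg.norm(x)
    return x if nx <= 0.25 else 0.25 * x / nx

proj_Q = lambda y: np.clip(y, 0.6, 1.0)

def p(x):                        # assumed residual for the stopping criterion
    return 0.5 * np.sum((x - proj_C(x)) ** 2) \
         + 0.5 * np.sum((A @ x - proj_Q(A @ x)) ** 2)

gamma = 1.95 / np.linalg.norm(A, 2) ** 2
x, k = np.ones(5), 0
while p(x) >= 1e-6 and k < 100000:
    Ax = A @ x
    x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    k += 1
print(k, p(x))   # iterations until p(x) < 1e-6, and the final residual
```

Because the construction guarantees a strictly feasible point, the iterates reach the stopping threshold; varying \(\lambda \) as in the cases above only requires swapping in the two-variable update of Algorithm 3.1.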
Elementary experiments showed that for some examples Algorithm 3.1 approaches an approximate solution faster than algorithm (1.3) (\(\lambda =1\)), which indicates that a proper choice of the parameter \(\lambda \in (0,1)\) may accelerate convergence. However, for different initial points we were unable to identify a choice of \(\lambda \) that performs stably for Algorithm 3.1; this still needs further study.
References
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Byrne, C.: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity- modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
Censor, Y., Motova, A., Segal, A.: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244–1256 (2007)
Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Xu, H.K.: A variable Krasnosel'skiĭ–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021–2034 (2006)
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problem. SIAM Rev. 38, 367–426 (1996)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
Qu, B., Xiu, N.: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655–1665 (2005)
Abdellah, B., Muhammad, A.N., Mohamed, K., Sheng, Z.H.: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 54, 627–639 (2012)
Zhou, H., Wang, P.: Adaptively relaxed algorithms for solving the split feasibility problem with a new step size. J. Inequal. Appl. 2014, 448 (2014)
Cui, H., Wang, F.: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, 78 (2014)
Byrne, C., Censor, Y., Gibali, A., Reich, S.: The split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
Masad, E., Reich, S.: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367–371 (2007)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29, 025011 (2013)
Moudafi, A.: Alternating CQ-algorithms for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal. 15, 809–818 (2014)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Stampacchia, G.: Formes bilineaires coercivites sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
Lions, J.L., Stampacchia, G.: Variational inequalities. Commun. Pure Appl. Math. 20, 493–512 (1967)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
Bauschke, H.H.: The approximation of fixed points of composition of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 202, 150–159 (1996)
Gornicki, J.: Weak convergence theorems for asymptotically nonexpansive mappings in uniformly convex Banach spaces. Comment. Math. Univ. Carol. 301, 249–252 (1998)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013(942315), 1–19 (2013)
Moudafi, A.: Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 241, 46–55 (2000)
Yang, C., He, S.: General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 203 (2014)
Moudafi, A., Al-Shemas, E.: Simultaneous iterative methods for split equality problems and application. Trans. Math. Program. Appl. 1, 1–11 (2013)
Zhang, W., Han, D., Li, Z.: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 25, 115001 (2009)
Acknowledgements
We are sincerely grateful to the editor and the anonymous referees for helping us improve the original manuscript.
Supported by National Natural Science Foundation of China (No. 61503385) and Open Fund of Tianjin Key Lab for Advanced Signal Processing (No. 2017ASP-TJ03).
Zhao, J., Zong, H. Iterative algorithms for solving the split feasibility problem in Hilbert spaces. J. Fixed Point Theory Appl. 20, 11 (2018). https://doi.org/10.1007/s11784-017-0480-7