Abstract
In this paper, we propose two new self-adaptive relaxed CQ algorithms for solving the split feasibility problem with multiple output sets, which involve projections onto half-spaces instead of projections onto the original closed convex sets. Our algorithms employ a selection technique that reduces the number of projections to be computed. As a generalization, we then construct two new algorithms for solving variational inequalities over the solution set of the split feasibility problem with multiple output sets. More importantly, strong convergence of all proposed algorithms is proved under suitable conditions. Finally, we conduct numerical experiments to show the efficiency and accuracy of our algorithms in comparison with some existing results.
1 Introduction
Let C and Q be nonempty closed convex subsets of Hilbert spaces \(H_1\) and \(H_2\), respectively. Let \(A:H_1\rightarrow H_2\) be a bounded linear operator with its adjoint operator \(A^*\). The split feasibility problem (SFP) is formulated as finding a point \(x^*\in H_1\) satisfying
The SFP was first proposed by Censor and Elfving [6] in 1994 for modeling inverse problems which arise in phase retrieval and medical image reconstruction [4]. It has since been found that the SFP can also model problems in other domains, for instance, intensity-modulated radiation therapy and gene regulatory network inference [5, 7, 8, 17].
For solving the SFP, Byrne [4] introduced the applicable and best-known CQ algorithm, which involves the computation of the projections \(P_C\) and \(P_Q\) onto C and Q. In addition, the step size of the CQ algorithm depends on the operator norm \(\Vert A\Vert \), which is not easy to compute (or even to estimate). In 2004, Yang [19] generalized the CQ method to the so-called relaxed CQ algorithm, which only requires the metric projections onto the half-spaces \(C^n\) and \(Q^n\) that relax C and Q. Since \(P_{C^n}\) and \(P_{Q^n}\) are easily calculated, this method is very practical. To avoid the computation of the norm of A (which is both complicated and costly), in 2012, López et al. [11] introduced a relaxed CQ algorithm for solving the SFP with a new adaptive rule for choosing the step sizes \(\tau _n\), defined as follows:
where \(\rho _n\in (0, 4), \forall n \ge 1,\) such that \(\liminf _{n\rightarrow \infty } \rho _n(4- \rho _n)>0.\) It was proved that the resulting sequence \(\{x_n\}\) converges weakly to a solution of the SFP.
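To make this concrete, the following Python sketch implements one common form of the relaxed CQ iteration with the adaptive step \(\tau_n = \rho_n f_n(x_n)/\Vert \nabla f_n(x_n)\Vert^2\), where \(f_n(x)=\tfrac{1}{2}\Vert (I-P_{Q^n})Ax\Vert^2\); the concrete sets and the function names are our own illustrative choices, not the authors' MATLAB code.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection onto the half-space {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def relaxed_cq(A, proj_C, proj_Q, x0, rho=1.0, iters=300):
    """Relaxed CQ iteration with the adaptive step of Lopez et al.:
    tau_n = rho_n * f_n(x_n) / ||grad f_n(x_n)||^2, rho_n in (0, 4)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = A @ x - proj_Q(A @ x)        # residual (I - P_{Q^n}) A x_n
        grad = A.T @ r                   # gradient of f_n at x_n
        g2 = grad @ grad
        if g2 == 0.0:                    # gradient vanishes: subproblem solved
            break
        tau = rho * (0.5 * (r @ r)) / g2
        x = proj_C(x - tau * grad)
    return x
```

Note that neither \(\Vert A\Vert\) nor its estimate appears anywhere in the loop; only residuals and one matrix-vector product with \(A^T\) are needed per iteration.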
Some generalizations of the SFP have been studied by many authors. Recently, Reich and Tuyen [12] considered and studied the split feasibility problem with multiple output sets (SFPMOS). Let \(H, H_i, i = 1, 2,...,N,\) be real Hilbert spaces and let \(A_i: H\rightarrow H_i, i = 1, 2,...,N\) be bounded linear operators. Let C and \(Q_i, i = 1, 2,...,N\) be nonempty, closed, and convex subsets of H and \(H_i, i = 1, 2,...,N\), respectively. The SFPMOS is formulated to find a point \(x^*\) such that
The solution set of problem (SFPMOS) is denoted by \(\Gamma \).
Furthermore, Reich and Tuyen [12] proposed the following two algorithms for solving the SFPMOS by extending the CQ algorithm:
where \(f: C\rightarrow C\) is a strict contraction mapping, \(\{\lambda _n\} \subset (0,\infty )\) and \(\{\alpha _n\}\subset (0, 1).\) It was proved that if the sequence \(\{\lambda _n\}\) satisfies the condition:
for all \(n\ge 1,\) then the sequence \(\{x_n\}\) generated by (3) converges weakly and the sequence generated by (4) converges strongly to a solution of the SFPMOS.
However, we note that each iteration of Algorithm (3) and Algorithm (4) requires the computation of the metric projections onto the sets C and \(Q_i\), as well as the calculation or estimation of the operator norms \(\Vert A_i\Vert \). In general, this is not an easy task in practice.
Next, we consider several self-adaptive projection methods that avoid these difficulties. In 2019, Yao et al. [20] presented two self-adaptive iterative algorithms for solving the multiple-sets split feasibility problem (MSSFP) and proved weak and strong convergence results. The MSSFP is to find a point \(x^*\) such that
where \(\{C_i\}_{i=1}^s\) and \(\{Q_j\}_{j=1}^t\) are two finite families of closed convex subsets of \(H_1\) and \(H_2\), respectively.
Yao et al. [20] proposed the following self-adaptive method with selection technique:
where
in which \(\lambda _n>0.\)
In 2022, Reich et al. [14] introduced a split inverse problem called the split common fixed point problem with multiple output sets. To solve this problem, they proposed a self-adaptive algorithm based on the viscosity approximation method and proved a strong convergence theorem; see [14] for details.
Very recently, Taddele et al. [16] proposed two new self-adaptive relaxed CQ algorithms for solving the SFPMOS, which read as follows:
where \(\tau _n:=\frac{\rho _2^n\sum _{i=1}^N\vartheta _i\Vert (I-P_{Q_i^n})A_ix_n\Vert ^2}{\bar{\tau _n}^2}\) and \(\bar{\tau _n}:=\max \{\Vert \sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n\Vert ,\beta \}.\) They established weak and strong convergence theorems for the proposed algorithms. We note that, although the step sizes of the algorithms of Taddele et al. are adaptive, the computational effort required to evaluate the sum of N terms involving the projections \(P_{Q_i^n}\) at every iteration is considerable.
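The per-iteration cost just mentioned can be seen in the following sketch of the step size \(\tau_n\) above (our reading of the formula; the function and parameter names are placeholders): all N residuals, and hence all N projections \(P_{Q_i^n}\), must be evaluated before a single step can be taken.

```python
import numpy as np

def taddele_step_size(x, As, proj_Qs, theta, rho2, beta=0.05):
    """tau_n = rho2 * sum_i theta_i ||(I - P_{Q_i^n}) A_i x||^2 / max(||g||, beta)^2,
    where g = sum_i theta_i A_i^T (I - P_{Q_i^n}) A_i x.
    Note that all N residuals (hence N projections) are needed per iteration."""
    residuals = [A @ x - PQ(A @ x) for A, PQ in zip(As, proj_Qs)]
    num = rho2 * sum(th * (r @ r) for th, r in zip(theta, residuals))
    g = sum(th * (A.T @ r) for th, A, r in zip(theta, As, residuals))
    denom = max(np.linalg.norm(g), beta) ** 2
    return num / denom, g
```

By contrast, the selection technique used later in this paper works with a single index \(i_n\) per iteration.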
In addition, variational inequalities are crucial in both optimization theory and its applications. We therefore also turn our attention to the variational inequality problem over the solution set of the SFPMOS (P2, for short). Let \(F: H \rightarrow H\) be a given operator and suppose that the solution set \(\Gamma \) of the SFPMOS is nonempty. Then P2 is described as follows:
The solution set of problem (P2) is denoted by VIP(F, \(\Gamma \)).
Motivated and inspired by [14] and [20], we further improve the algorithms (4) of Reich et al. [12] for solving the SFPMOS (2). In particular, we develop and improve some previously discussed results in the following ways:
1. In contrast to the algorithms (4) of Reich et al. [12], we propose two new relaxed CQ algorithms with adaptive step sizes to solve the SFPMOS. It is worth noting that the parameters of these two algorithms are chosen without prior knowledge of the norms of the transfer mappings. Our proposed algorithms employ a selection technique that reduces the computational effort of the projections.
2. Building on the two algorithms in [14] and [20], we compute the projections onto half-spaces instead of onto the closed convex sets themselves. Strong convergence of the proposed Halpern-type algorithms is proved.
3. As a generalisation of our proposed algorithms, we propose two new adaptive algorithms for solving the variational inequality problem over the solution set of the SFPMOS (7), which, to the best of our knowledge, has not been solved before.
The paper is organized as follows: In Sect. 2, we recall some existing results, lemmas, and definitions for subsequent use. In Sect. 3, we present algorithms for solving the SFPMOS and, as a generalisation, problem P2, and analyse the convergence of the proposed algorithms. To demonstrate that our proposed approach is implementable, several numerical examples are provided in Sect. 4.
2 Preliminaries
Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(\Vert \cdot \Vert \). Let the symbols “\(\rightharpoonup \)” and “\(\rightarrow \)” denote weak and strong convergence, respectively. For any sequence \(\{x_n\}\subset H\), \(\omega _w(x_n) =\{x\in H:\exists \{x_{n_k}\}\subset \{x_n\}\) such that \(x_{n_k}\rightharpoonup x\}\) denotes the weak limit set of \(\{x_n\}.\)
Definition 1
[2] Let C be a nonempty, closed and convex subset of H. For every element \(x\in H,\) there exists a unique nearest point in C, denoted by \(P_Cx\), such that
The operator \(P_C\) is called a metric projection from H onto C.
It has the following well-known properties which are used in the subsequent convergence analysis.
Lemma 2
[10] Let \(C\subset H\) be a nonempty, closed and convex set. Then, the following assertions hold for any \(x, y\in H\) and \(z\in C:\)
-
(i)
\( \langle x-P_Cx,z-P_Cx \rangle \le 0;\)
-
(ii)
\( \Vert P_Cx-P_Cy\Vert \le \Vert x-y\Vert ;\)
-
(iii)
\(\Vert P_Cx-P_Cy\Vert ^2 \le \langle P_Cx-P_Cy, x-y\rangle ;\)
-
(iv)
\(\Vert P_Cx-z\Vert ^2 \le \Vert x-z\Vert ^2-\Vert x-P_Cx\Vert ^2.\)
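Since the algorithms in this paper only ever project onto half-spaces, properties (i), (ii) and (iv) of Lemma 2 can be checked numerically for such a projection; the following Python sketch does so for an arbitrary half-space and arbitrary points (all data here are our own illustrative choices).

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection onto C = {z : <a, z> <= b}."""
    return x - max(0.0, a @ x - b) / (a @ a) * a

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), 0.3
x, y = rng.normal(size=5), rng.normal(size=5)
z = proj_halfspace(rng.normal(size=5), a, b)   # an arbitrary point of C

Px, Py = proj_halfspace(x, a, b), proj_halfspace(y, a, b)
ineq_i = (x - Px) @ (z - Px)                              # (i): must be <= 0
ineq_ii = np.linalg.norm(Px - Py) - np.linalg.norm(x - y) # (ii): must be <= 0
ineq_iv = (np.linalg.norm(Px - z) ** 2
           - (np.linalg.norm(x - z) ** 2 - np.linalg.norm(x - Px) ** 2))  # (iv): <= 0
```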
Next, we give some inequalities required for the convergence analysis. For each \(x,y\in H,\) we have
Lemma 3
(Peter–Paul inequality) If a and b are non-negative real numbers, then, for every \(\varepsilon >0,\)
$$\begin{aligned} ab\le \frac{a^2}{2\varepsilon }+\frac{\varepsilon b^2}{2}. \end{aligned}$$
Definition 4
[2] Let \(f: H\rightarrow (-\infty , +\infty ]\) be a proper function, and let \(x\in H\). Then f is called
-
(i)
Lower semicontinuous at x if \(x_n\rightarrow x\) implies \(f(x)\le \liminf _{n\rightarrow \infty } f (x_n)\);
-
(ii)
Weakly lower semicontinuous at x if \(x_n\rightharpoonup x\) implies \(f(x)\le \liminf _{n\rightarrow \infty } f(x_n)\).
Definition 5
[2] Let \(f: H\rightarrow (-\infty , +\infty ]\) be proper. The subdifferential of f is the set-valued operator
Let \(x\in H\). Then f is subdifferentiable at x if \(\partial f(x)\ne \emptyset \); the elements of \(\partial f(x)\) are the subgradients of f at x.
Definition 6
[11] A mapping \(T: H\rightarrow H\) is said to be:
-
(i)
L-Lipschitz continuous if there exists a constant \(L > 0\) such that
$$\begin{aligned} \Vert Tx-Ty\Vert \le L\Vert x-y\Vert ,\ \forall \ x,\ y \in H; \end{aligned}$$ -
(ii)
\(\kappa \)-strongly monotone if there exists a constant \(\kappa >0\) such that
$$\begin{aligned} \langle Tx-Ty, x-y \rangle \ge \kappa \Vert x-y\Vert ^2,\ \forall \ x,\ y \in H. \end{aligned}$$
Lemma 7
[15] Let \(\{\Upsilon _n\}\) be a positive sequence, \(\{b_n\}\) be a sequence of real numbers, and \(\{\alpha _n\}\) be a sequence in the open interval (0, 1) such that \( \sum _{n=1}^\infty \alpha _n=\infty .\) Assume that
If \(\limsup _{k\rightarrow \infty } b_{n_k}\le 0\) for every subsequence \(\{\Upsilon _{n_k}\}\) of \(\{\Upsilon _n\}\) satisfying \(\liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0,\) then \(\lim _{n\rightarrow \infty } \Upsilon _n=0\).
3 Results
In this section, we consider a general case of the SFPMOS, in which C and \(Q_i\) are level sets of the convex, subdifferentiable functions \(c: H\rightarrow {\mathbb {R}}\) and \(q_i: H_i\rightarrow {\mathbb {R}}\) defined as follows:
Let \(\partial c\) and \(\partial q_i\) denote the subdifferentials of c and \(q_i\). Being convex and lower semicontinuous, c and \(q_i\) are also weakly lower semicontinuous. We define the half-spaces \(C^n\) and \(Q_i^n\ (i=1,2,...,N)\) relaxing C and \(Q_i\), respectively:
By the definition of the subgradient, it is easy to see that \(C \subseteq C^n\) and \(Q_i \subseteq Q^n_i\) ([9]).
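For level sets described through a subgradient inequality, the projection onto the relaxed half-space \(C^n=\{z: c(x_n)+\langle \xi_n, z-x_n\rangle \le 0\}\) has a closed form; a short Python sketch under the notation above (the function name and signature are ours):

```python
import numpy as np

def proj_relaxed(x, c_val, xi, x_ref):
    """Projection onto C^n = {z : c(x_n) + <xi_n, z - x_n> <= 0}, where
    c_val = c(x_ref) and xi is a subgradient of c at x_ref."""
    s = xi @ xi
    if s == 0.0:                      # xi = 0: x_ref minimizes c, so C^n = H (or is empty)
        return np.asarray(x, dtype=float)
    viol = c_val + xi @ (x - x_ref)   # amount by which x violates the half-space
    return x - max(0.0, viol) / s * xi
```

For the ball \(C=\{x:\Vert x\Vert^2\le 1\}\) with \(c(x)=\Vert x\Vert^2-1\) and \(\xi_n=2x_n\), this reproduces the usual relaxed projection.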
Now, we introduce two self-adaptive relaxed CQ algorithms for solving the SFPMOS (2) in the case where the SFPMOS is consistent (i.e. \(\Gamma \ne \emptyset \)).
3.1 Two iterative algorithms for solving SFPMOS
Theorem 8
Suppose the sequences \(\{\lambda _n\}\), \(\{\rho _n\}\) and \(\{\alpha _n\}\) are in (0, 1) satisfying the following conditions:
-
(i)
\(0<a\le \lambda _n \le b<1\), \(0<c\le \rho _n \le d < 1;\)
-
(ii)
\(\lim \limits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=0}^\infty \alpha _n=\infty .\)
Then \(\{x_n\}\) generated by Algorithm 1 converges strongly to a solution \(x^*\), where \(x^*=P_\Gamma u.\)
Proof
Let \(x^*\in \Gamma .\) The proof is divided into four steps as follows.
Step 1. The sequence \(\{x_n\}\) is bounded. We consider the following two cases:
Case 1: \(\hat{d_n}=\Lambda _n.\)
From the definition of \(\{y_n\}\), we have
Since \(x^*\in C\subset C^n\), it follows from lemma 2 (i) that
This implies that
Case 2: \(\bar{d_n}=\Lambda _n.\)
Using (11) and lemma 2 (i), we have
We next demonstrate that the sequence \(\{x_n\}\) is bounded. Indeed, using (8) and (16), we get
It follows from lemma 3 and (17) that
This implies that the sequence \(\{x_n\}\) is bounded, and so are \(\{y_n\}\) and \(\{A_ix_n\}, i=1,..., N.\)
Step 2. \(\Vert x_n-P_{C^n}x_n\Vert \rightarrow 0\) and \(\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert \rightarrow 0\) hold as \(n\rightarrow \infty \).
By (17), we obtain
Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we obtain
where \(b_n=2\langle u-x^*, x_{n+1}-x^*\rangle .\)
In view of Lemma 7, it suffices to demonstrate that \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{\Upsilon _{n_k}\}\) of \(\{\Upsilon _n\}\) satisfying
From (17), we also obtain
If we put (14) in (22), we obtain that
Using the hypothesis \(\lambda _n\in [a,b]\subset (0,1)\), we get from the inequality above that
where \(M>0\) is a constant such that \(2\Vert u-x^*\Vert \Vert x_{n+1}-x^* \Vert \le M,\) for all \(n\in N.\)
Since \(x^*\in \Gamma ,\) we note that \(A_i^*(I-P_{Q_i^n})A_ix^*=0.\) Hence, we get
Since \(\{x_n\}\) is bounded, the sequence \(\{\Vert A_i^*(I-P_{Q_i^n})A_ix_n\Vert \}_{n=1}^{\infty }\) is also bounded. Similarly, using (15), (22) and \(\rho _n\in [c,d]\subset (0,1)\), we see that
where \(K>0\) is a constant such that \(\Vert A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert \le K,\) for all \(n\in N.\)
Since \(\Upsilon _n-\Upsilon _{n+1}\rightarrow 0\) and \(\alpha _n\rightarrow 0 \ (n\rightarrow \infty ),\) it follows from (23) and (24) that
Step 3. We show that \(\omega _w(x_n)\subset \Gamma .\)
Let \({\hat{x}}\in \omega _w(x_n).\) Then there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup {\hat{x}}\) as \(k\rightarrow \infty .\) For each \(i = 1, 2,...,N\), since \(\partial c\) is bounded on bounded sets, there exists a constant \(\xi >0\) such that \(\Vert \xi _n\Vert \le \xi \) for all \(n \ge 0.\) Then, since \(P_{C^n}x_n\in C^n\), it follows from (9) that
By applying (25) and the weakly lower semicontinuity of c, we have
Consequently, \({\hat{x}}\in C.\) In the same way, there exists a constant \(\eta >0\) such that \(\Vert \eta ^n_{i_n}\Vert \le \eta \) for all \(n\ge 0.\) Since \(P_{Q_{i_n}^n}A_{i_n}x_n\in Q_{i_n}^n\), it follows that
By applying (25) and the weakly lower semicontinuity of \(q_i\), we have
According to the definition of \(i_n\), it turns out that \(A_i{\hat{x}}\in Q_i\) for \(i = 1, 2,...,N.\) Consequently, we have \({\hat{x}}\in \Gamma ,\) which implies that \(\omega _w(x_n)\subset \Gamma .\)
Step 4. We prove that \(\Upsilon _n\rightarrow 0\).
We now show that \(\Vert x_{{n_k}+1}-x_{n_k}\Vert \rightarrow 0\) as \(k\rightarrow \infty .\) Indeed, it follows from the boundedness of the sequence \(\{y_n\}\) and the assumption of \(\alpha _n\rightarrow 0\) that
In Case 1, from the definition of \(y_n\), \(\lambda _n\), \(\rho _n\) and (25), we obtain that
In Case 2, from the definition of \(y_n\), \(\tau _n\), \(\rho _n\) and (25), we have
Thus, from (28) and (29), we get \(\Vert y_{n_k}-x_{n_k}\Vert \rightarrow 0.\) Consequently, we have
Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\};\) then we also have
Furthermore, according to (30), \(x^*=P_\Gamma u\) and lemma 2 (i), we have
Finally, applying Lemma 7 to (21), we conclude that \(\Upsilon _n\rightarrow 0,\) that is \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)
Assume that the sequence \(\{x_n\}\) generated by Algorithm 2 is infinite. In other words, Algorithm 2 does not terminate in a finite number of iterations.
Lemma 9
Let \(\{x_n\}\) be a bounded sequence. Then \(\Vert x_n+y_n^{i_n}-z_n\Vert =0\) holds if and only if \(x_n\) is a solution of problem (2).
Proof
Assume that \(\Vert x_n+y_n^{i_n}-z_n\Vert =0\) holds. For any \(x^*\in \Gamma \), by Lemma 2 (iii), we have
Since \(\{x_n\}\) is bounded and \(d_n=0\), we get
According to the definition of \(i_n\), it follows from the above equation that
Hence \(x_n\in C^n\) and \(A_ix_n\in Q_i^n.\) From (9), we have that \(x_n\in C\) and \(A_ix_n\in \bigcap _{i=1}^NQ_i.\) Therefore, \(x_n\in \Gamma .\) \(\square \)
Theorem 10
Suppose that the sequences \(\{\alpha _n\}\) and \(\{\rho _n\}\) satisfy the following conditions:
(C1) \(\lim \limits _{n\rightarrow \infty } \alpha _n = 0\) and \(\sum _{n=1}^\infty \alpha _n=\infty ;\)
(C2) \(0< \liminf \limits _{n\rightarrow \infty } \rho _n\le \limsup \limits _{n\rightarrow \infty } \rho _n <4.\)
Then the sequence \(\{x_n\}\) generated by Algorithm 2 converges strongly to \(x^* =P_\Gamma u.\)
Proof
The proof is divided into four steps as follows.
Step 1. The sequence \(\{x_n\}\) is bounded.
Set \(w_n =x_n-t_n(x_n+y_n^{i_n}-z_n).\) It follows from (33) and (34) that
It implies that
Hence, \(\{x_n\}\) is bounded, and so are the sequences \(\{A_ix_n\}\), \(\{P_{C^n}x_n\}\) and \(\{P_{Q_i^n}A_ix_n\}\ (i=1,2,...,N)\).
Step 2. \(\Vert x_n-P_{C^n}x_n\Vert \rightarrow 0\) and \(\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert \rightarrow 0\) hold as \(n\rightarrow \infty \).
Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have
where
Now, we prove that \(\Upsilon _n\rightarrow 0\) by Lemma 7. Suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying
Let \(M>0\) be a constant such that \(2\Vert u-x^*\Vert \Vert x_{n+1}-x^* \Vert \le M,\) for all \(n\in N.\) By (38), we have
From (40) and (41) together with conditions (C1) and (C2), we have that
Thus we see that
which yields
Step 3. We show that \(\omega _w(x_n)\subset \Gamma .\)
Assume that \({\hat{x}}\in \omega _w(x_n)\) and \(\{x_{n_k}\}\) is a subsequence of \(\{x_n\}\) which converges weakly to \({\hat{x}}.\) Since \(P_{C^n} x_n\in C^n\), it follows from (9) that
where \(\xi \) satisfies \(\Vert \xi _{n_k}\Vert \le \xi .\) By applying (43) and the weakly lower semicontinuity of c, we have
Consequently, \({\hat{x}}\in C.\) In the same way, there exists a constant \(\eta >0\) such that \(\Vert \eta ^{n_k}_{i_n}\Vert \le \eta \) for all \(n\ge 0.\)
Since \(P_{Q_{i_n}^{n_k}}(A_{i_n}x_{n_k})\in Q_{i_n}^{n_k}\), it follows that
By applying (43) and the weakly lower semicontinuity of \(q_i\), we have
It turns out that \(A_i{\hat{x}}\in Q_i\) for \(i = 1, 2,...,N.\) Consequently, we have \({\hat{x}}\in \Gamma ,\) which implies that \(\omega _w(x_n)\subset \Gamma .\)
Step 4. We prove that \(x_n\rightarrow x^*\).
Moreover, we have the following estimation
Since \(\{x_n\}\) is bounded, combining (43) with conditions (C1) and (C2), we have that
Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\};\) then we also have \(x_{{n_k}_j}\rightharpoonup {\hat{x}},\ \text {as} \ j\rightarrow \infty .\)
Furthermore, due to (48), \(x^* =P_\Gamma u\) and lemma 2 (i), we infer that
Finally, applying Lemma 7 to (39), we conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)
3.2 Two iterative algorithms for variational inequalities over the solution set of SFPMOS
Now, we introduce two self-adaptive algorithms for solving variational inequalities over the solution set of SFPMOS (7). Before convergence analysis of our algorithms, the following conditions are assumed.
Assumption 1
The control sequences satisfy the following conditions:
-
(i)
\(0<a\le \lambda _n \le b<1\), \(0<c\le \rho _n \le d < 1;\)
-
(ii)
\(\{\beta _n\}\subset (0, 1)\), \(\lim \limits _{n\rightarrow \infty }\beta _n=0\) and \(\sum _{n=0}^\infty \beta _n=\infty \);
-
(iii)
\(\gamma \) is a positive real number in the open interval \((0, 2\kappa /L^2).\)
Lemma 11
Suppose that \(F:H\rightarrow H\) is L-Lipschitz and \(\kappa \)-strongly monotone, then
-
(i)
$$\begin{aligned} \Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*\Vert \le (1-\beta _n\varphi )\Vert y_n-x^*\Vert , \end{aligned}$$
where
$$\begin{aligned} \varphi =1-\sqrt{1-\gamma (2\kappa -\gamma L^2)}; \end{aligned}$$ -
(ii)
$$\begin{aligned}{} & {} \Vert x_{n+1}-x^*\Vert \le \Vert x_n-x^*\Vert +\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2-2\langle y_n-x^*,Fy_n\rangle )\\ {}{} & {} -2\lambda _n(1-\lambda _n)\Vert x_n-P_{C^n}x_n\Vert ^2; \end{aligned}$$
-
(iii)
$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert \le&\ \Vert x_n-x^*\Vert +\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2-2\langle y_n-x^*,Fy_n\rangle )\\ {}&-2\rho _n(1-\rho _n)\frac{\Vert A_ix_n-P_{Q_i^n}A_ix_n\Vert ^4}{\Vert A_i^*(I-P_{Q_i^n})A_ix_n\Vert ^2}. \end{aligned} \end{aligned}$$(53)
Proof
(i) Taking any \(x^*\in \Gamma ,\) it follows from Lemma 3.1 in [18] that
From (54), we have
(ii) Using (14), we get
(iii) Similar to (ii), through (15), we have
\(\square \)
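The contraction factor in Lemma 11 (i) can be verified numerically; the following sketch checks it for the operator \(Fx = 2x\) used later in the experiments, so \(L=\kappa =2\) and \(\gamma \) must lie in \((0, 2\kappa /L^2)=(0,1)\). The particular values of \(\gamma \) and \(\beta _n\) below are arbitrary test values.

```python
import numpy as np

# Numeric check of Lemma 11 (i) for F x = 2x, hence L = kappa = 2 and
# gamma in (0, 2*kappa / L**2) = (0, 1).
L_const, kappa, gamma, beta_n = 2.0, 2.0, 0.15, 0.3
phi = 1.0 - np.sqrt(1.0 - gamma * (2.0 * kappa - gamma * L_const ** 2))

F = lambda v: 2.0 * v
T = lambda v: v - beta_n * gamma * F(v)    # the mapping (I - beta_n * gamma * F)

rng = np.random.default_rng(1)
y, x_star = rng.normal(size=4), rng.normal(size=4)
lhs = np.linalg.norm(T(y) - T(x_star))                  # left side of (i)
rhs = (1.0 - beta_n * phi) * np.linalg.norm(y - x_star) # right side of (i)
```

For this particular F the bound is tight: since \(F=L\cdot I\) with \(\kappa =L\), one has \(1-\gamma (2\kappa -\gamma L^2)=(1-\gamma L)^2\), so \(\varphi =\gamma L\) and both sides coincide.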
Theorem 12
Let \(F: H\rightarrow H\) be an L-Lipschitz continuous and \(\kappa \)-strongly monotone operator, and suppose that Assumption 1 holds. Then the sequence \(\{x_n\}\) generated by Algorithm 3 converges strongly to the unique solution of \(VIP(F,\Gamma ).\)
Proof
Let \(x^*\) be the unique solution of the variational inequality \(VIP(F,\Gamma ),\) that is,
From (16) in Theorem 8, (52) and lemma 11 (i), we have
This implies that the sequence \(\{x_n\}\) is bounded, and so are \(\{y_n\}\) and \(\{A_ix_n\}, i=1,..., N.\) Using Lemma 11 (i), we get that
Hence, we infer that
Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have
where \(\alpha _n=\beta _n\varphi \) and \(b_n=-\frac{2\gamma }{\varphi }\langle Fx^*, x_{n+1}-x^*\rangle .\)
Now, we prove that \(\Upsilon _n\rightarrow 0\) by using Lemma 7. To this end, suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying
Since \(\{x_n\}\) is bounded, \(\{y_n\}\) is bounded too. Hence, there exists a positive real number \({\bar{M}}\) such that \(\max \{\sup _n\Vert y_n\Vert ,\sup _n\Vert Fy_n\Vert \}\le {\bar{M}}.\)
In Case 1, from lemma 11 (ii) and \(\{\lambda _n\}\subset [a, b]\subset (0, 1),\) we get
Thus, by (62) and \(\beta _n\rightarrow 0\), we obtain that
In Case 2, the same process as above, by lemma 11 (ii) and the condition \(\{\rho _n\}\subset [c, d]\subset (0, 1),\) we infer that
where \(K>0\) is a constant such that \(\Vert A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert \le K,\) for all \(n\in N.\) Thus, we obtain that
This implies that
By the same process as in Step 3 of Theorem 8, we obtain \(\omega _w(x_n)\subset \Gamma .\) Let \({\hat{x}}\in \omega _w(x_n).\) Then there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup {\hat{x}}\) as \(k\rightarrow \infty .\)
Next, we show that \(\limsup \limits _{k\rightarrow \infty } b_{n_k}\le 0.\) Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\};\) then we also have \(x_{{n_k}_j}\rightharpoonup {\hat{x}}.\) Thus, using the above and (58), we have
From (27)–(30) in Theorem 8, we have
Therefore we can immediately conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n \rightarrow x^*\) as \(n \rightarrow \infty .\) This completes the proof of the theorem. \(\square \)
Theorem 13
Let \(F: H\rightarrow H\) be an L-Lipschitz continuous and \(\kappa \)-strongly monotone operator, and suppose that Assumption 1 holds. Then the sequence \(\{x_n\}\) generated by Algorithm 4 converges strongly to the unique solution of \(VIP(F,\Gamma ).\)
Proof
Set \(w_n =x_n-t_n(x_n+y_n^{i_n}-z_n).\) It follows from (35) that
which implies
Using the above inequalities, we get
According to (59), replacing \(w_n\) by \(y_n,\) we get that \(\{x_n\}\) is bounded.
From (60)–(61), the following result obviously holds. Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have
where \(\alpha _n=\beta _n\varphi \) and \(b_n=-\frac{2\gamma }{\varphi }\langle Fx^*, x_{n+1}-x^*\rangle .\)
Now, we prove that \(\Upsilon _n\rightarrow 0\) by Lemma 7. Suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying
From (69), we get
Thus, from Assumption 1 (ii) and (62), we obtain
where \({\bar{M}}\) is a positive real number satisfying \(\max \{\sup _n\Vert y_n\Vert ,\sup _n\Vert Fy_n\Vert \}\le {\bar{M}}.\)
Under Assumption 1, we see that
which yields
By the same process as in Step 3 of Theorem 8, we obtain \(\omega _w(x_n)\subset \Gamma .\) Furthermore, based on the assumptions on the parameters, we obtain
and
Therefore, we infer that
Finally, applying Lemma 7 to (70), we conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)
Remark 1
We have the following comments on Algorithms 1–4.
-
Algorithms 1–4 are based on the gradient algorithm with a selection technique. In each iteration, Algorithm 1 only needs to compute two projections: \(P_{C^n}x_n\) and one element of \(\{P_{Q_i^n}A_ix_n\}_{i=1}^N\). Algorithm 2 only needs to compute the projection of \(\{x_n-P_{C^n}x_n+A_i^*P_{Q_i^n}A_ix_n\}\) in each iteration. This technique reduces the computational effort of the projections and speeds up the convergence of the algorithms.
-
The different selection methods of the proposed algorithms result in different representations of the adaptive steps. Algorithm 1 and Algorithm 2 are two different projection- and Halpern-type algorithms, which results in different ranges of step sizes. Algorithm 1 and Algorithm 3 use the same step size, as do Algorithm 2 and Algorithm 4.
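The selection step discussed in this remark can be sketched as follows; the rule (choose the output set with the largest relaxed residual, so that the subsequent gradient step only involves the single operator \(A_{i_n}\)) is our reading of the scheme, and the operator and projection names are placeholders.

```python
import numpy as np

def select_index(x, As, proj_Qns):
    """Selection technique sketch: return the index i_n of the output set
    whose relaxed residual ||(I - P_{Q_i^n}) A_i x|| is largest."""
    res = [np.linalg.norm(A @ x - PQ(A @ x)) for A, PQ in zip(As, proj_Qns)]
    return int(np.argmax(res))
```

The argmax scan is cheap when the \(Q_i^n\) are half-spaces, and only the selected index \(i_n\) enters the adjoint computation \(A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n\).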
4 Numerical experiment
In this section, we provide some numerical examples to demonstrate the convergence of the proposed algorithms. All codes for the numerical computations are implemented in MATLAB R2015b. The numerical experiments are carried out on a personal computer with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz.
Using \(\Vert x_{n+1}-x_n\Vert <10^{-6}\) as the stopping criterion, we plot the graphs of \(TOL=\Vert x_{n+1}-x_n\Vert ^2\) against the number of iterations for each n. The numerical results are reported in Figs. 1 and 2 and Tables 1 and 2.
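The stopping rule just described is independent of the particular algorithm; a minimal Python driver sketch (the experiments themselves were run in MATLAB, and `step` stands for one iteration of any of Algorithms 1–4):

```python
import numpy as np

def run(step, x0, tol=1e-6, max_iter=10 ** 5):
    """Iterate x_{n+1} = step(x_n, n) until ||x_{n+1} - x_n|| < tol.
    Returns the final iterate and the number of iterations performed."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        x_next = step(x, n)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter
```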
Example 1. [12] Consider \(H = {\mathbb {R}}^{10},\) \(H_1 = {\mathbb {R}}^{20},\) \(H_2 = {\mathbb {R}}^{30}\) and \(H_3 = {\mathbb {R}}^{40}.\) Find a point \(x^*\in {\mathbb {R}}^{10}\) such that
where \(C = \{x\in {\mathbb {R}}^{10}:\Vert x- c\Vert ^2\le r^2\},\) \(Q_i = \{x\in {\mathbb {R}}^{(i+1)\times 10}:\Vert x- c_i\Vert ^2\le r_i^2\},\) and \(A_1: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{20},\) \(A_2: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{30},\) \(A_3: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{40}\) are bounded linear operators whose representing matrices have entries randomly generated in the closed interval \([-5, 5]\). In this case, for any \(x\in {\mathbb {R}}^{10}\), we have \(c(x) =\Vert x-c\Vert ^2-r^2\) and \(q_i(A_i x) =\Vert A_ix-c_i\Vert ^2-r_i^2\) for \(i = 1, 2, 3.\)
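The data of Example 1 can be generated as in the following sketch; the random seed, centers and radii are placeholders, since only the structure of the problem (balls, and matrices with entries in \([-5,5]\)) is specified above.

```python
import numpy as np

rng = np.random.default_rng(42)
# A_1 : R^10 -> R^20, A_2 : R^10 -> R^30, A_3 : R^10 -> R^40
A = [rng.uniform(-5.0, 5.0, size=((i + 2) * 10, 10)) for i in range(3)]

def c_fun(x, center, r):
    """c(x) = ||x - center||^2 - r^2; its level set {c <= 0} is the ball C."""
    return np.sum((x - center) ** 2) - r ** 2

def c_subgrad(x, center):
    """Gradient (hence a subgradient) of c at x: 2(x - center)."""
    return 2.0 * (x - center)
```

With these subgradients, the relaxed half-spaces \(C^n\) and \(Q_i^n\) below are available in closed form at every iterate.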
The half-spaces \(C^n\) and \(Q^n_i (i = 1, 2, 3),\) are defined by
and
The control parameters for each algorithm are chosen as follows.
-
Algorithm 1 and Algorithm 2 (Our new algorithm 1 and 2): \(\lambda _n=\tau _n=\frac{1}{10n+1}\) and \(\alpha _n=\frac{1}{n+1}.\)
-
Algorithm 3 and Algorithm 4 (Our new algorithm 3 and 4): \(\gamma = 0.15,\ \beta _n =\frac{1}{100n+1},\) and \(Fx = 2x\) for all \(x\in {\mathbb {R}}^{10}.\)
-
Algorithm (1.6) (in (4), [12]): \(\lambda _1=\frac{1}{6}, \lambda _2=\frac{1}{3}, \lambda _3=\frac{1}{2},\) and \(f(x) = 0.975x\) for all \(x\in {\mathbb {R}}^{10}.\)
-
Algorithm (79) (in [16]) is as follows:
$$\begin{aligned} x_{n+1}=\alpha _nu+(1-\alpha _n)(x_n-\rho _1^n(I-P_{C^n})x_n-\tau _n\sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n)), \end{aligned}$$
where \(\tau _n:=\frac{\rho _2^n\sum _{i=1}^N\vartheta _i\Vert (I-P_{Q_i^n})A_ix_n\Vert ^2}{\bar{\tau _n}^2}\), \(\bar{\tau _n}:=\max \{\Vert \sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n\Vert ,\beta \}.\)
We take \(\rho _1^n=\rho _2^n=\frac{1}{10n+1},\) \(\vartheta _1=\frac{1}{6},\ \vartheta _2=\frac{1}{3},\ \vartheta _3=\frac{1}{2},\) and \(\beta =0.05.\)
Now we present the results of the iterations for four cases, where \(e_1=(1,1,...,1)\in {\mathbb {R}}^{10}\).
Case 1: Take \(u = 10e_1\) and \(x_1 = 5e_1;\)
Case 2: Take \(u = 3e_1\) and \(x_1 = 0.5e_1;\)
Case 3: Take \(u = 3e_1\) and \(x_1 = -0.5e_1;\)
Case 4: Take \(u=e_1\) and \(x_1 = -0.2e_1.\)
Example 2. Let \(H=H_1=L_2([0, 1])\). For all \(y, z\in L_2([0,1])\), the inner product and norm are defined by \(\langle y,z\rangle := \int _0^1 y(t)z(t)dt\) and \(\Vert y\Vert := (\int _0^1 |y(t)|^2dt)^{\frac{1}{2}},\) respectively. Furthermore, we consider the half-spaces
and
Let \(A:L_2([0,1])\rightarrow L_2([0,1])\) be defined by \((Ax)(t)=\int _0^tx(s)ds, \forall t\in [0,1], x\in L_2([0,1]).\) Then A is a bounded linear operator with \(\Vert A\Vert =\frac{2}{\pi }\) and \((A^*x)(t)=\int _t^1x(s)ds.\)
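Under our reading of the operator in Example 2 as the Volterra operator \((Ax)(t)=\int_0^t x(s)\,ds\) with adjoint \((A^*y)(t)=\int_t^1 y(s)\,ds\), both can be discretized on a uniform grid by cumulative trapezoid sums; a sketch (grid size and function names are ours):

```python
import numpy as np

m = 1000
t = np.linspace(0.0, 1.0, m + 1)
h = 1.0 / m

def inner(f, g):
    """Trapezoidal approximation of the L2([0,1]) inner product."""
    w = f * g
    return h * (np.sum(w) - 0.5 * (w[0] + w[-1]))

def A_op(x):
    """(Ax)(t_i) = integral of x from 0 to t_i, via cumulative trapezoid sums."""
    seg = 0.5 * (x[1:] + x[:-1]) * h
    return np.concatenate(([0.0], np.cumsum(seg)))

def A_adj(y):
    """(A*y)(t_i) = integral of y from t_i to 1 (reverse cumulative sums)."""
    seg = 0.5 * (y[1:] + y[:-1]) * h
    return np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
```

A quick consistency check is the adjoint identity \(\langle Ax, y\rangle = \langle x, A^*y\rangle \): for \(x(t)=t\) and \(y\equiv 1\) both sides equal \(1/6\) up to discretization error.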
The metric projections \(P_C\) and \(P_Q\) are defined by
and
where \(a=\int _0^1x(t)dt\) and \(b=\int _0^1|y(t)-\sin (t)|^2dt.\)
Now we present the results of the iterations for four cases,
Case 1: Take \(u = t\) and \(x_1 = e^{3t};\)
Case 2: Take \(u = t^3+2t\) and \(x_1 = \sin (3t);\)
Case 3: Take \(u = -t\) and \(x_1 = \frac{2^t}{2};\)
Case 4: Take \(u=\frac{e^{5t}}{5}\) and \(x_1 = \sin (3t).\)
Remark 2
The preliminary results presented in Tables 1 and 2 and Figs. 1 and 2 demonstrate the advantages and computational efficiency of the proposed methods over some known schemes.
-
The suggested Algorithms 1–4 require fewer iterations and less CPU time than Alg. (1.6) and Alg. (79) to reach the same stopping criterion.
-
The step size of Alg. (1.6) depends on the norms of the transfer operators, and thus its performance is significantly weaker than that of our Algorithms 1–4 with adaptive step sizes.
-
The advantage of our proposed algorithms compared to Alg. (79) lies in the application of the selection technique, which is expected to be advantageous when N is sufficiently large; here we have only considered the case \(N=3.\)
5 Conclusions
In this paper, based on the relaxed CQ method and the Halpern-type method, we proposed two new adaptive iterative algorithms for finding solutions of the split feasibility problem with multiple output sets in infinite-dimensional Hilbert spaces. More importantly, according to different selection conditions, we gave two different adaptive step sizes that require no prior knowledge of the norms of the involved operators. Moreover, as a generalization, we constructed two new algorithms for solving variational inequalities over the solution set of the split feasibility problem with multiple output sets. Under suitable conditions, we established strong convergence theorems for the suggested algorithms. Finally, the advantages of the proposed algorithms were confirmed by two numerical examples. It would be interesting to extend the results obtained in this paper to Banach spaces or more general spaces.
References
Alakoya, T.O., Mewomo, O.T.: A relaxed inertial Tseng's extragradient method for solving split variational inequalities with multiple output sets. Mathematics 11(2), 386 (2023)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, Berlin (2017)
Browder, F.E.: Nonlinear mappings of nonexpansive and accretive-type in Banach spaces. Bull. Am. Math. Soc. 73(6), 875–882 (1967)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441 (2002)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51(10), 2353 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071 (2005)
Censor, Y., Segal, A.: Iterative projection methods in biomedical inverse problems. Math. Methods Biomed. Imaging Intensity-Modul. Radiat. Therapy (IMRT) 10, 65–96 (2008)
Fukushima, M.: A relaxed projection method for variational inequalities. Math. Program. 35(1), 58–70 (1986)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
Reich, S., Truong, M.T., Mai, T.N.H.: The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 14(8), 2335–2353 (2020)
Reich, S., Tuyen, T.M.: A generalized cyclic iterative method for solving variational inequalities over the solution set of a split common fixed point problem. Numer. Algorithms 91(1), 1–17 (2022)
Reich, S., Tuyen, T.M., Thuy, N.T.T., Ha, M.T.N.: A new self-adaptive algorithm for solving the split common fixed point problem with multiple output sets in Hilbert spaces. Numer. Algorithms 89(3), 1031–1047 (2022)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75(2), 742–750 (2012)
Taddele, G.H., Kumam, P., Sunthrayuth, P., Gebrie, A.G.: Self-adaptive algorithms for solving split feasibility problem with multiple output sets. Numer. Algorithms 92(2), 1335–1366 (2023)
Wang, J., Hu, Y., Li, C., Yao, J.C.: Linear convergence of CQ algorithms and applications in gene regulatory network inference. Inverse Probl. 33(5), 055017 (2017)
Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473–504. Elsevier (2001)
Yang, Q.: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20(4), 1261–1266 (2004)
Yao, Y., Postolache, M., Zhu, Z.: Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 69(2), 269–281 (2019)
Ethics declarations
Conflict of interest
No potential conflict of interest was reported by the author(s).
Cite this article
Tong, X., Ling, T. & Shi, L. Self-adaptive relaxed CQ algorithms for solving split feasibility problem with multiple output sets. J. Appl. Math. Comput. 70, 1441–1469 (2024). https://doi.org/10.1007/s12190-024-02008-4
Keywords
- Split feasibility problem
- Multiple output sets
- Strong convergence
- Relaxed CQ algorithm
- Self-adaptive algorithm