Abstract
We introduce a new approach that uses unconstrained optimization to solve the system of split equality problems in real Hilbert spaces. Our new algorithms do not depend on the norms of the transfer mappings. We also give relaxed iterative algorithms corresponding to the proposed ones. Finally, we present some numerical experiments to demonstrate the performance of the main results.
1 Introduction
Let \(H_1, H_2\) and H be three real Hilbert spaces, let \(C\subseteq H_1\) and \(Q \subseteq H_2\) be two nonempty, closed and convex sets, and let \(A: H_1 \rightarrow H\) and \(B : H_2 \rightarrow H\) be two bounded linear operators. The split equality problem (SEP, for short) was first introduced and studied by Moudafi et al. in 2013 (see, e.g., [11, 12]). The SEP is stated as follows:
The SEP is closely related to many important problems, for instance, in game theory, in decomposition methods for PDEs, in decision sciences and inertial Nash equilibration processes (see, e.g., [1, 2]), as well as to the split feasibility problem, which was later applied to inversion problems in intensity-modulated radiation therapy (see, e.g., [4, 5]).
To find a solution to the SEP, in [11], Moudafi considered the constrained optimization problem:
By writing down the optimality conditions, he obtained the following fixed point formulation:
where \(A^*\) and \(B^*\) are the adjoint operators of A and B, respectively. This equation suggests an iterative scheme, and thus he introduced and studied the alternating CQ-algorithm for solving the SEP, that is,
Under some suitable conditions [11, Theorem 2.1], he proved that the iterative sequence generated by (1.1) converges weakly to a solution of the SEP.
Due to their tremendous utility and wide applicability, many algorithms have been set up to solve the SEP or its modified version in different forms. For more details, see, for instance, [6,7,8,9,10, 14,15,16, 19,20,22, 24,25,26] and the references therein.
Very recently, in [23], Tuyen introduced and studied a more general problem, called the system of split equality problems (SSEP, for short). Namely, suppose that
-
(D1)
\(H_1\), \(H_2\) and H are three real Hilbert spaces; \(C_i\) and \(Q_i\) (\(i=1,2,3,\ldots ,N\)) are nonempty closed convex subsets of \(H_1\) and \(H_2\), respectively.
-
(D2)
\(A_i: H_1\rightarrow H\) and \(B_i: H_2\rightarrow H\) (\(i=1,2,3,\ldots ,N\)) are bounded linear operators.
-
(D3)
\(b_i\) (\(i=1,2,3,\ldots ,N\)) are given elements in H.
-
(D4)
\(\varOmega =\{(v,w) \in \cap _{i=1}^N(C_i\times Q_i): A_i(v)-B_i(w)=b_i, i=1,2,3,\ldots ,N\}\ne \emptyset .\)
The SSEP is stated as follows:
Using the Tikhonov regularization method, he proposed implicit and explicit iterative algorithms [23, Theorems 3.1 and 3.5] for solving Problem SSEP. However, the first algorithm requires solving an implicit equation at each step, while in the second algorithm one of the control parameters requires computing, or at least estimating, the Lipschitz constant and the norms of the involved operators. In general, these tasks are not easy to perform in practice.
We also note that if \(A_i\equiv A\), \(B_i\equiv B\) and \(b_i=0\) for all \(i=1, 2, 3,\ldots , N\), then the SSEP becomes the multiple-sets split equality problem (MSSEP, for short), which was studied by Tian et al. in [22]. They also established a weak convergence algorithm with a split self-adaptive step size for solving the MSSEP.
In this paper, motivated and inspired by the above works, we establish several new algorithms for solving Problem SSEP with a different approach. To this end, for each \(x=(v,w)\in \mathbb {H}:=H_1\times H_2\), we define the function \(U:\mathbb {H}\rightarrow \mathbb {R}\) as follows:
We now consider the unconstrained optimization problem:
It is easy to see that U is a convex function and Problem SSEP is equivalent to Problem (1.2). Thus, \(p_*=(v_*,w_*)\) is a solution of Problem SSEP if and only if \(\nabla U(p_*)=0\), in which \(\nabla U(x)=(U_1(x),U_2(x))\) with
Moreover, we observe that \(\nabla U(p_*)=0\) is equivalent to the problem of finding a fixed point \(p_*\) of \(I-\gamma \nabla U\), that is, \(p_*=(I-\gamma \nabla U)(p_*)\) for some \(\gamma >0\). Hence, in the present paper, we will introduce and study the convergence of the sequence \(\{x_n\}\) defined by
where \(\gamma _n>0\) (see Algorithm 1 for more details). We first establish the weak convergence of Algorithm 1. Next, to obtain strong convergence, we modify Algorithm 1 by using the viscosity approximation method (see Algorithm 2). Some corollaries for solving the system of split feasibility problems are given in Section 4. Two relaxed iterative algorithms corresponding to Algorithms 1 and 2 are presented and studied in Section 5. Three numerical examples are discussed in Section 6 to examine the performance of the proposed algorithms.
2 Preliminaries
In this section, we denote by \(\langle \cdot ,\cdot \rangle _{\mathcal {H}}\) and \(\Vert \cdot \Vert _{\mathcal {H}}\) the inner product and the induced norm of a real Hilbert space \(\mathcal H\). The symbols \(\rightarrow \) and \(\rightharpoonup \) denote strong and weak convergence, respectively.
If \(H_1\) and \(H_2\) are real Hilbert spaces, then \(\mathbb {H}:=H_1\times H_2\) is also a Hilbert space (see, e.g., [14, Proposition 2.4] and [17, Proposition 2.2]) with the inner product
$$\langle x,y\rangle _{\mathbb {H}}=\langle v_1,v_2\rangle _{H_1}+\langle w_1,w_2\rangle _{H_2},\quad x=(v_1,w_1),\ y=(v_2,w_2)\in \mathbb {H},$$
and the norm on \(\mathbb {H}\) is defined by
$$\Vert x\Vert _{\mathbb {H}}=\sqrt{\Vert v_1\Vert _{H_1}^2+\Vert w_1\Vert _{H_2}^2},\quad x=(v_1,w_1)\in \mathbb {H}.$$
The following lemmas are used in the sequel in the proofs of the main results.
Lemma 2.1
Let \(P_C^{\mathcal {H}}\) be a metric projection from a real Hilbert space \(\mathcal {H}\) into a nonempty, closed and convex subset C of \(\mathcal {H}\). Then the following hold:
-
(i)
([3, Theorem 3.14]) \(\langle x-P_C^{\mathcal {H}}(x),y-P_C^{\mathcal {H}}(x)\rangle _{\mathcal {H}}\le 0,\quad \forall x\in \mathcal {H}, y\in C.\)
-
(ii)
([23, Lemma 2.1])
$$\langle x-y,P_C^{\mathcal {H}}(x)-P_C^{\mathcal {H}}(y)\rangle _{\mathcal {H}}\ge \Vert P_C^{\mathcal {H}}(x)-P_C^{\mathcal {H}}(y)\Vert _{\mathcal {H}}^2,\quad \forall x,y\in \mathcal {H}.$$
-
(iii)
([23, Lemma 2.1])
$$\langle x-y,(I^{\mathcal {H}}- P_C^{\mathcal {H}})(x)-(I^{\mathcal {H}}- P_C^{\mathcal {H}})(y)\rangle _{\mathcal {H}}\ge \Vert (I^{\mathcal {H}}- P_C^{\mathcal {H}})(x)-(I^{\mathcal {H}}- P_C^{\mathcal {H}})(y)\Vert _{\mathcal {H}}^2$$ for all \(x,y\in \mathcal {H}\). It also follows that \(I^{\mathcal {H}}- P_C^{\mathcal {H}}\) is a nonexpansive mapping.
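These projection inequalities can be checked numerically; the Python sketch below is a minimal sanity check, using the closed-form metric projection onto a Euclidean ball as an illustrative choice of C, and verifying (i), (ii) and (iii) on random points:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
c, r = np.zeros(2), 1.0
for _ in range(1000):
    x, y = rng.normal(size=2) * 3, rng.normal(size=2) * 3
    px, py = proj_ball(x, c, r), proj_ball(y, c, r)
    # (i): variational characterization of the projection (obtuse-angle condition)
    yC = proj_ball(rng.normal(size=2), c, r)      # an arbitrary point of C
    assert np.dot(x - px, yC - px) <= 1e-10
    # (ii): firm nonexpansiveness of P_C
    assert np.dot(x - y, px - py) >= np.linalg.norm(px - py) ** 2 - 1e-10
    # (iii): firm nonexpansiveness of I - P_C
    u, v = x - px, y - py
    assert np.dot(x - y, u - v) >= np.linalg.norm(u - v) ** 2 - 1e-10
```

The same checks pass for any closed convex set whose projection is available in closed form.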
Lemma 2.2
[13, Lemma 3] Let \(\mathcal {H}\) be a real Hilbert space and let \(\{x_n\}\) be a sequence in \(\mathcal {H}\) such that \(x_n\rightharpoonup z\) as \(n\rightarrow \infty \). Then we have
$$\liminf _{n\rightarrow \infty }\Vert x_n-z\Vert _{\mathcal {H}}<\liminf _{n\rightarrow \infty }\Vert x_n-x\Vert _{\mathcal {H}}$$
for all \(x\in \mathcal {H}\) with \(x\ne z\).
Lemma 2.3
[3, Theorem 4.17] Let C be a nonempty closed convex bounded subset of a Hilbert space \(\mathcal {H}\) and \(T: C \rightarrow \mathcal {H}\) a nonexpansive mapping. Then the mapping \(I^{\mathcal {H}}-T\) is demiclosed, that is, whenever \(\{x_n\}\) is a sequence in C which satisfies \(x_n\rightharpoonup x \in C\) and \(x_n-T(x_n)\rightarrow y\in \mathcal {H}\), it follows that \(x-T(x) = y\).
Lemma 2.4
For every \(x, y\in \mathcal {H}\), we have
$$\Vert x+y\Vert _{\mathcal {H}}^2\le \Vert x\Vert _{\mathcal {H}}^2+2\langle y,x+y\rangle _{\mathcal {H}}.$$
Lemma 2.5
[18, Lemma 2.6] Let \(\{a_n\}\) be a sequence of positive real numbers, \(\{b_n\}\) be a sequence of real numbers in (0, 1) such that \(\sum _{n=1}^\infty b_n =\infty \) and \(\{c_n\}\) be a sequence of real numbers. Assume that
If
for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying
then \(\lim _{n \rightarrow \infty } a_n =0.\)
3 Main Results
In this section, we always assume that conditions (D1)–(D4) hold. From now on, we denote \(\mathbb {H}:=H_1\times H_2\). To solve Problem SSEP, we first introduce the following algorithm.
Algorithm 1
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where the parameter \(\{\gamma _n\}\) is defined by
where \(\rho _n\in [a,b]\subset (0,2)\), \(\{\zeta _n\}\) is a sequence of positive real numbers bounded above by \(\zeta \), and
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
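To make the scheme concrete, the following Python sketch runs Algorithm 1 on a small instance of Problem SSEP in which the sets \(C_i\) and \(Q_i\) are Euclidean balls centred at the origin and \(b_i=0\), so that \((0,0)\in \varOmega \). The specific forms of \(D_n\), \(E_n\), \(F_n\) and \(\gamma _n\) used below are assumptions, chosen to be consistent with Remark 3.1 and the optimization approach of [15]: \(D_n\) is the sum of the squared projection and equality residuals at \(x_n\), \(E_n=\Vert U_1(x_n)\Vert ^2\), \(F_n=\Vert U_2(x_n)\Vert ^2\), and \(\gamma _n=\rho _n D_n/(E_n+F_n+\zeta _n)\).

```python
import numpy as np

def proj_ball(z, R):
    """Metric projection onto the closed ball of radius R centred at 0."""
    n = np.linalg.norm(z)
    return z if n <= R else (R / n) * z

# Small SSEP instance: N = 2 pairs of operators on R^3, b_i = 0,
# C_i and Q_i balls containing 0, so p = (0, 0) lies in Omega.
N = 2
A = [np.eye(3), 2.0 * np.eye(3)]
B = [2.0 * np.eye(3), np.eye(3)]
b = [np.zeros(3), np.zeros(3)]
R_C, R_Q = 2.0, 3.0

def grad_U(v, w):
    """Assumed gradient components U_1, U_2 of U (cf. [15])."""
    U1 = sum(v - proj_ball(v, R_C) + A[i].T @ (A[i] @ v - B[i] @ w - b[i]) for i in range(N))
    U2 = sum(w - proj_ball(w, R_Q) - B[i].T @ (A[i] @ v - B[i] @ w - b[i]) for i in range(N))
    return U1, U2

def D(v, w):
    """Assumed residual D_n: squared projection plus equality residuals."""
    return sum(np.linalg.norm(v - proj_ball(v, R_C)) ** 2
               + np.linalg.norm(w - proj_ball(w, R_Q)) ** 2
               + np.linalg.norm(A[i] @ v - B[i] @ w - b[i]) ** 2 for i in range(N))

rng = np.random.default_rng(1)
v, w = 5.0 * rng.normal(size=3), 5.0 * rng.normal(size=3)
rho, zeta = 1.0, 1e-12                       # rho_n in (0, 2); tiny zeta_n avoids 0/0
for _ in range(2000):
    U1, U2 = grad_U(v, w)
    E, F = np.linalg.norm(U1) ** 2, np.linalg.norm(U2) ** 2
    gamma = rho * D(v, w) / (E + F + zeta)   # assumed self-adaptive step size
    v, w = v - gamma * U1, w - gamma * U2    # Step 2 of Algorithm 1

print(D(v, w))   # residual decreases toward 0 as the iterates approach Omega
```

On this toy instance \((0,0)\) is the unique solution, the distance \(\Vert x_n-p\Vert _{\mathbb {H}}\) is nonincreasing by the Fejér-type estimate of Claim 1 in Theorem 3.1, and the residual tends to zero.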
We have the following theorem.
Theorem 3.1
The sequence \(\{x_n\}\) generated by Algorithm 1 converges weakly to a solution of Problem SSEP.
Proof
The proof is split into several steps. We take any point \(p=(v,w)\in \varOmega \).
Claim 1
The sequence \(\{x_n\}\) is bounded.
It follows from (3.1) that
We observe that
In view of Lemma 2.1 (iii) and (3.4), we can find that
This implies that
Besides, we also note that
Thus, it follows from (3.2), (3.3), (3.5) and (3.6) that
From the condition \(\rho _n\in [a,b]\subset (0,2)\) and (3.7), we obtain
By employing mathematical induction, we find that the sequence \(\{x_n\}\) is bounded.
Claim 2
For every \(i=1,2,3,\ldots ,N\), we have
From (3.7), we have
On the other hand, it follows from (3.8) that the limit of the sequence \(\{\Vert x_n-p\Vert _{\mathbb {H}}^2\}\) exists and is finite. Thus, from the conditions \(\rho _n\in [a,b]\subset (0,2)\), \(0<\zeta _n\le \zeta \) and the above inequality, we infer that
This leads to
From the definition of \(D_n\) and (3.12), we obtain the limits (3.9), (3.10) and (3.11), as claimed.
Claim 3
The sequence \(\{x_n\}\) converges weakly to \(p_*\in \varOmega \).
Since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}:=\{(v_{n_k},w_{n_k})\}\) of \(\{x_n\}\) which converges weakly to some \(z=(v_*,w_*)\in \mathbb {H}\), that is,
In view of Lemma 2.3, (3.9) and (3.10), we get \((v_*,w_*)\in C_i\times Q_i\) for all \(i=1,2,\ldots ,N\). On the other hand, since \(A_i\) and \(B_i\) are bounded linear operators, we have
Combining with (3.11), we can infer that
Therefore, we have \(z\in \varOmega \).
Finally, we shall show that \(x_n\rightharpoonup z\). Suppose that there is another subsequence \(\{x_{n_m}\}\) of \(\{x_n\}\) such that \(x_{n_m}\rightharpoonup \bar{z}\) with \(\bar{z}\ne z\). Using an argument similar to the one above, we also get \(\bar{z}\in \varOmega \). It follows from Lemma 2.2 and the existence of the finite limit of \(\{\Vert x_n-z\Vert _{\mathbb {H}}\}\) that
This is a contradiction. Hence \(\bar{z}=z\) and, therefore, \(x_n\rightharpoonup z\).
This completes the proof.\(\square \)
To obtain a strong convergence theorem, we now combine Algorithm 1 with the viscosity approximation method. The second algorithm is stated as follows:
Algorithm 2
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where \(h:\mathbb {H}\rightarrow \mathbb {H}\) is a contraction mapping with constant \(\delta \in [0,1)\), \(\{\gamma _n\}\) is defined as in (3.2) and \(\{\alpha _n\}\subset (0,1)\) satisfies
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
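The viscosity update of Algorithm 2 can be sketched on a one-dimensional toy instance with \(N=1\), \(C=[2,3]\), \(Q=[1,2]\), \(A=B=I\) and \(b=0\), so that \(\varOmega =\{(2,2)\}\) and the strong limit is \(p_*=(2,2)\) for every contraction h. The forms of the residual \(D_n\) and the self-adaptive step size \(\gamma _n\) used below are assumptions consistent with Remark 3.1, not formulas quoted from the paper:

```python
def P(z, lo, hi):
    """Projection onto the closed interval [lo, hi]."""
    return min(max(z, lo), hi)

def grad_and_residual(v, w):
    """Assumed gradient (U1, U2) and residual D_n for C=[2,3], Q=[1,2], A=B=I, b=0."""
    rC, rQ, r = v - P(v, 2.0, 3.0), w - P(w, 1.0, 2.0), v - w
    return rC + r, rQ - r, rC * rC + rQ * rQ + r * r

h = lambda v, w: (0.5 * v, 0.5 * w)                 # contraction with delta = 0.5
v, w = 10.0, -7.0
rho, zeta = 1.0, 1e-12
for n in range(200_000):
    U1, U2, Dn = grad_and_residual(v, w)
    gamma = rho * Dn / (U1 * U1 + U2 * U2 + zeta)   # assumed self-adaptive step
    alpha = 1.0 / (n + 2) ** 0.5                    # alpha_n -> 0, sum alpha_n = infinity
    hv, hw = h(v, w)
    v = alpha * hv + (1 - alpha) * (v - gamma * U1)  # Step 2 of Algorithm 2
    w = alpha * hw + (1 - alpha) * (w - gamma * U2)

print(v, w)   # tends to p* = (2, 2)
```

Since \(\varOmega \) is a singleton here, the limit does not depend on the choice of h; with a larger \(\varOmega \) the iterates would instead select \(P_\varOmega (h(p_*))\).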
Theorem 3.2
The sequence \(\{x_n\}\) generated by Algorithm 2 converges strongly to \(p_*=P_\varOmega (h(p_*))\).
Proof
The proof is divided into several steps. We first put \(y_n=x_n-\gamma _n \nabla U(x_n)\) and take any \(p\in \varOmega \).
Claim 1
The sequence \(\{x_n\}\) is bounded.
It follows from (3.13) that
By the convexity of \(\Vert \cdot \Vert _{\mathbb {H}}\) and the fact that h is a contraction mapping with constant \(\delta \in [0,1)\), we find that
By an argument similar to the one in Claim 1 of Theorem 3.1, we find that
From (3.14) and (3.16), we can infer that
By employing mathematical induction, we find that the sequence \(\{x_n\}\) is bounded. Hence, the sequences \(\{y_n\}\) and \(\{h(x_n)\}\) are also bounded.
Claim 2
We have
where \(M_1=\sup _{n}\{ \Vert h(x_n)-p\Vert _{\mathbb {H}}^2\}<\infty \).
Indeed, from (3.13), (3.15) and Lemma 2.4, we have
It is easy to see that the last inequality can be rewritten in the form (3.17), as claimed.
Claim 3
We have the following inequality:
where
Indeed, once again, from (3.13), (3.16) and Lemma 2.4, we see that
It is not difficult to see that the above inequality can be rewritten in the form (3.18), as claimed.
Claim 4
The sequence \(\{x_n\}\) converges strongly to \(p_*=P_\varOmega (h(p_*))\).
Suppose that \(\{\Vert x_{n_m}-p_*\Vert ^2\}\) is an arbitrary subsequence of \(\{\Vert x_n-p_*\Vert ^2\}\) such that
It follows from Claim 2, \(\alpha _{n_m}\rightarrow 0\) and \(\rho _{n_m}\in [a,b]\subset (0,2)\) that
Since \(\zeta _{n_m}\le \zeta \), we have
which implies that \(D_{n_m}\rightarrow 0\). Hence, we can find that
In addition, we also have
This implies that
By the boundedness of \(\{x_{n_m}\}\) and \(\{h(x_{n_m})\}\), we observe that
where \(M_2=\sup _{m}\{\Vert h(x_{n_m})-x_{n_m}\Vert _{\mathbb {H}}\}\). Thus, it follows from (3.22) and (3.23) that
Finally, to apply Lemma 2.5, from Claim 3, it suffices to prove the following inequality
It is equivalent to show that \( \limsup _{m\rightarrow \infty }\langle h(p_*)-p_*,x_{n_m+1}-p_*\rangle \le 0. \) We first note that
Since \(\{x_{n_m}\}\) is a bounded sequence (Claim 1), there exists a subsequence \(\{x_{n_{m_j}}\}\) of \(\{x_{n_m}\}\) which converges weakly to some \(z\in \mathbb {H}\), such that
Furthermore, from (3.19), (3.20), (3.21) and using an argument similar to the proof of Claim 3 in Theorem 3.1, we obtain that \(z\in \varOmega \). Besides, from the definition of \(p_*\) and Lemma 2.1 (i), we obtain that
Using (3.24), (3.25), (3.26), we find that \(\limsup _{m\rightarrow \infty }c_{n_m}\le 0\). Hence, it is not difficult to see that all the hypotheses of Lemma 2.5 are satisfied. This guarantees that \(\Vert x_n-p_*\Vert \rightarrow 0\).
This completes the proof.\(\square \)
Remark 3.1
It follows from (3.6) that if \(E_n+F_n=0\), then \(\nabla U(x_n)=0\). In this case, \(x_n=(v_n,w_n)\) is a solution to the SSEP and we can stop the algorithm. Otherwise, we can select the parameter \(\zeta _n=0\), which leads to \(\gamma _n=\rho _n\frac{D_n}{E_n+F_n}\). On the other hand, we note that
$$E_n+F_n\le \kappa D_n,$$
where \(\kappa =2\max \{ 1, \max _{1\le i\le N}\{\Vert A_i\Vert ^2\}+\max _{1\le i\le N}\{\Vert B_i\Vert ^2\} \}\). This guarantees that \(D_n\rightarrow 0\) whenever \(\frac{D_n^2}{E_n+F_n}\rightarrow 0\).
Hence, the conclusions of Theorems 3.1 and 3.2 remain valid by arguments similar to the ones used in their proofs.
4 Corollaries
It is easy to see that if \(H\equiv H_2\), \(B_i\) is the identity mapping on H and \(b_i=0\) for all \(i=1,2,3,\ldots ,N\), then Problem SSEP becomes the system of split feasibility problems, that is,
where
Furthermore, Problem (4.1) reduces to the split feasibility problem in the case that \(N=1\).
We now denote \(\nabla \widehat{U}(x):=(\widehat{U}_1(x),\widehat{U}_2(x))\) for all \(x=(v,w)\in \mathbb {H}\) with
From Algorithm 1, we obtain the following algorithm.
Algorithm 3
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where the parameter \(\{\widehat{\gamma }_n\}\) is defined by
where \(\rho _n\in [a,b]\subset (0,2)\), \(\{\zeta _n\}\) is a sequence of positive real numbers bounded above by \(\zeta \), and
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
Theorem 4.1
The sequence \(\{x_n\}\) generated by Algorithm 3 converges weakly to a solution \(p_*=(v_*,w_*)\) of Problem (4.1).
From Algorithm 2, we obtain Algorithm 4 below.
Algorithm 4
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where \(h:\mathbb {H}\rightarrow \mathbb {H}\) and \(\{\alpha _n\}\) are defined as in Step 2 of Algorithm 2 while \(\{\widehat{\gamma }_n\}\) is defined as in Step 2 of Algorithm 3.
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
Theorem 4.2
The sequence \(\{x_n\}\) generated by Algorithm 4 converges strongly to the unique solution \(p_*=(v_*,w_*)\) of Problem (4.1) satisfying \(p_*=P_{\widehat{\varOmega }}(h(p_*))\).
5 Relaxed Iterative Algorithms
In this section, we consider Problem SSEP when \(C_i\) and \(Q_i\) are sublevel sets of lower semicontinuous convex functions \(c_i: H_1 \rightarrow \mathbb {R}\) and \(q_i: H_2 \rightarrow \mathbb {R}\) for \(i = 1, 2, \ldots , N\), respectively. Namely,
where \(c_i\) and \(q_i\) are subdifferentiable on \(H_1\) and \(H_2\), respectively, and the subdifferentials \(\partial c_i\) and \(\partial q_i\) are bounded on bounded sets.
At points \(v_n \in H_1\) and \(w_n \in H_2\), we define the subsets \(C_{i,n}\) and \(Q_{i,n}\) as follows:
where \(\mathfrak {c}_{i,n}\in \partial c_i(v_n)\) and \(\mathfrak {q}_{i,n}\in \partial q_i(w_n)\). It is not hard to see that \(C_{i,n}\) and \(Q_{i,n}\) are half-spaces of \(H_1\) and \(H_2\), respectively. They are called the relaxed sets of \(C_i\) and \(Q_i\). Moreover, we have \(C_i\subset C_{i,n}\) and \(Q_i\subset Q_{i,n}\).
In general, it is not easy to compute the metric projections \(P_{C_i}^{H_1}\) and \(P_{Q_i}^{H_2}\); this depends on the structure of the sets \(C_i\) and \(Q_i\). However, we do have explicit expressions for the metric projections \(P_{C_{i,n}}^{H_1}\) and \(P_{Q_{i,n}}^{H_2}\), namely
Therefore, we obtain relaxed iterative algorithms corresponding to Algorithm 1 and Algorithm 2, where \(P_{C_i}^{H_1}\) and \(P_{Q_i}^{H_2}\) are respectively replaced by \(P_{C_{i,n}}^{H_1}\) and \(P_{Q_{i,n}}^{H_2}\).
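If the relaxed set has the standard construction \(C_{i,n}=\{v\in H_1: c_i(v_n)+\langle \mathfrak {c}_{i,n},v-v_n\rangle _{H_1}\le 0\}\) (the form assumed here), then for \(\mathfrak {c}_{i,n}\ne 0\) the projection has the well-known closed form
$$P_{C_{i,n}}^{H_1}(v)=v-\frac{\max \{0,\ c_i(v_n)+\langle \mathfrak {c}_{i,n},v-v_n\rangle _{H_1}\}}{\Vert \mathfrak {c}_{i,n}\Vert _{H_1}^2}\,\mathfrak {c}_{i,n}.$$
A minimal Python sketch:

```python
import numpy as np

def proj_halfspace(v, v_n, c_val, xi):
    """Projection onto {v : c_val + <xi, v - v_n> <= 0}, where c_val = c(v_n)
    and xi is a subgradient of c at v_n (assumed nonzero when v is infeasible)."""
    a = c_val + np.dot(xi, v - v_n)
    if a <= 0:                          # v already lies in the half-space
        return v
    return v - (a / np.dot(xi, xi)) * xi

# Example: c(v) = ||v|| - 1 linearized at v_n = (3, 0); the resulting
# half-space is {v : v[0] <= 1}, so (4, 2) projects to (1, 2).
v_n = np.array([3.0, 0.0])
xi = v_n / np.linalg.norm(v_n)          # gradient of the Euclidean norm at v_n
px = proj_halfspace(np.array([4.0, 2.0]), v_n, np.linalg.norm(v_n) - 1.0, xi)
print(px)                               # [1. 2.]
```

The same formula, with \(\mathfrak {q}_{i,n}\in \partial q_i(w_n)\), gives \(P_{Q_{i,n}}^{H_2}\).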
We denote \(\nabla \tilde{U}(x):=(\tilde{U}_1(x),\tilde{U}_2(x))\) for all \(x=(v,w)\in \mathbb {H}\), where
From Algorithm 1, we obtain the following algorithm.
Algorithm 5
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where the parameter \(\{\tilde{\gamma }_n\}\) is defined by
where \(\rho _n\in [a,b]\subset (0,2)\), \(\{\zeta _n\}\) is a sequence of positive real numbers bounded above by \(\zeta \), and
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
Theorem 5.1
The sequence \(\{x_n\}\) generated by Algorithm 5 converges weakly to a solution of Problem SSEP.
Proof
In view of the proof of Theorem 3.1, we can infer that the sequence \(\{x_n\}\) is bounded and
We will prove that all weak sequential limits of \(\{x_n\}\) belong to \(\varOmega \). Indeed, since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}:=\{(v_{n_k},w_{n_k})\}\) of \(\{x_n\}\) which converges weakly to some \(z=(v_*,w_*)\in \mathbb {H}\), that is,
Since the subdifferential \(\partial c_i\) is assumed to be bounded on bounded sets and the sequence \(\{x_n\}\) is bounded, there exists a positive real number \(M_3\) such that
for all \(n\in \mathbb {N}\). It follows from \(P_{C_{i,n}}^{H_1}({v_n})\in C_{i,n}\) and the definition of \(C_{i,n}\) that
From (5.1) and (5.4), we can find that
By the lower semicontinuity of the function \(c_i\), we have
Therefore, we obtain \(v_*\in C_i\). By an argument similar to the one above and using (5.2), we also obtain that \(w_*\in Q_i\). Furthermore, using (5.3) and repeating the proof of Theorem 3.1 in Claim 3, we can deduce that
Hence, we have \((v_*,w_*)\in \varOmega \).
Once again, we use an argument similar to the one employed at the end of the proof of Theorem 3.1 and conclude that \((v_*, w_*)\) is the unique weak sequential limit of \(\{x_n\}\), so that \(v_n\rightharpoonup v_*\) and \(w_n\rightharpoonup w_*\).
This completes the proof.\(\square \)
From Algorithm 2, we obtain the algorithm below.
Algorithm 6
Step 1. Choose \(x_0=(v_0,w_0)\in \mathbb {H}:=H_1\times H_2\) arbitrarily and set \(n:=0\).
Step 2. Given \(x_{n}=(v_n,w_n)\), compute
where \(h:\mathbb {H}\rightarrow \mathbb {H}\) and \(\{\alpha _n\}\) are defined as in Step 2 of Algorithm 2 while \(\{\tilde{\gamma }_n\}\) is defined as in Step 2 of Algorithm 5.
Step 3. Set \(n\leftarrow n+1\), and go to Step 2.
By using a line of proof similar to the one in the proof of Theorem 3.2 and combining it with Theorem 5.1, we obtain the following theorem.
Theorem 5.2
The sequence \(\{x_n\}\) generated by Algorithm 6 converges strongly to \(p_*=P_\varOmega (h(p_*))\).
6 Numerical Test
Our algorithms are implemented in MATLAB R2014a running on a desktop computer (DESKTOP-8LDGIN0) with an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz (up to 2.40 GHz) and 4 GB RAM.
Example 6.1
We consider the Problem SSEP under the following hypotheses:
(D1) \(H_1=\mathbb {R}^m\), \(H_2=\mathbb {R}^k\) and \(H=\mathbb {R}^p\) are three finite-dimensional Euclidean spaces. For each \(i=1,2,3\), the sets \(C_i\) and \(Q_i\) are defined by
where the coordinates of the centers \(\mathfrak {a}_i\) and \(\widehat{\mathfrak {a}}_i\) are randomly generated in the interval \([-2,2]\), and the radii \(\mathfrak {R}_i\) and \(\widehat{\mathfrak {R}}_i\) are randomly generated in the intervals [10, 20] and [20, 30], respectively.
(D2) \(A_i: \mathbb {R}^m\rightarrow \mathbb {R}^p\) and \(B_i: \mathbb {R}^k\rightarrow \mathbb {R}^p\) (\(i=1,2,3\)) are bounded linear operators where the elements of their representing matrices are randomly generated in the closed interval \([-5, 5]\).
(D3) \(b_i=0\) for all \(i=1,2,3\).
(D4) Since \(0=(0,0)\in \varOmega \), we have
We apply Algorithm 2 with \(m=100, k=200, p=300\) and \(h(x)=0.05x\) for all \(x\in \mathbb {R}^m\times \mathbb {R}^k\). It is not difficult to see that \(p_*= (0,0)\). We take an initial point \(x_0=(v_0,w_0)\) whose coordinates \(v_0\) and \(w_0\) are randomly generated in the closed interval [20, 40], and select the control parameters as follows:
We use the stopping rule
where TOL is a given tolerance and \(x_n=(v_n,w_n)\). The numerical results are presented in Table 1. The behavior of err is shown in Fig. 1.
We also compare our Algorithm 2 with the algorithm defined by [23, Theorem 3.5] (ALGO-T, for short). The parameters for the ALGO-T are chosen as follows:
where \(N=3\) and \(\gamma _{A,B}=\max _{1\le i\le 3}\{\Vert A_i\Vert , \Vert B_i\Vert \}\). The numerical results are presented in Table 2.
Example 6.2
We consider Problem (4.1) under the following hypotheses:
(D1) \(H_1=H_2=H=L^2[0,1]\) with the inner product
and the norm
The sets \(C_i\) and \(Q_i\) (\(i=1,2,3\)) are given by
where
(D2) For each \(i=1,2,3\), the operators \(A_i: L^2[0,1]\rightarrow L^2[0,1]\) are defined by
(D3) \(b_i=0\) for all \(i=1,2,3\).
(D4) It is easy to see that
is a nonempty set, since \((t,2t/(i^2+1))\in \varOmega \).
We use Algorithm 4 with \(h(x)=0.25x\) for all \(x\in L^2[0,1]\times L^2[0,1]\), the initial point \(x_0=(\exp (t),\log (t+1))\), and select the parameters as follows:
We use the stopping criterion
where TOL is a given tolerance. The numerical results are presented in Table 3. The behavior of err is described in Fig. 2.
We also compare our Algorithm 4 with the ALGO-T. The parameters for the ALGO-T are chosen as follows:
where \(N=3\) and \(\gamma _{A,B}=\max _{1\le i\le 3}\{\Vert A_i\Vert , \Vert B_i\Vert \}\). The numerical results are presented in Table 4.
Example 6.3
Let \(\mathbb {R}^m\) and \(\mathbb {R}^k\) be two finite-dimensional Euclidean spaces. We consider the signal recovery problem via the following LASSO problem:
where \(A \in \mathbb {R}^{m\times k}\), \(w\in \mathbb {R}^m\), \(p>0\) and \(\Vert \cdot \Vert _1\) is the \(l_1\)-norm. Here A is a sensing matrix generated from a standard normal distribution. The true sparse signal \(v_*\) is constructed from the uniform distribution on the interval \([-2, 2]\) with p randomly placed nonzero elements. The sample data \(w_* = Av_*\) is assumed to be noise-free.
In relation to Problem (4.1) with \(N=1\), we define
Thus, we define a convex function
and denote the relaxed set \(C_n\) by
where \(\mathfrak {c}_n\in \partial c(v_n)\). The subdifferential \(\partial c\) at \(v_n\in \mathbb {R}^k\) is given by
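For illustration, assume the convex function is \(c(v)=\Vert v\Vert _1-t\) for some hypothetical bound \(t>0\) (this choice is an assumption made for the sketch, not quoted from the text). A standard subgradient of the \(l_1\)-norm at \(v_n\) is the componentwise sign vector, and the projection onto the relaxed half-space \(C_n\) is then a rank-one correction:

```python
import numpy as np

def l1_subgrad(v):
    """A subgradient of c(v) = ||v||_1 - t at v: componentwise sign
    (0 is chosen on zero coordinates; any value in [-1, 1] would do)."""
    return np.sign(v)

def relaxed_proj(v, v_n, t):
    """Projection onto C_n = {v : ||v_n||_1 - t + <xi, v - v_n> <= 0}."""
    xi = l1_subgrad(v_n)
    a = np.linalg.norm(v_n, 1) - t + np.dot(xi, v - v_n)
    if a <= 0 or not np.any(xi):        # already feasible, or xi = 0
        return v
    return v - (a / np.dot(xi, xi)) * xi

v_n = np.array([2.0, -1.0, 0.0])        # ||v_n||_1 = 3 violates the bound t = 1
print(relaxed_proj(v_n, v_n, t=1.0))    # [1. 0. 0.], with l1-norm exactly 1
```

With this relaxed projection in place, \(P_{C_n}\) replaces \(P_C\) in Algorithms 5 and 6, exactly as described above.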
We use Algorithm 5 and Algorithm 6 with the initial point \(x_0=(v_0,w_0)\), where \(v_0\) and \(w_0\) are the origins of \(\mathbb {R}^k\) and \(\mathbb {R}^m\), respectively, and select the parameters as follows:
In Algorithm 6, the mapping \(h:\mathbb {R}^m\times \mathbb {R}^k\rightarrow \mathbb {R}^m\times \mathbb {R}^k\) is given by \(h(x)=0.5x\) for all \(x\in \mathbb {R}^m\times \mathbb {R}^k\).
The mean squared error is used to measure the recovery accuracy:
which is required to be smaller than a given tolerance TOL.
We also compare our algorithms with some previous ones ([24, Algorithm 2-SEF], [25, Algorithm (2.3)] and [10, Algorithm 4.1]). The parameters for each algorithm are chosen as follows:
-
Algorithm A ([24, Algorithm 2-SEF]): \( \rho _n=3.9. \)
-
Algorithm B ([25, Algorithm (2.3)]): \( \gamma =\dfrac{1}{2\Vert A\Vert ^2}. \)
-
Algorithm C ([10, Algorithm 4.1]): \( \rho _n=3.9. \)
The numerical results we have obtained are shown in Table 5. In Figs. 3 and 4, we illustrate the original signal and the signals recovered by the above algorithms.
Remark 6.1
The numerical experiments above show that our new algorithms outperform several previously proposed algorithms [10, 23,24,25] in terms of both the number of iterations and the CPU time.
Availability of data and materials
Not applicable.
References
Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: A new class of alternating proximal minimization algorithms with costs-to-move. SIAM J. Optim. 18, 1061–1081 (2007)
Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Alternating proximal algorithms for weakly coupled minimization problems. Applications to dynamical games and PDE’s. J. Convex Anal. 15, 485–506 (2008)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer (2011)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Chang, S.-S., Yang, L., Qin, L., Ma, Z.: Strongly convergent iterative methods for split equality variational inclusion problems in Banach spaces. Acta Math. Sci. 36, 1641–1650 (2016)
Eslamian, M., Shehu, Y., Iyiola, O.S.: A strong convergence theorem for a general split equality problem with applications to optimization and equilibrium problem. Calcolo 55(48) (2018)
Izuchukwu, C., Mewomo, O.T., Okeke, C.C.: Systems of variational inequalities and multiple-set split equality fixed-point problems for countable families of multivalued type-one mappings of the demicontractive type. Ukr. Math. J. 71(11), 1692–1718 (2020)
Kazmi, K.R., Ali, R., Furkan, M.: Common solution to a split equality monotone variational inclusion problem, a split equality generalized general variational-like inequality problem and a split equality fixed point problem. Fixed Point Theory 20(1), 211–232 (2019)
López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Problems 28(8), 085004 (2012)
Moudafi, A., Al-Shemas, E.: Simultaneous iterative methods for split equality problems. Trans. Math. Program. Appl. 1(2), 1–11 (2013)
Moudafi, A.: Alternating CQ-algorithms for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal. 15, 809–818 (2014)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73(4), 591–597 (1967)
Reich, S., Tuyen, T.M.: A new approach to solving split equality problems in Hilbert spaces. Optimization 71(15), 4423–4445 (2022)
Reich, S., Tuyen, T.M., Ha, M.T.N.: The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 14, 2335–2353 (2020)
Reich, S., Tuyen, T.M., Ha, M.T.N.: An optimization approach to solving the split feasibility problem in Hilbert spaces. J. Global Optim. 79, 837–852 (2021)
Reich, S., Tuyen, T.M., Ha, M.T.N.: A product space approach to solving the split common fixed point problem in Hilbert spaces. J. Nonlinear Convex Anal. 21(11), 2571–2588 (2021)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: A modified Halpern algorithm for approximating a common solution of split equality convex minimization problem and fixed point problem in uniformly convex Banach spaces. Comp. Appl. Math. 38(77) (2019)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: General alternative regularization method for solving split equality common fixed point problem for quasi-pseudocontractive mappings in Hilbert spaces. Ricerche Mat. 69, 235–259 (2020)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 17(5), 2733–2759 (2021)
Tian, D., Shi, L., Chen, R.: Iterative algorithm for solving the multiple-sets split equality problem with split self-adaptive step size in Hilbert spaces. J. Inequal. Appl. 2016, 34 (2016)
Tuyen, T.M.: Regularization methods for the split equality problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 46(44) (2023)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: A gradient projection method for solving split equality and split feasibility problems in Hilbert spaces. Optimization 64, 2321–2341 (2015)
Yang, Q.: The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems 20(4), 1261 (2004)
Zhao, J.: Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 64(15), 2619–2630 (2015)
Acknowledgements
All the authors are grateful to the editors and to an anonymous referee for their useful comments and helpful suggestions.
Funding
Nguyen Song Ha and Truong Minh Tuyen were supported by the Science and Technology Fund of the Thai Nguyen University of Sciences.
Contributions
All authors wrote the main manuscript text and reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no conflict of interest.
Ethical Approval
Not applicable.
Cite this article
Ha, N.S., Tuyen, T.M. New Algorithms for Solving the Split Equality Problems in Hilbert Spaces. Acta Math Vietnam (2024). https://doi.org/10.1007/s40306-024-00552-6