Abstract
We propose a method for solving the split variational inequality problem (SVIP) involving Lipschitz continuous and pseudomonotone mappings. The proposed method is inspired by the Halpern subgradient extragradient method for solving the monotone variational inequality problem, and it uses a simple step-size rule. A strong convergence theorem for an algorithm solving such a SVIP is proved without knowledge of the Lipschitz constants of the mappings. As a consequence, we obtain a strongly convergent algorithm for finding a solution of the split feasibility problem (SFP), which requires only two projections at each iteration step. A simple numerical example is given to illustrate the proposed algorithm.
1 Introduction
Let \({\mathcal{H}}_{1}\) and \({\mathcal{H}}_{2}\) be two real Hilbert spaces and let \(A:{\mathcal{H}}_{1} \to {\mathcal{H}}_{2}\) be a bounded linear operator. Let C and Q be two nonempty closed convex subsets of \({\mathcal{H}}_{1}\) and \({\mathcal{H}}_{2}\), respectively. Given mappings \(F_{1}: {\mathcal{H}}_{1} \to {\mathcal{H}}_{1}\) and \(F_{2}: {\mathcal{H}}_{2} \to {\mathcal{H}}_{2}\), the split variational inequality problem (SVIP for short), first introduced by Censor et al. [8], is to find a solution x∗ of a variational inequality problem in the space \({\mathcal{H}}_{1}\) such that its image y∗ = A(x∗) under the operator A is a solution of another variational inequality problem in the space \({\mathcal{H}}_{2}\).
More specifically, the SVIP is to find \(x^{\ast }\in C\) such that
$$ \langle F_{1}(x^{\ast}), x-x^{\ast}\rangle\geq 0\quad\forall x\in C, $$
and such that the point \(y^{\ast }=A(x^{\ast })\in Q\) satisfies
$$ \langle F_{2}(y^{\ast}), y-y^{\ast}\rangle\geq 0\quad\forall y\in Q. $$
When F1 = 0 and F2 = 0, the SVIP reduces to the split feasibility problem (SFP for short): find \(x^{\ast }\in C\) such that \(A(x^{\ast })\in Q\),
which was first introduced by Censor and Elfving [4] in finite-dimensional Hilbert spaces for modeling inverse problems. Recently, it has been found that the SFP can also be used to model intensity-modulated radiation therapy [3, 5, 10, 23] and other real-world problems.
The SVIP was introduced and investigated by Censor et al. [8] in the case when F1 is α1-inverse strongly monotone on \({\mathcal{H}}_{1}\) and F2 is α2-inverse strongly monotone on \({\mathcal{H}}_{2}\). Their algorithm starts from a given point \(x^{0}\in {\mathcal{H}}_{1}\) and, for all n ≥ 0, defines the next iterate as
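This iteration (as given in [8], in the notation defined below) is

$$ x^{n+1} = P_{C}^{F_{1}, \lambda}\left(x^{n} + \gamma A^{\ast}\left(P_{Q}^{F_{2}, \lambda} - I\right)A x^{n}\right), $$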
where \(\gamma \in \left (0, \frac {1}{\|A\|^{2}}\right )\), \(0\leq \lambda \leq 2\min \limits \{\alpha _{1}, \alpha _{2}\}\) and \(P_{C}^{F_{1}, \lambda }\) and \(P_{Q}^{F_{2}, \lambda }\) stand for PC(I − λF1) and PQ(I − λF2), respectively. They showed that the sequence {xn} converges weakly to a solution of the split variational inequality problem, provided that the solution set of the SVIP is nonempty.
Since the solution set of the variational inequality problem VIP(C, F), for \(F: {\mathcal{H}}\to {\mathcal{H}}\), coincides with the set of fixed points of the mapping T from \({\mathcal{H}}\) to \({\mathcal{H}}\) defined by T(x) = PC(x − λF(x)) for all \(x\in {\mathcal{H}}\) (λ > 0 fixed), the SVIP is an instance of the split common fixed point problem (SCFPP for short), which was introduced in 2009 by Censor and Segal [11]: find \(x^{\ast }\in \text{Fix}(U)\) such that \(A(x^{\ast })\in \text{Fix}(T)\),
where \(U: {\mathcal{H}}_{1}\to {\mathcal{H}}_{1}\) and \(T: {\mathcal{H}}_{2}\to {\mathcal{H}}_{2}\) are given mappings. Many authors proposed several methods for solving the SCFPP, see [1, 2, 13, 21, 25] and the references therein.
It is well-known (see e.g. [14, p. 1110]) that the projection method for monotone variational inequality problems (VIPs) may fail to converge. To overcome this difficulty, one can apply the extragradient method, first proposed by Korpelevich [19] for saddle point problems, which guarantees convergence for monotone VIPs. However, the extragradient method may be costly, since it requires two projections onto C at each step. Motivated by this fact, Censor et al. [6] introduced an algorithm, called the subgradient extragradient method, for solving the monotone variational inequality problem
in which the second projection onto the constrained set C is replaced by the one onto a half-space Tn containing it. Their algorithm is of the form
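namely (see [6]):

$$ \begin{cases} y^{n} = P_{C}(x^{n}-\lambda F(x^{n})),\\ T_{n} = \{w\in\mathcal{H}: \langle x^{n}-\lambda F(x^{n})-y^{n},\ w-y^{n}\rangle\leq 0\},\\ x^{n+1} = P_{T_{n}}(x^{n}-\lambda F(y^{n})). \end{cases} \tag{1} $$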
It was proved that if \(F: {\mathcal{H}}\to {\mathcal{H}}\) is monotone on C, L-Lipschitz continuous on \({\mathcal{H}}\), and the stepsize \(\lambda \in \left (0, \frac {1}{L}\right )\), then the sequence {xn} generated by (1) converges weakly to a solution x∗ of the VIP(C, F). Following the introduction of the subgradient extragradient method, the same authors also proposed further modifications in Euclidean space (see [9]).
In order to obtain the strong convergence of the subgradient extragradient method, Censor et al. [7] introduced the following hybrid subgradient extragradient method
and they proved, under appropriate conditions, that the sequence {xn} generated by (2) converges strongly to a point u∗ = PSol(C, F)(x0).
Inspired by the results in [7], Kraikaew and Saejung [20] introduced the following Halpern subgradient extragradient method for solving VIP(C, F)
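which generates iterates by (see [20]):

$$ \begin{cases} y^{n} = P_{C}(x^{n}-\lambda F(x^{n})),\\ T_{n} = \{w\in\mathcal{H}: \langle x^{n}-\lambda F(x^{n})-y^{n},\ w-y^{n}\rangle\leq 0\},\\ z^{n} = P_{T_{n}}(x^{n}-\lambda F(y^{n})),\\ x^{n+1} = \alpha_{n}x^{0}+(1-\alpha_{n})z^{n}, \end{cases} \tag{3} $$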
where \(\lambda \in \left (0, \frac {1}{L}\right )\), {αn}⊂ (0,1), \(\lim _{n\to \infty }\alpha _{n}=0\) and \({\sum }_{n=0}^{\infty }\alpha _{n}=\infty \). They proved that the sequence {xn} generated by (3) converges strongly to PSol(C, F)(x0).
In the present paper, inspired by the works mentioned above, we present a modified Halpern subgradient extragradient method for the SVIP in the case when F1 and F2 are Lipschitz continuous pseudomonotone mappings whose Lipschitz constants are not required to be known. The strong convergence of the proposed method is established under suitable conditions.
The paper is organized as follows. In Section 2, we present some preliminaries that will be needed in the sequel. Section 3 deals with the algorithm and its convergence analysis. Finally, in Section 4, we illustrate the proposed method by considering a simple numerical experiment.
2 Preliminaries
Let C be a nonempty closed convex subset of a real Hilbert space \({\mathcal{H}}\). The strong convergence of {xn} to x is written as xn → x, while the weak convergence of {xn} to x is denoted by \(x^{n}\rightharpoonup x\). Recall that the metric projection from \({\mathcal{H}}\) onto C, denoted PC, is defined in such a way that, for each \(x\in {\mathcal{H}}\), PC(x) is the unique element in C with the property
Some important properties of the projection operator PC are gathered in the following lemma.
Lemma 1
([15])
- (i) For given \(x\in {\mathcal{H}}\) and y ∈ C, y = PC(x) if and only if
$$ \langle x-y, z-y\rangle\leq 0\quad\forall z\in C. $$
- (ii) PC is nonexpansive, that is,
$$ \|P_{C}(x)-P_{C}(y)\| \leq \|x-y\|\quad\forall x, y\in \mathcal{H}. $$
- (iii) For all \(x\in {\mathcal{H}}\) and y ∈ C, we have
$$ \|P_{C}(x)-y\|^{2} \leq \|x-y\|^{2} - \|P_{C}(x)-x\|^{2}. $$
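As an illustration of Lemma 1 (not part of the original text), the projection onto a half-space has a simple closed form, and properties (i)–(iii) can be checked numerically on random samples; the set C and the sample points below are illustrative choices.

```python
# Numerical illustration of Lemma 1 for the half-space
# C = {z in R^2 : <a, z> <= b}, whose projection has the closed form
# P_C(x) = x - max(0, <a, x> - b) / ||a||^2 * a.
import numpy as np

a, b = np.array([2.0, -3.0]), -4.0   # illustrative half-space

def proj_halfspace(x):
    return x - max(0.0, a @ x - b) / (a @ a) * a

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    px, py = proj_halfspace(x), proj_halfspace(y)
    z = proj_halfspace(rng.normal(size=2))       # an arbitrary point of C
    # (i) variational characterization: <x - P_C(x), z - P_C(x)> <= 0
    assert (x - px) @ (z - px) <= 1e-10
    # (ii) nonexpansiveness: ||P_C(x) - P_C(y)|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-10
    # (iii) ||P_C(x) - z||^2 <= ||x - z||^2 - ||P_C(x) - x||^2 for z in C
    assert np.linalg.norm(px - z)**2 <= np.linalg.norm(x - z)**2 - np.linalg.norm(px - x)**2 + 1e-10
print("Lemma 1 (i)-(iii) verified on random samples")
```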
For more information on the projection operator PC, see [16, Section 3] and [18].
Definition 1
Let \({\mathcal{H}}_{1}\) and \({\mathcal{H}}_{2}\) be two Hilbert spaces and let \(A: {\mathcal{H}}_{1}\to {\mathcal{H}}_{2}\) be a bounded linear operator. An operator \(A^{\ast }: {\mathcal{H}}_{2}\to {\mathcal{H}}_{1}\) with the property
$$ \langle A(x), y\rangle = \langle x, A^{\ast}(y)\rangle $$
for all \(x\in {\mathcal{H}}_{1}\) and \(y\in {\mathcal{H}}_{2}\), is called the adjoint operator of A.
The adjoint operator of a bounded linear operator A between Hilbert spaces \({\mathcal{H}}_{1}\), \({\mathcal{H}}_{2}\) always exists and is uniquely determined. Furthermore, A∗ is a bounded linear operator and ∥A∗∥ = ∥A∥.
Definition 2
([12, 17]) A mapping \(F:{\mathcal{H}}\to {\mathcal{H}}\) is said to be
- (i) L-Lipschitz continuous on \({\mathcal{H}}\) if
$$ \|F(x)-F(y)\| \leq L\|x-y\|\quad\forall x, y\in\mathcal{H}; $$
- (ii) monotone on C if
$$ \langle F(x)-F(y), x-y\rangle\geq 0\quad\forall x, y\in C; $$
- (iii) pseudomonotone on C if
$$ \langle F(y), x-y\rangle\geq 0\quad\Longrightarrow\quad \langle F(x), x-y\rangle\geq 0\quad\forall x, y\in C. $$
The next lemmas will be used for proving the convergence of the algorithm proposed in the next section.
Lemma 2
Let C be a nonempty closed convex subset of a real Hilbert space \({\mathcal{H}}\). Let \(F: {\mathcal{H}}\to {\mathcal{H}}\) be pseudomonotone on C and L-Lipschitz continuous on \({\mathcal{H}}\) such that the solution set Sol(C, F) of the VIP(C, F) is nonempty. Let \(x\in {\mathcal{H}}\), μ ∈ (0,1), λ > 0 and define
Then for all x∗∈Sol(C, F)
Proof
By the definition of y and Lemma 1, it follows that
Combining this inequality and the definition of T, we get C ⊂ T.
Since \(x^{\ast }\in \text {Sol}(C, F)\) and y ∈ C, we have, in particular, \(\langle F(x^{\ast }), y-x^{\ast }\rangle \geq 0\). Using the pseudomonotonicity on C of F, we get
From \(z=P_{T}(x-\lambda F(y))\), we have z ∈ T. This together with the definition of T implies
Since x∗∈ C and C ⊂ T, we get x∗∈ T. Thus, using Lemma 1, (4) and (5), we obtain
If \(F(x)\neq F(y)\) then from the definition of γ, we have
Using the Cauchy–Schwarz inequality, (7) and the inequality of arithmetic and geometric means, we obtain
Substituting (8) into (6), we get
If \(F(x)=F(y)\) then \(\gamma =\lambda \). From (6), we have
This completes the proof of Lemma 2. □
Lemma 3
Let C be a nonempty closed convex subset of a real Hilbert space \({\mathcal{H}}\). Let \(F: {\mathcal{H}}\to {\mathcal{H}}\) be monotone and L-Lipschitz continuous on \({\mathcal{H}}\). Assume that λn ≥ a > 0 for all n, {xn} is a sequence in \({\mathcal{H}}\) satisfying \(x^{n}\rightharpoonup \overline {x}\) and \(\lim _{n\to \infty }\|x^{n}-y^{n}\|=0\), where yn = PC(xn − λnF(xn)) for all n. Then \(\overline {x}\in \text {Sol}(C, F)\).
Proof
It follows from \(x^{n}\rightharpoonup \overline {x}\) and \(\lim _{n\to \infty }\|x^{n}-y^{n}\|=0\) that {xn} is bounded and \(y^{n}\rightharpoonup \overline {x}\). Then {yn}, \(\{F(x^{n})\}\) are also bounded thanks to \(y^{n}\rightharpoonup \overline {x}\) and the Lipschitz continuity of F. Since \(\{y^{n}\}\subset C\), \(y^{n}\rightharpoonup \overline {x}\) and C is closed and convex, it is also weakly closed, and thus \(\overline {x}\in C\).
For all x ∈ C, from \(y^{n}=P_{C}(x^{n}-\lambda _{n} F(x^{n}))\), we have
This, together with the monotonicity of F and the Cauchy–Schwarz inequality, implies that
Taking the limit in (9) as \(n\to \infty \), using the boundedness of {F(xn)}, {yn}, and recalling that \(\lim _{n\to \infty }\|x^{n}-y^{n}\|=0\) and \(x^{n}\rightharpoonup \overline {x}\), we obtain \(\langle F(x), \overline {x}-x\rangle \leq 0\) and hence,
Let \(x_{t}=(1-t)\overline {x}+tx\in C\) for t ∈ [0,1]. From (10), we have
Then, for all 0 < t ≤ 1
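since \(x_{t}-\overline {x}=t(x-\overline {x})\), we have
$$ 0 \leq \langle F(x_{t}), x_{t}-\overline{x}\rangle = t\langle F(x_{t}), x-\overline{x}\rangle, \quad\text{and hence}\quad \langle F(x_{t}), x-\overline{x}\rangle\geq 0. $$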
Taking the limit as \(t\to 0^{+}\), we have \(\langle F(\overline {x}), x-\overline {x}\rangle \geq 0\), i.e., \(\overline {x}\in \text {Sol}(C, F)\). □
Lemma 4
([22, Remark 4.4]) Let {an} be a sequence of nonnegative real numbers. Suppose that for any integer m, there exists an integer p such that p ≥ m and \(a_{p}\leq a_{p+1}\). Let n0 be an integer such that \(a_{n_{0}}\leq a_{n_{0}+1}\) and define, for every integer n ≥ n0,
$$ \tau(n)=\max\{k\in\mathbb{N}: n_{0}\leq k\leq n,\ a_{k}\leq a_{k+1}\}. $$
Then \(\{\tau (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence satisfying \(\lim _{n\to \infty }\tau (n)=\infty \) and, for all n ≥ n0, the following inequalities hold true:
$$ a_{\tau(n)}\leq a_{\tau(n)+1} \quad\text{and}\quad a_{n}\leq a_{\tau(n)+1}. $$
3 The Algorithm and Convergence Analysis
In this section, we propose a strong convergence algorithm for solving SVIP by using the modified Halpern subgradient extragradient method. We impose the following assumptions on the mappings F1 and F2 associated with the SVIP.
(A1) \(F_{1}: {\mathcal{H}}_{1}\to {\mathcal{H}}_{1}\) is pseudomonotone on C and L1-Lipschitz continuous on \({\mathcal{H}}_{1}\).
(A2) \(\limsup _{n\to \infty }\langle F_{1}(x^{n}), y-y^{n}\rangle \leq \langle F_{1}(\overline {x}), y-\overline {y}\rangle \) for every sequence {xn}, {yn} in \({\mathcal{H}}_{1}\) converging weakly to \(\overline {x}\) and \(\overline {y}\), respectively.
(A3) \(F_{2}: {\mathcal{H}}_{2}\to {\mathcal{H}}_{2}\) is pseudomonotone on Q and L2-Lipschitz continuous on \({\mathcal{H}}_{2}\).
(A4) \(\limsup _{n\to \infty }\langle F_{2}(u^{n}), v-v^{n}\rangle \leq \langle F_{2}(\overline {u}), v-\overline {v}\rangle \) for every sequence {un}, {vn} in \({\mathcal{H}}_{2}\) converging weakly to \(\overline {u}\) and \(\overline {v}\), respectively.
Remark 1
- (i) In finite-dimensional spaces, conditions (A2) and (A4) follow automatically from the Lipschitz continuity of F1 and F2.
- (ii) If F1 and F2 satisfy the assumptions (A1)–(A4), then the solution sets Sol(C, F1) and Sol(Q, F2) of VIP(C, F1) and VIP(Q, F2) are closed and convex (see e.g. [24]). Therefore, the solution set Ω = {x∗∈Sol(C, F1) : Ax∗∈Sol(Q, F2)} of the SVIP is also closed and convex.
The algorithm can be expressed as follows:
The following theorem shows the convergence of the algorithm.
Theorem 1
Suppose that the assumptions (A1)–(A4) hold. Then the sequence {xn} generated by Algorithm 1 converges strongly to an element x∗∈Ω, where x∗ = PΩ(x0), provided the solution set Ω of the SVIP is nonempty.
Proof
The proof of the theorem is divided into several steps.
Step 1 The sequences {xn}, {yn}, {zn}, {tn} and {vn} are bounded.
Since \(x^{\ast }\in {\Omega }\), we have \(x^{\ast }\in \text {Sol}(C, F_{1})\) and \(A(x^{\ast })\in \text {Sol}(Q, F_{2})\). From Lemma 2, we have, for all n ≥ 0
Since F2 is L2-Lipschitz continuous on \({\mathcal{H}}_{2}\), we get \(\|F_{2}(u^{n})-F_{2}(v^{n})\| \leq L_{2}\|u^{n}-v^{n}\|\). Thus, by induction, for every n ≥ 0, we have
By the definition of μn+ 1, we have \(\mu _{n+1}\leq \mu _{n}\) for all n ≥ 0. This together with (13) implies that the limit of {μn} exists. We denote \(\lim _{n\to \infty }\mu _{n}=\mu ^{\ast }\). It is clear that \(\mu ^{\ast }\geq \min \limits \left (\frac {\mu }{L_{2}}, \mu _{0}\right )>0\).
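The two properties used here, the monotone nonincrease of {μn} and the lower bound \(\min \left (\frac {\mu }{L_{2}}, \mu _{0}\right )\), are exactly those produced by an adaptive rule of the following standard form, restated here as a sketch of the update in Algorithm 1:

$$ \mu_{n+1} = \begin{cases} \min\left\{\dfrac{\mu\|u^{n}-v^{n}\|}{\|F_{2}(u^{n})-F_{2}(v^{n})\|},\ \mu_{n}\right\} & \text{if } F_{2}(u^{n})\neq F_{2}(v^{n}),\\ \mu_{n} & \text{otherwise.} \end{cases} $$

Indeed, the L2-Lipschitz continuity of F2 gives \(\frac{\mu\|u^{n}-v^{n}\|}{\|F_{2}(u^{n})-F_{2}(v^{n})\|}\geq \frac{\mu}{L_{2}}\), so \(\mu_{n}\geq \min \left (\frac {\mu }{L_{2}}, \mu _{0}\right )\) by induction.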
Using the same argument as above, we have
From \(\lim _{n\to \infty }\mu _{n}=\mu ^{\ast }>0\) and \(\lim _{n\to \infty }\lambda _{n}=\lambda ^{\ast }>0\), we get \(\lim _{n\to \infty }\left (1-\mu \frac {\mu _{n}}{\mu _{n+1}}\right )=1-\mu >0\), \(\lim _{n\to \infty }\left (1-\lambda \frac {\lambda _{n}}{\lambda _{n+1}}\right )=1-\lambda >0\). This implies that there exists \(n_{0}\in \mathbb {N}\) such that \(1-\mu \frac {\mu _{n}}{\mu _{n+1}}>0\) and \(1-\lambda \frac {\lambda _{n}}{\lambda _{n+1}}>0\) for all n ≥ n0. By (11) and (12), we get
From (14), since un = A(xn), we obtain, for all n ≥ n0
Hence
On the other hand
Combining (16) and (17), we obtain
From (15), (18) and \(\{\delta _{n}\}\subset [\underline {\delta }, \overline {\delta }]\subset \left (0, \frac {1}{\|A\|^{2}+1}\right )\), we get
Since \(\lambda _{n}\leq \lambda _{0}\) and \(\mu _{n}\leq \mu _{0}\) for all n ≥ 0, and since F1 is L1-Lipschitz continuous on \({\mathcal{H}}_{1}\) and F2 is L2-Lipschitz continuous on \({\mathcal{H}}_{2}\), we have
On the other hand
This implies that
So, by induction, we obtain, for every n ≥ n0 that
Hence, the sequence {xn} is bounded and so are the sequences {yn}, {zn}, {tn} and {vn} thanks to (19), (20) and (21).
Step 2 We prove that {xn} converges strongly to x∗.
We have
which together with (19) implies, for all n ≥ n0
Let us consider two cases.

Case 1. There exists n1 such that \(\{\|x^{n}-x^{\ast }\|\}\) is decreasing for n ≥ n1. In this case the limit of \(\{\|x^{n}-x^{\ast }\|\}\) exists and we denote \(\lim _{n\to \infty }\|x^{n}-x^{\ast }\|^{2}=\xi \geq 0\). It follows from (24), \(\lim _{n\to \infty }\alpha _{n}=0\) and the boundedness of {tn} that
It follows from (25) that
Combining (12), (25) and \(\lim _{n\to \infty }\left (1-\lambda \frac {\lambda _{n}}{\lambda _{n+1}}\right ) =1-\lambda >0\), we obtain
From (27) and the triangle inequality, we get
Using (18) and \(\{\delta _{n}\}\subset [\underline {\delta }, \overline {\delta }]\subset \left (0, \frac {1}{\|A\|^{2}+1}\right )\), we have
Combining (26) and (29), we get
Note that, for all n,
It follows from the above inequality and (30) that
We now prove that
Choose a subsequence \(\{t^{n_{k}}\}\) of {tn} such that
Since \(\{t^{n_{k}}\}\) is bounded, we may assume that \(\{t^{n_{k}}\}\) converges weakly to some \(\overline {t}\in {\mathcal{H}}_{1}\).
Therefore
From (32), (28), (27) and \(t^{n_{k}}\rightharpoonup \overline {t}\), we conclude that \(x^{n_{k}}\), \(y^{n_{k}}\) and \(z^{n_{k}}\) converge weakly to \(\overline {t}\). Since \(\{z^{n_{k}}\}\subset C\) and C is weakly closed, we get \(\overline {t}\in C\).
We prove \(\overline {t}\in \text {Sol}(C, F_{1})\).
Indeed, let x ∈ C. From the definition of \(z^{n_{k}}\) and Lemma 1, we have
Since \(\lambda _{n_{k}}>0\) for every k, it follows from the above inequality that
From \(\lim _{k\to \infty }\|y^{n_{k}}-z^{n_{k}}\|=0\), \(\lim _{k\to \infty }\lambda _{n_{k}}=\lambda ^{\ast }>0\) and the boundedness of \(\{z^{n_{k}}\}\), we get
Using (35), condition (A2) and the weak convergence of two sequences \(\{y^{n_{k}}\}\), \(\{z^{n_{k}}\}\) to \(\overline {t}\), we have
i.e., \(\overline {t}\in \text {Sol}(C, F_{1})\).
On the other hand
Combining (11) and (36) yields
Using the above inequality, \(\lim _{n\to \infty }\|u^{n}-w^{n}\|=0\), \(\lim _{n\to \infty }\left (1-\mu \frac {\mu _{n}}{\mu _{n+1}}\right )=1-\mu >0\) and the fact that {xn} is bounded, we obtain
From \(x^{n_{k}}\rightharpoonup \overline {t}\), we get \(u^{n_{k}}=A(x^{n_{k}})\rightharpoonup A(\overline {t})\). This together with \(\lim _{n\to \infty }\|u^{n}-v^{n}\|=0\) implies \(v^{n_{k}}\rightharpoonup A(\overline {t})\). Since \(\{v^{n_{k}}\}\subset Q\) and Q is closed and convex, it is also weakly closed, and thus \(A(\overline {t})\in Q\).
We prove \(A(\overline {t})\in \text {Sol}(Q, F_{2})\).
Indeed, let y ∈ Q. From the definition of \(v^{n_{k}}\) and Lemma 1, we get
Since \(\mu _{n_{k}}>0\) for every k, it follows from (37) that
Since \(\lim _{k\to \infty }\|u^{n_{k}}-v^{n_{k}}\|=0\), \(\lim _{k\to \infty }\mu _{n_{k}}=\mu ^{\ast }>0\) and the sequence \(\{v^{n_{k}}\}\) is bounded, we get
Using (38), condition (A4) and the weak convergence of \(\{u^{n_{k}}\}\), \(\{v^{n_{k}}\}\) to \(A(\overline {t})\), we obtain
i.e., \(A(\overline {t})\in \text {Sol}(Q, F_{2})\).
It follows from \(\overline {t}\in \text {Sol}(C, F_{1})\) and \(A(\overline {t})\in \text {Sol}(Q, F_{2})\) that \(\overline {t}\in {\Omega }\), which together with x∗ = PΩ(x0) implies that \(\langle x^{0}-x^{\ast }, \overline {t}-x^{\ast }\rangle \leq 0\). So, from (34), we have \(\limsup _{n\to \infty }\langle x^{0}-x^{\ast }, t^{n}-x^{\ast }\rangle \leq 0\).
From \(\lim _{n\to \infty }\|x^{n}-x^{\ast }\|^{2}=\xi \) and (25), we have
From \(\lim _{n\to \infty }\alpha _{n}=0\), the boundedness of {tn}, (33) and (39), we obtain
Assume, to get a contradiction, that ξ > 0, and choose ε = ξ > 0. It follows from (40) that there exists n2 ≥ 0 such that
Then, from (19) and (23), we get
which together with (41) implies
Thus, after a summation, we obtain
Therefore, we arrive at a contradiction
because \({\sum }_{n=0}^{\infty }\alpha _{n}=\infty \). Hence ξ = 0, which implies xn → x∗.

Case 2. Suppose that for any integer m, there exists an integer n such that n ≥ m and ∥xn − x∗∥≤∥xn+1 − x∗∥. According to Lemma 4, there exists a nondecreasing sequence {τ(n)}n≥N in \(\mathbb {N}\) such that \(\lim _{n\to \infty }\tau (n)=\infty \) and the following inequalities hold
Choose n4 ≥ N such that τ(n) ≥ n0 for all n ≥ n4. From (42) and (22), we get
From (43), we have
which together with (19) implies, for all n ≥ n4, that
Then, it follows from the above inequality, the boundedness of {tn} and \(\lim _{n\to \infty }\alpha _{n}=0\) that
From (44) and the boundedness of {xn}, {yn} and {tn}, we obtain
Arguing similarly as in the first case, we can conclude that
Then, the boundedness of {tn} and \(\lim _{n\to \infty }\alpha _{n}=0\) yield
Using the inequality
as well as (19) and (42), we obtain, for all n ≥ n4
In particular, since ατ(n) > 0
Combining the above inequality with (42), we get
Taking the limit in (46) as \(n\to \infty \), and using (45), we obtain
which implies \(x^{n}\to x^{\ast }\). This completes the proof of Theorem 1. □
Remark 2
Theorem 1 is still true if the assumptions (A1)–(A4) are replaced by the following assumptions:
- (A)
\(F_{1}: {\mathcal{H}}_{1}\to {\mathcal{H}}_{1}\) is monotone on \({\mathcal{H}}_{1}\) and L1-Lipschitz continuous on \({\mathcal{H}}_{1}\).
- (B)
\(F_{2}: {\mathcal{H}}_{2}\to {\mathcal{H}}_{2}\) is monotone on \({\mathcal{H}}_{2}\) and L2-Lipschitz continuous on \({\mathcal{H}}_{2}\).
Proof
Note that in the proof of Theorem 1, the assumptions (A2) and (A4) are used to prove \(\overline {t}\in \text {Sol}(C, F_{1})\) and \(A(\overline {t})\in \text {Sol}(Q, F_{2})\), respectively. Now we will prove \(\overline {t}\in \text {Sol}(C, F_{1})\) and \(A(\overline {t})\in \text {Sol}(Q, F_{2})\) by using assumptions (A), (B) and Lemma 3.
Indeed, from assumption (A), \(z^{n}=P_{C}(y^{n}-\lambda _{n} F_{1}(y^{n}))\), \(\lim _{n\to \infty }\|y^{n}-z^{n}\|=0\), \(\lambda _{n}\geq \min \limits \left (\frac {\lambda }{L_{1}}, \lambda _{0}\right )>0\), \(y^{n_{k}}\rightharpoonup \overline {t}\) and Lemma 3, we imply \(\overline {t}\in \text {Sol}(C, F_{1})\).
Using the same argument, from (B), vn = PQ(un − μnF2(un)), \(\lim _{n\to \infty }\|u^{n}-v^{n}\|=0\), \(\mu _{n}\geq \min \limits \left (\frac {\mu }{L_{2}},\mu _{0}\right )>0\), \(u^{n_{k}}\rightharpoonup A(\overline {t})\) and Lemma 3, we have \(A(\overline {t})\in \text {Sol}(Q, F_{2})\). □
When F1 = F2 = 0, we have the following corollary from Algorithm 1 and Theorem 1.
Corollary 1
Let C and Q be two nonempty closed convex subsets of real Hilbert spaces \({\mathcal{H}}_{1}\) and \({\mathcal{H}}_{2}\), respectively. Suppose that the positive sequences {αn}, {δn} satisfy the following conditions
Let {xn} be the sequence generated by\(x^{0}\in {\mathcal{H}}_{1}\)and
Then the sequence {xn} converges strongly to an element x∗∈Γ, where x∗ = PΓ(x0), provided the solution set Γ = {x∗∈ C : Ax∗∈ Q} of the SFP is nonempty.
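To illustrate Corollary 1, the following is a minimal Python sketch, assuming the F1 = F2 = 0 specialization of Algorithm 1 takes the Halpern-type form \(x^{n+1}=\alpha_{n}x^{0}+(1-\alpha_{n})P_{C}\big(x^{n}+\delta_{n}A^{\ast}(P_{Q}(Ax^{n})-Ax^{n})\big)\); the sets C and Q below are illustrative choices (not those of Section 4), while the operator A and the parameter sequences follow Section 4.

```python
# Sketch of the SFP specialization (F1 = F2 = 0), under the assumption stated above.
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])   # the operator A of Section 4, ||A|| = sqrt(3)

def proj_C(x):
    # C = nonnegative orthant of R^4 (an illustrative feasible set)
    return np.maximum(x, 0.0)

def proj_Q(u):
    # Q = closed unit ball of R^2 (an illustrative feasible set)
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

x0 = np.array([-1.0, 1.0, 2.0, -3.0])
x = x0.copy()
for n in range(20000):
    alpha = 1.0 / (n + 2)               # {alpha_n} as chosen in Section 4
    delta = (n + 1) / (6 * n + 8)       # {delta_n} in (0, 1/(||A||^2 + 1)) = (0, 1/4)
    u = A @ x
    t = x + delta * A.T @ (proj_Q(u) - u)
    x = alpha * x0 + (1 - alpha) * proj_C(t)

# x is now approximately feasible: x close to C and A x close to Q
print(np.round(x, 4))
```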
4 Numerical Results
Let \({\mathcal{H}}_{1}=\mathbb {R}^{4}\) with the norm \(\|x\|=({x_{1}^{2}}+{x_{2}^{2}}+{x_{3}^{2}}+{x_{4}^{2}})^{\frac {1}{2}}\) for \(x=(x_{1}, x_{2}, x_{3}, x_{4})^{T}\in \mathbb {R}^{4}\), and let \({\mathcal{H}}_{2}=\mathbb {R}^{2}\) with the standard norm \(\|y\|=({y_{1}^{2}}+{y_{2}^{2}})^{\frac {1}{2}}\). Let A(x) = (x1 + x3 + x4, x2 + x3 − x4)T for all \(x=(x_{1}, x_{2}, x_{3}, x_{4})^{T}\in \mathbb {R}^{4}\); then A is a bounded linear operator from \(\mathbb {R}^{4}\) into \(\mathbb {R}^{2}\) with \(\|A\|=\sqrt {3}\). For \(y=(y_{1}, y_{2})^{T}\in \mathbb {R}^{2}\), let B(y) = (y1, y2, y1 + y2, y1 − y2)T; then B is a bounded linear operator from \(\mathbb {R}^{2}\) into \(\mathbb {R}^{4}\) with \(\|B\|=\sqrt {3}\). Moreover, for any \(x=(x_{1}, x_{2}, x_{3}, x_{4})^{T}\in \mathbb {R}^{4}\) and \(y=(y_{1}, y_{2})^{T}\in \mathbb {R}^{2}\), 〈A(x), y〉 = 〈x, B(y)〉, so B = A∗ is the adjoint operator of A.
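These facts about A and its adjoint can be verified numerically; the following snippet (a sanity check, not part of the original paper) does so with NumPy.

```python
import numpy as np

# Matrix of A(x) = (x1 + x3 + x4, x2 + x3 - x4)^T
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])
# Matrix of B(y) = (y1, y2, y1 + y2, y1 - y2)^T
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])

assert np.array_equal(B, A.T)                        # B = A*
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(3))  # ||A|| = sqrt(3)
assert np.isclose(np.linalg.norm(B, 2), np.sqrt(3))  # ||B|| = ||A||

# <A x, y> = <x, B y> for random x, y
rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=2)
assert np.isclose((A @ x) @ y, x @ (B @ y))
```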
Let
and define a mapping \(F_{1}: \mathbb {R}^{4}\to \mathbb {R}^{4}\) by \(F_{1}(x)=(\sin \limits \|x\|+2)a^{0}\) for all \(x\in \mathbb {R}^{4}\), where \(a^{0}=(1,-1,-1, 2)^{T}\in \mathbb {R}^{4}\). Since \(\sin \|x\|+2>0\) for every x, the condition \(\langle F_{1}(y), x-y\rangle \geq 0\) is equivalent to \(\langle a^{0}, x-y\rangle \geq 0\), which in turn implies \(\langle F_{1}(x), x-y\rangle =(\sin \|x\|+2)\langle a^{0}, x-y\rangle \geq 0\); hence F1 is pseudomonotone on \(\mathbb {R}^{4}\).
Furthermore, for all \(x, y\in \mathbb {R}^{4}\), we have
$$ \|F_{1}(x)-F_{1}(y)\| = |\sin\|x\|-\sin\|y\||\,\|a^{0}\| \leq \|a^{0}\|\,\big|\|x\|-\|y\|\big| \leq \sqrt{7}\,\|x-y\|, $$
since \(\|a^{0}\|=\sqrt{7}\) and \(|\sin s-\sin t|\leq |s-t|\). So F1 is \(\sqrt {7}\)-Lipschitz continuous on \(\mathbb {R}^{4}\).
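As a numerical spot check (illustrative only, not part of the paper), the Lipschitz bound can be tested on random samples:

```python
import numpy as np

a0 = np.array([1.0, -1.0, -1.0, 2.0])   # ||a0|| = sqrt(7)

def F1(x):
    return (np.sin(np.linalg.norm(x)) + 2.0) * a0

L = np.sqrt(7)                          # claimed Lipschitz constant
rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    # |sin||x|| - sin||y||| <= | ||x|| - ||y|| | <= ||x - y||, so
    # ||F1(x) - F1(y)|| <= sqrt(7) ||x - y||
    assert np.linalg.norm(F1(x) - F1(y)) <= L * np.linalg.norm(x - y) + 1e-12
```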
It is easy to see that the solution set Sol(C, F1) of VIP(C, F1) is given by
Now let \(Q=\{(u_{1}, u_{2})^{T}\in \mathbb {R}^{2}: 2u_{1}-3u_{2}\geq -4\}\) and define another mapping \(F_{2}: \mathbb {R}^{2}\to \mathbb {R}^{2}\) by \(F_{2}(u)=(\sin \limits \|u\|+3)b^{0}\) for all \(u\in \mathbb {R}^{2}\), where \(b^{0}=(2,-3)^{T}\in \mathbb {R}^{2}\). Similarly, F2 is pseudomonotone on \(\mathbb {R}^{2}\), \(\sqrt {13}\)-Lipschitz continuous on \(\mathbb {R}^{2}\) and that the solution set Sol(Q, F2) of VIP(Q, F2) is given by
The solution set Ω of the SVIP is given by
Select the starting point x0 = (− 1,1,2,− 3)T for Algorithm 1. We choose μ = 0.7, μ0 = 1, λ = 0.4, λ0 = 2, \(\alpha _{n}=\frac {1}{n+2}\), \(\delta _{n}=\frac {n+1}{6n+8}\). An elementary computation shows that {αn}⊂ (0,1), \(\lim _{n\to \infty }\alpha _{n}=0\), \({\sum }_{n=0}^{\infty }\alpha _{n}=\infty \), \(\{\delta _{n}\}\subset \left [\frac {1}{8}, \frac {1}{6}\right ]\subset \left (0, \frac {1}{4}\right )=\left (0, \frac {1}{\|A\|^{2}+1}\right )\).
Suppose x = (2a − b + 1, a + b + 2, a, b)T ∈Ω then
The above equality holds if and only if 3b − a + 2 = 0 and \(a=-\frac {11}{17}\). So, we obtain \(a=-\frac {11}{17}\), \(b=-\frac {15}{17}\).
Therefore
With \(\varepsilon = 10^{-9}\), the approximate solution obtained after 225081 iterations (with an elapsed time of 118.6242 seconds) is
which is a good approximation to \(x^{\ast }=\left (\frac {10}{17}, \frac {8}{17}, -\frac {11}{17}, -\frac {15}{17}\right )^{T}\).
Table 1 presents the numerical results of Algorithm 1 for different tolerances. From the preliminary numerical results reported in the table, we observe that the running time of Algorithm 1 depends strongly on the tolerance.
We perform the iterative schemes in MATLAB R2018a running on a laptop with Intel(R) Core(TM) i5-3230M CPU @ 2.60GHz, 4 GB RAM.
5 Conclusion
In this paper, we have proposed an iterative algorithm for solving the split variational inequality problem involving Lipschitz continuous pseudomonotone mappings. The convergence of the algorithm is established without prior knowledge of the Lipschitz constants of the cost operators. The strong convergence of the iterative sequence generated by the proposed algorithm to a solution of the SVIP is obtained. When applied to the well-known SFP, our method reduces to a strongly convergent algorithm which requires only two projections at each iteration step.
References
Cegielski, A.: General method for solving the split common fixed point problem. J. Optim. Theory Appl. 165, 385–404 (2015)
Cegielski, A., Al-Musallam, F.: Strong convergence of a hybrid steepest descent method for the split common fixed point problem. Optimization 65, 1463–1476 (2016)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Prob. 21, 2071–2084 (2005)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827–845 (2011)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2012)
Censor, Y., Segal, A.: Iterative projection methods in biomedical inverse problems. In: Censor, Y., Jiang, M., Louis, A.K. (eds.) Mathematical Methods in Biomedical Imaging and Intensity-Modulated Therapy (IMRT), Pisa, Italy, 2007, pp. 65–96. Scuola Normale Superiore Edizioni Della Normale (2008)
Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
Eslamian, M., Eslamian, P.: Strong convergence of a split common fixed point problem. Numer. Funct. Anal. Optim. 37, 1248–1266 (2016)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Konnov, I.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Kraikaew, R., Saejung, S.: On split common fixed point problems. J. Math. Anal. Appl. 415, 513–524 (2014)
Maingé, P.-E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Masad, E., Reich, S.: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367–371 (2007)
Quy, N.V., Muu, L.D.: On existence and solution methods for strongly pseudomonotone equilibrium problems. Vietnam J. Math. 43, 229–238 (2015)
Shehu, Y.: New convergence theorems for split common fixed point problems in Hilbert spaces. J. Nonlinear Convex Anal. 16, 167–181 (2015)
Acknowledgements
The work of the third author was supported by the Posts and Telecommunications Institute of Technology (PTIT), Hanoi, Vietnam.
The authors would like to thank the two referees for their valuable remarks and comments which helped to improve the original version of this paper.
Van Huy, P., Hien, N.D. & Anh, T.V. A Strongly Convergent Modified Halpern Subgradient Extragradient Method for Solving the Split Variational Inequality Problem. Vietnam J. Math. 48, 187–204 (2020). https://doi.org/10.1007/s10013-019-00378-y
Keywords
- Split variational inequality problem
- Split feasibility problem
- Halpern subgradient extragradient method
- Strong convergence
- Pseudomonotone mapping