Abstract
In this paper, based on the forward-backward method and inertial techniques, we introduce a new algorithm for solving a variational inequality problem over the fixed point set of a nonexpansive mapping. The strong convergence of the algorithm is established under strong monotonicity and Lipschitz continuity assumptions imposed on the cost mapping. As an application, we apply and analyze our algorithm for solving a convex minimization problem of the sum of two convex functions.
1 Introduction
Let \({\mathscr{H}}\) be a real Hilbert space with an inner product 〈⋅,⋅〉 and the induced norm ∥⋅∥. Denote weak and strong convergence of a sequence \(\{x_{n}\} \subset {\mathscr{H}}\) to \(x \in {\mathscr{H}}\) by \( x_{n} \rightharpoonup x\) and \(x_{n} \rightarrow x\), respectively. Let C be a nonempty closed convex subset of \({\mathscr{H}}\), and \(F: {\mathscr{H}}\to {\mathscr{H}}\) be a cost mapping. The variational inequality problem VI(C, F) is to find a point x∗∈ C such that
$$\langle F(x^{*}), x-x^{*}\rangle \geq 0, \quad \forall x \in C.$$
The problem VI(C, F) was first introduced by Kinderlehrer and Stampacchia in [12]. It is an important problem with a variety of theoretical and practical applications [16, 17]. Many efficient algorithms have been developed for solving this problem; some popular methods can be found, for instance, in [13].
Let \(S : {\mathscr{H}} \rightarrow {\mathscr{H}}\) be an operator. A fixed point of S is a point in \({\mathscr{H}}\) which is mapped to itself by S, and the set of all fixed points of S is denoted by
$$\text{Fix}(S):=\{x\in {\mathscr{H}}: Sx=x\}.$$
In this paper, we consider the variational inequality problem with fixed point constraints VIF(F, S), which consists of finding \(x^{*}\in \text{Fix}(S)\) such that
$$\langle F(x^{*}), x-x^{*}\rangle \geq 0, \quad \forall x \in \text{Fix}(S).$$
In the case where S is the identity mapping, the problem VIF(F, S) reduces to the problem VI(C, F). In the case F = 0, it takes the form of the lexicographic variational inequality problem when Sx := PrC[x − λG(x)], where \(\lambda >0, G:{\mathscr{H}}\to {\mathscr{H}}\), and PrC is the metric projection onto C. Many other problems can be formulated in the form of the problem VIF(F, S) [10, 11]. There is increasing interest in studying solution algorithms for monotone classes of the problem VIF(F, S), such as parallel subgradient methods [2], extragradient methods [4], subgradient extragradient methods [3], the Krasnoselski–Mann iteration method [18], hybrid steepest descent methods [24, 25], and others [18, 22, 23].
Let \( f: {\mathscr{H}}\to \mathbb {R} \) be a convex and differentiable function with a \(\mathcal L\)-coercive gradient ∇f, i.e., \(\langle x-y, \nabla f(x)-\nabla f(y)\rangle \geq \mathcal L\|\nabla f(x)-\nabla f(y)\|^{2}\ \forall x,y\in {\mathscr{H}}\), and let \(g:{\mathscr{H}}\to \mathbb {R}\) be a proper lower semicontinuous and convex function. Consider the convex problem
It can be shown that x∗ is a solution of the problem (1.1) if and only if it satisfies the fixed point equation
$$x^{*}=\text{prox}_{cg}[x^{*}-c\nabla f(x^{*})]=\text{prox}_{cg}(I-c\nabla f)(x^{*}),$$
where c > 0, I is the identity operator, and the proximal operator of a function f is defined by \(\text {prox}_{f}(x)=\text {argmin}\{f(y)+\frac 1 2\|y-x\|^{2}: y\in {\mathscr{H}}\}\). For solving the problem (1.1), the fixed point equation leads to the following iteration:
$$x^{k+1}=\text{prox}_{\varepsilon_{k} g}[x^{k}-\varepsilon_{k}\nabla f(x^{k})], \quad k\geq 0.$$
Under the condition \(\varepsilon _{k}\in \left (0,\frac 2 {\mathcal L}\right )\), Nakajo et al. [19] showed that {xk} converges strongly to a solution of the problem (1.1). The iteration (1.2) is known as the forward-backward splitting method. By combining inertial techniques with the forward-backward splitting method, Beck and Teboulle [6] proposed the fast iterative shrinkage-thresholding algorithm (FISTA) for solving the problem (1.1) as follows:
where \(t_{k+1}=\frac {1+\sqrt {1+4{t_{k}^{2}}}} 2, \theta _{k}=\frac {t_{k}-1}{t_{k+1}}, x^{0}\in {\mathscr{H}}\) and t0 = 1. A rate of convergence was established, and the method was applied to image restoration problems. In recent years, many authors have modified forward-backward and inertial methods for solving other split-type problems, such as a parallel inertial S-iteration forward-backward algorithm [8] for regression and classification problems, inertial hybrid projection-proximal point algorithms [1] for maximal monotone operators, inertial forward-backward algorithms [7] for the minimization of the sum of two nonconvex functions, an inertial proximal method [9] for solving Ky Fan minimax inequalities, an inertial forward-backward algorithm [15] for monotone inclusions, and others (see [20, 21] and the references therein).
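As a brief illustration, the forward-backward iteration and the FISTA momentum rule quoted above can be sketched on the lasso problem \(\min \frac 1 2\|Ax-b\|^{2}+\lambda \|x\|_{1}\). This is our own example (the function names and the choice g = ∥⋅∥1 are assumptions, not from the paper):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    # Sketch of FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1, using the
    # momentum rule t_{k+1} = (1 + sqrt(1 + 4 t_k^2))/2 and
    # theta_k = (t_k - 1)/t_{k+1} quoted in the text.
    Lip = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        # Forward (gradient) step on the smooth part, backward (prox) step on g.
        x_new = soft_threshold(y - (A.T @ (A @ y - b)) / Lip, lam / Lip)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        theta = (t - 1.0) / t_new          # inertial parameter theta_k
        y = x_new + theta * (x_new - x)    # extrapolation (inertial) step
        x, t = x_new, t_new
    return x

# Tiny usage example on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x = fista(A, b, lam=0.5)
print(x.shape)  # (10,)
```

The returned point is (approximately) a fixed point of the forward-backward operator, which is exactly the characterization of a solution discussed above.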
It is worth noting from the above review that the convex minimization problem (1.1) is closely related to a fixed point problem. Moreover, the forward-backward operator S := proxεg(I − ε∇f) is nonexpansive when \(0<\varepsilon < \frac 2{\mathcal L}\), i.e., \(\|Sx-Sy\|\leq \|x-y\|,~ \forall x,y\in {\mathscr{H}}\). Hence, the study of fixed point problems for the class of nonexpansive operators plays an important role in designing optimization methods.
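The nonexpansiveness of the forward-backward operator can also be observed numerically. In this small sketch (our construction, not from the paper) we take f(x) = ½∥Ax − b∥² and g = λ∥⋅∥1, and pick ε inside the admissible range determined by the Lipschitz constant of ∇f:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 8))
b = rng.standard_normal(15)
lam = 0.3
Lip = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad f
eps = 1.9 / Lip                    # inside the admissible range

def S(x):
    # Forward-backward operator: gradient step on f, then prox of eps*lam*||.||_1.
    v = x - eps * (A.T @ (A @ x - b))
    return np.sign(v) * np.maximum(np.abs(v) - eps * lam, 0.0)

# Empirical check of ||Sx - Sy|| <= ||x - y|| on random pairs.
max_ratio = 0.0
for _ in range(100):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    max_ratio = max(max_ratio, np.linalg.norm(S(x) - S(y)) / np.linalg.norm(x - y))
print(max_ratio <= 1.0)  # True
```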
The purpose of this paper is to propose a new iteration algorithm, built on the forward-backward iteration scheme (1.3) and inertial techniques, for solving the problem VIF(F, S), where the cost mapping F is strongly monotone and Lipschitz continuous on \({\mathscr{H}}\). Furthermore, we prove strong convergence of the proposed algorithm under suitable conditions on the parameters. Subsequently, we apply the proposed algorithm to solving a convex unconstrained minimization problem of the sum of two convex functions by using the nonexpansiveness of the forward-backward operator S.
The paper is organized as follows. In Section 2, we present some definitions and lemmas which will be used in the paper. Section 3 deals with a new inertial forward-backward algorithm for solving variational inequalities over the fixed point set of a nonexpansive mapping, VIF(F, S), and the proof of its strong convergence in a real Hilbert space \({\mathscr{H}}\). As an application of our proposed algorithm, Section 4 is devoted to solving a convex unconstrained minimization problem of the sum of two convex functions in \({\mathscr{H}}\).
2 Preliminaries
Denote weak and strong convergence of a sequence \(\{x^{n}\} \subset {\mathscr{H}}\) to \(x \in {\mathscr{H}}\) by \( x^{n} \rightharpoonup x\) and \(x^{n} \rightarrow x\), respectively.
We recall that a mapping \(S: {\mathscr{H}}\to {\mathscr{H}}\) is said to be:

- Strongly monotone with constant β > 0 (shortly β-strongly monotone), if
$$ \langle S(x)-S(y), x-y\rangle \geq \beta\|y-x\|^{2},\quad \forall x,y\in\mathcal{H}; $$
- Lipschitz continuous with constant L > 0 (shortly L-Lipschitz continuous), if
$$\|Sx - Sy\| \leq L \|x - y\|, \quad\forall x, y \in \mathcal{H};$$
- A contraction, if S is L-Lipschitz continuous with L < 1;
- Nonexpansive, if S is 1-Lipschitz continuous.
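These definitions can be checked numerically for a simple affine mapping. In the sketch below (our example, not from the paper), F(x) = Ax + b with a symmetric positive definite A is β-strongly monotone with β the smallest eigenvalue of A, and L-Lipschitz with L the spectral norm of A:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + 0.5 * np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)

def F(x):
    # Affine mapping; its monotonicity/Lipschitz constants come from A.
    return A @ x + b

beta = np.linalg.eigvalsh(A).min() # strong monotonicity constant (>= 0.5)
L = np.linalg.norm(A, 2)           # Lipschitz constant

for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    d = x - y
    assert (F(x) - F(y)) @ d >= beta * d @ d - 1e-9                     # beta-strong monotonicity
    assert np.linalg.norm(F(x) - F(y)) <= L * np.linalg.norm(d) + 1e-9  # L-Lipschitz
print("definitions verified on 100 random pairs")
```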
For each \(x\in {\mathscr{H}}\), there exists a unique point in C, denoted by PrC(x), satisfying
$$\|x-\text{Pr}_{C}(x)\|\leq \|x-y\|, \quad \forall y\in C.$$
The mapping PrC is usually called the metric projection of \({\mathscr{H}}\) onto C. An important property of PrC is that it is nonexpansive on \({\mathscr{H}}\).
Given a function \(g: {\mathscr{H}}\to \mathbb{R}\cup\{+\infty\}\), the proximal mapping of g on C is the mapping given by
$$\text{prox}(g,C)(x):=\text{argmin}\left\{g(y)+\tfrac 1 2\|y-x\|^{2}: y\in C\right\}.$$
For any \(x \in {\mathscr{H}}\), the following three claims in [5] are equivalent:

(a) u = prox(g, C)(x);

(b) \(x-u \in \partial g(u):=\{w\in {\mathscr{H}}: g(y)-g(u)\geq \langle w,y-u\rangle ,~ \forall y\in {\mathscr{H}}\}\);

(c) 〈x − u, y − u〉≤ g(y) − g(u), ∀y ∈ C.
Moreover, the proximal mapping of g on C is (firmly) nonexpansive and
Note that if g is the indicator function of C (defined by δC(x) = 0 if x ∈ C and \(\delta _{C}(x)=+\infty \) otherwise), then prox(g, C) = PrC.
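As a concrete illustration (our example, not from the paper): for g = t∥⋅∥1 the proximal mapping is componentwise soft thresholding, while for g the indicator of a box the proximal mapping reduces, as just noted, to the metric projection, i.e., componentwise clipping:

```python
import numpy as np

def prox_l1(x, t):
    # prox of t*||.||_1: componentwise soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def project_box(x, lo, hi):
    # prox of the indicator of the box [lo, hi]^n is the metric projection,
    # which here is simply componentwise clipping.
    return np.clip(x, lo, hi)

x = np.array([2.0, -0.3, 0.8])
print(prox_l1(x, 0.5))             # values: 1.5, -0.0, 0.3
print(project_box(x, -1.0, 1.0))   # values: 1.0, -0.3, 0.8
```

One can also verify characterization (c) numerically: with u = prox of x, the inequality 〈x − u, y − u〉 ≤ g(y) − g(u) holds for arbitrary test points y.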
Now we recall the following lemmas which are useful tools for proving our convergence results.
Lemma 2.1
[20, Lemma 2.6] Let {sk} be a sequence of nonnegative real numbers and {pk} a sequence of real numbers. Let {αk} be a sequence of real numbers in (0,1) such that \( {\sum }_{k=1}^{\infty }\alpha _{k}=\infty \). Assume that
$$s_{k+1}\leq (1-\alpha_{k})s_{k}+\alpha_{k} p_{k}, \quad \forall k\geq 1.$$
If \( \limsup _{i\to \infty }p_{k_{i}}\leq 0 \) for every subsequence \( \{s_{k_{i}}\} \) of {sk} satisfying
$$\liminf_{i\to\infty}\left(s_{k_{i}+1}-s_{k_{i}}\right)\geq 0,$$
then \( \lim _{k\to \infty }s_{k}=0. \)
Lemma 2.2
[14, Demiclosedness principle] Assume that S is a nonexpansive self-mapping of a nonempty closed convex subset C of a real Hilbert space \({\mathscr{H}}\). If Fix(S)≠∅, then I − S is demiclosed; that is, whenever {xk} is a sequence in C converging weakly to some \(\bar x\in C\) and the sequence {(I − S)(xk)} converges strongly to some \(\bar y\), it follows that \((I-S)(\bar x)=\bar y\). Here I is the identity operator of \({\mathscr{H}}\).
3 Algorithm and Its Convergence
For solving the variational inequality over the fixed point set VIF(F, S), we assume that the mappings F and S and the parameters satisfy the following conditions.
(A1) F is β-strongly monotone and L-Lipschitz continuous with β > 0 and L > 0;

(A2) S is nonexpansive;

(A3) the solution set of the problem VIF(F, S) is nonempty;

(A4) for every k ≥ 0, the positive parameters βk, γk, τk, λk and {μk} satisfy the following restrictions:
Algorithm 3.1 (Hybrid inertial contraction algorithm)
Initialization: Take \( x^{0}, x^{1} \in {\mathscr{H}}\) arbitrarily.
Iterative steps: k = 1,2,…
Step 1. Compute an inertial parameter
Step 2. Compute
Step 3. Set k := k + 1 and return to Step 1.
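Since the displayed update formulas of Algorithm 3.1 are not reproduced above, the following is only a rough illustration of the ingredients the algorithm combines: an inertial extrapolation, a step of the nonexpansive mapping S, and a vanishing steepest-descent step along F. The toy instance and the inertial rule are our own choices, not the paper's exact scheme:

```python
import numpy as np

# Toy instance (our choices): S = projection onto the closed unit ball, so
# Fix(S) is the ball; F(x) = x - b is 1-strongly monotone and 1-Lipschitz.
# The unique solution of VIF(F, S) is then the projection of b onto the ball.
b = np.array([2.0, 1.0])

def S(x):
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def F(x):
    return x - b

x_prev = np.zeros(2)
x = np.array([0.5, -0.5])
for k in range(1, 5001):
    lam = 1.0 / (k + 1)                          # lam_k -> 0, sum lam_k = infinity
    diff = np.linalg.norm(x - x_prev)
    theta = min(0.3, 1.0 / (k * k * diff)) if diff > 0 else 0.0
    w = x + theta * (x - x_prev)                 # inertial extrapolation
    y = S(w)                                     # nonexpansive-mapping step
    x_prev, x = x, y - lam * F(y)                # vanishing descent step along F
print(x)  # close to b/||b|| = [0.894..., 0.447...]
```

The inertial weight is damped so that θk∥xk − xk−1∥ is summable, which is the usual device for keeping the inertial perturbation harmless.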
A strong convergence result is established in the following theorem.
Theorem 3.2
Assume that the assumptions (A1)–(A4) are satisfied. Then, the sequence {xk} generated by Algorithm 3.1 converges strongly to the unique solution x∗ of the problem VIF(F, S).
Proof
Since F is β-strongly monotone and L-Lipschitz continuous on \({\mathscr{H}}\), for each λk > 0, we have
Since F is strongly monotone, the problem VIF(F, S) has a unique solution, and a point x∗∈Fix(S) is a solution of the problem if and only if x∗ = PrFix(S)[x∗− λkF(x∗)]. From the schemes (3.2) and (3.3), we get
where \(\delta _{k}:=\sqrt {1-2\lambda _{k}\beta +{\lambda _{k}^{2}} L^{2}}\in (0,1-a)\). Combining this and (3.1), we obtain
By using Step 1 and the conditions (3.1), we deduce
This implies \(M= \sup _{k} \left \{\frac {\theta _{k} }{a\beta _{k} \gamma _{k} }\|x^{k} -x^{k-1}\|+\frac {2\beta \|F(x^{*})\|}{aL^{2}}\right \}<+\infty \). Then,
By mathematical induction, we deduce that
So, {xk} is bounded. From (3.2), it follows that \(\|w^{k}-x^{k}\|=\theta _{k}\|x^{k}-x^{k-1}\|<+\infty \). By using (3.4), we also deduce that both {zk} and {wk} are bounded. By (3.3) and the relation
we get
From wk = xk + θk(xk − xk− 1), it follows that
Combining (3.5) and (3.6), we have
where
It follows that
where \(\sigma := \sup _{k} \sigma _{k}\in (0,\infty )\). Now we apply Lemma 2.1 with \(s_{k}:=\|x^{k} -x^{*}\|^{2}, \alpha _{k}:=\beta _{k}\gamma _{k}(1-{\delta _{k}^{2}})\in (0,1)\) and pk := σk. By (3.7), we have
Assume that \( \left \{s_{k_{i}}\right \}\) is a subsequence of {sk} such that
Combining this, (3.7), and (3.1), we obtain
Consequently,
From the scheme (3.2), it follows that
and hence
Then, using \(\lim _{k\to \infty }\gamma _{k}=0\) and the boundedness of {wk}, we get
From (3.8) and (3.9), we obtain
We next show that \( \limsup _{i\to \infty }p_{k_{i}}\leq 0\). By the condition (3.1), we have
Since \(\lambda _{k}\in \left (\frac {\beta }{L^{2}}, \frac {2\beta }{L^{2}}\right )\) and the sequences {xk} and {μk} are bounded, it suffices to show that
Since {zk} is bounded, we can assume that there exists a subsequence \(\{\bar z^{k_{i}}\}\) of \(\{z^{k_{i}}\}\) such that \(\bar z^{k_{i}}\rightharpoonup \bar x\) and
Applying Lemma 2.2 for the nonexpansive mapping S with (3.10), we deduce that \( \bar x\in \text {Fix}(S)\). Thus
By Lemma 2.1, we can conclude that \(x^{k} \rightarrow x^{*}\) as \(k \rightarrow \infty \). The proof is complete. □
4 Application to Convex Problems
In this section, we consider the minimization problem (1.1) in the form of the sum of two convex functions in \({\mathscr{H}}\). Let \(g:{\mathscr{H}}\to \mathbb {R}\cup \{+\infty \} \) be proper lower semi-continuous and convex. The proximal operator of g on C, in short proxg, is formulated as follows:
It is well known [5] that proxg is nonexpansive on \({\mathscr{H}}\), i.e., ∥proxg(y1) −proxg(y2)∥≤∥y1 − y2∥ for all \(y_{1},y_{2}\in {\mathscr{H}}\).
In this situation, we put the following assumptions:
(B1) \(f: {\mathscr{H}}\to \mathbb {R} \) is convex and differentiable, and its gradient ∇f is \(\mathcal L\)-coercive, i.e., \(\langle \nabla f(x)-\nabla f(y),x-y\rangle \geq \mathcal L\|\nabla f(x)-\nabla f(y)\|^{2}\) for all \(x,y\in {\mathscr{H}}\);

(B2) \(g:{\mathscr{H}}\to \mathbb {R}\cup \{+\infty \} \) is proper lower semicontinuous and convex;

(B3) the solution set Ω of (1.1) is nonempty;

(B4) \( F : {\mathscr{H}} \to {\mathscr{H}}\) is β-strongly monotone and L-Lipschitz continuous.
By utilizing Algorithm 3.1, we obtain the following algorithm for solving the problem (1.1).
Algorithm 4.1
Initialization: Take \(\varepsilon \in \left (0,\frac {2}{\mathcal L}\right )\) and two points \( x^{0}, x^{1} \in {\mathscr{H}}\) arbitrarily.
Iterative steps: k = 1, 2, …
Step 1. Compute an inertial parameter
Step 2. Compute
Step 3. Set k := k + 1 and return to Step 1.
A strong convergence result is established in the following theorem.
Theorem 4.2
Assume that the assumptions (B1)–(B4) are satisfied. Under the conditions (3.1) and \(\varepsilon \in \left (0, \frac {2}{\mathcal L}\right )\), the sequence {xk} generated by Algorithm 4.1 converges strongly to a solution x∗ of the convex problem (1.1). Moreover, x∗ = PrΩ[x∗− εF(x∗)].
Proof
For each \(x\in {\mathscr{H}}\), set S(x) = proxεg[x − ε∇f(x)]. For all \(x,y\in {\mathscr{H}}\), since proxεg is nonexpansive and by assumption (B1), we have
where the last inequality follows from \(\varepsilon \in \left (0, \frac {2}{\mathcal L}\right )\). Thus, S is nonexpansive on \({\mathscr{H}}\), and the convergence results follow from Theorem 3.2 applied to the nonexpansive mapping S and the cost mapping F. □
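As a small numeric illustration of the characterization x∗ = PrΩ[x∗− εF(x∗)] (our construction, not from the paper): with g = 0 and f(x) = ½∥Ax − b∥² for a wide matrix A with orthonormal rows, Fix(S) = Ω is the set of least-squares solutions, and choosing F = I makes the unique solution of VIF(F, S) the minimum-norm solution A⁺b:

```python
import numpy as np

rng = np.random.default_rng(4)
# Wide matrix with orthonormal rows, so grad f has Lipschitz constant 1.
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))
A = Q.T                                # 3 x 6, A @ A.T = I
b = rng.standard_normal(3)
eps = 1.0                              # admissible step size since Lip = 1

def S(x):
    # Forward-backward operator with g = 0: a plain gradient step on f.
    return x - eps * (A.T @ (A @ x - b))

x = np.zeros(6)
for k in range(1, 20001):
    lam = 1.0 / (k + 1)                # lam_k -> 0, sum lam_k = infinity
    x = (1.0 - lam) * S(x)             # S(x^k) - lam_k * F(S(x^k)) with F = I
x_star = np.linalg.pinv(A) @ b         # minimum-norm least-squares solution
print(np.linalg.norm(x - x_star))      # small
```

Here the iteration (inertia omitted for brevity) selects, among all minimizers of f + g, the one singled out by the variational inequality, namely the minimum-norm solution.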
References
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14, 773–782 (2004)
Anh, P.N., Hien, N.D., Phuong, N.X., Ngoc, V.T.: Parallel subgradient methods for variational inequalities involving nonexpansive mappings. Appl. Anal. https://doi.org/10.1080/00036811.2019.1584288 (2019)
Anh, P.N., Le Thi, H.A.: New subgradient extragradient methods for solving monotone bilevel equilibrium problems. Optim. 68(1), 2097–2122 (2019)
Anh, P.N., Kim, J.K., Muu, L.D.: An extragradient method for solving bilevel variational inequalities. J. Glob. Optim. 52, 627–639 (2012)
Beck, A.: First-order Methods in Optimization. MOS-SIAM Series on Optimization, Chapter 6 (2017)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sc. 2, 183–202 (2009)
Bot, R.I., Csetnek, E.R., Laszlo, S.C.: An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO J. Comput. Optim. 4, 3–25 (2016)
Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 36, 35–44 (2020)
Chbani, Z., Riahi, H.: Weak and strong convergence of an inertial proximal method for solving Ky Fan minimax inequalities. Optim. Lett. 7, 185–206 (2013)
Duc, P.M., Muu, L.D.: A splitting algorithm for a class of bilevel equilibrium problems involving nonexpansive mappings. Optim. 65(10), 1855–1866 (2016)
Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optim. 58, 251–261 (2009)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press (1980)
Konnov, I. V.: Combined Relaxation Methods for Variational Inequalities. Springer-Verlag, Berlin (2000)
Liu, F., Nashed, M.Z.: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 6, 313–344 (1998)
Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
Londono, G., Lozano, A.: A bilevel optimization program with equilibrium constraints for an urban network dependent on time. Transp. Res. Proc. 3, 905–914 (2014)
Marcotte, P.: Network design problem with congestion effects: A case of bilevel programming. Math. Progr. 34(2), 142–162 (1986)
Moudafi, A.: Krasnoselski–Mann iteration for hierarchical fixed-point problems. Inverse Prob. 23, 1635–1640 (2007)
Nakajo, K., Shimoji, K., Takahashi, W.: On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonl. Anal. Theory, Meth. Appl. 71(1–2), 112–119 (2009)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonl. Anal. 75, 724–750 (2012)
Solodov, M.: An explicit descent method for bilevel convex optimization. J. Conv. Anal. 14, 227–237 (2007)
Xu, M. H., Li, M., Yang, C.C.: Neural networks for a class of bilevel variational inequalities. J. Glob. Optim. 44, 535–552 (2009)
Yao, Y., Marino, G., Muglia, L.: A modified Korpelevich's method convergent to the minimum-norm solution of a variational inequality. Optim. 63, 559–569 (2014)
Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. Stud. Comput. Math. 8, 473–504 (2001)
Yamada, I., Ogura, N.: Hybrid steepest descent method for the variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 25, 619–655 (2004)
Acknowledgements
We are very grateful to the anonymous referees for their helpful and constructive comments, which greatly improved the paper.
Funding
This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 101.02-2019.303.
Anh, P.N. Hybrid Inertial Contraction Algorithms for Solving Variational Inequalities with Fixed Point Constraints in Hilbert Spaces. Acta Math Vietnam 47, 743–753 (2022). https://doi.org/10.1007/s40306-021-00467-6
Keywords
- Variational inequality problem
- Fixed point constraints
- Lipschitz continuous
- Strongly monotone
- Forward-backward method