Abstract
In this paper, building upon subgradient techniques and viscosity-type approximations, we propose a simple projection algorithm for solving the lexicographic Ky Fan inequality in a real Hilbert space, where the lower level is a variational inequality problem. By choosing suitable regularization parameters, strong convergence of the proposed algorithm is established under mild assumptions imposed on the cost bifunction. Some simple numerical examples are given to illustrate the performance of the proposed algorithm.
1 Introduction
Let \(\mathcal {H}\) be a real Hilbert space equipped with an inner product 〈⋅,⋅〉 and its induced norm ∥⋅∥. Let C be a nonempty closed convex subset of \(\mathcal {H}\). In this paper, we consider the lexicographic Ky Fan inequality, shortly L K F(g, F, C), as follows:
where \(g: C\times C\to \mathcal {R}\), Sol(F, C) denotes the set of all solutions of the variational inequality problem:
and \(F: C\to \mathcal {H}\). As usual, the bifunction g and the mapping F are called the cost bifunction and the cost mapping, respectively.
Although Problem L K F(g, F, C) has a simple bilevel form, it covers the following problems as special cases:
-
1.
Ky Fan inequality problem: Let C be a nonempty closed convex subset of \(\mathcal {H}\) and the bifunction \(g: C\times C\to \mathcal {R}\). We consider the Ky Fan inequality problem (see [12]):
$$ \text{Find}~x^{\ast}\in C~\text{such that}~g(x^{\ast}, x)\geq 0\quad \forall x\in C. $$ (1)
Setting F(x) = 0 for all x ∈ C, it is easy to see that Problem (1) coincides with the form L K F(g, F, C).
-
2.
Bilevel variational inequality problem: Let C be a nonempty closed convex subset of \(\mathcal {H}\), \(G: C\to \mathcal {H}\) and \(F: C\to \mathcal {H}\). The following problem is called the bilevel variational inequality problem (see [7, 16]), shortly B V I(G, F, C):
$$\text{Find}~x^{\ast}\in \text{Sol}(F, C)~\text{such that}~\langle G(x^{\ast}), y-x^{\ast}\rangle\geq 0\quad \forall y\in \text{Sol}(F,C). $$
By setting g(x, y) := 〈G(x), y − x〉 for all x, y ∈ C, we can easily see that B V I(G, F, C) is equivalent to L K F(g, F, C).
-
3.
Minimum-norm problem: Let C be a nonempty closed convex subset of \(\mathcal {H}\), \(x^{0}\in \mathcal {H}\) and \(F: C\to \mathcal {H}\). The minimum-norm problem, in [31], shortly M N(F, C), is formulated in the following:
$$\min\{\|x^{0}-x\|: x\in \text{Sol}(F, C)\}. $$
Taking g(x, y) := ∥y − x 0∥−∥x − x 0∥ for all x, y ∈ C, we can see that L K F(g, F, C) collapses into the minimum-norm problem M N(F, C).
Some methods for solving Problem L K F(g, F, C) and its special cases can be found, for instance, in [4, 5, 7, 14,15,16, 25, 28, 30]. The majority of existing methods are based on the proximal method, which consists of solving a regularized Ky Fan inequality: at iteration n, given the current iterate x n, we compute the next iterate x n+1 by solving the following problem:
where \(g: C\times C\to \mathcal {R}\) and r > 0. It is well known that the viscosity method is a fundamental tool for solving variational inequalities in which the constraint set is the fixed point set of a nonexpansive mapping (the hierarchical variational inequality, for short). That is, the problem of finding x ∗∈Fix(T) (the fixed point set of a nonexpansive mapping T : C → C) such that
where V : C → C is nonexpansive and I is the identity mapping. There are two schemes for solving the hierarchical variational inequality in a real Hilbert space, one implicit and one explicit, as follows:
and
where f is a contraction mapping on C, t ∈ (0,1) and {λ k } ⊂ [0, 1].
In [22], Maingé and Moudafi considered the viscosity method for the hierarchical variational inequality as follows:
where f : C → C is a contraction. Under concrete conditions, the authors have shown that the iterative sequence strongly converges to a solution of the hierarchical variational inequality. We note that the method can be regarded as a generalized version of Halpern’s algorithm. In [18], Lu et al. investigated other hybrid viscosity approximation methods for solving the hierarchical variational inequality. By combining viscosity approximation methods with projected subgradient techniques, Maingé in [20] established a strong convergence theorem for optimization with variational inequality constraints in \(\mathcal {H}\). Xu in [29] introduced a viscosity method for hierarchical fixed point approach to variational inequalities. Here, the author showed that the sequence defined by the implicit hierarchical scheme converges strongly in norm to a solution of the hierarchical variational inequality. Recently, the viscosity method has been studied to develop iteration algorithms for variational inequalities, Ky Fan inequalities, and other problems (see [9, 21]). Methods for solving Problem L K F(g, F, C) have also been studied extensively in the literature (see [1,2,3, 17, 23]).
The purpose of this paper is to extend the above viscosity approximation method in [22] to Problem L K F(g, F, C), with suitable modifications and subgradient techniques. In this way, we obtain a strongly convergent algorithm for solving the problem in a real Hilbert space \(\mathcal {H}\). The iterative algorithm is quite simple: at each iteration, we only need to compute the subgradient of a subdifferentiable convex function and the projection of a point onto the constraint set.
This paper is organized as follows. In Section 2, we recall some definitions and lemmas used in the paper. Section 3 deals with proposing and analyzing the convergence of the algorithm. Finally, we present some numerical experiments to illustrate the behavior of the proposed algorithm in Section 4.
2 Preliminaries
In this section, we collect some definitions and lemmas that will be used in the sequel. Let C be a nonempty closed convex subset of \(\mathcal {H}\). For every \(x\in \mathcal H\), there exists a unique element Pr C (x), defined by
It is also known that Pr C is firmly nonexpansive, or 1-inverse strongly monotone, i.e.,
Besides, we recall some other properties of the projection as follows (see [21, Proposition 4.1]):
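As a quick numerical illustration of firm nonexpansiveness (our own sketch; the box C = [−1, 1]^n below is an assumed example, not one taken from the paper), the projection onto a box is a coordinatewise clip, and the inequality 〈Pr C (x) − Pr C (y), x − y〉 ≥ ∥Pr C (x) − Pr C (y)∥² can be checked on random points:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    # Projection onto the box C = [lo, hi]^n: clip each coordinate.
    return np.clip(x, lo, hi)

# Firm nonexpansiveness (1-inverse strong monotonicity):
#   <Pr_C(x) - Pr_C(y), x - y> >= ||Pr_C(x) - Pr_C(y)||^2.
firmly_nonexpansive = True
for _ in range(1000):
    x = rng.normal(size=3) * 3.0
    y = rng.normal(size=3) * 3.0
    px, py = proj_box(x), proj_box(y)
    lhs = np.dot(px - py, x - y)
    rhs = np.dot(px - py, px - py)
    firmly_nonexpansive &= bool(lhs >= rhs - 1e-12)
```

Of course, such a check is only a sanity test on sampled points; the property itself holds for the metric projection onto any nonempty closed convex set.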
A mapping \(F: C\to \mathcal H\) is said to be monotone on C, if
paramonotone on C, if F is monotone on C and
weakly closed on C, if
A bifunction \(g: C\times C\to \mathcal R\) is said to be monotone on C, if
ρ-strongly monotone on C, if
Throughout this paper, we consider the bifunction g, the mapping F and regularization parameters with the following assumptions:
-
(A 1) g is ρ-strongly monotone on C; for each x ∈ C, g(x,⋅) is weakly continuous and convex on C; the mapping \(x\mapsto \partial _{2} g(x,\cdot )(x):=\{w_{x}\in \mathcal H: g(x,y)\geq \langle w_{x}, y-x\rangle ~\forall y\in C\}\) is upper semicontinuous; and g(x, x) = 0 for all x ∈ C.
-
(A 2) \(F: C \to \mathcal H\) is paramonotone and weakly closed on C, and Lipschitz continuous on the domain C.
-
(A 3) The solution set \(\text {Sol}(F, C):=\{\bar x\in C: \langle F(\bar x), y-\bar x\rangle \geq 0\, \forall y \in C\}\) is nonempty.
-
(A 4) μ ∈ (0, ∞) and there exist two nonnegative real sequences {α k } and {λ k } such that
$$\sum\limits_{k=0}^{\infty} \lambda_{k} =\infty,\quad \sum\limits_{k=0}^{\infty} {\lambda_{k}^{2}}<\infty,\quad \lim_{ k\to \infty} \alpha_{k}=0,\quad \sum\limits_{k=0}^{\infty} \alpha_{k}=\infty,\quad \sum\limits_{k=0}^{\infty} \alpha_{k}\lambda_{k}=\infty. $$
It is easy to check that condition (A 4) is satisfied by
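For instance (our own choice, offered as an illustration), the sequences λ k = (k + 1)^{−0.7} and α k = (k + 1)^{−0.3} satisfy (A 4): Σλ k ² is a convergent p-series (p = 1.4), Σλ k and Σα k diverge (p ≤ 1), α k → 0, and α k λ k = 1/(k + 1) gives the divergent harmonic series. A quick numerical check of the corresponding partial sums:

```python
import numpy as np

k = np.arange(200000, dtype=float)
lam = (k + 1.0) ** -0.7     # sum lam_k = inf, sum lam_k^2 < inf
alpha = (k + 1.0) ** -0.3   # alpha_k -> 0, sum alpha_k = inf

sum_lam_sq = float(np.sum(lam ** 2))        # p-series with p = 1.4: bounded by zeta(1.4) ~ 3.11
sum_lam_head = float(np.sum(lam[:1000]))    # partial sum up to 10^3
sum_lam_full = float(np.sum(lam))           # keeps growing, roughly like k^{0.3}
sum_alpha_lam = float(np.sum(alpha * lam))  # alpha_k * lam_k = 1/(k+1): grows like log k
last_alpha = float(alpha[-1])               # alpha_k -> 0
```

The divergent sums keep growing with the truncation length, while the squared sum stays below its p-series bound.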
We recall a series of preliminary results needed for our convergence analysis.
Lemma 1
[8] Let {a n }, {b n }, and {c n } be three sequences of nonnegative real numbers satisfying the inequality
for some integer n 0 ≥ 1, where \( {\sum }_{n=n_{0}}^{\infty }b_{n}<+\infty \) and \({\sum }_{n=n_{0}}^{\infty }c_{n}<+\infty \). Then \(\lim _{n\to \infty }a_{n}\) exists. If in addition {a n } has a subsequence whichconverges to zero, then \(\lim _{n\to \infty }a_{n}=0\).
Lemma 2
[20] Let {b n } be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{b_{n_{j}}\}\) of {b n } which satisfies \(b_{n_{j}} < b_{n_{j}+1}\) for all j ≥ 0. We also consider a sequence of integers {τ n } defined by
Then, {τ n } (n ≥ n 0) is a nondecreasing sequence verifying \(\lim _{n\to \infty }\tau _{n}=\infty \), and for all n ≥ 0, it holds that
Lemma 3
[20] Let {λ n } and {β n } be nonnegative sequences such that
Then, the following two results hold:
-
(i)
There exists a subsequence \(\{\beta _{n_{k}}\}\) of {β n } such that \(\lim _{k\to \infty }\beta _{n_{k}}=0\).
-
(ii)
If {λ n } and {β n } are also such that β n+1 − β n < 𝜃 λ n (for some positive 𝜃), then {β n } satisfies \(\lim _{n\to \infty }\beta _{n}=0\).
Lemma 4
[21] Let ϕ be the functional defined on C as ϕ(⋅) := 〈F(⋅),⋅− q〉, where \(q \in \mathcal {H}\) and \(F : C \to \mathcal {H}\) is monotone and weakly closed on C, and Lipschitz continuous on bounded subsets of C. Then, ϕ is weakly lower semicontinuous on C.
3 Convergence Results
In order to solve Problem L K F(g, F, C), we analyze the convergence of the sequence {x k} generated by the following iterative scheme.
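The scheme, as it can be read off from Lemma 5 below, proceeds at each iteration k by choosing w k ∈ ∂ 2 g(x k, x k), forming d k = F(x k) + α k w k, taking η k large enough that ∥d k∥ ≤ η k , and updating x k+1 = Pr C (x k − (λ k /η k )d k). A minimal Python sketch under these assumptions (the normalization η k = max{μ, ∥d k∥}, the parameter sequences, and the toy instance at the bottom are ours, not a verbatim transcription of Algorithm 1):

```python
import numpy as np

def lkf_solve(F, subgrad_g, proj_C, x0, mu=1.0, max_iter=5000, eps=1e-9):
    """Projected-subgradient sketch read off from Lemma 5:
    d^k = F(x^k) + alpha_k w^k with w^k in d2 g(x^k, x^k),
    x^{k+1} = Pr_C(x^k - (lam_k / eta_k) d^k), where ||d^k|| <= eta_k.
    eta_k = max(mu, ||d^k||) is an assumption, not the paper's exact rule."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        lam = (k + 1.0) ** -0.7      # together with alpha below, satisfies (A4)
        alpha = (k + 1.0) ** -0.3
        d = F(x) + alpha * subgrad_g(x)
        eta = max(mu, float(np.linalg.norm(d)))  # guarantees ||d^k|| <= eta_k
        x_next = proj_C(x - (lam / eta) * d)
        if np.linalg.norm(x_next - x) <= eps:    # eps-solution in the paper's sense
            return x_next
        x = x_next
    return x

# Toy instance (ours, for illustration): C = [-1, 1]^2 and F(x) = x, so
# Sol(F, C) = {0} and the unique LKF solution is 0; g(x, y) = <x - t, y - x>
# is 1-strongly monotone with diagonal subgradient w(x) = x - t.
t = np.array([1.0, 1.0])
sol = lkf_solve(F=lambda x: x,
                subgrad_g=lambda x: x - t,
                proj_C=lambda x: np.clip(x, -1.0, 1.0),
                x0=np.array([0.8, -0.5]))
```

With the slowly decaying parameter sequences required by (A 4), the iterates drift toward the solution rather slowly, which is consistent with observation (b) in Section 4 about sensitivity to λ k and α k .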
We begin with the following lemma.
Lemma 5
Let the sequences {x k} and {w k} be generated by Algorithm 1. Then
-
(i)
∥x k+1 − x k∥ ≤ λ k for all k ≥ 0.
-
(ii)
The following holds for all x ∗ ∈ Sol(F, C)
$$ \|x^{k+1}-x^{\ast}\|^{2}\leq \|x^{k}-x^{\ast}\|^{2}-\frac{2\lambda_{k}}{\eta_{k}}\langle F(x^{k}), x^{k}-x^{\ast}\rangle-\frac{2\alpha_{k}\lambda_{k}}{\eta_{k}}\langle x^{k}-x^{\ast}, w^{k}\rangle+5{\lambda_{k}^{2}}. $$(3)
Proof
(i) Since Pr C is firmly nonexpansive, x k ∈ C, and Step 2 of the algorithm ensures that ∥d k∥≤ η k , we have
This implies (i).
(ii) Applying property (2) for x := x k ∈ C, z := x ∗∈ C, and \(y:=\frac {\lambda _{k}}{\eta _{k}}d^{k}\in \mathcal {H}\), and using \(x^{k+1}=\text {Pr}_{C}\left (x^{k}-\frac {\lambda _{k}}{\eta _{k}}d^{k}\right )\), we obtain
Combining the latter inequality with the bound ∥d k∥≤ η k and d k = F(x k) + α k w k, we get
This implies (3). □
Theorem 1
Let the above assumptions (A 1) – (A 4) hold. Then, the sequence {x k} generated by Algorithm 1 converges strongly to the unique solution of Problem L K F(g, F, C).
Proof
We divide the proof into several claims.
Claim 1
The sequence {x k} is bounded.
Indeed, for each x ∗ ∈ Sol(F, C), we have 〈F(x ∗), x − x ∗〉 ≥ 0 for all x ∈ C. By the monotonicity of F and x k ∈ C, we get 〈F(x k), x k − x ∗〉 ≥ 0. Combining this and inequality (3), we have
Consequently,
For each k ≥ 1, set \(a_{k}:=\|x^{k}-x^{\ast }\|^{2}-5{\sum }_{j=0}^{k-1}{\lambda ^{2}_{j}}\). Then,
Case 1
There exists an index k 0 > 0 such that the sequence {a k } is nonincreasing for k ≥ k 0. This implies that {a k } is bounded above by \(a_{k_{0}}\), i.e., \(a_{k}:=\|x^{k}-x^{\ast }\|^{2}-5{\sum }_{j=0}^{k-1}{\lambda ^{2}_{j}}\leq a_{k_{0}}\) for all k ≥ k 0. Using assumption (A 4) that \({\sum }_{k=0}^{\infty }{\lambda ^{2}_{k}}<\infty \), we have
It follows that the sequence {x k} is bounded.
Case 2
There exists a subsequence \(\{a_{k_{j}}\}\) of {a k } such that \(a_{k_{j}}<a_{k_{j}+1}\) for all j ≥ 0. By Lemma 2, we have a subsequence {τ k } such that
Replacing k := τ k in (4) and using (5), we obtain
Combining this with the convexity of g(x,⋅), the ρ-strong monotonicity of g and g(x, x) = 0 for all x ∈ C, we get
Then, for all w ∗∈ ∂ 2 g(x ∗, x ∗), we have
This shows that the sequence \(\{x^{\tau _{k}}\}\) is bounded and hence \(\{a_{\tau _{k}}\}\) defined as \(a_{\tau _{k}}=\|x^{\tau _{k}}-x^{\ast }\|^{2}-5{\sum }_{j=0}^{\tau _{k}-1}{\lambda ^{2}_{j}}\) is also bounded. It follows from this and (5) that the sequence {a k } is bounded. By a similar argument as in Case 1, we conclude that the sequence {x k} is bounded.
Claim 2
Let x ∗ ∈ Sol(F, C) and let {z k} be a bounded sequence in C such that
we claim that any weak cluster point of {z k} belongs to Sol(F, C).
Indeed, suppose that \(\{z^{k_{j}}\}\) is a subsequence of {z k} such that \(z^{k_{j}}\) converges weakly to \(\hat z\) in \(\mathcal {H}\). Obviously, \(\hat z\in C\), because C is assumed to be closed and convex. Applying Lemma 4 for q := x ∗, ϕ(x) := 〈F(x), x − x ∗〉 and using Assumption (A 2), we have that ϕ is weakly lower semicontinuous on C and hence
From the monotonicity of F and x ∗∈Sol(F, C), it follows that
Assumption (6) yields
Combining this, (7) and (8), we obtain \(\langle F(\hat z), \hat z-x^{\ast }\rangle =0\). Since F is paramonotone and x ∗∈Sol(F, C), we also have \(\hat z\in \text {Sol}(F, C)\).
Claim 3
We show that, for all x ∗ ∈ Sol(F, C), there exists γ > 0 such that
Indeed, by Lemma 5 and the Lipschitz continuity of F, we have
where γ := sup{∥F(x k+1)∥ + L∥x k − x ∗∥ : k ≥ 1}. By Claim 1, we have γ < + ∞.
Claim 4
The sequence {x k} converges strongly to the unique solution x ∗ of L K F(g, F, C).
Indeed, Claim 1 ensures the boundedness of the sequence {x k}, hence {F(x k)} is bounded (by the Lipschitz continuity of F on bounded subsets of C), and so is {w k} (from Assumption (A 1)). By the definition of η k , we easily observe that {η k } is bounded, so that there exists a positive constant δ such that
Set
By Lemma 5, inequality (3) can be equivalently rewritten as
This means that
Now, we consider two cases as follows:
Case 1
There exists k 0 such that {b k }(k ≥ k 0) is nonincreasing, i.e., b k+1 ≤ b k for all k ≥ k 0, namely
Since, as shown above, 〈F(x k), x k − x ∗〉≥〈F(x ∗), x k − x ∗〉≥ 0, we have
Since \({\sum }_{k=0}^{\infty }{\lambda _{k}^{2}}<\infty \), Lemma 1 yields
From (9), together with the second estimate in (11), we immediately have
Then, by Claim 3, together with our assumptions \({\sum }_{k=0}^{\infty }{\lambda _{k}^{2}}<\infty \), \({\sum }_{k=0}^{\infty }\lambda _{k}=\infty \), and using Lemma 3, we obtain \(\lim _{k\to \infty }\langle F(x^{k}), x^{k}-x^{\ast }\rangle =0\). As a consequence, Claim 2 shows that any weak cluster point of {x k} belongs to Sol(F, C). By Lemma 5, we obviously have
From the boundedness of the sequence {x k} and \({\sum }_{k=0}^{\infty }{\lambda _{k}^{2}}<\infty \), we have
Moreover, from (9) and the assumption \({\sum }_{k=0}^{\infty }\lambda _{k}\alpha _{k}=\infty \), it is easily seen that \({\sum }_{k=0}^{\infty }\frac {\lambda _{k}\alpha _{k}}{\eta _{k}}=\infty \). This together with (12), leads immediately to
By the ρ-strong monotonicity of g and w k ∈ ∂ 2 g(x k, x k), we also have
Consequently, we get
Taking into account this and (13), we therefore obtain
Since {x k} is bounded and using the weak continuity of g(x ∗,⋅), there exists a subsequence \(\{x^{k_{j}}\}\) of {x k} converging weakly to an element \(\bar x\) in \(\mathcal H\) such that
By Claim 2 and the fact that x ∗ is the unique solution of Problem L K F(g, F, C), we have \(\bar x\in \text {Sol}(F, C)\) and hence \(g(x^{\ast }, \bar x)\geq 0\). Then, combining (15) and (16), we obtain
As a consequence, by the first estimate in (11), we deduce that
Case 2
There does not exist k 0 such that {b k }(k ≥ k 0) is nonincreasing. This implies that there exists a subsequence \(\{b_{k_{j}}\}\) of {b k } such that \(b_{k_{j}}<b_{k_{j}+1}\) for all j ≥ 0. In this case, we have a sequence of indexes {τ k } as defined in Lemma 2 such that \(b_{\tau _{k}}<b_{\tau _{k}+1}\) and \(b_{k}<b_{\tau _{k}+1}\). Denote by \(\mathcal W(x^{\tau _{k}})\) the set of weak cluster points of \(\{x^{\tau _{k}}\}\). By Claim 2, we have \(\mathcal W(x^{\tau _{k}})\subset \text {Sol}(F, C)\). From (10), the ρ-strong monotonicity of g and \(w^{\tau _{k}}\in \partial _{2} g(x^{\tau _{k}},x^{\tau _{k}})\), we have
where w ∗∈ ∂ 2 g(x ∗, x ∗). Consequently,
Combining this, \(\langle F(x^{\tau _{k}}), x^{\tau _{k}}-x^{\ast }\rangle \geq 0\) and the boundedness of {x k}, we can derive
Furthermore, from \(\langle F(x^{\tau _{k}}), x^{\tau _{k}}-x^{\ast }\rangle \geq 0\) and (17), we obviously have
which, in light of (14), leads to
Passing to the upper limit, we obtain
so that
Using the inequality \(b_{k}< b_{\tau _{k}+1}\) for all k ≥ k 0, i.e.,
By the definition of τ k and 〈F(x j), x j − x ∗〉≥ 0, we get τ k ≤ k and hence
Combining this and Assumption (A 4), we have
-
\(\lim _{k\to \infty }{\sum }_{j=\tau _{k}}^{k}{\lambda _{j}^{2}}=0\) (because τ k → + ∞) and \({\sum }_{j=1}^{\infty }{\lambda _{j}^{2}}<\infty \);
-
\(\lim _{k\to \infty }{\sum }_{j=\tau _{k}}^{k}\frac {\lambda _{j}\langle F(x^{j}), x^{j}-x^{\ast }\rangle }{\eta _{j}}=0\), because \({\sum }_{j=\tau _{k}}^{k}\langle F(x^{j}), x^{j}-x^{\ast }\rangle \to 0\), λ k → 0 and η k is bounded away from zero;
-
\(\lim _{k\to \infty }\|x^{\tau _{k}+1}-x^{\ast }\|=0\).
From this and (18), we deduce that \(\lim _{k\to \infty }\|x^{k} -x^{\ast }\|=0\) and the sequence {x k} converges strongly to x ∗. □
4 Examples and Numerical Results
In this section, we illustrate the proposed algorithm by applying it to a class of lexicographic Ky Fan inequalities L K F(g, F, C). Here, C is a polyhedral convex set given by
and the bifunction \(g: C\times C\to \mathcal R\) is of the form
where \(G: C\to \mathcal R^{n}\), \(Q\in \mathcal R^{n\times n}\) is a symmetric positive semidefinite matrix, and \(q\in \mathcal R^{n}\). Since Q is symmetric positive semidefinite, g(x,⋅) is convex for each fixed x ∈ C. The cost mapping F will be specified in each of the following examples. It is well known that if G is ξ-strongly monotone on C and ξ > ∥Q∥, then g is strongly monotone on C × C with constant ξ −∥Q∥ (see [27] and, later, [26]). As usual, we say that x k is an 𝜖-solution of Problem L K F(g, F, C) if ∥x k+1 − x k∥≤ 𝜖.
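If g takes the standard test form g(x, y) = 〈G(x) + Qy + q, y − x〉 from [26, 27] with G(x) = Px (our reading of the setup; the matrices below are hypothetical illustration data, not those of the examples), then the diagonal subgradient is (P + Q)x + q, matching the formula ∂ 2 g(x, x) = {(P + Q)x + q} used in Example 1. Indeed, g(x, y) − 〈(P + Q)x + q, y − x〉 = (y − x)ᵀQ(y − x) ≥ 0, which is exactly the subgradient inequality. A numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration data: Q symmetric positive semidefinite, G(x) = P x.
Q = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
P = np.diag([4.0, 5.0, 6.0])
q = np.array([1.0, -2.0, 0.5])

def g(x, y):
    # Assumed test form: g(x, y) = <G(x) + Q y + q, y - x> with G(x) = P x.
    return (P @ x + Q @ y + q) @ (y - x)

def w(x):
    # Diagonal subgradient, matching d2 g(x, x) = {(P + Q) x + q} in Example 1.
    return (P + Q) @ x + q

# Convexity of g(x, .) gives the subgradient inequality g(x, y) >= <w(x), y - x>;
# the gap equals (y - x)^T Q (y - x) >= 0 since Q is PSD.
subgrad_ok = all(
    g(x, y) >= w(x) @ (y - x) - 1e-10
    for x, y in (rng.normal(size=(2, 3)) for _ in range(500))
)
```

Note also that g(x, x) = 0 holds automatically for this form, as required by Assumption (A 1).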
Example 1
Consider the mapping \(F: \mathcal R^{3}\to \mathcal R^{3}\) given in [28] which is paramonotone (not strictly monotone) of the form
In this example, let C, Q and q be defined by
and
It is easy to see that G is strongly monotone with constant ξ = 3.8344, F is Lipschitz continuous with constant L = 3.5616, and ∥Q∥ = 2.6. Then, g is strongly monotone with constant ξ −∥Q∥ = 1.2344 and ∂ 2 g(x, x) = {(P + Q)x + q}. To test Algorithm 1, we choose the parameters:
Then, the iterative scheme for solving L K F(g, F, C) is formulated as follows:
The experiments are implemented in Matlab R2013a running on a laptop with an Intel(R) Core(TM) i3-3110M CPU @ 2.40 GHz and 4 GB RAM.
Example 2
Consider the mapping \(F: \mathcal R^{4}\to \mathcal R^{4}\) described in [13] which is paramonotone of the form
Take C, Q, and q as follows:
and
Then, F is Lipschitz continuous and g is strongly monotone on \(\mathcal R^{4}\), so that Assumptions (A 1)–(A 3) are satisfied.
From the preliminary numerical results reported in Tables 1 and 2, we observe that
-
(a)
As with other methods for Ky Fan inequalities such as the proximal point algorithm, or the auxiliary principle for equilibrium problems, the convergence speed of the algorithm depends very much on the starting point.
-
(b)
The algorithm is quite sensitive to the choice of the parameters λ k and α k .
References
Anh, P.N.: A new extragradient iteration algorithm for bilevel variational inequalities. Acta Math. Vietnam. 37, 95–107 (2012)
Anh, P.N.: An interior proximal method for solving pseudomonotone nonlipschitzian multivalued variational inequalities. Nonlinear Anal. Forum 14, 27–42 (2009)
Anh, P.N., Kim, J.K.: Outer approximation algorithms for pseudomonotone equilibrium problems. Comput. Math. Appl. 61, 2588–2595 (2011)
Anh, P.N., Kim, J.K., Muu, L.D.: An extragradient algorithm for solving bilevel pseudomonotone variational inequalities. J. Glob. Optim. 52, 627–639 (2012)
Anh, P.N., Muu, L.D., Strodiot, J.J.: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Math. Vietnam. 34, 67–79 (2009)
Anh, P.N., Muu, L.D., Hien, N.V., Strodiot, J.J.: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J. Optim. Theory Appl. 124, 285–306 (2005)
Baiocchi, C., Capelo, A.C.: Variational and Quasivariational Inequalities: Applications to Free Boundary Problems. Wiley, New York (1984)
Ceng, L.C., Cubiotti, P., Yao, J.C.: An implicit iterative scheme for monotone variational inequalities and fixed point problems. Nonlinear Anal. 69, 2445–2457 (2008)
Ceng, L.C., Yao, J.C.: Relaxed viscosity approximation methods for fixed point problems and variational inequality problems. Nonlinear Anal. 69, 3299–3309 (2008)
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer Academic Publishers, Dordrecht (2003)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Fan, K.: A minimax inequality and applications. In: Shisha, O (ed.) Inequality III, pp 103–113. Academic Press, New York (1972)
Gao, X.-B., Liao, L.-Z., Qi, L.: A novel neural network for variational inequalities with linear and nonlinear constraints. IEEE Trans. Neural Netw. 16, 1305–1317 (2005)
Glackin, J., Ecker, J.G., Kupferschmid, M.: Solving bilevel linear programs using multiple objective linear programming. J. Optim. Theory Appl. 140, 197–212 (2009)
Iiduka, H.: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 71, e1292–e1297 (2009)
Kalashnikov, V.V., Kalashnikova, N.I.: Solving two-level variational inequality. J. Glob. Optim. 8, 289–294 (1996)
Konnov, I.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
Lu, X.W., Xu, H.K., Yin, X.M.: Hybrid methods for a class of monotone variational inequalities. Nonlinear Anal. 71, 1032–1041 (2009)
Luo, Z.-Q., Pang, J.-S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
Maingé, P.-E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Maingé, P.-E.: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 205, 501–506 (2010)
Maingé, P.-E., Moudafi, A.: Strong convergence of an iterative method for hierarchical fixed-point problems. Pac. J. Optim. 3, 529–538 (2007)
Moudafi, A.: Proximal methods for a class of bilevel monotone equilibrium problems. J. Glob. Optim. 47, 287–292 (2010)
Sabharwal, A., Potter, L.C.: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 46, 2345–2352 (1998)
Solodov, M.: An explicit descent method for bilevel convex optimization. J. Convex Anal. 14, 227–237 (2007)
Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139–159 (2012)
Quoc, T.D., Muu, L.D., Hien, N.V.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Trujillo-Cortez, R., Zlobec, S.: Bilevel convex programming models. Optimization 58, 1009–1028 (2009)
Xu, H.-K.: Viscosity method for hierarchical fixed point approach to variational inequalities. Taiwan. J. Math. 14, 463–478 (2010)
Xu, M.H., Li, M., Yang, C.C.: Neural networks for a class of bi-level variational inequalities. J. Glob. Optim. 44, 535–552 (2009)
Yao, Y., Marino, G., Muglia, L.: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63, 559–569 (2014)
Acknowledgements
This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.02-2017.15.
We are very grateful to the two anonymous referees for their helpful and constructive comments, which greatly improved the paper.
Anh, P.N., Thuy, L.Q. & Anh, T.T.H. Strong Convergence Theorem for the Lexicographic Ky Fan Inequality. Vietnam J. Math. 46, 517–530 (2018). https://doi.org/10.1007/s10013-017-0253-z