Abstract
This survey paper discusses some new results on vector variational inequalities. It can serve as an elementary introduction to vector variational inequalities and vector optimization problems. The focus is on results about the connectedness structure of the solution sets, which are obtained by a scalarization method and by properties of semi-algebraic sets. The first major theorem says that both the Pareto solution set and the weak Pareto solution set of a vector variational inequality, where the constraint set is polyhedral convex and the basic operators are given by polynomial functions, have finitely many connected components. The second major theorem asserts that both the proper Pareto solution set and the weak Pareto solution set of a vector variational inequality, where the convex constraint set is given by polynomial functions and all the components of the basic operators are polynomial functions, have finitely many connected components, provided that the Mangasarian-Fromovitz Constraint Qualification is satisfied at every point of the constraint set. In addition, it has been established that if the proper Pareto solution set is dense in the Pareto solution set, then the latter also has finitely many connected components. Consequences of these results for vector optimization problems are shown.
Contents
- 1 Introduction
- 2 The Fermat Rule and Vector Variational Inequalities
- 3 Some Classes of VVIs
- 4 Scalarization Formulae
- 5 Sets Having Finitely Many Connected Components
- 6 Semi-Algebraic Sets
- 7 VVIs under Linear Constraints
- 8 VVIs under Polynomial Constraints
- References
1 Introduction
The notion of vector variational inequality (VVI for short) was introduced by F. Giannessi [9]. As the most important vector equilibrium model, it has received a lot of attention from researchers; see, e.g., [3, Chapter 3], [10, 16–18, 28, 50]. Among the large collection of papers [10], we would like to draw readers’ special attention to the article [11], where different solution concepts were discussed and separation techniques were applied to studying VVIs.
It is well known (see, e.g., [28]) that VVI is both a unified model and an effective tool to address various questions (solution existence, optimality conditions, structure of the solution sets, solution stability, solution sensitivity, solution methods, etc.) about vector optimization problems (VPs for short). As far as we understand, the importance of VVIs for vector optimization is the same as that of variational inequalities for scalar optimization.
Solution sensitivity and topological properties of the solution sets of strongly monotone VVIs and monotone VVIs with applications to vector optimization problems have been considered in [28, 48]. Connectedness structure of the solution sets and solution stability of affine VVIs have been investigated, for instance, in [16, 50].
This survey paper is aimed at discussing some new results on vector variational inequalities. We hope that it can serve as an elementary introduction to vector variational inequalities and vector optimization problems. The focus is on the results of [17, 18] about the connectedness structure of the solution sets, which are obtained by a scalarization method and properties of semi-algebraic sets. In [17], it has been proved that both the Pareto solution set and the weak Pareto solution set of a VVI, where the constraint set is polyhedral convex and the basic operators are given by polynomial functions, have finitely many connected components. In [18], it has been shown that both the proper Pareto solution set and the weak Pareto solution set of a VVI, where the convex constraint set is given by polynomial functions and all the components of the basic operators are polynomial functions, have finitely many connected components, provided that the Mangasarian-Fromovitz Constraint Qualification [33, p. 44] is satisfied at every point of the constraint set. It has also been established in [18] that if the proper Pareto solution set is dense in the Pareto solution set, then the latter has finitely many connected components.
Although finiteness of the number of connected components in the solution sets of a VVI is a remarkable fact, our knowledge about VVIs can be enlarged by other facts, like unboundedness of the connected components if the solution set in question is disconnected, and stability of the solution set when the VVI problem undergoes perturbations. Up to now, the latter properties have been obtained just for monotone affine VVIs. The interested reader is referred to the research paper [50] and also to the survey [47] for a series of related results and proofs, which are based not on the properties of semi-algebraic sets [2], but on Robinson’s stability theorem for monotone generalized equations [36, Theorem 2].
Since vector variational inequalities can serve as a tool for studying vector optimization problems, consequences of the above-mentioned results for the latter can be discussed in detail. In fact, throughout this paper, we will consider VPs in parallel with VVIs.
To make the paper easy to read, we will use the finite-dimensional setting and keep our writing as elementary as possible. Also, for the ordering cone, we will use the most standard one, that is, the nonnegative orthant of a Euclidean space.
Observe that infinite-dimensional settings (see, e.g., [39, 40, 48]) and general ordering cones (see, e.g., [11]) have been used widely in the study of VVIs.
In what follows, we denote the scalar product of two vectors x, y in a Euclidean space by 〈x, y〉. The nonnegative orthant in \(\mathbb R^{n}\) is denoted by \(\mathbb R^{n}_{+}\). The interior and the relative interior (see [37, p. 44] for the definition) of a convex set \(D\subset \mathbb R^{n}\) are denoted respectively by intD and riD. For a matrix M, the symbol \(M^{T}\) indicates the transpose of M.
2 The Fermat Rule and Vector Variational Inequalities
About the fundamental role of Fermat’s Rule in the history of mathematics, Hiriart-Urruty [12, p. 34] wrote: “In 1629, (that is thirteen years before the birth of Newton), Fermat conceived his method “De maximis et minimis” that could be applied to the determination of values yielding the maximum or the minimum of a function, as well as tangents of curves, which boiled down to the foundations of differential calculus...”.
Let us recall the two modern forms of Fermat’s Rule. The first one addresses scalar optimization problems under geometrical constraints. The second one applies to vector optimization problems under geometrical constraints.
From now on, if not otherwise stated, let \(K\subset \mathbb R^{n}\) be a non-empty closed convex set and \({\Omega }_{K}\subset \mathbb R^{n}\) an open set containing K. Given a continuously differentiable function \(f:{\Omega }_{K}\to \mathbb R\), one considers the optimization problem
$$ \min\,\{f(x)\, :\, x\in K\}. $$(1)
The necessary optimality condition in the next theorem is a generalization of the classical Fermat Rule which dealt with unconstrained scalar optimization problems.
Theorem 1
(Fermat’s Rule) The following assertions are valid:
-
(i)
For a point \(\bar x\in K\) to be a local solution of (1), it is necessary that
$$ \langle\nabla f(\bar x), y-\bar x\rangle\notin -\mathbb R_{+}\setminus\{0\}\quad \forall y\in K, $$(2)where ∇f(x) stands for the gradient vector of f at x. Condition (2) can be rewritten as
$$ \langle\nabla f(\bar x), y-\bar x\rangle\geq 0\quad\forall y\in K. $$(3) -
(ii)
If f is convex on K, i.e., the inequality f((1−t)x+ty)≤(1−t)f(x)+tf(y) holds for any x, y ∈ K, and t ∈ (0, 1), then (3) is sufficient for a point \(\bar x\in K\) to be a global solution of (1).
Assertion (i) is a special case of the first assertion of Theorem 2 below. Assertion (ii) can be easily obtained. Indeed, suppose that (3) is fulfilled, f is convex on K, and y ∈ K is chosen arbitrarily. By the convexity of f on K, we have \(f((1-t)\bar x+ty)\leq (1-t)f(\bar x)+tf(y)\) for any t ∈ (0,1). Hence, \(t^{-1}[f(\bar x+t(y-\bar x))-f(\bar x)]\leq f(y)-f(\bar x)\) for all t ∈ (0,1). Letting t→0+ yields \(f^{\prime }(\bar x;y-\bar x)\leq f(y)-f(\bar x)\), where \(f^{\prime }(\bar x;y-\bar x)\) is the directional derivative of f at \(\bar x\) in the direction \(y-\bar x\). As \(\langle \nabla f(\bar x), y-\bar x\rangle =f^{\prime }(\bar x;y-\bar x)\), from (3), it follows that \(0\leq f(y)-f(\bar x)\). The fulfillment of this inequality for all y ∈ K shows that \(\bar x\in K\) is a global solution of (1).
Setting F(x)=∇f(x) for all x ∈ K, one can transform the necessary optimality condition (3) to the next variational inequality (VI in brief) in the classical sense [25]: Find \(\bar x\in K\) such that
$$ \langle F(\bar x), y-\bar x\rangle\geq 0\quad\forall y\in K. $$(4)
The solution set of this problem is abbreviated to Sol(VI). In general, the variational inequality (4) can be defined for any vector function \(F:K\to \mathbb R^{n}\). Concerning the case F(x)=∇f(x), it is worth stressing that if \(f:{\Omega }_{K}\to \mathbb R\) is a twice continuously differentiable function, then the Hessian \(\nabla ^{2}f(x)=\left (\frac {\partial ^{2}f(x)}{\partial x_{j}\partial x_{i}}\right )\) is symmetric for all x ∈ K by Clairaut’s Theorem; hence
$$ \frac{\partial F_{i}(x)}{\partial x_{j}}=\frac{\partial F_{j}(x)}{\partial x_{i}}\quad \forall i,j\in J,\ \forall x\in K, $$(5)
with J:={1, …, n}, \(F_{i}(x)\) being the i-th component of F(x), and \(\frac {\partial F_{i}(x)}{\partial x_{j}}\) denoting the partial derivative of \(F_{i}\) at x with respect to \(x_{j}\). If condition (5) is satisfied, then one says that (4) is a symmetric VI. Thus, \(C^{2}\)-smooth optimization problems correspond to symmetric VIs.
If \(\bar x\in K\) satisfies (3), then we call it a stationary point of (1). In the above notation, the stationary point set of (1) coincides with Sol(VI), provided that F(x)=∇f(x) for all x ∈ K.
The transformation of (1) into (4), which is often easier to solve, is a fundamental idea in optimization theory. To solve (4), one can apply the projection method, the Tikhonov regularization method, the proximal point method, the extragradient method, modifications and improvements of these methods, and so on; see [8, 23, 24, 26, 27, 34, 38, 42–44] and the references therein.
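To give a concrete feel for the projection method mentioned above, here is a minimal numerical sketch under hypothetical data: the box \(K=[0,1]^{2}\) and the strongly monotone operator F(x) = x − a, for which the unique solution of the VI is the Euclidean projection of a onto K. All names and data below are illustrative, not from the paper.

```python
import numpy as np

# Projection method sketch for a VI of type (4) on the box K = [0, 1]^2
# with the (strongly monotone) operator F(x) = x - a.  For this F, the
# unique VI solution is the Euclidean projection of a onto K.

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def solve_vi_projection(F, project, x0, tau=0.5, iters=200):
    """Fixed-point iteration x <- P_K(x - tau * F(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - tau * F(x))
    return x

a = np.array([2.0, -1.0])
x_star = solve_vi_projection(lambda x: x - a, project_box, x0=np.zeros(2))
# x_star is P_K(a) = (1, 0)
```

For this simple operator the iteration is a contraction, so any step size tau in (0, 1) and any starting point converge to the same solution.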
It is worth stressing that not only optimization problems but also minimax problems can be effectively studied (see, e.g., [20]) by using VIs.
Next, let us consider a general differentiable vector optimization problem and recall several solution concepts in vector optimization. More details can be found in [21, Chapter 4] and [32, Chapter 2]. Suppose now that \(f=(f_{1},\dots ,f_{m}):{\Omega }_{K}\to \mathbb R^{m}\) is a continuously differentiable function defined on \({\Omega }_{K}\). The vector optimization problem with the constraint set K and the vector objective function f is written formally as follows:
$$ \text{Min}\,\{f(x)\, :\, x\in K\}. \qquad\mathrm{(VP)} $$
Definition 1
A point x ∈ K is said to be an efficient solution (or a Pareto solution) of (VP) if \(\left (f(K)-f(x)\right )\cap \left (-\mathbb R^{m}_{+}\setminus \{0\}\right )=\emptyset \).
In other words, x ∈ K is an efficient solution of (VP) if and only if one cannot find any feasible point y such that \(f_{i}(y)\leq f_{i}(x)\) for all i = 1, …, m and there is at least one index \(i_{0}\in \{1,\dots,m\}\) such that \(f_{i_{0}}(y)<f_{i_{0}}(x)\).
Definition 2
One says that x ∈ K is a weakly efficient solution (or a weak Pareto solution) of (VP) if \(\left (f(K)-f(x)\right )\cap \left (-\text {int}\,\mathbb R^{m}_{+}\right )=\emptyset \).
Clearly, x ∈ K is a weakly efficient solution of (VP) if and only if one cannot find any feasible point y such that \(f_{i}(y)<f_{i}(x)\) for all i = 1, …, m. We will denote the efficient solution set and the weakly efficient solution set of (VP) respectively by Sol(VP) and Solw(VP).
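Definitions 1 and 2 can be tested directly on a finite sample of objective values. The sketch below (the five sample vectors are illustrative, not from the paper) implements both dominance tests:

```python
import numpy as np

# A value f(x) is "efficient" if no other sampled value is <= it
# componentwise with at least one strict coordinate (Definition 1),
# and "weakly efficient" if no other value is strictly smaller in
# every coordinate (Definition 2).

def is_efficient(fx, values):
    return not any(np.all(v <= fx) and np.any(v < fx) for v in values)

def is_weakly_efficient(fx, values):
    return not any(np.all(v < fx) for v in values)

values = [np.array(v) for v in
          [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (1.0, 0.5), (2.0, 2.0)]]
efficient = [tuple(v) for v in values if is_efficient(v, values)]
weakly_efficient = [tuple(v) for v in values if is_weakly_efficient(v, values)]
# (1.0, 0.5) is weakly efficient but not efficient: it is dominated by
# (1.0, 0.0), yet no sampled value is strictly smaller in both coordinates.
```

The sample is chosen so that the two concepts differ, mirroring the strict inclusion Sol(VP) ⊂ Solw(VP) in general.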
Example 1
Let n = m = 2, f(x) = x for all \(x\in \mathbb R^{2}\), and
It is easy to verify by definitions that \(\text{Sol(VP)}=\{x \,:\, x_{1}+x_{2}=1,\ 0\leq x_{1}\leq 1\}\) and
We are going to formulate the Fermat Rule for vector optimization problems. For the convenience of the reader, the “sufficiency” part is also included and a proof [28, p. 749] is provided.
Theorem 2
(Fermat’s Rule for vector optimization problems) The following assertions are valid:
-
(i)
For a point \(\bar x\in K\) to be a weakly efficient solution of (VP), it is necessary that
$$ \nabla f(\bar x)(y-\bar x)\notin - \text{int}\,\mathbb R_{+}^{m}\quad\; \forall y\in K, $$(6)where \(\nabla f(x)(u):=(\langle\nabla f_{1}(x),u\rangle, \dots, \langle\nabla f_{m}(x),u\rangle)^{T}\) stands for the value of the derivative mapping \(\nabla f(x):\mathbb R^{n}\to \mathbb R^{m}\) of f at x at an element \(u\in \mathbb R^{n}\). Condition (6) can be rewritten as
$$ \nabla f(\bar x)(y-\bar x)\nleq_{\text{int}\,\mathbb R^{m}_{+}}0\quad\; \forall y\in K, $$(7)where the inequality \(v\nleq _{\text {int}\,\mathbb R^{m}_{+}}0\) for \(v\in \mathbb R^{m}\) means that \(-v\notin \text {int}\,\mathbb R^{m}_{+}\).
-
(ii)
If f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) , i.e., the inequality \((1-t)f(x)+tf(y)\in f((1-t)x+ty)+\mathbb R^{m}_{+}\) holds for any x, y ∈ K and t ∈ (0, 1), then (7) is sufficient for a point \(\bar x\in K\) to be a weakly efficient solution of (VP).
Proof
(i) On the contrary, suppose that \(\bar x\in K\) is a weakly efficient solution of (VP), but (6) fails to hold. Then there exists \(\hat x\in K\) with \(\langle \nabla f_{i}(\bar x),\hat x-\bar x\rangle <0\) for i = 1, …, m. Setting \(x_{t}=\bar x+t(\hat x-\bar x)\), we have
$$ f_{i}(x_{t})-f_{i}(\bar x)=t\langle\nabla f_{i}(\bar x),\hat x-\bar x\rangle+o(t)\quad (i=1,\dots,m) $$
for t ∈ (0,1). Hence, we can find δ ∈ (0,1) such that \(f_{i}(x_{t})-f_{i}(\bar x)<0\) for all t ∈ (0,δ) and for all i = 1, …, m. This is impossible because \(x_{t}\in K\) for every t ∈ (0,δ) and \(\bar x\) is a weakly efficient solution of (VP).
(ii) It is clear that f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) if and only if, for every i ∈ {1, …, m}, the function \(f_{i}\) is convex on K. To obtain a contradiction, suppose that (7) is fulfilled, but there exists \(\hat x\in K\) such that \(f_{i}(\hat x)<f_{i}(\bar x)\) for all i = 1, …, m. Then, invoking the convexity of the functions \(f_{i}\) on K, we have
$$ \langle\nabla f_{i}(\bar x),\hat x-\bar x\rangle\leq f_{i}(\hat x)-f_{i}(\bar x)<0\quad (i=1,\dots,m). $$
From this, we obtain \(\nabla f(\bar x)(\hat x-\bar x)\in - \text {int}\,\mathbb R_{+}^{m},\) which contradicts (7).
The proof is complete. □
It is reasonable to expect that a rule similar to the one in the first assertion of Theorem 2 is valid for efficient solutions of (VP). However, the claim “If \(\bar x\in K\) is an efficient solution of (VP), then
$$ \nabla f(\bar x)(y-\bar x)\notin -\mathbb R^{m}_{+}\setminus\{0\} $$(8)
for every y ∈ K” is wrong! To justify this remark, let us consider an example of an unconstrained strongly convex quadratic vector optimization problem proposed by one of the two referees of [50].
Example 2
Let n = 1, m = 2, \(K=\mathbb R\), and \(f(x)=(x^{2},(x-1)^{2})\) for every \(x\in \mathbb R\). It is easy to show that Sol(VP2)=[0,1]. For the efficient point \(\bar x=0\), one has
$$ \nabla f(\bar x)(y-\bar x)=(0,-2y)^{T}\in -\mathbb R^{2}_{+}\setminus\{0\}\quad \forall y>0; $$
hence condition (8) cannot be fulfilled for every y ∈ K. (The same effect is attained if one chooses \(\bar x=1\).)
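The failure of condition (8) in Example 2 can be checked numerically. The sketch below evaluates the Jacobian of f(x) = (x², (x−1)²) at the efficient point x̄ = 0 and tests membership of ∇f(x̄)(y − x̄) in −ℝ²₊∖{0} for one illustrative y > 0:

```python
import numpy as np

# Example 2: f(x) = (x^2, (x-1)^2) on K = R.  At x_bar = 0 the Jacobian
# is nabla f(0) = (0, -2)^T, so for y > 0 the vector
# nabla f(0) * (y - 0) = (0, -2y)^T lies in -R^2_+ \ {0}, violating (8).

def jac_f(x):
    return np.array([2.0 * x, 2.0 * (x - 1.0)])  # (f1'(x), f2'(x))

x_bar, y = 0.0, 0.5
v = jac_f(x_bar) * (y - x_bar)  # = (0, -1)
in_minus_orthant_nonzero = bool(np.all(v <= 0) and np.any(v < 0))
# in_minus_orthant_nonzero is True, so (8) is violated at x_bar = 0
```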
Meanwhile, if f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) then, for any \(\bar x\in K\), the property
$$ \nabla f(\bar x)(y-\bar x)\notin -\mathbb R^{m}_{+}\setminus\{0\}\quad \forall y\in K $$
is strong enough to guarantee that \(\bar x\) is an efficient solution of (VP). The proof of this claim, which can be done similarly to that of the second assertion of Theorem 2, is left to the reader.
The above consideration of optimality conditions for vector optimization problems shows some strong motivations for the introduction of the concept of vector variational inequality of [9] in the form we are going to recall.
Given m vector-valued functions \(F_{i}: K\to \mathbb R^{n}\), i = 1, …, m, we put \(F = (F_{1}, \dots, F_{m})\) and
$$ F(x)(u):=\left(\langle F_{1}(x),u\rangle,\dots,\langle F_{m}(x),u\rangle\right)^{T}\quad \forall x\in K,\ \forall u\in\mathbb R^{n}. $$
Consider the set
$$ {\Sigma}:=\left\{\xi=(\xi_{1},\dots,\xi_{m})\in\mathbb R^{m}_{+} \,:\, \sum\limits_{l=1}^{m}\xi_{l}=1\right\}, $$
whose relative interior is given by riΣ={ξ ∈ Σ : \(\xi_{l}>0\) for all l = 1, …, m}.
The vector variational inequality [9, p. 167] defined by F, K, and the cone \(C:=\mathbb R^{m}_{+}\) is the problem:
$$ \text{Find}\ \bar x\in K\ \text{such that}\quad F(\bar x)(y-\bar x)\nleq_{C\setminus\{0\}}0\quad \forall y\in K, \qquad\mathrm{(VVI)} $$
where the inequality \(v\nleq _{C\setminus \{0\}}0\) for \(v\in \mathbb R^{m}\) means that −v∉C∖{0}. To this problem, one associates [4] the following one:
$$ \text{Find}\ \bar x\in K\ \text{such that}\quad F(\bar x)(y-\bar x)\nleq_{\text{int}\,C}0\quad \forall y\in K, \qquad\mathrm{(VVI)^{w}} $$
where intC denotes the interior of C and the inequality \(v\nleq _{\text {int}C}0\) indicates that −v∉intC. The solution sets of (VVI) and (VVI) w are abbreviated respectively to Sol(VVI) and Solw(VVI). The elements of the first set (resp., of the second one) are said to be the Pareto solutions (resp., the weak Pareto solutions) of (VVI). For a better understanding of the adjective “weak” either in “weak Pareto solution” or in “weak Pareto problem”, we would like to quote a remark of Professor F. Giannessi [11, p. 155]: “the term “weak” is misleading and in contrast with its use in other branches of Mathematics; “relaxed” would be a more appropriate term.”
For m = 1, one has \(F=F_{1}:K\to \mathbb R^{n}\) and \(C=\mathbb R_{+}\); hence, both problems (VVI) and (VVI) w coincide with the classical variational inequality problem
$$ \text{Find}\ \bar x\in K\ \text{such that}\quad \langle F(\bar x), y-\bar x\rangle\geq 0\quad \forall y\in K $$
(see (4)), whose solution set has been denoted by Sol(VI).
For each ξ ∈ Σ, consider the variational inequality
$$ \text{Find}\ \bar x\in K\ \text{such that}\quad \left\langle\sum\limits_{l=1}^{m}\xi_{l}F_{l}(\bar x), y-\bar x\right\rangle\geq 0\quad \forall y\in K, \qquad\mathrm{(VI)_{\xi}} $$
and denote its solution set by Sol(VI) ξ . The following definition has been suggested in [18].
Definition 3
If x ∈ K and there exists ξ ∈ riΣ such that x ∈ Sol(VI) ξ , then x is said to be a proper Pareto solution of (VVI).
The proper Pareto solution set of (VVI) is abbreviated to Solpr(VVI). Clearly, by taking the union of \(\text{Sol(VI)}_{\xi }\) over ξ ∈ riΣ, we find the whole set Solpr(VVI).
Following [37, p. 170], a polyhedral convex set in \(\mathbb R^{n}\) is a set which can be expressed as the intersection of some finite collection of closed half-spaces, i.e., as the set of solutions to some finite system of inequalities of the form
$$ \langle c_{i},x\rangle\leq \gamma_{i}\quad (i=1,\dots,k) $$
with \(c_{i}\in \mathbb R^{n}\) and \(\gamma _{i}\in \mathbb R\) for all i = 1, …, k. It is clear that any polyhedral convex set is a closed convex set, but the converse is not true.
Later, we will see that Solpr(VVI)⊂Sol(VVI), and the inclusion becomes an equality if K is a polyhedral convex set; otherwise, it may be strict.
To get an idea of how the above solution sets of (VVI) can be computed, let us consider an example.
Example 3
(cf. Example 1) Let n = m = 2, \(F_{1}(x)=(1,0)^{T}\), \(F_{2}(x)=(0,1)^{T}\) for all \(x\in \mathbb R^{2}\), and
Since F(x)(y − x) = y − x and \(\left \langle {\sum }_{l=1}^{2}\xi _{l}F_{l}(x),y-x\right \rangle =\langle \xi ,y-x\rangle \) for all x, y ∈ K and ξ ∈ Σ, one can easily verify that
and
We say that (VVI) is a symmetric vector variational inequality if for each l ∈ {1, …, m} the vector-function \(F_{l}\) satisfies (5) with F being replaced by \(F_{l}\), i.e.,
$$ \frac{\partial F_{li}(x)}{\partial x_{j}}=\frac{\partial F_{lj}(x)}{\partial x_{i}}\quad \forall i,j\in J,\ \forall x\in K, $$
where J = {1, …, n} and \(F_{li}(x)\) is the i-th component of \(F_{l}(x)\). Similarly, we say that (VVI) is an anti-symmetric vector variational inequality if the following property is available for each l ∈ {1, …, m}:
$$ \frac{\partial F_{li}(x)}{\partial x_{j}}=-\frac{\partial F_{lj}(x)}{\partial x_{i}}\quad \forall i,j\in J,\ \forall x\in K. $$
As noted in [47, 50], quadratic vector optimization problems lead to symmetric VVIs.
Comparing the Fermat Rule in Theorem 2 with the definition of VVI, we can assert that the necessary optimality condition for weakly efficient solutions of (VP) is a VVI, and if f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) then Solw(VVI)=Solw(VP), provided that \(F_{l}\) is taken as \(\nabla f_{l}\) for each l ∈ {1, …, m}. In addition, if f is \(C^{2}\)-smooth, then the problem (VVI) corresponding to (VP) is a symmetric VVI.
As it has been shown in [49] (see also [47, 50]), by using the special necessary and sufficient optimality conditions (not in the form of Fermat’s Rule!) for efficient solutions and weakly efficient solutions of a linear fractional vector optimization problem [5, 35] (see also [29, Chap. 8]), one can treat the problem as an anti-symmetric VVI.
3 Some Classes of VVIs
Convex (resp., strongly convex, quadratic, and polynomial) optimization problems (of both scalar and vector types) form four important special classes of problems in optimization theory. Likewise, monotone (resp., strongly monotone, affine, and polynomial) VVIs deserve special attention. The first three classes were highlighted in [28, pp. 750–751]. The fourth one has been studied quite recently, in [17, 18]. By \(\mathbb R[x_{1},\dots , x_{n}]\), we denote the ring of polynomials in the variables \(x_{1},\dots,x_{n}\) with coefficients from \(\mathbb R\).
Definition 4
(i) One says that (VVI) is a monotone vector variational inequality if for each l ∈ {1, …, m} the problem VI\((F_{l},K)\) is a monotone VI. The latter means that \(\langle F_{l}(y)-F_{l}(x), y - x\rangle\geq 0\) for all x, y ∈ K.
(ii) If for each l ∈ {1, …, m}, the problem VI\((F_{l},K)\) is a strongly monotone VI, i.e., there exists \(\rho_{l} > 0\) such that \(\langle F_{l}(y)-F_{l}(x), y - x\rangle \geq \rho_{l} \|y - x\|^{2}\) for all x, y ∈ K, then (VVI) is called a strongly monotone vector variational inequality.
(iii) Problem (VVI) is said to be an affine vector variational inequality if K is a polyhedral convex set and there exist matrices \(M_{l}\in \mathbb R^{n\times n}\) and vectors \(q_{l}\in \mathbb R^{n}\) (l = 1, …, m) such that \(F_{l}(x) = M_{l} x + q_{l}\) for all l = 1, …, m and x ∈ K.
(iv) One calls (VVI) a polynomial vector variational inequality under polynomial constraints if all the components of \(F_{l}\), l = 1, …, m, are polynomial functions in the variables \(x_{1},\dots,x_{n}\) (i.e., for each l one has \(F_{l}=(F_{l1}, \dots, F_{ln})\) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n), and K is the solution set of a finite system of inequalities and equations given by polynomial functions. If the latter system is given by affine functions, then (VVI) is called a polynomial vector variational inequality under linear constraints.
Going back to scalar functions, we now recall the notion of a strongly convex function. A function \(\varphi :{\Omega }_{K}\to \mathbb R\) is said to be strongly convex on K if there exists a constant ρ>0 such that the inequality
$$ \varphi((1-t)x+ty)\leq (1-t)\varphi(x)+t\varphi(y)-\rho t(1-t)\|x-y\|^{2} $$
holds for any x, y ∈ K and t ∈ (0,1). According to [45, Lemma 1, p. 184], the last condition is fulfilled if and only if the function
$$ \psi(x):=\varphi(x)-\rho\|x\|^{2} $$
is convex on Ω. By a result in [46, Prop. 4.10] (see also [45]), a continuously differentiable function \(\varphi :{\Omega }_{K}\to \mathbb R\) is strongly convex on K with a constant ρ>0 if and only if
$$ \langle\nabla\varphi(y)-\nabla\varphi(x), y-x\rangle\geq 2\rho\|y-x\|^{2}\quad \forall x,y\in K. $$
It is also well known [46, Prop. 4.10] (see also [45]) that a continuously differentiable function \(\varphi :{\Omega }_{K}\to \mathbb R\) is convex on K if and only if
$$ \langle\nabla\varphi(y)-\nabla\varphi(x), y-x\rangle\geq 0\quad \forall x,y\in K. $$
Combining these facts with Theorem 2, we see at once that the VVI corresponding to a smooth convex (resp., smooth strongly convex) vector optimization problem is monotone (resp., strongly monotone).
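The gradient monotonicity just recalled is easy to probe numerically. The following sampled check (not a proof) uses the hypothetical strongly convex function φ(x) = ‖x‖², whose gradient map F(x) = 2x satisfies ⟨F(y) − F(x), y − x⟩ = 2‖y − x‖² ≥ 0:

```python
import numpy as np

# Sampled check of gradient monotonicity for the (strongly) convex
# function phi(x) = ||x||^2: its gradient map F(x) = 2x satisfies
# <F(y) - F(x), y - x> = 2 ||y - x||^2, so F is strongly monotone.

rng = np.random.default_rng(0)

def grad_phi(x):
    return 2.0 * x  # gradient of ||x||^2

monotone = True
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = float(np.dot(grad_phi(y) - grad_phi(x), y - x))
    # strong monotonicity with modulus 2, up to rounding
    monotone = monotone and lhs >= 2.0 * float(np.dot(y - x, y - x)) - 1e-9
```

Such a random-sample test can of course only refute, never certify, monotonicity; here equality holds identically, so every sample passes.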
As mentioned in Section 1, various results on solution sensitivity and topological properties of the solution sets of strongly monotone VVIs and monotone VVIs with applications to vector optimization problems can be found in [28, 48]. We refer to [16, 50] for several results on the connectedness structure of the solution sets and the solution stability of affine VVIs.
4 Scalarization Formulae
First, we recall some basic facts on the scalarization of vector optimization problems, which can be derived from Theorem 2 with the help of the Separation Theorem [37, Theorem 11.3] and a property of differentiable convex functions. A detailed proof of the next theorem was given in [28].
Theorem 3
(Scalarization of vector optimization problems) Let \(\bar x\in K\). The following assertions are valid:
-
(i)
If \(\bar x\in \text {Sol}^{w}\mathrm {(VP)}\), then there exists ξ ∈ Σ, where Σ is as in Section 2, such that
$$ \left\langle\sum\limits_{l=1}^{m}\xi_{l}\nabla f_{l}(\bar x), y-\bar x\right\rangle\geq 0\quad\; \forall y\in K. $$(12) -
(ii)
If all the components of f are convex on K and there exists ξ ∈ Σ satisfying (12), then \(\bar x\in \text {Sol}^{w}\mathrm {(VP)}\).
-
(iii)
If all the components of f are convex on K and there exists ξ ∈ ri Σ satisfying (12), then \(\bar x\in \text {Sol}\mathrm {(VP)}\).
Consider problem (VP) and put \(F_{l}(x) = \nabla f_{l}(x)\). For each ξ ∈ Σ, we consider the parametric variational inequality (VI) ξ , which now becomes the problem of finding \(\bar x\in K\) such that
$$ \left\langle\sum\limits_{l=1}^{m}\xi_{l}\nabla f_{l}(\bar x), y-\bar x\right\rangle\geq 0\quad\; \forall y\in K. $$(13)
In connection with Theorem 3, the forthcoming definitions have been considered in [17, 18].
Definition 5
If x ∈ K and there is ξ ∈ Σ such that x is a solution of (13), then we call x a stationary point of (VP).
Definition 6
If x ∈ K and there is ξ ∈ riΣ such that x is a solution of (13), then we call x a proper stationary point of (VP).
The stationary point set and the proper stationary point set of (VP) are abbreviated respectively to Stat(VP) and Pr(VP). By Theorem 3, one can find the whole set Stat(VP) by solving the variational inequality (13) and taking the union of the solution sets \(\text{Sol(VI)}_{\xi }\), ξ ∈ Σ. A similar procedure applies to Pr(VP), for which we have to take the union of the solution sets of (13) over ξ ∈ riΣ.
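This union procedure can be traced explicitly on the problem of Example 2 (f₁(x) = x², f₂(x) = (x−1)², K = ℝ): for ξ = (ξ₁, ξ₂) ∈ Σ, the scalarized condition (13) reads 2ξ₁x + 2ξ₂(x − 1) = 0, whose unique solution is x(ξ) = ξ₂. Sweeping ξ over Σ recovers Stat(VP) = [0, 1]; restricting to ri Σ gives Pr(VP) = (0, 1). A minimal sketch:

```python
import numpy as np

# Stat(VP) for f1(x) = x^2, f2(x) = (x-1)^2 on K = R via scalarization:
# solve xi1 * f1'(x) + xi2 * f2'(x) = 0 in closed form for each xi.

def stationary_point(xi1, xi2):
    # 2*xi1*x + 2*xi2*(x - 1) = 0  =>  x = xi2 / (xi1 + xi2)
    return xi2 / (xi1 + xi2)

points = [stationary_point(1.0 - t, t) for t in np.linspace(0.0, 1.0, 11)]
# points sweeps out the interval [0, 1] as xi moves along Sigma
```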
Summing up the above, we have
$$ \mathrm{Pr(VP)}\subset \text{Sol}\mathrm{(VP)}\subset \text{Sol}^{w}\mathrm{(VP)}\subset \text{Stat}\mathrm{(VP)}, $$(14)
where the first inclusion is valid if all the functions \(f_{l}\) are convex. In addition, under the latter convexity assumption, the third inclusion in (14) becomes an equality.
Now, we are in a position to formulate scalarization formulae for VVIs. The detailed proofs can be found in [28] and [30].
Theorem 4
(Scalarization of vector variational inequalities; see [28, Theorem 2.1] and [30, Theorem 2.1]) It holds that
$$ \text{Sol}^{w}\mathrm{(VVI)}=\bigcup\limits_{\xi\in{\Sigma}}\text{Sol(VI)}_{\xi}. $$(15)
If K is a polyhedral convex set, i.e., K is the intersection of finitely many closed half-spaces of \(\mathbb R^{n}\) (the intersection of an empty family of closed half-spaces is set to be \(\mathbb R^{n}\) ), then
$$ \text{Sol}\mathrm{(VVI)}=\bigcup\limits_{\xi\in\text{ri}\,{\Sigma}}\text{Sol(VI)}_{\xi}. $$(16)
The next example was designed [18, Example 2.2] to show that the inclusion Solpr(VVI)⊂Sol(VVI) can be strict even if K is given by a unique strongly convex polynomial inequality. Here, both components of F are constant vector functions that are monotone, but not strongly monotone.
Example 4
Consider problem (VVI) with m = n = 2, \(F_{1}(x)=(1,0)^{T}\) and \(F_{2}(x)=(0,1)^{T}\) for every \(x=(x_{1},x_{2})^{T}\in \mathbb R^{2}\), and
$$ K=\left\{x=(x_{1},x_{2})^{T}\in\mathbb R^{2} \,:\, {x_{1}^{2}}+{x_{2}^{2}}\leq 1\right\}. $$
By (15), x ∈ Solw(VVI) if and only if there exists ξ ∈ Σ such that \(x\in \text{Sol(VI)}_{\xi }\), or, equivalently,
$$ \langle\xi_{1}F_{1}(x)+\xi_{2}F_{2}(x), y-x\rangle\geq 0\quad \forall y\in K. $$
This condition can be written equivalently as
$$ 0\in \xi_{1}F_{1}(x)+\xi_{2}F_{2}(x)+N_{K}(x), $$(17)
where
$$ N_{K}(x):=\left\{z\in\mathbb R^{2} \,:\, \langle z, y-x\rangle\leq 0\ \ \forall y\in K\right\} $$
for x ∈ K and \(N_{K}(x):= \emptyset\) for x∉K denotes the normal cone to K at x. Since \(\xi _{1}F_{1}(x)+\xi _{2}F_{2}(x)=\left (\begin {matrix} \xi _{1}\\ \xi _{2}\end {matrix}\right )\) and \(N_{K}(x)=\{0\}\) for every x ∈ intK, \(\text{Sol(VI)}_{\xi }\cap\text{int}K = \emptyset\). If x is taken from the boundary of K, then \(N_{K}(x)=\{\lambda x \,:\, \lambda\geq 0\}\); hence (17) is satisfied if and only if
$$ \left(\begin{matrix} \xi_{1}\\ \xi_{2}\end{matrix}\right)+\lambda x=0\ \text{ for some }\ \lambda\geq 0. $$
It follows that Solw(VVI)=Γ, where
$$ {\Gamma}:=\left\{x=(x_{1},x_{2})^{T}\in\mathbb R^{2} \,:\, {x_{1}^{2}}+{x_{2}^{2}}=1,\ x_{1}\leq 0,\ x_{2}\leq 0\right\} $$
is a closed circle-arc. As a by-product of the above calculations, we have
$$ \text{Sol(VI)}_{\xi}=\left\{-\|\xi\|^{-1}\xi\right\}\quad \forall \xi\in{\Sigma}. $$
It is easy to show that the end-points \(\bar x:=(-1,0)^{T}\) and \(\hat x:=(0,-1)^{T}\) of Γ belong to Sol(VVI). So we have Sol(VVI)=Solw(VVI)=Γ, while \(\text {Sol}^{pr}\mathrm {(VVI)}={\Gamma }\setminus \{\bar x,\, \hat x\}.\)
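Example 4 can be verified by sampling: for ξ ∈ ri Σ the solution of (VI)_ξ over the unit disk K is x(ξ) = −ξ/‖ξ‖, a point of the arc Γ, since x(ξ) minimizes the linear function ⟨ξ, ·⟩ over K. The sketch below confirms ⟨ξ, y − x(ξ)⟩ ≥ 0 for randomly sampled y ∈ K (sample sizes and the choice of ξ are illustrative):

```python
import numpy as np

# For xi in ri Sigma, x(xi) = -xi / ||xi|| solves (VI)_xi over the unit
# disk K: it minimizes <xi, y> over K, so <xi, y - x(xi)> >= 0 on K.

rng = np.random.default_rng(1)

def vi_solution(xi):
    xi = np.asarray(xi, dtype=float)
    return -xi / np.linalg.norm(xi)

xi = np.array([0.3, 0.7])          # a point of ri Sigma
x = vi_solution(xi)
samples = rng.normal(size=(500, 2))
norms = np.linalg.norm(samples, axis=1, keepdims=True)
samples = samples / np.maximum(1.0, norms)   # push the samples into the disk
min_val = float(np.min(samples @ xi - xi @ x))
# min_val >= 0 up to rounding, so x(xi) satisfies (VI)_xi on the sample
```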
We refer to [28, Section 6] for another interesting example of (VVI) with K being a closed ball in \(\mathbb R^{2}\) and \(F_{1}\), \(F_{2}\) strongly monotone affine operators, where the weak Pareto solution set coincides with the Pareto solution set, which is a closed circle-arc, and the proper Pareto solution set Solpr(VVI) is obtained by eliminating the end-points from that circle-arc.
We conclude this section with an example of a polynomial vector optimization problem, where Pr(VP) is different from Sol(VP).
Example 5
(See [18, Example 4.6]) Consider problem (VP) with m = n = 2, \(f_{1}(x) = x_{1}\) and \(f_{2}(x) = x_{2}\) for every \(x=(x_{1},x_{2})^{T}\in \mathbb R^{2}\), and
$$ K=\left\{x=(x_{1},x_{2})^{T}\in\mathbb R^{2} \,:\, {x_{1}^{2}}+{x_{2}^{2}}\leq 1\right\}. $$
Let Γ, \(\bar x\), and \(\hat x\) be defined as in Example 4. It is not difficult to check that Sol(VP) = Solw(VP) = Stat(VP) = Γ and \(\mathrm {Pr(VP)}={\Gamma }\setminus \{\bar x,\, \hat x\}\).
5 Sets Having Finitely Many Connected Components
To discuss the connectedness structure of the sets Sol(VVI) and Solw(VVI), we will need some definitions from general topology and a lemma from [17]. A detailed proof of the lemma is provided to make our presentation clear.
A topological space X is said to be connected if one cannot represent X = U∪V where U, V are nonempty open sets of X with U∩V = ∅. A nonempty subset A⊂X of a topological space X is said to be a connected component of X if A (equipped with the induced topology) is connected and it is not a proper subset of any connected subset of X.
Lemma 1
(See [17, Lemma 2.2]) Let Ω be a subset of a topological space X with the closure denoted by \(\overline {\Omega }\) . If Ω has k connected components, then any subset A⊂X with the property \({\Omega }\subset A\subset \overline {\Omega }\) can have at most k connected components.
Proof
Suppose that \({\Omega }\subset A\subset \overline {\Omega }\) and Ω has k connected components, denoted by \({\Omega }_{i}\), i = 1, …, k. It is easy to show that \(\overline {\Omega }=\bigcup \limits _{i=1}^{k}\overline {\Omega }_{i},\) where \(\overline {\Omega }_{i}\) stands for the closure of \({\Omega }_{i}\) in the topology of X. On one hand, by the inclusion \(A\subset \bigcup \limits _{i=1}^{k}\overline {\Omega }_{i}\), we have
$$ A=\bigcup\limits_{i=1}^{k}\left(\overline{\Omega}_{i}\cap A\right). $$(19)
On the other hand, since \(\bigcup \limits _{i=1}^{k}{\Omega }_{i}\subset A\), we have
$$ {\Omega}_{i}\subset \overline{\Omega}_{i}\cap A\subset\overline{\Omega}_{i} $$(20)
for i = 1, …, k. Applying a remark after [22, Theorem 20] (see also [31, p. 188]), which says that if \(B\subset C\subset \overline {B}\) and B is connected then C is also connected, from (20) and the connectedness of \({\Omega }_{i}\) we can assert that \(\overline {\Omega }_{i}\cap A\) is connected for all i = 1, …, k. Thus, (19) shows that A can have at most k connected components, which completes the proof. □
One may ask: How many connected components can the efficient solution set (resp., the weakly efficient solution set) of a vector optimization problem have? A similar question arises about the number of connected components of the Pareto solution set (resp., the weak Pareto solution set) of a vector variational inequality. Also, one may be curious to know whether each connected component of the just mentioned solution sets is simple enough (say, it is contractible), or not. Answers to these questions were given in [13, 19], where the authors showed that:
-
(i)
(see [13] and [47, pp. 315–319]) For any positive integer m, there exists a linear fractional vector optimization problem (which is a \(C^{\infty}\)-smooth nonconvex vector optimization problem under linear constraints) whose weakly efficient solution set coincides with the efficient solution set and has exactly m connected components;
-
(ii)
(see [19] and [47, pp. 319–322]) There exists a linear fractional vector optimization problem with the number of criteria m = 3 such that the weakly efficient solution set coincides with the efficient solution set, which is connected by line segments, but not contractible.
Using the complete reduction of any linear fractional vector optimization problem to an anti-symmetric monotone affine VVI suggested in [49] (see also [47, p. 313]), from the just mentioned results we obtain the following ones:
-
(iii)
For any positive integer m, there exists an anti-symmetric monotone affine VVI whose weak Pareto solution set coincides with the Pareto solution set and has exactly m connected components;
-
(iv)
There exists an anti-symmetric monotone affine VVI with the number of criteria m = 3 such that the weak Pareto solution set coincides with the Pareto solution set, which is connected by line segments, but not contractible.
6 Semi-Algebraic Sets
To proceed further with our discussions on VVIs and VPs, we need some theorems on semi-algebraic sets.
Definition 7
(See [2, Definition 2.1.4]) A semi-algebraic subset of \(\mathbb R^{n}\) is a subset of the form
$$ \bigcup\limits_{i=1}^{s}\bigcap\limits_{j=1}^{r_{i}}\left\{x\in\mathbb R^{n} \,:\, f_{i,j}(x)\ast_{i,j}0\right\}, $$
where \(f_{i,j}\in \mathbb R[x_{1},\dots ,x_{n}]\) and \(\ast_{i,j}\) is either < or =, for i = 1, …, s and j = 1, …, \(r_{i}\), with s and \(r_{i}\) being arbitrary natural numbers.
Thus, every semi-algebraic subset of \(\mathbb R^{n}\) can be represented as a finite union of sets of the form:
$$ \left\{x\in\mathbb R^{n} \,:\, f_{1}(x)=0,\dots,f_{\ell}(x)=0,\ g_{1}(x)<0,\dots,g_{m}(x)<0\right\}, $$(21)
where ℓ and m are natural numbers and \(f_{1},\dots,f_{\ell}\), \(g_{1},\dots,g_{m}\) are in \(\mathbb R[x_{1},\dots , x_{n}].\) Since
$$ \left\{x \,:\, g(x)\leq 0\right\}=\left\{x \,:\, g(x)<0\right\}\cup\left\{x \,:\, g(x)=0\right\}, $$
it doesn’t matter if one replaces some of the strict inequalities in (21) by non-strict inequalities.
Open balls, closed balls, spheres, and unions of finitely many of those sets are some typical examples of semi-algebraic subsets in \(\mathbb R^{n}\). Semi-algebraic subsets of \(\mathbb R\) are exactly the finite unions of points and open intervals (bounded or unbounded).
By induction, from [2, Theorem 2.2.1], we can easily derive the following useful result.
Theorem 5
Let S be a semi-algebraic subset of \(\mathbb {R}^{n}\times \mathbb {R}^{m}\) , \({\Phi }:\mathbb {R}^{n}\times \mathbb {R}^{m}\to \mathbb {R}^{n}\) the natural projection on the space of the first n coordinates, i.e.,
\({\Phi }(x)=\left (x_{1},\dots ,x_{n}\right )^{T}\)
for every \(x=\left (x_{1},\dots ,x_{n},x_{n+1},\dots ,x_{n+m}\right )^{T}\in \mathbb R^{n}\times \mathbb R^{m}\) . Then Φ(S) is a semi-algebraic subset of \(\mathbb {R}^{n}\).
Let us recall the concept of semi-algebraically connected subset.
Definition 8
(See [2, Definition 2.4.2]) A semi-algebraic subset S of \(\mathbb R^{n}\) is semi-algebraically connected if for every pair of disjoint semi-algebraic sets F 1 and F 2, which are closed in S and satisfy F 1∪F 2 = S, one has F 1 = S or F 2 = S.
The next fundamental theorem from real algebraic geometry describes clearly the connectedness structure of semi-algebraic sets.
Theorem 6
(See [2, Theorem 2.4.5]) A semi-algebraic subset S of \(\mathbb R^{n}\) is semi-algebraically connected if and only if it is connected. Every semi-algebraic set has a finite number of connected components, which are semi-algebraic.
8 VVIs under Linear Constraints
This section discusses the connectedness structure of the solution sets of vector variational inequalities of the form (VVI) under two blanket assumptions:
-
(a1)
All the components of \(F_{l}\), l = 1, …, m, are polynomial functions in the variables x 1, …, x n , i.e., for every l ∈ {1, …, m} one has \(F_{l}=(F_{l1},\dots ,F_{ln})\) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n;
-
(a2)
K is a nonempty polyhedral convex set, i.e., there exist a positive integer p, a matrix \(A=(a_{ij})\in {\mathbb R}^{p\times n},\) and a vector \(b=(b_{i})\in {\mathbb R}^{p}\) such that \(K=\left \{x\in {\mathbb R}^{n}\, :\, Ax\leq b\right \}\).
The main result of [17] is formulated as follows.
Theorem 7
(See [17, Theorem 3.1]) If the assumptions (a1) and (a2) are satisfied, then
-
(i)
the weak Pareto solution set Sol w (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)), and
-
(ii)
the Pareto solution set Sol(VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).
We now explain the main ideas of the proof of Theorem 7 and refer the readers to [17] for a detailed proof. To every index set α⊂I with I:={1, …, p}, we associate the pseudo-face
\(\mathcal {F}_{\alpha }=\left \{x\in \mathbb R^{n}\, :\, {\sum }_{j=1}^{n}a_{ij}x_{j}=b_{i}\ \text {for}\ i\in \alpha ,\ {\sum }_{j=1}^{n}a_{ij}x_{j}<b_{i}\ \text {for}\ i\in I\setminus \alpha \right \}\)
of K, where \(a_{ij}\) is the element in the i-th row and the j-th column of A, and \(b_{i}\) denotes the i-th component of b. By Theorem 4, we have
\(\text {Sol}^{w}\mathrm {(VVI)}={\bigcup }_{\xi \in {\Sigma }}\,\text {Sol(VI)}_{\xi },\)  (22)
with (VI) ξ denoting the variational inequality
Find \(x\in K\) such that \(\langle F(x,\xi ),y-x\rangle \geq 0\) for all \(y\in K\),
where \(F(x,\xi ):=\sum \limits _{l=1}^{m}\xi _{l}F_{l}(x)\) for every ξ = (ξ 1, …, ξ m )T ∈ Σ. Denote the normal cone to the convex set K at \(x\in {\mathbb R}^{n}\) by \(N_{K}(x)\) and recall that
\(N_{K}(x)=\left \{x^{*}\in \mathbb R^{n}\, :\, \langle x^{*},y-x\rangle \leq 0\ \ \forall y\in K\right \}\)
if x ∈ K, and \(N_{K}(x) = \emptyset \) if x∉K. Using the notations F(x, ξ) and \(N_{K}(x)\), we can rewrite the inclusion x ∈ Sol(VI) ξ equivalently as
\(0\in F(x,\xi )+N_{K}(x).\)  (23)
We have \( K=\underset {\alpha \subset I}{\bigcup }\mathcal {F}_{\alpha }\), \(\mathcal {F}_{\alpha }\cap \mathcal {F}_{\widetilde \alpha }=\emptyset \) if \(\alpha \neq \widetilde \alpha \) and, therefore,
\(\text {Sol}^{w}\mathrm {(VVI)}={\bigcup }_{\alpha \subset I}\left (\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\right ).\)  (24)
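The pseudo-face decomposition can be made concrete on toy data (a sketch of ours; the unit square and the tolerance are hypothetical choices, not from [17]): every point of K lies in exactly one \(\mathcal {F}_{\alpha }\), indexed by its set of active constraints.

```python
import numpy as np

# Hypothetical data: the unit square K = {x : Ax <= b} in R^2.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])

def pseudo_face_index(x, tol=1e-12):
    """Return the unique alpha with x in F_alpha (the active index set)."""
    r = A @ x - b
    assert np.all(r <= tol), "x must belong to K"
    return frozenset(i for i in range(len(b)) if abs(r[i]) <= tol)

assert pseudo_face_index(np.array([0.5, 0.5])) == frozenset()        # interior
assert pseudo_face_index(np.array([0.0, 0.5])) == frozenset({0})     # a facet
assert pseudo_face_index(np.array([0.0, 0.0])) == frozenset({0, 2})  # a vertex
```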
Since a finite union of semi-algebraic subsets of \(\mathbb R^{n}\) is again a semi-algebraic subset of \(\mathbb R^{n}\), by (24), and by Theorem 6, we see that the proof of the assertion (i) will be completed if we can establish the following result.
Claim 1
For every index set α⊂I, the intersection \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic subset of \(\mathbb R^{n}\).
By the Farkas lemma [37, Corollary 22.3.1], we have
\(N_{K}(x)=\text {pos}\{a_{i.}^{T}\, :\, i\in \alpha \}\)  (25)
for every \(x\in \mathcal {F}_{\alpha },\) where a i.:=(a i1, …, a i n ) denotes the i-th row of A and
\(\text {pos}\{z_{i}\, :\, i=1,\dots ,k\}:=\left \{\lambda _{1}z_{1}+\dots +\lambda _{k}z_{k}\, :\, \lambda _{i}\geq 0,\ i=1,\dots ,k\right \}\)
is the convex cone generated by vectors \(z_{i}\in \mathbb R^{n},\; i=1,\dots , k.\) Due to formulas (22), (23), and (25), \(x\in \text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) if and only if there exists ξ ∈ Σ such that
\(-F(x,\xi )\in \text {pos}\{a_{i.}^{T}\, :\, i\in \alpha \}.\)  (26)
Since \(\text {pos}\{a_{i.}^{T}\, :\, i\in \alpha \}\) is a convex polyhedral cone, there exists a matrix \(C_{\alpha }=\left (c_{kj}\right )\in \mathbb R^{n_{\alpha }\times n}\), where \(n_{\alpha }\in \mathbb N\), such that
\(\text {pos}\{a_{i.}^{T}\, :\, i\in \alpha \}=\left \{v\in \mathbb R^{n}\, :\, C_{\alpha }v\leq 0\right \}.\)  (27)
Hence, \(x\in \text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) if and only if there exists ξ ∈ Σ such that
\(C_{\alpha }\left (-F(x,\xi )\right )\leq 0.\)  (28)
The inequality on the right-hand side of (28) can be rewritten as \(C_{\alpha }F(x,\xi )\geq 0,\) which is the following system of \(n_{\alpha }\) polynomial inequalities
\(0\leq \sum \limits _{j=1}^{n}\sum \limits _{l=1}^{m}c_{kj}\xi _{l}F_{lj}(x),\quad k=1,\dots ,n_{\alpha }.\)  (29)
Note that the expression \(\sum \limits _{j=1}^{n}\sum \limits _{l=1}^{m}c_{kj}\xi _{l}F_{lj}(x)\) on the right-hand side of (29) is a polynomial in the variables x 1, …, x n ,ξ 1, …, ξ m . Consider the set
\({\Omega }_{\alpha }:=\left \{(x,\xi )\in \mathbb R^{n}\times \mathbb R^{m}\, :\, x\in \mathcal {F}_{\alpha },\ \xi \in {\Sigma },\ C_{\alpha }F(x,\xi )\geq 0\right \}.\)
By (29), we have
\(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }=\left \{x\in \mathcal {F}_{\alpha }\, :\, (x,\xi )\in {\Omega }_{\alpha }\ \text {for some}\ \xi \in {\Sigma }\right \}.\)
Denote by |α| the number of elements of α and observe that Ω α is determined by |α|+1 polynomial equations, n α + m polynomial inequalities, and p−|α| strict polynomial inequalities of the variables \((x,\xi )=(x_{1},{\dots } x_{n},\xi _{1},\dots ,\xi _{m})\in \mathbb R^{n+m}\). Hence, Ω α is a semi-algebraic set.
From (28), it follows that \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }={\Phi }({\Omega }_{\alpha })\), where \({\Phi }:\mathbb {R}^{n}\times \mathbb {R}^{m}\to \mathbb {R}^{n}\) is the natural projection on the space of the first n coordinates. According to Theorem 5, \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic set. This proves Claim 1.
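As a side remark, membership of a vector in a finitely generated cone \(\text {pos}\{z_{1},\dots ,z_{k}\}\), which drives the argument above, can be tested numerically by nonnegative least squares (an illustration of ours; the helper `in_pos_cone`, the sample vectors, and the tolerance are hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

def in_pos_cone(vectors, z, tol=1e-9):
    """Check whether z lies in pos{vectors}, i.e. z = sum_i lam_i v_i, lam_i >= 0."""
    Z = np.column_stack(vectors)
    lam, residual = nnls(Z, z)   # min ||Z lam - z||_2 subject to lam >= 0
    return residual <= tol

a1, a2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
assert in_pos_cone([a1, a2], np.array([3.0, 1.0]))       # 2*a1 + 1*a2
assert not in_pos_cone([a1, a2], np.array([-1.0, 0.0]))  # outside the cone
```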
We have thus seen that Sol w(VVI) is a semi-algebraic subset of \(\mathbb R^{n}\). Then, thanks to Theorem 6, Solw(VVI) has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\).
The assertion (ii) can be proved similarly.
A topological space X is said to be path connected if, for every x, y in X, there is a continuous mapping γ:[0,1]→X such that γ(0) = x and γ(1) = y. It is well known that any path connected topological space is connected, but the converse is not true. For example, the set
\(M=\left \{\left (x,\sin (1/x)\right )^{T}\, :\, 0<x\leq 1\right \}\cup \left (\{0\}\times [-1,1]\right ),\)
considered with the induced topology, is connected, but not path connected.
Concerning Theorem 7, the following natural question arises: Is it true that each connected component of the weak Pareto solution set Sol w(VVI) (resp., of the Pareto solution set Sol(VVI)) is a path connected set? Mr. Vu Trung Hieu, a young colleague of the author of this survey, has given a positive answer to this question. To present his result, we have to recall two additional notions from real algebraic geometry.
Definition 9
(See [2, Definition 2.2.5]) Let \({\Omega }_{1}\subset \mathbb R^{n_{1}}\) and \({\Omega }_{2}\subset \mathbb R^{n_{2}}\) be two semi-algebraic sets. A mapping Ψ:Ω1→Ω2 is semi-algebraic if its graph is a semi-algebraic subset of \(\mathbb R^{n_{1}+n_{2}}\).
Definition 10
(See [2, Definition 2.5.12]) A semi-algebraic subset \({\Omega }\subset \mathbb R^{n}\) is said to be semi-algebraically path connected if, for every x, y in Ω, there exists a continuous semi-algebraic mapping γ:[0,1]→Ω such that γ(0) = x and γ(1) = y.
Clearly, any semi-algebraically path connected set \({\Omega }\subset \mathbb R^{n}\) is a path connected topological space. To see that the converse is not true in general, one can choose
\({\Omega }=\left \{\left (x,x\sin (1/x)\right )^{T}\, :\, x\in \mathbb R\setminus \{0\}\right \}\cup \left \{(0,0)^{T}\right \}.\)
Since Ω is not a semi-algebraic set, it cannot be semi-algebraically path connected. Note also that, for any ε>0, there does not exist any continuous semi-algebraic curve joining x = (−ε, εsin(1/ε))T with y = (ε, εsin(1/ε))T.
Lemma 2
If \({\Omega }\subset \mathbb R^{n}\) is a semi-algebraic subset, then the following properties are equivalent:
-
(i)
Ω is semi-algebraically path connected;
-
(ii)
Ω is semi-algebraically connected;
-
(iii)
Ω is connected.
Proof
The equivalence between (i) and (ii) is assured by [2, Prop. 2.5.13], while the equivalence between (ii) and (iii) follows from [2, Theorem 2.4.5] (see Theorem 6). □
Proposition 1
(Vu Trung Hieu; private communication) If the assumptions (a1) and (a2) are satisfied, then
-
(i)
each connected component of the weak Pareto solution set Sol w (VVI) is a semi-algebraically path connected subset of \(\mathbb R^{n}\) (so it is path connected), and
-
(ii)
each connected component of the Pareto solution set Sol(VVI) is a semi-algebraically path connected subset of \(\mathbb R^{n}\) (so it is path connected).
Proof
To prove the first assertion, observe that each connected component of Sol w(VVI) is a connected semi-algebraic set by Theorem 7. Hence, according to Lemma 2, it is a semi-algebraically path connected subset of \(\mathbb R^{n}\). The second assertion can be obtained in the same way. □
Remark 1
By Theorem 6 and the arguments of the proof of Proposition 1, one can see that each connected component of a semi-algebraic subset of \(\mathbb R^{n}\) is a semi-algebraically path connected subset. Applying this property to any solution set to be considered in the sequel, which is a semi-algebraic set, we can assert that each of its connected components is a semi-algebraically path connected set (so it is path connected).
It is clear that if \(F_{l}(x) = M_{l}x + q_{l}\), where \(M_{l}\) is an n×n matrix and \(q_{l}\in \mathbb R^{n}\), for l = 1, …, m, then each component \(F_{lj}(x)\) of the functions F l , l = 1, …, m, is a polynomial function in the variables x 1, …, x n . Therefore, Theorem 7 solves in the affirmative Question 1 of [50, p. 66] about the connectedness structure of the solution sets of affine vector variational inequalities, without requiring the positive semidefiniteness of the matrices M l . Moreover, it assures that each connected component of the solution set under consideration is a semi-algebraic subset. Note that, by [50, Theorems 4.1 and 4.2], if the AVVI under consideration is monotone and if the solution set in question is disconnected, then each of its connected components is unbounded. The latter result cannot be obtained by tools of real algebraic geometry.
In [16], by a different approach using fractional functions and some properties of determinants, it has been proved that both the Pareto solution set and the weak Pareto solution set of an AVVI have finitely many connected components, provided that m = 2 and a regularity condition is satisfied. So, Theorem 7 implies the results of [16].
The problem of finding an upper bound for the numbers of connected components of Solw(VVI) and Sol(VVI) requires further investigation. In the case m = 2, an explicit upper bound for the numbers of connected components of Solw(VVI) and Sol(VVI) is given in [16] under a regularity condition. This result gives a partial solution to Question 2 of [50].
Linear fractional vector optimization problems (or LFVOPs) and quadratic vector optimization problems (or QVOPs) are two fundamental classes of vector optimization problems. Both classes contain linear vector optimization problems as an important subclass. By using Theorem 7, one can get some facts about the connectedness structure of the solution sets in LFVOPs. Moreover, a property of the stationary point set of polynomial vector optimization problems under linear constraints can be obtained and applied to convex QVOPs.
We now present some basic information about LFVOPs. More details can be found in [41, 49] and [29, Chap. 8]. Let \(f_{l}:\mathbb R^{n}\to \mathbb R,\ l=1,\dots ,m\) be the linear fractional functions of the form
\(f_{l}(x)=\frac {{a_{l}^{T}}x+\alpha _{l}}{{b_{l}^{T}}x+\beta _{l}},\)
where \(a_{l}\in \mathbb R^{n}, b_{l}\in \mathbb R^{n}, \alpha _{l}\in \mathbb R,\) and \(\beta _{l}\in \mathbb R\). Let \(K\subset \mathbb R^{n}\) satisfy assumption (a2). Suppose that \({b_{l}^{T}}x+\beta _{l}>0\) for all l ∈ {1, …, m} and x ∈ K. Put f(x)=(f 1(x), …, f m (x)),
\({\Omega }_{K}:=\left \{x\in \mathbb R^{n}\, :\, {b_{l}^{T}}x+\beta _{l}>0,\ l=1,\dots ,m\right \},\)
and observe that Ω K is open and convex, K⊂Ω K , and f is continuously differentiable on Ω K . Consider the linear fractional vector optimization problem
\(\mathrm {(VP_{1})}\qquad \min \left \{f(x)\, :\, x\in K\right \}.\)
LFVOPs have been studied intensively for more than three decades; see [5, 6, 13–15, 19, 35, 41, 47, 49, 50] and the references therein.
The efficient solution set and the weakly efficient solution set of (VP 1) are denoted by Sol(VP1) and Solw(VP1), respectively. According to [35], x ∈ Sol(VP1) if and only if there exists ξ = (ξ 1, …, ξ m ) ∈ riΣ such that
\(\Big \langle \sum \limits _{l=1}^{m}\xi _{l}\nabla f_{l}(x),\,y-x\Big \rangle \geq 0\quad \text {for all}\ y\in K.\)  (30)
Similarly, x ∈ Solw(VP1) if and only if there exists ξ = (ξ 1, …, ξ m ) ∈ Σ such that (30) holds.
Condition (30) can be rewritten in the form of a parametric affine variational inequality as follows:
\(\mathrm {(VI)}^{\prime }_{\xi }\qquad \text {Find}\ x\in K\ \text {such that}\ \Big \langle \sum \limits _{l=1}^{m}\xi _{l}\left (M_{l}x+q_{l}\right ),\,y-x\Big \rangle \geq 0\ \text {for all}\ y\in K,\)
with
\(M_{l}=a_{l}{b_{l}^{T}}-b_{l}{a_{l}^{T}},\quad q_{l}=\beta _{l}a_{l}-\alpha _{l}b_{l},\quad l=1,\dots ,m,\)
where one uses the fact that \(\nabla f_{l}(x)=\left ({b_{l}^{T}}x+\beta _{l}\right )^{-2}\left (M_{l}x+q_{l}\right )\) on Ω K and absorbs the positive factors \(\left ({b_{l}^{T}}x+\beta _{l}\right )^{-2}\) into the parameter ξ.
It is well known [49] that (VI)\(^{\prime }_{\xi }\) is an anti-symmetric monotone AVI for every ξ ∈ Σ. Denote by Ψ(ξ) the solution set of (VI)\(^{\prime }_{\xi }\) and consider the multifunction \({\Psi }:{\Sigma }\rightrightarrows \mathbb R^{n},\ \xi \mapsto {\Psi }(\xi )\). According to the above-recalled necessary and sufficient optimality conditions for (VP 1), we have
\(\text {Sol(VP}_{1})={\bigcup }_{\xi \in \text {ri}\,{\Sigma }}{\Psi }(\xi )\)  (31)
and
\(\text {Sol}^{w}\text {(VP}_{1})={\bigcup }_{\xi \in {\Sigma }}{\Psi }(\xi ).\)  (32)
By formulas (31), (32), and Theorem 4, Sol(VP1) (resp., Solw(VP1)) coincides with the Pareto solution set (resp., the weak Pareto solution set) of the monotone AVVI defined by K and F l (x) = M l x + q l , l = 1, …, m. Hence, the next result is a direct consequence of Theorem 7.
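The gradient identity behind this reduction is elementary calculus and can be verified symbolically (a sketch of ours with sympy on toy data): for \(f(x)=({a^{T}}x+\alpha )/({b^{T}}x+\beta )\) one has \(\nabla f(x)=({b^{T}}x+\beta )^{-2}(Mx+q)\) with \(M=ab^{T}-ba^{T}\) anti-symmetric and \(q=\beta a-\alpha b\).

```python
import sympy as sp

xs = sp.Matrix(sp.symbols('x1 x2', real=True))
a, b = sp.Matrix([1, 2]), sp.Matrix([3, 1])      # toy data
alpha, beta = sp.Integer(1), sp.Integer(5)

f = (a.dot(xs) + alpha) / (b.dot(xs) + beta)
grad = sp.Matrix([sp.diff(f, v) for v in xs])

M = a*b.T - b*a.T                 # anti-symmetric matrix of the associated AVI
q = beta*a - alpha*b
claimed = (M*xs + q) / (b.dot(xs) + beta)**2

assert sp.simplify(M + M.T) == sp.zeros(2, 2)         # anti-symmetry
assert sp.simplify(grad - claimed) == sp.zeros(2, 1)  # gradient identity
```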
Theorem 8
(See [17, Theorem 4.5]) It holds that
-
(i)
the Pareto solution set Sol(VP 1 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)), and
-
(ii)
the weak Pareto solution set Sol w (VP 1 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).
We have seen that both the Pareto solution set and the weak Pareto solution set of (VP1) are semi-algebraic subsets and have finitely many connected components. As mentioned in Section 5, Hoa, Phuong, and Yen [13] proved that, for any natural number m≥1, there exists an LFVOP with m objective criteria for which the sets Sol(VP1) and Solw(VP1) coincide and have exactly m connected components. The problem of finding an upper estimate for the number of connected components of Sol(VP1) has been solved in [15] for the case m = 2.
If \(f_{l}:\mathbb R^{n}\to \mathbb R,\ l=1,\dots ,m,\) are polynomial functions and \(K\subset \mathbb R^{n}\) is a polyhedral convex set then (VP), now denoted by (VP 2), is called a polynomial vector optimization problem under linear constraints.
We denote the efficient solution set, the weakly efficient solution set, the stationary point set, and the proper stationary point set of (VP 2), respectively, by Sol(VP2), Solw(VP2), Stat(VP 2), and Pr(VP 2).
Theorem 9
(See [17, Theorem 4.6]) The following assertions hold:
-
(i)
The set Stat(VP 2 ) (resp., the set Pr(VP 2 )) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(ii)
If all the functions f l are convex, then Sol w (VP 2 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(iii)
If all the functions f l are convex, and the set Pr(VP 2) is dense in Sol(VP 2), then Sol(VP 2) has a finite number of connected components.
To see how Lemma 1 works for the class of problems in question, we recall here the proof of the assertion (iii). Since all the functions f l are convex and the set Pr(VP 2) is dense in Sol(VP2), we have
\(\mathrm {Pr(VP_{2})}\subset \text {Sol(VP}_{2})\subset \overline {\mathrm {Pr(VP_{2})}},\)  (33)
where the first inclusion is a special case of the first inclusion in (14), and \(\overline {\mathrm {Pr(VP_{2})}}\) is the closure of Pr(VP 2) in the Euclidean topology of \(\mathbb R^{n}\). By (i), we see that Pr(VP 2) has finitely many connected components. Now, from (33) and Lemma 1, it follows that Sol(VP2) has a finite number of connected components.
If K is a polyhedral convex set and \(f_{l}(x)=\frac {1}{2}x^{T}M_{l}x+{q_{l}^{T}}x,\ l=1,\dots ,m,\) where \(M_{l}\in \mathbb R^{n\times n},\ l=1,\dots , m\), are symmetric matrices, and q l for l = 1, …, m are vectors in \(\mathbb R^{n}\), then (VP) is called a quadratic vector optimization problem (QVOP for short) and is denoted by (VP 3).
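For such data the operators entering the associated AVVI are the gradients \(\nabla f_{l}(x)=M_{l}x+q_{l}\); for a symmetric M this is the standard identity \(\nabla \left (\frac {1}{2}x^{T}Mx+q^{T}x\right )=Mx+q\), checked below on toy data of our choosing:

```python
import sympy as sp

xs = sp.Matrix(sp.symbols('x1 x2', real=True))
M = sp.Matrix([[2, 1], [1, 4]])   # symmetric toy matrix
q = sp.Matrix([1, -1])

f = (sp.Rational(1, 2) * (xs.T * M * xs))[0, 0] + (q.T * xs)[0, 0]
grad = sp.Matrix([sp.diff(f, v) for v in xs])

assert sp.expand(grad - (M*xs + q)) == sp.zeros(2, 1)
```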
Clearly, a quadratic vector optimization problem is a polynomial vector optimization problem. Hence, Theorem 9 implies the next result on the connectedness structure of the stationary point set Stat(VP 3) and of the weakly efficient solution set Solw(VP3).
Corollary 1
(See [17, Corollary 4.7]) The following properties hold:
-
(i)
The set Stat(VP 3 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(ii)
If all the matrices \(M_{l}\in \mathbb R^{n\times n},\ l=1,\dots , m\) , are positive semidefinite, then Sol w (VP 3 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).
We have seen that the stationary point set of a QVOP and the weakly efficient solution set of a convex QVOP have finitely many connected components. Note that, by [50, Theorem 4.1], if the weakly efficient solution set of a convex QVOP is disconnected, then each of its connected components is unbounded. Nevertheless, up to now we are unaware of any convex QVOP with a disconnected weakly efficient solution set or a disconnected efficient solution set. We finish this section with two open questions.
Question 1
Is the weak Pareto solution set (resp., the Pareto solution set) of a symmetric monotone affine VVI always connected?
Question 2
Is it true that the weak Pareto solution set (resp., the Pareto solution set) of any anti-symmetric monotone affine VVI with m criteria has at most m connected components?
9 VVIs under Polynomial Constraints
Throughout this section, we let
\(K=\left \{x\in \mathbb R^{n}\, :\, g_{i}(x)\leq 0,\ i=1,\dots ,p;\ h_{j}(x)=0,\ j=1,\dots ,s\right \},\)  (34)
where g i ,h j belong to \(\mathbb R[x_{1},\dots , x_{n}]\) for all i = 1, …, p, j = 1, …, s, and assume that K is convex.
Remark 2
If the functions g i are convex and the functions h j are affine, i.e., \(h_{j}(x)=\langle a_{j},x\rangle +b_{j},\ a_{j}\in \mathbb R^{n},\ b_{j}\in \mathbb R\) for all j = 1, …, s, then the set K given by (34) is convex.
In order to have an explicit formula for computing the normal cone to K at every point x ∈ K, one has to impose a regularity condition on the functions appearing in (34).
Definition 11
(See [33, p. 44]) One says that the Mangasarian-Fromovitz Constraint Qualification (the MFCQ for brevity) is satisfied at a point x ∈ K if
-
(i)
the gradient vectors {∇h j (x) : j = 1, …, s} are linearly independent;
-
(ii)
there exists \(v\in \mathbb R^{n}\) with 〈∇h j (x), v〉=0 for all j = 1, …, s, and 〈∇g i (x), v〉<0, for all i ∈ I(x), where I(x):={i : g i (x)=0}.
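Condition (ii) is a linear feasibility problem in v: since its solution set is a cone, the strict inequalities 〈∇g i (x), v〉<0 may be replaced by 〈∇g i (x), v〉≤−1. A numerical sketch (our hypothetical helper `mfcq_holds`, using scipy's linear programming solver, not part of the original text):

```python
import numpy as np
from scipy.optimize import linprog

def mfcq_holds(grad_g_active, grad_h):
    """MFCQ at a point: rows of grad_g_active / grad_h are the gradients of
    the active inequality / the equality constraints there."""
    n = grad_g_active.shape[1]
    # (i) linear independence of the equality-constraint gradients:
    if grad_h.shape[0] and np.linalg.matrix_rank(grad_h) < grad_h.shape[0]:
        return False
    # (ii) find v with <grad h_j, v> = 0 and <grad g_i, v> <= -1:
    res = linprog(c=np.zeros(n),
                  A_ub=grad_g_active, b_ub=-np.ones(grad_g_active.shape[0]),
                  A_eq=grad_h if grad_h.shape[0] else None,
                  b_eq=np.zeros(grad_h.shape[0]) if grad_h.shape[0] else None,
                  bounds=[(None, None)] * n)
    return res.status == 0

# K = {x in R^2 : x1^2 + x2^2 - 1 <= 0}: at (1, 0) the active gradient is
# (2, 0) and v = (-1, 0) certifies the MFCQ.
assert mfcq_holds(np.array([[2.0, 0.0]]), np.zeros((0, 2)))
```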
We will discuss the connectedness structure of the solution sets of vector variational inequalities of the form (VVI) under two assumptions:
-
(a1)
All the components of F l , l = 1, …, m, are polynomial functions in the variables x 1, …, x n ; i.e., for each l one has F l =(F l1, …, F l n ) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n;
-
(a2)
The MFCQ is satisfied at every point of K.
The main result of [18] can be formulated as follows.
Theorem 10
(See [18, Theorem 3.3]) If (a1) and (a2) are fulfilled, then it holds that
-
(i)
The weak Pareto solution set Sol w (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(ii)
The proper Pareto solution set Sol pr (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(iii)
If the set Sol pr (VVI) is dense in Sol(VVI), then Sol(VVI) has a finite number of connected components.
Let us outline the main arguments for proving this theorem. Put I = {1, …, p} and J = {1, …, s}. To every subset α⊂I (the case α = ∅ is not excluded) one associates the (possibly curved) pseudo-face of K:
\(\mathcal {F}_{\alpha }=\left \{x\in K\, :\, g_{i}(x)=0\ \text {for}\ i\in \alpha ,\ g_{i}(x)<0\ \text {for}\ i\in I\setminus \alpha \right \}.\)
Using the notation I(x) given in Definition 11, we have I(x) = α for all \(x\in \mathcal {F}_{\alpha }\).
By (15), we have
\(\text {Sol}^{w}\mathrm {(VVI)}={\bigcup }_{\xi \in {\Sigma }}\,\text {Sol(VI)}_{\xi },\)  (35)
with (VI) ξ denoting the variational inequality
Find \(x\in K\) such that \(\langle F(x,\xi ),y-x\rangle \geq 0\) for all \(y\in K\),
where \(F(x,\xi ):={\sum }_{l=1}^{m}\xi _{l}F_{l}(x)\) for every ξ = (ξ 1, …, ξ m )T ∈ Σ. Using the notation F(x, ξ) and the definition of normal cone in (18), we can rewrite the inclusion x ∈ Sol(VI) ξ equivalently as
\(0\in F(x,\xi )+N_{K}(x).\)  (36)
We have \( K=\underset {\alpha \subset I}{\bigcup }\mathcal {F}_{\alpha }\), \(\mathcal {F}_{\alpha }\cap \mathcal {F}_{\widetilde \alpha }=\emptyset \) if \(\alpha \neq \widetilde \alpha \) and, therefore,
\(\text {Sol}^{w}\mathrm {(VVI)}={\bigcup }_{\alpha \subset I}\left (\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\right ).\)  (37)
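A one-dimensional toy instance (ours, not from [18]) makes the scalarization formula concrete: for K = [0, 1] and \(F_{l}(x)=x-c_{l}\), each (VI) ξ has the unique solution \(\text {proj}_{K}\left ({\sum }_{l}\xi _{l}c_{l}\right )\), so sweeping ξ over Σ traces out Sol w(VVI) as a single interval.

```python
import numpy as np

c = np.array([0.3, 1.7])                  # F_l(x) = x - c_l on K = [0, 1]
proj_K = lambda t: min(max(t, 0.0), 1.0)  # Euclidean projection onto K

# Sol(VI)_xi = proj_K(xi_1 c_1 + xi_2 c_2) since F(x, xi) = x - sum xi_l c_l:
sols = [proj_K(t*c[0] + (1 - t)*c[1]) for t in np.linspace(0.0, 1.0, 101)]

# The union over xi in Sigma sweeps out the single interval [0.3, 1.0]:
assert abs(min(sols) - 0.3) < 1e-12 and abs(max(sols) - 1.0) < 1e-12
```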
Since a finite union of semi-algebraic subsets of \(\mathbb R^{n}\) is again a semi-algebraic subset of \(\mathbb R^{n}\), by (37) and by Theorem 6, we see that the proof of the assertion (i) will be completed if we can establish
Claim 2
For every index set α⊂I, the intersection \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic subset of \(\mathbb R^{n}\).
By virtue of the assumption (a2), the result formulated in [1, Remark on p. 151] gives us an explicit formula for computing the Clarke tangent cone [7, p. 51] to K at every point x ∈ K:
\(T_{K}(x)=\left \{v\in \mathbb R^{n}\, :\, \langle \nabla g_{i}(x),v\rangle \leq 0\ \text {for}\ i\in I(x),\ \langle \nabla h_{j}(x),v\rangle =0\ \text {for}\ j\in J\right \}.\)
Since K is convex, the normal cone N K (x) defined in (18) coincides ([7, Proposition 2.4.4]) with the Clarke normal cone [7, p. 51] to K at x, which is the negative dual cone of the Clarke tangent cone T K (x). So,
\(N_{K}(x)=\left \{x^{*}\in \mathbb R^{n}\, :\, \langle x^{*},v\rangle \leq 0\ \ \forall v\in T_{K}(x)\right \}.\)
This means that a vector \(x^{*}\in \mathbb R^{n}\) belongs to N K (x) if and only if the inequality 〈x ∗,v〉≤0 is a consequence of the system
\(\langle \nabla g_{i}(x),v\rangle \leq 0\ \text {for}\ i\in I(x),\quad \langle \nabla h_{j}(x),v\rangle =0\ \text {for}\ j\in J.\)
Hence, by Farkas’ Lemma [37, Corollary 22.3.1], x ∗ ∈ N K (x) if and only if there exist λ i ≥0, i ∈ I(x), \(\mu _{j}\in \mathbb R\), j ∈ J, such that
\(x^{*}={\sum }_{i\in I(x)}\lambda _{i}\nabla g_{i}(x)+{\sum }_{j\in J}\mu _{j}\nabla h_{j}(x).\)
It follows that
\(N_{K}(x)=\left \{{\sum }_{i\in \alpha }\lambda _{i}\nabla g_{i}(x)+{\sum }_{j\in J}\mu _{j}\nabla h_{j}(x)\, :\, \lambda _{i}\geq 0\ \text {for}\ i\in \alpha ,\ \mu _{j}\in \mathbb R\ \text {for}\ j\in J\right \}\)  (38)
for every α⊂I and for every \(x\in \mathcal {F}_{\alpha }\).
According to the formulas (35), (36), and (38), \(x\in \text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) if and only if there exist ξ ∈ Σ, λ i ≥0 for i ∈ α, and \(\mu _{j}\in \mathbb R\) for j ∈ J such that
\(F(x,\xi )+{\sum }_{i\in \alpha }\lambda _{i}\nabla g_{i}(x)+{\sum }_{j\in J}\mu _{j}\nabla h_{j}(x)=0.\)  (39)
The equality in (39) is equivalent to the forthcoming system of polynomial equalities in the variables x 1, …, x n , ξ 1, …, ξ m ,λ i , i ∈ α, μ j , j ∈ J:
\(\sum \limits _{l=1}^{m}\xi _{l}F_{lk}(x)+\sum \limits _{i\in \alpha }\lambda _{i}\frac {\partial g_{i}}{\partial x_{k}}(x)+\sum \limits _{j\in J}\mu _{j}\frac {\partial h_{j}}{\partial x_{k}}(x)=0,\quad k=1,\dots ,n.\)  (40)
Denote the cardinality of α by |α| and put
\({\Omega }_{\alpha }:=\left \{(x,\xi ,\lambda ,\mu )\in \mathbb R^{n+m+|\alpha |+s}\, :\, x\in \mathcal {F}_{\alpha },\ \xi \in {\Sigma },\ \lambda _{i}\geq 0\ \text {for}\ i\in \alpha ,\ \text {and (40) holds}\right \}.\)
By (40), we have
\(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }=\left \{x\, :\, (x,\xi ,\lambda ,\mu )\in {\Omega }_{\alpha }\ \text {for some}\ (\xi ,\lambda ,\mu )\right \}.\)
Since Ω α is determined by |α| + n+1 polynomial equations, m+|α| polynomial inequalities, and p−|α| strict polynomial inequalities of the variables \((x,\xi ,\mu )=(x_{1},\dots , x_{n},\xi _{1},\dots ,\xi _{m},\mu _{1},\dots ,\mu _{s})\in \mathbb R^{n+m+s}\) and λ i , i ∈ α, it is a semi-algebraic set.
From (39), we get \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }={\Phi }({\Omega }_{\alpha })\), where \({\Phi }:\mathbb R^{n+m+|\alpha |+s}\to \mathbb {R}^{n}\) is the natural projection on the space of the first n coordinates. According to Theorem 5, \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic set. This proves Claim 2.
We have thus shown that the weak Pareto solution set Sol w(VVI) is a semi-algebraic subset of \(\mathbb R^{n}\). Then, according to Theorem 6, Solw(VVI) has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\).
The assertion (ii) can be proved in the same manner.
For (iii), one can observe by the density of Solpr(VVI) in Sol(VVI) that
\(\text {Sol}^{pr}\mathrm {(VVI)}\subset \text {Sol}\mathrm {(VVI)}\subset \overline {\text {Sol}^{pr}\mathrm {(VVI)}},\)  (41)
where \(\overline {\text {Sol}^{pr}\mathrm {(VVI)}}\) is the closure of Solpr(VVI) in the Euclidean topology of \(\mathbb R^{n}\). Since Solpr(VVI) has finitely many connected components, from (41) and Lemma 1, it follows that Sol(VVI) has a finite number of connected components.
Note that, in the problem discussed in Example 4, the assumption “the set Solpr(VVI) is dense in Sol(VVI)” in Theorem 10 is satisfied.
Let \(f_{l}: \mathbb R^{n}\to \mathbb R,\ \, l=1,\dots ,m,\) be polynomial functions and let \(K\subset \mathbb R^{n}\) be given as in (34), where \(g_{i}, h_{j}\in \mathbb R[x_{1},\dots , x_{n}]\) for all i = 1, …, p, j = 1, …, s. In this setting, (VP) is called a polynomial vector optimization problem under polynomial constraints and is denoted by (pVP). The Pareto solution set, the weak Pareto solution set, the stationary point set, and the proper stationary point set of (pVP) are abbreviated respectively to Sol(pVP), Solw(pVP), Stat(pVP), and Pr(pVP).
Theorem 11
(See [18, Theorem 4.5]) If K is convex and the assumption (a2) is fulfilled, then the following assertions are valid:
-
(i)
The set Stat(pVP) (resp., the set Pr(pVP)) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(ii)
If all the functions f l are convex, then Sol w (pVP) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));
-
(iii)
If all the functions f l are convex and the set Pr(pVP) is dense in Sol(pVP), then Sol(pVP) has a finite number of connected components.
It is clear that the assumption “the set Pr(pVP) is dense in Sol(pVP)” in Theorem 11 is satisfied for the problem described in Example 5.
References
Aubin, J.-P., Frankowska, H.: Set-valued Analysis. Reprint of the 1990 edition. Birkhäuser, Boston (2009)
Bochnak, J., Coste, M., Roy, M.-F.: Real Algebraic Geometry. Springer, Berlin (1998)
Chen, G.-Y., Huang, X.X., Yang, X.Q.: Vector Optimization. Set-Valued and Variational Analysis. Springer (2005)
Chen, G.Y., Yang, X.Q.: The complementarity problems and their equivalence with the weak minimal element in ordered spaces. J. Math. Anal. Appl. 153, 136–158 (1990)
Choo, E.U., Atkins, D.R.: Bicriteria linear fractional programming. J. Optim. Theory Appl. 36, 203–220 (1982)
Choo, E.U., Atkins, D.R.: Connectedness in multiple linear fractional programming. Manag. Sci. 29, 250–255 (1983)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Reprint of the 1983 edition. SIAM, Philadelphia (1990)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vols. I and II. Springer, New York (2003)
Giannessi, F.: Theorems of alternative, quadratic programs and complementarity problems. In: Cottle, R.W., Giannessi, F., Lions, J.-L (eds.) Variational Inequality and Complementarity Problems, pp 151–186. Wiley, New York (1980)
Giannessi, F. (ed.): Vector Variational Inequalities and Vector Equilibria. Kluwer Academic Publishers, Dordrecht (2000)
Giannessi, F., Mastroeni, G., Pellegrini, L.: On the theory of vector optimization and variational inequalities. Image space analysis and separation. In: Giannessi, F. (ed.) Vector Variational Inequalities and Vector Equilibria, pp 153–215. Kluwer Academic Publishers, Dordrecht (2000)
Hiriart-Urruty, J.-B.: Mathematical faits divers [a note written for “Journees Fermat 85”; translated from French by Ponstein, J.]
Hoa, T.N., Phuong, T.D., Yen, N.D.: Linear fractional vector optimization problems with many components in the solution sets. J. Industr. Manag. Optim. 1, 477–486 (2005)
Hoa, T.N., Phuong, T.D., Yen, N.D.: On the parametric affine variational inequality approach to linear fractional vector optimization problems. Vietnam J. Math. 33, 477–489 (2005)
Hoa, T.N., Huy, N.Q., Phuong, T.D., Yen, N.D.: Unbounded components in the solution sets of strictly quasiconcave vector maximization problems. J. Glob. Optim. 37, 1–10 (2007)
Huong, N.T.T., Hoa, T.N., Phuong, T.D., Yen, N.D.: A property of bicriteria affine vector variational inequalities. Appl. Anal. 91, 1867–1879 (2012)
Huong, N.T.T., Yao, J.-C., Yen, N.D.: Connectedness structure of the solution sets of vector variational inequalities. Preprint, submitted (2015)
Huong, N.T.T., Yao, J.-C., Yen, N.D.: Polynomial vector variational inequalities under polynomial constraints and applications. Preprint, accepted for publication in SIAM Journal on Optimization (2016)
Huy, N.Q., Yen, N.D.: Remarks on a conjecture of J. Benoist. Nonlinear Anal. Forum 9, 109–117 (2004)
Huy, N.Q., Yen, N.D.: Minimax variational inequalities. Acta Math. Vietnam. 36, 265–281 (2011)
Jahn, J.: Vector Optimization. Theory, Applications, and Extensions, 2nd edn. Springer (2011)
Kelley, J.L.: General Topology. D. Van Nostrand, New York (1955)
Khanh, P.D.: Solution methods for pseudomonotone variational inequalities. Ph.D. thesis. Institute of Mathematics, VAST, Hanoi (2015)
Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58, 341–350 (2014)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic, New York (1980)
Khobotov, E.N.: A modification of the extragradient method for solving variational inequalities and some optimization problems (in Russian). Zh. Vychisl. Mat. i Mat. Fiz. 27, 1462–1473 (1987)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems (in Russian). Ekonom. i Mat. Metody 12, 747–756 (1977). English translation in Matekon 13, 35–49
Lee, G.M., Kim, D.S., Lee, B.S., Yen, N.D.: Vector variational inequalities as a tool for studying vector optimization problems. Nonlinear Anal. 34, 745–765 (1998)
Lee, G.M., Tam, N.N., Yen, N.D.: Quadratic Programming and Affine Variational Inequalities: A Qualitative Study. Series: Nonconvex Optimization and Its Applications, Vol. 78. Springer, New York (2005)
Lee, G.M., Yen, N.D.: A result on vector variational inequalities with polyhedral constraint sets. J. Optim. Theory Appl. 109, 193–197 (2001)
Lipschutz, S.: Theory and Problems of General Topology. Schaum Publishing Company, New York (1965)
Luc, D.T.: Theory of Vector Optimization. Springer (1989)
Mangasarian, O.L., Fromovitz, S.: The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. J. Math. Anal. Appl. 17, 37–47 (1967)
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Revue Française Automatique Informatique et Recherche Opérationnelle 4, 154–159 (1970)
Malivert, C.: Multicriteria fractional programming. In: Sofonea, M., Corvellec, J.N. (eds.) Proceedings of the 2nd Catalan days on applied mathematics, pp 189–198. Presses Universitaires de Perpignan (1995)
Robinson, S.M.: Generalized equations and their solutions, Part I: basic theory. Math. Program. Study 10, 128–141 (1979)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Song, W.: Generalized vector variational inequalities. In: Giannessi, F. (ed.) Vector Variational Inequalities and Vector Equilibria, pp 381–401. Kluwer Academic Publishers, Dordrecht (2000)
Song, W.: Vector equilibrium problems with set-valued mappings. In: Giannessi, F (ed.) Vector Variational Inequalities and Vector Equilibria, pp 403–422. Kluwer Academic Publishers, Dordrecht (2000)
Steuer, R.E.: Multiple Criteria Optimization: Theory, Computation and Application. Wiley, New York (1986)
Tam, N.N., Yao, J.-C., Yen, N.D.: Solution methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 138, 253–273 (2008)
Thanh Hao, N.: Tikhonov regularization algorithm for pseudomonotone variational inequalities. Acta Math. Vietnam. 31, 283–289 (2006)
Tseng, P.: On linear convergence of iterative methods for the variational inequality problem. J. Comput. Appl. Math. 60, 237–252 (1995)
Vasilev, F.P.: Numerical Methods for Solving Extremal Problems (in Russian), 2nd edn. Nauka, Moscow (1988)
Vial, J.-P.: Strong and weak convexity of sets and functions. Math. Oper. Res. 8, 231–259 (1983)
Yen, N.D.: Linear fractional and convex quadratic vector optimization problems. In: Ansari, Q.H., Yao, J.-C. (eds.) Recent Developments in Vector Optimization, pp 297–328. Springer (2012)
Yen, N.D., Lee, G.M.: On monotone and strongly monotone vector variational inequalities. In: Giannessi, F (ed.) Vector Variational Inequalities and Vector Equilibria, pp 467–478. Kluwer Academic Publishers, Dordrecht (2000)
Yen, N.D., Phuong, T.D.: Connectedness and stability of the solution set in linear fractional vector optimization problems. In: Giannessi, F (ed.) Vector Variational Inequalities and Vector Equilibria, pp 479–489. Kluwer Academic Publishers, Dordrecht (2000)
Yen, N.D., Yao, J.-C.: Monotone affine vector variational inequalities. Optimization 60, 53–68 (2011)
Acknowledgments
The author was supported by Project VAST.HTQT.PHAP.02/14-15. He thanks Prof. Jen-Chih Yao and Dr. Nguyen Thi Thu Huong for the successful research collaboration, and Mr. Vu Xuan Truong for useful discussions on connectedness structure of semi-algebraic sets. Proposition 1 in this paper is due to Mr. Vu Trung Hieu.
Dedicated to Professor Franco Giannessi on the occasion of his 80th birthday
Yen, N.D. An Introduction to Vector Variational Inequalities and Some New Results. Acta Math Vietnam 41, 505–529 (2016). https://doi.org/10.1007/s40306-015-0168-2
Keywords
- Vector variational inequality
- Vector optimization problem
- Fermat’s rules
- Solution set
- Scalarization
- Semi-algebraic set
- Connectedness structure