Contents

  • 1   Introduction

  • 2   The Fermat Rule and Vector Variational Inequalities

  • 3   Some Classes of VVIs

  • 4   Scalarization Formulae

  • 5   Sets Having Finitely Many Connected Components

  • 6   Semi-Algebraic Sets

  • 7   VVIs under Linear Constraints

  • 8   VVIs under Polynomial Constraints

  • References

1 Introduction

The notion of vector variational inequality (VVI for short) was introduced by F. Giannessi [9]. As the most important vector equilibrium model, it has received a lot of attention from researchers; see, e.g., [3, Chapter 3], [10, 16–18, 28, 50]. Among the large collection of papers in [10], we would like to draw readers’ special attention to the article [11], where different solution concepts were discussed and separation techniques were applied to the study of VVIs.

It is well known (see, e.g., [28]) that VVI is both a unified model and an effective tool to address various questions (solution existence, optimality conditions, structure of the solution sets, solution stability, solution sensitivity, solution methods, etc.) about vector optimization problems (VPs for short). As far as we understand, the importance of VVIs for vector optimization is the same as that of variational inequalities for scalar optimization.

Solution sensitivity and topological properties of the solution sets of strongly monotone VVIs and monotone VVIs with applications to vector optimization problems have been considered in [28, 48]. Connectedness structure of the solution sets and solution stability of affine VVIs have been investigated, for instance, in [16, 50].

This survey paper is aimed at discussing some new results on vector variational inequalities. We hope that it can serve as an elementary introduction to vector variational inequalities and vector optimization problems. The focus is on the results of [17, 18] about the connectedness structure of the solution sets, which are obtained by a scalarization method and properties of semi-algebraic sets. In [17], it was proved that both the Pareto solution set and the weak Pareto solution set of a VVI, where the constraint set is polyhedral convex and the basic operators are given by polynomial functions, have finitely many connected components. In [18], it was shown that both the proper Pareto solution set and the weak Pareto solution set of a VVI, where the convex constraint set is given by polynomial functions and all the components of the basic operators are polynomial functions, have finitely many connected components, provided that the Mangasarian-Fromovitz Constraint Qualification [33, p. 44] is satisfied at every point of the constraint set. It was also established in [18] that if the proper Pareto solution set is dense in the Pareto solution set, then the latter has finitely many connected components.

Although the finiteness of the number of connected components of the solution sets of a VVI is a remarkable fact, our knowledge about VVIs can be enlarged by other facts, such as the unboundedness of the connected components when the solution set in question is disconnected, and the stability of the solution set when the VVI problem undergoes perturbations. Up to now, the latter properties have been obtained only for monotone affine VVIs. The interested reader is referred to the research paper [50] and also to the survey [47] for a series of related results and proofs, which are based not on the properties of semi-algebraic sets [2], but on Robinson’s stability theorem for monotone generalized equations [36, Theorem 2].

Since vector variational inequalities can serve as a tool for studying vector optimization problems, consequences of the above-mentioned results for the latter can be discussed in detail. In fact, throughout this paper, we will consider VPs in parallel with VVIs.

To make the paper easy to read, we will use the finite-dimensional setting and keep our writing as elementary as possible. Also, for the ordering cone, we will use the most standard one, that is, the nonnegative orthant of a Euclidean space.

Observe that infinite-dimensional settings (see, e.g., [39, 40, 48]) and general ordering cones (see, e.g., [11]) have been used widely in the study of VVIs.

In what follows, we denote the scalar product of two vectors x, y in a Euclidean space by 〈x, y〉. The nonnegative orthant in \(\mathbb R^{n}\) is denoted by \(\mathbb R^{n}_{+}\). The interior and the relative interior (see [37, p. 44] for the definition) of a convex set \(D\subset \mathbb R^{n}\) are denoted respectively by int D and ri D. For a matrix M, the symbol \(M^{T}\) indicates the transpose of M.

2 The Fermat Rule and Vector Variational Inequalities

About the fundamental role of Fermat’s Rule in the history of mathematics, Hiriart-Urruty [12, p. 34] wrote: “In 1629, (that is thirteen years before the birth of Newton), Fermat conceived his method “De maximis et minimis” that could be applied to the determination of values yielding the maximum or the minimum of a function, as well as tangents of curves, which boiled down to the foundations of differential calculus...”.

Let us recall the two modern forms of Fermat’s Rule. The first one addresses scalar optimization problems under geometrical constraints. The second one applies to vector optimization problems under geometrical constraints.

From now on, if not otherwise stated, let \(K\subset \mathbb R^{n}\) be a non-empty closed convex set and \({\Omega }_{K}\subset \mathbb R^{n}\) an open set containing K. Given a continuously differentiable function \(f:{\Omega }_{K}\to \mathbb R\), one considers the optimization problem

$$ \min\left\{f(x)\;:\;x\in K\right\}. $$
(1)

The necessary optimality condition in the next theorem is a generalization of the classical Fermat Rule which dealt with unconstrained scalar optimization problems.

Theorem 1

(Fermat’s Rule) The following assertions are valid:

  1. (i)

    For a point \(\bar x\in K\) to be a local solution of (1), it is necessary that

    $$ \langle\nabla f(\bar x), y-\bar x\rangle\notin -\mathbb R_{+}\setminus\{0\}\quad \forall y\in K, $$
    (2)

    where ∇f(x) stands for the gradient vector of f at x. Condition (2) can be rewritten as

    $$ \langle\nabla f(\bar x), y-\bar x\rangle\geq 0\quad\forall y\in K. $$
    (3)
  2. (ii)

    If f is convex on K, i.e., the inequality f((1−t)x+ty)≤(1−t)f(x)+tf(y) holds for any x, y ∈ K, and t ∈ (0, 1), then (3) is sufficient for a point \(\bar x\in K\) to be a global solution of (1).

Assertion (i) is a special case of the first assertion of Theorem 2 below. Assertion (ii) can be easily obtained. Indeed, suppose that (3) is fulfilled, f is convex on K, and y ∈ K is chosen arbitrarily. By the convexity of f on K, we have \(f((1-t)\bar x+ty)\leq (1-t)f(\bar x)+tf(y)\) for any t ∈ (0,1). Hence, \(t^{-1}[f(\bar x+t(y-\bar x))-f(\bar x)]\leq f(y)-f(\bar x)\) for all t ∈ (0,1). Letting t→0+ yields \(f^{\prime }(\bar x;y-\bar x)\leq f(y)-f(\bar x)\), where \(f^{\prime }(\bar x;y-\bar x)\) is the directional derivative of f at \(\bar x\) in the direction \(y-\bar x\). As \(\langle \nabla f(\bar x), y-\bar x\rangle =f^{\prime }(\bar x;y-\bar x)\), from (3), it follows that \(0\leq f(y)-f(\bar x)\). The fulfillment of this inequality for all y ∈ K shows that \(\bar x\in K\) is a global solution of (1).

Setting F(x)=∇f(x) for all x ∈ K, one can transform the necessary optimality condition (3) to the next variational inequality (VI in brief) in the classical sense [25]: Find \(\bar x\in K\) such that

$$ \langle F(\bar x), y-\bar x\rangle\geq 0\quad\forall y\in K. $$
(4)

The solution set of this problem is abbreviated to Sol(VI). In general, the variational inequality (4) can be defined for any vector function \(F:K\to \mathbb R^{n}\). Concerning the case F(x)=∇f(x), it is worth stressing that if \(f:{\Omega }_{K}\to \mathbb R\) is a twice continuously differentiable function, then the Hessian \(\nabla ^{2}f(x)=\left (\frac {\partial ^{2}f(x)}{\partial x_{j}\partial x_{i}}\right )\) is symmetric for all x ∈ K by Clairaut’s Theorem; hence

$$ \frac{\partial F_{i}(x)}{\partial x_{j}}=\frac{\partial F_{j}(x)}{\partial x_{i}}\quad\;\forall x\in K,\ \forall i, j\in J $$
(5)

with J:={1, …, n}, \(F_{i}(x)\) being the i-th component of F(x), and \(\frac {\partial F_{i}(x)}{\partial x_{j}}\) denoting the partial derivative of \(F_{i}\) at x with respect to \(x_{j}\). If condition (5) is satisfied, then one says that (4) is a symmetric VI. Thus, \(C^{2}\)-smooth optimization problems correspond to symmetric VIs.
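As a quick illustration (not taken from the cited sources), the symmetry condition (5) can be checked numerically for F = ∇f via finite differences; the test function f below is an arbitrary smooth choice:

```python
import math


def F(x):
    # F = grad f for f(x) = x1^2 * x2 + sin(x1), a C-infinity test function
    return (2 * x[0] * x[1] + math.cos(x[0]), x[0] ** 2)


def dF(i, j, x, h=1e-6):
    # central-difference approximation of the partial derivative dF_i/dx_j
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (F(xp)[i] - F(xm)[i]) / (2 * h)


x = (0.3, -0.7)
# by (5), the off-diagonal Jacobian entries must agree (here both equal 2*x1)
sym_gap = abs(dF(0, 1, x) - dF(1, 0, x))
```

For this f, both mixed partials equal 2x₁ = 0.6 at the chosen point, so `sym_gap` vanishes up to rounding.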

If \(\bar x\in K\) satisfies (3), then we call it a stationary point of (1). In the above notation, the stationary point set of (1) coincides with Sol(VI), provided that F(x)=∇f(x) for all x ∈ K.

The transformation of (1) into (4), which is often easier to solve, is a fundamental idea in optimization theory. To solve (4), one can apply the projection method, the Tikhonov regularization method, the proximal point method, the extragradient method, modifications and improvements of these methods, and so on; see [8, 23, 24, 26, 27, 34, 38, 42–44] and the references therein.
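To give a flavor of the projection method mentioned above, here is a minimal sketch for the strongly monotone operator F(x) = x − b on the nonnegative orthant; the iteration x ← P_K(x − λF(x)) is a contraction for a suitable step size λ. The operator, the set K, and the step size are illustrative assumptions, not tied to any specific reference:

```python
def project_K(x):
    # Euclidean projection onto K = R^2_+ (the nonnegative orthant)
    return [max(v, 0.0) for v in x]


def F(x, b=(1.0, -2.0)):
    # strongly monotone test operator F(x) = x - b
    return [xi - bi for xi, bi in zip(x, b)]


def projection_method(x0, lam=0.5, iters=80):
    # fixed-point iteration x_{k+1} = P_K(x_k - lam * F(x_k))
    x = list(x0)
    for _ in range(iters):
        x = project_K([xi - lam * fi for xi, fi in zip(x, F(x))])
    return x


x_star = projection_method([0.0, 0.0])
```

For this F, the unique solution of (4) is the projection of b onto K, so the iterates converge to (1, 0).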

It is worth stressing that not only optimization problems but also minimax problems can be effectively studied (see, e.g., [20]) by using VIs.

Next, let us consider a general differentiable vector optimization problem and recall several solution concepts in vector optimization. More details can be found in [21, Chapter 4] and [32, Chapter 2]. Suppose now that \(f=(f_{1},\dots ,f_{m}):{\Omega }_{K}\to \mathbb R^{m}\) is a continuously differentiable function defined on \({\Omega }_{K}\). The vector optimization problem with the constraint set K and the vector objective function f is written formally as follows:

$$ \text{(VP)}\qquad\qquad \text{Minimize}\ \; f(x)\quad \mathrm{subject\ to}\quad x\in K. $$

Definition 1

A point x ∈ K is said to be an efficient solution (or a Pareto solution) of (VP) if \(\left (f(K)-f(x)\right )\cap \left (-\mathbb R^{m}_{+}\setminus \{0\}\right )=\emptyset \).

In other words, x ∈ K is an efficient solution of (VP) if and only if one cannot find any feasible point y such that \(f_{i}(y)\leq f_{i}(x)\) for all i = 1, …, m and there is at least one index \(i_{0}\in \{1,\dots,m\}\) such that \(f_{i_{0}}(y)<f_{i_{0}}(x)\).

Definition 2

One says that x ∈ K is a weakly efficient solution (or a weak Pareto solution) of (VP) if \(\left (f(K)-f(x)\right )\cap \left (-\text {int}\,\mathbb R^{m}_{+}\right )=\emptyset \).

Clearly, x ∈ K is a weakly efficient solution of (VP) if and only if one cannot find any feasible point y such that \(f_{i}(y)<f_{i}(x)\) for all i = 1, …, m. We will denote the efficient solution set and the weakly efficient solution set of (VP) respectively by Sol(VP) and \(\text{Sol}^{w}\)(VP).

Example 1

Let n = m = 2, f(x) = x for all \(x\in \mathbb R^{2}\), and

$$K=\left\{x=(x_{1},x_{2})^{T}\in\mathbb R^{2}\; :\; x_{1}\geq 0,\ x_{2}\geq 0,\ x_{1}+x_{2}\geq 1\right\}.$$

It is easy to verify by the definitions that \(\mathrm{Sol(VP)}=\{x \;:\; x_{1}+x_{2}=1,\ 0\leq x_{1}\leq 1\}\) and

$$\text{Sol}^{w}\mathrm{(VP)}= \left\{x\; :\; x_{1}+x_{2}=1, 0\leq x_{1}\leq 1\right\}\cup\left( [1,+\infty)\times \{0\}\right)\cup \left( \{0\}\times [1,+\infty)\right).$$
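The solution sets in Example 1 can be confirmed by brute force on a discretization of K; the grid below (coordinates in tenths, truncated at 3) is an illustrative assumption:

```python
# grid model of K = {x >= 0 : x1 + x2 >= 1}; coordinates are in tenths
K = [(i, j) for i in range(31) for j in range(31) if i + j >= 10]


def dominates(y, x):
    # f = identity here, so y dominates x iff y <= x componentwise and y != x
    return y[0] <= x[0] and y[1] <= x[1] and y != x


def strictly_dominates(y, x):
    return y[0] < x[0] and y[1] < x[1]


def efficient(x):
    return not any(dominates(y, x) for y in K)


def weakly_efficient(x):
    return not any(strictly_dominates(y, x) for y in K)
```

In this scaled model, (5, 5) (i.e., (0.5, 0.5)) is efficient, (20, 0) (i.e., (2, 0)) is weakly efficient but not efficient, and (10, 10) is not even weakly efficient, in agreement with the sets displayed above.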

We are going to formulate the Fermat Rule for vector optimization problems. For the convenience of the reader, the “sufficiency” part is also included and a proof [28, p. 749] is provided.

Theorem 2

(Fermat’s Rule for vector optimization problems) The following assertions are valid:

  1. (i)

    For a point \(\bar x\in K\) to be a weakly efficient solution of (VP), it is necessary that

    $$ \nabla f(\bar x)(y-\bar x)\notin - \text{int}\,\mathbb R_{+}^{m}\quad\; \forall y\in K, $$
    (6)

where \(\nabla f(x)(u):=(\langle\nabla f_{1}(x),u\rangle, \dots, \langle\nabla f_{m}(x),u\rangle)^{T}\) stands for the value of the derivative mapping \(\nabla f(x):\mathbb R^{n}\to \mathbb R^{m}\) of f at x at an element \(u\in \mathbb R^{n}\). Condition (6) can be rewritten as

    $$ \nabla f(\bar x)(y-\bar x)\nleq_{\text{int}\,\mathbb R^{m}_{+}}0\quad\; \forall y\in K, $$
    (7)

    where the inequality \(v\nleq _{\text {int}\,\mathbb R^{m}_{+}}0\) for \(v\in \mathbb R^{m}\) means that \(-v\notin \text {int}\,\mathbb R^{m}_{+}\).

  2. (ii)

    If f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) , i.e., the inequality \((1-t)f(x)+tf(y)\in f((1-t)x+ty)+\mathbb R^{m}_{+}\) holds for any x, y ∈ K and t ∈ (0, 1), then (7) is sufficient for a point \(\bar x\in K\) to be a weakly efficient solution of (VP).

Proof

(i) On the contrary, suppose that \(\bar x\in K\) is a weakly efficient solution of (VP), but (6) fails to hold. Then there exists \(\hat x\in K\) with \(\langle \nabla f_{i}(\bar x),\hat x-\bar x\rangle <0\) for i = 1, …, m. Setting \(x_{t}=\bar x+t(\hat x-\bar x)\), we have

$$f_{i}(x_{t})-f_{i}(\bar x)=t\langle \nabla f_{i}(\bar x),\hat x-\bar x\rangle+o(t)=t\left[\langle \nabla f_{i}(\bar x),\hat x-\bar x\rangle+\frac{o(t)}{t}\right]$$

for t ∈ (0,1). Hence, we can find δ ∈ (0,1) such that \(f_{i}(x_{t})-f_{i}(\bar x)<0\) for all t ∈ (0,δ) and for all i = 1, …, m. This is impossible because \(x_{t}\in K\) for every t ∈ (0,δ) and \(\bar x\) is a weakly efficient solution of (VP).

(ii) It is clear that f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) if and only if, for every i ∈ {1, …, m}, the function f i is convex on K. To obtain a contradiction, suppose that (7) is fulfilled, but there exists \(\hat x\in K\) such that \(f_{i}(\hat x)<f_{i}(\bar x)\) for all i = 1, …, m. Then, invoking the convexity of the functions f i on K, we have

$$0>f_{i}(\hat x)-f_{i}(\bar x)\geq \langle \nabla f_{i}(\bar x),\hat x-\bar x\rangle\quad \forall i\in \{1,\dots,m\}.$$

From this, we obtain \(\nabla f(\bar x)(\hat x-\bar x)\in - \text {int}\,\mathbb R_{+}^{m},\) which contradicts (7).

The proof is complete. □

It is reasonable to expect that a rule similar to that one in the first assertion of Theorem 2 is valid for efficient solutions of (VP). However, the claim “If \(\bar x\in K\) is an efficient solution of (VP), then

$$ \nabla f(\bar x)(y-\bar x)\notin - \mathbb R_{+}^{m}\setminus\{0\} $$
(8)

for every y ∈ K” is wrong! To justify this remark, let us consider an example of an unconstrained strongly convex quadratic vector optimization problem proposed by one of the two referees of [50].

Example 2

Let n = 1, m = 2, \(K=\mathbb R\), and \(f(x)=(x^{2},(x-1)^{2})\) for every \(x\in \mathbb R\). It is easy to show that Sol(VP)=[0,1]. For the efficient point \(\bar x=0\), one has

$$\{\nabla f(\bar x)(y-\bar x)\;:\; y\in\mathbb R\}=\{0\}\times \mathbb R;$$

hence condition (8) cannot be fulfilled for every y ∈ K. (The same effect is attained if one chooses \(\bar x=1\).)
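A direct numerical check of Example 2 (the sampling grid is an illustrative assumption): every point of [0, 1] is efficient, points outside are not, and condition (8) indeed fails at \(\bar x=0\) for the direction y = 1, since ∇f(0)(y − 0) = (0, −2) lies in \(-\mathbb R^{2}_{+}\setminus\{0\}\):

```python
# sample grid on [-2, 3] with step 0.01
xs = [(-200 + k) / 100 for k in range(501)]


def f(x):
    return (x * x, (x - 1) * (x - 1))


def efficient(x):
    # x is efficient iff no grid point dominates it in the Pareto sense
    fx = f(x)
    return not any(
        f(y)[0] <= fx[0] and f(y)[1] <= fx[1] and f(y) != fx for y in xs
    )


# value of the derivative map at xbar = 0 in the direction y - xbar = 1:
v = (2 * 0.0 * 1.0, 2 * (0.0 - 1.0) * 1.0)  # = (0, -2)
```

Both components of v are nonpositive and one is negative, which is exactly membership in \(-\mathbb R^{2}_{+}\setminus\{0\}\).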

Meanwhile, if f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\) then, for any \(\bar x\in K\), the property

$$ \nabla f(\bar x)(y-\bar x)\notin - \mathbb R_{+}^{m}\setminus\{0\}\quad \forall y\in K $$
(9)

is strong enough to guarantee that \(\bar x\) is an efficient solution of (VP). The proof of this claim, which can be done similarly to that of the second assertion of Theorem 2, is left to the reader.

The above consideration of optimality conditions for vector optimization problems provides strong motivation for the introduction of the concept of vector variational inequality of [9] in the form we are going to recall.

Given m vector-valued functions \(F_{i}: K\to \mathbb R^{n}\), i = 1, …, m, we put \(F = (F_{1}, \dots, F_{m})\) and

$$F(x)(u)=\left( \langle F_{1}(x),u\rangle, \dots, \langle F_{m}(x),u\rangle\right)^{T}\ \; \forall x\in K,\ \forall u\in\mathbb R^{n}.$$

Consider the set

$${\Sigma}=\left\{\xi=(\xi_{1},\dots,\xi_{m})^{T}\in\mathbb R^{m}_{+}\,:\, \sum\limits_{l=1}^{m}\xi_{l}=1\right\},$$

whose relative interior is given by riΣ={ξ ∈ Σ : \(\xi_{l}>0\) for all l = 1, …, m}.

The vector variational inequality [9, p. 167] defined by F, K, and the cone \(C:=\mathbb R^{m}_{+}\) is the problem:

$$\mathrm{(VVI)}\qquad \ \, \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, F(x)(y-x)\nleq_{C\setminus\{0\}}0\quad \forall y\in K,$$

where the inequality \(v\nleq _{C\setminus \{0\}}0\) for \(v\in \mathbb R^{m}\) means that −vC∖{0}. To this problem, one associates [4] the following one:

$$\mathrm{(VVI)}^{w}\qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, F(x)(y-x)\nleq_{\text{int}C}0\quad \forall y\in K,$$

where int C denotes the interior of C and the inequality \(v\nleq _{\text {int}C}0\) indicates that −v∉int C. The solution sets of (VVI) and (VVI)\(^{w}\) are abbreviated respectively to Sol(VVI) and \(\text{Sol}^{w}\)(VVI). The elements of the first set (resp., of the second one) are said to be the Pareto solutions (resp., the weak Pareto solutions) of (VVI). For a better understanding of the adjective “weak” either in “weak Pareto solution” or in “weak Pareto problem”, we would like to quote a remark of Professor F. Giannessi [11, p. 155]: “the term “weak” is misleading and in contrast with its use in other branches of Mathematics; “relaxed” would be a more appropriate term.”

For m = 1, one has \(F=F_{1}:K\to \mathbb R^{n}\) and \(C=\mathbb R_{+}\); hence, both problems (VVI) and (VVI)\(^{w}\) coincide with the classical variational inequality problem

$$\mathrm{(VI)} \qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, \langle F(x),y-x\rangle\geq 0\quad \forall y\in K$$

(see (4)), whose solution set has been denoted by Sol(VI).

For each ξ ∈ Σ, consider the variational inequality

$$\mathrm{(VI)_{\xi}} \qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, \left\langle \sum\limits_{l=1}^{m}\xi_{l}F_{l}(x),y-x\right\rangle\geq 0\quad \forall y\in K,$$

and denote its solution set by \(\mathrm{Sol(VI)}_{\xi}\). The following definition was suggested in [18].

Definition 3

If x ∈ K and there exists ξ ∈ riΣ such that \(x \in \mathrm{Sol(VI)}_{\xi}\), then x is said to be a proper Pareto solution of (VVI).

The proper Pareto solution set of (VVI) is abbreviated to \(\text{Sol}^{pr}\)(VVI). Clearly, by taking the union of \(\mathrm{Sol(VI)}_{\xi}\) over ξ ∈ riΣ, we obtain the whole set \(\text{Sol}^{pr}\)(VVI).

Following [37, p. 170], a polyhedral convex set in \(\mathbb R^{n}\) is a set which can be expressed as the intersection of some finite collection of closed half-spaces, i.e., as the set of solutions to some finite system of inequalities of the form

$$\langle c_{i},x\rangle\leq \gamma_{i},\quad\; i=1,\dots, k,$$

with \(c_{i}\in \mathbb R^{n}\) and \(\gamma _{i}\in \mathbb R\) for all i = 1, …, k. It is clear that any polyhedral convex set is a closed convex set, but the converse is not true.

Later, we will see that \(\text{Sol}^{pr}\mathrm{(VVI)}\subset\mathrm{Sol(VVI)}\), and that the inclusion becomes an equality if K is a polyhedral convex set; otherwise, it may be strict.

To have an idea about computing the above solution sets of (VVI), let us consider an example.

Example 3

(cf. Example 1) Let n = m = 2, \(F_{1}(x)=(1,0)^{T}\), \(F_{2}(x)=(0,1)^{T}\) for all \(x\in \mathbb R^{2}\), and

$$K=\left\{x=(x_{1},x_{2})^{T}\in\mathbb R^{2}\; :\; x_{1}\geq 0,\ x_{2}\geq 0,\ x_{1}+x_{2}\geq 1\right\}.$$

Since F(x)(y−x) = y−x and \(\left \langle {\sum }_{l=1}^{2}\xi _{l}F_{l}(x),y-x\right \rangle =\langle \xi ,y-x\rangle \) for all x, y ∈ K and ξ ∈ Σ, one can easily verify that

$$\text{Sol}^{pr}\mathrm{(VVI)}=\mathrm{Sol(VVI)}=\{x\; :\; x_{1}+x_{2}=1,\ 0\leq x_{1}\leq 1\}$$

and

$$\text{Sol}^{w}\mathrm{(VVI)}= \left\{x\; :\; x_{1}+x_{2}=1, 0\leq x_{1}\leq 1\right\}\cup\left( [1,+\infty)\times \{0\}\right)\cup \left( \{0\}\times [1,+\infty)\right).$$
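Because the operators in Example 3 are constant, \(x \in \mathrm{Sol(VI)}_{\xi}\) exactly when x minimizes the linear function 〈ξ, ·〉 over K, so the union of the sets \(\mathrm{Sol(VI)}_{\xi}\) over ξ ∈ riΣ can be computed by brute force on a grid (integer coordinates in tenths are an illustrative assumption):

```python
# grid model of K (coordinates in tenths); weights are integer pairs (k, 10-k)
K = [(i, j) for i in range(31) for j in range(31) if i + j >= 10]


def sol_vi(xi):
    # minimizers of <xi, x> over the grid = solutions of (VI)_xi for constant F
    vals = [xi[0] * i + xi[1] * j for (i, j) in K]
    m = min(vals)
    return {p for p, v in zip(K, vals) if v == m}


# union over weights in the relative interior of Sigma (scaled by 10)
proper = set().union(*[sol_vi((k, 10 - k)) for k in range(1, 10)])
```

Every point found lies on the segment x₁ + x₂ = 1, in agreement with \(\text{Sol}^{pr}\mathrm{(VVI)}=\mathrm{Sol(VVI)}\) computed in Example 3.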

We say that (VVI) is a symmetric vector variational inequality if for each l ∈ {1, …, m} the vector function \(F_{l}\) satisfies (5) with F replaced by \(F_{l}\), i.e.,

$$ \frac{\partial F_{li}(x)}{\partial x_{j}}=\frac{\partial F_{lj}(x)}{\partial x_{i}}\quad\;\forall x\in K,\ \forall i, j\in J, $$
(10)

where J = {1, …, n} and \(F_{li}(x)\) is the i-th component of \(F_{l}(x)\). Similarly, we say that (VVI) is an anti-symmetric vector variational inequality if the following property holds for each l ∈ {1, …, m}:

$$ \frac{\partial F_{li}(x)}{\partial x_{j}}=-\frac{\partial F_{lj}(x)}{\partial x_{i}}\quad\;\forall x\in K,\ \forall i, j\in J. $$
(11)

As noted in [47, 50], quadratic vector optimization problems lead to symmetric VVIs.

Comparing the Fermat Rule in Theorem 2 with the definition of VVI, we can assert that the necessary optimality condition for weakly efficient solutions of (VP) is a VVI, and if f is convex on K with respect to the cone \(\mathbb R^{m}_{+}\), then \(\text{Sol}^{w}\mathrm{(VVI)}=\text{Sol}^{w}\mathrm{(VP)}\), provided that \(F_{l}\) is taken as \(\nabla f_{l}\) for each l ∈ {1, …, m}. In addition, if f is \(C^{2}\)-smooth, then the problem (VVI) corresponding to (VP) is a symmetric VVI.

As shown in [49] (see also [47, 50]), by using the special necessary and sufficient optimality conditions (not in the form of Fermat’s Rule!) for efficient solutions and weakly efficient solutions of a linear fractional vector optimization problem [5, 35] (see also [29, Chap. 8]), one can treat such a problem as an anti-symmetric VVI.

3 Some Classes of VVIs

Convex (resp., strongly convex, quadratic, and polynomial) optimization problems (of both scalar and vector types) form four important special classes of problems in optimization theory. Likewise, monotone (resp., strongly monotone, affine, and polynomial) VVIs deserve special attention. The first three classes were highlighted in [28, pp. 750–751]. The fourth one has been studied quite recently, in [17, 18]. By \(\mathbb R[x_{1},\dots , x_{n}]\), we denote the ring of polynomials in the variables \(x_{1}, \dots, x_{n}\) with coefficients from \(\mathbb R\).

Definition 4

(i) One says that (VVI) is a monotone vector variational inequality if for each l ∈ {1, …, m} the problem \(\mathrm{VI}(F_{l},K)\) is a monotone VI. The latter means that \(\langle F_{l}(y)-F_{l}(x), y-x\rangle\geq 0\) for all x, y ∈ K.

(ii) If for each l ∈ {1, …, m} the problem \(\mathrm{VI}(F_{l},K)\) is a strongly monotone VI, i.e., there exists \(\rho_{l}>0\) such that \(\langle F_{l}(y)-F_{l}(x), y-x\rangle\geq \rho_{l}\|y-x\|^{2}\) for all x, y ∈ K, then (VVI) is called a strongly monotone vector variational inequality.

(iii) Problem (VVI) is said to be an affine vector variational inequality if K is a polyhedral convex set and there exist matrices \(M_{l}\in \mathbb R^{n\times n}\) and vectors \(q_{l}\in \mathbb R^{n}\) (l = 1, …, m) such that \(F_{l}(x) = M_{l}x + q_{l}\) for all l = 1, …, m and x ∈ K.

(iv) One calls (VVI) a polynomial vector variational inequality under polynomial constraints if all the components of \(F_{l}\), l = 1, …, m, are polynomial functions in the variables \(x_{1}, \dots, x_{n}\) (i.e., for each l one has \(F_{l}=(F_{l1}, \dots, F_{ln})\) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n), and K is the solution set of a finite system of inequalities and equations given by polynomial functions. If the latter system is given by affine functions, then (VVI) is called a polynomial vector variational inequality under linear constraints.

Going back to scalar functions, we now recall the notion of a strongly convex function. A function \(\varphi :{\Omega }_{K}\to \mathbb R\) is said to be strongly convex on K if there exists a constant ρ>0 such that the inequality

$$\varphi((1-t)x+ty)\leq (1-t)\varphi(x)+t\varphi(y)-\rho t(1-t)\|x-y\|^{2}$$

holds for any x, y ∈ K and t ∈ (0,1). According to [45, Lemma 1, p. 184], the last condition is fulfilled if and only if the function

$$ \widetilde{\varphi}(x):=\varphi(x)-\rho\|x\|^{2}$$

is convex on \({\Omega }_{K}\). By a result in [46, Prop. 4.10] (see also [45]), a continuously differentiable function \(\varphi :{\Omega }_{K}\to \mathbb R\) is strongly convex on K with a constant ρ>0 if and only if

$$\langle \nabla\varphi(y)-\nabla\varphi(x),y-x\rangle\geq 2\rho \|y-x\|^{2}\quad\; \forall x,y\in K.$$

It is also well known [46, Prop. 4.10] (see also [45]) that a continuously differentiable function \(\varphi :{\Omega }_{K}\to \mathbb R\) is convex on K if and only if

$$\langle \nabla\varphi(y)-\nabla\varphi(x),y-x\rangle\geq 0\quad\; \forall x,y\in K.$$

Combining these facts with Theorem 2, we see at once that the VVI corresponding to a smooth convex (resp., smooth strongly convex) vector optimization problem is monotone (resp., strongly monotone).
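These characterizations are easy to test numerically. For the illustrative choice φ(x) = x₁² + 2x₂² + exp(x₁), which is strongly convex with ρ = 1 (since φ(x) − ‖x‖² = x₂² + exp(x₁) is convex), the gradient inequality above can be sampled on a grid:

```python
import math


def grad_phi(x):
    # gradient of phi(x) = x1^2 + 2*x2^2 + exp(x1)
    return (2 * x[0] + math.exp(x[0]), 4 * x[1])


rho = 1.0
# sample points of a square grid with step 0.5 on [-2, 2]^2
pts = [(a / 2, b / 2) for a in range(-4, 5) for b in range(-4, 5)]

# check <grad_phi(y) - grad_phi(x), y - x> >= 2*rho*||y - x||^2 on all pairs
strongly_monotone = all(
    (grad_phi(y)[0] - grad_phi(x)[0]) * (y[0] - x[0])
    + (grad_phi(y)[1] - grad_phi(x)[1]) * (y[1] - x[1])
    >= 2 * rho * ((y[0] - x[0]) ** 2 + (y[1] - x[1]) ** 2) - 1e-9
    for x in pts
    for y in pts
)
```

The check succeeds because the exp term contributes a nonnegative amount (exp is monotone) on top of the quadratic part, which already yields the 2ρ‖y − x‖² bound.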

As mentioned in Section 1, various results on solution sensitivity and topological properties of the solution sets of strongly monotone VVIs and monotone VVIs, with applications to vector optimization problems, can be found in [28, 48]. We refer to [16, 50] for several results on the connectedness structure of the solution sets and the solution stability of affine VVIs.

4 Scalarization Formulae

First, we recall some basic facts on the scalarization of vector optimization problems, which can be derived from Theorem 2 with the help of the Separation Theorem [37, Theorem 11.3] and a property of differentiable convex functions. A detailed proof of the next theorem was given in [28].

Theorem 3

(Scalarization of vector optimization problems) Let \(\bar x\in K\). The following assertions are valid:

  1. (i)

    If \(\bar x\in \text {Sol}^{w}\mathrm {(VP)}\), then there exists ξ ∈ Σ, where Σ is as in Section 2, such that

    $$ \left\langle\sum\limits_{l=1}^{m}\xi_{l}\nabla f_{l}(\bar x), y-\bar x\right\rangle\geq 0\quad\; \forall y\in K. $$
    (12)
  2. (ii)

    If all the components of f are convex on K and there exists ξ ∈ Σ satisfying (12), then \(\bar x\in \text {Sol}^{w}\mathrm {(VP)}\).

  3. (iii)

    If all the components of f are convex on K and there exists ξ ∈ ri Σ satisfying (12), then \(\bar x\in \text {Sol}\mathrm {(VP)}\).

Consider problem (VP) and put \(F_{l}(x) = \nabla f_{l}(x)\). For each ξ ∈ Σ, we consider the parametric variational inequality \(\mathrm{(VI)}_{\xi}\), which now becomes

$$ \qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that} \ \left\langle\sum\limits_{l=1}^{m}\xi_{l}\nabla f_{l}(x),y- x\right\rangle\geq 0\quad \forall y\in K. $$
(13)

In connection with Theorem 3, the forthcoming definitions have been considered in [17, 18].

Definition 5

If x ∈ K and there is ξ ∈ Σ such that x is a solution of (13), then we call x a stationary point of (VP).

Definition 6

If x ∈ K and there is ξ ∈ riΣ such that x is a solution of (13), then we call x a proper stationary point of (VP).

The stationary point set and the proper stationary point set of (VP) are abbreviated respectively to Stat(VP) and Pr(VP). By Theorem 3, one can find the whole set Stat(VP) by solving the variational inequality (13) and taking the union of the solution sets \(\mathrm{Sol(VI)}_{\xi}\), ξ ∈ Σ. A similar procedure applies to Pr(VP), for which we have to take the union of the solution sets of (13) over ξ ∈ riΣ.

Summing up the above, we have

$$ \bigcup_{\xi\in \text{ri}{\Sigma}}\mathrm{Sol(VI)}_{\xi}=\mathrm{Pr(VP)}\subset\mathrm{Sol(VP})\subset \text{Sol}^{w}\mathrm{(VP})\subset \mathrm{Stat(VP})=\bigcup_{\xi\in{\Sigma}}\mathrm{Sol(VI)}_{\xi}, $$
(14)

where the first inclusion is valid if all the functions \(f_{l}\) are convex. In addition, under the latter assumption, the third inclusion in (14) becomes an equality.
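For the problem of Example 2 (f(x) = (x², (x−1)²), K = ℝ), the chain (14) can be worked out in closed form: the unconstrained problem (13) reduces to the equation 2ξ₁x + 2ξ₂(x−1) = 0, so \(\mathrm{Sol(VI)}_{\xi}=\{\xi_{2}\}\) for ξ ∈ Σ, whence Pr(VP) = (0, 1) and Stat(VP) = [0, 1] = Sol(VP). A sketch of this computation (the sampling step for Σ is an illustrative choice):

```python
def sol_vi_xi(xi1, xi2):
    # unconstrained problem (13): 2*xi1*x + 2*xi2*(x - 1) = 0
    # (stationarity of the weighted sum xi1*x^2 + xi2*(x-1)^2 on K = R)
    return xi2 / (xi1 + xi2)


# Sigma sampled with step 0.1; its relative interior excludes the end weights
stationary = {sol_vi_xi(k / 10, 1 - k / 10) for k in range(11)}
proper = {sol_vi_xi(k / 10, 1 - k / 10) for k in range(1, 10)}
```

The end weights (1, 0) and (0, 1) produce the end-points 0 and 1 of the efficient interval, while interior weights stay strictly inside (0, 1), mirroring the first inclusion in (14).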

Now, we are in a position to formulate scalarization formulae for VVIs. The detailed proofs can be found in [28] and [30].

Theorem 4

(Scalarization of vector variational inequalities; see [28, Theorem 2.1] and [30, Theorem 2.1]) It holds that

$$\bigcup_{\xi\in\text{ri}{\Sigma}}\mathrm{Sol(VI)}_{\xi}=\text{Sol}^{pr}\mathrm{(VVI)}\subset {\mathrm{Sol(VVI)}}\subset \text{Sol}^{w}\mathrm{(VVI)}=\bigcup_{\xi\in{\Sigma}}{\mathrm{Sol(VI)}}_{\xi}. $$
(15)

If K is a polyhedral convex set, i.e., K is the intersection of finitely many closed half-spaces of \(\mathbb R^{n}\) (the intersection of an empty family of closed half-spaces is set to be \(\mathbb R^{n}\) ), then

$$\bigcup_{\xi\in\text{ri}{\Sigma}}\mathrm{Sol(VI)}_{\xi}=\text{Sol}^{pr}\mathrm{(VVI)}=\mathrm{Sol(VVI)}. $$
(16)

The next example was designed in [18, Example 2.2] to show that the inclusion \(\text{Sol}^{pr}\mathrm{(VVI)}\subset\mathrm{Sol(VVI)}\) can be strict even if K is given by a unique strongly convex polynomial inequality. Here, both components of F are constant vector functions that are monotone, but not strongly monotone.

Example 4

Consider problem (VVI) with m = n = 2, \(F_{1}(x)=(1,0)^{T}\) and \(F_{2}(x)=(0,1)^{T}\) for every \(x=(x_{1},x_{2})^{T}\in \mathbb R^{2}\), and

$$K=\left\{x=(x_{1}, x_{2})^{T}\in\mathbb R^{2} \;:\; {x_{1}^{2}}+{x_{2}^{2}}-1\leq 0\right\}.$$

By (15), \(x \in \text{Sol}^{w}\mathrm{(VVI)}\) if and only if there exists ξ ∈ Σ such that \(x \in \mathrm{Sol(VI)}_{\xi}\), i.e.,

$$\left\langle\xi_{1}F_{1}(x)+\xi_{2}F_{2}(x),y-x\right\rangle\geq 0\quad \forall y\in K.$$

This condition can be written equivalently as

$$ 0\in \xi_{1}F_{1}(x)+\xi_{2}F_{2}(x)+N_{K}(x), $$
(17)

where

$$ N_{K}(x):=\{x^{*}\in\mathbb R^{2}\;:\; \langle x^{*},y-x\rangle\leq 0\ \, \forall y\in K\} $$
(18)

for x ∈ K, and \(N_{K}(x):=\emptyset\) for x ∉ K, denotes the normal cone to K at x. Since \(\xi _{1}F_{1}(x)+\xi _{2}F_{2}(x)=\left (\begin {matrix} \xi _{1}\\ \xi _{2}\end {matrix}\right )\) and \(N_{K}(x)=\{0\}\) for every x ∈ int K, we have \(\mathrm{Sol(VI)}_{\xi }\cap \text{int}\,K = \emptyset\). If x is taken from the boundary of K, then \(N_{K}(x)=\{\lambda x \;:\; \lambda \geq 0\}\); hence (17) is satisfied if and only if

$$x_{1}=-\frac{\xi_{1}}{\sqrt{(\xi_{1})^{2}+(\xi_{2})^{2}}},\quad x_{2}=-\frac{\xi_{2}}{\sqrt{(\xi_{1})^{2}+(\xi_{2})^{2}}}.$$

It follows that \(\text{Sol}^{w}\mathrm{(VVI)}={\Gamma}\), where

$${\Gamma}:=\left\{x\in\mathbb R^{2} \;:\; -1\leq x_{1}\leq 0,\ \, x_{2}=-\sqrt{1-{x_{1}^{2}}}\right\}$$

is a closed circle-arc. As a by-product of the above calculations, we have

$$\text{Sol}^{pr}\mathrm{(VVI)}=\left\{x\in\mathbb R^{2} \;:\; -1<x_{1}<0,\ \, x_{2}=-\sqrt{1-{x_{1}^{2}}}\right\}.$$

It is easy to show that the end-points \(\bar x:=(-1,0)^{T}\) and \(\hat x:=(0,-1)^{T}\) of Γ belong to Sol(VVI). So we have \(\mathrm{Sol(VVI)}=\text{Sol}^{w}\mathrm{(VVI)}={\Gamma}\), while \(\text {Sol}^{pr}\mathrm {(VVI)}={\Gamma }\setminus \{\bar x,\, \hat x\}.\)
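The computation in Example 4 can be replayed numerically: for constant operators on the unit disk, the unique solution of \(\mathrm{(VI)}_{\xi}\) is x = −ξ/‖ξ‖ (the minimizer of 〈ξ, ·〉 over the disk), so sweeping ξ over Σ traces the arc Γ, while ξ ∈ riΣ gives the open arc of proper Pareto solutions. The sampling step below is an illustrative choice:

```python
import math


def sol_vi_xi(xi1, xi2):
    # unique solution of (VI)_xi on the unit disk for F1=(1,0)^T, F2=(0,1)^T:
    # the minimizer of <xi, y> over the disk, i.e. x = -xi / ||xi||
    n = math.hypot(xi1, xi2)
    return (-xi1 / n, -xi2 / n)


# interior weights -> points of the open arc (proper Pareto solutions)
arc = [sol_vi_xi(k / 10, 1 - k / 10) for k in range(1, 10)]
on_circle = all(abs(x1 ** 2 + x2 ** 2 - 1) < 1e-12 for (x1, x2) in arc)
interior_arc = all(-1 < x1 < 0 and x2 < 0 for (x1, x2) in arc)

# the end-points of Gamma arise only from the boundary weights of Sigma
endpoints = (sol_vi_xi(1, 0), sol_vi_xi(0, 1))
```

Every sampled point lies on the unit circle in the open third-quadrant arc, and the boundary weights (1, 0) and (0, 1) of Σ give exactly the excluded end-points (−1, 0) and (0, −1).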

We refer to [28, Section 6] for another interesting example of (VVI) with K being a closed ball in \(\mathbb R^{2}\) and \(F_{1}\), \(F_{2}\) strongly monotone affine operators, where the weak Pareto solution set coincides with the Pareto solution set, which is a closed circle-arc, and the proper Pareto solution set \(\text{Sol}^{pr}\)(VVI) is obtained by removing the end-points from that circle-arc.

We conclude this section with an example of a polynomial vector optimization problem, where Pr(VP) is different from Sol(VP).

Example 5

(See [18, Example 4.6]) Consider problem (VP) with m = n = 2, \(f_{1}(x) = x_{1}\) and \(f_{2}(x) = x_{2}\) for every \(x=(x_{1},x_{2})^{T}\in \mathbb R^{2}\), and

$$K=\left\{x=(x_{1}, x_{2})^{T}\in\mathbb R^{2} \;:\; {x_{1}^{2}}+{x_{2}^{2}}-1\leq 0\right\}.$$

Let Γ, \(\bar x\), and \(\hat x\) be defined as in Example 4. It is not difficult to check that \(\mathrm{Sol(VP)} = \text{Sol}^{w}\mathrm{(VP)} = \mathrm{Stat(VP)} = {\Gamma}\) and \(\mathrm {Pr(VP)}={\Gamma }\setminus \{\bar x,\, \hat x\}.\)

5 Sets Having Finitely Many Connected Components

To discuss the connectedness structure of the sets Sol(VVI) and \(\text{Sol}^{w}\)(VVI), we will need some definitions from general topology and a lemma from [17]. A detailed proof of the lemma is provided to make our presentation clear.

A topological space X is said to be connected if one cannot represent X = U ∪ V, where U, V are nonempty open sets of X with U ∩ V = ∅. A nonempty subset A ⊂ X of a topological space X is said to be a connected component of X if A (equipped with the induced topology) is connected and it is not a proper subset of any connected subset of X.

Lemma 1

(See [17, Lemma 2.2]) Let Ω be a subset of a topological space X with the closure denoted by \(\overline {\Omega }\) . If Ω has k connected components, then any subset A⊂X with the property \({\Omega }\subset A\subset \overline {\Omega }\) can have at most k connected components.

Proof

Suppose that \({\Omega }\subset A\subset \overline {\Omega }\) and Ω has k connected components, denoted by \({\Omega}_{i}\), i = 1, …, k. It is easy to show that \(\overline {\Omega }=\bigcup \limits _{i=1}^{k}\overline {\Omega }_{i},\) where \(\overline {\Omega }_{i}\) stands for the closure of \({\Omega}_{i}\) in the topology of X. On the one hand, by the inclusion \(A\subset \bigcup \limits _{i=1}^{k}\overline {\Omega }_{i}\), we have

$$ A=\bigcup\limits_{i=1}^{k}(\overline{\Omega}_{i}\cap A). $$
(19)

On the other hand, since \(\bigcup \limits _{i=1}^{k}{\Omega }_{i}\subset A\), we have

$$ {\Omega}_{i}= {\Omega}_{i}\cap A\subset\overline{\Omega}_{i}\cap A\subset\overline{\Omega}_{i} $$
(20)

for i = 1, …, k. Applying a remark after [22, Theorem 20] (see also [31, p. 188]), which says that if \(B\subset C\subset \overline {B}\) and B is connected, then C is also connected, from (20) and the connectedness of \({\Omega }_{i}\), we can assert that \(\overline {\Omega }_{i}\cap A\) is connected for every i = 1, …, k. Thus, (19) shows that A can have at most k connected components. This completes the proof. □
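The mechanism of Lemma 1 can be seen already on the real line. In the sketch below (the representation and helper name are ours), Ω = (0, 1) ∪ (1, 2) has two connected components, its closure [0, 2] is connected, and the intermediate set A = (0, 1) ∪ [1, 2) = (0, 2) has one component, consistent with the bound "at most k":

```python
def count_components(intervals):
    """Connected components of a finite union of real intervals.
    An interval is (lo, hi, lo_closed, hi_closed). Two intervals are merged
    when they overlap, or touch at an endpoint contained in one of them."""
    ivs = sorted(intervals)
    count, hi, hi_closed = 0, None, False
    for lo, h, lc, hc in ivs:
        starts_new = (hi is None or lo > hi
                      or (lo == hi and not (hi_closed or lc)))
        if starts_new:
            count += 1
            hi, hi_closed = h, hc
        elif h > hi or (h == hi and hc):
            hi, hi_closed = h, hc
    return count

# Omega = (0,1) ∪ (1,2): two components
omega = [(0.0, 1.0, False, False), (1.0, 2.0, False, False)]
# its closure [0,2] is connected
closure = [(0.0, 2.0, True, True)]
# A = (0,1) ∪ [1,2) sits between Omega and its closure and equals (0,2)
a_set = [(0.0, 1.0, False, False), (1.0, 2.0, True, False)]

assert count_components(omega) == 2
assert count_components(closure) == 1
# Lemma 1: A has at most as many components as Omega (here: one)
assert count_components(a_set) == 1
```

The bookkeeping with open and closed endpoints is exactly what the lemma exploits: adding limit points can merge components, never create new ones.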

One may ask: How many connected components can the efficient solution set (resp., the weakly efficient solution set) of a vector optimization problem have? A similar question arises about the number of connected components of the Pareto solution set (resp., the weak Pareto solution set) of a vector variational inequality. Also, one may be curious to know whether each connected component of the just-mentioned solution sets is simple enough (say, contractible), or not. Answers to these questions were given in [13, 19], where the authors showed the following:

  1. (i)

(see [13] and [47, pp. 315–319]) For any positive integer m, there exists a linear fractional vector optimization problem (which is a \(C^{\infty }\)-smooth nonconvex vector optimization problem under linear constraints) whose weakly efficient solution set coincides with the efficient solution set and has exactly m connected components;

  2. (ii)

(see [19] and [47, pp. 319–322]) There exists a linear fractional vector optimization problem with the number of criteria m = 3 such that the weakly efficient solution set coincides with the efficient solution set, which is connected by line segments, but is not contractible.

Using the complete reduction of any linear fractional vector optimization problem to an anti-symmetric monotone affine VVI suggested in [49] (see also [47, p. 313]), from the just mentioned results we obtain the following ones:

  1. (iii)

    For any positive integer m, there exists an anti-symmetric monotone affine VVI whose weak Pareto solution set coincides with the Pareto solution set and has exactly m connected components;

  2. (iv)

There exists an anti-symmetric monotone affine VVI with the number of criteria m = 3 such that the weak Pareto solution set coincides with the Pareto solution set, which is connected by line segments, but is not contractible.

7 Semi-Algebraic Sets

To proceed furthermore with our discussions on VVIs and VPs, we need some theorems on semi-algebraic sets.

Definition 7

(See [2, Definition 2.1.4]) A semi-algebraic subset of \(\mathbb R^{n}\) is a subset of the form

$$\bigcup_{i=1}^{s}\bigcap_{j=1}^{r_{i}}\left\{x\in \mathbb R^{n}\,:\,f_{i,j}(x)*_{i,j}0\right\}, $$

where \(f_{i,j}\in \mathbb R[x_{1},\dots ,x_{n}]\) and \(*_{i,j}\) is either < or =, for i = 1, …, s and j = 1, …, \(r_{i}\), with s and the \(r_{i}\) being arbitrary natural numbers.

Thus, every semi-algebraic subset of \(\mathbb R^{n}\) can be represented as a finite union of sets of the form:

$$ \left\{x\in \mathbb R^{n}\,:\,f_{1}(x)=...=f_{\ell}(x)=0,\ g_{1}(x)<0,\dots,g_{m}(x)<0\right\}, $$
(21)

where ℓ and m are natural numbers and \(f_{1},\dots ,f_{\ell }\), \(g_{1},\dots ,g_{m}\) are in \(\mathbb R[x_{1},\dots , x_{n}].\) Since

$$\left\{x\,:\, g_{i}(x)\leq 0\right\}=\left\{x\,:\, g_{i}(x)=0\right\}\cup \left\{x\,:\, g_{i}(x)<0\right\},$$

it doesn’t matter if one replaces some of the strict inequalities in (21) by non-strict inequalities.

Open balls, closed balls, spheres, and unions of finitely many of those sets are some typical examples of semi-algebraic subsets in \(\mathbb R^{n}\). Semi-algebraic subsets of \(\mathbb R\) are exactly the finite unions of points and open intervals (bounded or unbounded).
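To make the normal form (21) concrete in one variable, the sketch below (the encoding and the helper `member` are ours) represents S = (−1, 1) ∪ {2} = {x : x² − 1 < 0} ∪ {x : x − 2 = 0}, a finite union of an open interval and a point, and tests membership directly from the defining polynomials:

```python
# A semi-algebraic subset of R written in the normal form of Definition 7:
# a union of intersections of conditions  f(x) < 0  or  f(x) = 0.
# Each inner list is one intersection; its items are (polynomial, relation).
S = [
    [(lambda x: x * x - 1.0, '<')],   # the open interval (-1, 1)
    [(lambda x: x - 2.0, '=')],       # the single point {2}
]

def member(x, pieces, tol=1e-12):
    for conjunction in pieces:
        if all(f(x) < 0.0 if op == '<' else abs(f(x)) <= tol
               for f, op in conjunction):
            return True
    return False

assert member(0.0, S)       # inside (-1, 1)
assert member(2.0, S)       # the isolated point
assert not member(1.0, S)   # the inequality is strict: the endpoint is out
assert not member(1.5, S)
```

The answer "finite unions of points and open intervals" for subsets of \(\mathbb R\) is visible here: each basic piece contributes either an interval or a point.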

By induction, from [2, Theorem 2.2.1], we can easily derive the following useful result.

Theorem 5

Let S be a semi-algebraic subset of \(\mathbb {R}^{n}\times \mathbb {R}^{m}\) , \({\Phi }:\mathbb {R}^{n}\times \mathbb {R}^{m}\to \mathbb {R}^{n}\) the natural projection on the space of the first n coordinates, i.e.,

$${\Phi}\left( x_{1},\dots,x_{n},x_{n+1},\dots,x_{n+m}\right)=\left( x_{1},\dots,x_{n}\right)^{T}$$

for every \(x=\left (x_{1},\dots ,x_{n},x_{n+1},\dots ,x_{n+m}\right )^{T}\in \mathbb R^{n}\times \mathbb R^{m}\) . Then Φ(S) is a semi-algebraic subset of \(\mathbb {R}^{n}\).

Let us recall the concept of semi-algebraically connected subset.

Definition 8

(See [2, Definition 2.4.2]) A semi-algebraic subset S of \(\mathbb R^{n}\) is semi-algebraically connected if for every pair of disjoint semi-algebraic sets \(F_{1}\) and \(F_{2}\), which are closed in S and satisfy \(F_{1}\cup F_{2}=S\), one has \(F_{1}=S\) or \(F_{2}=S\).

The next fundamental theorem from real algebraic geometry describes clearly the connectedness structure of semi-algebraic sets.

Theorem 6

(See [2, Theorem 2.4.5]) A semi-algebraic subset S of \(\mathbb R^{n}\) is semi-algebraically connected if and only if it is connected. Every semi-algebraic set has a finite number of connected components, which are semi-algebraic.

8 VVIs under Linear Constraints

This section discusses the connectedness structure of the solution sets of vector variational inequalities of the form (VVI) under two blanket assumptions:

  1. (a1)

All the components of F l , l = 1, …, m, are polynomial functions in the variables x 1, …, x n , i.e., for every l ∈ {1, …, m} one has F l = (F l1, …, F l n ) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n;

  2. (a2)

    K is a nonempty polyhedral convex set, i.e., there exist a positive integer p, a matrix \(A=(a_{ij})\in {\mathbb R}^{p\times n},\) and a vector \(b=(b_{i})\in {\mathbb R}^{p}\) such that \(K=\left \{x\in {\mathbb R}^{n}\, :\, Ax\leq b\right \}\).

The main result of [17] is formulated as follows.

Theorem 7

(See [17, Theorem 3.1]) If the assumptions (a1) and (a2) are satisfied, then

  1. (i)

    the weak Pareto solution set Sol w (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)), and

  2. (ii)

    the Pareto solution set Sol(VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).

We now explain the main ideas of the proof of Theorem 7 and refer the readers to [17] for a detailed proof. To every index set α ⊂ I, where I:={1, …, p}, we associate the pseudo-face

$$\mathcal{F}_{\alpha}:=\left\{x\in {\mathbb R}^{n}\,:\, \sum\limits_{j=1}^{n}a_{ij}x_{j}=b_{i}\ \, \forall i\in\alpha,\ \, \sum\limits_{j=1}^{n}a_{ij}x_{j}< b_{i}\ \,\forall i\notin\alpha\right\}$$

of K, where a i j is the element in the i-th row and the j-th column of A, and b i denotes the i-th component of b. By Theorem 4, we have

$$\text{Sol}^{w}\mathrm{(VVI)}=\bigcup_{\xi\in{\Sigma}}\mathrm{Sol(VI)}_{\xi} $$
(22)

with (VI) ξ denoting the variational inequality

$$\qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, \left\langle F(x,\xi),y-x\right\rangle\geq 0\quad \forall y\in K,$$

where \(F(x,\xi ):=\sum \limits _{l=1}^{m}\xi _{l}F_{l}(x)\) for every ξ = (ξ 1, …, ξ m )T ∈ Σ. Denote the normal cone to the convex set K at \(x\in {\mathbb R}^{n}\) by N K (x) and recall that

$$N_{K}(x)=\{x^{*}\in {\mathbb R}^{n}\,:\, \langle x^{*}, y-x\rangle \leq 0\ \, \forall y\in K\}$$

if x ∈ K, and \(N_{K}(x)=\emptyset \) if x ∉ K. Using the notations F(x, ξ) and \(N_{K}(x)\), we can rewrite the inclusion x ∈ Sol(VI) ξ equivalently as

$$ F(x,\xi)\in -N_{K}(x). $$
(23)

We have \( K=\underset {\alpha \subset I}{\bigcup }\mathcal {F}_{\alpha }\), \(\mathcal {F}_{\alpha }\cap \mathcal {F}_{\widetilde \alpha }=\emptyset \) if \(\alpha \neq \widetilde \alpha \) and, therefore,

$$ \text{Sol}^{w}\mathrm{(VVI)}=\bigcup_{\alpha\subset I} [\text{Sol}^{w}\mathrm{(VVI)}\cap \mathcal{F}_{\alpha}]. $$
(24)
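A toy illustration of the pseudo-face decomposition behind (24) (the square and the helper `active_set` are our own example, not from [17]): for \(K=\{x\in \mathbb R^{2}\,:\,-x_{1}\leq 0,\ x_{1}\leq 1,\ -x_{2}\leq 0,\ x_{2}\leq 1\}\), every point of K has exactly one active index set α, so K is partitioned into its pseudo-faces (the interior, four open edges, and four vertices):

```python
# K = {x : Ax <= b} with four constraints: -x1<=0, x1<=1, -x2<=0, x2<=1.
A = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)]
b = [0.0, 1.0, 0.0, 1.0]

def active_set(x, tol=1e-12):
    """The index set alpha of the (unique) pseudo-face F_alpha containing x."""
    assert all(sum(a * xi for a, xi in zip(row, x)) <= bi + tol
               for row, bi in zip(A, b)), "x must lie in K"
    return frozenset(i for i, (row, bi) in enumerate(zip(A, b))
                     if abs(sum(a * xi for a, xi in zip(row, x)) - bi) <= tol)

assert active_set((0.5, 0.5)) == frozenset()        # interior: alpha is empty
assert active_set((0.0, 0.5)) == frozenset({0})     # relative interior of an edge
assert active_set((0.0, 0.0)) == frozenset({0, 2})  # a vertex: two active rows
```

Because α is a function of x, the pseudo-faces are pairwise disjoint and cover K, which is exactly what (24) uses.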

Since a finite union of semi-algebraic subsets of \(\mathbb R^{n}\) is again a semi-algebraic subset of \(\mathbb R^{n}\), by (24), and by Theorem 6, we see that the proof of the assertion (i) will be completed if we can establish the following result.

Claim 1

For every index set α⊂I, the intersection \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic subset of \(\mathbb R^{n}\).

By the Farkas lemma [37, Corollary 22.3.1], we have

$$ N_{K}(x)=\text{pos}\left\{a_{i.}^{T}\, :\, i\in\alpha\right\} $$
(25)

for every \(x\in \mathcal {F}_{\alpha },\) where \(a_{i\cdot }:=(a_{i1},\dots ,a_{in})\) denotes the i-th row of A and

$$\text{pos}\left\{z_{1},\dots,z_{k}\right\}:=\left\{\lambda_{1}z_{1}+{\dots} +\lambda_{k}z_{k}\, :\,\lambda_{i}\geq 0,\ i=1,\dots, k \right\}$$

is the convex cone generated by vectors \(z_{i}\in \mathbb R^{n},\; i=1,\dots , k.\) Due to formulas (22), (23), and (25),

$$\text{Sol}^{w}\mathrm{(VVI)}\cap \mathcal{F}_{\alpha}=\bigcup_{\xi\in{\Sigma}}\left\{x\in\mathcal{F}_{\alpha} \,:\,F(x,\xi)\in -\text{pos}\left\{a_{i.}^{T}\, :\, i\in\alpha\right\}\right\}. $$
(26)

Since \(\text {pos}\{a_{i.}^{T}\, :\, i\in \alpha \}\) is a convex polyhedral cone, there exists a matrix \(C_{\alpha }=\left (c_{ij}\right )\in \mathbb R^{n_{\alpha }\times n}\), where \(n_{\alpha }\in \mathbb N\), such that

$$\text{pos}\{a_{i.}^{T}\, :\, i\in\alpha\}=\{y\in\mathbb R^{n}\,:\,C_{\alpha} y\geq 0\}. $$
(27)
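A representation like (27) can be verified on a toy cone (the generators and the matrix below are our own illustration): for the cone generated by \((1,0)^{T}\) and \((1,1)^{T}\) one may take the matrix with rows (0, 1) and (1, −1), since \(\lambda _{1}(1,0)^{T}+\lambda _{2}(1,1)^{T}=(\lambda _{1}+\lambda _{2},\lambda _{2})^{T}\) ranges exactly over \(\{y\,:\,y_{2}\geq 0,\ y_{1}-y_{2}\geq 0\}\):

```python
import random

random.seed(0)

gens = [(1.0, 0.0), (1.0, 1.0)]   # generators of the cone
C = [(0.0, 1.0), (1.0, -1.0)]     # candidate dual description as in (27)

def in_cone_by_C(y):
    return all(c1 * y[0] + c2 * y[1] >= -1e-12 for c1, c2 in C)

# every nonnegative combination of the generators satisfies C y >= 0 ...
for _ in range(1000):
    l1, l2 = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    y = (l1 * gens[0][0] + l2 * gens[1][0],
         l1 * gens[0][1] + l2 * gens[1][1])
    assert in_cone_by_C(y)

# ... while a point outside the cone violates it: (0,1) would need
# lambda2 = 1 and lambda1 + lambda2 = 0, i.e. lambda1 = -1 < 0.
assert not in_cone_by_C((0.0, 1.0))
```

Computing such a matrix \(C_{\alpha }\) in general is a cone-duality (double description) task; here it was read off by hand.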

By (26) and (27),

$$ \text{Sol}^{w}\mathrm{(VVI)}\cap \mathcal{F}_{\alpha}=\bigcup_{\xi\in{\Sigma}}\left\{x\in\mathcal{F}_{\alpha} \,:\,C_{\alpha} F(x,\xi)\leq 0\right\}. $$
(28)

The inequality on the right-hand side of (28) can be rewritten as

$$C_{\alpha} \left( \sum\limits_{l=1}^{m}\xi_{l}F_{l}(x)\right)\leq 0,$$

which is the following system of n α polynomial inequalities

$$ \sum\limits_{j=1}^{n}\sum\limits_{l=1}^{m}c_{kj}\xi_{l}F_{lj}(x)\leq 0, \quad k=1,\dots, n_{\alpha}. $$
(29)

Note that the expression \(\sum \limits _{j=1}^{n}\sum \limits _{l=1}^{m}c_{kj}\xi _{l}F_{lj}(x)\) on the left-hand side of (29) is a polynomial in the variables x 1, …, x n , ξ 1, …, ξ m . Consider the set

$$ {\Omega}_{\alpha}:= \left\{(x,\xi)\in\mathcal{F}_{\alpha}\times{\Sigma} \,:\,C_{\alpha} F(x,\xi)\leq 0\right\}. $$

By (29), we have

$$\begin{array}{@{}rcl@{}} {\Omega}_{\alpha} &=&\left\{(x,\xi)\in\mathbb R^{n}\times\mathbb R^{m}\,:\,\sum\limits_{j=1}^{n}a_{ij}x_{j}=b_{i},\ i\in\alpha,\quad \sum\limits_{j=1}^{n}a_{ij}x_{j}< b_{i},\ i\notin\alpha,\right.\\ && \quad\ \sum\limits_{j=1}^{n}\sum\limits_{l=1}^{m}c_{kj}\xi_{l}F_{lj}(x)\leq 0,\ k=1,\dots, n_{\alpha},\quad \left.\sum\limits_{l=1}^{m}\xi_{l}=1,\ \xi_{l}\geq 0,\ l=1,\dots, m\right\}. \end{array} $$

Denote by |α| the number of elements of α and observe that Ω α is determined by |α|+1 polynomial equations, n α + m polynomial inequalities, and p−|α| strict polynomial inequalities in the variables \((x,\xi )=(x_{1},\dots , x_{n},\xi _{1},\dots ,\xi _{m})\in \mathbb R^{n+m}\). Hence, Ω α is a semi-algebraic set.

From (28), it follows that \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }={\Phi }({\Omega }_{\alpha })\), where \({\Phi }:\mathbb {R}^{n}\times \mathbb {R}^{m}\to \mathbb {R}^{n}\) is the natural projection on the space of the first n coordinates. According to Theorem 5, \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic set. This proves Claim 1.

We have thus seen that Sol w(VVI) is a semi-algebraic subset of \(\mathbb R^{n}\). Then, thanks to Theorem 6, Solw(VVI) has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\).

The assertion (ii) can be proved similarly.

A topological space X is said to be path connected if, for every x, y in X, there is a continuous mapping γ:[0,1]→X such that γ(0) = x and γ(1) = y. It is well known that any path connected topological space is connected, but the converse is not true. For example, the set

$$X=\left\{(t,\sin (1/t))^{T}\in\mathbb R^{2}\, :\, t\neq 0\right\}\cup \left( \{0\}\times [-1,1]\right)$$

considered with the induced topology, is connected, but not path connected.
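That the closure of the sine curve contains the whole segment {0} × [−1, 1] can be checked numerically (the helper name and sampling are ours): for any height y, the curve passes through points with arbitrarily small t whose second coordinate equals y.

```python
import math

def point_near(y, k):
    """A point (t, sin(1/t)) of the curve with second coordinate y and t
    small: take 1/t = arcsin(y) + 2*pi*k for a large integer k."""
    t = 1.0 / (math.asin(y) + 2.0 * math.pi * k)
    return t, math.sin(1.0 / t)

# for every height y in [-1, 1] the curve comes arbitrarily close to (0, y),
# so the segment {0} x [-1, 1] lies in the closure of the curve -- yet no
# path inside X can cross from the oscillating part onto the segment
for y in [-1.0, -0.5, 0.0, 0.7, 1.0]:
    t, s = point_near(y, k=1000)
    assert abs(t) < 1e-3
    assert abs(s - y) < 1e-9
```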

Concerning Theorem 7, the following natural question arises: Is it true that each connected component of the weak Pareto solution set Sol w(VVI) (resp., of the Pareto solution set Sol(VVI)) is a path connected set? Mr. Vu Trung Hieu, a young colleague of the author of this survey, has given a positive answer to this question. To present his result, we have to recall two additional notions from real algebraic geometry.

Definition 9

(See [2, Definition 2.2.5]) Let \({\Omega }_{1}\subset \mathbb R^{n_{1}}\) and \({\Omega }_{2}\subset \mathbb R^{n_{2}}\) be two semi-algebraic sets. A mapping Ψ:Ω1→Ω2 is semi-algebraic if its graph is a semi-algebraic subset of \(\mathbb R^{n_{1}+n_{2}}\).

Definition 10

(See [2, Definition 2.5.12]) A semi-algebraic subset \({\Omega }\subset \mathbb R^{n}\) is said to be semi-algebraically path connected if, for every x, y in Ω, there exists a continuous semi-algebraic mapping γ:[0,1]→Ω such that γ(0) = x and γ(1) = y.

Clearly, any semi-algebraically path connected set \({\Omega }\subset \mathbb R^{n}\) is a path connected topological space. To see that the converse is not true in general, one can choose

$${\Omega}=\left\{(t,t\sin (1/t))^{T}\in\mathbb R^{2}\, :\, t\neq 0\right\}\cup \{(0,0)^{T}\}.$$

Since Ω is not a semi-algebraic set, it cannot be semi-algebraically path connected. Note also that, for any ε>0, there does not exist any continuous semi-algebraic curve in Ω joining x = (−ε, εsin(1/ε))T with y = (ε, εsin(1/ε))T.

Lemma 2

If \({\Omega }\subset \mathbb R^{n}\) is a semi-algebraic subset, then the following properties are equivalent:

  1. (i)

    Ω is semi-algebraically path connected;

  2. (ii)

    Ω is semi-algebraically connected;

  3. (iii)

    Ω is connected.

Proof

The equivalence between (i) and (ii) is assured by [2, Prop. 2.5.13], while the equivalence between (ii) and (iii) follows from [2, Prop. 2.4.5] (see Theorem 6). □

Proposition 1

(Vu Trung Hieu; private communication) If the assumptions (a1) and (a2) are satisfied, then

  1. (i)

    each connected component of the weak Pareto solution set Sol w (VVI) is a semi-algebraically path connected subset of \(\mathbb R^{n}\) (so it is path connected), and

  2. (ii)

    each connected component of the Pareto solution set Sol(VVI) is a semi-algebraically path connected subset of \(\mathbb R^{n}\) (so it is path connected).

Proof

To prove the first assertion, observe that each connected component of Sol w(VVI) is a connected semi-algebraic set by Theorem 7. Hence, according to Lemma 2, it is a semi-algebraically path connected subset of \(\mathbb R^{n}\). The second assertion can be obtained in the same way. □

Remark 1

By Theorem 6 and the arguments of the proof of Proposition 1, one can see that each connected component of a semi-algebraic subset of \(\mathbb R^{n}\) is a semi-algebraically path connected subset. Applying this property to any solution set considered in the sequel that is semi-algebraic, we can assert that each of its connected components is a semi-algebraically path connected set (so it is path connected).

It is clear that if \(F_{l}(x)=M_{l}x+q_{l}\), where \(M_{l}\) is an n×n matrix and \(q_{l}\in \mathbb R^{n}\), for l = 1, …, m, then each component F l j (x) of F l is a polynomial function in the variables x 1, …, x n . Therefore, Theorem 7 solves in the affirmative Question 1 of [50, p. 66] about the connectedness structure of the solution sets of affine vector variational inequalities, without requiring the positive semidefiniteness of the matrices M l . Moreover, it assures that each connected component of the solution set under consideration is a semi-algebraic subset. Note that, by [50, Theorems 4.1 and 4.2], if the AVVI under consideration is monotone and the solution set in question is disconnected, then each of its connected components is unbounded. The latter result cannot be obtained by tools of real algebraic geometry.

In [16], by a different approach using fractional functions and some properties of determinants, it was proved that both the Pareto solution set and the weak Pareto solution set of an AVVI have finitely many connected components, provided that m = 2 and a regularity condition is satisfied. Thus, Theorem 7 encompasses these results of [16].

The problem of finding an upper bound for the number of connected components of Solw(VVI) and Sol(VVI) requires further investigation. In the case m = 2, an explicit upper bound for the number of connected components of Solw(VVI) and Sol(VVI) was given in [16] under a regularity condition. This result gives a partial solution to Question 2 of [50].

Linear fractional vector optimization problems (or LFVOPs) and quadratic vector optimization problems (or QVOPs) are two fundamental classes of vector optimization problems. Both classes contain linear vector optimization problems as an important subclass. By using Theorem 7, one can get some facts about the connectedness structure of the solution sets in LFVOPs. Moreover, a property of the stationary point set of polynomial vector optimization problems under linear constraints can be obtained and applied to convex QVOPs.

We now present some basic information about LFVOPs. More details can be found in [41, 49] and [29, Chap. 8]. Let \(f_{l}:\mathbb R^{n}\to \mathbb R,\ l=1,\dots ,m,\) be linear fractional functions of the form

$$f_{l}(x)=\frac{{a_{l}^{T}}x+\alpha_{l}}{{b_{l}^{T}}x+\beta_{l}},$$

where \(a_{l}\in \mathbb R^{n}, b_{l}\in \mathbb R^{n}, \alpha _{l}\in \mathbb R,\) and \(\beta _{l}\in \mathbb R\). Let \(K\subset \mathbb R^{n}\) satisfy assumption (a2). Suppose that \({b_{l}^{T}}x+\beta _{l}>0\) for all l ∈ {1, …, m} and x ∈ K. Put f(x)=(f 1(x), …, f m (x)),

$${\Omega}_{K}:=\left\{x\in\mathbb R^{n}\,:\,{b_{l}^{T}}x+\beta_{l}>0,\ \, l=1,\dots,m\right\},$$

and observe that Ω K is open and convex, K⊂Ω K , and f is continuously differentiable on Ω K . Consider the linear fractional vector optimization problem

$$(\text{VP}_{1}) \qquad\qquad \text{Minimize}\ \, f(x)\ \ \text{subject to}\ \ x\in K. $$

LFVOPs have been studied intensively for more than three decades; see [5, 6, 13–15, 19, 35, 41, 47, 49, 50] and the references therein.

The efficient solution set and the weakly efficient solution set of (VP 1) are denoted by Sol(VP1) and Solw(VP1), respectively. According to [35], x ∈ Sol(VP1) if and only if there exists ξ = (ξ 1, …, ξ m ) ∈ riΣ such that

$$\left\langle \sum\limits_{l=1}^{m}\xi_{l}\left[\left( {b_{l}^{T}}x +\beta_{l}\right)a_{l}-\left( {a_{l}^{T}}x+\alpha_{l}\right)b_{l}\right],y-x\right\rangle\geq 0 \quad \forall y\in K. $$
(30)

Similarly, x ∈ Solw(VP1) if and only if there exists ξ = (ξ 1, …, ξ m ) ∈ Σ such that (30) holds.
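The bracketed vector in (30) is, up to the positive factor \(({b_{l}^{T}}x+\beta _{l})^{2}\), the gradient of \(f_{l}\) at x, which is why (30) is the usual first-order condition. A finite-difference check of this identity (the data are our own):

```python
# f(x) = (a.x + alpha) / (b.x + beta); its gradient is
#   [(b.x + beta) a - (a.x + alpha) b] / (b.x + beta)^2,
# so the bracketed vector in (30) equals (b.x + beta)^2 * grad f(x).
a, alpha = (1.0, -2.0), 0.5
b, beta = (0.3, 0.1), 2.0
x = (0.4, -0.7)            # note: b.x + beta = 2.05 > 0 here

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(z):
    return (dot(a, z) + alpha) / (dot(b, z) + beta)

den = dot(b, x) + beta
bracket = tuple(den * ai - (dot(a, x) + alpha) * bi for ai, bi in zip(a, b))

h = 1e-6
for k in range(2):
    xp = tuple(xi + (h if i == k else 0.0) for i, xi in enumerate(x))
    xm = tuple(xi - (h if i == k else 0.0) for i, xi in enumerate(x))
    num_grad = (f(xp) - f(xm)) / (2.0 * h)
    # central difference agrees with bracket / den^2
    assert abs(num_grad - bracket[k] / den ** 2) < 1e-6
```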

Condition (30) can be rewritten in the form of a parametric affine variational inequality as follows:

$$(\text{VI})^{\prime}_{\xi} \qquad\qquad \langle M(\xi)x+q(\xi),y-x\rangle\geq 0\quad \ \forall y\in K, $$

with

$$M(\xi):=\sum\limits_{l=1}^{m}\xi_{l} M_{l},\qquad q(\xi):=\sum\limits_{l=1}^{m}\xi_{l} q_{l},$$

where

$$ M_{l}=a_{l}{b_{l}^{T}}-b_{l}{a_{l}^{T}},\quad q_{l}=\beta_{l}a_{l}-\alpha_{l}b_{l}\quad (l=1,\dots,m).$$
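The anti-symmetry of the matrices \(M_{l}=a_{l}{b_{l}^{T}}-b_{l}{a_{l}^{T}}\) can be verified directly, together with the identity ⟨Mz, z⟩ = 0 which yields the (trivial) monotonicity of the affine operator; a sketch with random data of our own choosing:

```python
import random

random.seed(1)
n = 4

def outer_diff(a, b):
    """M = a b^T - b a^T (an anti-symmetric matrix)."""
    return [[a[i] * b[j] - b[i] * a[j] for j in range(n)] for i in range(n)]

a = [random.uniform(-1.0, 1.0) for _ in range(n)]
b = [random.uniform(-1.0, 1.0) for _ in range(n)]
M = outer_diff(a, b)

# anti-symmetry: M^T = -M
for i in range(n):
    for j in range(n):
        assert abs(M[i][j] + M[j][i]) < 1e-12

# hence <Mz, z> = 0 for every z, so x -> Mx + q is monotone (with equality),
# as used for (VI)'_xi
z = [random.uniform(-1.0, 1.0) for _ in range(n)]
Mz = [sum(M[i][j] * z[j] for j in range(n)) for i in range(n)]
assert abs(sum(Mz[i] * z[i] for i in range(n))) < 1e-12
```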

It is well known [49] that (VI)\(^{\prime }_{\xi }\) is an anti-symmetric monotone AVI for every ξ ∈ Σ. Denote by Ψ(ξ) the solution set of (VI)\(^{\prime }_{\xi }\) and consider the multifunction \({\Psi }:{\Sigma }\rightrightarrows \mathbb R^{n},\ \xi \mapsto {\Psi }(\xi )\). According to the above-recalled necessary and sufficient optimality conditions for (VP 1), we have

$$\mathrm{Sol(VP}_{1})=\bigcup_{\xi\in \text{ri}{\Sigma}} {\Psi}(\xi)={\Psi}(\text{ri}{\Sigma}), $$
(31)

and

$$\text{Sol}^{w}\mathrm{(VP}_{1})=\bigcup_{\xi\in{\Sigma}}{\Psi}(\xi)={\Psi}({\Sigma}). $$
(32)

By formulas (31), (32), and Theorem 4, Sol(VP1) (resp., Solw(VP1)) coincides with the Pareto solution set (resp., the weak Pareto solution set) of the monotone AVVI defined by K and F l (x) = M l x + q l , l = 1, …, m. Hence, the next result is a direct consequence of Theorem 7.

Theorem 8

(See [17, Theorem 4.5]) It holds that

  1. (i)

    the Pareto solution set Sol(VP 1 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)), and

  2. (ii)

    the weak Pareto solution set Sol w (VP 1 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).

We have seen that both the Pareto solution set and the weak Pareto solution set of (VP1) are semi-algebraic subsets and have finitely many connected components. As mentioned in Section 6, Hoa, Phuong, and Yen [13] proved that, for any natural number m≥1, there exists an LFVOP with m objective criteria where the sets Sol(VP1) and Solw(VP1) coincide and have exactly m connected components. The problem of finding an upper estimate for the number of connected components of Sol(VP1) has been solved in [15] for the case m = 2.

If \(f_{l}:\mathbb R^{n}\to \mathbb R,\ l=1,\dots ,m,\) are polynomial functions and \(K\subset \mathbb R^{n}\) is a polyhedral convex set then (VP), now denoted by (VP 2), is called a polynomial vector optimization problem under linear constraints.

We denote the efficient solution set, the weakly efficient solution set, the stationary point set, and the proper stationary point set of (VP 2), respectively, by Sol(VP2), Solw(VP2), Stat(VP 2), and Pr(VP 2).

Theorem 9

(See [17, Theorem 4.6]) The following assertions hold:

  1. (i)

    The set Stat(VP 2 ) (resp., the set Pr(VP 2 )) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  2. (ii)

    If all the functions f l are convex, then Sol w (VP 2 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  3. (iii)

    If all the functions f l are convex, and the set Pr(VP 2) is dense in Sol(VP 2), then Sol(VP 2) has a finite number of connected components.

To see how Lemma 1 works for the class of problems in question, we recall here the proof of the assertion (iii). Since all the functions f l are convex and the set Pr(VP 2) is dense in Sol(VP2), we have

$$ \text{Pr}(\text{VP}_{2})\subset \text{Sol}(\text{VP}_{2}) \subset \overline{\mathrm{Pr(VP_{2})}}, $$
(33)

where the first inclusion is a special case of the first inclusion in (14), and \(\overline {\mathrm {Pr(VP_{2})}}\) is the closure of Pr(VP 2) in the Euclidean topology of \(\mathbb R^{n}\). By (i), we see that Pr(VP 2) has finitely many connected components. Now, from (33) and Lemma 1, it follows that Sol(VP2) has a finite number of connected components.

If K is a polyhedral convex set and \(f_{l}(x)=\frac {1}{2}x^{T}M_{l}x+{q_{l}^{T}}x,\ l=1,\dots ,m,\) where \(M_{l}\in \mathbb R^{n\times n},\ l=1,\dots , m\), are symmetric matrices, and q l for l = 1, …, m are vectors in \(\mathbb R^{n}\), then (VP) is called a quadratic vector optimization problem (QVOP for short) and is denoted by (VP 3).

Clearly, a quadratic vector optimization problem is a polynomial vector optimization problem. Hence, Theorem 9 implies the next result on the connectedness structure of the stationary point set Stat(VP 3) and of the weakly efficient solution set Solw(VP3).

Corollary 1

(See [17, Corollary 4.7]) The following properties hold:

  1. (i)

    The set Stat(VP 3 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  2. (ii)

    If all the matrices \(M_{l}\in \mathbb R^{n\times n},\ l=1,\dots , m\) , are positive semidefinite, then Sol w (VP 3 ) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\)).

We have seen that the stationary point set of a QVOP and the weakly efficient solution set of a convex QVOP have finitely many connected components. Note that, by [50, Theorem 4.1], if the weakly efficient solution set of a convex QVOP is disconnected, then each of its connected components is unbounded. Nevertheless, up to now we are unaware of any convex QVOP with a disconnected weakly efficient solution set or a disconnected efficient solution set. We finish this section with two open questions.

Question 1

Is the weak Pareto solution set (resp., the Pareto solution set) of a symmetric monotone affine VVI always connected?

Question 2

Is it true that the weak Pareto solution set (resp., the Pareto solution set) of any anti-symmetric monotone affine VVI with m criteria has at most m connected components?

9 VVIs under Polynomial Constraints

Throughout this section, we let

$$ K=\left\{x\in\mathbb R^{n}\, :\, g_{i}(x)\leq 0,\ i=1,\dots, p, \ h_{j}(x)=0,\ j=1,\dots,s\right\}, $$
(34)

where g i ,h j belong to \(\mathbb R[x_{1},\dots , x_{n}]\) for all i = 1, …, p, j = 1, …, s, and assume that K is convex.

Remark 2

If the functions g i are convex and the functions h j are affine, i.e., \(h_{j}(x)=\langle a_{j},x\rangle +b_{j},\ a_{j}\in \mathbb R^{n},\ b_{j}\in \mathbb R\) for all j = 1, …, s, then the set K given by (34) is convex.

In order to have an explicit formula for computing the normal cone to K at every point x ∈ K, one has to impose a regularity condition on the functions appearing in (34).

Definition 11

(See [33, p. 44]) One says that the Mangasarian-Fromovitz Constraint Qualification (the MFCQ for brevity) is satisfied at a point x ∈ K if

  1. (i)

    the gradient vectors {∇h j (x) : j = 1, …, s} are linearly independent;

  2. (ii)

there exists \(v\in \mathbb R^{n}\) with 〈∇h j (x), v〉=0 for all j = 1, …, s, and 〈∇g i (x), v〉<0 for all i ∈ I(x), where I(x):={i : g i (x)=0}.
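For instance, for the disk \(K=\{x\in \mathbb R^{2}\,:\,g(x)\leq 0\}\) with \(g(x)=x_{1}^{2}+x_{2}^{2}-1\) and no equality constraints, the MFCQ holds at every point: condition (i) is vacuous, and at a boundary point the direction v = −x works, since ⟨∇g(x), v⟩ = −2. A sketch (the helper names are ours):

```python
import math

# K = {x in R^2 : g(x) <= 0}, g(x) = x1^2 + x2^2 - 1, no equality
# constraints: the MFCQ at a boundary point x only asks for a direction v
# with <grad g(x), v> < 0.

def grad_g(x):
    return (2.0 * x[0], 2.0 * x[1])

def mfcq_direction(x):
    # at a boundary point of the disk, v = -x points strictly inside K
    return (-x[0], -x[1])

for theta in [0.0, 1.0, 2.5, 4.0]:
    x = (math.cos(theta), math.sin(theta))   # g(x) = 0: the constraint is active
    v = mfcq_direction(x)
    g1, g2 = grad_g(x)
    assert g1 * v[0] + g2 * v[1] < 0.0       # equals -2 up to rounding
```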

We will discuss the connectedness structure of the solution sets of vector variational inequalities of the form (VVI) under two assumptions:

  1. (a1)

    All the components of F l , l = 1, …, m, are polynomial functions in the variables x 1, …, x n ; i.e., for each l one has F l =(F l1, …, F l n ) with \(F_{lj}\in \mathbb R[x_{1},\dots , x_{n}]\) for all j = 1, …, n;

  2. (a2)

    The MFCQ is satisfied at every point of K.

The main result of [18] can be formulated as follows.

Theorem 10

(See [18, Theorem 3.3]) If (a1) and (a2) are fulfilled, then it holds that

  1. (i)

    The weak Pareto solution set Sol w (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  2. (ii)

    The proper Pareto solution set Sol pr (VVI) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  3. (iii)

    If the set Sol pr (VVI) is dense in Sol(VVI), then Sol(VVI) has a finite number of connected components.

Let us outline the main arguments for proving this theorem. Put I = {1, …, p} and J = {1, …, s}. To every subset α ⊂ I (the case α = ∅ is not excluded) one associates the (possibly curved) pseudo-face of K:

$$\mathcal{F}_{\alpha}=\left\{x\in {\mathbb R}^{n}\;:\; g_{i}(x)=0\ \, \forall i\in\alpha,\ \, g_{i}(x)< 0\ \,\forall i\notin\alpha,\ \, h_{j}(x)=0\ \,\forall j\in J\right\}.$$

Using the notation I(x) given in Definition 11, we have I(x) = α for all \(x\in \mathcal {F}_{\alpha }\).

By (15), we have

$$\text{Sol}^{w}\mathrm{(VVI)}=\bigcup_{\xi\in{\Sigma}}\mathrm{Sol(VI)}_{\xi} $$
(35)

with (VI) ξ denoting the variational inequality

$$\qquad \text{Find}\ \, x\in K \ \; \mathrm{such\ that}\ \, \left\langle F(x,\xi),y-x\right\rangle\geq 0\quad \forall y\in K,$$

where \(F(x,\xi ):={\sum }_{l=1}^{m}\xi _{l}F_{l}(x)\) for every ξ = (ξ 1, …, ξ m )T ∈ Σ. Using the notation F(x, ξ) and the definition of normal cone in (18), we can rewrite the inclusion x ∈ Sol(VI) ξ equivalently as

$$ -F(x,\xi)\in N_{K}(x). $$
(36)

We have \( K=\underset {\alpha \subset I}{\bigcup }\mathcal {F}_{\alpha }\), \(\mathcal {F}_{\alpha }\cap \mathcal {F}_{\widetilde \alpha }=\emptyset \) if \(\alpha \neq \widetilde \alpha \) and, therefore,

$$ \text{Sol}^{w}\mathrm{(VVI)}=\bigcup_{\alpha\subset I} [\text{Sol}^{w}\mathrm{(VVI)}\cap \mathcal{F}_{\alpha}]. $$
(37)

Since a finite union of semi-algebraic subsets of \(\mathbb R^{n}\) is again a semi-algebraic subset of \(\mathbb R^{n}\), by (37) and by Theorem 6, we see that the proof of the assertion (i) will be completed if we can establish the following claim.

Claim 2

For every index set α⊂I, the intersection \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic subset of \(\mathbb R^{n}\).

By virtue of the assumption (a2), the result formulated in [1, Remark on p. 151] gives us an explicit formula for computing the Clarke tangent cone [7, p. 51] to K at every point x ∈ K:

$$T_{K}(x)=\{v\in\mathbb R^{n}\;:\;\langle\nabla g_{i}(x),v\rangle\leq0\ \, \forall i\in I(x),\ \, \langle\nabla h_{j}(x),v\rangle= 0\ \,\forall j\in J\}.$$

Since K is convex, the normal cone N K (x) defined in (18) coincides ([7, Proposition 2.4.4]) with the Clarke normal cone [7, p. 51] to K at x, which is the negative dual cone of the Clarke tangent cone T K (x). So,

$$N_{K}(x)=\left( T_{K}(x)\right)^{*}=\left\{x^{*}\in\mathbb R^{n}\; :\; \langle x^{*},v\rangle\leq 0\ \; \forall v\in T_{K}(x)\right\}. $$

This means that a vector \(x^{*}\in \mathbb R^{n}\) belongs to N K (x) if and only if the inequality 〈x ,v〉≤0 is a consequence of the system

$$\langle\nabla g_{i}(x),v\rangle\leq0\ \, \forall i\in I(x),\ \, \langle\nabla h_{j}(x),v\rangle= 0\ \,\forall j\in J.$$

Hence, by Farkas’ Lemma [37, Corollary 22.3.1], x N K (x) if and only if there exist λ i ≥0, iI(x), \(\mu _{j}\in \mathbb R\), jJ, such that

$$x^{*}=\sum\limits_{i\in I(x)}\lambda_{i}\nabla g_{i}(x)+\sum\limits_{j\in J}\mu_{j}\nabla h_{j}(x).$$

It follows that

$$ N_{K}(x)=\left\{\sum\limits_{i\in\alpha}\lambda_{i}\nabla g_{i}(x)+\sum\limits_{j\in J}\mu_{j}\nabla h_{j}(x)\; : \; \lambda_{i}\geq 0\ \, \forall i\in\alpha,\ \; \mu_{j}\in\mathbb R\ \,\forall j\in J \right\} $$
(38)

for every α ⊂ I and for every \(x\in \mathcal {F}_{\alpha }\).
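Formula (38) can be tested on the disk \(K=\{x\,:\,x_{1}^{2}+x_{2}^{2}-1\leq 0\}\): at the boundary point \(x=(1,0)^{T}\) it predicts \(N_{K}(x)=\{\lambda (2,0)^{T}\,:\,\lambda \geq 0\}\), and every such vector indeed satisfies the defining inequality ⟨x*, y − x⟩ ≤ 0 for all y ∈ K (the sampling scheme is ours):

```python
import math
import random

random.seed(2)

# K = unit disk = {x : g(x) <= 0} with g(x) = x1^2 + x2^2 - 1
x = (1.0, 0.0)                       # boundary point: alpha = {1}
grad_g = (2.0 * x[0], 2.0 * x[1])    # gradient (2, 0)

# by (38), N_K(x) = { lambda * grad_g : lambda >= 0 }; check the defining
# inequality <x*, y - x> <= 0 against many sampled points y of K
for _ in range(1000):
    lam = random.uniform(0.0, 10.0)
    xs = (lam * grad_g[0], lam * grad_g[1])
    r = math.sqrt(random.random())            # radius, biased to be uniform
    phi = random.uniform(0.0, 2.0 * math.pi)
    y = (r * math.cos(phi), r * math.sin(phi))
    inner = xs[0] * (y[0] - x[0]) + xs[1] * (y[1] - x[1])
    assert inner <= 1e-12                     # equals 2*lam*(y1 - 1) <= 0
```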

According to the formulas (35), (36), and (38),

$$ \begin{array}{llll} && \text{Sol}^{w}\mathrm{(VVI)}\cap\mathcal{F}_{\alpha}\\ &&\qquad = \underset{\xi\in{\Sigma}}{\bigcup} \left\{x\in\mathcal{F}_{\alpha}\,:\, -\sum\limits_{l=1}^{m}\xi_{l}F_{l}(x)=\underset{i\in\alpha}\sum\lambda_{i}\nabla g_{i}(x)+\underset{j\in J}{\sum}\mu_{j}\nabla h_{j}(x),\right.\\&& \left.\qquad \qquad\qquad\qquad\lambda_{i}\geq 0\ \forall i\in\alpha,\ \,\mu_{j}\in\mathbb R\ \forall j\in J{}\phantom{\underset{\xi\in{\Sigma}}{\bigcup}} \right\}. \end{array} $$
(39)

The equality in (39) is equivalent to the following system of polynomial equations in the variables x 1, …, x n , ξ 1, …, ξ m , λ i for i ∈ α, and μ j for j ∈ J:

$$ -\sum\limits_{l=1}^{m}\xi_{l}F_{lk}(x)=\sum\limits_{i\in\alpha}\lambda_{i}\frac{\partial g_{i}(x)}{\partial x_{k}}+\sum\limits_{j\in J}\mu_{j}\frac{\partial h_{j}(x)}{\partial x_{k}}, \quad k=1,\dots, n. $$
(40)

Denote the cardinality of α by |α| and put

$$\begin{array}{@{}rcl@{}} {\Omega}_{\alpha}= \left\{(x,\xi,\lambda,\mu)\in\mathcal{F}_{\alpha}\times{\Sigma}\times\mathbb R_{+}^{|\alpha|}\times\mathbb R^{s}\right. \,:\,&&-\sum\limits_{l=1}^{m}\xi_{l}F_{l}(x)=\sum\limits_{i\in\alpha}\lambda_{i}\nabla g_{i}(x)\\ &&+\left.\sum\limits_{j\in J}\mu_{j}\nabla h_{j}(x)\right\}. \end{array} $$

By (40), we have

$$\begin{array}{@{}rcl@{}} {\Omega}_{\alpha} &=&\left\{(x,\xi,\lambda,\mu)\in\mathbb R^{n+m+|\alpha|+s}\;:\; g_{i}(x)=0,\ i\in\alpha,\quad g_{i}(x)< 0,\ i\notin\alpha,\quad h_{j}(x)=0,\ j\in J,\right.\\ && \quad -\sum\limits_{l=1}^{m}\xi_{l}F_{lk}(x)=\sum\limits_{i\in\alpha}\lambda_{i}\frac{\partial g_{i}(x)}{\partial x_{k}}+\sum\limits_{j\in J}\mu_{j}\frac{\partial h_{j}(x)}{\partial x_{k}},\ \, k=1,\dots,n,\\ && \quad \left.\sum\limits_{l=1}^{m}\xi_{l}=1,\ \xi_{l}\geq 0,\ \, l=1,\dots, m,\quad \lambda_{i}\geq 0,\ i\in\alpha\right\}. \end{array} $$

Since Ω α is determined by |α| + n+1 polynomial equations, m+|α| polynomial inequalities, and p−|α| strict polynomial inequalities of the variables \((x,\xi ,\mu )=(x_{1},\dots , x_{n},\xi _{1},\dots ,\xi _{m},\mu _{1},\dots ,\mu _{s})\in \mathbb R^{n+m+s}\) and λ i , iα, it is a semi-algebraic set.
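The count just given can be read off directly from the description of Ω α above:

```latex
% Tally of the conditions defining \Omega_\alpha:
\[
  \underbrace{|\alpha|}_{g_{i}(x)=0,\ i\in\alpha}
  \;+\;\underbrace{n}_{\text{system (40)}}
  \;+\;\underbrace{1}_{\sum_{l}\xi_{l}=1}
  \ \text{equations},
  \qquad
  \underbrace{m}_{\xi_{l}\geq 0}
  \;+\;\underbrace{|\alpha|}_{\lambda_{i}\geq 0}
  \ \text{inequalities},
  \qquad
  \underbrace{p-|\alpha|}_{g_{i}(x)<0,\ i\notin\alpha}
  \ \text{strict inequalities}.
\]
```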

From (39), we get \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }={\Phi }({\Omega }_{\alpha })\), where \({\Phi }:\mathbb R^{n+m+|\alpha |+s}\to \mathbb {R}^{n}\) is the natural projection onto the space of the first n coordinates. According to Theorem 5, \(\text {Sol}^{w}\mathrm {(VVI)}\cap \mathcal {F}_{\alpha }\) is a semi-algebraic set. This proves Claim 2.
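The step via Theorem 5 rests on the fact that images of semi-algebraic sets under natural projections are again semi-algebraic (the Tarski–Seidenberg principle). A minimal illustration of this phenomenon in \(\mathbb R^{2}\), not taken from the source: projecting a set defined only by a polynomial equation can produce a set that needs a polynomial inequality to describe, yet the image remains semi-algebraic.

```latex
% Projecting S = {(x,y) : y^2 - x = 0} onto the first coordinate
% eliminates y and yields a set cut out by a polynomial inequality:
\[
  \Phi\bigl(\{(x,y)\in\mathbb{R}^{2} : y^{2}-x=0\}\bigr)
  \;=\;\{x\in\mathbb{R} : \exists\, y,\ y^{2}=x\}
  \;=\;\{x\in\mathbb{R} : x\geq 0\}.
\]
```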

We have thus shown that the weak Pareto solution set \(\text{Sol}^{w}\mathrm{(VVI)}\) is a semi-algebraic subset of \(\mathbb R^{n}\). Then, according to Theorem 6, \(\text{Sol}^{w}\mathrm{(VVI)}\) has finitely many connected components, and each of them is a semi-algebraic subset of \(\mathbb R^{n}\).

The assertion (ii) can be proved in the same manner.

To prove (iii), observe that the density of \(\text{Sol}^{pr}\mathrm{(VVI)}\) in Sol(VVI) yields

$$ \text{Sol}^{pr}\mathrm{(VVI)}\subset \mathrm{Sol(VVI)}\subset\overline {\text{Sol}^{pr}\mathrm{(VVI)}}, $$
(41)

where \(\overline {\text {Sol}^{pr}\mathrm {(VVI)}}\) is the closure of \(\text{Sol}^{pr}\mathrm{(VVI)}\) in the Euclidean topology of \(\mathbb R^{n}\). Since \(\text{Sol}^{pr}\mathrm{(VVI)}\) has finitely many connected components, it follows from (41) and Lemma 1 that Sol(VVI) has a finite number of connected components.
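The mechanism behind the sandwich (41) can be seen in a simple one-dimensional situation (an illustration under the stated inclusion pattern, not taken from the source). If \(A\subset B\subset\overline{A}\) and A has finitely many connected components, then every point of \(B\setminus A\) adheres to the closure of some component of A, so each component of B absorbs at least one component of A; hence B cannot have more components than A.

```latex
% One-dimensional instance: A has two components, B = (0,2) has one.
\[
  A=(0,1)\cup(1,2)\ \subset\ B=(0,2)\ \subset\ \overline{A}=[0,2],
\]
% The isolated boundary point 1 lies in \overline{A}, and adjoining it
% merges the two components of A into the single component of B.
```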

Note that, for the problem discussed in Example 4, the assumption “the set \(\text{Sol}^{pr}\mathrm{(VVI)}\) is dense in Sol(VVI)” of Theorem 10 is satisfied.

Let \(f_{l}: \mathbb R^{n}\to \mathbb R\), \(l=1,\dots ,m\), be polynomial functions and let \(K\subset \mathbb R^{n}\) be given as in (34), where \(g_{i}, h_{j}\in \mathbb R[x_{1},\dots , x_{n}]\) for all i = 1, …, p, j = 1, …, s. In this setting, (VP) is called a polynomial vector optimization problem under polynomial constraints and is denoted by (pVP). The Pareto solution set, the weak Pareto solution set, the stationary point set, and the proper stationary point set of (pVP) are abbreviated respectively to Sol(pVP), \(\text{Sol}^{w}\mathrm{(pVP)}\), Stat(pVP), and Pr(pVP).

Theorem 11

(See [18, Theorem 4.5]) If K is convex and the assumption (a2) is fulfilled, then the following assertions are valid:

  1. (i)

    The set Stat(pVP) (resp., the set Pr(pVP)) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  2. (ii)

    If all the functions \(f_{l}\) are convex, then \(\text{Sol}^{w}\mathrm{(pVP)}\) is a semi-algebraic subset of \(\mathbb R^{n}\) (so it has finitely many connected components and each of them is a semi-algebraic subset of \(\mathbb R^{n}\));

  3. (iii)

    If all the functions \(f_{l}\) are convex and the set Pr(pVP) is dense in Sol(pVP), then Sol(pVP) has a finite number of connected components.

It is clear that the assumption “the set Pr(pVP) is dense in Sol(pVP)” in Theorem 11 is satisfied for the problem described in Example 5.