1 Introduction

This paper mainly deals with constrained vector optimization problems formulated as follows:

$$ \begin{array}{@{}rcl@{}} &&\text{Min}_{\mathbb{R}^{m}_{+}} f(x):=(f_{1}(x), \ldots, f_{m}(x))\\ &&\text{s.t.}\ \ x\in C:=\{x\in {\Omega} | g_{t}(x)\leqq 0,\ \ t\in T\}, \end{array} $$
(VP)

where \(f_{i}\), \(i\in I := \{1,\ldots,m\}\), and \(g_{t}\), \(t\in T\), are locally Lipschitz functions from a Banach space X to \(\mathbb {R}\), Ω is a nonempty closed subset of X, and T is an arbitrary (possibly infinite) index set. Problems of this type are called semi-infinite vector optimization problems when the space X is finite-dimensional and infinite vector optimization problems when X is infinite-dimensional (see [2, 16]). Models of the form (VP) arise naturally in a wide range of applications in mathematics, economics, and engineering; we refer the reader to the books [16, 35] and to the papers [4, 5, 7,8,9,10, 13, 14, 18, 23] and the references therein.

Our main concern is to study optimality conditions for approximate Pareto solutions of problem (VP). The study of approximate solutions is important because, from the computational point of view, numerical algorithms usually generate only approximate solutions when stopped after a finite number of steps. Furthermore, the solution set may be empty in the general noncompact case (see [20, 22, 26, 27, 33, 38]), whereas approximate solutions exist under very weak assumptions (see Propositions 2.1 and 2.2 in Section 2 below).

In the literature, there are many publications devoted to optimality conditions for approximate solutions of semi-infinite/infinite scalar optimization problems (see, for example, [13, 23, 24, 29,30,31, 34, 36]). However, in contrast to the scalar case, only a few works deal with optimality conditions for approximate Pareto solutions of semi-infinite/infinite vector optimization problems (see [25, 28, 37]). In [28, 37], the authors obtained necessary and sufficient optimality conditions for approximate Pareto solutions of convex semi-infinite/infinite vector optimization problems under various kinds of Farkas–Minkowski constraint qualifications. By using the Chankong–Haimes scalarization method, Kim and Son [25] established some necessary optimality conditions for approximate quasi Pareto solutions of a locally Lipschitz semi-infinite vector optimization problem.

In this paper, we present some necessary conditions of Karush–Kuhn–Tucker type for approximate (quasi) Pareto solutions of the problem (VP) under a Slater-type constraint qualification. Sufficient optimality conditions for approximate (quasi) Pareto solutions of the problem (VP) are also provided by introducing concepts of (strictly) generalized convex functions, defined in terms of the Clarke subdifferential of locally Lipschitz functions. The obtained results improve the corresponding ones in [25, 28, 37]. As an application, we establish optimality conditions for cone-constrained convex vector optimization problems and semidefinite vector optimization problems. In addition, some examples are given to illustrate the obtained results.

In Section 2, we recall some basic definitions and preliminaries from vector optimization and variational analysis. Section 3 presents the main results. The application of the results of Section 3 to cone-constrained convex vector optimization problems and semidefinite vector optimization problems is addressed in Section 4.

2 Preliminaries

2.1 Approximate Pareto Solutions

Let \(\mathbb {R}^{m}_{+} := \{y := (y_{1}, \ldots , y_{m}) | y_{i}\geqq 0,~ i\in I\}\) be the nonnegative orthant in \(\mathbb {R}^{m}\). For \(a, b\in \mathbb {R}^{m}\), by \(a\leqq b\), we mean \(b-a\in \mathbb {R}^{m}_{+}\); by \(a\leq b\), we mean \(b-a\in \mathbb {R}^{m}_{+}\setminus \{0\}\); and by a < b, we mean \(b-a\in \operatorname {int}\mathbb {R}^{m}_{+}\).

Definition 2.1

(See [32]) Let \(\xi \in \mathbb {R}^{m}_{+}\) and \(\bar x\in C\). We say that: (i) \(\bar {x}\) is a weakly Pareto solution (resp. a Pareto solution) of (VP) if there is no \(x\in C\) such that

$$ f(x)< f(\bar{x})\ \ (\text{resp. } f(x)\leq f(\bar{x})). $$

(ii) \(\bar x\) is a ξ-weakly Pareto solution (resp. a ξ-Pareto solution) of (VP) if there is no \(x\in C\) such that

$$ f(x) + \xi < f(\bar x) \ \ (\text{resp. } f(x) + \xi \leq f(\bar x)). $$

(iii) \(\bar x\) is a ξ-quasi-weakly Pareto solution (resp. a ξ-quasi Pareto solution) of (VP) if there is no \(x\in C\) such that

$$ f(x) + \|x - \bar x\| \xi < f(\bar x)\ \ (\text{resp. } f(x) + \|x-\bar x\|\xi\leq f(\bar x)). $$

Remark 2.1

If ξ = 0, then the concepts of a ξ-Pareto solution and a ξ-quasi-Pareto solution (resp. a ξ-weakly Pareto solution and a ξ-quasi-weakly Pareto solution) coincide with the concept of a Pareto solution (resp. a weakly Pareto solution). Hence, when dealing with approximate Pareto solutions, we only consider the case that \(\xi \in \mathbb {R}^{m}_{+}\setminus \{0\}\).
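These solution notions can be tested directly from the definitions on a finite sample of feasible points. The following Python sketch does this for the ξ-quasi-weak Pareto test of Definition 2.1(iii); the objective f, the sample of C, and the candidate point are illustrative assumptions and are not data from the paper.

```python
import numpy as np

def is_xi_quasi_weakly_pareto(f, x_bar, xi, samples):
    """Definition 2.1(iii): x_bar passes the test if no sampled x in C satisfies
    f(x) + ||x - x_bar|| * xi < f(x_bar) componentwise (strict inequalities)."""
    fx_bar = f(x_bar)
    for x in samples:
        if np.all(f(x) + abs(x - x_bar) * xi < fx_bar):
            return False
    return True

# Illustrative data (not from the paper): f(x) = (x^2, (x - 1)^2) on C = [0, 1].
f = lambda x: np.array([x ** 2, (x - 1.0) ** 2])
xi = np.array([0.1, 0.1])
samples = np.linspace(0.0, 1.0, 201)
print(is_xi_quasi_weakly_pareto(f, 0.5, xi, samples))   # True: the test holds at x_bar = 0.5 on this sample
```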

Definition 2.2

Let A be a subset of \(\mathbb {R}^{m}\) and \({\bar {y}} \in \mathbb {R}^{m}\). The set \(A \cap ({\bar {y}} - \mathbb {R}^{m}_{+})\) is called a section of A at \({\bar {y}}\) and is denoted by \([A]_{\bar {y}}.\) The section \([A]_{\bar {y}}\) is said to be bounded if and only if there is \(a\in \mathbb {R}^{m}\) such that

$$ [A]_{\bar{y}} \subset a + \mathbb{R}^{m}_{+}. $$

Remark 2.2

Let \(\bar y\) be an arbitrary element in f(C). It is easily seen that every ξ-Pareto solution (resp. ξ-quasi Pareto solution) of (VP) on \(C\cap f^{-1}\left ([f(C)]_{\bar y}\right )\) is also a ξ-Pareto one (resp. ξ-quasi Pareto one) of (VP) on C.

The following results give some sufficient conditions for the existence of approximate Pareto solutions of (VP).

Proposition 2.1

(Existence of approximate Pareto solutions) Assume that f(C) has a nonempty bounded section. Then for each \(\xi \in \mathbb {R}^{m}_{+}\setminus \{0\},\) the problem (VP) admits at least one ξ-Pareto solution.

Proof

The assertion follows directly from Remark 2.2 and [17, Lemma 3.1], and is therefore omitted. □
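To see Proposition 2.1 at work in a noncompact setting, consider the following illustration (the data are chosen here for illustration and are not taken from the paper): let \(X=\mathbb{R}\), \(C=\mathbb{R}\), and

$$ f(x):=(e^{-x}, e^{-x}), \ \ x\in\mathbb{R}. $$

Then \(f(C)=\{(y, y)| y>0\}\), so every section of f(C) is contained in \(0+\mathbb{R}^{2}_{+}\) and hence bounded, while (VP) has no Pareto solution because \(f(x')\leq f(\bar x)\) whenever \(x'>\bar x\). Nevertheless, for \(\xi=(\xi_{1}, 0)\) with \(\xi_{1}>0\), every \(\bar x\geqq -\ln \xi_{1}\) is a ξ-Pareto solution: the inequality \(f_{1}(x)+\xi_{1}\leqq f_{1}(\bar x)=e^{-\bar x}\leqq \xi_{1}\) would force \(e^{-x}\leqq 0\), which is impossible.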

Proposition 2.2

(Existence of approximate quasi Pareto solutions) If f(C) has a nonempty bounded section, then for every \(\xi \in \operatorname {int} \mathbb {R}^{m}_{+},\) the problem (VP) admits at least one ξ-quasi Pareto solution.

Proof

Let \(x^{0}\in C\) be such that the section of f(C) at \(f(x^{0})\) is bounded. By Proposition 2.1, there exists \(\bar x \in C\) such that \(f(\bar x)\leqq f(x^{0})\) and

$$ \begin{array}{@{}rcl@{}} f(C)\ \cap\ [f(\bar x) - \xi - \mathbb{R}^{m}_{+}\setminus\{0\}] &=& \emptyset. \end{array} $$

Consequently,

$$ f(C)\cap [f(\bar x) - \xi - \text{int} \mathbb{R}^{m}_{+}]\ =\ \emptyset. $$

By the continuity of f and the closedness of C, for each \(x\in C\), the set

$$ \{u \in C | f(u)+\|u-x\|\xi \ \leqq \ f(x)\} $$

is closed. By [1, Theorem 3.1], there exists \(x^{*}\in C\) such that \(f(x^{*})<f(\bar x)\) and

$$ \begin{array}{@{}rcl@{}} f(x) + \|x-x^{*}\|\xi- f(x^{*}) & \notin& -\mathbb{R}^{m}_{+} \quad \textrm{for all} \quad x \in C, x\neq x^{*}. \end{array} $$

Consequently,

$$ \begin{array}{@{}rcl@{}} f(x) + \|x-x^{*}\|\xi & \nleq & f(x^{*}) \quad \textrm{ for all } \quad x \in C, x \ne x^{*}. \end{array} $$

Thus, \(x^{*}\) is a ξ-quasi Pareto solution of (VP). The proof is complete. □

2.2 Normals and Subdifferentials

For a Banach space X, the bracket 〈⋅,⋅〉 stands for the canonical pairing between X and its dual \(X^{*}\). The closed unit ball of \(X^{*}\) is denoted by \(B_{X^{*}}\). The closed ball with center x and radius δ is denoted by B(x,δ). Let A be a nonempty subset of X. The topological interior, the topological closure, and the convex hull of A are denoted, respectively, by \(\operatorname{int}A\), \(\operatorname{cl}A\), and \(\operatorname{conv}A\). The symbol \(A^{\circ}\) stands for the polar cone of a given set \(A\subset X\), i.e.,

$$ A^{\circ}=\{x^{*}\in X^{*}| \langle x^{*}, x\rangle\leqq 0, \ \ \forall x\in A\}. $$

Let \(\varphi \colon X \to \mathbb {R}\) be a locally Lipschitz function. The Clarke generalized directional derivative of φ at \(\bar x\in X\) in the direction \(d\in X\), denoted by \(\varphi ^{\circ }(\bar x; d)\), is defined by

$$ \varphi^{\circ}(\bar x; d):=\limsup_{\substack{x\to\bar x \\ t\downarrow 0}}\frac{\varphi(x+td)-\varphi(x)}{t}. $$

The Clarke subdifferential of φ at \(\bar x\) is defined by

$$ \begin{array}{@{}rcl@{}} \partial \varphi (\bar x):=\{x^{*}\in X^{*} | \langle x^{*}, d\rangle\leqq \varphi^{\circ}(\bar x; d), \ \ \forall d\in X\}. \end{array} $$
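The generalized directional derivative can be estimated numerically by sampling the difference quotients in its defining limsup. The following Python sketch is an illustration only (the sampling scheme and tolerances are ad hoc assumptions); it uses the function \(f_{1}(x)=x^{2}\cos(1/x)\), \(f_{1}(0)=0\), which reappears in Examples 3.1 and 3.2 below, where \(f_{1}^{\circ}(0; d)=|d|\).

```python
import numpy as np

def clarke_dir_deriv(phi, x_bar, d, radius=1e-3, n=20000, seed=0):
    """Crude numerical estimate of phi°(x_bar; d) = limsup_{x -> x_bar, t -> 0+}
    [phi(x + t*d) - phi(x)] / t, obtained by sampling x near x_bar and small t > 0."""
    rng = np.random.default_rng(seed)
    xs = x_bar + radius * (2.0 * rng.random(n) - 1.0)   # points x close to x_bar
    ts = radius * 10.0 ** (-6.0 * rng.random(n))        # step sizes t > 0, log-uniform over several decades
    return max((phi(x + t * d) - phi(x)) / t for x, t in zip(xs, ts))

f1 = lambda x: x * x * np.cos(1.0 / x) if x != 0.0 else 0.0
for d in (1.0, -1.0, 0.5):
    print(d, clarke_dir_deriv(f1, 0.0, d))   # each estimate should be close to |d|
```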

Let S be a nonempty closed subset of X. The Clarke tangent cone to S at \(\bar x\in S\) is defined by

$$ T(\bar x; S):=\{v\in X| d_{S}^{\circ}(\bar x; v)=0\}, $$

where \(d_{S}\) denotes the distance function to S. The Clarke normal cone to S at \(\bar x\in S\) is defined by

$$ N(\bar x; S):=[T(\bar x; S)]^{\circ}. $$

The following lemmas will be used in the sequel.

Lemma 2.1

(See [12, p. 52]) Let φ be a locally Lipschitz function from X to \(\mathbb {R}\) and S be a nonempty closed subset of X. If \(\bar x\) is a local minimizer of φ on S, then

$$ 0\in\partial \varphi(\bar x)+N(\bar x; S). $$

Lemma 2.2

(See [12, Proposition 2.3.3]) Let \(\varphi _{l}\colon X\to \mathbb {R}\), l = 1,…,p, \(p\geqq 2\), be locally Lipschitz around \(\bar x\in X\). Then, we have the following inclusion:

$$ \partial (\varphi_{1}+\cdots+\varphi_{p}) (\bar x)\subset \partial \varphi_{1} (\bar x) +\cdots+\partial \varphi_{p} (\bar x). $$

Lemma 2.3

(See [12, Proposition 2.3.12]) Let \(\varphi _{l}\colon X\to {\mathbb {R}}\), l = 1,…,p, be locally Lipschitz around \(\bar x\in X\). Then, the function \( \phi (\cdot ):=\max \limits \{\varphi _{l}(\cdot )|l=1, \ldots , p\}\) is also locally Lipschitz around \(\bar x\) and one has

$$ \partial \phi(\bar x)\subset \bigcup\left\{\sum\limits_{l=1}^{p}\lambda_{l}\partial\varphi_{l} (\bar x)| (\lambda_{1}, \ldots, \lambda_{p})\in \mathbb{R}^{p}_{+}, \sum\limits_{l=1}^{p}\lambda_{l}=1, \lambda_{l}[\varphi_{l}(\bar x)-\phi(\bar x)]=0\right\}. $$
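For instance (a standard one-dimensional illustration, not taken from the paper), take p = 2, \(\varphi_{1}(x)=x\), \(\varphi_{2}(x)=-x\) on \(X=\mathbb{R}\), and \(\bar x=0\). Then \(\phi(x)=|x|\) and Lemma 2.3 reads

$$ \partial\phi(0)=[-1, 1]\subset\bigcup\left\{\lambda_{1}\{1\}+\lambda_{2}\{-1\}| \lambda_{1}, \lambda_{2}\geqq 0, \lambda_{1}+\lambda_{2}=1\right\}=[-1, 1], $$

where the conditions \(\lambda_{l}[\varphi_{l}(0)-\phi(0)]=0\) hold trivially since \(\varphi_{1}(0)=\varphi_{2}(0)=\phi(0)=0\).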

3 Optimality Conditions

Hereafter, we assume that the following assumptions are satisfied: (i) T is a compact topological space; (ii) X is separable, or T is metrizable and \(g_{t}(x)\) is upper semicontinuous in t for each \(x\in X\).

Denote by \(\mathbb {R}_{+}^{|T|}\) the set of all functions \(\mu \colon T\to \mathbb {R}_{+}\) such that μt := μ(t) = 0 for all tT except for finitely many points. The active constraint multipliers set of the problem (VP) at \(\bar x\in {\Omega }\) is defined by

$$ A(\bar x):=\left\{\mu\in \mathbb{R}_{+}^{|T|}| \mu_{t}g_{t}(\bar x)=0, \ \ \forall t\in T\right\}. $$

For each \(x\in X\), put \(G(x):=\max \limits _{t\in T} g_{t}(x)\) and

$$ T(x):= \left\{t\in T| g_{t}(x)=G(x)\right\}. $$
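Since every \(\mu\in\mathbb{R}^{|T|}_{+}\) has finite support, it can be represented by the finite list of its nonzero values, and membership in \(A(\bar x)\) amounts to checking the complementarity conditions \(\mu_{t}g_{t}(\bar x)=0\). A minimal Python sketch follows; the numerical data are illustrative, with the family \(g_{t}(x)=tx\), \(t\in[1,2]\), taken from Example 3.1 below.

```python
def in_active_multiplier_set(mu, g, x_bar, tol=1e-12):
    """mu: dict {t: mu_t} listing the finitely many nonzero multipliers of mu in R_+^{|T|};
    g(t, x) evaluates g_t(x).  Checks mu_t >= 0 and mu_t * g_t(x_bar) = 0 for all t."""
    return all(m >= 0.0 and abs(m * g(t, x_bar)) <= tol for t, m in mu.items())

g = lambda t, x: t * x                                                  # the constraints of Example 3.1
print(in_active_multiplier_set({1.0: 0.5, 1.7: 2.0}, g, x_bar=0.0))     # True: g_t(0) = 0 for all t
print(in_active_multiplier_set({1.0: 0.5}, g, x_bar=-1.0))              # False: 0.5 * g_1(-1) != 0
```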

Fix \(\xi \in \mathbb {R}^{m}_{+}\setminus \{0\}\). The following theorem gives a necessary optimality condition of fuzzy Karush–Kuhn–Tucker type for ξ-weakly Pareto solutions of the problem (VP).

Theorem 3.1

Let \(\bar x\) be a ξ-weakly Pareto solution of the problem (VP). If the following constraint qualification condition holds at \(\bar x\):

$$ (\mathcal{U})\qquad \text{there exists } \bar d\in T(\bar x; {\Omega}) \text{ such that } G^{\circ}(\bar x; \bar d)<0, $$

then, for any δ > 0 small enough, there exist \(x_{\delta }\in C\cap B(\bar x, \delta )\) and \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\) such that

$$ \begin{array}{@{}rcl@{}} &&0\in \sum\limits_{i\in I} \lambda_{i}\partial f_{i}(x_{\delta})+\mathbb{R}_{+}\mathrm{cl conv}\left\{\bigcup \partial g_{t}(x_{\delta}) | t\in T(x_{\delta})\right\}+N(x_{\delta}; {\Omega})+\frac{1}{\delta}\max_{i\in I}\{\xi_{i}\}B_{X^{*}},\\ &&\lambda_{i}\left[f_{i}(x_{\delta})-f_{i}(\bar x)+\xi_{i}-\max_{i\in I}\{f_{i}(x_{\delta})-f_{i}(\bar x)+\xi_{i}\}\right]=0, \ \ i\in I, \end{array} $$

where \(\operatorname{cl\,conv}(\cdot)\) denotes the closed convex hull, with the closure taken in the weak*-topology of the dual space \(X^{*}\).

Proof

For each xX, put \(\psi (x):=\max \limits _{i\in I}\{f_{i}(x)-f_{i}(\bar x)+\xi _{i}\}.\) Then, we have \(\psi (\bar x)=\max \limits _{i\in I}\{\xi _{i}\}\). Since \(\bar x\) is a ξ-weakly Pareto solution of (VP), one has

$$ \psi(x)\geqq 0, \ \ \forall x\in C. $$
(3.1)

Clearly, ψ is locally Lipschitz and bounded from below on C. By (3.1), we have

$$ \psi(\bar x)\leqq \inf_{x\in C}\psi(x)+\max_{i\in I}\{\xi_{i}\}. $$

By the Ekeland variational principle [15, Theorem 1.1], for any δ > 0, there exists xδC such that \(\|x_{\delta }-\bar x\|<\delta \) and

$$ \psi(x_{\delta})\leqq \psi(x)+\frac{1}{\delta}\max_{i\in I}\{\xi_{i}\}\|x- x_{\delta}\|, \ \ \forall x\in C. $$

For each xC, put

$$ \varphi(x):=\psi(x)+\frac{1}{\delta}\max_{i\in I}\{\xi_{i}\}\|x- x_{\delta}\|. $$

Then, xδ is a minimizer of φ on C. By Lemma 2.1, we have

$$ 0\in \partial \varphi(x_{\delta})+N(x_{\delta}; C). $$
(3.2)

By Lemma 2.2 and [21, Example 4, p. 198], one has

$$ \partial \varphi(x_{\delta})\subset \partial \psi(x_{\delta})+\frac{1}{\delta}\max_{i\in I}\{\xi_{i}\} B_{X^{*}}. $$
(3.3)

Thanks to Lemma 2.3, we have

$$ \partial \psi(x_{\delta})\subset \bigg\{\sum\limits_{i\in I}\lambda_{i}\partial f_{i}(x_{\delta}) | \lambda_{i}\geqq 0, i\in I, \sum\limits_{i\in I}\lambda_{i}=1, \lambda_{i}[f_{i}(x_{\delta})-f_{i}(\bar x)+\xi_{i}-\psi(x_{\delta})]=0 \bigg\}. $$
(3.4)

We claim that there exists \(\bar {\delta }>0\) such that for all \(x\in B(\bar x, \bar {\delta })\) there is \(d\in T(x; {\Omega})\) satisfying \(G^{\circ}(x; d)<0\). Indeed, if otherwise, then there exists a sequence \(\{x^{k}\}\) converging to \(\bar x\) such that \(G^{\circ }(x^{k}; d)\geqq 0\) for all \(k\in \mathbb {N}\) and all \(d\in T(x^{k}; {\Omega})\). Hence, \(G^{\circ }(x^{k}; \bar d)\geqq 0\) for all \(k\in \mathbb {N}\). By the upper semicontinuity of \(G^{\circ}(\cdot; \cdot)\), we have

$$ G^{\circ}(\bar x; \bar d)\geqq \limsup_{k\to\infty}G^{\circ}(x^{k}; \bar d)\geqq 0, $$

contrary to condition (𝓤).

By [19, Theorem 6] and [36, Theorem 2.1], for each \(\delta \in (0, \bar \delta )\), we have

$$ \begin{array}{@{}rcl@{}} N(x_{\delta}; C)&\subset& N(x_{\delta}; {\Omega})+\mathbb{R}_{+}\partial G(x_{\delta})\\ &\subset& N(x_{\delta}; {\Omega})+\mathbb{R}_{+}\mathrm{cl conv}\left\{\bigcup \partial g_{t}(x_{\delta})| t\in T(x_{\delta})\right\}. \end{array} $$

Combining this with (3.2)–(3.4), we obtain the desired assertion. □

Remark 3.1

By using approximate subdifferentials, Lee et al. [28, Theorem 8.3] derived some necessary optimality conditions for ξ-weakly Pareto solutions of a convex infinite vector optimization problem. However, we are not aware of any results on optimality conditions for ξ-weakly Pareto solutions of a nonconvex problem of type (VP); Theorem 3.1 may be the first result of this type. We also note that when T is a finite set, Theorem 3.1 corresponds to [11, Theorem 3.4], where the optimality condition was given in terms of the limiting subdifferential.

Theorem 3.2

Let \(\bar x\) be a ξ-quasi-weakly Pareto solution of the problem (VP). If the condition (𝓤) holds at \(\bar x\) and the convex hull of \(\left \{\bigcup \partial g_{t}(\bar x)| t\in T(\bar x)\right \}\) is weak*-closed, then there exist \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\), and \(\mu \in A(\bar x)\) such that

$$ \begin{array}{@{}rcl@{}} 0\in \sum\limits_{i\in I}\lambda_{i}\partial f_{i}(\bar x)+\sum\limits_{t\in T}\mu_{t}\partial g_{t}(\bar x)+\sum\limits_{i\in I}\lambda_{i}\xi_{i} B_{X^{*}}+N(\bar x; {\Omega}). \end{array} $$
(3.5)

Proof

For each xX, put \({\Phi } (x):=\max \limits _{i\in I}\{f_{i}(x)-f_{i}(\bar x)+\xi _{i}\|x-\bar x\|\}.\) Then, \({\Phi }(\bar x)=0\). Since \(\bar x\) is a ξ-quasi-weakly Pareto solution of (VP), we have

$$ {\Phi}(x)\geqq 0, \ \ \forall x\in C. $$

This means that \(\bar x\) is a minimizer of Φ on C. By Lemma 2.1, one has

$$ 0\in\partial {\Phi}(\bar x)+N(\bar x; C). $$
(3.6)

By Lemma 2.3, we have

$$ \partial {\Phi}(\bar x)\subset \left\{ \sum\limits_{i\in I}\lambda_{i}\left[\partial f_{i}(\bar x)+\xi_{i}B_{X^{*}}\right]| \lambda_{i}\geqq 0, \sum\limits_{i\in I}\lambda_{i}=1\right\}. $$
(3.7)

Since condition (𝓤) holds at \(\bar x\) and the convex hull of \(\left \{\bigcup \partial g_{t}(\bar x)| t\in T(\bar x)\right \}\) is weak*-closed, we obtain

$$ \begin{array}{ll} N(\bar x; C)&\subset N(\bar x; {\Omega})+\mathbb{R}_{+}\partial G(\bar x)\\ &\subset N(\bar x; {\Omega})+\mathbb{R}_{+}\text{conv}\left\{\bigcup \partial g_{t}(\bar x)| t\in T(\bar x)\right\}. \end{array} $$
(3.8)

To finish the proof of the theorem, it remains to combine (3.6), (3.7), and (3.8). □

Remark 3.2

(i) When X is a finite-dimensional space and the constraint functions \(g_{t}\colon X\to \mathbb {R}\), \(t\in T\), are locally Lipschitz in x uniformly in \(t\in T\), i.e., for each \(x\in X\), there are a neighborhood U of x and a constant K > 0 such that

$$ |g_{t}(u)-g_{t}(v)|\leqq K\|u-v\|, \ \ \forall u, v \in U \ \ \text{and}\ \ \forall t\in T, $$

then the set \(\left \{\bigcup \partial g_{t}(x)| t\in T(x)\right \}\) is compact. Consequently, its convex hull is always closed.

(ii) Recently, by using the Chankong–Haimes scalarization scheme (see [6]), Kim and Son [25, Theorem 3.3] obtained some necessary optimality conditions for ξ-quasi Pareto solutions of a locally Lipschitz semi-infinite vector optimization problem. We note here that condition (𝓤) is weaker than the following condition (𝒜i) used in [25, Theorem 3.3]:

$$ (\mathcal{A}_{i})\qquad \text{there exists } d\in T(\bar x; {\Omega}) \text{ such that } f_{j}^{\circ}(\bar x; d)<0 \text{ for all } j\in I\setminus\{i\}, \text{ and } g_{t}^{\circ}(\bar x; d)<0 \text{ for all } t\in T(\bar x). $$

Thus, Theorem 3.2 improves [25, Theorem 3.3]. To illustrate, we consider the following example:

Example 3.1

Let \(f\colon \mathbb {R}\to \mathbb {R}^{2}\) be defined by \(f(x) := (f_{1}(x), f_{2}(x))\), where

$$ f_{1}(x):= \left\{\begin{array}{ll} x^{2}\cos\frac{1}{x} &\text{if}\ \ x\neq 0,\\ 0 &\text{otherwise}, \end{array}\right. $$

and \(f_{2}(x) := 0\) for all \(x\in \mathbb {R}\). Assume that \({\Omega }=\mathbb {R}\), T = [1,2], and \(g_{t}(x) = tx\) for all \(x\in \mathbb {R}\). Then, the feasible set of (VP) is \(C=(-\infty , 0]\). Let \(\bar x:=0\in C\). Clearly, for any \(\xi \in \mathbb {R}^{2}_{+}\setminus \{0\}\), \(\bar x\) is a ξ-quasi-weakly Pareto solution of (VP). It is easy to check that

$$ \partial f_{1}(\bar x)=[-1, 1], \partial f_{2}(\bar x)=\{0\}, \ \ \text{and} \ \ \partial g_{t}(\bar x)=\{t\}, \ \ \ \forall t\in T. $$

Hence, for each \(d\in \mathbb {R}\), we have

$$ f_{1}^{\circ}(\bar x; d)=|d|, f_{2}^{\circ}(\bar x; d)=0,\ \ \text{and} \ \ g_{t}^{\circ}(\bar x; d)=td, \ \ \ \forall t\in T. $$

Clearly, every d < 0 satisfies condition (𝓤). However, for every i = 1,2, condition (𝒜i) does not hold. Thus, Theorem 3.2 can be applied to this example, but [25, Theorem 3.3] cannot.

The following concepts are inspired by [10].

Definition 3.1

Let \(f := (f_{1},\ldots,f_{m})\) and \(g_{T} := (g_{t})_{t\in T}\). (i) We say that \((f, g_{T})\) is generalized convex on Ω at \(\bar x\) if, for any x ∈Ω, \(z_{i}^{*}\in \partial f_{i}(\bar x)\), i = 1,…,m, and \(x^{*}_{t}\in \partial g_{t}(\bar x)\), \(t\in T\), there exists \(\nu \in T(\bar x; {\Omega })\) satisfying

$$ \begin{array}{@{}rcl@{}} &&f_{i}(x)-f_{i}(\bar x)\geqq \langle z^{*}_{i}, \nu\rangle, \ \ i=1, \ldots, m,\\ &&g_{t}(x)-g_{t}(\bar x)\geqq \langle x^{*}_{t}, \nu\rangle, \ \ t\in T, \end{array} $$

and

$$ \langle b^{*}, \nu\rangle\leqq \|x-\bar x\|, \ \ \forall b^{*}\in B_{X^{*}}. $$

(ii) We say that (f,gT) is strictly generalized convex on Ω at \(\bar x\) if, for any \(x\in {\Omega }\setminus \{\bar x\}\), \(z_{i}^{*}\in \partial f_{i}(\bar x)\), i = 1,…,m, and \(x^{*}_{t}\in \partial g_{t}(\bar x)\), tT, there exists \(\nu \in T(\bar x; {\Omega })\) satisfying

$$ \begin{array}{@{}rcl@{}} &&f_{i}(x)-f_{i}(\bar x)>\langle z^{*}_{i}, \nu\rangle, \ \ i=1, \ldots, m,\\ &&g_{t}(x)-g_{t}(\bar x)\geqq \langle x^{*}_{t}, \nu\rangle, \ \ t\in T, \end{array} $$

and

$$ \langle b^{*}, \nu\rangle\leqq \|x-\bar x\|, \ \ \forall b^{*}\in B_{X^{*}}. $$

Remark 3.3

Clearly, if Ω is convex and \(f_{i}\), \(i\in I\), and \(g_{t}\), \(t\in T\), are convex (resp. strictly convex), then \((f, g_{T})\) is generalized convex (resp. strictly generalized convex) on Ω at any \(\bar x\in {\Omega }\) with \(\nu :=x-\bar x\) for each x ∈Ω. Furthermore, by an argument similar to that of [10, Example 3.2], one can show that the class of generalized convex functions is strictly larger than that of convex functions.

Theorem 3.3

Let \(\bar x\in C\) and assume that there exist \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\), and \(\mu \in A(\bar x)\) satisfying (3.5). (i) If \((f, g_{T})\) is generalized convex on Ω at \(\bar x\), then \(\bar x\) is a ξ-quasi-weakly Pareto solution of (VP). (ii) If \((f, g_{T})\) is strictly generalized convex on Ω at \(\bar x\), then \(\bar x\) is a ξ-quasi Pareto solution of (VP).

Proof

We will follow the proof scheme of [11, Theorem 3.13]. By (3.5), there exist \(z_{i}^{*}\in \partial f_{i}(\bar x)\), i = 1,…,m, \(x^{*}_{t}\in \partial g_{t}(\bar x)\), tT, \(b^{*}\in B_{X^{*}}\), and \(\omega ^{*}\in N(\bar x; {\Omega })\) such that

$$ \sum\limits_{i\in I}\lambda_{i}z^{*}_{i}+\sum\limits_{t\in T}\mu_{t}x^{*}_{t}+\sum\limits_{i\in I}\lambda_{i}\xi_{i} b^{*}+\omega^{*}=0, $$

or, equivalently,

$$ \sum\limits_{i\in I}\lambda_{i}z^{*}_{i}+\sum\limits_{t\in T}\mu_{t}x^{*}_{t}+\sum\limits_{i\in I}\lambda_{i}\xi_{i} b^{*}=-\omega^{*}. $$

We first prove (i). Suppose, on the contrary, that \(\bar x\) is not a ξ-quasi-weakly Pareto solution of (VP). Then there is \(x\in C\) such that

$$ f(x)+\|x-\bar x\|\xi< f(\bar x). $$

From this and the fact that \({\sum }_{i\in I}\lambda _{i}=1\), we obtain

$$ \sum\limits_{i\in I} \lambda_{i} f_{i}(x) +\sum\limits_{i\in I} \lambda_{i} \xi_{i}\|x-\bar x\|< \sum\limits_{i\in I} \lambda_{i} f_{i}(\bar x). $$
(3.9)

Since (f,gT) is generalized convex on Ω at \(\bar x\), for such x, there exists \(\nu \in T(\bar x; {\Omega })\) such that

$$ \begin{array}{@{}rcl@{}} 0\leqq -\langle \omega^{*}, \nu\rangle&=& \sum\limits_{i\in I}\lambda_{i} \langle z^{*}_{i}, \nu \rangle +\sum\limits_{t\in T}\mu_{t} \langle x^{*}_{t}, \nu \rangle +\sum\limits_{i\in I}\lambda_{i}\xi_{i} \langle b^{*}, \nu\rangle \\ &\leqq &\sum\limits_{i\in I}\lambda_{i} [f_{i}(x)-f_{i}(\bar x)]+\sum\limits_{t\in T}\mu_{t} [g_{t}(x)-g_{t}(\bar x)]+\sum\limits_{i\in I}\lambda_{i}\xi_{i}\|x-\bar x\|. \end{array} $$

Hence,

$$ \sum\limits_{i\in I} \lambda_{i} f_{i}(\bar x)\leqq \sum\limits_{i\in I} \lambda_{i} f_{i}(x)+\sum\limits_{t\in T}\mu_{t} [g_{t}(x)-g_{t}(\bar x)]+\sum\limits_{i\in I}\lambda_{i}\xi_{i}\|x-\bar x\|. $$

Combining this with the facts that \(x, \bar x\in C\), and \(\mu \in A(\bar x)\), we conclude that

$$ \sum\limits_{i\in I} \lambda_{i} f_{i}(\bar x)\leqq \sum\limits_{i\in I} \lambda_{i} f_{i}(x) +\sum\limits_{i\in I}\lambda_{i}\xi_{i}\|x-\bar x\|, $$
(3.10)

contrary to (3.9).

We now prove (ii). Assume on the contrary that \(\bar x\) is not a ξ-quasi Pareto solution of (VP), i.e., there exists yC satisfying

$$ f(y)+\|y-\bar x\|\xi\leq f(\bar x). $$

This implies that \(y\neq \bar x\) and

$$ \sum\limits_{i\in I} \lambda_{i} f_{i}(y) +\sum\limits_{i\in I} \lambda_{i} \xi_{i}\|y-\bar x\|\leqq \sum\limits_{i\in I} \lambda_{i} f_{i}(\bar x). $$
(3.11)

Since (f,gT) is strictly generalized convex on Ω at \(\bar x\), for such y, there exists \(\vartheta \in T(\bar x; {\Omega })\) such that

$$ \begin{array}{@{}rcl@{}} 0\leqq -\langle \omega^{*}, \vartheta\rangle&=& \sum\limits_{i\in I}\lambda_{i} \langle z^{*}_{i}, \vartheta \rangle +\sum\limits_{t\in T}\mu_{t} \langle x^{*}_{t}, \vartheta \rangle +\sum\limits_{i\in I}\lambda_{i}\xi_{i} \langle b^{*}, \vartheta\rangle \\ &<& \sum\limits_{i\in I}\lambda_{i} [f_{i}(y)-f_{i}(\bar x)]+\sum\limits_{t\in T}\mu_{t} [g_{t}(y)-g_{t}(\bar x)]+\sum\limits_{i\in I}\lambda_{i}\xi_{i}\|y-\bar x\|. \end{array} $$

An analysis similar to that in the proof of (3.10) shows that

$$ \sum\limits_{i\in I} \lambda_{i} f_{i}(\bar x)< \sum\limits_{i\in I} \lambda_{i} f_{i}(y) +\sum\limits_{i\in I}\lambda_{i}\xi_{i}\|y-\bar x\|, $$

contrary to (3.11). □

Remark 3.4

The conclusions of Theorem 3.3 remain valid if \((f, g_{T})\) is generalized convex in the sense of Chuong and Kim [10, Definition 3.3].

We now present an example which demonstrates the importance of the generalized convexity of (f,gT) in Theorem 3.3. In particular, condition (3.5) alone is not sufficient to guarantee that \(\bar x\) is a ξ-quasi-weakly Pareto solution of (VP) if the generalized convexity of (f,gT) on Ω at \(\bar x\) is violated.

Example 3.2

Let \(f\colon \mathbb {R}\to \mathbb {R}^{2}\) be defined by f(x) := (f1(x),f2(x)), where

$$ f_{i}(x):= \left\{\begin{array}{ll} x^{2}\cos\frac{1}{x}\ \ &\text{if}\ \ x\neq 0,\\ 0\ \ &\text{otherwise} \end{array}\right. $$

for i = 1, 2. Assume that \({\Omega }=\mathbb {R}\), T = [0,1], and \(g_{t}(x) = x-t\) for all \(x\in \mathbb {R}\) and t ∈ [0,1]. Then, the feasible set of (VP) is \(C=(-\infty , 0]\). Let \(\bar x:=0\in C\). Clearly, \(f_{i}\), i = 1, 2, and \(g_{t}\), \(t\in T\), are locally Lipschitz at \(\bar x\). An easy computation shows that

$$ \partial f_{1}(\bar x)=\partial f_{2}(\bar x)=[-1, 1], \ \ \text{and}\ \ \partial g_{t}(\bar x)=\{1\}, \ \ \ \forall t\in T. $$

Take an arbitrary \(\xi =(\xi _{1}, \xi _{2})\in \mathbb {R}^{2}_{+}\setminus \{0\}\) satisfying \(\xi _{i}<\frac {1}{\pi }\) for i = 1, 2. Then, we see that \(\bar x\) satisfies condition (3.5) for \(\lambda _{1}=\lambda _{2}=\frac {1}{2}\) and \(\mu_{t} = 0\) for all \(t\in T\). However, \(\bar x\) is not a ξ-quasi-weakly Pareto solution of (VP). Indeed, let \(\hat {x}=-\frac {1}{\pi }\in C\). Then

$$ f_{i}(\hat{x})+\xi_{i}\|\hat{x}-\bar x\|=\frac{1}{\pi}\left( \xi_{i}-\frac{1}{\pi}\right)<f_{i}(\bar x), \ \ \ \forall i=1, 2, $$

as required. We now show that (f,gT) is not generalized convex on Ω at \(\bar x\). Indeed, by choosing \(z_{i}^{*}=0\in \partial f_{i}(\bar x)\) for i = 1,2, we have

$$ f_{i}(\hat{x})-f_{i}(\bar x)=-\frac{1}{\pi^{2}}<\langle z^{*}_{i}, \nu\rangle, \ \ \ \forall \nu\in \mathbb{R}. $$
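The failure of the ξ-quasi-weak Pareto property at \(\hat x=-\frac{1}{\pi}\) can also be checked numerically; a minimal Python sketch follows, where the particular choice \(\xi_{1}=\xi_{2}=0.1<\frac{1}{\pi}\) is only for illustration.

```python
import numpy as np

f = lambda x: x * x * np.cos(1.0 / x) if x != 0.0 else 0.0   # f_1 = f_2 in Example 3.2
x_bar, x_hat, xi = 0.0, -1.0 / np.pi, 0.1                     # any xi_i < 1/pi works
lhs = f(x_hat) + xi * abs(x_hat - x_bar)                      # f_i(x_hat) + xi_i * |x_hat - x_bar|
print(lhs, f(x_bar), lhs < f(x_bar))                          # approx -0.0695 < 0 = f_i(x_bar)
```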

4 Applications

4.1 Cone-Constrained Convex Vector Optimization Problems

In this subsection, we consider the following cone-constrained convex vector optimization problem:

$$ \begin{array}{@{}rcl@{}} &&\operatorname{Min}_{\mathbb{R}^{m}_{+}} f(x):=(f_{1}(x), \ldots, f_{m}(x))\\ &&\text{s.t.}\ \ x\in C:=\{x\in {\Omega} | g(x)\in -K\}, \end{array} $$
(CCVP)

where the function f and the set Ω are as in the previous sections, K is a closed convex cone in a normed space Y, and g is a continuous and K-convex mapping from X to Y. Recall that the mapping g is said to be K-convex if

$$ g[\theta x+(1-\theta)y]-\theta g(x)-(1-\theta)g(y) \in -K,\ \ \forall x,y \in X,\ \ \forall \theta \in [0,1]. $$

Let \(Y^{*}\) be the dual space of Y and \(K^{+}\) be the positive polar cone of K, i.e.,

$$ K^{+}:=\{y^{\ast}\in Y^{\ast} \mid \langle y^{\ast}, k\rangle \geqq 0,\ \ \forall k \in K \}. $$

Then, \(K^{+}\) is weak*-closed. Moreover, it is easily seen that

$$ g(x) \in -K \Leftrightarrow g_{s}(x) \leqq 0,\ \ \forall s \in K^{+}, $$

where \(g_{s}(x) := \langle s, g(x)\rangle\). Hence, the problem (CCVP) is equivalent to the following vector optimization problem:

$$ \begin{array}{@{}rcl@{}} &&\operatorname{Min}_{\mathbb{R}^{m}_{+}} f(x):=(f_{1}(x), \ldots, f_{m}(x))\\ &&\text{s.t.}\ \ x\in C:=\{x\in {\Omega} | g_{s}(x) \leqq 0,\ \ s\in K^{+}\}. \end{array} $$
(4.1)

In order to apply the results in Section 3 to problem (4.1), we need a compact index set, which is not the case for the cone \(K^{+}\). However, since Y is a normed space, it is easily seen that the set \(K^{+}\cap B_{Y^{\ast }}\) is weak*-compact and

$$ g(x) \in -K \Leftrightarrow g_{s}(x)\leqq 0,\ \ \forall s \in K^{+} \cap B_{Y^{\ast}}. $$

Hence, we can rewrite problem (4.1) as

$$ \begin{array}{@{}rcl@{}} &&\text{Min}_{ \mathbb{R}^{m}_{+}} f(x):=(f_{1}(x), \ldots, f_{m}(x))\\ &&\text{s.t.}\ \ x\in C:=\{x\in {\Omega} | g_{s}(x) \leqq 0,\ \ s\in {T}\}, \end{array} $$

where \({T}:= K^{+} \cap B_{Y^{\ast }}.\)
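To illustrate the reformulation, the following Python sketch checks the equivalence \(g(x)\in -K \Leftrightarrow g_{s}(x)\leqq 0\) for all \(s\in T\) on a finite sample of scalarizing vectors, in the simplest case \(Y=\mathbb{R}^{2}\) and \(K=\mathbb{R}^{2}_{+}\) (so \(K^{+}=\mathbb{R}^{2}_{+}\)); the mapping g below is an illustrative assumption, not data from the paper.

```python
import numpy as np

g = lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1]])   # an illustrative K-convex map into R^2

def in_minus_K(y):
    """y in -K with K = R^2_+, i.e. y <= 0 componentwise."""
    return bool(np.all(y <= 0.0))

def scalarized_feasible(y, n=181):
    """Check <s, y> <= 0 for a sample of s in K^+ ∩ B_{Y*}: s = (cos θ, sin θ), θ in [0, π/2]."""
    thetas = np.linspace(0.0, np.pi / 2.0, n)
    return bool(all(np.cos(t) * y[0] + np.sin(t) * y[1] <= 1e-12 for t in thetas))

for x in ([0.5, -1.0], [0.5, 0.0], [2.0, -3.0]):
    y = g(np.array(x))
    print(x, in_minus_K(y), scalarized_feasible(y))   # the two feasibility tests agree
```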

Proposition 4.1

Let \(\xi \in \mathbb {R}^{m}_{+}\setminus \{0\}\) and \(\bar x\) be a ξ-quasi-weakly Pareto solution of the problem (CCVP). If condition (𝓤) holds at \(\bar x\) and the convex hull \(\operatorname{conv}\left\{\bigcup \partial g_{s}(\bar x)| s\in T(\bar x)\right\}\) is weak*-closed, then there exist \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\), and \(\zeta \in K^{+}\) such that

$$ \begin{array}{@{}rcl@{}} 0\in \sum\limits_{i\in I}\lambda_{i}\partial f_{i}(\bar x)+\partial g_{\zeta}(\bar x)+\sum\limits_{i\in I}\lambda_{i}\xi_{i} B_{X^{*}}+N(\bar x; {\Omega}). \end{array} $$
(4.2)

Proof

By Theorem 3.2, there exist \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\), and \(\mu \in {A}(\bar x)\) such that

$$ \begin{array}{@{}rcl@{}} 0\in \sum\limits_{i\in I}\lambda_{i}\partial f_{i}(\bar x)+\sum\limits_{s \in {T}}\mu_{s}\partial g_{s}(\bar x)+\sum\limits_{i\in I}\lambda_{i}\xi_{i} B_{X^{*}}+N(\bar x; {\Omega}). \end{array} $$

Note that for each \(s\in T\), the function \(g_{s}\) is continuous and convex on X. Moreover, since \(\mu \in {A}(\bar {x})\), only finitely many of the multipliers \(\mu_{s}\), \(s\in T\), are different from zero. Hence,

$$ \sum\limits_{s \in {T}}\mu_{s}\partial g_{s}(\bar x)=\partial\left( \left\langle\sum\limits_{s \in {T}}\mu_{s} s, g\right\rangle\right)(\bar x)=\partial g_{\zeta}(\bar x), $$

where \(\zeta :={\sum }_{s \in {T}}\mu _{s} s\). Clearly, ζK+. The proof is complete. □

Remark 4.1

Assume that Ω is convex. Since g is continuous and K-convex, we see that if \(f_{i}\), \(i\in I\), are convex (resp. strictly convex), then \((f, g_{T})\) is generalized convex (resp. strictly generalized convex) on Ω at any \(\bar x\in {\Omega }\) with \(\nu :=x-\bar x\) for each x ∈Ω. Thus, by Theorem 3.3, (4.2) is a sufficient condition for a point \(\bar {x}\in C\) to be a ξ-quasi-weakly Pareto (resp. ξ-quasi Pareto) solution of (CCVP) provided that \(f_{i}\), \(i\in I\), are convex (resp. strictly convex).

4.2 Semidefinite Vector Optimization Problem

Let \(f=(f_{1},\ldots,f_{m})\colon \mathbb{R}^{n}\to \mathbb{R}^{m}\) be a locally Lipschitz function, \({\Omega }\subset \mathbb{R}^{n}\) be a nonempty closed convex set, and \(g\colon \mathbb{R}^{n}\to S^{p}\) be a continuous mapping, where \(S^{p}\) denotes the set of p × p symmetric matrices. For a p × p matrix \(A = (a_{ij})\), the trace of A is defined by

$$ \operatorname{trace} (A):= \sum\limits_{i=1}^{p} a_{ii}. $$

We suppose that \(S^{p}\) is equipped with the scalar product \(A\bullet B := \operatorname{trace}(AB)\), where AB is the matrix product of A and B. A matrix \(A\in S^{p}\) is said to be negative semidefinite (resp. positive semidefinite) if \(\langle v, Av\rangle \leqq 0\) (resp. \(\langle v, Av\rangle \geqq 0\)) for all \(v\in \mathbb{R}^{p}\). We write A ≼ 0 (resp. A ≽ 0) when A is negative (resp. positive) semidefinite.

We now consider the following semidefinite vector optimization problem:

$$ \begin{array}{@{}rcl@{}} &&\operatorname{Min}_{\mathbb{R}^{m}_{+}} f(x):=(f_{1}(x), \ldots, f_{m}(x))\\ &&\text{s.t.}\ \ x\in C:=\{x\in {\Omega} | g(x) \preceq 0\}. \end{array} $$
(SDVP)

Let us denote by \(S^{p}_{+}\) the set of all positive semidefinite matrices in \(S^{p}\). It is well known that \(S^{p}_{+}\) is a proper convex cone, i.e., it is closed, convex, pointed, and solid (see [3]), and that \(S^{p}_{+}\) is a self-dual cone, i.e., \((S^{p}_{+})^{+}=S^{p}_{+}\). Hence, the problem (SDVP) can be rewritten in the form of the problem (CCVP) with \(K:=S^{p}_{+}\). For Λ ∈ K, the function \(g_{\Lambda}\) becomes a function from \(\mathbb{R}^{n}\) to \(\mathbb{R}\) defined by

$$ g_{\Lambda}(x)={\Lambda} \bullet g(x),\ \ \forall x \in \Bbb{R}^{n}. $$

If g is affine, i.e., \(g(x):=F_{0}+{\sum }_{i=1}^{n} F_{i}x_{i}\), where \(F_{0}, F_{1},\ldots,F_{n}\in S^{p}\) are given matrices, then the subdifferential of the function \(g_{\Lambda}\) at x is

$$ \partial (g_{\Lambda})(x)=({\Lambda} \bullet F_{1}, \ldots, {\Lambda} \bullet F_{n})=:({\Lambda} \bullet F). $$
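For concreteness, in the affine case the constant vector \({\Lambda}\bullet F=({\Lambda}\bullet F_{1},\ldots,{\Lambda}\bullet F_{n})\) can be computed directly; the following minimal Python sketch uses illustrative matrices \(F_{0}, F_{1}, F_{2}\) and a multiplier \({\Lambda}\in S^{2}_{+}\) that are not taken from the paper.

```python
import numpy as np

def bullet(A, B):
    """Scalar product A • B = trace(AB) on S^p."""
    return float(np.trace(A @ B))

# Illustrative affine map g(x) = F0 + x1*F1 + x2*F2 into S^2 and a multiplier Λ in S^2_+.
F0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
F1 = np.array([[1.0, 0.0], [0.0, 0.0]])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])
Lam = np.array([[2.0, 0.5], [0.5, 1.0]])                   # positive definite, hence in S^2_+

grad_g_Lam = np.array([bullet(Lam, F1), bullet(Lam, F2)])  # gradient of g_Λ(x) = Λ • g(x)
print(grad_g_Lam)                                          # [2.0, 1.0] = (Λ•F1, Λ•F2)
```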

In this affine case, by Remark 3.2, the second assumption of Proposition 4.1 can be removed. Thus, we obtain the following result.

Proposition 4.2

Let \(\xi \in \mathbb {R}^{m}_{+}\setminus \{0\}\). If \(\bar x\) is a ξ-quasi-weakly Pareto solution of the problem (SDVP), then there exist \(\lambda :=(\lambda _{1}, \ldots , \lambda _{m})\in \mathbb {R}^{m}_{+}\) with \({\sum }_{i\in I}\lambda _{i}=1\), and \({\Lambda } \in S^{p}_{+}\) such that

$$ \begin{array}{@{}rcl@{}} 0\in \sum\limits_{i\in I}\lambda_{i}\partial f_{i}(\bar x)+{\Lambda} \bullet F+\sum\limits_{i\in I}\lambda_{i}\xi_{i} B_{X^{*}}+N(\bar x; {\Omega}). \end{array} $$
(4.3)

Remark 4.2

Since g is an affine mapping, by Theorem 3.3, (4.3) is a sufficient condition for a point \(\bar {x}\in C\) to be a ξ-quasi-weakly Pareto (resp. ξ-quasi Pareto) solution of (SDVP) provided that fi,iI, are convex (resp. strictly convex).