1 Introduction

Very recently, Chicco et al. [1] introduced, in finite-dimensional spaces, an approximate solution concept called the E-optimal solution, which is based on the notion of an improvement set, and investigated the existence of E-optimal solutions for vector optimization problems. Later, the notion of E-optimal solution was extended by Gutiérrez et al. [2] to locally convex spaces. The E-optimal solution unifies several known exact and approximate solution concepts in vector optimization. To date, a number of applications of improvement sets in vector optimization have been investigated (see [1,2,3,4,5,6,7,8,9,10,11,12,13] and the references therein). It is worth noticing that Zhao et al. [5] used the improvement set to introduce the E-optimal solution and the weak E-optimal solution of the constrained set-valued optimization problem (for short, SVOP) and established scalarization theorems and Lagrange multiplier theorems for weak E-optimal solutions under the assumption of near E-subconvexlikeness.

Historically speaking, the image space analysis (shortly, ISA), an important tool for studying vector optimization problems, variational inequalities and other related problems with abundant applications in many disciplines (see, for example, [14,15,16,17]), was born with [18]; Reference [19], as well as several others, belongs to its gestation. ISA has also become a unified framework for investigating such problems, which can be expressed in the form of the inconsistency of a parametric system. The inconsistency of this system is reduced to the disjunction of two suitable sets in the image space, and the disjunction of these two sets can be proved by showing that they lie in two disjoint level sets of a suitable separation function. Various linear and nonlinear functions were proposed in [12, 13, 18, 20,21,22,23,24,25] and shown to be weak separation functions, regular weak separation functions or strong separation functions. Recently, Chinaie and Zafarani [26] proposed two kinds of separation functions, which unify the known linear and nonlinear separation functions. By virtue of ISA, several authors have investigated regularity, duality and optimality conditions for constrained extremum problems (see [12, 13, 18, 20,21,22,23,24, 26,27,28,29,30,31] and the references therein). It is noteworthy that the authors in [12] used the improvement set and the oriented (signed) distance function to introduce a nonlinear vector regular weak separation function and a nonlinear scalar weak separation function, and obtained Lagrangian-type optimality conditions in the sense of vector separation and scalar separation, respectively. In [13], Chen et al. also defined a vector-valued regular weak separation function and a scalar weak separation function via the improvement set and obtained some optimality conditions in terms of E-optimal solutions for multiobjective optimization with general constraints. For set-valued optimization problems with constraints, Chinaie and Zafarani [24, 26] and Li and Li [31] obtained some optimality conditions via ISA. By means of generalized level sets of set-valued maps, Chinaie and Zafarani [32] extended a scalarization method for vector-valued optimization proposed in [25] to set-valued optimization. In a similar spirit, Chinaie and Zafarani [33] also presented scalarization results in the sense of \(\varepsilon \)-feeble multifunction minimum points, which rely on certain level sets of set-valued maps.

The purpose of this paper is to apply the ISA and improvement sets to investigate a set-valued optimization problem with constraints. We first introduce the E-quasi-convexity of set-valued maps and disclose its relations with the C-quasi-convexity of set-valued maps defined in [34]. By virtue of the E-quasi-convexity of set-valued maps and a new assumption, we explore scalarizations of the E-optimal solution and the weak E-optimal solution of SVOP, respectively. Then, we construct two kinds of generalized Lagrangian set-valued maps involving the nonlinear scalarization function \(\Delta \) and the improvement set E, which are different from the ones in the literature. Similar to Definition 2.4 in [35], the notions of saddle points of the two generalized Lagrangian set-valued maps are defined. Finally, we discuss saddle point properties, which disclose the relations between the existence of saddle points and the fact that \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation, where \(\mathcal {K}_{(\bar{x},\bar{y})}\) denotes the image of SVOP and \(\mathcal {H}=E\times D\).

This paper is organized as follows. In Sect. 2, we recall some definitions and introduce the notion of E-quasi-convexity of set-valued maps. In Sect. 3, under some suitable conditions, some scalarization results of E-optimal solution and weak E-optimal solution of SVOP are obtained, respectively. In Sect. 4, we analyze the general features of the ISA for SVOP and consider two classes of nonlinear weak separation functions. Moreover, two classes of generalized Lagrangian set-valued maps are constructed to discuss saddle point properties.

2 Preliminaries

Throughout this paper, \(\mathbb {R}_{+}^{m}\) represents the nonnegative orthant of the m-dimensional Euclidean space \(\mathbb {R}^{m}\). Let X be a topological vector space, and let Y and Z be two normed linear spaces with normed dual spaces \(Y^{*}\) and \(Z^{*}\), respectively. For a set \(K\subseteq Y\), the closure, the complement, the topological interior and the boundary of K are denoted by \(\text {cl}\,K\), \(K^{c}\), \(\text {int}\,K\) and \(\partial K\), respectively. Let \(C\subseteq Y\) and \(D\subseteq Z\) be two closed, convex and pointed cones with \(\text {int}\,C\ne \emptyset \) and \(\text {int}\,D\ne \emptyset \). Let 0 denote the zero element for every space. The dual cone and the strict dual cone of K are, respectively, defined as

$$\begin{aligned} K^{*}:=\{\varphi \in Y^{*}:\varphi (y)\ge 0,\forall y\in K\}, \quad K^{\sharp }:=\{\varphi \in Y^{*}:\varphi (y)>0,\forall y\in K\backslash \{0\}\}. \end{aligned}$$

Lemma 2.1

[36] Let \(K\subseteq Y\) be a convex cone with \(\mathrm{int}\,K\ne \emptyset \). Then

$$\begin{aligned} \mathrm{int}\,K=\{y\in Y:\varphi (y)>0,\forall \varphi \in K^{*}\backslash \{0\}\}. \end{aligned}$$

Definition 2.1

[2] Let E be a nonempty subset of Y. E is said to be an improvement set with respect to C, iff \(0\notin E\) and \(E+C=E\). We denote the family of the improvement sets in Y by \(\mathfrak {T}_{Y}\).

Remark 2.1

The following properties concerning improvement sets can be found in [4].

(i) \(C\backslash \{0\}\in \mathfrak {T}_{Y}\), \(\mathrm{int}\,C\in \mathfrak {T}_{Y}\);

(ii) If \(\varepsilon \in C\backslash \{0\}\), then \(\varepsilon +C\in \mathfrak {T}_{Y}\) and \(\varepsilon +\mathrm{int}\,C\in \mathfrak {T}_{Y}\);

(iii) If \(E\subseteq C\) and \(E\in \mathfrak {T}_{Y}\), then \(E+E\subseteq E\) and \(\mathrm{int}\,E+E\subseteq E\);

(iv) If \(E\in \mathfrak {T}_{Y}\), then \(\mathrm{int}\,E=E+\mathrm{int}\,C\).

Remark 2.2

Remark 2.1 in [6] pointed out that \(\mathrm{int}\,E\ne \emptyset \) if \(E\ne \emptyset \).

Remark 2.3

Let \(\emptyset \ne E\in \mathfrak {T}_{Y}\) be a closed convex subset of Y. Then, there exists a subset M of E whose conic extension with respect to C is equal to E. Indeed, put \(M:=\mathrm{int}\,E\subseteq E\), which is nonempty by Remark 2.2. Since E is a closed convex subset of Y, it follows from Theorem 1.1.2(iv) in [37] that \(\mathrm{cl}\,M=\mathrm{cl}\,E=E\). Hence, \(\mathrm{cl}\,M+C=E+C=E\).

Definition 2.2

[34] Let A be a convex subset of X. A set-valued map \(F:A\rightrightarrows Y\) is called C-quasi-convex, iff

$$\begin{aligned} (F(x_{1})+C)\cap (F(x_{2})+C)\subseteq F(tx_{1}+(1-t)x_{2}) +C,\quad \forall x_{1},x_{2}\in A,\forall t\in [0,1]. \end{aligned}$$

From now on, we suppose that \(\emptyset \ne E\in \mathfrak {T}_{Y}\). We next introduce a new notion of E-quasi-convex set-valued map.

Definition 2.3

Let A be a convex subset of X. A set-valued map \(F:A\rightrightarrows Y\) is called E-quasi-convex on A, iff

$$\begin{aligned} (F(x_{1})+E)\cap (F(x_{2})+E)\subseteq F(tx_{1}+(1-t)x_{2})+E,\quad \forall x_{1},x_{2}\in A,\forall t\in [0,1]. \end{aligned}$$

Remark 2.4

The following examples show that neither of Definitions 2.2 and 2.3 implies the other.

Example 2.1

Let \(X:=\mathbb {R}\), \(Y:=\mathbb {R}^{2}\), \(A:=[-1,1]\), \(E:=\{(x_{1},x_{2})\in \mathbb {R}^{2}:x_{1}+x_{2}\ge 1\}\) and \(C:=\mathbb {R}_{+}^{2}\). The set-valued map \(F:A\rightrightarrows Y\) is defined as follows:

$$\begin{aligned} F(x):= \left\{ \begin{array}{lllll} \{0\}\times [1,+\infty [,\quad \;&{}\text {if}\quad x=-1,\\ \,[1,x^{2}+2]\times [1,x+2], \quad &{}\text {if}\quad x\in ]-1,1[,\\ \,[1,+\infty [\times \{0\},\quad \;&{}\text {if}\quad x=1.\\ \end{array}\right. \end{aligned}$$

Clearly, \(E\in \mathfrak {T}_{Y}\). Moreover, it is easy to see that F is C-quasi-convex on A. Let \(\bar{x}_{1}:=1\), \(\bar{x}_{2}:=-1\) and \(\bar{t}:=\frac{1}{3}\). Then, \((1,1)\in (F(\bar{x}_{1})+E)\cap (F(\bar{x}_{2})+E)\). However, \((1,1)\notin F(\bar{t}\bar{x}_{1}+(1-\bar{t})\bar{x}_{2})+E\). Hence, F is not E-quasi-convex on A.

Example 2.2

Let \(X:=\mathbb {R}\), \(Y:=\mathbb {R}^{2}\), \(A:=[-1,1]\), \(E:=\{(x_{1},x_{2})\in \mathbb {R}^{2}:x_{1}\in \mathbb {R}, x_{2}\ge 1\}\) and \(C:=\mathbb {R}_{+}^{2}\). The set-valued map \(F:A\rightrightarrows Y\) is defined as follows:

$$\begin{aligned} F(x):= \left\{ \begin{array}{lllll} \{\min \{0,x\}\}\times [0,|x|],\quad &{}\text {if}\quad x\in [-1,1]\backslash \{0\}, \\ \,[1,2]\times [0,1],\quad &{}\text {if}\quad x=0. \end{array}\right. \end{aligned}$$

It is easy to check that \(E\in \mathfrak {T}_{Y}\) and F is E-quasi-convex on A. If we take \(\bar{x}_{1}:=-1\), \(\bar{x}_{2}:=1\) and \(\bar{t}:=\frac{1}{2}\), then \((0,0)\in (F(\bar{x}_{1})+C)\cap (F(\bar{x}_{2})+C)\). However, \((0,0)\notin F(\bar{t}\bar{x}_{1}+(1-\bar{t})\bar{x}_{2})+C\). Therefore, F is not C-quasi-convex on A.
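The specific membership claims in Examples 2.1 and 2.2 can also be double-checked numerically: the values of F are products of intervals (boxes), and E and C are described by half-space inequalities, so a point p belongs to \(F(x)+E\) iff \(p-y\in E\) for some y in the corresponding box. The following Python sketch is our own illustration, not part of the examples; unbounded intervals are truncated for the check.

```python
# Minimal numerical check of the membership claims in Examples 2.1 and 2.2.
# The helper assumes F(x) is a box [lo1,hi1] x [lo2,hi2] and E (resp. C) is an
# intersection of half-spaces {e : <a, e> >= b}.  The per-constraint test below
# is exact here because E is a single half-space and C is the axis-aligned
# orthant (the same corner y = lo maximizes every constraint); in general it is
# only a necessary condition for membership in box + E.
import numpy as np

def in_box_plus_halfspaces(p, lo, hi, halfspaces):
    """p in box + E, with box = [lo, hi] componentwise and E = {e : <a,e> >= b}."""
    p, lo, hi = (np.asarray(t, dtype=float) for t in (p, lo, hi))
    for a, b in halfspaces:
        a = np.asarray(a, dtype=float)
        y_opt = np.where(a >= 0, lo, hi)          # maximizes <a, p - y> over the box
        if a @ (p - y_opt) < b:
            return False
    return True

big = 1e6                                          # stand-in for +infinity
E1 = [((1.0, 1.0), 1.0)]                           # Example 2.1: E = {e1 + e2 >= 1}
C = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]         # C = R^2_+

# Example 2.1: F(1) = [1,+inf[ x {0}, F(-1) = {0} x [1,+inf[, F(-1/3) = [1,19/9] x [1,5/3]
print(in_box_plus_halfspaces((1, 1), (1, 0), (big, 0), E1))     # True:  (1,1) in F(1)+E
print(in_box_plus_halfspaces((1, 1), (0, 1), (0, big), E1))     # True:  (1,1) in F(-1)+E
print(in_box_plus_halfspaces((1, 1), (1, 1), (19/9, 5/3), E1))  # False: (1,1) not in F(-1/3)+E

# Example 2.2: F(-1) = {-1} x [0,1], F(1) = {0} x [0,1], F(0) = [1,2] x [0,1]
print(in_box_plus_halfspaces((0, 0), (-1, 0), (-1, 1), C))      # True:  (0,0) in F(-1)+C
print(in_box_plus_halfspaces((0, 0), (0, 0), (0, 1), C))        # True:  (0,0) in F(1)+C
print(in_box_plus_halfspaces((0, 0), (1, 0), (2, 1), C))        # False: (0,0) not in F(0)+C
```

The True/False pattern matches exactly the conclusions drawn in the two examples.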

The following proposition gives a characterization of E-quasi-convex set-valued maps in terms of their generalized level sets with respect to E.

Proposition 2.1

Let A be a convex subset of X and \(F:A\rightrightarrows Y\) be a set-valued map. Then F is E-quasi-convex on A, iff for each \(y\in Y\), the generalized level set

$$\begin{aligned} F^{-1}(y-E):=\{x\in A:y\in F(x)+E\} \end{aligned}$$

is a convex set.

Proof

This conclusion is easily derived from Definition 2.3. \(\square \)

In this paper, we consider the following constrained set-valued optimization problem:

$$\begin{aligned} \mathrm{SVOP}\qquad \mathrm{min}\quad F(x),\quad \mathrm{s.t.}\quad x\in S:=\{x\in A:G(x)\cap (-D)\ne \emptyset \}, \end{aligned}$$

where \(A\subseteq X\), and \(F:A\rightrightarrows Y\) and \(G:A\rightrightarrows Z\) are two set-valued maps with nonempty values. We assume that the feasible set S is nonempty. The graph of F is defined by

$$\begin{aligned} \text {gr}(F):=\{(x,y)\in X\times Y:x\in A,y\in F(x)\}. \end{aligned}$$

For \(\varphi \in Y^{*}\) and \(Q\subseteq Y\), we denote the support functional of Q evaluated at \(\varphi \) by \(\sigma _{Q}(\varphi )\). Let

$$\begin{aligned} \sigma _{Q}(\varphi ):=\sup _{y\in Q}\varphi (y),\quad F(A) :=\bigcup _{x\in A}F(x),\quad \varphi (F(x)):=\{\varphi (y):y\in F(x)\}. \end{aligned}$$
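As a small worked illustration of the support functional \(\sigma _{-E}(\varphi )\) (the data are ours, chosen for this example only), take \(Y=\mathbb {R}^{2}\), \(C=\mathbb {R}_{+}^{2}\), \(E:=\{(e_{1},e_{2})\in \mathbb {R}^{2}:e_{1}+e_{2}\ge 1\}\in \mathfrak {T}_{Y}\) as in Example 2.1 and \(\varphi :=(1,1)\in C^{*}\backslash \{0\}\). Then

$$\begin{aligned} \sigma _{-E}(\varphi )=\sup _{e\in E}\varphi (-e)=\sup _{e_{1}+e_{2}\ge 1}\{-(e_{1}+e_{2})\}=-1, \end{aligned}$$

so the scalarization inequality (2) in Sect. 3 reads \(\varphi (\zeta )\ge \varphi (\bar{y})-1\) for this choice of E and \(\varphi \).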

Remark 2.5

Remark 1.2 in [32] showed that, if G is D-quasi-convex on a convex subset A of X, then S is a convex set.

The concepts of E-optimal solution and weak E-optimal solution for vector optimization with vector-valued maps were introduced in [2] and further extended to SVOP by Zhao et al. [5].

Definition 2.4

[5] A point \(\bar{x}\in S\) is called an E-optimal solution of SVOP, iff there exists \(\bar{y}\in F(\bar{x})\) such that

$$\begin{aligned} F(S)\cap (\bar{y}-E)=\emptyset . \end{aligned}$$

The point pair \((\bar{x},\bar{y})\) is called an E-optimal point of SVOP. Denote by \(V^{E}(F)\) the set of all E-optimal solutions of SVOP.

Definition 2.5

[5] A point \(\bar{x}\in S\) is called a weak E-optimal solution of SVOP, iff there exists \(\bar{y}\in F(\bar{x})\) such that

$$\begin{aligned} F(S)\cap (\bar{y}-\text {int}\,E)=\emptyset . \end{aligned}$$
(1)

The point pair \((\bar{x},\bar{y})\) is called a weak E-optimal point of SVOP. Denote by \(V^{E}_{w}(F)\) the set of all weak E-optimal solutions of SVOP.

Remark 2.6

Obviously, it follows from Definitions 2.4 and 2.5 that \(V^{E}(F)\subseteq V^{E}_{w}(F)\). However, Example 2.1 in [5] shows that the inclusion relation \(V^{E}_{w}(F)\subseteq V^{E}(F)\) does not hold.

Remark 2.7

It follows from Remark 2.1(iv) that (1) is equivalent to \((F(S)+E)\cap (\bar{y}-\text {int}\,C)=\emptyset \).

Remark 2.8

Obviously, if \(E=C\backslash \{0\}\), then Definition 2.4 reduces to Definition 2.3 in [24]. If \(E=\text {int}\,C\), then Definition 2.4 reduces to Definition 6.4 in [38]. Moreover, if \(\varepsilon \in C\), \(E=\varepsilon +\text {int}\,C\), then Definition 2.4 reduces to Definition 1.1 in [39].

Definition 2.6

Let \(\Omega \) be a nonempty subset of X and \(\bar{a}\in \Omega \). We say that \(\Omega \) is E-bounded from above by \(\bar{a}\), iff \(\Omega -\bar{a}\subseteq -\text {cl}\,E\).

3 Scalarization

In this section, we give scalarizations of the E-optimal solution and the weak E-optimal solution of SVOP by virtue of the improvement set E. The method of scalarization was applied by Giannessi and Pellegrini [25] to single-valued functions and then extended by Chinaie and Zafarani [32, 33] to set-valued functions.

We introduce the following scalar optimization problem associated with SVOP:

$$\begin{aligned} \varphi \mathrm{\text {-}SVOP}\quad \mathrm{min}\quad \varphi (F(x)), \quad \;\mathrm{s.t.}\quad x\in S\cap F^{-1}(y-E), \end{aligned}$$

which depends on the parameter \(y\in Y\), where \(\varphi \in C^{*}\backslash \{0\}\) is fixed.

Definition 3.1

Let \(\bar{x}\in S\cap F^{-1}(y-E)\). \(\bar{x}\) is called an optimal solution of \(\varphi \mathrm{\text {-}SVOP}\) with respect to E, iff there exists \(\bar{y}\in F(\bar{x})\) such that

$$\begin{aligned} \varphi (\zeta )-\varphi (\bar{y})\ge \sigma _{-E}(\varphi ), \quad \forall x\in S\cap F^{-1}(y-E),\forall \zeta \in F(x). \end{aligned}$$
(2)

The point pair \((\bar{x},\bar{y})\) is called an optimal point of \(\varphi \mathrm{\text {-}SVOP}\) with respect to E.

Remark 3.1

For \(\varepsilon \in C\backslash \{0\}\), let \(F_{\varepsilon }(x):=F(x)+\varepsilon \). If \(E=\varepsilon +C\) and \(\varphi \in C^{\sharp }\), then \(E\in \mathfrak {T}_{Y}\) and (2) reduces to

$$\begin{aligned} \varphi (\bar{y})\le \varphi (\zeta )+\varphi (\varepsilon ), \quad \forall x\in S\cap F_{\varepsilon }^{-1}(y-C),\forall \zeta \in F(x), \end{aligned}$$

which was investigated in [33].

Definition 3.2

Let \(\bar{x}\in S\cap F^{-1}(y-E)\).

(i) \(\bar{x}\) is called a local optimal solution of \(\varphi \)-SVOP with respect to E, iff there exist \(\bar{y}\in F(\bar{x})\) and a neighborhood U of \(\bar{x}\) such that

$$\begin{aligned} \varphi (\zeta )-\varphi (\bar{y})\ge \sigma _{-E}(\varphi ), \quad \forall x\in S\cap U\cap F^{-1}(y-E),\forall \zeta \in F(x). \end{aligned}$$
(3)

The point pair \((\bar{x},\bar{y})\) is called a local optimal point of \(\varphi \)-SVOP with respect to E.

(ii) \(\bar{x}\) is called a strict local optimal solution of \(\varphi \)-SVOP with respect to E, iff there exist \(\bar{y}\in F(\bar{x})\) and a neighborhood U of \(\bar{x}\) such that

$$\begin{aligned} \varphi (\zeta )-\varphi (\bar{y})>\sigma _{-E}(\varphi ), \quad \forall x\in S\cap U\cap F^{-1}(y-E),\forall \zeta \in F(x). \end{aligned}$$

The point pair \((\bar{x},\bar{y})\) is called a strict local optimal point of \(\varphi \)-SVOP with respect to E.

Now, we give a sufficient condition of weak E-optimal solution of SVOP, which plays an important role in the establishment of scalarization.

Theorem 3.1

Let \(\varphi \in C^{*}\backslash \{0\}\) be given and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). If the following system in the unknown x is impossible:

$$\begin{aligned} \exists x\in S\cap F^{-1}(\bar{y}-\mathrm{int}\,E),\exists y \in F(x)\;\mathrm{such\,that} \;\varphi (y)-\varphi (\bar{y}) <\sigma _{-E}(\varphi ), \end{aligned}$$
(4)

then \((\bar{x},\bar{y})\) is a weak E-optimal point of SVOP.

Proof

Suppose that \((\bar{x},\bar{y})\) is not a weak E-optimal point of SVOP. Then, from Remark 2.7, we have

$$\begin{aligned} (F(S)+E)\cap (\bar{y}-\text {int}\,C)\ne \emptyset , \end{aligned}$$

which implies that there exist \(\hat{x}\in S\), \(\hat{y}\in F(\hat{x})\) and \(\hat{e}\in E\) such that \(\hat{y}-\bar{y}+\hat{e}\in -\text {int}\,C\). Hence, by Lemma 2.1, we get \(\varphi (\hat{y}-\bar{y}+\hat{e})<0\), that is,

$$\begin{aligned} \varphi (\hat{y})-\varphi (\bar{y})<\varphi (-\hat{e}) \le \sup \limits _{e\in E}\varphi (-e)=\sup \limits _{e\in -E}\varphi (e) =\sigma _{-E}(\varphi ). \end{aligned}$$
(5)

Clearly,

$$\begin{aligned} \hat{x}\in S\cap F^{-1}(\bar{y}-\text {int}\,E). \end{aligned}$$
(6)

It follows from (5) and (6) that the system (4) is possible. This is a contradiction. \(\square \)

Let us recall the following assumption related to F, which was introduced by Chinaie and Zafarani [33] to study the feeble minimizer of SVOP.

Assumption A

Let \(F:A\rightrightarrows Y\) be a set-valued map and \((\bar{x},\bar{y})\in \text {gr}(F)\). We say that F satisfies Assumption A at \((\bar{x},\bar{y})\) with respect to the cone C if, whenever \(\bar{y}\in F(\hat{x})+C\) for some \(\hat{x}\in A\) and \(y\in F(\bar{x})+C\), it holds that \(y\in F(\hat{x})+C\).

Remark 3.2

For Assumption A, if \(A=\mathbb {R}^{n}\) and \(Y=\mathbb {R}^{l}\), then F is a vector-valued map and the set \(F^{-1}(y-C)\) becomes the level set of F. Therefore, we observe that Assumption A coincides with Proposition 5 in [25].

In order to obtain some scalarization results of (weak) E-optimal point of SVOP, we propose the following assumption (called Assumption B).

Assumption B

Let \(F:A\rightrightarrows Y\) be a set-valued map and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). We say that F satisfies Assumption B at \((\bar{x},\bar{y})\) with respect to E if, whenever \(\bar{y}\in F(\hat{x})+E\) for some \(\hat{x}\in A\) and \(y\in F(\bar{x})+E\), it holds that \(y\in F(\hat{x})+E\).

Remark 3.3

If \(E\subseteq C\), then it is easy to verify that every single-valued map satisfies Assumption B at each point of its graph. In fact, let \(f:A\rightarrow Y\) be a single-valued map, \(\bar{x}\in A\) and \(\bar{y}=f(\bar{x})\). If \(\bar{y}\in f(\hat{x})+E\) for some \(\hat{x}\in A\) and \(y\in f(\bar{x})+E=\bar{y}+E\), then \(y\in f(\hat{x})+E+E\subseteq f(\hat{x})+E\) by Remark 2.1(iii).

Remark 3.4

When \(E\subseteq C\), Assumption B implies Assumption A. However, if \(E\nsubseteq C\), then Assumption B does not imply Assumption A. The following example illustrates this case.

Example 3.1

Let \(X=Y=A:=\mathbb {R}^{2}\), \(E:=\{(x_{1},x_{2}) \in \mathbb {R}^{2}:x_{1}+x_{2}\ge 1,x_{2}\ge \frac{1}{2}\}\) and \(C=\mathbb {R}_{+}^{2}\), and let

$$\begin{aligned} F(x):= \left\{ \begin{array}{lllll} {[}0,|x_{1}|]\times [1,|x_{2}|+1], &{}\quad \text {if}\quad x_{2}<0,\\ \left\{ \left( \frac{1}{2},\frac{1}{2}\right) \right\} , &{}\quad \text {if}\quad x_{2}\ge 0. \end{array}\right. \end{aligned}$$

Clearly, \(E\in \mathfrak {T}_{Y}\) and \(E\nsubseteq C\). Set \(\bar{x}:=(1,-1)\), \(\bar{y}:=(1,1)\in F(\bar{x})\), \(\hat{x}:=(1,1)\) and \(y:=(\frac{1}{3},3)\). It is easy to check that F satisfies Assumption B at \((\bar{x},\bar{y})\). However, F does not satisfy Assumption A at \((\bar{x},\bar{y})\).
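The stated failure of Assumption A can be confirmed by a direct computation: for \(C=\mathbb {R}_{+}^{2}\), a point belongs to a box plus C iff it dominates the lower corner of the box componentwise. The short Python check below is only an illustration of Example 3.1 (the helper is ours).

```python
# Illustrative check of the failure of Assumption A in Example 3.1.
import numpy as np

def in_box_plus_C(p, lo):
    """p in box + R^2_+ iff p >= lower corner of the box componentwise."""
    return bool(np.all(np.asarray(p, dtype=float) >= np.asarray(lo, dtype=float)))

ybar, y = (1.0, 1.0), (1/3, 3.0)
print(in_box_plus_C(ybar, (0.5, 0.5)))  # True:  ybar in F(xhat) + C, F(xhat) = {(1/2,1/2)}
print(in_box_plus_C(y, (0.0, 1.0)))     # True:  y in F(xbar) + C,    F(xbar) = [0,1] x [1,2]
print(in_box_plus_C(y, (0.5, 0.5)))     # False: y not in F(xhat) + C, so Assumption A fails
```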

Theorem 3.2

Let \(F:A\rightrightarrows Y\) and \(G:A\rightrightarrows Z\) be set-valued maps, and let \(\varphi \in C^{*}\backslash \{0\}\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). Suppose that the following conditions are satisfied:

(i) \(\bar{x}\in S\cap F^{-1}(y-E)\) for some given \(y\in Y\);

(ii) F satisfies Assumption B at \((\bar{x},\bar{y})\) with respect to E;

(iii) \((\bar{x},\bar{y})\) is an optimal point of \(\varphi \)-SVOP with respect to E.

Then, \((\bar{x},\bar{y})\) is a weak E-optimal point of SVOP.

Proof

By Condition (iii), \((\bar{x},\bar{y})\) is an optimal point of \(\varphi \)-SVOP with respect to E, so (2) holds. We next show that the system (4) in the unknown x is impossible. Assume, on the contrary, that the system (4) is possible. Then, there exist \(x'\in S\cap F^{-1}(\bar{y}-\text {int}\,E)\) and \(y'\in F(x')\) such that

$$\begin{aligned} \varphi (y')-\varphi (\bar{y})<\sigma _{-E}(\varphi ). \end{aligned}$$
(7)

Clearly,

$$\begin{aligned} S\cap F^{-1}(\bar{y}-\text {int}\,E)\subseteq S\cap F^{-1}(\bar{y}-E). \end{aligned}$$

Therefore, \(x'\in S\cap F^{-1}(\bar{y}-E)\), which means that

$$\begin{aligned} \bar{y}\in F(x')+E. \end{aligned}$$
(8)

From (8), Conditions (i) and (ii), we have \(y\in F(x')+E\), i.e.,

$$\begin{aligned} x'\in S\cap F^{-1}(y-E). \end{aligned}$$
(9)

The combination of (7) and (9) contradicts (2). Hence, the system (4) in the unknown x is impossible. From Theorem 3.1, we immediately obtain that \((\bar{x},\bar{y})\) is a weak E-optimal point of SVOP. \(\square \)

Theorem 3.3

Let A be a convex subset of X, and let \(F:A\rightrightarrows Y\) and \(G:A\rightrightarrows Z\) be set-valued maps. Let \(\varphi \in C^{*}\backslash \{0\}\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). Suppose that the following conditions are satisfied:

(i) F and G are E-quasi-convex and D-quasi-convex on A, respectively;

(ii) \(\bar{x}\in S\cap F^{-1}(y-E)\) for some given \(y\in Y\);

(iii) F satisfies Assumption B at \((\bar{x},\bar{y})\) with respect to E;

(iv) \((\bar{x},\bar{y})\) is a strict local optimal point of \(\varphi \)-SVOP with respect to E.

Then, \((\bar{x},\bar{y})\) is an E-optimal point of SVOP.

Proof

Assume that \((\bar{x},\bar{y})\) is not an E-optimal point of SVOP. Then, there exist \(x'\in S\) and \(y'\in F(x')\) such that \(\bar{y}-y'\in E\), that is, \(\bar{y}\in F(x')+E\). From the E-quasi-convexity of F and Condition (ii), we have

$$\begin{aligned} \bar{y}\in (F(x')+E)\cap (F(\bar{x})+E)&\subseteq F(\bar{x}+t(x'-\bar{x}))+E\nonumber \\&\subseteq F(\bar{x}+t(x'-\bar{x}))+E+C,\quad \forall t\in [0,1]. \end{aligned}$$
(10)

Let \(x_{t}:=\bar{x}+t(x'-\bar{x}),\forall t\in [0,1]\). It follows from (10) that there exist \(y_{t}\in F(x_{t})\) and \(\hat{e}\in E\) such that \(\bar{y}-y_{t}-\hat{e}\in C\). Since \(\varphi \in C^{*}\backslash \{0\}\), we get \(\varphi (\bar{y}-y_{t}-\hat{e})\ge 0\), i.e.,

$$\begin{aligned} \varphi (y_{t}-\bar{y})\le \varphi (-\hat{e}) \le \sup \limits _{e\in E}\varphi (-e)=\sigma _{-E}(\varphi ). \end{aligned}$$
(11)

Now, we show that for sufficiently small \(t>0\),

$$\begin{aligned} x_{t}\in S\cap U\cap F^{-1}(y-E), \end{aligned}$$
(12)

where U is a neighborhood of \(\bar{x}\). Clearly, \(x_{t}\in U\) for sufficiently small \(t>0\) and it follows from Remark 2.5 that \(x_{t}\in S\). According to \(y\in F(\bar{x})+E\), \(\bar{y}\in F(x_{t})+E\) and Condition (iii), we deduce that \(y\in F(x_{t})+E\), that is, \(x_{t}\in F^{-1}(y-E)\). Therefore, (12) holds for sufficiently small \(t>0\). The combination of (11) and (12) contradicts the fact that \((\bar{x},\bar{y})\) is a strict local optimal point of \(\varphi \)-SVOP. \(\square \)

4 Optimality Conditions

Let us recall the main features of the ISA for SVOP. Let \(\bar{x}\in S\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). We define the map \(\mathcal {A}_{(\bar{x},\bar{y})}:A\rightrightarrows Y\times Z\) as

$$\begin{aligned} \mathcal {A}_{(\bar{x},\bar{y})}(x):=(\bar{y}-F(x),-G(x)). \end{aligned}$$

Let

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}&:=\{(u,v)\in Y \times Z:u\in \bar{y}-F(x),v\in -G(x),x\in A\},\\ \mathcal {H}&:=\{(u,v)\in Y\times Z:u\in E,v\in D\} \end{aligned}$$

and

$$\begin{aligned} \mathcal {H}_{ie}:=\{(u,v)\in Y\times Z:u\in \mathrm{int}\,E,v\in D\}. \end{aligned}$$

The set \(\mathcal {K}_{(\bar{x},\bar{y})}\) is called the image of SVOP, while \(Y\times Z\) is the image space associated with SVOP. From Definition 2.4 and the definitions of \(\mathcal {H}\) and \(\mathcal {K}_{(\bar{x},\bar{y})}\), we deduce that

$$\begin{aligned} \bar{x}\in V^{E}(F) \Leftrightarrow \mathcal {K}_{(\bar{x},\bar{y})} \cap \mathcal {H}=\emptyset . \end{aligned}$$
(13)

Similarly, we have

$$\begin{aligned} \bar{x}\in V_{w}^{E}(F) \Leftrightarrow \mathcal {K}_{(\bar{x},\bar{y})} \cap \mathcal {H}_{ie}=\emptyset . \end{aligned}$$
(14)
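When A is finite and F, G take finitely many values, the right-hand sides of (13) and (14) can be checked by direct enumeration. The following Python sketch does this for a small toy instance; all data (A, F, G, E, C, D and the candidate points) are invented for this illustration and are not taken from the paper.

```python
# Toy SVOP (hypothetical data): Y = R^2, Z = R, C = R^2_+, D = R_+,
# E = {(u1,u2) : u1 + u2 >= 1}, A = {0,1}, F(0) = {(2,2)}, F(1) = {(0,3)},
# G(0) = G(1) = {-1} (so both points are feasible).
A = [0, 1]
F = {0: [(2.0, 2.0)], 1: [(0.0, 3.0)]}
G = {0: [-1.0], 1: [-1.0]}

in_E = lambda u: u[0] + u[1] >= 1.0            # membership in the improvement set E
in_D = lambda v: v >= 0.0                      # membership in D = R_+

def image_set(ybar):
    """K_{(xbar, ybar)} = {(ybar - y, -z) : x in A, y in F(x), z in G(x)}."""
    return [((ybar[0] - y[0], ybar[1] - y[1]), -z)
            for x in A for y in F[x] for z in G[x]]

def is_E_optimal_point(ybar):
    # characterization (13): the intersection of K_{(xbar, ybar)} with H = E x D is empty
    return not any(in_E(u) and in_D(v) for (u, v) in image_set(ybar))

print(is_E_optimal_point((2.0, 2.0)))  # False: (0,3) lies in (2,2) - E, so x = 0 is not E-optimal
print(is_E_optimal_point((0.0, 3.0)))  # True:  (1, (0,3)) is an E-optimal point of this toy SVOP
```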

Proposition 4.1

Let \(\mathcal {E}_{(\bar{x},\bar{y})}:=\mathcal {K}_{(\bar{x}, \bar{y})}-C\times D\) be the conic extension of the image set \(\mathcal {K}_{(\bar{x},\bar{y})}\). Then,

(i) \(\mathcal {K}_{(\bar{x},\bar{y})}\cap \mathcal {H} =\emptyset \;\Leftrightarrow \;\mathcal {E}_{(\bar{x},\bar{y})} \cap \mathcal {H}=\emptyset \);

(ii) \(\mathcal {K}_{(\bar{x},\bar{y})}\cap \mathcal {H}_{ie} =\emptyset \;\Leftrightarrow \;\mathcal {E}_{(\bar{x},\bar{y})} \cap \mathcal {H}_{ie}=\emptyset \).

Proof

The proofs are similar to the proof of Proposition 2.4 in [13]. \(\square \)

Remark 4.1

If \(E=C\backslash \{0\}\), Proposition 4.1(i) reduces to Lemma 3.1 in [24]. If \(E=\mathrm{int}\,C\), then Proposition 4.1(i) reduces to Proposition 2 in [31].

Corollary 4.1

Let \(\bar{x}\in S\). Then, \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) is an E-optimal point of SVOP iff

$$\begin{aligned} \mathcal {E}_{(\bar{x},\bar{y})}\cap \mathcal {H}=\emptyset . \end{aligned}$$

Remark 4.2

When \(E=C\backslash \{0\}\), Corollary 4.1 reduces to Corollary 2.1 in [26].

Corollary 4.2

Let \(\bar{x}\in S\). Then, \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) is a weak E-optimal point of SVOP iff

$$\begin{aligned} \mathcal {E}_{(\bar{x},\bar{y})}\cap \mathcal {H}_{ie}=\emptyset . \end{aligned}$$

We next give some properties of another kind of extension of the image set. Let \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). We write

$$\begin{aligned}&\mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}} :=\mathcal {A}_{(\bar{x},\bar{y})}(F^{-1}(\bar{y}-E))\\&\quad =\{(\bar{y}-y,-z):x\in F^{-1}(\bar{y}-E),y\in F(x),z\in G(x)\}. \end{aligned}$$

Proposition 4.2

Let \(\mathcal {E}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}} :=\mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}}-C\times D\) be the conic extension of the image set \(\mathcal {K}_{(\bar{x}, \bar{y})}^{g\text {-}lev_{\bar{y}}}\). Then,

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}} \cap \mathcal {H}=\emptyset \Leftrightarrow \mathcal {E}_{(\bar{x}, \bar{y})}^{g\text {-}lev_{\bar{y}}}\cap \mathcal {H}=\emptyset . \end{aligned}$$

Proof

It is similar to the proof of Proposition 2.4 in [13]. \(\square \)

Theorem 4.1

Let \(\bar{x}\in S\). Then, \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) is an E-optimal point of SVOP iff \(\mathcal {K}_{(\bar{x}, \bar{y})}^{g\text {-}lev_{\bar{y}}}\cap \mathcal {H}=\emptyset \).

Proof

Necessity. Let \((\bar{x},\bar{y})\) be an E-optimal point of SVOP. It follows from (13) that

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}\cap \mathcal {H}=\emptyset . \end{aligned}$$
(15)

From (15) and the definition of \(F^{-1}(\bar{y}-E)\), we have \(\mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}} \cap \mathcal {H}=\emptyset \).

Sufficiency. If \((\bar{x},\bar{y})\) is not an E-optimal point of SVOP, then there exist \(x'\in S\), \(y'\in F(x')\) and \(z'\in G(x')\cap (-D)\) such that \((\bar{y}-y',-z')\in \mathcal {H}=E\times D\). Consequently, \(\bar{y}\in F(x')+E\), that is, \(x'\in F^{-1}(\bar{y} -E)\). Thus, we deduce that \(\mathcal {K}_{(\bar{x}, \bar{y})}^{g\text {-}lev_{\bar{y}}}\cap \mathcal {H}\ne \emptyset \), which is a contradiction. \(\square \)

Theorem 4.2

Let \(\bar{x}\in S\). Then, \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) is a weak E-optimal point of SVOP iff

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{\bar{y}}} \cap \mathcal {H}_{ie}=\emptyset . \end{aligned}$$

Proof

It is similar to the proof of Theorem 4.1. \(\square \)

The following theorem shows that \(\bar{y}\) in the generalized level set of Theorem 4.1 can be replaced by an arbitrary y when Assumption B is satisfied.

Theorem 4.3

Suppose that \(\bar{x}\in S\cap F^{-1}(y-E)\) for some given \(y\in Y\) and that F satisfies Assumption B at \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) with respect to E. Then, \((\bar{x},\bar{y})\) is an E-optimal point of SVOP iff

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{y}} \cap \mathcal {H}=\emptyset . \end{aligned}$$
(16)

Proof

Necessity. Let \((\bar{x},\bar{y})\) be an E-optimal point of SVOP. From (13), we have \(\mathcal {K}_{(\bar{x},\bar{y})} \cap \mathcal {H}=\emptyset \). This, together with the definition of \(F^{-1}(y-E)\), yields that \(\mathcal {K}_{(\bar{x},\bar{y})}^{g \text {-}lev_{y}}\cap \mathcal {H}=\emptyset \).

Sufficiency. Suppose that \((\bar{x},\bar{y})\) is not an E-optimal point of SVOP. Then, there exist \(x'\in S\), \(y'\in F(x')\) and \(z'\in G(x')\cap (-D)\) such that

$$\begin{aligned} (\bar{y}-y',-z')\in \mathcal {H}=E\times D. \end{aligned}$$
(17)

Therefore,

$$\begin{aligned} \bar{y}\in F(x')+E. \end{aligned}$$
(18)

Since \(\bar{x}\in S\cap F^{-1}(y-E)\), we have

$$\begin{aligned} y\in F(\bar{x})+E. \end{aligned}$$
(19)

Since F satisfies Assumption B at \((\bar{x},\bar{y})\) with respect to E, the combination of (18) and (19) implies that \(y\in F(x')+E\), i.e.,

$$\begin{aligned} x'\in S\cap F^{-1}(y-E). \end{aligned}$$
(20)

According to (17) and (20), we have \(\mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{y}} \cap \mathcal {H} \ne \emptyset \), which contradicts (16). \(\square \)

Theorem 4.4

Suppose that \(\bar{x}\in S\cap F^{-1}(y-E)\) for some given \(y\in Y\) and that F satisfies Assumption B at \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) with respect to \(\mathrm{int}\,E\). Then, \((\bar{x},\bar{y})\) is a weak E-optimal point of SVOP iff

$$\begin{aligned} \mathcal {K}_{(\bar{x},\bar{y})}^{g\text {-}lev_{y}} \cap \mathcal {H}_{ie}=\emptyset . \end{aligned}$$

Proof

It is similar to the proof of Theorem 4.3. \(\square \)

Consider a function \(w:Y\times Z\times \Pi \rightarrow \mathbb {R}\), where \(\Pi \) is a set of parameters to be specified case by case. For each \(\pi \in \Pi \), the sets

$$\begin{aligned} \text {lev}_{\ge 0}w(\cdot ;\pi ):=\{z\in Y\times Z:w(z;\pi )\ge 0\} \end{aligned}$$

and

$$\begin{aligned} \text {lev}_{>0}w(\cdot ;\pi ):=\{z\in Y\times Z:w(z;\pi )>0\} \end{aligned}$$

are called nonnegative and positive level sets of the function w, respectively.

Definition 4.1

The class of all the functions \(w:Y\times Z\times \Pi \rightarrow \mathbb {R}\), such that

(i) \(\mathcal {H}\subseteq \text {lev}_{\ge 0}w(\cdot ;\pi ), \forall \pi \in \Pi \),

(ii) \(\bigcap _{\pi \in \Pi }\text {lev}_{>0}w(\cdot ;\pi ) \subseteq \mathcal {H}\),

is called the class of weak separation functions.

Definition 4.2

[40] Let \(\Theta \) be a subset of Y. The function \(\Delta _{\Theta }:Y\rightarrow \mathbb {R}\cup \{\pm \infty \}\) is defined as

$$\begin{aligned} \Delta _{\Theta }(y):=d_{\Theta }(y)-d_{\Theta ^{c}}(y), \end{aligned}$$

where \(d_{\Theta }(y):=\inf _{\theta \in \Theta }\Vert y-\theta \Vert \), \(d_{\emptyset }(y):=+\infty \) and \(\Vert y\Vert \) denotes the norm of y in Y.

Proposition 4.3

[41] Let \(\Theta \subseteq Z\) be nonempty and \(\Theta \ne Z\). Then, the following hold:

(i) \(\Delta _{\Theta }\) is real-valued and Lipschitz continuous;

(ii) \(\Delta _{\Theta }(\theta )<0\) for all \(\theta \in \mathrm{int}\,\Theta \);

(iii) \(\Delta _{\Theta }(\theta )=0\) for all \(\theta \in \partial \,\Theta \);

(iv) \(\Delta _{\Theta }(\theta )>0\) for all \(\theta \in \mathrm{int}\,\Theta ^{c}\).
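As a simple one-dimensional illustration of Definition 4.2 and of the sign pattern in Proposition 4.3 (this worked example is ours, not taken from [40, 41]), take \(Y=\mathbb {R}\) and \(\Theta =\mathbb {R}_{+}\). Then

$$\begin{aligned} \Delta _{\mathbb {R}_{+}}(y)=d_{\mathbb {R}_{+}}(y)-d_{]-\infty ,0[}(y)=\max \{-y,0\}-\max \{y,0\}=-y,\quad \forall y\in \mathbb {R}, \end{aligned}$$

which is negative on \(\mathrm{int}\,\mathbb {R}_{+}=]0,+\infty [\), vanishes at the boundary point 0 and is positive on \(\mathrm{int}\,\Theta ^{c}=]-\infty ,0[\), in accordance with Proposition 4.3(ii)-(iv).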

In [26], the authors introduced two classes of separation functions, which unify the known linear and nonlinear separation functions in the literature. Now, by means of the oriented distance function \(\Delta \) and the improvement set E, we consider the following nonlinear separation functions, which are different from the ones in the literature:

$$\begin{aligned} w_{1}(u,v;\phi ):=-\Delta _{\text {cl}\,E}(u)+\phi (v), \end{aligned}$$

where \((u,v)\in Y\times Z\) and \(\phi \in \Pi _{1}:=D^{*}\);

$$\begin{aligned} w_{2}(u,v;\varphi ,\lambda ):=\varphi (u)-\lambda \Delta _{D}(v), \end{aligned}$$

where \((u,v)\in Y\times Z\) and \((\varphi ,\lambda )\in \Pi _{2} :=(E^{*}\times \mathbb {R}_{+})\backslash \{(0,0)\}\).
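The level-set requirements of Definition 4.1 can be tested numerically on concrete data. The Python sketch below is purely illustrative: it fixes \(E=\{u\in \mathbb {R}^{2}:u_{1}+u_{2}\ge 1\}\), \(D=\mathbb {R}_{+}^{2}\) and specific \(\phi \in D^{*}\), \(\varphi \in E^{*}\), \(\lambda \ge 0\), uses the elementary closed forms of the oriented distance to a closed half-space and to the nonnegative orthant, and checks that \(w_{1}\) and \(w_{2}\) are nonnegative on sampled points of \(\mathcal {H}=E\times D\), as required by Definition 4.1(i).

```python
# Illustration only: E = {u in R^2 : u1 + u2 >= 1}, C = R^2_+, Z = R^2, D = R^2_+,
# phi(v) = v1 + 2*v2 in D*, varphi(u) = u1 + u2 in E*, lambda = 1.
import numpy as np
rng = np.random.default_rng(0)

def delta_halfspace(y, a, b):
    """Oriented distance to the closed half-space {y : <a, y> >= b}: (b - <a,y>)/||a||."""
    a, y = np.asarray(a, dtype=float), np.asarray(y, dtype=float)
    return (b - a @ y) / np.linalg.norm(a)

def delta_orthant(v):
    """Oriented distance to R^m_+: negative in the interior, zero on the boundary."""
    v = np.asarray(v, dtype=float)
    return np.linalg.norm(np.minimum(v, 0.0)) if v.min() <= 0 else -v.min()

def w1(u, v, phi):            # w1(u, v; phi) = -Delta_clE(u) + phi(v)
    return -delta_halfspace(u, (1.0, 1.0), 1.0) + phi @ np.asarray(v)

def w2(u, v, varphi, lam):    # w2(u, v; varphi, lambda) = varphi(u) - lambda * Delta_D(v)
    return varphi @ np.asarray(u) - lam * delta_orthant(v)

phi, varphi, lam = np.array([1.0, 2.0]), np.array([1.0, 1.0]), 1.0
for _ in range(1000):         # sample (u, v) in H = E x D
    u = rng.uniform(-5, 5, 2)
    u = u if u.sum() >= 1 else u + (1 - u.sum()) / 2   # push u into E
    v = rng.uniform(0, 5, 2)
    assert w1(u, v, phi) >= 0 and w2(u, v, varphi, lam) >= 0
print("Definition 4.1(i) holds at all sampled points of H for w1 and w2")
```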

Remark 4.3

If either \(E=C\backslash \{0\}\) or \(E=\mathrm{int}\,C\), then \(w_{1}\) becomes \(w(u,v;\phi ):=-\Delta _{\text {cl}\,C}(u)+\phi (v)\), which was studied in [22]. We observe that, in the case where \(Y=\mathbb {R}\), \(Z=\mathbb {R}^{m}\) and \(E=\mathbb {R}_{+}\backslash \{0\}\), \(w_{2}\) reduces to another version, which was discussed in [21].

Theorem 4.5

The nonlinear functions \(w_{1}\) and \(w_{2}\) are weak separation functions.

Proof

The proofs that \(w_{1}\) and \(w_{2}\) are weak separation functions are similar to the proofs of Proposition 3.1(i) in [22] and Proposition 3.1(i) in [21], respectively. \(\square \)

Definition 4.3

The sets \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) (or \(\mathcal {H}_{ie}\)) admit a nonlinear separation with respect to \(w_{1}\), iff there exists \(\bar{\phi }\in D^{*}\) such that

$$\begin{aligned} -\Delta _{\text {cl}\,E}(u)+\bar{\phi }(v)\le 0,\quad \forall (u,v) \in \mathcal {K}_{(\bar{x},\bar{y})}. \end{aligned}$$
(21)

Theorem 4.6

Let \(\bar{x}\in S\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). If there exists \(\bar{\phi }\in D^{*}\) such that \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}_{ie}\) admit a nonlinear separation with respect to \(w_{1}\), then \(\bar{x}\in V_{w}^{E}(F)\).

Proof

Since \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}_{ie}\) admit a nonlinear separation with respect to \(w_{1}\), there exists \(\bar{\phi }\in D^{*}\) such that (21) holds. On the other hand, for each \((u,v)\in \mathcal {H}_{ie}\), from Proposition 4.3(ii), (iii) and \(\bar{\phi }\in D^{*}\), we have \(-\Delta _{\text {cl}\,E}(u)+\bar{\phi }(v)>0\). Thus, \(\mathcal {K}_{(\bar{x},\bar{y})}\cap \mathcal {H}_{ie}=\emptyset \). By (14), we have \(\bar{x}\in V_{w}^{E}(F)\). \(\square \)

Definition 4.4

The sets \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{2}\), iff there exists \((\bar{\varphi },\bar{\lambda })\in E^{*}\times \mathbb {R}_{+}\) with \((\bar{\varphi },\bar{\lambda })\ne (0,0)\) such that

$$\begin{aligned} \bar{\varphi }(u)-\bar{\lambda }\Delta _{D}(v)\le 0, \quad \forall (u,v)\in \mathcal {K}_{(\bar{x},\bar{y})}. \end{aligned}$$
(22)

If \((\bar{\varphi },\bar{\lambda })\in E^{\sharp }\times \mathbb {R}_{+}\) in (22), then the separation is said to be regular.

Theorem 4.7

Let \(\bar{x}\in S\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). If \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear regular separation with respect to \(w_{2}\), then \(\bar{x}\in V^{E}(F)\).

Proof

Since \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear regular separation with respect to \(w_{2}\), there exists \((\bar{\varphi },\bar{\lambda })\in E^{\sharp }\times \mathbb {R}_{+}\) such that (22) holds. For any \((u,v)\in \mathcal {H}\), by Proposition 4.3(ii), (iii) and \(\bar{\varphi }\in E^{\sharp }\), we obtain \(\bar{\varphi }(u)-\bar{\lambda }\Delta _{D}(v)>0\). Thus, \(\mathcal {K}_{(\bar{x},\bar{y})}\cap \mathcal {H}=\emptyset \), which implies that \(\bar{x}\in V^{E}(F)\). \(\square \)

We define the generalized Lagrangian set-valued map \(\mathcal {L}_{1}: A\times D^{*}\rightrightarrows \mathbb {R}\) as

$$\begin{aligned} \mathcal {L}_{1}(x,\phi ):=\Delta _{\text {cl}\,E}(\bar{y}-F(x)) -\sigma _{-G(x)}(\phi ),\quad \forall (x,\phi )\in A\times D^{*}, \end{aligned}$$

where \(\Delta _{\text {cl}\,E}(\bar{y}-F(x)):=\{\Delta _{\text {cl}\,E}(\bar{y}-y):y\in F(x)\}\) and G is compact valued.
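To make the set-valued nature of \(\mathcal {L}_{1}\) concrete, the following purely illustrative Python sketch evaluates \(\mathcal {L}_{1}(x,\phi )\) on hypothetical finite data (the same toy problem as in the sketch after (14), with \((\bar{x},\bar{y})=(1,(0,3))\), \(\text {cl}\,E=\{u_{1}+u_{2}\ge 1\}\), \(D=\mathbb {R}_{+}\) and \(\phi =1\in D^{*}\)); the closed form of the oriented distance to a half-space is the same as in the previous sketch.

```python
# Illustration only, hypothetical data: ybar = (0,3), F(0) = {(2,2)}, F(1) = {(0,3)},
# G(0) = G(1) = {-1}, cl E = {u in R^2 : u1 + u2 >= 1}, D = R_+, phi = 1 in D*.
import math

F = {0: [(2.0, 2.0)], 1: [(0.0, 3.0)]}
G = {0: [-1.0], 1: [-1.0]}
ybar, phi = (0.0, 3.0), 1.0

def delta_clE(u):                    # oriented distance to cl E = {u1 + u2 >= 1}
    return (1.0 - (u[0] + u[1])) / math.sqrt(2.0)

def sigma_minus_G(x, phi):           # support functional of -G(x) at phi
    return max(phi * (-z) for z in G[x])

def L1(x, phi):                      # the set {Delta_clE(ybar - y) - sigma_{-G(x)}(phi) : y in F(x)}
    return {delta_clE((ybar[0] - y[0], ybar[1] - y[1])) - sigma_minus_G(x, phi) for y in F[x]}

print(L1(0, phi))   # {sqrt(2) - 1}   ~ {0.414}
print(L1(1, phi))   # {1/sqrt(2) - 1} ~ {-0.293}
```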

It is worth noticing that the generalized Lagrangian set-valued function \(\mathcal {L}_{1}\) is different from the ones in the literature. Similar to Definition 4.2 in [35], we introduce the following definition.

Definition 4.5

\((\bar{x},\bar{\phi })\) is a saddle point of \(\mathcal {L}_{1}(x,\phi )\) on \(A\times D^{*}\), iff there exists \(y'\in F(\bar{x})\) such that

$$\begin{aligned}&\Delta _{\text {cl}\,E}(\bar{y}-y')-\sigma _{-G(\bar{x})}(\phi )\nonumber \\&\quad \le \Delta _{\text {cl}\,E}(\bar{y}-y')-\sigma _{-G(\bar{x})}(\bar{\phi })\nonumber \\&\quad \le \Delta _{\text {cl}\,E}(\bar{y}-y)-\sigma _{-G(x)}(\bar{\phi }), \quad \forall x\in A,\forall y\in F(x),\forall \phi \in D^{*}. \end{aligned}$$
(23)

The following theorem gives a necessary condition for the nonlinear separation between \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\).

Theorem 4.8

Let \(\bar{x}\in S\) and \((\bar{x},\bar{y})\in \mathrm{gr}(F)\). Assume that \(F(\bar{x})\) is E-bounded from above by \(\bar{y}\) and G is compact valued. If \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{1}\), then there exists \(\bar{\phi }\in D^{*}\) such that \((\bar{x},\bar{\phi })\) is a saddle point of \(\mathcal {L}_{1}(x,\phi )\) on \(A\times D^{*}\).

Proof

Suppose that \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{1}\). Then, there exists \(\bar{\phi }\in D^{*}\) such that (21) holds, i.e., for any \(x\in A\), \(y\in F(x)\) and \(z\in G(x)\),

$$\begin{aligned} -\Delta _{\text {cl}\,E}(\bar{y}-y)+\bar{\phi }(-z)\le 0, \end{aligned}$$

or, equivalently,

$$\begin{aligned} -\Delta _{\text {cl}\,E}(\bar{y}-y)+\sigma _{-G(x)}(\bar{\phi }) \le 0,\quad \forall x\in A,\forall y\in F(x). \end{aligned}$$
(24)

Taking \(x=\bar{x}\) and \(y=\bar{y}\) in (24), we have

$$\begin{aligned} -\Delta _{\text {cl}\,E}(0)+\sigma _{-G(\bar{x})}(\bar{\phi })\le 0. \end{aligned}$$

Since \(F(\bar{x})\) is E-bounded from above by \(\bar{y}\), we obtain \(F(\bar{x})-\bar{y}\subseteq -\text {cl}\,E\). Hence, there exists \(y'\in F(\bar{x})\) such that \(\bar{y}-y'\in \mathrm{cl}\,E\). According to Proposition 4.3(ii), (iii), we observe that

$$\begin{aligned} \Delta _{\text {cl}\,E}(\bar{y}-y')\le 0. \end{aligned}$$
(25)

Moreover, since \(\bar{y}\in F(\bar{x})\) and \(F(\bar{x})-\bar{y}\subseteq -\text {cl}\,E\), we have \(0\in \text {cl}\,E\) and hence \(\Delta _{\text {cl}\,E}(0)\le 0\). Therefore, one has \(\sigma _{-G(\bar{x})}(\bar{\phi })\le 0\). On the other hand, \(\bar{x}\in S\) means that \(\bar{x}\in A\) and \(G(\bar{x})\cap (-D)\ne \emptyset \). It follows that \(\inf _{z\in G(\bar{x})}\bar{\phi }(z)\le 0\) and \(\inf _{z\in G(\bar{x})}\phi (z)\le 0\), which imply that \(\sigma _{-G(\bar{x})}(\bar{\phi })\ge 0\) and \(\sigma _{-G(\bar{x})}(\phi )\ge 0\), respectively. Consequently,

$$\begin{aligned} \sigma _{-G(\bar{x})}(\bar{\phi })=0. \end{aligned}$$
(26)

It follows from (24)–(26) that

$$\begin{aligned} \Delta _{\text {cl}\,E}(\bar{y}-y')-\sigma _{-G(\bar{x})}(\bar{\phi }) \le 0 \le \Delta _{\text {cl}\,E}(\bar{y}-y)-\sigma _{-G(x)}(\bar{\phi }),\ \quad \forall x\in A,\forall y\in F(x), \end{aligned}$$

which is the second inequality of (23). Moreover, from \(\sigma _{-G(\bar{x})}(\phi )\ge 0\) and \(\sigma _{-G(\bar{x})}(\bar{\phi })=0\), we have

$$\begin{aligned} \Delta _{\text {cl}\,E}(\bar{y}-y')-\sigma _{-G(\bar{x})}(\phi ) \le \Delta _{\text {cl}\,E}(\bar{y}-y')-\sigma _{-G(\bar{x})}(\bar{\phi }), \quad \forall \phi \in D^{*}, \end{aligned}$$

which is the first inequality of (23). Therefore, \((\bar{x},\bar{\phi })\) is a saddle point of \(\mathcal {L}_{1}(x,\phi )\) on \(A\times D^{*}\). \(\square \)

To obtain the sufficient condition for the nonlinear separation between \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\), it is necessary to strengthen the conditions of Theorem 4.8.

Theorem 4.9

Let \((\bar{x},\bar{y})\in \mathrm{gr}(F)\) and \(\bar{x}\in S\). Assume that \(F(\bar{x})-\bar{y}\subseteq -\partial (\mathrm{cl}\,E)\) and G is compact valued. If there exists \(\bar{\phi }\in D^{*}\) such that \((\bar{x},\bar{\phi })\) is a saddle point of \(\mathcal {L}_{1}(x,\phi )\) on \(A\times D^{*}\), then \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{1}\).

Proof

By the first inequality of (23), we obtain

$$\begin{aligned} \sigma _{-G(\bar{x})}(\phi )\ge \sigma _{-G(\bar{x})} (\bar{\phi }),\quad \forall \phi \in D^{*}. \end{aligned}$$
(27)

Taking \(\phi =0\) in (27), we get \(\sigma _{-G(\bar{x})}(\bar{\phi })\le 0\). Due to \(\bar{x}\in S\), we have \(\sigma _{-G(\bar{x})}(\bar{\phi })\ge 0\). So,

$$\begin{aligned} \sigma _{-G(\bar{x})}(\bar{\phi })=0. \end{aligned}$$
(28)

Since \(F(\bar{x})-\bar{y}\subseteq -\partial (\text {cl}\,E)\), we have \(\bar{y}-y'\in \partial (\text {cl}\,E)\) for every \(y'\in F(\bar{x})\), and it then follows from Proposition 4.3(iii) that

$$\begin{aligned} \Delta _{\text {cl}\,E}(\bar{y}-y')=0. \end{aligned}$$
(29)

From (28), (29) and the second inequality of (23), we have

$$\begin{aligned} -\Delta _{\text {cl}\,E}(\bar{y}-y)+\sigma _{-G(x)}(\bar{\phi }) \le 0,\quad \forall x\in A,\forall y\in F(x), \end{aligned}$$
(30)

or, equivalently,

$$\begin{aligned} -\Delta _{\text {cl}\,E}(\bar{y}-y)+\bar{\phi }(-z) \le 0,\quad \forall x\in A,\forall y\in F(x),\forall z\in G(x). \end{aligned}$$

Therefore, \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{1}\). \(\square \)

We now define the generalized Lagrangian set-valued function \(\mathcal {L}_{2}: A\times ({E}^{*}\times \mathbb {R}_{+}) \backslash \{(0,0)\}\rightrightarrows \mathbb {R}\) as

$$\begin{aligned} \mathcal {L}_{2}(x,\varphi ,\lambda ):=\inf \limits _{y\in F(x)}\varphi (y) +\lambda \Delta _{D}(-G(x)),\quad \forall x\in A,\forall (\varphi ,\lambda ) \in (E^{*}\times \mathbb {R}_{+})\backslash \{(0,0)\}, \end{aligned}$$

where F is compact valued and \(\Delta _{D}(-G(x)) :=\{\Delta _{D}(v):v\in -G(x)\}\).
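Analogously, \(\mathcal {L}_{2}\) can be evaluated on the same hypothetical finite data, using \(\Delta _{\mathbb {R}_{+}}(t)=-t\) (see the one-dimensional example after Proposition 4.3) and \(\bar{\varphi }(y)=y_{1}+y_{2}\in E^{*}\); the sketch is only an illustration of the definition.

```python
# Illustration only, hypothetical data as before; D = R_+, so Delta_D(t) = -t.
F = {0: [(2.0, 2.0)], 1: [(0.0, 3.0)]}
G = {0: [-1.0], 1: [-1.0]}
varphi = lambda y: y[0] + y[1]           # an element of E* for E = {u1 + u2 >= 1}

def L2(x, lam):
    # L2(x, varphi, lambda) = inf_{y in F(x)} varphi(y) + lambda * Delta_D(-G(x)), a set of reals
    base = min(varphi(y) for y in F[x])  # F(x) is finite, so the infimum is attained
    return {base + lam * (-v) for v in (-z for z in G[x])}   # Delta_{R_+}(v) = -v for v in -G(x)

print(L2(0, 1.0))   # {3.0}
print(L2(1, 1.0))   # {2.0}
```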

Definition 4.6

Let \(\bar{\varphi }\in E^{*}\). \((\bar{x},\bar{\lambda })\) is a saddle point of \(\mathcal {L}_{2}(x,\bar{\varphi },\lambda )\) on \(A\times \mathbb {R}_{+}\), iff there exists \(\bar{z}\in G(\bar{x})\) such that

$$\begin{aligned} \inf \limits _{y\in F(\bar{x})}\bar{\varphi }(y)+\lambda \Delta _{D}(-\bar{z})&\le \inf \limits _{y\in F(\bar{x})}\bar{\varphi }(y)+\bar{\lambda } \Delta _{D}(-\bar{z})\nonumber \\&\le \inf \limits _{y\in F(x)}\bar{\varphi }(y)+\lambda \Delta _{D}(-z), \quad \forall x\in A,\forall z\in G(x),\forall \lambda \in \mathbb {R}_{+}. \end{aligned}$$
(31)

Theorem 4.10

Let \((\bar{x},\bar{y})\in \mathrm{gr}(F)\), \(G(\bar{x})\subseteq -\partial D\) and \(\inf _{y\in F(\bar{x})}\bar{\varphi }(y)=\bar{\varphi }(\bar{y})\) for a fixed \(\bar{\varphi }\in E^{*}\). Then, the following statements are equivalent:

(i) \(\bar{x}\in S\), and \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{2}\);

(ii) \((\bar{x},\bar{\lambda })\) is a saddle point of \(\mathcal {L}_{2}(x,\bar{\varphi },\lambda )\) on \(A\times \mathbb {R}_{+}\).

Proof

(i)\(\Rightarrow \)(ii). Suppose that \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{2}\). Then, for each \(x\in A\), \(y\in F(x)\) and \(z\in G(x)\), we have

$$\begin{aligned} \bar{\varphi }(\bar{y}-y)-\bar{\lambda }\Delta _{D}(-z)\le 0, \end{aligned}$$

which implies that

$$\begin{aligned} \sup \limits _{y\in F(x)}\bar{\varphi }(-y)-\bar{\lambda }\Delta _{D}(-z) \le \bar{\varphi }(-\bar{y}),\quad \forall x\in A,\forall z\in G(x). \end{aligned}$$
(32)

From \(G(\bar{x})\subseteq -\partial D\) and Proposition 4.3(iii), we obtain \(\Delta _{D}(-\bar{z})=0\) for \(\bar{z}\in G(\bar{x})\). This, together with \(\inf _{y\in F(\bar{x})}\bar{\varphi }(y)=\bar{\varphi }(\bar{y})\), gives

$$\begin{aligned} \bar{\varphi }(-\bar{y})=\sup \limits _{y\in F(\bar{x})} \bar{\varphi }(-y)-\bar{\lambda }\Delta _{D}(-\bar{z}). \end{aligned}$$
(33)

By (32) and (33), we deduce that

$$\begin{aligned} \inf \limits _{y\in F(\bar{x})}\bar{\varphi }(y) +\bar{\lambda }\Delta _{D}(-\bar{z}) \le \inf \limits _{y\in F(x)}\bar{\varphi }(y) +\bar{\lambda }\Delta _{D}(-z),\quad \forall x\in A,\forall z\in G(x), \end{aligned}$$

which is the second inequality of (31). It follows from \(\Delta _{D}(-\bar{z})=0\) that

$$\begin{aligned} \inf \limits _{y\in F(\bar{x})}\bar{\varphi }(y)+\lambda \Delta _{D}(-\bar{z}) \le \inf \limits _{y\in F(\bar{x})}\bar{\varphi }(y)+\bar{\lambda } \Delta _{D}(-\bar{z}),\quad \forall \lambda \in \mathbb {R}_{+}, \end{aligned}$$

which is the first inequality of (31).

(ii)\(\Rightarrow \)(i). Assume that \((\bar{x},\bar{\lambda })\) is a saddle point of \(\mathcal {L}_{2}(x,\bar{\varphi },\lambda )\) on \(A\times \mathbb {R}_{+}\). Then, there exists \(\bar{z}\in G(\bar{x})\) such that

$$\begin{aligned}&\sup \limits _{y\in F(\bar{x})}\bar{\varphi }(-y)-\lambda \Delta _{D}(-\bar{z})\nonumber \\&\quad \ge \sup \limits _{y\in F(\bar{x})}\bar{\varphi }(-y)-\bar{\lambda } \Delta _{D}(-\bar{z})\nonumber \\&\quad \ge \sup \limits _{y\in F(x)}\bar{\varphi }(-y) -\bar{\lambda }\Delta _{D}(-z),\quad \forall x\in A, \forall z\in G(x),\forall \lambda \in \mathbb {R}_{+}. \end{aligned}$$
(34)

We claim that \(\bar{x}\in S\). Suppose, on the contrary, that \(\bar{x}\notin S\). Then, \(G(\bar{x})\cap (-D)=\emptyset \). It follows from Proposition 4.3(iv) that \(\Delta _{D}(-\bar{z})>0\). Letting \(\lambda \rightarrow +\infty \), we get \(\lambda \Delta _{D}(-\bar{z})\rightarrow +\infty \), which contradicts the first inequality of (31). Since \(G(\bar{x})\subseteq -\partial D\), we get \(\Delta _{D}(-\bar{z})=0\) for \(\bar{z}\in G(\bar{x})\). From the second inequality of (34), \(\Delta _{D}(-\bar{z})=0\) and \(\sup _{y\in F(\bar{x})}\bar{\varphi }(-y)=\bar{\varphi }(-\bar{y})\), one has

$$\begin{aligned} \sup \limits _{y\in F(x)}\bar{\varphi }(-y) -\bar{\lambda }\Delta _{D}(-z) +\bar{\varphi }(\bar{y})\le 0, \end{aligned}$$

which implies that for any \(x\in A\), \( y\in F(x)\) and \( z\in G(x)\),

$$\begin{aligned} \bar{\varphi }(-y)-\bar{\lambda }\Delta _{D}(-z)+\bar{\varphi }(\bar{y})\le 0, \end{aligned}$$

i.e.,

$$\begin{aligned} \bar{\varphi }(\bar{y}-y)-\bar{\lambda }\Delta _{D}(-z) \le 0,\quad \forall x\in A,\forall y\in F(x),\forall z\in G(x). \end{aligned}$$

Therefore, \(\mathcal {K}_{(\bar{x},\bar{y})}\) and \(\mathcal {H}\) admit a nonlinear separation with respect to \(w_{2}\). \(\square \)

5 Conclusions

In this paper, the concept of E-quasi-convexity of set-valued maps is proposed. Some examples are given to illustrate the relations between the E-quasi-convexity and the C-quasi-convexity of set-valued maps. By considering generalized level sets of set-valued maps, Assumption B and E-quasi-convexity, scalarization results in the sense of the E-optimal solution and the weak E-optimal solution of SVOP are established, respectively. By virtue of two kinds of nonlinear weak separation functions characterized by the nonlinear scalarization function \(\Delta \) and the improvement set E, we introduce two classes of generalized Lagrangian set-valued functions and establish some optimality conditions. We have shown that the existence of a nonlinear separation is equivalent to the existence of a saddle point of the generalized Lagrangian set-valued functions. In future work, we will consider the generalized Lagrangian set-valued functions and their relations with the scalarization techniques analyzed in Sect. 3.