1 Introduction

The Image Space (IS) approach was initiated in [8] and carried on in several other articles; see [9, 13, 14, 25, 26] and the references therein. The (IS) approach has proved to be a fruitful method in many areas of optimization theory (e.g., optimality conditions, existence of solutions, duality, vector variational inequalities, and vector equilibrium problems); see [17, 13, 14]. Moreover, it has been shown that several theoretical aspects of constrained extremum problems, such as duality, penalty methods, regularity, and Lagrangian-type optimality, can be developed within the (IS) framework. In this approach, optimality for the constrained extremum problem is established by showing that two suitable sets are disjoint, by means of a linear or nonlinear separation.

Besides the direction of nonlinear/nonconvex separation in image spaces, there are other interesting approaches to set-valued optimization, based on generalized differentiation and the extremal principle; see the two-volume monograph [23] and [24].

Here, we focus our attention on nonlinear separation functions for the constrained extremum problem. We extend the nonlinear regular weak separation functions discussed in [11, 16] and [22, 28] to multivalued optimization problems. We also define a new nonlinear (regular) weak separation function based on the oriented distance function \(\triangle \) and derive some optimality conditions. In particular, the relation between saddle points of the generalized Lagrangian functions and optimality for the constrained extremum problem is deduced.

The paper is organized as follows: in the remainder of Sect. 1, we present some basic concepts and the different types of solutions of a vector optimization problem. In Sect. 2, we recall the main concepts of image space analysis and establish some properties of the image problem. Sect. 3 establishes the equivalence between the existence of a nonlinear separation function and a saddle point condition for the generalized Lagrangian function.

Let X be a topological vector space, let Y and Z be two normed linear spaces with normed dual spaces \(Y^{*}\) and \(Z^{*},\) respectively, and let \(F:U\rightrightarrows Y\) be a multifunction defined on a nonempty subset U of X with values in Y. The set

$$\begin{aligned} \text{ dom }~F : =\{x: F(x)\ne \emptyset \} \end{aligned}$$

is called the domain of F,  and the set

$$\begin{aligned} \text{ gr }~F : =\{(x,y):x\in \text{ dom }~F,~ y\in F(x)\}= \bigcup _{x \in \text{ dom }~F} [ \{x\} \times F(x)] \end{aligned}$$

is called the graph of F. Let \(C\subset Y\) and \(D\subset Z\) be pointed, closed and convex cones with nonempty interiors. The space of continuous linear operators from Z to Y is denoted by L(Z, Y), and

$$\begin{aligned} L_{+}(Z, Y):= \{T \in L(Z, Y) :~~ T(D)\subseteq C \}. \end{aligned}$$

The positive dual cone of C is defined by

$$\begin{aligned} C^{+}: =\{ p \in Y^{*} : p(y) \ge 0, ~ \forall y \in C\}, \end{aligned}$$

and the set of all strictly positive linear functionals in \(C^{+}\) is

$$\begin{aligned}C^{+i}: =\{ p\in Y^{*} : p(y) > 0, ~ \forall y \in C{\setminus }\{0\}\}. \end{aligned}$$

Note that if C is a convex cone in Y, then \(\text{ int }~C^+ \subseteq C^{+i}\), and equality holds if \(\text{ int }~C^{+}\ne \emptyset \). A partial order \(\le _{C}\) on Y is defined by

$$\begin{aligned} y_{1} \le _{C} y_{2} \;\; \Leftrightarrow \;\; y_{2}-y_{1} \in C,~~ \forall y_{1}, y_{2} \in Y. \end{aligned}$$

For simplicity, throughout this article, we denote \( {\buildrel _{\circ }\over {\mathrm {C}}}\) \(: = \mathrm {int}~C\) and \(C_0: = C {\setminus } \{0\}.\)
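To fix ideas, consider \(Y=\mathbb {R}^2\) and \(C=\mathbb {R}^2_{+}\): then \(C^{+}=\mathbb {R}^2_{+}\) and \(C^{+i}=\text{ int }~\mathbb {R}^2_{+}\). The following sketch (a toy illustration of our own; all identifiers are ours) checks dual-cone membership through the generators of C:

```python
# Dual cone membership for C = R^2_+ (toy illustration; names are ours).
def in_dual(p, gens):
    # p lies in C^+ iff <p, y> >= 0 for every generator y of the cone C
    return all(p[0] * g[0] + p[1] * g[1] >= 0 for g in gens)

gens = [(1.0, 0.0), (0.0, 1.0)]        # C = R^2_+ is generated by the unit vectors

assert in_dual((2.0, 3.0), gens)       # (2, 3) belongs to C^+
assert not in_dual((1.0, -0.5), gens)  # a negative component leaves C^+
# p = (1, 0) is in C^+ but not in C^{+i}: it vanishes on y = (0, 1) in C \ {0}
p, y = (1.0, 0.0), (0.0, 1.0)
assert in_dual(p, gens) and p[0] * y[0] + p[1] * y[1] == 0.0
```

Here \(p=(1,0)\) witnesses that the inclusion \(C^{+i}\subseteq C^{+}\) is strict: it is a positive functional that vanishes on the nonzero direction \((0,1)\in C\).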

Definition 1.1

Let U be a convex subset of X. A multifunction \(F:U \rightrightarrows Y\) is said to be a C-multifunction on U iff, for all \(x_{1}, x_{2} \in U\) and \(t \in [0,1]\), we have

$$\begin{aligned} tF(x_{1})+(1-t)F(x_{2})\subseteq F(tx_{1} + (1-t)x_{2})+C. \end{aligned}$$
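For a concrete instance (a toy example of our own, not taken from the paper): with \(X=Y=\mathbb {R}\), \(C=\mathbb {R}_{+}\) and the interval-valued map \(F(x)=[x^2, x^2+1]\), the defining inclusion reduces to a comparison of lower endpoints, which holds by convexity of \(x\mapsto x^2\). A sampling sketch:

```python
# F(x) = [x^2, x^2 + 1] as an R_+-multifunction on R (toy instance; names are ours).
def F(x):
    return (x * x, x * x + 1.0)        # interval represented by its endpoints

def is_C_multifunction_sample(xs, ts):
    # Check t*F(x1) + (1-t)*F(x2) ⊆ F(t*x1 + (1-t)*x2) + R_+ on a sample.
    # Since [a, b] + R_+ = [a, +inf), the inclusion means the combined lower
    # endpoint dominates the lower endpoint at the convex combination.
    for x1 in xs:
        for x2 in xs:
            for t in ts:
                a1, a2 = F(x1)[0], F(x2)[0]
                a = F(t * x1 + (1 - t) * x2)[0]
                if t * a1 + (1 - t) * a2 < a - 1e-12:
                    return False
    return True

xs = [i / 4 - 2 for i in range(17)]    # grid on [-2, 2]
ts = [j / 10 for j in range(11)]       # t in [0, 1]
assert is_C_multifunction_sample(xs, ts)
```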

In the sequel, we suppose that \(F:U\rightrightarrows Y\) is a multifunction defined on a nonempty convex subset U of X with values in Y.

Definition 1.2

Let \(F:U\rightrightarrows Y\) and \(G:U\rightrightarrows Z\) be two multifunctions with nonempty values. We consider the following vector optimization problem:

$$\begin{aligned} \text{ min }_{C}~ F(x) ~~~~ s.t.~~~~ \quad x\in R: =\{x\in U: G(x)\cap (-D) \ne \emptyset \}, \end{aligned}$$
(1)

where R, which we assume to be nonempty, is called the feasible region of Problem (1).

Definition 1.3

A point \(\bar{x}\in R\) is called a minimum point of Problem (1) iff

$$\begin{aligned} \exists \bar{y}\in F(\bar{x}) ~~~ s.t. ~~~ (F(R))\cap (\bar{y}-C_0)=\emptyset . \end{aligned}$$

In this case we say that \((\bar{x}, \bar{y})\) is a minimizer for Problem (1). A point \(\bar{x}\in R\) is called a weak minimum point of Problem (1) iff

$$\begin{aligned} \exists \bar{y}\in F(\bar{x}) \quad \text{ s.t. } \quad (F(R))\cap (\bar{y}-{\buildrel _{\circ } \over {\mathrm {C}}})=\emptyset . \end{aligned}$$

In this case we say that \((\bar{x}, \bar{y})\) is a weak minimizer for Problem (1).
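The two notions already differ in \(\mathbb {R}^2\) with \(C=\mathbb {R}^2_{+}\): if the image of the feasible set is the segment \(\{(t,0): t\in [0,1]\}\), then every point of the segment is a weak minimum, while only \((0,0)\) is a minimum. A discretized check of this toy instance (identifiers are ours):

```python
# Minimum vs. weak minimum on the image segment {(t, 0) : t in [0, 1]},
# with C = R^2_+ (toy data of our choosing).
image = [(t / 100, 0.0) for t in range(101)]

def in_C0(d):       # d in C_0 = R^2_+ \ {0}
    return d[0] >= 0 and d[1] >= 0 and (d[0] > 0 or d[1] > 0)

def in_intC(d):     # d in int C
    return d[0] > 0 and d[1] > 0

def is_min(ybar):
    return not any(in_C0((ybar[0] - y[0], ybar[1] - y[1])) for y in image)

def is_weak_min(ybar):
    return not any(in_intC((ybar[0] - y[0], ybar[1] - y[1])) for y in image)

assert is_min((0.0, 0.0))
assert not is_min((0.5, 0.0))     # (0.5, 0) - (0.3, 0) = (0.2, 0) lies in C_0
assert is_weak_min((0.5, 0.0))    # no image point is strictly below it in both coordinates
```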

The following result presents a necessary and sufficient condition for a vector to be a minimum point or a weak minimum point of Problem (1).

Lemma 1.1

[20] Let \(\bar{x}\in R\) and \((\bar{x}, \bar{y})\in \mathrm {gr}~F\). Then

  1. (i)

        \((\bar{x}, \bar{y})\) is a minimizer of Problem (1) iff

    $$\begin{aligned} (\bar{y} - C_0, -D)\cap (F(x),G(x))=\emptyset \quad \forall x\in U. \end{aligned}$$
  2. (ii)

        \((\bar{x}, \bar{y})\) is a weak minimizer of Problem (1) iff

    $$\begin{aligned} (\bar{y} -{\buildrel _{\circ } \over {\mathrm {C}}} , -D)\cap (F(x),G(x)) =\emptyset \quad \forall x\in U. \end{aligned}$$

2 Image space analysis

In this section, we develop the image space analysis for vector optimization with multifunction constraints and multifunction objectives. Let \(\bar{x}\in R\) and \(\bar{p}: =(\bar{x}, \bar{y})\in \mathrm {gr}~ F\). We introduce the multifunction \(A_{\bar{p}}:U \rightrightarrows Y\times Z, \) defined by

$$\begin{aligned} A_{\bar{p}}(x):=\{(\bar{y}-y,-z): ~y \in F(x)~,~z \in G(x) \},~ x\in U, \end{aligned}$$

and we associate the following sets with \(\bar{p}\in \mathrm {gr}~ F\):

$$\begin{aligned} \mathcal {H} = C_{0}\times D, \qquad \mathcal {K}_{\bar{p}}= A_{\bar{p}}(U). \end{aligned}$$

The set \(\mathcal {K}_{\bar{p}}\) is called the image space associated with Problem (1). By Lemma 1.1, \(\bar{p} =(\bar{x}, \bar{y})\) is a minimizer of Problem (1) iff

$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H}=\emptyset , \end{aligned}$$
(2)

and \(\bar{p}= (\bar{x}, \bar{y})\) is a weak minimizer of Problem (1) iff

$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H}_{ic}=\emptyset , \end{aligned}$$
(3)

where, \(\mathcal {H}_{ic}={\buildrel _{\circ } \over {\mathrm {C}}} \times D.\)
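Conditions (2) and (3) are easy to test numerically for single-valued toy data (ours, for illustration): take \(U=\mathbb {R}\), \(F(x)=\{x^2\}\), \(G(x)=\{x-1\}\), \(C=D=\mathbb {R}_{+}\), so that \(R=(-\infty ,1]\) and \(\bar{x}=0\), \(\bar{y}=0\) is a minimizer. A grid-based sketch of (2):

```python
# Image set K_p for F(x) = {x^2}, G(x) = {x - 1}, C = D = R_+ (toy instance).
def A(pbar, x):
    ybar = pbar[1]
    y = x * x                  # F(x) = {x^2}
    z = x - 1.0                # G(x) = {x - 1}
    return (ybar - y, -z)

def meets_H(pbar, grid):
    # H = C_0 x D = (0, inf) x [0, inf); test K_p ∩ H on a grid of x-values
    return any(u > 0 and v >= 0 for (u, v) in (A(pbar, x) for x in grid))

grid = [i / 10 - 3 for i in range(61)]     # grid on [-3, 3]
assert not meets_H((0.0, 0.0), grid)       # (2) holds: xbar = 0 is a minimizer
assert meets_H((1.0, 1.0), grid)           # xbar = 1 is not: K_p meets H (e.g. at x = 0)
```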

Remark 2.1

In general, the image space \(\mathcal {K}_{\bar{p}}\) is not convex, even when F and G are a C-multifunction and a D-multifunction on the convex set U, respectively. To overcome this defect, we introduce the extended image space of \(\mathcal {K}_{\bar{p}}\) with respect to the cone \(\text{ cl }~ {\mathcal {H}}\), namely \(\mathcal {E }_{\bar{p}}=\mathcal {K}_{\bar{p}}-\text{ cl }~ {\mathcal {H}}.\) In fact, by imposing suitable convexity assumptions on F and G, we obtain the convexity of the extended image space.

Lemma 2.1

[6] Let \(F:U \rightrightarrows Y\) and \(G:U \rightrightarrows Z\) be a C-multifunction and a D-multifunction on the convex set U, respectively. Then the extended image space \( \mathcal {E }_{\bar{p}}=\mathcal {K}_{\bar{p}}-\text{ cl }~ {\mathcal {H}}\) is convex and

$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H} =\emptyset \Longleftrightarrow \mathcal {E}_{\bar{p}}\cap \mathcal {H}=\emptyset . \end{aligned}$$

Corollary 2.1

Let \(\bar{x}\in R.\) Then \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~ F\) is a minimizer of Problem (1) iff

$$\begin{aligned} \mathcal {E}_{\bar{p}}\cap \mathcal {H}=\emptyset . \end{aligned}$$
(4)

Remark 2.2

Let \(\mathcal {H}_{0}\) be the subset of \(\mathcal {H}\) defined by \(\mathcal {H}_{0}= C_0 \times \{0_{Z}\}.\) Then, by an argument similar to the proof of Proposition 2.1 in [12], we can deduce that (4) is equivalent to

$$\begin{aligned} \mathcal {E}_{\bar{p}}\cap \mathcal {H}_{0}=\emptyset . \end{aligned}$$
(5)

3 Nonlinear separation functions

Separation functions play an important role in optimality conditions for constrained optimization. In order to prove the disjunction of the two sets \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H},\) we will show that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) lie in two disjoint level sets of a linear or nonlinear separation function.

Definition 3.1

Let \(\Pi \) be a set of parameters and \(\mathcal {H} = C_0 \times D\). The class of all functions \(\omega :Y\times Z\times \Pi \longrightarrow \mathbb {R}\) such that

$$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{\ge 0}~~\omega (., .,\pi ),~~~~~ \forall \pi \in \Pi , \end{aligned}$$
(6)

and

$$\begin{aligned} {\bigcap }_{\pi \in \Pi }\text{ lev }_{> 0}~~\omega (., .,\pi )\subseteq \mathcal {H} \end{aligned}$$
(7)

is called the class of weak separation functions and is denoted by \(\mathbb {W}(\Pi )\), where \(\text{ lev }_{> 0}~~ \omega (.,.,{\pi }):=\{(u,v)\in Y\times Z : \omega (u,v,{\pi })> 0 \}\) denotes the strict positive-level set of \(\omega (.,., {\pi })\), and \(\text{ lev }_{\ge 0}\) is defined analogously.

Definition 3.2

The class of all the functions \(\omega :Y\times Z\times \Pi \longrightarrow \mathbb {R}\), such that

$$\begin{aligned} {\bigcap }_{\pi \in \Pi }\text{ lev }_{> 0}~~\omega (., .,\pi ) = \mathcal {H}, \end{aligned}$$
(8)

is called the class of regular weak separation functions and is denoted by \(\mathbb {W}_{r}(\Pi )\).

Suppose that \(\Pi = Y^{*}\times \Gamma \) is the given set of parameters, and consider the class of functions \(\omega _{1} :Y \times Z \times Y^{*}\times \Gamma \rightarrow \mathbb {R }\) given by

$$\begin{aligned} \omega _{1} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle + {\omega }_0( v , \gamma ), \end{aligned}$$

where \({\omega }_0\) fulfills the following conditions:

$$\begin{aligned} \forall \gamma \in \Gamma ,~~\forall \alpha \in \mathbb {R_{+}},~~\exists {\gamma _{\alpha }}\in \Gamma ~~s.t. ~~ \alpha {\omega }_0(v,\gamma )= {\omega }_0(v,{\gamma _{\alpha }}) ~~\forall v \in Z. \end{aligned}$$
(9)
$$\begin{aligned} {\bigcap }_{\gamma \in \Gamma }\text{ lev }_{\ge 0}~~{\omega }_0(., \gamma ) = D. \end{aligned}$$
(10)

This weak separation function has been discussed by Giannessi in [9]. Note that the above conditions imply that

$$\begin{aligned} \exists \bar{\gamma }\in \Gamma ~~s.t.~~{\omega }_0(.,\bar{\gamma })\equiv 0. \end{aligned}$$
(11)
$$\begin{aligned} \forall v\not \in D,~~\exists \gamma \in \Gamma ~~s.t.~~{\omega }_0(v , \gamma )< 0. \end{aligned}$$
(12)

In the sequel, we consider the following assumptions:

$$\begin{aligned} \inf _{\gamma \in \Gamma }{\omega }_0( v , \gamma )= -\infty , ~~~~\forall v \not \in D. \end{aligned}$$
(13)
$$\begin{aligned} \inf _{\gamma \in \Gamma }{\omega }_0( v , \gamma )=0,~~~~\forall v \in D. \end{aligned}$$
(14)

One can show that (13) and (14) imply (10); see [12]. Moreover, if \(Z=\mathbb {R}^{m},\) then (9) implies (13) and (14); see [9].

In the sequel, by using the oriented distance function we introduce a new nonlinear class of functions.

Definition 3.3

Suppose that \(A \subseteq Y\) and let \(d_{A}(y)=\inf \{\Vert a-y \Vert : a \in A \}\) be the distance function from A. The function \(\triangle _{A}: Y \rightarrow \mathbb {R}\cup \{\pm \infty \}\) defined by

$$\begin{aligned} \triangle _A(y)=d_{A}(y)-d_{Y {\setminus } A}(y) \end{aligned}$$

is called the oriented distance function.

This function was defined in [15] and some of its main properties are gathered in the following result.

Proposition 3.1

[18, 19, 27] If the set A is nonempty and \(A \varsubsetneq Y\) with nonempty interior, then:

  1. (i)

    \(\triangle _{A}\) is a real-valued, 1-Lipschitz function;

  2. (ii)

    \(\triangle _{A}(y)< 0\) for every \(y \in \text{ int }A\), \(\triangle _{A}(y)= 0\) for every \(y \in \partial A\), and \(\triangle _{A}(y)> 0\) for every \(y \in \text{ int }(Y {\setminus } A)\);

  3. (iii)

    If A is closed, then it holds that \(A= \{y : \triangle _{A}(y)\le 0 \};\)

  4. (iv)

    If A is convex, then \(\triangle _{A}\) is convex;

  5. (v)

    If A is a cone, then \(\triangle _{A}\) is positively homogeneous;

  6. (vi)

    If A is a closed convex cone, then \(\triangle _{A}\) is nonincreasing with respect to the ordering relation induced by A on Y.

  7. (vii)

    If A is a convex cone, then \(\triangle _{A}(y) = \sup _{\{\theta \in A^{+} , \parallel \theta \parallel = 1 \}} -\langle \theta , y \rangle ,\) for all \(y \in Y\).
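For \(Y=\mathbb {R}^2\) with the Euclidean norm and \(A=C=\mathbb {R}^2_{+}\), the oriented distance admits the closed form \(\triangle _{C}(y)=-\min (y_1,y_2)\) for \(y\in C\) and \(\triangle _{C}(y)=\Vert (\min (y_1,0),\min (y_2,0))\Vert \) otherwise. The sign pattern in (ii) and the dual formula in the last item can then be probed numerically (a sketch under these assumptions; identifiers are ours):

```python
import math

def delta_C(y):
    # Oriented distance to C = R^2_+ in the Euclidean norm (closed form).
    if y[0] >= 0 and y[1] >= 0:
        return -min(y)                    # minus the distance to the boundary of C
    return math.hypot(min(y[0], 0.0), min(y[1], 0.0))

# (ii): negative inside C, zero on the boundary, positive outside.
assert delta_C((1.0, 2.0)) < 0
assert delta_C((0.0, 2.0)) == 0.0
assert delta_C((-1.0, 2.0)) > 0

def delta_dual(y, n=20000):
    # sup over unit functionals theta in C^+ = R^2_+ of -<theta, y>
    best = -math.inf
    for k in range(n + 1):
        phi = (math.pi / 2) * k / n       # unit vectors of C^+ on the arc [0, pi/2]
        best = max(best, -(math.cos(phi) * y[0] + math.sin(phi) * y[1]))
    return best

for y in [(1.0, 1.0), (-1.0, 2.0), (-1.0, -2.0)]:
    assert abs(delta_C(y) - delta_dual(y)) < 1e-4
```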

Now, using the oriented distance function \(\triangle \), we consider the nonlinear class of functions \(\omega _{2} :Y \times Z \times \Pi \rightarrow \mathbb {R }\) defined by

$$\begin{aligned} \omega _{2}(u ,v ,\pi ):= -\triangle _{C}( u ) + {\omega }_0( v , \pi ). \end{aligned}$$

Remark 3.1

The classes of separation functions \(\omega _{1}\) and \(\omega _{2}\) unify the following known linear and nonlinear separation functions:

  1. (i)

    \(\omega _{3} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle + \langle \gamma , v \rangle ,\)      \((\theta ,\gamma )\in \Pi = (C^{+} \times D^{+}){\setminus } \{(0 , 0)\} ,\)

  2. (ii)

    \(\omega _{4} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle - \triangle _{\mathbb {R}_{+}}(\langle \gamma , v \rangle ),\)      \((\theta ,\gamma )\in \Pi = (C^{+} \times D^{+}){\setminus } \{(0 , 0)\},\)

  3. (iii)

    \(\omega _{5} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle - \gamma d_{D}(v),\)     \((\theta ,\gamma )\in \Pi = (C^{+} \times \mathbb {R}){\setminus } \{(0 , 0)\},\)

  4. (iv)

    \(\omega _{6} (u ,v ,\theta ):=\langle \theta , u \rangle - \delta _{D}(v),\) where \(\theta \in \Pi = C^{+} \) and \(\delta _{D}\) is the indicator function of D;

  5. (v)

    \(\omega _{7}(u ,v ,\gamma ):= -\triangle _{C}( u ) + \langle \gamma , v \rangle , ~~~~~\gamma \in \Pi = D^{+},\)

  6. (vi)

    \(\omega _{8}(u, v):= -\triangle _{C}( u ) - \delta _{D}(v),\)

  7. (vii)

    \(\omega _{9} (u ,v ,\theta ,T ):=\langle \theta , u \rangle - \triangle _{C}( Tv ),\) where \((\theta ,T ) \in \Pi =(C^{+}\times L_{+}(Z, Y)).\)

The linear weak separation function \(\omega _{3}\) has been discussed by many authors. The separation functions \(\omega _{3}\), \(\omega _{4}\), \(\omega _{6}\), \(\omega _{7}\), and \(\omega _{8}\) are weak separation functions, and regular weak separation functions for suitable parameter sets \(\Pi \); see [3, 17, 21].
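As a small numerical probe of properties (6) and (7) for \(\omega _{7}(u,v,\gamma )=-\triangle _{C}(u)+\langle \gamma , v\rangle \), take \(Y=\mathbb {R}^2\), \(Z=\mathbb {R}\), \(C=\mathbb {R}^2_{+}\), \(D=D^{+}=\mathbb {R}_{+}\) (a toy setup of our own; `delta_C` is the closed-form oriented distance to \(\mathbb {R}^2_{+}\)):

```python
import math

def delta_C(u):
    # Oriented distance to C = R^2_+ (Euclidean norm, closed form).
    if u[0] >= 0 and u[1] >= 0:
        return -min(u)
    return math.hypot(min(u[0], 0.0), min(u[1], 0.0))

def w7(u, v, gamma):
    return -delta_C(u) + gamma * v

# (6): omega_7 >= 0 on H = C_0 x D for every gamma in D^+ = R_+.
H_samples = [((0.5, 0.0), 0.0), ((1.0, 2.0), 3.0), ((0.0, 0.1), 1.0)]
assert all(w7(u, v, g) >= 0 for (u, v) in H_samples for g in [0.0, 1.0, 7.5])

# (7): a point outside H is cut out by SOME parameter value.
u, v = (1.0, 1.0), -2.0           # v = -2 is not in D
assert w7(u, v, 0.0) > 0          # not excluded by gamma = 0 ...
assert w7(u, v, 10.0) < 0         # ... but excluded by gamma = 10
```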

Proposition 3.2

  1. (i)

    If \({\omega }_0\) fulfills both conditions (13) and (14), then \(\omega _{1} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times \Gamma \);

  2. (ii)

    If \({\omega }_0\) fulfills both conditions (13) and (14), then \(\omega _{2} \in \mathbb {W}(\Pi )\), where \(\Pi = D^{+}\).

  3. (iii)

    \(\omega _{5} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times \mathbb {R}^{+};\)

  4. (iv)

    \(\omega _{9} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times L_{+}(Z, Y).\)

Proof

  1. (i)

    The proof follows, with minor modifications, from that of Proposition 4.3.3 in [9].

  2. (ii)

    Let \((u ,v) \in \mathcal {H}\). By condition (14) and Proposition 3.1, we have \({\omega }_0( v , \pi )\ge 0\) and \(\triangle _{C}( u )\le 0,\) which implies

    $$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{\ge 0}~~\omega _{2} (., .,\pi ),~~~~~ \forall \pi \in D^{+}. \end{aligned}$$

    We will prove the following inclusion:

    $$\begin{aligned} {\bigcap }_{\pi \in D^{+} }\text{ lev }_{> 0}~~\omega _{2}(., .,\pi )\subseteq \mathcal {H}. \end{aligned}$$

    Suppose, on the contrary, that there exists \((\hat{u } , \hat{v})\not \in \mathcal {H}\) such that

    $$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\pi )> 0 ,~~~~~~~~\forall \pi \in D^{+}. \end{aligned}$$
    (15)

    We consider two cases. Case (i): if \(\hat{u} \not \in C_{0}\) and \(\hat{v} \in Z\), then \(\hat{u} \in \partial C\) or \(\hat{u} \in Y {\setminus } C\), so by Proposition 3.1 we deduce that \(\triangle _{C}(\hat{u})\ge 0.\) By condition (11), there exists \(\hat{\pi }\in D^{+}\) such that \({\omega }_0( \hat{v} , \hat{\pi })= 0.\) So,

    $$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\hat{\pi })\le 0, \end{aligned}$$

    which contradicts (15). Case (ii): if \(\hat{u} \in C_{0}\) and \(\hat{v} \not \in D\), then, since \(\inf _{\gamma }{\omega }_0( \hat{v} , \gamma ) = -\infty \) by condition (13), there exists \(\hat{\pi } \in D^{+}\) such that

    $$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\hat{\pi }) = -\triangle _{C}(\hat{u}) + {\omega }_0( \hat{v} , \hat{\pi })< 0, \end{aligned}$$

    which again contradicts (15).

  3. (iii)

    Since \({\omega }_0( v , \gamma ) = -\gamma d_{D}(v)\) and \(\omega _0\) fulfills both conditions (13) and (14), by part (i) we obtain the result.

  4. (iv)

    Since \(\omega _9\) is linear with respect to u, it is a regular separation function provided \( \theta \in C^{+i}.\) Let \((u ,v) \in \mathcal {H}\). Then \(\langle \theta , u \rangle > 0\) for each \( \theta \in C^{+i}\), and \(\triangle _{C}(Tv)\le 0\) for all \(T\in L_{+}(Z, Y).\) Hence, we have

    $$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{> 0}~~\omega _{9} (., .,\theta , T), \quad \forall (\theta , T)\in C^{+i}\times L_{+}(Z, Y). \end{aligned}$$

    Now we prove the following inclusion:

    $$\begin{aligned} {\bigcap }_{(\theta , T)\in C^{+i} \times L_{+}(Z, Y) }\text{ lev }_{> 0}~~\omega _{9}(., .,\theta ,T)\subseteq \mathcal {H}. \end{aligned}$$

    On the contrary, assume that there exists \((\hat{u } , \hat{v})\not \in \mathcal {H}\) such that

    $$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v}, \theta , T )> 0, \quad \forall \theta \in C^{+i}, ~\forall T \in L_{+}(Z, Y). \end{aligned}$$
    (16)

    We consider the following two cases. Case (i): if \(\hat{u} \not \in C_{0}\) and \(\hat{v} \in Z\), then there exists \(\hat{\theta } \in C^{+i}\) such that \(\langle \hat{\theta }, \hat{u}\rangle \le 0\). Setting \(T = 0 \in L_{+}(Z, Y)\), we obtain

    $$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v} , \hat{\theta }, T)\le 0, \end{aligned}$$

    which contradicts (16). Case (ii): if \(\hat{u} \in C_{0}\) and \(\hat{v} \not \in D\), then there exists \(\hat{\gamma }\in D^{+}\) such that \(\langle \hat{\gamma } , \hat{v} \rangle < 0\). We define the operator \(T_{n}:Z\longrightarrow Y\) by

    $$\begin{aligned} T_{n}(z)= n \langle \hat{\gamma } , z \rangle \hat{e}, \quad \forall z \in Z, \end{aligned}$$

    for some \(\hat{e}\in {\buildrel _{\circ }\over {\mathrm {C}}}\). Clearly, \(T_{n}\in L_{+}(Z, Y)\), and for any fixed \(\hat{\theta }\in C^{+i}\) we have

    $$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v} , \hat{\theta }, T_{n})\le 0, \end{aligned}$$

    for sufficiently large \(n \in \mathbb {N}\), which contradicts (16). Therefore, \(\omega _{9} \in \mathbb {W}_{r}(\Pi )\). \(\square \)
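The operator \(T_{n}\) built in Case (ii) can be made concrete: with \(Z=\mathbb {R}\), \(Y=\mathbb {R}^2\), \(D=\mathbb {R}_{+}\), \(C=\mathbb {R}^2_{+}\), \(\hat{v}=-1\not \in D\), \(\hat{\gamma }=1\in D^{+}\) and \(\hat{e}=(1,1)\in \text{ int }~C\), one gets \(\triangle _{C}(T_{n}\hat{v})=n\sqrt{2}\rightarrow \infty \), which eventually forces \(\omega _{9}\le 0\) (a sketch under these toy assumptions; names are ours):

```python
import math

def delta_C(u):
    # Oriented distance to C = R^2_+ (Euclidean norm, closed form).
    if u[0] >= 0 and u[1] >= 0:
        return -min(u)
    return math.hypot(min(u[0], 0.0), min(u[1], 0.0))

gamma_hat, e_hat = 1.0, (1.0, 1.0)    # gamma_hat in D^+, e_hat in int C

def T(n, z):
    # T_n(z) = n * <gamma_hat, z> * e_hat, a rank-one operator in L_+(Z, Y)
    return (n * gamma_hat * z * e_hat[0], n * gamma_hat * z * e_hat[1])

def w9(u, v, theta, n):
    return theta[0] * u[0] + theta[1] * u[1] - delta_C(T(n, v))

u_hat, v_hat = (1.0, 1.0), -1.0       # u_hat in C_0, v_hat not in D
theta_hat = (1.0, 1.0)                # theta_hat in C^{+i}

assert w9(u_hat, v_hat, theta_hat, 1) > 0     # small n: no contradiction yet
assert w9(u_hat, v_hat, theta_hat, 10) < 0    # large n drives omega_9 below zero
```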

Definition 3.4

Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \text{ gr }~F.\) We say that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _{i}\), \(i = 1,\dots ,9\), iff there exists \(\bar{\pi } \in \Pi \) such that \(\omega _{i}(u ,v , \bar{\pi })\not \equiv 0 \) and

$$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{\ge 0}~~\omega _{i}(., .,\bar{\pi }); \end{aligned}$$
(17)
$$\begin{aligned} \mathcal {K}_{\bar{p}} \subseteq \text{ lev }_{\le 0}~~ \omega _{i}(., ., \bar{\pi }). \end{aligned}$$
(18)

For \(i = 1,3,4,5,6,9\), if \(\bar{\pi }\in {C^{+i}}\times \Gamma ,\) then the separation is said to be regular.

In general, the existence of a nonlinear separation does not guarantee the disjunction of \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\); however, if the separation function \({\omega }_{1}\) is regular, then the inequality in (18) is strict, and we obtain the following nonlinear version of Proposition 4.1 in [6].

Proposition 3.3

Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~F.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a regular nonlinear separation w.r.t. \(\omega _{1},\) then \(\bar{p}\) is a minimizer of Problem (1).

By an argument similar to the proof of Theorem 4.2 in [6], we obtain the following nonlinear version.

Proposition 3.4

Let \(\bar{x} \in R\) and \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~F,\) and let \(\omega _{1}\) be a class of regular nonlinear separation functions satisfying both conditions (13) and (14). If, for some \(\bar{\theta }\in C^{+i},\)

$$\begin{aligned} \inf _{\gamma \in D^+} \sup _{(u , v)\in \mathcal {K}_{\bar{p}}}\omega _{1} (u,v,\bar{\theta },\gamma )\ \le 0, \end{aligned}$$

then \(\bar{p}\) is a minimizer of Problem (1).

Remark 3.2

Similarly to the case of the nonlinear separation \(\omega _1,\) the existence of a nonlinear separation \(\omega _2\) does not guarantee the disjunction of \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\); however, if both conditions (17) and (18) hold for some \({\bar{\pi }} \in \Pi \) and at least one of them is strict, i.e.

$$\begin{aligned} \omega _{2} (u, v, \bar{\pi })< 0,~~~~~~~~\forall (u, v )\in \mathcal {K}_{\bar{p}}; \end{aligned}$$

or

$$\begin{aligned} {\omega }_{2} \in \mathbb {W}_{r}(\Pi ), \end{aligned}$$

then we say that the nonlinear separation \({\omega }_{2}(u, v, \pi ) = -\triangle _{C}(u) + {\omega }_0(v , \pi )\) is regular.

The following result is directly derived from Definition 3.4 and (3).

Proposition 3.5

Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~F.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _{2},\) then \(\bar{p}\) is a weak minimizer of Problem (1).

Remark 3.3

By an argument similar to the proof of Proposition 4.1 in [17], we deduce that the following conditions

$$\begin{aligned} \omega _{1} (u, v,\bar{\theta },\bar{\gamma })\le 0, ~~~~~~~~\forall (u, v )\in \mathcal {K}_{\bar{p}}; \end{aligned}$$
$$\begin{aligned} \omega _{2} (u, v, \bar{\pi })\le 0, ~~~~~~~~\forall (u, v )\in \mathcal {K}_{\bar{p}}; \end{aligned}$$

are equivalent to

$$\begin{aligned} \omega _{1} (u, v,\bar{\theta },\bar{\gamma })\le 0, ~~~~~~~~\forall (u, v )\in \mathcal {E}_{\bar{p}}; \end{aligned}$$
$$\begin{aligned} \omega _{2} (u, v, \bar{\pi })\le 0, ~~~~~~~~\forall (u, v)\in \mathcal {E}_{\bar{p}}; \end{aligned}$$

respectively.

The next result is a nonlinear version of Theorem 4.2 in [6].

Theorem 3.1

Let \(\bar{x} \in R\) and \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~F,\) and let \(\omega _{2}\) be a class of nonlinear separation functions satisfying both conditions (13) and (14). If, for each \(x \in R\) and each \(z \in G(x)\cap (-D),\)

$$\begin{aligned} \inf _{\pi \in D^+} \sup _{\{ y \in F(x): x \in R\}} \omega _{2} (\bar{y}- y , -z ,\pi ) < 0, \end{aligned}$$

then \(\bar{p}\) is a minimizer of Problem (1).

Proof

Suppose, on the contrary, that \(\bar{p}\) is not a minimizer of Problem (1). Then, by (2), \(\mathcal {K}_{\bar{p}}\cap \mathcal {H}\ne \emptyset .\) Therefore, there exist \(\hat{x} \in R,~\hat{y} \in F(\hat{x})\) and \(\hat{z} \in G(\hat{x})\) such that

$$\begin{aligned} (\bar{y}-\hat{y} , -\hat{z} ) \in \mathcal {K}_{\bar{p}}\cap \mathcal {H}. \end{aligned}$$

Hence,

$$\begin{aligned} \sup _{\{ y \in F(x): x \in R\}} \omega _{2}(\bar{y}-y , -\hat{z} , \pi )\ge -\triangle _{C}( \bar{y} - \hat{y}) + {\omega }_0( -\hat{z} , \pi ) . \end{aligned}$$

Since \(\inf _{\pi \in D^{+}}{\omega }_0( -\hat{z} , \pi ) = 0 \) and \(\bar{y} - \hat{y }\in C_{0}\), so that \(-\triangle _{C}( \bar{y} - \hat{y})\ge 0\), we obtain

$$\begin{aligned} \inf _{\pi \in D^{+}} \sup _{\{ y \in F(x): x \in R\}} \omega _{2}(\bar{y}-y ,-\hat{z} ,\pi ) \ge 0, \end{aligned}$$

which is a contradiction. \(\square \)

In order to obtain saddle point conditions for the generalized Lagrangian function associated with Problem (1), we consider the generalized Lagrangian function \(\mathcal {L}_{1}:U \times C^{+}\times \Gamma \rightarrow \mathbb {R }\) defined by

$$\begin{aligned} \mathcal {L}_{1}(x , \theta , \gamma )=\inf _{y \in F(x)}\langle \theta , y \rangle - \sup _{z \in G(x)\cap -D}{\omega }_0( -z , \gamma ), \end{aligned}$$

where F and G are compact-valued. The generalized Lagrangian function \(\mathcal {L}_{1}(x , \theta , \gamma )\) refines the corresponding Lagrangian functions in the literature.
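For single-valued toy data (ours) with \(\omega _0(v,\gamma )=\gamma v\), \(Y=Z=\mathbb {R}\), \(C=D=\mathbb {R}_{+}\), \(F(x)=\{x^2\}\) and \(G(x)=\{x-1\}\), the pair \((\bar{x},\bar{\gamma })=(0,0)\) with \(\bar{\theta }=1\) is a saddle point of \(\mathcal {L}_{1}\), which can be checked on grids (we take the supremum over an empty set \(G(x)\cap (-D)\) to be \(-\infty \)):

```python
import math

def F(x):
    return [x * x]               # compact (finite) values; Y = R, C = R_+
def G(x):
    return [x - 1.0]             # Z = R, D = R_+
def w0(v, gamma):
    return gamma * v

def L1(x, theta, gamma):
    inner = min(theta * y for y in F(x))
    zs = [z for z in G(x) if z <= 0]                       # G(x) ∩ (-D)
    penalty = max((w0(-z, gamma) for z in zs), default=-math.inf)
    return inner - penalty       # equals +inf when x is infeasible

xbar, theta_bar, gamma_bar = 0.0, 1.0, 0.0
grid_x = [i / 10 - 2 for i in range(41)]                   # x in [-2, 2]
grid_g = [j / 2 for j in range(11)]                        # gamma in [0, 5]

# Saddle point: L1(xbar, ., gamma) <= L1(xbar, ., gamma_bar) <= L1(x, ., gamma_bar).
assert all(L1(xbar, theta_bar, g) <= L1(xbar, theta_bar, gamma_bar) for g in grid_g)
assert all(L1(xbar, theta_bar, gamma_bar) <= L1(x, theta_bar, gamma_bar) for x in grid_x)
```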

For obtaining a saddle point of the generalized Lagrangian function in our context, we need the following stronger versions of conditions (13) and (14):

$$\begin{aligned} \inf _{\gamma \in \Gamma }\sup _{v \in D_1}{\omega }_0( v , \gamma )= -\infty . \end{aligned}$$
(19)
$$\begin{aligned} \inf _{\gamma \in \Gamma }\sup _{v\in D_2}{\omega }_0( v , \gamma )=0, \end{aligned}$$
(20)

where \(D_1\) and \(D_2\) are compact subsets of \(Z {\setminus } D\) and D, respectively, and \(\omega _0\) is continuous in its first argument.

Remark 3.4

It is obvious that if the sets \(D_1\) and \(D_2\) are singletons, then the above conditions are equivalent to (13) and (14). Moreover, we note that (19) and (20) hold when \({\omega }_0( v , \gamma )= \langle \gamma , v \rangle \).
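A quick numerical check of the last claim, with \(Z=\mathbb {R}\), \(D=\mathbb {R}_{+}\), \(\Gamma =D^{+}=\mathbb {R}_{+}\) and \(\omega _0(v,\gamma )=\gamma v\) (toy sets of our own choosing):

```python
def w0(v, gamma):
    return gamma * v            # Z = R, D = R_+, Gamma = D^+ = R_+

D1 = [-1.0, -2.0]               # compact subset of Z \ D
D2 = [0.5, 1.0]                 # compact subset of D

def sup1(g):
    return max(w0(v, g) for v in D1)
def sup2(g):
    return max(w0(v, g) for v in D2)

# (19): the sup over D1 decreases without bound as gamma grows.
assert sup1(10.0) == -10.0 and sup1(1000.0) == -1000.0
# (20): the sup over D2 is nonnegative and vanishes at gamma = 0.
assert sup2(0.0) == 0.0 and all(sup2(g) >= 0.0 for g in [0.0, 1.0, 5.0])
```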

The following result shows that the existence of a nonlinear separation between \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) is equivalent to the existence of a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}(x , \theta , \gamma ).\) The proof is similar to the proof of Theorem 4.3 in [6]; therefore, it is omitted.

Theorem 3.2

Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) and \(\omega _{1}\) be a class of nonlinear functions satisfying conditions (19) and (20).

  1. (i)

    If \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}(x , \bar{\theta } , \gamma ),\) i.e.

    $$\begin{aligned} \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \gamma ) \le \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \bar{\gamma } )\le \mathcal {L}_{1}(x , \bar{\theta } , \bar{\gamma } ),~~ \forall x \in U,~ \forall \gamma \in D^{+}, \end{aligned}$$

    for a fixed \(\bar{\theta } \in C^+,\) then \(\bar{x} \in R\) and \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.

  2. (ii)

    Suppose that \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) If there exists \((\bar{\theta } ,\bar{\gamma })\in C^{+}\times D^{+} \) for which \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _1(u, v, \bar{\theta }, \bar{\gamma })\), then \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function, i.e.

    $$\begin{aligned} \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \gamma ) \le \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \bar{\gamma } )\le \mathcal {L}_{1}(x , \bar{\theta } , \bar{\gamma } ),~~ \forall x \in U,~ \forall \gamma \in D^{+}. \end{aligned}$$

Remark 3.5

In Theorem 3.2, if we consider \(\bar{\theta }\in C^{+i}\), then we obtain a similar result for regular nonlinear separation.

The following result is directly derived from Proposition 3.2 and part (i) of Theorem 3.2.

Corollary 3.1

Assume \({\omega }_0\) satisfies both conditions (19) and (20). If \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}({x} , \bar{\theta } , \gamma )\) for some \(\bar{\theta }\in C^{+i},\) then \(\bar{p}\) is a minimizer of Problem (1).

In order to obtain saddle point conditions for the generalized Lagrangian function associated with Problem (1) w.r.t. \(\omega _{2},\) we consider the generalized Lagrangian function \(\mathcal {L}_{2}:U \times \Gamma \rightarrow \mathbb {R }\) defined by

$$\begin{aligned} \mathcal {L}_{2}(x , \pi )=\inf _{y \in F(x)}\triangle _{C}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z , \pi ), \end{aligned}$$

where F and G are compact valued and \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F.\)

The next result shows that the existence of a regular nonlinear separation function \({\omega }_2 (u , v , \pi )\) between \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) is equivalent to the existence of a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x , \pi ).\)

Theorem 3.3

Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) \(F(\bar{x})\subseteq \{\bar{y}\} + C \) and \(\omega _{2}\) be the class of nonlinear functions satisfying both conditions (19) and (20).

  1. (i)

    If \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function, i.e.

    $$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^{+}, \end{aligned}$$

    then \(\bar{x} \in R\), and \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.

  2. (ii)

    Suppose that \(\bar{x} \in R.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation, then \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function, i.e.

    $$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^{+}. \end{aligned}$$

Proof

  1. (i)

    Suppose that \((\bar{x}, \bar{\pi })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x, \pi )\), then

    $$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^+. \end{aligned}$$

    Equivalently, for each \(\pi \in D^{+}\) and each \(x \in U\), we have

    $$\begin{aligned} \inf _{y \in F(x)}\triangle _{C}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z ,\bar{\pi })\ge \end{aligned}$$
    (21)
    $$\begin{aligned} \inf _{y \in F(\bar{x})}\triangle _{C}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \bar{\pi })\ge \end{aligned}$$
    $$\begin{aligned} \inf _{y \in F(\bar{x})}\triangle _{C}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ). \end{aligned}$$

    First, we prove that \(\bar{x}\in R.\) Suppose, on the contrary, that \(\bar{x}\not \in R.\) Then \(G(\bar{x})\cap -D = \emptyset \), and, with the convention that the supremum over the empty set is \(-\infty \), the second expression in (21) gives

    $$\begin{aligned} \inf _{\pi \in D^{+}}\sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi )= -\infty , \end{aligned}$$

    which contradicts the first inequality in (21). Therefore \(\bar{x}\in R.\) On the other hand, since \( y \in \bar{y} + C\) for each \(y \in F(\bar{x})\), we have \(\inf _{y \in F(\bar{x})}\triangle _{C}(\bar{y} -y )= 0\). Now, from inequality (21), we have

    $$\begin{aligned} \inf _{y \in F(x)}\triangle _{C}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z ,\bar{\pi })\ge - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ). \end{aligned}$$

    By using (20), we obtain

    $$\begin{aligned} \sup _{y \in F(x)}(-\triangle _{C}( \bar{y}- y )) + \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\le 0. \end{aligned}$$

    Hence,

    $$\begin{aligned} -\triangle _{C}( \bar{y}- y ) + {\omega }_0( -z , \bar{\pi })\le 0 , \end{aligned}$$

    which shows that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.

  2. (ii)

    Assume that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation. Then, for each \(x \in U,~y \in F(x)\) and \( z \in G(x),\) we have

    $$\begin{aligned} -\triangle _{C}( \bar{y}- y ) + {\omega }_0( -z , \bar{\pi })\le 0, \end{aligned}$$

    or, equivalently,

    $$\begin{aligned} \sup _{y \in F(x)}(-\triangle _{C}( \bar{y}- y )) + \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\le 0, \end{aligned}$$

    thus

    $$\begin{aligned} \inf _{y \in F(x)}\triangle _{C}( \bar{y} - y ) - \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\ge 0. \end{aligned}$$

    In particular, taking \(x = \bar{x}\) and using (20), we obtain

    $$\begin{aligned} \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })= 0, \end{aligned}$$
    (22)

    and

    $$\begin{aligned} \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y )=0, \end{aligned}$$
    (23)

    since \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) On the other hand, by (20), we obtain

    $$\begin{aligned} 0= \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })\le \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ), \end{aligned}$$

    and from (22) and (23), we deduce

    $$\begin{aligned} \inf _{y \in F(x)}\triangle _{\mathcal {C}}( \bar{y} - y ) - \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })&\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })\\&\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \pi ). \end{aligned}$$

    That is,

    $$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^+, \end{aligned}$$

    i.e. \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function.

\(\square \)
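For ease of reference, the displayed inequalities in the proof above identify the generalized Lagrangian used in this section; the following is a sketch read off from those displays (the formal definition appears earlier in the paper):

```latex
\mathcal{L}_{2}(x,\pi)
  \;=\; \inf_{y \in F(x)} \triangle_{\mathcal{C}}(\bar{y}-y)
  \;-\; \sup_{z \in G(x)\cap -D} \omega_{0}(-z,\pi),
  \qquad x \in U,\ \pi \in D^{+}.
```

With this form, the saddle-point condition \(\mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } )\) compares exactly the quantities manipulated in parts (i) and (ii).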

The following result is directly derived from Proposition 3.5 and Theorem 3.3.

Corollary 3.2

Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F \) and \(\omega _{2}\) be the class of nonlinear functions satisfying both conditions (19) and (20). Suppose that \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) If \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x , \pi ),\) then \(\bar{x} \in R\) and \(\bar{p}\) is a weak minimizer of Problem (1).

Remark 3.6

Using the class of regular separation functions \(\omega _{1},\) we can obtain an application to penalty methods for the constrained extremum Problem (1).

Let \(\bar{\theta }\in C^{+i}\) be fixed and let \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\}.\) Consider the following extremum Problem:

$$\begin{aligned} (P_{\bar{\gamma }})\qquad \min _{x \in U}~\mathcal {L}^{\omega }(x, \bar{\gamma }),\qquad \text{ where }~~\mathcal {L}^{\omega }(x, \bar{\gamma }):= \inf _{y \in F(x)}\langle \bar{\theta } , y \rangle + \bar{\gamma }\inf _{z \in G(x)} d_D(-z). \end{aligned}$$

A point \(\bar{x}\in R\) is called a minimum point of Problem \((P_{\bar{\gamma }})\) iff

$$\begin{aligned} \exists \bar{y}\in F(\bar{x}): \langle \bar{\theta }, \bar{y} \rangle + \bar{\gamma }\inf _{z \in G(\bar{x})} d_D(-z)\le \langle \bar{\theta }, y \rangle + \bar{\gamma }\inf _{z \in G({x})} d_D(-z),\quad \forall x \in U,~\forall y \in F(x). \end{aligned}$$

In this case, \((\bar{x}, \bar{y})\) is a minimizer for Problem \((P_{\bar{\gamma }}).\) If there exists \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\}\) such that any solution of Problem \((P_{\bar{\gamma }})\), say \((\bar{x}, \bar{y})\), is a solution of Problem (1), then we say that the function \(\mathcal {L}^{\omega }(x, \bar{\gamma })\) is an exact penalty function of Problem (1) at \(\bar{x}\).
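To make the exactness phenomenon concrete, consider a toy scalar instance of our own construction (it is not taken from the paper): \(U = [-1,1]\), \(F(x) = \{-x\}\), \(G(x) = \{x\}\), \(C = D = \mathbb {R}_{+}\) and \(\bar{\theta } = 1\), so that \(R = [-1,0]\), the constrained minimum is attained at \(\bar{x} = 0\) and \(d_D(-z) = \max (0, z)\). A minimal numerical sketch:

```python
# Toy instance (our own illustration, not from the paper):
# U = [-1, 1], F(x) = {-x}, G(x) = {x}, C = D = R_+, theta_bar = 1.
# Feasible set R = [-1, 0]; the constrained minimum of <theta_bar, y> is at x_bar = 0.
# For D = R_+ the distance term is d_D(-z) = max(0, z).

def penalty_objective(x: float, gamma: float) -> float:
    # L^omega(x, gamma) = inf_{y in F(x)} <theta_bar, y> + gamma * inf_{z in G(x)} d_D(-z)
    return -x + gamma * max(0.0, x)

GRID = [k / 1000.0 - 1.0 for k in range(2001)]  # discretization of U = [-1, 1]

def argmin_penalty(gamma: float) -> float:
    """Minimizer of the penalized problem (P_gamma) over the grid."""
    return min(GRID, key=lambda x: penalty_objective(x, gamma))

print(argmin_penalty(2.0))  # gamma > 1: the penalized minimizer is x_bar = 0
print(argmin_penalty(0.5))  # gamma too small: the minimizer x = 1 is infeasible
```

For this instance every \(\bar{\gamma } > 1\) is an exact penalty parameter, while \(\bar{\gamma } < 1\) lets the penalized minimizer escape the feasible set; this threshold behavior is what statement (ii) of the next theorem quantifies.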

By a minor modification in the proof of Theorem 4.4 in [22], we can obtain the following result for set-valued optimization problems.

Theorem 3.4

Let Z be a reflexive Banach space, \(\bar{x}\in R\), \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) \(F(\bar{x})\subseteq \{\bar{y}\} + C\) and \(\bar{\theta }\in C^{+i}.\) Then, the following statements are equivalent:

  1. (i)

    \(\text{ cl } \text{ cone }~{ \mathcal {E } _{\bar{p}}}\cap {\mathcal {H}_{u}} = \emptyset .\)

  2. (ii)

    there exists \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\},\) such that

    $$\begin{aligned} \sup _{y \in F(x)}\langle \bar{\theta } , \bar{y}-y \rangle \le \bar{\gamma }\inf _{z \in G(x)} d_{D}(-z), ~~\forall x \in U. \end{aligned}$$
  3. (iii)

    there exists \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\},\) such that

    $$\begin{aligned} \omega _1 (u, v,\bar{\theta },\bar{\gamma })\le 0 ,~~~~~~~~\forall (u, v)\in \mathcal {K}_{\bar{p}}, \end{aligned}$$

    where

    $$\begin{aligned} \omega _1 (u, v, \theta , \gamma ) = \langle \theta , u \rangle + {\omega }_0(v , \gamma )= \langle \theta , u \rangle - \gamma d_{D}(v). \end{aligned}$$
  4. (iv)

    \(\mathcal {L}^{\omega }(x, \gamma )\) is an exact penalty function of Problem (1) at \(\bar{x}\).

Proof

Assume that (i) holds and, on the contrary, that (ii) does not. Then for any \(n \in \mathbb {N},\) there exist \(x_{n} \in U\), \(y_{n} \in F(x_{n})\) and \(z_{n} \in G(x_{n})\) such that

$$\begin{aligned} \langle \bar{\theta } , \bar{y} - y_{n} \rangle > n d_{D}(-z_{n}). \end{aligned}$$

Since Z is a reflexive Banach space, the norm is a continuous, convex and coercive function and D is a closed convex set, for any \(n \in \mathbb {N}\) there exists \(v_n \in D\) such that

$$\begin{aligned} d_{D}(-z_{n}) = \parallel -z_n - v_n\parallel . \end{aligned}$$

Let \({\alpha }_n:= \frac{1}{\langle \bar{\theta } , \bar{y} - y_{n} \rangle } > 0\). Then,

$$\begin{aligned} {\alpha }_n \parallel -z_n - v_n \parallel < \frac{1}{n}, \quad \forall n \in \mathbb {N}. \end{aligned}$$

Thus, \(\lim _{n\longrightarrow \infty }\alpha _{n}(-z_{n}- v_{n}) = 0.\) So,

$$\begin{aligned} \lim _{n\longrightarrow \infty }\alpha _{n}(\langle \bar{\theta } , \bar{y} - y_{n} \rangle , -z_{n}- v_{n}) = (1, 0). \end{aligned}$$

Hence,

$$\begin{aligned} \text{ cl } \text{ cone }~{ \mathcal {E } _{\bar{p}}}\cap {\mathcal {H}_{u}} \ne \emptyset , \end{aligned}$$

which contradicts (i).

Now assume that (ii) holds and, on the contrary, suppose that (i) fails. Then there exists \((c, 0)\in \text{ cl } \text{ cone }~{\mathcal {E} _{\bar{p}}}\cap \mathcal {H}_{u}\). Hence, for any \(n \in \mathbb {N}\) there exist \({\alpha }_n > 0\), \(x_{n} \in U\), \(y_{n} \in F(x_{n})\), \(z_{n} \in G(x_{n})\) and \((u_{n}, v_{n})\in \text{ cl }~{\mathcal {H}}\) such that

$$\begin{aligned} \lim _{n \longrightarrow \infty }\alpha _{n}(\bar{y} - y_{n} - u_{n}) = c,~~~~~~~\lim _{n\longrightarrow \infty }\alpha _{n}(-z_{n} - v_{n}) = 0. \end{aligned}$$

Therefore, \(\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle > 0,\) for sufficiently large n, and

$$\begin{aligned} \lim _{n\longrightarrow \infty }\frac{\parallel -z_{n} - v_{n}\parallel }{\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle } = 0. \end{aligned}$$

Then

$$\begin{aligned} \lim _{n\longrightarrow \infty }\frac{d_{D}(-z_{n})}{\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle } = 0, \end{aligned}$$

since

$$\begin{aligned} 0\le \frac{d_{D}(-z_{n})}{\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle }\le \frac{\parallel -z_{n} - v_{n}\parallel }{\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle }. \end{aligned}$$

So, we can deduce that for any \(\gamma > 0,\) there exist \(y_{n} \in F(x_{n})\), \(z_{n} \in G(x_{n})\) and \(u_{n}\in C\) such that

$$\begin{aligned} d_{D}(-z_{n})< \frac{1}{\gamma }\langle \bar{\theta },(\bar{y} - y_{n} - u_{n})\rangle \le \frac{1}{\gamma }\langle \bar{\theta },(\bar{y} - y_{n})\rangle , \end{aligned}$$

for sufficiently large \(n \in \mathbb {N}\), which contradicts (ii).

(ii) is equivalent to (iii), since (ii) is equivalent to

$$\begin{aligned} \exists \bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\}~~~~s.t.~~~~~\langle \bar{\theta } , \bar{y}-y \rangle \le \bar{\gamma }d_{D}(-z), ~~\forall y \in F(x),~~\forall z \in G(x),~~\forall x \in U, \end{aligned}$$

which is equivalent to (iii) by definition of \(\mathcal {K}_{\bar{p}}\).

(ii) is also equivalent to (iv). Indeed, (ii) is equivalent to

$$\begin{aligned} \exists \bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\}~~~~s.t.~~~~~\langle \bar{\theta } , \bar{y} \rangle \le \inf _{y \in F(x)}\langle \bar{\theta } , y \rangle + \bar{\gamma }\inf _{z \in G(x)} d_{D}(-z), ~~\forall x \in U. \end{aligned}$$

Equivalently,

$$\begin{aligned}&\exists \bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\}~~~~s.t.~~~~~\inf _{y \in F(\bar{x})}\langle \bar{\theta } , y \rangle + \bar{\gamma }\inf _{z \in G(\bar{x})} d_{D}(-z)\\&\quad \le \inf _{y \in F(x)}\langle \bar{\theta } , y \rangle + \bar{\gamma }\inf _{z \in G(x)} d_{D}(-z), ~~\forall x \in U, \end{aligned}$$

since \(G(\bar{x})\cap -D \ne \emptyset \) and \(F(\bar{x})\subseteq \{\bar{y}\} + C\). Therefore, \(\mathcal {L}^{\omega }(\bar{x}, \bar{\gamma })\le \mathcal {L}^{\omega }(x, \bar{\gamma })\) for all \(x \in U\), i.e. \((\bar{x}, \bar{y})\) is a minimizer for Problem \((P_{\bar{\gamma }}).\) On the other hand, from (iii), we have

$$\begin{aligned} \omega _1 (u, v,\bar{\theta },\bar{\gamma })\le 0, \quad \forall (u, v)\in \mathcal {K}_{\bar{p}}. \end{aligned}$$

hence

$$\begin{aligned} \inf _{\gamma \in \mathbb {R}_{+}{\setminus } \{ 0\}} \sup _{(u , v)\in \mathcal {K}_{\bar{p}}}\omega _1 (u,v,\bar{\theta }, \gamma )\le 0, \end{aligned}$$

Then \(\bar{p}\) is a minimizer of Problem (1) by Proposition 3.4, and \(\mathcal {L}^{\omega }(x, \gamma )\) is an exact penalty function of Problem (1) at \(\bar{x}\). \(\square \)