Abstract
In this paper, by means of image space analysis, we obtain optimality conditions for vector optimization problems with a multifunction objective and multivalued constraints, based on the disjunction of two suitable subsets of the image space. Using the oriented distance function, a nonlinear regular separation is introduced and some optimality conditions for the constrained extremum problem are obtained. It is shown that the existence of a nonlinear separation is equivalent to a saddle point condition for the generalized Lagrangian function.
1 Introduction
The image space (IS) approach was initiated in [8] and carried on in several other articles; see [9, 13, 14, 25, 26] and the references therein. The (IS) approach has proved to be a fruitful method in many topics of optimization theory (e.g., optimality conditions, existence of solutions, duality, vector variational inequalities and vector equilibrium problems); see [1–7, 13, 14]. Moreover, it has been shown that several theoretical aspects of constrained extremum problems, such as duality, penalty methods, regularity and Lagrangian-type optimality, can be developed in the (IS) framework. In this approach, the disjunction of two suitable disjoint sets, established by a linear or nonlinear separation, implies optimality for the constrained extremum problem.
Besides the direction of nonlinear/nonconvex separation in image spaces, there are other interesting approaches to set-valued optimization based on generalized differentiation and the extremal principle; see the two-volume monograph [23] and [24].
Here, we focus our attention on some nonlinear separation functions for the constrained extremum problem. We extend the nonlinear regular weak separation functions that have been discussed in [11, 16] and [22, 28] to multivalued optimization problems. We also define a new nonlinear (regular) weak separation function based on the oriented distance function \(\triangle \) and derive some optimality conditions. In particular, the relations between saddle points of the generalized Lagrangian functions and optimality for the constrained extremum problem are deduced.
The paper is organized as follows. In Sect. 1, we present some basic concepts and different types of solutions of a vector optimization problem. In Sect. 2, we recall the main concepts of image space analysis and consider some properties of the image problem. Section 3 illustrates the equivalence between the existence of a nonlinear separation function and a saddle point condition for the generalized Lagrangian function.
Let X be a topological vector space and let Y and Z be two normed linear spaces with normed dual spaces \(Y^{*}\) and \(Z^{*},\) respectively, and let \(F:U\rightrightarrows Y\) be a multifunction defined on a nonempty subset U of X with values in Y. The set
$$\begin{aligned} \mathrm {dom}~F := \{x \in U : F(x)\ne \emptyset \} \end{aligned}$$
is called the domain of F, and the set
$$\begin{aligned} \mathrm {gr}~F := \{(x, y)\in U \times Y : y \in F(x)\} \end{aligned}$$
is called the graph of F. Let \(C\subset Y\) and \(D\subset Z\) be pointed, closed and convex cones with nonempty interiors. The space of continuous linear operators from Z to Y is denoted by L(Z, Y) and
$$\begin{aligned} L_{+}(Z, Y) := \{T \in L(Z, Y) : T(D)\subseteq C\}. \end{aligned}$$
The positive dual cone of C is defined by
$$\begin{aligned} C^{+} := \{\theta \in Y^{*} : \langle \theta , y\rangle \ge 0, \quad \forall y \in C\}, \end{aligned}$$
and the set of all strictly positive linear functionals in \(C^{+}\) is
$$\begin{aligned} C^{+i} := \{\theta \in Y^{*} : \langle \theta , y\rangle > 0, \quad \forall y \in C{\setminus }\{0\}\}. \end{aligned}$$
Note that if C is a convex cone in Y, then \(\text{ int }~C^+ \subseteq C^{+i}\), and equality holds whenever \(\text{ int }~C^{+}\ne \emptyset \). A partial order \(\le _{C}\) on Y is defined by
$$\begin{aligned} y_{1} \le _{C} y_{2} \quad \Longleftrightarrow \quad y_{2} - y_{1} \in C. \end{aligned}$$
For simplicity, throughout this article, we write \( {\buildrel _{\circ }\over {\mathrm {C}}}\) \(: = \mathrm {int}~C\) and \(C_0: = C {\setminus } \{0\}.\)
Definition 1.1
Let U be a convex subset of X. A multifunction \(F:U \rightrightarrows Y\) is said to be a C-multifunction on U iff for all \(x_{1}, x_{2} \in U\) and \(t \in [0,1]\), we have
$$\begin{aligned} t F(x_{1}) + (1-t) F(x_{2}) \subseteq F(t x_{1} + (1-t) x_{2}) + C. \end{aligned}$$
In the sequel, we suppose that \(F:U\rightrightarrows Y\) is a multifunction defined on a nonempty convex subset U of X with values in Y.
Definition 1.2
Let \(F:U\rightrightarrows Y\) and \(G:U\rightrightarrows Z\) be two multifunctions with nonempty values. We consider the following vector optimization problem:
$$\begin{aligned} \min _{C}~F(x), \quad \text{ subject } \text{ to } \quad x \in R := \{x \in U : G(x)\cap (-D)\ne \emptyset \}, \end{aligned}$$(1)
where R is called the feasible region of Problem (1), which we assume to be nonempty.
Definition 1.3
A point \(\bar{x}\in R\) is called a minimum point of Problem (1) iff there exists \(\bar{y}\in F(\bar{x})\) such that
$$\begin{aligned} F(R)\cap (\bar{y} - C_0) = \emptyset , \quad \text{ where } \quad F(R) := \bigcup _{x \in R} F(x). \end{aligned}$$
In this case we say that \((\bar{x}, \bar{y})\) is a minimizer for Problem (1). A point \(\bar{x}\in R\) is called a weak minimum point of Problem (1) iff there exists \(\bar{y}\in F(\bar{x})\) such that
$$\begin{aligned} F(R)\cap (\bar{y} - {\buildrel _{\circ }\over {\mathrm {C}}}) = \emptyset . \end{aligned}$$
In this case we say that \((\bar{x}, \bar{y})\) is a weak minimizer for Problem (1).
The following result presents a necessary and sufficient condition for a vector to be a minimum point or a weak minimum point of Problem (1).
Lemma 1.1
[20] Let \(\bar{x}\in R\) and \((\bar{x}, \bar{y})\in \mathrm {gr}~F\). Then
(i) \((\bar{x}, \bar{y})\) is a minimizer of Problem (1) iff
$$\begin{aligned} (\bar{y} - C_0, -D)\cap (F(x),G(x))=\emptyset \quad \forall x\in U. \end{aligned}$$
(ii) \((\bar{x}, \bar{y})\) is a weak minimizer of Problem (1) iff
$$\begin{aligned} (\bar{y} -{\buildrel _{\circ } \over {\mathrm {C}}} , -D)\cap (F(x),G(x)) =\emptyset \quad \forall x\in U. \end{aligned}$$
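To make the characterization of Lemma 1.1 concrete, the following is a minimal numerical sketch. All data here are hypothetical and ours, not the paper's: \(X=\mathbb {R}\), \(Y=\mathbb {R}^{2}\), \(C=\mathbb {R}^{2}_{+}\), \(Z=\mathbb {R}\), \(D=\mathbb {R}_{+}\), and a three-point set U with finite-valued F and G, so the emptiness condition in (i) can be checked by brute force.

```python
import itertools

# Hypothetical toy instance: U = {0, 1, 2}, F(x) and G(x) finite sets.
F = {0: [(1.0, 2.0), (2.0, 1.0)], 1: [(0.0, 3.0)], 2: [(3.0, 3.0)]}
G = {0: [-1.0], 1: [1.0], 2: [0.0]}
U = [0, 1, 2]

def in_C0(u):
    """u in C_0 = C \\ {0} for C = R^2_+: componentwise >= 0 and u != 0."""
    return all(c >= 0 for c in u) and any(c > 0 for c in u)

# Feasible region R = {x in U : G(x) ∩ (-D) ≠ ∅}, i.e. some z <= 0.
R = [x for x in U if any(z <= 0 for z in G[x])]

def is_minimizer(x_bar, y_bar):
    """Lemma 1.1(i): (y_bar - C_0, -D) ∩ (F(x), G(x)) = ∅ for all x in U."""
    for x in U:
        for y, z in itertools.product(F[x], G[x]):
            u = (y_bar[0] - y[0], y_bar[1] - y[1])
            if in_C0(u) and z <= 0:  # y in y_bar - C_0 and z in -D
                return False
    return True

print(R)                            # [0, 2]
print(is_minimizer(0, (1.0, 2.0)))  # True: no feasible value dominates (1, 2)
print(is_minimizer(2, (3.0, 3.0)))  # False: (1, 2) at x = 0 dominates (3, 3)
```

The check enumerates all pairs \((y, z)\), which is only possible because the toy multifunctions are finite-valued; in the general setting of the paper the condition is an abstract disjunction, not an algorithm.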
2 Image space analysis
In this section, we develop the image space analysis for vector optimization with multifunction constraints and multifunction objectives. Let \(\bar{x}\in R\) and \(\bar{p}: =(\bar{x}, \bar{y})\in \mathrm {gr}~ F\). We introduce the multifunction \(A_{\bar{p}}:U \rightrightarrows Y\times Z, \) defined by
$$\begin{aligned} A_{\bar{p}}(x) := (\bar{y} - F(x))\times (-G(x)), \quad x \in U, \end{aligned}$$
and we associate the following sets to \(\bar{p}\in \mathrm {gr}~ F\):
$$\begin{aligned} \mathcal {K}_{\bar{p}} := \bigcup _{x\in U} A_{\bar{p}}(x), \qquad \mathcal {H} := C_0 \times D. \end{aligned}$$
The set \(\mathcal {K}_{\bar{p}}\) is called the image space associated with Problem (1). By Lemma 1.1, \(\bar{p} =(\bar{x}, \bar{y})\) is a minimizer of Problem (1) iff
$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H} = \emptyset , \end{aligned}$$(2)
and \(\bar{p}= (\bar{x}, \bar{y})\) is a weak minimizer of Problem (1) iff
$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H}_{ic} = \emptyset , \end{aligned}$$(3)
where \(\mathcal {H}_{ic}={\buildrel _{\circ } \over {\mathrm {C}}} \times D.\)
Remark 2.1
In general, the image space \(\mathcal {K}_{\bar{p}}\) is not convex, even when F and G are a C-multifunction and a D-multifunction on the convex set U, respectively. To overcome this defect, we introduce the extended image space \(\mathcal {E}_{\bar{p}} := \mathcal {K}_{\bar{p}}-\text{ cl }~ {\mathcal {H}}\) of \(\mathcal {K}_{\bar{p}}\) with respect to the cone \(\text{ cl }~ {\mathcal {H}}\). In fact, by imposing suitable convexity assumptions on F and G, we obtain the convexity of the extended image space.
Lemma 2.1
[6] Let \(F:U \rightrightarrows Y\) and \(G:U \rightrightarrows Z\) be C-multifunction and D-multifunction on the convex set U, respectively. Then the extended image \(\mathcal {E}_{\bar{p}}=\mathcal {K}_{\bar{p}}-\text{ cl }~ {\mathcal {H}}\) is convex and
$$\begin{aligned} \mathcal {K}_{\bar{p}}\cap \mathcal {H} = \emptyset \quad \Longleftrightarrow \quad \mathcal {E}_{\bar{p}}\cap \mathcal {H} = \emptyset . \end{aligned}$$
Corollary 2.1
Let \(\bar{x}\in R.\) Then \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~ F\) is a minimizer of Problem (1) iff
$$\begin{aligned} \mathcal {E}_{\bar{p}}\cap \mathcal {H} = \emptyset . \end{aligned}$$(4)
Remark 2.2
Let \(\mathcal {H}_{0}\) be a subset of \(\mathcal {H}\), defined by \(\mathcal {H}_{0}= C_0 \times \{0_{z}\}.\) Then, by a similar argument as that of the proof of Proposition 2.1 in [12], we can deduce that (4) is equivalent to
$$\begin{aligned} \mathcal {E}_{\bar{p}}\cap \mathcal {H}_{0} = \emptyset . \end{aligned}$$
3 Nonlinear separation functions
Separation functions play an important role in optimality conditions for constrained optimization. In order to prove the disjunction of the two sets \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H},\) we will show that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) lie in two disjoint level sets of a linear or nonlinear separation function.
Definition 3.1
Let \(\Pi \) be a set of parameters and \(\mathcal {H} = C_0 \times D\). The class of all the functions \(\omega :Y\times Z\times \Pi \longrightarrow \mathbb {R},\) such that
$$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{\ge 0}~~\omega (.,.,{\pi }), \quad \forall \pi \in \Pi , \end{aligned}$$
and
$$\begin{aligned} {\bigcap }_{\pi \in \Pi }\text{ lev }_{> 0}~~\omega (.,.,{\pi })\subseteq \mathcal {H}, \end{aligned}$$
is called the class of weak separation functions and is denoted by \(\mathbb {W}(\Pi ),\) in which \(\text{ lev }_{> 0}~~ \omega (.,.,{\pi }):=\{(u,v)\in Y\times Z : \omega (u,v,{\pi })> 0 \}\) denotes the level set of \(\omega (.,., {\pi }).\)
Definition 3.2
The class of all the functions \(\omega :Y\times Z\times \Pi \longrightarrow \mathbb {R}\), such that
$$\begin{aligned} {\bigcap }_{\pi \in \Pi }\text{ lev }_{> 0}~~\omega (.,.,{\pi }) = \mathcal {H}, \end{aligned}$$
is called the class of regular weak separation functions and is denoted by \(\mathbb {W}_{r}(\Pi )\).
Suppose that \(\Pi = Y^{*}\times \Gamma \) is the given set of parameters and the class of functions \(\omega _{1} :Y \times Z \times Y^{*}\times \Gamma \mapsto \mathbb {R }\) is given by
$$\begin{aligned} \omega _{1} (u, v, \theta , \gamma ) := \langle \theta , u \rangle + {\omega }_0(v , \gamma ), \end{aligned}$$
where \({\omega }_0\) fulfills the following conditions:
The above weak separation function has been discussed by Giannessi in [9]. Note that the above conditions imply that
In the sequel, we consider the following assumptions:
One can show that (13) and (14) imply (10); see [12]. Moreover, if \(Z=\mathbb {R}^{m},\) then (9) implies (13) and (14); see [9].
In the sequel, by using the oriented distance function we introduce a new nonlinear class of functions.
Definition 3.3
Suppose that \(A \subseteq Y\) and \(d_{A}(y)=\inf \{\Vert a-y \Vert : a \in A \}\) is the distance function from A. The function \(\triangle _{A}: Y \rightarrow \mathbb {R}\cup \{\pm \infty \}\) defined by
$$\begin{aligned} \triangle _{A}(y) := d_{A}(y) - d_{Y{\setminus } A}(y) \end{aligned}$$
is called the oriented distance function.
This function was defined in [15] and some of its main properties are gathered in the following result.
Proposition 3.1
[18, 19, 27] If the set A is nonempty and \(A \varsubsetneq Y\) with nonempty interior, then:
(i) \(\triangle _{A}\) is a real-valued and 1-Lipschitzian function;
(ii) \(\triangle _{A}(y)< 0\) for every \(y \in \text{ int }~A,\) \(\triangle _{A}(y)= 0\) for every \(y \in \partial A,\) and \(\triangle _{A}(y)> 0\) for every \(y \in \text{ int }(Y {\setminus } A);\)
(iii) if A is closed, then \(A= \{y : \triangle _{A}(y)\le 0 \};\)
(iv) if A is convex, then \(\triangle _{A}\) is convex;
(v) if A is a cone, then \(\triangle _{A}\) is positively homogeneous;
(vi) if A is a closed convex cone, then \(\triangle _{A}\) is nonincreasing with respect to the ordering relation induced by A on Y;
(vii) if A is a convex cone, then \(\triangle _{A}(y) = \sup _{\{\theta \in A^{+} , \Vert \theta \Vert = 1 \}} -\langle \theta , y \rangle ,\) for all \(y \in Y\).
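The sign behaviour and the dual supremum representation of \(\triangle \) can be checked numerically in a finite-dimensional setting. The sketch below assumes \(Y=\mathbb {R}^{2}\) with the Euclidean norm and \(A = C = \mathbb {R}^{2}_{+}\) (our choice of example, not the paper's); the closed-form expression for \(\triangle _{C}\) and the grid approximation of the dual formula are both ours.

```python
import numpy as np

def delta_orthant(y):
    """Closed form of the oriented distance to C = R^2_+.

    If y lies in C, Delta_C(y) = -dist(y, boundary of C) = -min_i y_i;
    otherwise Delta_C(y) = dist(y, C) = norm of the negative part of y."""
    y = np.asarray(y, dtype=float)
    if np.all(y >= 0):
        return -y.min()
    return np.linalg.norm(np.minimum(y, 0.0))

def delta_dual(y, n_grid=100000):
    """Approximate sup over unit theta in C^+ = R^2_+ of -<theta, y>
    by sampling unit vectors theta = (cos phi, sin phi), phi in [0, pi/2]."""
    phi = np.linspace(0.0, np.pi / 2, n_grid)
    thetas = np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return np.max(-(thetas @ np.asarray(y, dtype=float)))

# Dual formula agrees with the closed form on interior, boundary and
# exterior points (up to grid resolution):
for y in [(1.0, 2.0), (-1.0, 2.0), (-3.0, -4.0), (0.0, 1.0)]:
    assert abs(delta_orthant(y) - delta_dual(y)) < 1e-4

print(delta_orthant((1.0, 2.0)))    # -1.0 (interior point: negative value)
print(delta_orthant((0.0, 1.0)))    #  0.0 (boundary point)
print(delta_orthant((-3.0, -4.0)))  #  5.0 (outside C: = distance to C)
```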
Now, by the oriented distance function \(\triangle ,\) we consider the nonlinear class of functions \(\omega _{2} :Y \times Z \times \Pi \mapsto \mathbb {R }\) defined by
$$\begin{aligned} \omega _{2} (u, v, \pi ) := -\triangle _{C}( u ) + {\omega }_0(v , \pi ). \end{aligned}$$
Remark 3.1
The classes of separation functions \(\omega _{1}\) and \(\omega _{2}\) unify the following known linear or nonlinear separation functions:
(i) \(\omega _{3} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle + \langle \gamma , v \rangle ,\) \((\theta ,\gamma )\in \Pi = (C^{+} \times D^{+}){\setminus } \{(0 , 0)\};\)
(ii) \(\omega _{4} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle - \triangle _{\mathbb {R}_{+}}(\langle \gamma , v \rangle ),\) \((\theta ,\gamma )\in \Pi = (C^{+} \times D^{+}){\setminus } \{(0 , 0)\};\)
(iii) \(\omega _{5} (u ,v ,\theta ,\gamma ):=\langle \theta , u \rangle - \gamma d_{D}(v),\) \((\theta ,\gamma )\in \Pi = (C^{+} \times \mathbb {R}){\setminus } \{(0 , 0)\};\)
(iv) \(\omega _{6} (u ,v ,\theta ):=\langle \theta , u \rangle - \delta _{D}(v),\) where \(\theta \in \Pi = C^{+}\) and \(\delta _{D}\) is the indicator function of D;
(v) \(\omega _{7}(u ,v ,\gamma ):= -\triangle _{C}( u ) + \langle \gamma , v \rangle ,\) \(\gamma \in \Pi = D^{+};\)
(vi) \(\omega _{8}(u, v):= -\triangle _{C}( u ) - \delta _{D}(v);\)
(vii) \(\omega _{9} (u ,v ,\theta ,T ):=\langle \theta , u \rangle - \triangle _{C}( Tv ),\) where \((\theta ,T ) \in \Pi =C^{+}\times L_{+}(Z, Y).\)
The linear weak separation function \(\omega _{3}\) has been discussed by many authors. The separation functions \(\omega _{3}\), \(\omega _{4}\), \(\omega _{6}\), \(\omega _{7}\) and \(\omega _{8}\) are weak separation functions and regular weak separation functions for suitable parameter sets \(\Pi \); see [3, 17, 21].
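As a quick numerical sanity check of the separation property, the sketch below evaluates \(\omega _{5}\) in the setting \(Y=\mathbb {R}^{2}\), \(C=\mathbb {R}^{2}_{+}\), \(Z=\mathbb {R}\), \(D=\mathbb {R}_{+}\) (our hypothetical example; the particular \(\theta \) and \(\gamma \) are arbitrary choices with \(\theta \in C^{+i}\), \(\gamma \ge 0\)). On sampled points of \(\mathcal {H} = C_0 \times D\) the value is strictly positive, while points outside \(\mathcal {H}\) can fall into the nonpositive level set.

```python
import random

def d_D(v):
    """Distance from v to D = R_+."""
    return max(-v, 0.0)

def omega5(u, v, theta, gamma):
    """omega_5(u, v, theta, gamma) = <theta, u> - gamma * d_D(v)."""
    return theta[0] * u[0] + theta[1] * u[1] - gamma * d_D(v)

theta = (1.0, 1.0)  # strictly positive on C_0, i.e. theta in C^{+i}
gamma = 2.0

random.seed(0)
# Sample points of H = C_0 x D: u >= 0 componentwise with u != 0, v >= 0.
for _ in range(1000):
    u = (random.uniform(0.0, 5.0), random.uniform(0.01, 5.0))
    v = random.uniform(0.0, 5.0)
    assert omega5(u, v, theta, gamma) > 0.0  # H lies in lev_{>0} omega_5

# A point outside H (u not in C_0, v not in D) can get a negative value:
print(omega5((-1.0, 0.5), -2.0, theta, gamma))  # -4.5
```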
Proposition 3.2
(i) If \({\omega }_0\) fulfills both conditions (13) and (14), then \(\omega _{1} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times \Gamma \);
(ii) if \({\omega }_0\) fulfills both conditions (13) and (14), then \(\omega _{2} \in \mathbb {W}(\Pi )\), where \(\Pi = D^{+}\);
(iii) \(\omega _{5} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times \mathbb {R}^{+}\);
(iv) \(\omega _{9} \in \mathbb {W}_{r}(\Pi )\), where \(\Pi = C^{+i}\times L_{+}(Z, Y)\).
Proof
(i) With minor modifications in the proof of Proposition 4.3.3 in [9], we can deduce the proof.
(ii) Let \((u ,v) \in \mathcal {H}\). By condition (14) and Proposition 3.1, we have \({\omega }_0( v , \pi )\ge 0\) and \(\triangle _{C}( u )\le 0,\) which implies
$$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{\ge 0}~~\omega _{2} (., .,\pi ),~~~~~ \forall \pi \in D^{+}. \end{aligned}$$
We will prove the following inclusion:
$$\begin{aligned} {\bigcap }_{\pi \in D^{+} }\text{ lev }_{> 0}~~\omega _{2}(., .,\pi )\subseteq \mathcal {H}. \end{aligned}$$
On the contrary, assume that there exists \((\hat{u } , \hat{v})\not \in \mathcal {H},\) such that
$$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\pi )> 0 ,~~~~~~~~\forall \pi \in D^{+}. \end{aligned}$$(15)
We consider two cases.
Case (i) If \(\hat{u} \not \in C_{0}\) and \(\hat{v} \in Z\), then \(\hat{u} \in \partial C\) or \(\hat{u} \in Y {\setminus } C;\) by Proposition 3.1, we deduce that \(\triangle _{C}(\hat{u})\ge 0.\) From condition (11), there exists \(\hat{\pi }\in D^{+},\) such that \({\omega }_0( \hat{v} , \hat{\pi })= 0.\) So,
$$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\hat{\pi })\le 0, \end{aligned}$$
which contradicts (15).
Case (ii) If \(\hat{u} \in C_{0}\) and \(\hat{v} \not \in D\), then from condition (10), there exists \(\hat{\pi } \in D^{+},\) such that
$$\begin{aligned} \omega _{2}(\hat{u} ,\hat{v} ,\hat{\pi }) = -\triangle _{C}(\hat{u}) + {\omega }_0( \hat{v} , \hat{\pi })< 0, \end{aligned}$$
which again contradicts (15).
(iii) Since \({\omega }_0( v , \gamma ) = -\gamma d_{D}(v)\) fulfills both conditions (13) and (14), by part (i) we obtain the result.
(iv) Since \(\omega _9\) is linear with respect to u, it is a regular separation function provided \(\theta \in C^{+i}.\) Let \((u ,v) \in \mathcal {H}\). Then \(\langle \theta , u \rangle > 0\) for each \(\theta \in C^{+i}\), and \(\triangle _{C}(Tv)\le 0\) for all \(T\in L_{+}(Z, Y).\) Hence, we have
$$\begin{aligned} \mathcal {H} \subseteq \text{ lev }_{> 0}~~\omega _{9} (., .,\theta , T). \end{aligned}$$
Now we prove the following inclusion:
$$\begin{aligned} {\bigcap }_{(\theta , T)\in C^{+i} \times L_{+}(Z, Y) }\text{ lev }_{> 0}~~\omega _{9}(., .,\theta ,T)\subseteq \mathcal {H}. \end{aligned}$$
On the contrary, assume that there exists \((\hat{u } , \hat{v})\not \in \mathcal {H}\) such that
$$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v}, \theta , T )> 0 \quad \forall \theta \in C^{+i}, \quad \forall T \in L_{+}(Z, Y). \end{aligned}$$(16)
We consider the following two cases.
Case (i) If \(\hat{u} \not \in C_{0}\) and \(\hat{v} \in Z\), then there exists \(\hat{\theta } \in C^{+i}\) such that \(\langle \hat{\theta }, \hat{u}\rangle \le 0\). If we set \(T = 0 \in L_{+}(Z, Y)\), then
$$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v} , \hat{\theta }, T)\le 0, \end{aligned}$$
which contradicts (16).
Case (ii) If \(\hat{u} \in C_{0}\) and \(\hat{v} \not \in D\), then there exists \(\hat{\gamma }\in D^{+},\) such that \(\langle \hat{\gamma } , \hat{v} \rangle < 0\). We define the operator \(T_{n}:Z\longrightarrow Y\) by
$$\begin{aligned} T_{n}(z)= n \langle \hat{\gamma } , z \rangle \hat{e}, \quad \forall z \in Z, \end{aligned}$$
for some \(\hat{e}\in {\buildrel _{\circ }\over {\mathrm {C}}}\). Clearly, \(T_{n}\in L_{+}(Z, Y)\) and, for any fixed \(\hat{\theta }\in C^{+i}\),
$$\begin{aligned} \omega _{9}(\hat{u} ,\hat{v} , \hat{\theta }, T_{n})\le 0, \end{aligned}$$
for sufficiently large \(n \in \mathbb {N}\), which contradicts (16). Therefore, \(\omega _{9} \in \mathbb {W}_{r}(\Pi )\). \(\square \)
Definition 3.4
Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \text{ gr }~F.\) Then we say that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _{i}\), for \(i = 1,2,\ldots ,9\), iff there exists \(\bar{\pi } \in \Pi , \) such that \(\omega _{i}(u ,v , \bar{\pi })\not \equiv 0 \) and
$$\begin{aligned} \omega _{i}(u ,v , \bar{\pi })\ge 0, \quad \forall (u, v)\in \mathcal {H}, \end{aligned}$$(17)
$$\begin{aligned} \omega _{i}(u ,v , \bar{\pi })\le 0, \quad \forall (u, v)\in \mathcal {K}_{\bar{p}}. \end{aligned}$$(18)
For \(i = 1,3,4,5,6,9\), if \(\bar{\pi }\in {C^{+i}}\times \Gamma ,\) then the separation is said to be regular.
In general, the existence of a nonlinear separation does not guarantee the disjunction of \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H};\) whereas, if the separation function \({\omega }_{1}\) is regular, then the strict inequality in (18) holds and we obtain a nonlinear version of Proposition 4.1 in [6] as follows.
Proposition 3.3
Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~F.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a regular nonlinear separation w.r.t. \(\omega _1{},\) then \(\bar{p}\) is a minimizer of Problem (1).
By a similar argument, as that of the proof of Theorem 4.2 in [6], we obtain its nonlinear version.
Proposition 3.4
Let \(\bar{x} \in R\), \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~F,\) and let \(\omega _{1}\) be a class of regular nonlinear separation functions satisfying both conditions (13) and (14). If there exists \((\bar{\theta }, \bar{\gamma }) \in C^{+i}\times \Gamma \) such that
$$\begin{aligned} \omega _{1}(u ,v , \bar{\theta }, \bar{\gamma })\le 0, \quad \forall (u, v)\in \mathcal {K}_{\bar{p}}, \end{aligned}$$
then \(\bar{p}\) is a minimizer of Problem (1).
Remark 3.2
Similar to the case of the nonlinear separation \(\omega _1,\) the existence of a nonlinear separation \(\omega _2\) does not guarantee the disjunction of \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H};\) whereas, if both conditions (17) and (18) hold for some \({\bar{\pi }} \in \Pi \), and at least one of them is strict, i.e.
$$\begin{aligned} \omega _{2}(u ,v , \bar{\pi })> 0, \quad \forall (u, v)\in \mathcal {H}, \end{aligned}$$
or
$$\begin{aligned} \omega _{2}(u ,v , \bar{\pi })< 0, \quad \forall (u, v)\in \mathcal {K}_{\bar{p}}, \end{aligned}$$
then we say that the nonlinear separation \({\omega }_{2}(u, v, \pi ) = -\triangle _{{C}}(u) + {\omega }_0(v , \pi )\) is regular.
The following result is directly derived from Definition 3.4 and (3).
Proposition 3.5
Let \(\bar{x}\in R\) and \(\bar{p} =(\bar{x}, \bar{y})\in \mathrm {gr}~F.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _{2},\) then \(\bar{p}\) is a weak minimizer of Problem (1).
Remark 3.3
By a similar argument, as that of the proof of Proposition 4.1 in [17], we deduce that the following conditions
are equivalent to
respectively.
The next result is a nonlinear version of Theorem 4.2 in [6].
Theorem 3.1
Let \(\bar{x} \in R\), \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~F.\) Let \(\omega _{2}\) be a class of nonlinear separation functions satisfying both conditions (13) and (14). If for each \(z \in G(x)\cap (-D),\)
then \(\bar{p}\) is a minimizer of Problem (1).
Proof
Suppose, on the contrary, that \(\bar{p}\) is not a minimizer of Problem (1); then by (2), \(\mathcal {K}_{\bar{p}}\cap \mathcal {H}\ne \emptyset .\) Therefore, there exist \(\hat{x} \in R,~\hat{y} \in F(\hat{x})\) and \(\hat{z} \in G(\hat{x}),\) such that
Hence,
Since \(\inf _{\pi \in D^{+}}{\omega }_0( -\hat{z} , \pi ) = 0 \) and \((\bar{y} - \hat{y })\in C_{0},\) then
which is a contradiction. \(\square \)
In order to obtain saddle point conditions for the generalized Lagrangian function associated with Problem (1), we consider the generalized Lagrangian function
\(\mathcal {L}_{1}:U \times C^{+}\times \Gamma \mapsto \mathbb {R }\) defined by
where F and G are compact valued. The generalized Lagrangian function \(\mathcal {L}_{1}(x , \theta , \gamma )\) refines the ones in the literature.
For obtaining a saddle point of the generalized Lagrangian function in our context, we need the following stronger versions of conditions (13) and (14):
where \(D_1\) and \(D_2\) are compact subsets of \(Z {\setminus } D\) and D, respectively, and \(\omega _0\) is continuous in its first argument.
Remark 3.4
It is obvious that if the two sets \(D_1\) and \(D_2\) are singletons, then the above conditions are equivalent to (13) and (14). Moreover, we note that (19) and (20) hold when \({\omega }_0( v , \gamma )= \langle \gamma , v \rangle \).
The following result shows that the existence of a nonlinear separation between \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) is equivalent to the existence of a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}(x , \theta , \gamma ).\) The proof is similar to the proof of Theorem 4.3 in [6]; therefore, it is omitted.
Theorem 3.2
Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) and \(\omega _{1}\) be a class of nonlinear functions satisfying conditions (19) and (20).
(i) If \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}(x , \bar{\theta } , \gamma )\) for a fixed \(\bar{\theta } \in C^+,\) i.e.
$$\begin{aligned} \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \gamma ) \le \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \bar{\gamma } )\le \mathcal {L}_{1}(x , \bar{\theta } , \bar{\gamma } ),~~ \forall x \in U,~ \forall \gamma \in D^{+}, \end{aligned}$$
then \(\bar{x} \in R\), and \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.
(ii) Suppose that \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) If there exists \((\bar{\theta } ,\bar{\gamma })\in C^{+}\times D^{+} \) for which \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation w.r.t. \(\omega _1(u, v, \bar{\theta }, \bar{\gamma }),\) then \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function, i.e.
$$\begin{aligned} \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \gamma ) \le \mathcal {L}_{1}(\bar{x} , \bar{\theta } , \bar{\gamma } )\le \mathcal {L}_{1}(x , \bar{\theta } , \bar{\gamma } ),~~ \forall x \in U,~ \forall \gamma \in D^{+}. \end{aligned}$$
Remark 3.5
In Theorem 3.2, if we consider \(\bar{\theta }\in C^{+i}\), then we obtain a similar result for regular nonlinear separation.
The following result is directly derived from Proposition 3.2 and part (i) of Theorem 3.2.
Corollary 3.1
Assume \({\omega }_0\) satisfies both conditions (19) and (20). If \((\bar{x} , \bar{\gamma })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{1}({x} , \bar{\theta } , \gamma )\) for some \(\bar{\theta }\in C^{+i},\) then \(\bar{p}\) is a minimizer of Problem (1).
In order to obtain saddle point conditions for the generalized Lagrangian function associated with Problem (1) w.r.t. \(\omega _{2},\) we consider the generalized Lagrangian function \(\mathcal {L}_{2}:U \times \Gamma \mapsto \mathbb {R }\) defined by
$$\begin{aligned} \mathcal {L}_{2}(x , \pi ) := \inf _{y \in F(x)}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z , \pi ), \end{aligned}$$
where F and G are compact valued and \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F.\)
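The saddle point condition for \(\mathcal {L}_{2}\) can be inspected on a small instance. The sketch below uses hypothetical data of ours: \(Y=\mathbb {R}^{2}\), \(C=\mathbb {R}^{2}_{+}\), \(Z=\mathbb {R}\), \(D=\mathbb {R}_{+}\), \({\omega }_0(v,\pi )=\pi v\) with \(\pi \in D^{+}=\mathbb {R}_{+}\), a three-point set U, and \(\bar{x}=0\), \(\bar{y}=(1,2)\) chosen so that \(F(\bar{x})\subseteq \{\bar{y}\}+C\); the closed form of \(\triangle _{C}\) is the one for the nonnegative orthant.

```python
import math

# Hypothetical toy instance (our data, not the paper's).
F = {0: [(1.0, 2.0), (2.0, 3.0)], 1: [(0.0, 3.0)], 2: [(3.0, 3.0)]}
G = {0: [-1.0], 1: [1.0], 2: [0.0]}
U = [0, 1, 2]
x_bar, y_bar = 0, (1.0, 2.0)

def delta_C(w):
    """Oriented distance to C = R^2_+ (Euclidean norm)."""
    if all(c >= 0 for c in w):
        return -min(w)
    return math.hypot(min(w[0], 0.0), min(w[1], 0.0))

def L2(x, pi):
    """L_2(x, pi) = inf_{y in F(x)} Delta_C(y_bar - y)
                    - sup_{z in G(x) ∩ -D} pi * (-z)."""
    inf_term = min(delta_C((y_bar[0] - y[0], y_bar[1] - y[1])) for y in F[x])
    zs = [z for z in G[x] if z <= 0]  # G(x) ∩ (-D)
    if not zs:                        # sup over empty set = -infinity
        return math.inf
    return inf_term - max(pi * (-z) for z in zs)

pi_bar = 0.0
# Saddle point check: L2(x_bar, pi) <= L2(x_bar, pi_bar) <= L2(x, pi_bar).
for pi in [0.0, 0.5, 1.0, 2.0]:
    assert L2(x_bar, pi) <= L2(x_bar, pi_bar) + 1e-12
for x in U:
    assert L2(x, pi_bar) >= L2(x_bar, pi_bar) - 1e-12

print(L2(x_bar, pi_bar) == 0.0)  # True
```

Note how the infeasible point \(x=1\) gives \(G(x)\cap (-D)=\emptyset \), so the inner supremum is \(-\infty \) and \(\mathcal {L}_{2}(1,\pi )=+\infty \), mirroring the argument in the proof of Theorem 3.3(i).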
The next result shows that the existence of a regular nonlinear separation function \({\omega }_2 (u , v , \pi )\) between \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) is equivalent to the existence of a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x , \pi ).\)
Theorem 3.3
Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) \(F(\bar{x})\subseteq \{\bar{y}\} + C \) and \(\omega _{2}\) be the class of nonlinear functions satisfying both conditions (19) and (20).
(i) If \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function, i.e.
$$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^{+}, \end{aligned}$$
then \(\bar{x} \in R\), and \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.
(ii) Suppose that \(\bar{x} \in R.\) If \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation, then \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function, i.e.
$$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^{+}. \end{aligned}$$
Proof
(i) Suppose that \((\bar{x}, \bar{\pi })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x, \pi )\); then
$$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^+, \end{aligned}$$
or, equivalently, for each \(\pi \in D^{+}\) and for each \(x \in U\), we have
$$\begin{aligned} \inf _{y \in F(x)}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z ,\bar{\pi })\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \bar{\pi })\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ). \end{aligned}$$(21)
First, we prove that \(\bar{x}\in R.\) On the contrary, suppose that \(\bar{x}\not \in R.\) Then \(G(\bar{x})\cap -D = \emptyset ,\) so in the second inequality in (21) we have
$$\begin{aligned} \inf _{\pi \in D^{+}}\sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi )= -\infty , \end{aligned}$$
which contradicts the first inequality in (21). Therefore \(\bar{x}\in R.\) On the other hand, since \( y \in \bar{y} + C\) for each \(y \in F(\bar{x}),\) we have \(\inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y )= 0\). Now, from the inequality (21), we have
$$\begin{aligned} \inf _{y \in F(x)}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(x)\cap -D}{\omega }_0( -z ,\bar{\pi })\ge - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ). \end{aligned}$$
By using (20), we obtain
$$\begin{aligned} \sup _{y \in F(x)}(-\triangle _{\mathcal {C}}( \bar{y}- y )) + \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\le 0. \end{aligned}$$
Hence,
$$\begin{aligned} -\triangle _{\mathcal {C}}( \bar{y}- y ) + {\omega }_0( -z , \bar{\pi })\le 0, \end{aligned}$$
which shows that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation.
(ii) Assume that \(\mathcal {K}_{\bar{p}}\) and \(\mathcal {H}\) admit a nonlinear separation. Then for each \(x \in U,~y \in F(x)\) and \( z \in G(x),\) we have
$$\begin{aligned} -\triangle _{\mathcal {C}}( \bar{y}- y ) + {\omega }_0( -z , \bar{\pi })\le 0, \end{aligned}$$
or equivalently
$$\begin{aligned} \sup _{y \in F(x)}(-\triangle _{\mathcal {C}}( \bar{y}- y )) + \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\le 0; \end{aligned}$$
thus
$$\begin{aligned} \inf _{y \in F(x)}\triangle _{\mathcal {C}}( \bar{y} - y ) - \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\ge 0. \end{aligned}$$
In particular, for \(x = \bar{x}\), by using (20), we obtain
$$\begin{aligned} \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })= 0, \end{aligned}$$(22)
and
$$\begin{aligned} \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y )=0, \end{aligned}$$(23)
since \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) On the other hand, by (20), we obtain
$$\begin{aligned} 0= \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })\le \sup _{z \in G(\bar{x})\cap -D}{\omega }_0( -z , \pi ), \end{aligned}$$
and from (22) and (23), we deduce
$$\begin{aligned} \inf _{y \in F(x)}\triangle _{\mathcal {C}}( \bar{y} - y ) - \sup _{z \in G(x)\cap -D} {\omega }_0( -z ,\bar{\pi })\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \bar{\pi })\ge \inf _{y \in F(\bar{x})}\triangle _{\mathcal {C}}(\bar{y} -y ) - \sup _{z \in G(\bar{x})\cap -D}{\omega }_0(-z, \pi ), \end{aligned}$$
that is,
$$\begin{aligned} \mathcal {L}_{2}(\bar{x} , \pi ) \le \mathcal {L}_{2}(\bar{x} , \bar{\pi } )\le \mathcal {L}_{2}(x , \bar{\pi } ),~~ \forall x \in U,~ \forall \pi \in D^+; \end{aligned}$$
i.e., \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function.
\(\square \)
The following result is directly derived from Proposition 3.5 and Theorem 3.3.
Corollary 3.2
Let \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F \) and \(\omega _{2}\) be the class of nonlinear functions satisfying both conditions (19) and (20). Suppose that \(F(\bar{x})\subseteq \{\bar{y}\} + C.\) If \((\bar{x} , \bar{\pi })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}_{2}(x , \pi ),\) then \(\bar{x} \in R\) and \(\bar{p}\) is a weak minimizer of Problem (1).
Remark 3.6
Using the class of regular separation functions \(\omega _{1},\) we can obtain an application to penalty methods for the constrained extremum Problem (1).
Let \(\bar{\theta }\in C^{+i}\) be fixed. Consider the following extremum Problem:
A point \(\bar{x}\in R\) is called a minimum point of Problem \((P_{\bar{\gamma }})\) iff
In this case, \((\bar{x}, \bar{y})\) is a minimizer for Problem \((P_{\bar{\gamma }}).\) If there exists \(\bar{\gamma } \in \mathbb {R}\), such that any solution of Problem \((P_{\bar{\gamma }})\), say \((\bar{x}, \bar{y})\), is a solution of Problem (1), then we say that the function \(\mathcal {L}^{\omega }(x, \bar{\gamma })\) is an exact penalty function of Problem (1) at \(\bar{x}\).
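The exact-penalty idea can be illustrated numerically. The excerpt does not display \(\mathcal {L}^{\omega }\), so the penalty form below (scalarized objective plus \(\gamma \) times the distance of G(x) from feasibility, matching the structure of condition (ii) of Theorem 3.4) is our assumption, as is all the toy data: \(Y=\mathbb {R}^{2}\), \(C=\mathbb {R}^{2}_{+}\), \(Z=\mathbb {R}\), \(D=\mathbb {R}_{+}\), \(\bar{\theta }=(1,1)\in C^{+i}\).

```python
# Hypothetical toy instance (our data): the unpenalized scalarized minimum
# sits at the infeasible point x = 1; a large enough gamma restores the
# feasible minimizer x = 0.
F = {0: [(1.0, 2.0)], 1: [(0.0, 2.0)], 2: [(3.0, 3.0)]}
G = {0: [-1.0], 1: [1.0], 2: [0.0]}
U = [0, 1, 2]
theta_bar = (1.0, 1.0)

def d_D(v):
    """Distance from v to D = R_+."""
    return max(-v, 0.0)

def penalty_value(x, gamma):
    """min_y <theta_bar, y> + gamma * inf_z d_D(-z): scalarized objective
    plus gamma times the infeasibility measure of x."""
    obj = min(theta_bar[0] * y[0] + theta_bar[1] * y[1] for y in F[x])
    infeas = min(d_D(-z) for z in G[x])
    return obj + gamma * infeas

def argmin_penalty(gamma):
    return min(U, key=lambda x: penalty_value(x, gamma))

print(argmin_penalty(0.0))  # 1 -- unpenalized minimum is infeasible
print(argmin_penalty(2.0))  # 0 -- penalized minimum is the feasible minimizer
```

This is only a finite-dimensional caricature of Theorem 3.4; the theorem itself asserts the equivalence of the conical separation condition (i), the penalty estimates (ii)-(iii), and exactness of the penalty function (iv).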
By a minor modification in the proof of Theorem 4.4 in [22], we can obtain the following result for set-valued optimization problems.
Theorem 3.4
Let Z be a reflexive space, \(\bar{x}\in R\), \(\bar{p} = (\bar{x}, \bar{y}) \in \mathrm {gr}~ F,\) \(F(\bar{x})\subseteq \{\bar{y}\} + C\) and \(\bar{\theta }\in C^{+i}.\) Then, the following statements are equivalent:
(i) \(\text{ cl } \text{ cone }~{ \mathcal {E } _{\bar{p}}}\cap {\mathcal {H}_{u}} = \emptyset ;\)
(ii) there exists \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\},\) such that
$$\begin{aligned} \sup _{y \in F(x)}\langle \bar{\theta } , \bar{y}-y \rangle \le \bar{\gamma }\inf _{z \in G(x)} d_{D}(-z), ~~\forall x \in U; \end{aligned}$$
(iii) there exists \(\bar{\gamma } \in \mathbb {R}_{+}{\setminus } \{ 0\},\) such that
$$\begin{aligned} \omega _1 (u, v,\bar{\theta },\bar{\gamma })\le 0 ,~~~~~~~~\forall (u, v)\in \mathcal {K}_{\bar{p}}, \end{aligned}$$
where
$$\begin{aligned} \omega _1 (u, v, \theta , \gamma ) = \langle \theta , u \rangle + {\omega }_0(v , \gamma )= \langle \theta , u \rangle - \gamma d_{D}(v); \end{aligned}$$
(iv) \(\mathcal {L}^{\omega }(x, \gamma )\) is an exact penalty function of Problem (1) at \(\bar{x}\).
Proof
Assume that (i) holds and that (ii) does not hold. Then for any \(n \in \mathbb {N},\) there exist \(x_{n} \in U\), \(y_{n} \in F(x_{n})\) and \(z_{n} \in G(x_{n})\) such that
Since Z is a reflexive Banach space, the norm is a continuous, convex and coercive function and D is a closed convex set, for any \(n \in \mathbb {N}\) there exists \(v_n \in D\) such that
Let \({\alpha }_n:= \frac{1}{\langle \bar{\theta } , \bar{y} - y_{n} \rangle } > 0\). Then,
Thus, \(\lim _{n\longrightarrow \infty }\alpha _{n}(-z_{n}- v_{n}) = 0.\) So,
Or equivalently
which contradicts (i).
Now, assume that (ii) holds and on the contrary, suppose that (i) is not fulfilled. Then there exists \((c, 0)\in \text{ cl } \text{ cone }~{\mathcal {E} _{\bar{p}}}\cap \mathcal {H}_{u}\). Hence, for any \(n \in \mathbb {N}\) there exist \({\alpha }_n > 0\), \(x_{n} \in U\), \(y_{n} \in F(x_{n})\), \(z_{n} \in G(x_{n})\) and \((u_{n}, v_{n})\in \text{ cl }~{\mathcal {H}}\) such that
Therefore, \(\langle \bar{\theta }, \bar{y} - y_{n} - u_{n}\rangle > 0,\) for sufficiently large n, and
Then
since,
So, we can deduce that for any \(\gamma > 0,\) there exist \(y_{n} \in F(x_{n})\), \(z_{n} \in G(x_{n})\) and \(u_{n}\in C\) such that
for sufficiently large \(n \in \mathbb {N}\), which contradicts (ii).
(ii) is equivalent to (iii), since (ii) is equivalent to
which is equivalent to (iii) by definition of \(\mathcal {K}_{\bar{p}}\).
(ii) is also equivalent to (iv). Indeed, (ii) is equivalent to
Or
since \(G(\bar{x})\cap -D \ne \emptyset \). Therefore, \(\mathcal {L}^{\omega }(\bar{x}, \bar{\gamma })\le \mathcal {L}^{\omega }(x, \bar{\gamma }),\) i.e., \((\bar{x}, \bar{y})\) is a minimizer for Problem \((P_{\bar{\gamma }}).\) On the other hand, from (iii) we have
Or
then \(\bar{p}\) is a minimizer of Problem (1) by Proposition 3.4, and \(\mathcal {L}^{\omega }(x, \gamma )\) is an exact penalty function of Problem (1) at \(\bar{x}\). \(\square \)
References
Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
Castellani, G., Giannessi, F.: Decomposition of mathematical programs by means of theorems of alternative for linear and nonlinear systems. In: Survey of Mathematical Programming (Proc. Ninth Internat. Math. Programming Sympos., Budapest), vol. 2, pp. 423–439. North-Holland, Amsterdam (1979)
Chen, J., Li, S., Wan, Z., Yao, J.C.: Vector variational-like inequalities with constraints: separation and alternative. J. Optim. Theory Appl. 166, 460–479 (2015)
Chinaie, M., Zafarani, J.: Image space analysis and scalarization of multivalued optimization. J. Optim. Theory Appl. 142, 451–467 (2009)
Chinaie, M., Zafarani, J.: Image space analysis and scalarization for \(\varepsilon \)-optimization of multifunctions. J. Optim. Theory Appl. 157, 685–695 (2013)
Chinaie, M., Zafarani, J.: A new approach to constrained optimization via image space analysis. Positivity 20, 99–114 (2016)
Dien, P.H., Mastroeni, G., Pappalardo, M., Quang, P.H.: Regularity conditions for constrained extremum problems via image space. J. Optim. Theory Appl. 80, 19–37 (1994)
Giannessi, F.: Theorems of the alternative and optimality conditions. J. Optim. Theory Appl. 42, 331–365 (1984)
Giannessi, F.: Constrained Optimization and Image Space Analysis, vol. 1, Separation of Sets and Optimality Conditions. Springer, New York (2005)
Giannessi, F., Mastroeni, G.: Separation of sets and Wolfe duality. J. Global Optim. 42, 401–412 (2008)
Giannessi, F., Mastroeni, G., Pellegrini, L.: On the theory of vector optimization and variational inequalities. Image space analysis and separation. In: Giannessi, F. (ed.) Vector Variational Inequalities and Vector Equilibria: Mathematical Theories. Kluwer Academic Publishers, Dordrecht (1999)
Giannessi, F., Mastroeni, G., Yao, J.-C.: On maximum and variational principles via image space analysis. Positivity 16, 405–427 (2012)
Giannessi, F., Maugeri, A.: Variational Analysis and Applications, Non Convex Optimization and Its Applications, vol. 79. Springer, New York (2005)
Giannessi, F., Pellegrini, L.: Image space analysis for vector optimization and variational inequalities. Scalarization. In: Combinatorial and Global Optimization. Ser. Appl. Math., vol. 14, pp. 97–110. World Scientific Publishing, River Edge (2002)
Hiriart-Urruty, J.-B.: Tangent cones, generalized gradients and mathematical programming in Banach spaces. Math. Oper. Res. 4, 79–97 (1979)
Li, J., Feng, S.Q., Zhang, Z.: A unified approach for constrained extremum problems: image space analysis. J. Optim. Theory Appl. 159, 69–92 (2013)
Li, S.J., Xu, Y.D.: Nonlinear separation approaches to constrained extremum problems. J. Optim. Theory Appl. 54, 842–856 (2012)
Li, S.J., Xu, Y.D.: A new nonlinear scalarization function and applications. Optimization 65, 207–231 (2016)
Liu, C.G., Ng, K.F., Yang, W.H.: Merit functions in vector optimization. Math. Program. 119, 215–237 (2009)
Luc, D.T.: Theory of Vector Optimization. Springer, Berlin (1989)
Luo, H.Z., Mastroeni, G., Wu, H.X.: Separation approach for augmented Lagrangians in constrained nonconvex optimization. J. Optim. Theory Appl. 144, 275–290 (2010)
Mastroeni, G.: Nonlinear separation in the image space with applications to penalty methods. Appl. Anal. 91, 1901–1914 (2012)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I, II. Springer, Berlin (2006)
Mordukhovich, B.S.: Multiobjective optimization with equilibrium constraints. Math. Program. 117, 331–354 (2009)
Pappalardo, M.: Image space approach to penalty methods. J. Optim. Theory Appl. 64, 141–152 (1990)
Tardella, F.: On the image of a constrained extremum problem and some applications to existence of a minimum. J. Optim. Theory Appl. 69, 93–104 (1989)
Zaffaroni, A.: Degrees of efficiency and degrees of minimality. SIAM J. Control Optim. 42, 1071–1086 (2003)
Zhu, S.K., Li, S.J.: Unified duality theory for constrained extremum problems I: image space analysis. J. Optim. Theory Appl. 161, 738–762 (2014)
Acknowledgments
The authors are grateful to the Chief Editor and the reviewers for valuable remarks and comments. The second author was partially supported by the Center of Excellence for Mathematics, University of Isfahan, Iran.
Chinaie, M., Zafarani, J. Nonlinear separation in the image space with applications to constrained optimization. Positivity 21, 1031–1047 (2017). https://doi.org/10.1007/s11117-016-0450-0