Abstract
The nonsmooth semi-infinite programming problem \(\left( {\textit{SIP}}\right) \) is solved in the paper (Mishra et al. in J Glob Optim 53:285–296, 2012) using limiting subdifferentials. The necessary optimality condition obtained by the authors, as well as its proof, is false. Even in the case where the index set is finite, the result remains false. Two major problems prevent the authors from obtaining the expected result: first, they relied on Theorem 3.2 (Soleimani-damaneh and Jahanshahloo in J Math Anal Appl 328:281–286, 2007), which is not valid for nonsmooth semi-infinite problems with an infinite index set; second, they would have had to assume a suitable constraint qualification to obtain the expected necessary optimality conditions. For the convenience of the reader, under a nonsmooth limiting constraint qualification and using techniques from variational analysis, we propose another proof to derive necessary optimality conditions in terms of Karush–Kuhn–Tucker multipliers. The obtained results are formulated using limiting subdifferentials and Fréchet subdifferentials.
1 Introduction
Semi-infinite programming problems have been investigated by many authors [2,3,4,5,6, 9]. In the paper [6], the authors investigated the following nonsmooth semi-infinite programming problem
where I is an index set, possibly infinite, and f and \(g_{i},\ i\in I\), are locally Lipschitz functions from \({\mathbb {R}}^{n}\) to \(\mathbb {R\cup } \left\{ +\infty \right\} \).
Using limiting subdifferentials [7], they established necessary and sufficient optimality conditions for \(\left( {\textit{SIP}}\right) \) and gave some duality results. The main theorem, Theorem 3.1 [6], where the authors give necessary optimality conditions, is based on Theorem 3.2 [10] (see Theorem 1 below) obtained by Soleimani-damaneh and Jahanshahloo [10] in the case where I is a finite index set. In what follows, let \(I=\left\{ 1,\ ...,\ p\right\} ,\ p\in {\mathbb {N}}^{*}\), and let \(\left( P\right) \) be the nonsmooth optimization problem where one minimizes the function f over the set
$$\begin{aligned} {\mathbb {S}}:=\left\{ x\in {\mathbb {R}}^{n}:\ g_{i}\left( x\right) \le 0,\ \ i\in I\right\} . \end{aligned}$$
Theorem 1
[10] Let \({\overline{x}}\in {\mathbb {S}}\) be an optimal solution of \(\left( P\right) \). Suppose that \(g_{i}\left( x\right) \) for \(i\in I\left( {\overline{x}}\right) \) is Lipschitz near \({\overline{x}}\) and \( g_{i}\left( x\right) \) for \(i\notin I\left( {\overline{x}}\right) \) is continuous at \({\overline{x}}\). Also suppose that there exists a \(d\in \mathbb { R}^{n}\) such that \(\eta ^{t}d<0\) for all \(\eta \in \underset{i\in I\left( {\overline{x}}\right) }{\cup }\partial _{L}g_{i}\left( {\overline{x}}\right) \). Then
where \(I\left( {\overline{x}}\right) :=\left\{ i\in I:\ g_{i}\left( {\overline{x}}\right) =0\right\} \) denotes the set of active indices at \({\overline{x}}\).
Looking closely at the proof of Theorem 3.2 [10], we note that it is neither valid nor usable in the infinite case: one cannot guarantee that \(\delta \) is nonzero, and as a result one cannot deduce (1). Since Theorem 1 is an integral part of the proof of Theorem 3.1 [6], the result obtained by the authors, as well as its proof, is false (setting \(f\left( x\right) =x\) and \(g_{n}\left( x\right) =\exp \left( -nx\right) -1,\ n\in {\mathbb {N}}\), yields a simple counterexample). Furthermore, since the authors did not assume any constraint qualification, Theorem 3.1 [6] remains false even in the finite case (Example 4.2.10 [1] yields a simple counterexample). To overcome these problems, under a nonsmooth limiting constraint qualification and using techniques from variational analysis, we propose another proof to derive necessary optimality conditions for \(\left( {\textit{SIP}}\right) \) in terms of Karush–Kuhn–Tucker multipliers. The obtained results are formulated using limiting subdifferentials [7] and Fréchet subdifferentials [7]. Theorems 8 and 10 are two corrected versions of Theorem 3.1 [6].
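To make the counterexample data above concrete, the following numerical sketch (ours, purely illustrative; the finite loop over n is only a surrogate for the infinite index set) checks that with \(f(x)=x\) and \(g_{n}(x)=\exp (-nx)-1\) the feasible set is \([0,+\infty )\) and the minimizer is \({\overline{x}}=0\):

```python
import math

# Counterexample data from the text: f(x) = x and g_n(x) = exp(-n*x) - 1, n = 1, 2, ...
def f(x):
    return x

def g(n, x):
    return math.exp(-n * x) - 1.0

def feasible(x, n_max=50):
    """Check g_n(x) <= 0 for n = 1, ..., n_max (a finite surrogate for the
    infinite index set). Analytically, exp(-n*x) - 1 <= 0 for all n >= 1
    iff x >= 0, so the feasible set is [0, +infinity)."""
    return all(g(n, x) <= 1e-12 for n in range(1, n_max + 1))

# The minimizer of f over the feasible set is x_bar = 0.
grid = [k / 10.0 for k in range(-20, 21)]
feas = [x for x in grid if feasible(x)]
assert min(feas, key=f) == 0.0
assert not feasible(-0.1)
```

Every constraint is active at \({\overline{x}}=0\) (\(g_{n}(0)=0\) for all n), which is what makes the infinite index set problematic for the argument in [10].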
Throughout the sequel, unless otherwise stated, \({\mathbb {B}}_{{\mathbb {R}}^{n}}\) denotes the closed unit ball of \({\mathbb {R}}^{n}\) and \(\left\| \left( x,y\right) \right\| :=\left\| x\right\| +\left\| y\right\| \) is the \(l_{1}\)-norm of \(\left( x,y\right) \). For a multifunction \(F:{\mathbb {R}}^{n}\rightrightarrows {\mathbb {R}}^{n}\), the expressions
and
signify, respectively, the sequential Painlevé-Kuratowski upper/outer and lower/inner limits in the norm topology in \({\mathbb {R}}^{n}\).
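To fix ideas, here is a simple one-dimensional illustration (our example, not taken from the original paper). For the multifunction
$$\begin{aligned} F\left( x\right) :=\left\{ \begin{array}{ll} \left[ 0,1\right] , &{} x>0, \\ \left\{ 0\right\} , &{} x\le 0, \end{array} \right. \end{aligned}$$
one has \(\mathop {\mathrm {Lim\,sup}}\nolimits _{x\rightarrow 0}F\left( x\right) =\left[ 0,1\right] \), since every point of \(\left[ 0,1\right] \) is attained along some sequence \(x_{k}\downarrow 0\), while \(\mathop {\mathrm {Lim\,inf}}\nolimits _{x\rightarrow 0}F\left( x\right) =\left\{ 0\right\} \), since sequences \(x_{k}\uparrow 0\) only allow the value 0.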
The rest of the paper is organized as follows: Sect. 2 contains basic definitions and preliminary material from nonsmooth variational analysis. Section 3 addresses the main results (optimality conditions).
2 Preliminaries
For a subset \(D\subseteq {\mathbb {R}}^{n}\), \(cl\ D,\ co\ D\) and \(cone\ D\) stand for the closure, the convex hull and the convex cone generated by D, respectively. For a function \(f:{\mathbb {R}}^{n}\rightarrow \mathbb {R\cup } \left\{ +\infty \right\} \), the graph of f is the set of points in \({\mathbb {R}}^{n+1}\) defined by
$$\begin{aligned} grf:=\left\{ \left( x,f\left( x\right) \right) :\ f\left( x\right) \in {\mathbb {R}}\right\} . \end{aligned}$$
The following definitions are crucial for our investigation.
Definition 2
[7] Let \(\Omega _{1}\) and \(\Omega _{2}\) be nonempty closed subsets of \({\mathbb {R}}^{n}\). We say that \(\left\{ \Omega _{1},\Omega _{2}\right\} \) is an extremal system in \({\mathbb {R}}^{n}\) if these sets have at least one (locally) extremal point \({\bar{x}}\in \Omega _{1}\cap \Omega _{2}\); that is, there exists a neighborhood U of \({\bar{x}}\) such that for every \(\varepsilon >0\) there is a vector \(a\in \varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\) with
$$\begin{aligned} \left( \Omega _{1}+a\right) \cap \Omega _{2}\cap U=\emptyset . \end{aligned}$$
In this case, \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}}\right\} \) is said to be an extremal system in \({\mathbb {R}}^{n}\).
Definition 3
[7] Let \(\Omega \subset {\mathbb {R}}^{n}\) be locally closed around \({\bar{x}}\in \Omega \). Then the Fréchet normal cone \({\widehat{N}}({\bar{x}};\Omega )\) and the Mordukhovich normal cone (limiting normal cone) \(N({\bar{x}};\Omega )\) to \(\Omega \) at \({\bar{x}}\) are defined by
$$\begin{aligned} {\widehat{N}}({\bar{x}};\Omega ):=\left\{ x^{*}\in {\mathbb {R}}^{n}:\ \limsup _{x\overset{\Omega }{\rightarrow }{\bar{x}}}\frac{\left\langle x^{*},x-{\bar{x}}\right\rangle }{\left\| x-{\bar{x}}\right\| }\le 0\right\} \end{aligned}$$(2)and
$$\begin{aligned} N({\bar{x}};\Omega ):=\limsup _{x\overset{\Omega }{\rightarrow }{\bar{x}}}{\widehat{N}}(x;\Omega ), \end{aligned}$$(3)
where \(x\overset{\Omega }{\rightarrow }{\bar{x}}\) stands for \(x\rightarrow {\bar{x}}\) with \(x\in \Omega \).
Definition 4
[7] Let \(\varphi :{\mathbb {R}}^{n}\rightarrow \mathbb {R\cup }\left\{ +\infty \right\} \) be lower semicontinuous around \({\bar{x}}\).
1.
The Fréchet subdifferential of \(\varphi \) at \({\bar{x}}\) is
$$\begin{aligned} {\widehat{\partial }}\varphi ({\bar{x}}):=\left\{ x^{*}\in {\mathbb {R}}^{n}:\;\liminf _{x\rightarrow {\bar{x}}}\frac{\varphi (x)-\varphi ({\bar{x}})-\langle x^{*},x-{\bar{x}}\rangle }{\Vert x-{\bar{x}}\Vert }\ge 0\right\} . \end{aligned}$$
2.
The Mordukhovich (limiting) subdifferential of \(\varphi \) at \(\bar{ x}\) is defined by
$$\begin{aligned} \partial \varphi ({\bar{x}}):=\limsup _{x\overset{\varphi }{\rightarrow }{\bar{x}} }{\widehat{\partial }}\varphi (x), \end{aligned}$$(4)
where \(x\overset{\varphi }{\rightarrow }{\bar{x}}\) means that \(x\rightarrow {\bar{x}}\) with \(\varphi (x)\rightarrow \varphi ({\bar{x}})\).
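As a quick numerical illustration of part 1 of Definition 4 (this sketch is ours; a grid check is only a finite surrogate for the liminf), one can test candidate Fréchet subgradients of \(\varphi (x)=\left| x\right| \) at \({\bar{x}}=0\), for which the classical fact \({\widehat{\partial }}\varphi (0)=\left[ -1,1\right] \) holds:

```python
import numpy as np

def phi(x):
    # phi(x) = |x|: Lipschitz on R, nonsmooth at the reference point 0
    return np.abs(x)

def frechet_quotient_min(x_star, radius=1e-3, samples=5000):
    """Minimum over a punctured grid around xbar = 0 of the difference quotient
    (phi(x) - phi(0) - x_star*x) / |x| from the Frechet subdifferential
    definition; a nonnegative minimum is consistent with x_star being a
    Frechet subgradient, a negative one rules it out."""
    pos = np.linspace(radius / samples, radius, samples)
    xs = np.concatenate([-pos, pos])  # symmetric grid, 0 itself excluded
    q = (phi(xs) - phi(0.0) - x_star * xs) / np.abs(xs)
    return float(q.min())

assert frechet_quotient_min(0.5) >= 0.0    # 0.5 lies in [-1, 1]
assert frechet_quotient_min(-1.0) >= 0.0   # boundary subgradient
assert frechet_quotient_min(1.5) < 0.0     # 1.5 lies outside [-1, 1]
```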
One clearly has
$$\begin{aligned} N({\bar{x}};\Omega )=\partial \delta ({\bar{x}};\Omega )\quad \text {and}\quad {\widehat{N}}({\bar{x}};\Omega )={\widehat{\partial }}\delta ({\bar{x}};\Omega ), \end{aligned}$$
where \(\delta (\cdot ;\Omega )\) is the indicator function of \(\Omega \).
Remark 5
[8]
1.
For any closed set \(\Omega \subset {\mathbb {R}}^{n}\) and \( {\overline{x}}\in \Omega \) one has
$$\begin{aligned} N_{c}({\bar{x}};\Omega )=cl\;co\,N({\bar{x}};\Omega ) \end{aligned}$$(5)and for any function \(\varphi :{\mathbb {R}}^{n}\rightarrow \overline{{\mathbb {R}}}\) that is Lipschitz continuous around \({\bar{x}}\), one has
$$\begin{aligned} \partial _{c}\varphi ({\bar{x}})=cl\;co\,\partial \varphi ({\bar{x}}) \end{aligned}$$(6)where \(N_{c}({\bar{x}};\Omega )\) and \(\partial _{c}\varphi ({\bar{x}})\) denote the Clarke normal cone and the Clarke subdifferential, respectively.
2.
The Fréchet normal cone \({\widehat{N}}({\bar{x}};\Omega )\) is always convex while the Mordukhovich normal cone \(N({\bar{x}};\Omega )\) is nonconvex in general.
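A standard example illustrating both (6) and this nonconvexity (our illustration, a classical textbook fact) is \(\varphi \left( x\right) =-\left| x\right| \) on \({\mathbb {R}}\): one has \({\widehat{\partial }}\varphi \left( 0\right) =\emptyset \) and
$$\begin{aligned} \partial \varphi \left( 0\right) =\left\{ -1,1\right\} ,\qquad \partial _{c}\varphi \left( 0\right) =cl\;co\left\{ -1,1\right\} =\left[ -1,1\right] , \end{aligned}$$
so the limiting subdifferential is nonconvex while its closed convex hull recovers the Clarke subdifferential, in accordance with (6).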
Definition 6
[7] Let \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}}\right\} \) be an extremal system in \({\mathbb {R}}^{n}\). \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}}\right\} \) satisfies the approximate extremal principle if for every \(\varepsilon >0\) there are \(x_{1}\in \Omega _{1}\cap \left( {\overline{x}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right) \), \(x_{2}\in \Omega _{2}\cap \left( {\overline{x}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right) \) and \(x^{*}\in {\mathbb {R}}^{n}\) such that \(\left\| x^{*}\right\| =1\) and
$$\begin{aligned} x^{*}\in \left[ {\widehat{N}}\left( x_{1};\Omega _{1}\right) +\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right] \cap \left[ -{\widehat{N}}\left( x_{2};\Omega _{2}\right) +\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right] . \end{aligned}$$
Remark 7
A common point \({\overline{x}}\) of two sets is locally extremal if the sets can be locally pushed apart by an arbitrarily small translation in such a way that the resulting sets have empty intersection in some neighborhood of \({\overline{x}}\).
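For instance (our illustration), take \(\Omega _{1}=\left( -\infty ,0\right] \) and \(\Omega _{2}=\left[ 0,+\infty \right) \) in \({\mathbb {R}}\) with \({\overline{x}}=0\): for every \(\varepsilon >0\), the translation \(a=-\varepsilon /2\in \varepsilon {\mathbb {B}}_{{\mathbb {R}}}\) gives
$$\begin{aligned} \left( \Omega _{1}+a\right) \cap \Omega _{2}=\left( -\infty ,-\varepsilon /2\right] \cap \left[ 0,+\infty \right) =\emptyset , \end{aligned}$$
so \(\left\{ \Omega _{1},\Omega _{2},0\right\} \) is an extremal system.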
3 Necessary optimality conditions
Let S be the feasible set of \(\left( {\textit{SIP}}\right) \) defined by
$$\begin{aligned} S:=\left\{ x\in {\mathbb {R}}^{n}:\ g_{i}\left( x\right) \le 0\ \text { for all }i\in I\right\} . \end{aligned}$$
Theorem 8
Assume that f is locally Lipschitz with constant k around the local optimal point \({\overline{u}}\in S\). Then, for any \(\varepsilon >0\), there exist \(u_{1},u_{2}\in {\overline{u}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\varepsilon ,\ f\left( {\overline{u}}\right) +\varepsilon \right) \) and \(\beta _{\varepsilon }^{*}\in {\mathbb {R}}_{+}\backslash \left\{ 0\right\} \) such that \( u_{1}\in S,\;v_{1}\le f\left( {\overline{u}}\right) ,\;v_{2}=f\left( u_{2}\right) \) and
Proof
Since \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \), there exists a neighborhood V of \({\overline{u}}\) such that for all \(u\in V\cap S\)
$$\begin{aligned} f\left( {\overline{u}}\right) \le f\left( u\right) . \end{aligned}$$
Take
$$\begin{aligned} \Omega _{1}:=S\times \left( -\infty ,f\left( {\overline{u}}\right) \right] \text { and }\Omega _{2}:=grf. \end{aligned}$$Then, it is easy to show that \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \) is an extremal point of the system \(\left( \Omega _{1},\Omega _{2}\right) \). Indeed, suppose this is not the case, i.e., for any neighborhood U of \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \) there is \(\varepsilon >0\) such that for all \(a\in \varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}}\) one has
$$\begin{aligned} (\Omega _{1}+a)\cap \Omega _{2}\cap U\ne \emptyset . \end{aligned}$$Let \(a=\left( 0,-\dfrac{\varepsilon }{2}\right) \ \)and \(\left( u,v\right) \in (\Omega _{1}+a)\cap \Omega _{2}\cap U\). Thus,
$$\begin{aligned} u\in S\text { and }f\left( u\right) \in ] -\infty ,f\left( {\overline{u}} \right) -\dfrac{\varepsilon }{2} ]. \end{aligned}$$Hence \(f\left( u\right) <f\left( {\overline{u}}\right) \), which contradicts the fact that \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \).
Since f is locally Lipschitz, one has
$$\begin{aligned} \left| f\left( u\right) -f\left( {\widehat{u}}\right) \right| \le k\Vert u-{\widehat{u}}\Vert \;\;\;\;\;\;\;\;\;\;\;\;\text {for }u,{\widehat{u}}\text { sufficiently close to }{\overline{u}}. \end{aligned}$$Let \(0<\varepsilon <1/4\). Since \(\left\{ \Omega _{1},\Omega _{2},\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \right\} \) is an extremal system, due to [7, Theorem 2.10], the approximate extremal principle holds at \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \). Choosing \(\theta =\dfrac{\varepsilon }{4\left( k+1\right) }\), there exist \(u_{1},u_{2}\in {\overline{u}}+\theta {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\theta ,f\left( {\overline{u}}\right) +\theta \right) \) and \(\left( x^{*},y^{*}\right) \in {\mathbb {R}}^{n}\times {\mathbb {R}}\) such that \(u_{1}\in S,\;v_{1}\in \left( -\infty ,f\left( {\overline{u}}\right) \right] ,\;v_{2}=f\left( u_{2}\right) ,\ \left\| \left( x^{*},y^{*}\right) \right\| =1\) and
$$\begin{aligned} \left( x^{*},y^{*}\right) \in \left[ {\widehat{N}}\left( \left( u_{1},v_{1}\right) ;\Omega _{1}\right) +\theta {\mathbb {B}}_{{\mathbb {R}} ^{n}\times {\mathbb {R}}}\right] \cap \left[ -{\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) +\theta {\mathbb {B}}_{{\mathbb {R}} ^{n}\times {\mathbb {R}}}\right] . \end{aligned}$$(7)Hence we can find \(\left( u^{*},v^{*}\right) \in {\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) \) and \(\left( a_{i}^{*},b_{i}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}},\ i=1,\ 2\), and \(\left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \in {\widehat{N}}\left( \left( u_{1},v_{1}\right) ;\Omega _{1}\right) \) such that
$$\begin{aligned} \left( u^{*},v^{*}\right) =-\left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{2}^{*},b_{2}^{*}\right) -\theta \left( a_{1}^{*},b_{1}^{*}\right) , \end{aligned}$$and
$$\begin{aligned} \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{1}^{*},b_{1}^{*}\right) =\left( x^{*},y^{*}\right) . \end{aligned}$$(8)
Since \(\left( u^{*},v^{*}\right) \in {\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) \), one has for all \((u,v)\in grf\) sufficiently close to \(\left( u_{2},v_{2}\right) :\)
$$\begin{aligned} \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle -2\theta \left\| \left( u-u_{2},v-v_{2}\right) \right\| \le 0. \end{aligned}$$Indeed, the definition of Fréchet normals (2) implies that
$$\begin{aligned} \underset{\delta >0}{\inf }\ \underset{\left( u,v\right) \in {\mathbb {B}}_{\delta }\left( u_{2},v_{2}\right) \cap \Omega _{2}}{\sup }\frac{ \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle }{\left\| \left( u-u_{2},v-v_{2}\right) \right\| }\le 0, \end{aligned}$$so there exists \(\delta >0\) such that for all \(\left( u,v\right) \in {\mathbb {B}}_{\delta }\left( u_{2},v_{2}\right) \cap \Omega _{2}\),
$$\begin{aligned} \frac{\left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle }{\left\| \left( u-u_{2},v-v_{2}\right) \right\| }\le 2\theta , \end{aligned}$$or, equivalently,
$$\begin{aligned} \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle -2\theta \left\| \left( u-u_{2},v-v_{2}\right) \right\| \le 0. \end{aligned}$$Consequently,
$$\begin{aligned} 0\ge & {} \langle -\alpha _{\varepsilon }^{*},u-u_{2}\rangle +\langle -\beta _{\varepsilon }^{*},f\left( u\right) -f\left( u_{2}\right) \rangle \\&+\,\theta \left\langle a_{2}^{*}-a_{1}^{*},u-u_{2}\right\rangle +\theta \left( b_{2}^{*}-b_{1}^{*}\right) \left( f\left( u\right) -f\left( u_{2}\right) \right) \\&-\,2\theta \Vert (u-u_{2},f\left( u\right) -f\left( u_{2}\right) )\Vert \end{aligned}$$for \(\left( u,v\right) \in grf\) sufficiently close to \(\left( u_{2},v_{2}\right) \).
The local Lipschitz property of f, together with the fact that \(\left( a_{i}^{*},b_{i}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}}\) for \(i=1,\ 2\), gives us for each u sufficiently close to \(u_{2}\),
$$\begin{aligned} \langle \alpha _{\varepsilon }^{*},u-u_{2}\rangle +\beta _{\varepsilon }^{*}\left( f\left( u\right) -f\left( u_{2}\right) \right)\ge & {} -4\theta \left[ \left\| u-u_{2}\right\| +\left| f\left( u\right) -f\left( u_{2}\right) \right| \right] \\\ge & {} -4\theta \left( k+1\right) \left\| u-u_{2}\right\| \\\ge & {} -\varepsilon \left\| u-u_{2}\right\| , \end{aligned}$$where the last inequality uses \(\theta =\dfrac{\varepsilon }{4\left( k+1\right) }\). Then, \(u_{2}\) locally minimizes the function
$$\begin{aligned} \Psi \left( u\right) :=\langle \alpha _{\varepsilon }^{*},u-u_{2}\rangle +\beta _{\varepsilon }^{*}\left( f\left( u\right) -f\left( u_{2}\right) \right) +\varepsilon \Vert u-u_{2}\Vert . \end{aligned}$$Using [7, Proposition 1.107] together with the fuzzy sum rule [7, Theorem 2.33], we can find a point \({\widetilde{u}}_{2}\in u_{2}+\frac{ \varepsilon }{2}{\mathbb {B}}_{{\mathbb {R}}^{n}}\) such that
$$\begin{aligned} 0\in & {} \alpha _{\varepsilon }^{*}+{\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f+\varepsilon \left\| .-u_{2}\right\| \right) \left( u_{2}\right) \end{aligned}$$(9)$$\begin{aligned}\subseteq & {} \alpha _{\varepsilon }^{*}+{\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( {\widetilde{u}}_{2}\right) +\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}. \end{aligned}$$(10)
We claim that \(\beta _{\varepsilon }^{*}\ne 0\). Indeed, from (10), there exist \(s^{*}\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( {\widetilde{u}}_{2}\right) \) and \(e^{*}\in {\mathbb {B}}_{{\mathbb {R}}^{n}}\) such that
$$\begin{aligned} 0=\alpha _{\varepsilon }^{*}+s^{*}+\varepsilon e^{*}. \end{aligned}$$Then,
$$\begin{aligned} \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| =\left\| \alpha _{\varepsilon }^{*}\right\| +\left\| \beta _{\varepsilon }^{*}\right\| =\left\| s^{*}+\varepsilon e^{*}\right\| +\left\| \beta _{\varepsilon }^{*}\right\| \le \left\| s^{*}\right\| +\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$Moreover,
$$\begin{aligned} \left\| \left( x^{*},y^{*}\right) \right\| =\left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{1}^{*},b_{1}^{*}\right) \right\| \le \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| +\theta \left\| \left( a_{1}^{*},b_{1}^{*}\right) \right\| . \end{aligned}$$Since \(\left( a_{1}^{*},b_{1}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n+1}},\;\left\| \left( x^{*},y^{*}\right) \right\| =1\) and \( \theta =\dfrac{\varepsilon }{4\left( k+1\right) }<\varepsilon <1/4\), one gets
$$\begin{aligned} \dfrac{3}{4}\le 1-\theta \le \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| . \end{aligned}$$Thus,
$$\begin{aligned} \dfrac{3}{4}\le \left\| s^{*}\right\| +\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$Using the Lipschitz property of f, one deduces that
$$\begin{aligned} \dfrac{3}{4}\le \beta _{\varepsilon }^{*}k+\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$Consequently,
$$\begin{aligned} 0<\frac{\dfrac{3}{4}-\varepsilon }{k+1}\le \beta _{\varepsilon }^{*}. \end{aligned}$$Hence, \(\beta _{\varepsilon }^{*}>0\), and the desired conclusion follows from (9). \(\square \)
The following constraint qualification will be used to get necessary optimality conditions in terms of limiting subdifferentials and Karush–Kuhn–Tucker multipliers.
Definition 9
We say that the nonsmooth limiting constraint qualification holds at \({\overline{u}}\in S\) if
where
Theorem 10 gives exact optimality conditions for our nonsmooth semi-infinite programming problem.
Theorem 10
Assume that f is locally Lipschitz at \({\overline{u}}\in S\) with Lipschitz constant k and that \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \). Suppose that the nonsmooth limiting constraint qualification holds at \({\overline{u}}\). Then,
Proof
Fix an arbitrary \(\varepsilon >0\). Since \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \), Theorem 8 provides \(u_{1},u_{2}\in {\overline{u}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\varepsilon ,\ f\left( {\overline{u}}\right) +\varepsilon \right) \) and \(\beta _{\varepsilon }^{*}\in {\mathbb {R}}_{+}\backslash \left\{ 0\right\} \) such that \(u_{1}\in S,\;v_{1}\le f\left( {\overline{u}}\right) ,\;v_{2}=f\left( u_{2}\right) \) and
Using the Lipschitz property of f, from (9), there exist \( s_{\varepsilon }^{*}\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( u_{2}\right) ,\;\alpha _{\varepsilon }^{*}\in {\widehat{N}}(u_{1};S)\) and \(e_{\varepsilon }^{*}\in {\mathbb {B}}_{{\mathbb {R}} ^{n}}\) such that
and
Since \(\left\| \left( x^{*},y^{*}\right) \right\| =1,\;\left\| \left( a_{1}^{*},b_{1}^{*}\right) \right\| \le 1\) and
one gets
Letting \(\varepsilon \rightarrow 0,\;u_{2}\rightarrow {\overline{u}}\) and \( f\left( u_{2}\right) \rightarrow f\left( {\overline{u}}\right) \), there exist \( s^{*}\in -N\left( {\overline{u}};S\right) \) and \(0<\beta ^{*}\le 1\) such that
Thus,
Then,
Consequently,
The nonsmooth limiting constraint qualification implies that
\(\square \)
Example 11
Consider the following optimization problem:
We remark that \({\overline{u}}=\left( 0,0\right) \in S\ \)is an optimal solution of \(\left( {\textit{SIP}}^{*}\right) \ \)with
The nonsmooth limiting constraint qualification holds at \({\overline{u}}\). It is easy to show that
On the other hand, \(\partial f\left( {\overline{u}}\right) =\left\{ -3\right\} \times \left[ -2,\ 2\right] \), hence we get
References
Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, Hoboken (2013)
Cánovas, M.J., López, M.A., Mordukhovich, B.S., Parra, J.: Variational analysis in semi-infinite and finite programming, I: stability of linear inequality systems of feasible solutions. SIAM J. Optim. 20, 1504–1526 (2009)
Goberna, M.A., López, M.A.: Linear semi-infinite programming theory: an updated survey. Eur. J. Oper. Res. 143, 390–405 (2002)
Hettich, R., Kortanek, K.O.: Semi-infinite programming: theory, methods and applications. SIAM Rev. 35, 380–429 (1993)
López, M., Still, G.: Semi-infinite programming. Eur. J. Oper. Res. 180, 491–518 (2007)
Mishra, S.K., Jaiswal, M., Le Thi, H.A.: Nonsmooth semi-infinite programming problem using limiting subdifferentials. J. Glob. Optim. 53, 285–296 (2012)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory, Grundlehren Series (Fundamental Principles of Mathematical Sciences), vol. 330. Springer, Berlin (2006)
Mordukhovich, B.S., Shao, Y.: Nonsmooth sequential analysis in Asplund spaces. Trans. Am. Math. Soc. 348, 1235–1280 (1996)
Shapiro, A.: On duality theory of convex semi-infinite programming. Optimization 54, 535–543 (2005)
Soleimani-damaneh, M., Jahanshahloo, G.R.: Nonsmooth multiobjective optimization using limiting subdifferentials. J. Math. Anal. Appl. 328, 281–286 (2007)
Acknowledgements
My sincere acknowledgements to the anonymous referees for their insightful remarks and suggestions. This work has been supported by the Alexander-von-Humboldt foundation.
Gadhi, N.A. Necessary optimality conditions for a nonsmooth semi-infinite programming problem. J Glob Optim 74, 161–168 (2019). https://doi.org/10.1007/s10898-019-00742-9
Keywords
- Nonsmooth semi-infinite optimization
- Extremal principle
- Fréchet subdifferential
- Limiting subdifferential
- Fréchet normal cone
- Limiting normal cone
- Optimality conditions
- Constraint qualification