1 Introduction

Semi-infinite programming problems have been investigated by many authors [2,3,4,5,6,9]. In the paper [6], the authors investigated the following nonsmooth semi-infinite programming problem

$$\begin{aligned} \left( {\textit{SIP}}\right) :\;\;\;\;\left\{ \begin{array}{ll} \text {Minimize} &{} f(x) \\ \text {s.t.} &{} g_{i}(x)\le 0\;\;\ \ \forall i\in I, \end{array} \right. \end{aligned}$$

where I is a (possibly infinite) index set, and f and \(g_{i},\ i\in I\), are locally Lipschitz functions from \({\mathbb {R}}^{n}\) to \({\mathbb {R}}\cup \left\{ +\infty \right\} \).

Using limiting subdifferentials [7], they established necessary and sufficient optimality conditions for \(\left( {\textit{SIP}}\right) \) and gave some duality results. The main theorem, Theorem 3.1 [6], in which the authors give necessary optimality conditions, is based on Theorem 3.2 [10] (see Theorem 1 below), obtained by Soleimani-damaneh and Jahanshahloo [10] in the case where I is a finite index set. For Theorem 1 below, let \(I=\left\{ 1,\ \ldots ,\ p\right\} ,\ p\in {\mathbb {N}}^{*}\), and let \(\left( P\right) \) be the nonsmooth optimization problem of minimizing the function f over the set

$$\begin{aligned} {\mathbb {S}}=\left\{ x\in {\mathbb {R}}^{n}:g_{i}(x)\le 0,\;\;\ \ \forall i\in I\right\} . \end{aligned}$$

Theorem 1

[10] Let \({\overline{x}}\in {\mathbb {S}}\) be an optimal solution of \(\left( P\right) \). Suppose that \(g_{i}\), for \(i\in I\left( {\overline{x}}\right) \), is Lipschitz near \({\overline{x}}\) and that \(g_{i}\), for \(i\notin I\left( {\overline{x}}\right) \), is continuous at \({\overline{x}}\). Also suppose that there exists \(d\in {\mathbb {R}}^{n}\) such that \(\eta ^{t}d<0\) for all \(\eta \in \underset{i\in I\left( {\overline{x}}\right) }{\cup }\partial _{L}g_{i}\left( {\overline{x}}\right) \). Then

$$\begin{aligned} d\in D^{{\overline{x}}}=\left\{ d\in {\mathbb {R}}^{n}:d\ne 0\text { and }\exists \delta >0\text { such that }{\overline{x}}+\lambda d\in {\mathbb {S}},\ \forall \lambda \in ] 0,\ \delta [ \right\} , \end{aligned}$$
(1)

where

$$\begin{aligned} I\left( {\overline{x}}\right) =\left\{ i\in I:g_{i}({\overline{x}})=0\right\} . \end{aligned}$$

Looking closely at the proof of Theorem 3.2 [10], we note that it is neither valid nor usable in the infinite case: one cannot guarantee that \(\delta \) is nonzero, and consequently one cannot deduce (1). As Theorem 1 is an integral part of the proof of Theorem 3.1 [6], the result obtained by the authors, as well as its proof, is false (setting \(f\left( x\right) =x\) and \(g_{n}\left( x\right) =\exp \left( -nx\right) -1,\ n\in {\mathbb {N}}\), yields a simple counterexample). Furthermore, since the authors did not assume any constraint qualification, Theorem 3.1 [6] remains false even in the finite case (Example 4.2.10 [1] yields a simple counterexample). To overcome these problems, under a nonsmooth limiting constraint qualification and using techniques from variational analysis, we propose another proof establishing necessary optimality conditions for \(\left( {\textit{SIP}}\right) \) in terms of Karush–Kuhn–Tucker multipliers. The results obtained are formulated using limiting subdifferentials [7] and Fréchet subdifferentials [7]. Theorems 8 and 10 are two corrected versions of Theorem 3.1 [6].
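The data of the first counterexample can be checked numerically. Below is a minimal Python sketch, under the assumption that sampling the constraint indices \(n=1,\dots ,50\) suffices for the check; analytically, \(g_{n}(x)=\exp (-nx)-1\le 0\) for all \(n\ge 1\) if and only if \(x\ge 0\), so the feasible set is \([0,+\infty )\) and \({\overline{x}}=0\) minimizes \(f(x)=x\) over it, with every constraint active there.

```python
import math

def g(n, x):
    # Constraint g_n(x) = exp(-n*x) - 1 from the counterexample.
    return math.exp(-n * x) - 1.0

def feasible(x, n_max=50):
    # Sample the infinite system at n = 1, ..., n_max (a numerical
    # truncation; analytically g_n(x) <= 0 for all n iff x >= 0).
    return all(g(n, x) <= 1e-12 for n in range(1, n_max + 1))

# The feasible set is [0, +infinity): any x < 0 already violates g_1.
assert not feasible(-0.01)
assert feasible(0.0) and feasible(3.0)

# At the optimum x_bar = 0 of f(x) = x, every constraint is active.
assert all(g(n, 0.0) == 0.0 for n in range(1, 51))
```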

Throughout the sequel, unless otherwise stated, \({\mathbb {B}}_{{\mathbb {R}}^{n}}\) denotes the closed unit ball of \({\mathbb {R}}^{n}\) and \(\left\| \left( x,y\right) \right\| :=\left\| x\right\| +\left\| y\right\| \) is the \(l_{1}\)-norm of \(\left( x,y\right) \). For a multifunction \(F:{\mathbb {R}}^{n}\rightrightarrows {\mathbb {R}}^{n}\), the expressions

$$\begin{aligned} \underset{x\rightarrow {\overline{x}}}{\limsup }F\left( x\right) :=\left\{ x^{*}\in {\mathbb {R}}^{n}\;|\;\exists x_{k}\rightarrow {\overline{x}} ,\;\exists x_{k}^{*}\rightarrow x^{*}:x_{k}^{*}\in F\left( x_{k}\right) \;\forall k\in {\mathbb {N}}\right\} \end{aligned}$$

and

$$\begin{aligned} \underset{x\rightarrow {\overline{x}}}{\liminf }F\left( x\right) :=\left\{ x^{*}\in {\mathbb {R}}^{n}\;|\;\forall x_{k}\rightarrow {\overline{x}} ,\;\exists x_{k}^{*}\rightarrow x^{*}:x_{k}^{*}\in F\left( x_{k}\right) \;\forall k\in {\mathbb {N}}\right\} \end{aligned}$$

signify, respectively, the sequential Painlevé–Kuratowski upper/outer and lower/inner limits in the norm topology of \({\mathbb {R}}^{n}\).
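For instance (an illustrative example of ours, not taken from the cited references), consider \(F:{\mathbb {R}}\rightrightarrows {\mathbb {R}}\) given by \(F(x)=\left\{ x/\left| x\right| \right\} \) for \(x\ne 0\) and \(F(0)=\left\{ 0\right\} \). Then

$$\begin{aligned} \underset{x\rightarrow 0}{\limsup }F\left( x\right) =\left\{ -1,0,1\right\} ,\;\;\;\;\underset{x\rightarrow 0}{\liminf }F\left( x\right) =\emptyset , \end{aligned}$$

since the constant sequence \(x_{k}=0\) and the sequences \(x_{k}=\pm 1/k\) generate the three outer limit points, while along the alternating sequence \(x_{k}=(-1)^{k}/k\) the forced choice \(x_{k}^{*}=(-1)^{k}\) converges to no \(x^{*}\).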

The rest of the paper is organized as follows: Sect. 2 contains basic definitions and preliminary material from nonsmooth variational analysis; Sect. 3 presents the main results (optimality conditions).

2 Preliminaries

For a subset \(D\subseteq {\mathbb {R}}^{n},\ cl\ D,\ co\ D\) and \(cone\ D\) stand for the closure, the convex hull and the convex cone generated by D, respectively. For a function \(f:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}\cup \left\{ +\infty \right\} \), the graph of f is the set of points in \({\mathbb {R}}^{n+1}\) defined by

$$\begin{aligned} grf=\left\{ \left( x,y\right) \in {\mathbb {R}}^{n}\times {\mathbb {R}}:y=f\left( x\right) \right\} . \end{aligned}$$

The following definitions are crucial for our investigation.

Definition 2

[7] Let \(\Omega _{1}\) and \(\Omega _{2}\) be nonempty closed subsets of \({\mathbb {R}}^{n}\). We say that \(\left\{ \Omega _{1},\Omega _{2}\right\} \) is an extremal system in \({\mathbb {R}}^{n}\) if these sets have at least one (locally) extremal point \({\bar{x}}\in \Omega _{1}\cap \Omega _{2}\); that is, there exists a neighborhood U of \({\bar{x}}\) such that for every \(\varepsilon >0\) there is a vector \(a\in \varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\) with

$$\begin{aligned} (\Omega _{1}+a)\cap \Omega _{2}\cap U=\emptyset . \end{aligned}$$

In this case, \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}}\right\} \) is said to be an extremal system in \({\mathbb {R}}^{n}\).

Definition 3

[7] Let \(\Omega \subset {\mathbb {R}}^{n}\) be locally closed around \({\bar{x}}\in \Omega \). Then the Fréchet normal cone \({\widehat{N}}({\bar{x}};\Omega )\) and the Mordukhovich normal cone (limiting normal cone) \(N({\bar{x}};\Omega )\) to \(\Omega \) at \({\bar{x}}\) are defined by

$$\begin{aligned}&{\widehat{N}}({\bar{x}};\Omega ):=\left\{ x^{*}\in {\mathbb {R}}^{n}:\;\limsup _{x\overset{\Omega }{\rightarrow }{\bar{x}}}\frac{\langle x^{*},x-{\bar{x}}\rangle }{\Vert x-{\bar{x}}\Vert }\le 0\right\} , \end{aligned}$$
(2)
$$\begin{aligned}&N({\bar{x}};\Omega ):=\underset{x\overset{\Omega }{\rightarrow }{\bar{x}}}{ \limsup }{\widehat{N}}(x;\Omega ), \end{aligned}$$
(3)

where \(x\overset{\Omega }{\rightarrow }{\bar{x}}\) stands for \(x\rightarrow {\bar{x}}\) with \(x\in \Omega \).

Definition 4

[7] Let \(\varphi :{\mathbb {R}}^{n}\rightarrow \mathbb {R\cup }\left\{ +\infty \right\} \) be lower semicontinuous around \({\bar{x}}\).

  1.

    The Fréchet subdifferential of \(\varphi \) at \({\bar{x}}\) is

    $$\begin{aligned} {\widehat{\partial }}\varphi ({\bar{x}}):=\left\{ x^{*}\in {\mathbb {R}} ^{n}:\;\liminf _{x\rightarrow {\bar{x}}}\frac{\varphi (x)-\varphi ({\bar{x}} )-\langle x^{*},x-{\bar{x}}\rangle }{\Vert x-{\bar{x}}\Vert }\ge 0\right\} . \end{aligned}$$
  2.

    The Mordukhovich (limiting) subdifferential of \(\varphi \) at \(\bar{ x}\) is defined by

    $$\begin{aligned} \partial \varphi ({\bar{x}}):=\limsup _{x\overset{\varphi }{\rightarrow }{\bar{x}} }{\widehat{\partial }}\varphi (x), \end{aligned}$$
    (4)

where \(x\overset{\varphi }{\rightarrow }{\bar{x}}\) means that \(x\rightarrow {\bar{x}}\) with \(\varphi (x)\rightarrow \varphi ({\bar{x}})\).
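A standard one-dimensional illustration of the difference between the two subdifferentials (our example) is \(\varphi (x)=-\left| x\right| \) at \({\bar{x}}=0\): \(\varphi \) is differentiable at every \(x\ne 0\) with \({\widehat{\partial }}\varphi (x)=\left\{ -{\text {sign}}(x)\right\} \), while no \(x^{*}\) satisfies the inequality in the Fréchet definition at 0, so that (4) gives

$$\begin{aligned} {\widehat{\partial }}\varphi (0)=\emptyset ,\;\;\;\;\partial \varphi (0)=\left\{ -1,1\right\} . \end{aligned}$$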

One clearly has

$$\begin{aligned} {\widehat{N}}({\bar{x}};\Omega )={\widehat{\partial }}\delta ({\bar{x}};\Omega ),\quad N({\bar{x}};\Omega )=\partial \delta ({\bar{x}};\Omega ), \end{aligned}$$

where \(\delta (\cdot ;\Omega )\) is the indicator function of \(\Omega \).

Remark 5

[8]

  1.

    For any closed set \(\Omega \subset {\mathbb {R}}^{n}\) and \( {\overline{x}}\in \Omega \) one has

    $$\begin{aligned} N_{c}({\bar{x}};\Omega )=cl\;coN({\bar{x}};\Omega ) \end{aligned}$$
    (5)

    and for any function \(\varphi :{\mathbb {R}}^{n}\rightarrow \overline{{\mathbb {R}}}\) that is Lipschitz continuous around \({\bar{x}}\), one has

    $$\begin{aligned} \partial _{c}\varphi ({\bar{x}})=cl\;co\partial \varphi ({\bar{x}}) \end{aligned}$$
    (6)

    where \(N_{c}({\bar{x}};\Omega )\) and \(\partial _{c}\varphi ({\bar{x}})\) denote, respectively, the Clarke normal cone and the Clarke subdifferential.

  2.

    The Fréchet normal cone \({\widehat{N}}({\bar{x}};\Omega )\) is always convex, while the Mordukhovich normal cone \(N({\bar{x}};\Omega )\) is nonconvex in general.
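To illustrate this nonconvexity (our example), take \(\Omega =\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y\ge -\left| x\right| \right\} \) and \({\bar{x}}=\left( 0,0\right) \). Then

$$\begin{aligned} {\widehat{N}}({\bar{x}};\Omega )=\left\{ \left( 0,0\right) \right\} ,\;\;\;\;N({\bar{x}};\Omega )=cone\left\{ \left( 1,-1\right) \right\} \cup cone\left\{ \left( -1,-1\right) \right\} , \end{aligned}$$

so \(N({\bar{x}};\Omega )\) is nonconvex, while (5) gives the Clarke normal cone \(N_{c}({\bar{x}};\Omega )=\left\{ \left( d_{1},d_{2}\right) :d_{2}\le -\left| d_{1}\right| \right\} \).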

Definition 6

[7] Let \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}}\right\} \) be an extremal system in \({\mathbb {R}}^{n}\). \(\left\{ \Omega _{1},\Omega _{2},{\bar{x}} \right\} \) satisfies the approximate extremal principle if for every \( \varepsilon >0\) there are \(x_{1}\in \Omega _{1}\cap \left( {\overline{x}} +\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right) \), \(x_{2}\in \Omega _{2}\cap \left( {\overline{x}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}\right) \) and \( x^{*}\in {\mathbb {R}}^{n}\) such that \(\left\| x^{*}\right\| =1\) and

$$\begin{aligned} x^{*}\in \left( {\widehat{N}}(x_{1};\Omega _{1})+\varepsilon {\mathbb {B}}_{ {\mathbb {R}}^{n}}^{*}\right) \cap \left( -{\widehat{N}}(x_{2};\Omega _{2})+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}^{*}\right) . \end{aligned}$$

Remark 7

A common point \({\overline{x}}\) of two sets is locally extremal if these sets can be locally pushed apart by a small (linear) translation in such a way that the resulting sets have empty intersection in some neighborhood of \({\overline{x}}\).
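For instance (our illustration), the half-planes \(\Omega _{1}=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y\le 0\right\} \) and \(\Omega _{2}=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y\ge 0\right\} \) form an extremal system at every point \({\overline{x}}\) of the common boundary \({\mathbb {R}}\times \left\{ 0\right\} \): for any \(\varepsilon >0\), the translation \(a=\left( 0,-\varepsilon /2\right) \in \varepsilon {\mathbb {B}}_{{\mathbb {R}}^{2}}\) gives

$$\begin{aligned} (\Omega _{1}+a)\cap \Omega _{2}=\emptyset . \end{aligned}$$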

3 Necessary optimality conditions

Let S be the feasible set of \(\left( {\textit{SIP}}\right) \) defined by

$$\begin{aligned} S=\left\{ x\in {\mathbb {R}}^{n}:g_{i}(x)\le 0,\ \forall i\in I\right\} . \end{aligned}$$

Theorem 8

Assume that f is locally Lipschitz with constant k around \({\overline{u}}\in S\), a local optimal solution of \(\left( {\textit{SIP}}\right) \). Then, for any \(\varepsilon >0\), there exist \(u_{1},u_{2}\in {\overline{u}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\varepsilon ,\ f\left( {\overline{u}}\right) +\varepsilon \right) \) and \(\beta _{\varepsilon }^{*}\in {\mathbb {R}}_{+}\backslash \left\{ 0\right\} \) such that \( u_{1}\in S,\;v_{1}\le f\left( {\overline{u}}\right) ,\;v_{2}=f\left( u_{2}\right) \) and

$$\begin{aligned} 0\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( u_{2}\right) +{\widehat{N}}(u_{1};S)+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}. \end{aligned}$$

Proof

Since \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \), there exists a neighborhood V of \({\overline{u}}\) such that for all \(u\in V\cap S\)

$$\begin{aligned} f\left( u\right) \ge f\left( {\overline{u}}\right) . \end{aligned}$$
  • Take

    $$\begin{aligned} \Omega _{1}:=S\times \left( -\infty ,f\left( {\overline{u}}\right) \right] \text { and }\Omega _{2}:=grf. \end{aligned}$$

    Then, it is easy to show that \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \) is an extremal point of the system \(\left( \Omega _{1},\Omega _{2}\right) \). Indeed, suppose this is not the case, i.e., for any neighborhood U of \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \) there is \(\varepsilon >0\) such that for all \(a\in \varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}}\) one has

    $$\begin{aligned} (\Omega _{1}+a)\cap \Omega _{2}\cap U\ne \emptyset . \end{aligned}$$

    Let \(a=\left( 0,-\dfrac{\varepsilon }{2}\right) \ \)and \(\left( u,v\right) \in (\Omega _{1}+a)\cap \Omega _{2}\cap U\). Thus,

    $$\begin{aligned} u\in S\text { and }f\left( u\right) \in ] -\infty ,f\left( {\overline{u}} \right) -\dfrac{\varepsilon }{2} ]. \end{aligned}$$

    Hence \(f\left( u\right) <f\left( {\overline{u}}\right) \), which contradicts the fact that \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \).

  • Since f is locally Lipschitz, one has

    $$\begin{aligned} \left| f\left( u\right) -f\left( {\widehat{u}}\right) \right| \le k\Vert u-{\widehat{u}}\Vert \;\;\;\;\;\;\;\;\;\;\;\;\text {for }u,{\widehat{u}} \text { sufficiently close to }{\overline{u}}. \end{aligned}$$

    Let \(0<\varepsilon <1/4\). Since \(\left\{ \Omega _{1},\Omega _{2},\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \right\} \) is an extremal system, due to [7, Theorem 2.10], the approximate extremal principle holds at \(\left( {\overline{u}},f\left( {\overline{u}}\right) \right) \). Choosing \(\theta = \dfrac{\varepsilon }{4\left( k+1\right) }\), there exist \(u_{1},u_{2}\in {\overline{u}}+\theta {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\theta ,f\left( {\overline{u}}\right) +\theta \right) \) and \(\left( x^{*},y^{*}\right) \in {\mathbb {R}}^{n}\times {\mathbb {R}}\) such that \(u_{1}\in S,\;v_{1}\in \left( -\infty ,f\left( {\overline{u}}\right) \right] ,\;v_{2}=f\left( u_{2}\right) ,\ \left\| \left( x^{*},y^{*}\right) \right\| =1\) and

    $$\begin{aligned} \left( x^{*},y^{*}\right) \in \left[ {\widehat{N}}\left( \left( u_{1},v_{1}\right) ;\Omega _{1}\right) +\theta {\mathbb {B}}_{{\mathbb {R}} ^{n}\times {\mathbb {R}}}\right] \cap \left[ -{\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) +\theta {\mathbb {B}}_{{\mathbb {R}} ^{n}\times {\mathbb {R}}}\right] . \end{aligned}$$
    (7)

    Hence we can find \(\left( u^{*},v^{*}\right) \in {\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) \) and \(\left( a_{i}^{*},b_{i}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}},\ i=1,\ 2\), and \(\left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \in {\widehat{N}}\left( \left( u_{1},v_{1}\right) ;\Omega _{1}\right) \) such that

    $$\begin{aligned} \left( u^{*},v^{*}\right) =-\left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{2}^{*},b_{2}^{*}\right) -\theta \left( a_{1}^{*},b_{1}^{*}\right) , \end{aligned}$$

    and

    $$\begin{aligned} \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{1}^{*},b_{1}^{*}\right) =\left( x^{*},y^{*}\right) . \end{aligned}$$
    (8)
  • Since \(\left( u^{*},v^{*}\right) \in {\widehat{N}}\left( \left( u_{2},v_{2}\right) ;\Omega _{2}\right) \), one has for all \((u,v)\in grf\) sufficiently close to \(\left( u_{2},v_{2}\right) :\)

    $$\begin{aligned} \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle -2\theta \left\| \left( u-u_{2},v-v_{2}\right) \right\| \le 0. \end{aligned}$$

    Indeed, the definition of Fréchet normals (2) implies that

    $$\begin{aligned} \underset{\delta >0}{\inf }\ \underset{\left( u,v\right) \in {\mathbb {B}}_{\delta }\left( u_{2},v_{2}\right) \cap \Omega _{2}}{\sup }\frac{ \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle }{\left\| \left( u-u_{2},v-v_{2}\right) \right\| }\le 0, \end{aligned}$$

    thus there exists \(\delta >0\) such that for all \(\left( u,v\right) \in {\mathbb {B}}_{\delta }\left( u_{2},v_{2}\right) \cap \Omega _{2}\),

    $$\begin{aligned} \frac{\left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle }{\left\| \left( u-u_{2},v-v_{2}\right) \right\| }\le 2\theta , \end{aligned}$$

    or

    $$\begin{aligned} \left\langle u^{*},u-u_{2}\right\rangle +\left\langle v^{*},v-v_{2}\right\rangle -2\theta \left\| \left( u-u_{2},v-v_{2}\right) \right\| \le 0. \end{aligned}$$

    Consequently,

    $$\begin{aligned} 0\ge & {} \langle -\alpha _{\varepsilon }^{*},u-u_{2}\rangle +\langle -\beta _{\varepsilon }^{*},f\left( u\right) -f\left( u_{2}\right) \rangle \\&+\,\theta \left\langle a_{2}^{*}-a_{1}^{*},u-u_{2}\right\rangle +\theta \left( b_{2}^{*}-b_{1}^{*}\right) \left( f\left( u\right) -f\left( u_{2}\right) \right) \\&-\,2\theta \Vert (u-u_{2},f\left( u\right) -f\left( u_{2}\right) )\Vert \end{aligned}$$

    for \(\left( u,v\right) \in grf\) sufficiently close to \(\left( u_{2},v_{2}\right) \).

  • The locally Lipschitz property of f together with the fact that \( \left( a_{i}^{*},b_{i}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n}\times {\mathbb {R}}}\) for \(i=1,\ 2\), gives us for each u sufficiently close to \(u_{2}\),

    $$\begin{aligned} \langle \alpha _{\varepsilon }^{*},u-u_{2}\rangle +\beta _{\varepsilon }^{*}\left( f\left( u\right) -f\left( u_{2}\right) \right)\ge & {} -4\theta \left[ \left\| u-u_{2}\right\| +\left| f\left( u\right) -f\left( u_{2}\right) \right| \right] \\\ge & {} -4\theta \left( k+1\right) \left\| u-u_{2}\right\| \\\ge & {} -\varepsilon \left\| u-u_{2}\right\| . \end{aligned}$$

    since \(\theta =\dfrac{\varepsilon }{4\left( k+1\right) }\). Then \(u_{2}\) is a local minimizer of the function

    $$\begin{aligned} \Psi \left( u\right) :=\langle \alpha _{\varepsilon }^{*},u-u_{2}\rangle +\beta _{\varepsilon }^{*}\left( f\left( u\right) -f\left( u_{2}\right) \right) +\varepsilon \Vert u-u_{2}\Vert . \end{aligned}$$

    Using [7, Proposition 1.107] together with the fuzzy sum rule [7, Theorem 2.33], we can find a point \({\widetilde{u}}_{2}\in u_{2}+\frac{ \varepsilon }{2}{\mathbb {B}}_{{\mathbb {R}}^{n}}\) such that

    $$\begin{aligned} 0\in & {} \alpha _{\varepsilon }^{*}+{\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f+\varepsilon \left\| .-u_{2}\right\| \right) \left( u_{2}\right) \end{aligned}$$
    (9)
    $$\begin{aligned}\subseteq & {} \alpha _{\varepsilon }^{*}+{\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( {\widetilde{u}}_{2}\right) +\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}. \end{aligned}$$
    (10)
  • \(\beta _{\varepsilon }^{*}\ne 0\). Indeed, from (10), there exist \(s^{*}\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( {\widetilde{u}}_{2}\right) \) and \(e^{*}\in {\mathbb {B}}_{ {\mathbb {R}}^{n}}\) such that

    $$\begin{aligned} 0=\alpha _{\varepsilon }^{*}+s^{*}+\varepsilon e^{*}. \end{aligned}$$

    Then,

    $$\begin{aligned} \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| =\left\| \alpha _{\varepsilon }^{*}\right\| +\left\| \beta _{\varepsilon }^{*}\right\| =\left\| s^{*}+\varepsilon e^{*}\right\| +\left\| \beta _{\varepsilon }^{*}\right\| \le \left\| s^{*}\right\| +\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$

    Moreover,

    $$\begin{aligned} \left\| \left( x^{*},y^{*}\right) \right\| =\left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) +\theta \left( a_{1}^{*},b_{1}^{*}\right) \right\| \le \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| +\theta \left\| \left( a_{1}^{*},b_{1}^{*}\right) \right\| . \end{aligned}$$

    Since \(\left( a_{1}^{*},b_{1}^{*}\right) \in {\mathbb {B}}_{{\mathbb {R}}^{n+1}},\;\left\| \left( x^{*},y^{*}\right) \right\| =1\) and \( \theta =\dfrac{\varepsilon }{4\left( k+1\right) }<\varepsilon <1/4\), one gets

    $$\begin{aligned} \dfrac{3}{4}\le 1-\theta \le \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| . \end{aligned}$$

    Thus,

    $$\begin{aligned} \dfrac{3}{4}\le \left\| s^{*}\right\| +\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$

    Using the Lipschitz property of f, one deduces that

    $$\begin{aligned} \dfrac{3}{4}\le \beta _{\varepsilon }^{*}k+\varepsilon +\beta _{\varepsilon }^{*}. \end{aligned}$$

    Consequently,

    $$\begin{aligned} 0<\frac{\dfrac{3}{4}-\varepsilon }{k+1}\le \beta _{\varepsilon }^{*}. \end{aligned}$$

    Hence, \(\beta _{\varepsilon }^{*}>0\), and the conclusion of the theorem follows from (10). \(\square \)

The following constraint qualification will be used to get necessary optimality conditions in terms of limiting subdifferentials and Karush–Kuhn–Tucker multipliers.

Definition 9

We say that the nonsmooth limiting constraint qualification holds at \({\overline{u}}\in S\) if

$$\begin{aligned} N\left( {\overline{u}};S\right) \subseteq cl\left( {\sum _{i\in I\left( {\overline{u}}\right) }}\ cone\ \partial g_{i}\left( {\overline{u}}\right) \right) , \end{aligned}$$

where

$$\begin{aligned} I\left( {\overline{u}}\right) =\{i\in I:\ g_{i}\left( {\overline{u}}\right) =0\}. \end{aligned}$$

Theorem 10 gives exact optimality conditions for our nonsmooth semi-infinite programming problem.

Theorem 10

Assume that f is locally Lipschitz at \({\overline{u}}\in S\) with Lipschitz constant k and that \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \). Suppose that the nonsmooth limiting constraint qualification holds at \({\overline{u}}\). Then,

$$\begin{aligned} 0\in \partial f\left( {\overline{u}}\right) +cl\left( {\sum _{i\in I({\overline{u}})}}\ cone\ \partial g_{i}({\overline{u}})\right) . \end{aligned}$$

Proof

Fix an arbitrary \(\varepsilon >0\). Since \({\overline{u}}\) is a local optimal solution of \(\left( {\textit{SIP}}\right) \), Theorem 8 provides \(u_{1},u_{2}\in {\overline{u}}+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}},\;v_{1},v_{2}\in \left( f\left( {\overline{u}}\right) -\varepsilon ,\ f\left( {\overline{u}}\right) +\varepsilon \right) \) and \(\beta _{\varepsilon }^{*}\in {\mathbb {R}}_{+}\backslash \left\{ 0\right\} \) such that \(u_{1}\in S,\;v_{1}\le f\left( {\overline{u}}\right) ,\;v_{2}=f\left( u_{2}\right) \) and

$$\begin{aligned} 0\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( u_{2}\right) +{\widehat{N}}(u_{1};S)+\varepsilon {\mathbb {B}}_{{\mathbb {R}}^{n}}. \end{aligned}$$

Using the Lipschitz property of f and arguing as in the proof of Theorem 8 (whose notation we keep), there exist \( s_{\varepsilon }^{*}\in {\widehat{\partial }}\left( \beta _{\varepsilon }^{*}f\right) \left( u_{2}\right) ,\;\alpha _{\varepsilon }^{*}\in {\widehat{N}}(u_{1};S)\) and \(e_{\varepsilon }^{*}\in {\mathbb {B}}_{{\mathbb {R}}^{n}}\) such that

$$\begin{aligned}&0=\alpha _{\varepsilon }^{*}+s_{\varepsilon }^{*}+\varepsilon e_{\varepsilon }^{*}, \\&\left\| s_{\varepsilon }^{*}\right\| \le \beta _{\varepsilon }^{*}k, \end{aligned}$$

and

$$\begin{aligned} \left( s_{\varepsilon }^{*},-\beta _{\varepsilon }^{*}\right) \in {\widehat{N}}\left( \left( u_{2},f\left( u_{2}\right) \right) ;grf\right) . \end{aligned}$$

Since \(\left\| \left( x^{*},y^{*}\right) \right\| =1,\;\left\| \left( a_{1}^{*},b_{1}^{*}\right) \right\| \le 1\) and

$$\begin{aligned} \left\| \alpha _{\varepsilon }^{*}\right\| +\left\| \beta _{\varepsilon }^{*}\right\|= & {} \left\| \left( \alpha _{\varepsilon }^{*},\beta _{\varepsilon }^{*}\right) \right\| =\left\| \left( x^{*},y^{*}\right) -\dfrac{\varepsilon }{4\left( k+1\right) } \left( a_{1}^{*},b_{1}^{*}\right) \right\| \\\le & {} \left\| \left( x^{*},y^{*}\right) \right\| +\dfrac{\varepsilon }{4\left( k+1\right) }\left\| \left( a_{1}^{*},b_{1}^{*}\right) \right\| \end{aligned}$$

one gets

$$\begin{aligned} 0<\beta _{\varepsilon }^{*}\le 1+\dfrac{\varepsilon }{4\left( k+1\right) }. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0,\;u_{2}\rightarrow {\overline{u}}\) and \( f\left( u_{2}\right) \rightarrow f\left( {\overline{u}}\right) \), there exist \( s^{*}\in -N\left( {\overline{u}};S\right) \) and \(0<\beta ^{*}\le 1\) such that

$$\begin{aligned} \left( s^{*},-\beta ^{*}\right) \in N\left( \left( {\overline{u}},f\left( {\overline{u}}\right) \right) ;grf\right) . \end{aligned}$$

Thus,

$$\begin{aligned} \left\{ \begin{array}{l} s^{*}\in \partial \left( \beta ^{*}f\right) \left( {\overline{u}}\right) , \\ -s^{*}\in N\left( {\overline{u}};S\right) . \end{array} \right. \end{aligned}$$

Then,

$$\begin{aligned} 0\in \partial \left( \beta ^{*}f\right) \left( {\overline{u}}\right) +N\left( {\overline{u}};S\right) =\beta ^{*}\partial f\left( {\overline{u}}\right) +N\left( {\overline{u}};S\right) . \end{aligned}$$

Consequently,

$$\begin{aligned} 0\in \partial f\left( {\overline{u}}\right) +N\left( {\overline{u}};S\right) . \end{aligned}$$

The nonsmooth limiting constraint qualification implies that

$$\begin{aligned} 0\in \partial f\left( {\overline{u}}\right) +cl\left( {\sum _{i\in I\left( {\overline{u}}\right) }}\ cone\ \partial g_{i}\left( {\overline{u}}\right) \right) . \end{aligned}$$

\(\square \)

Example 11

Consider the following optimization problem:

$$\begin{aligned} \left( {\textit{SIP}}^{*}\right) :\;\;\;\;\left\{ \begin{array}{ll} \text {Minimize} &{} f\left( x,y\right) =-3x+2\left| y\right| \\ \text {s.t.} &{} g_{i}\left( x,y\right) =x+e^{-i}y\le 0,\;\;\ \ \forall i\in {\mathbb {N}}\cup \left\{ 0\right\} . \end{array} \right. \end{aligned}$$

We remark that \({\overline{u}}=\left( 0,0\right) \in S\) is an optimal solution of \(\left( {\textit{SIP}}^{*}\right) \) with

$$\begin{aligned} I\left( {\overline{u}}\right) ={\mathbb {N}}\cup \left\{ 0\right\} \text { and } S=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:x\le 0\text { and }x+y\le 0\right\} . \end{aligned}$$

The nonsmooth limiting constraint qualification holds at \({\overline{u}}\). It is easy to show that

$$\begin{aligned} N\left( {\overline{u}};S\right) =\left\{ \left( d_{1},d_{2}\right) \in {\mathbb {R}}^{2}:0\le d_{2}\le d_{1}\right\} =cl \left( {\sum _{i\in I({\overline{u}})}}\ cone\ \partial g_{i}({\overline{u}})\right) . \end{aligned}$$

On the other hand, \(\partial f\left( {\overline{u}}\right) =\left\{ -3\right\} \times \left[ -2,\ 2\right] \), hence we get

$$\begin{aligned} \left( -3,-1\right) \in \partial f\left( {\overline{u}}\right) \cap \left( -cl\left( {\sum _{i\in I({\overline{u}})}}\ cone\ \partial g_{i}({\overline{u}} )\right) \right) , \end{aligned}$$

so that the necessary optimality condition of Theorem 10 is satisfied at \({\overline{u}}\).
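The claims of this example can also be checked numerically. A minimal Python sketch (the grid step and its bounds are our assumptions for the check) verifies that \({\overline{u}}=(0,0)\) minimizes f over S: for \(x\le 0\) one has \(-3x\ge 0\) and \(2\left| y\right| \ge 0\), hence \(f\ge 0=f(0,0)\) on S.

```python
def f(x, y):
    # Objective of (SIP*): f(x, y) = -3x + 2|y|.
    return -3.0 * x + 2.0 * abs(y)

def feasible(x, y):
    # S = {(x, y) : x <= 0 and x + y <= 0}, the feasible set of the
    # infinite system x + e^{-i} y <= 0, i in N u {0}.
    return x <= 0.0 and x + y <= 0.0

# Check f >= f(0, 0) = 0 on a grid of feasible points around the origin.
grid = [(-a / 10.0, b / 10.0) for a in range(0, 31) for b in range(-30, 31)]
vals = [f(x, y) for (x, y) in grid if feasible(x, y)]
assert min(vals) == f(0.0, 0.0) == 0.0
```

The subdifferential membership is immediate by hand: \((-3,-1)\in \{-3\}\times [-2,2]\) and \((3,1)\) lies in the cone \(\{(d_{1},d_{2}):0\le d_{2}\le d_{1}\}\).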