1 Introduction

In this paper, we consider the following semi-infinite programming problem:

$$\begin{aligned} \begin{array}{lll } \text{(SIP) }&{}\quad \text{ Minimize } &{}\quad f(x)\\ &{}\quad \text{ s.t. } &{}\quad f_t (x)\leqslant 0,\quad t\in T,\\ &{}&{}\quad x\in A, \end{array} \end{aligned}$$

where T is an arbitrary (not necessarily finite) index set, f, \(f_t:X\rightarrow {\mathbb {R}}\), \(t\in T\), are locally Lipschitz functions on a Banach space X, and A is a nonempty closed convex subset of X. It is well known that semi-infinite programming problems have become an active research topic in mathematical programming due to their extensive applications in many fields, such as reverse Chebyshev approximation, robust optimization, minimax problems, design centering and disjunctive programming; see [1,2,3]. Recently, a great many results have appeared in the literature; see [4,5,6,7,8,9,10,11,12,13,14,15] and the references therein.

We note that approximate solutions of optimization problems are important from both the theoretical and the practical points of view: they exist under very mild hypotheses, and many solution methods (for example, iterative or heuristic algorithms) produce this kind of solution. Thus, it is meaningful to consider various concepts of approximate solutions to optimization problems. The first concept of approximate solutions for optimization problems was introduced by Kutateladze [16]. We remark that, in recent years, many authors have devoted their efforts to proposing new notions of approximate solutions in connection with optimization problems [17,18,19,20,21].

In 2009, Son et al. [22] obtained necessary and sufficient conditions for approximate solutions of nonsmooth semi-infinite programming problems, where f is \(\varepsilon \)-semiconvex and \(f_t\), \(t\in T\), are convex. The definition of \(\varepsilon \)-semiconvexity requires the regularity of f. It is worth mentioning that the regularity of f and the convexity of \(f_t\) play an important role in deriving the sufficient optimality condition of approximate solutions for the nonsmooth semi-infinite programming problem in [22]. Thus, it is natural and interesting to remove the regularity requirement on the objective function and to weaken the convexity of the constraint functions. The main purpose of this paper is to make an effort in this direction.

The rest of the paper is organized as follows. In Sect. 2 we recall some basic definitions and known results on Clarke subdifferentials. We also introduce three new classes of functions, namely \(\varepsilon \)-pseudoconvex functions of type I and type II and \(\varepsilon \)-quasiconvex functions. Some examples illustrate these new functions. Section 3 contains the main results of this paper. Using these new classes of functions, sufficient optimality conditions of approximate solutions for the nonsmooth semi-infinite programming problem are derived, and examples are given to illustrate our results. It is important to note that, in our main results, the regularity assumption on the objective function and the convexity assumption on the constraint functions are removed. Our results improve the corresponding results of Son et al. [22].

2 Preliminaries

Throughout this paper, we assume that X is a Banach space and T is a compact topological space. Let \(X^*\) be the dual space of X and denote the closed unit ball in \(X^*\) by \(B^*\).

Let us denote by \({\mathbb {R}}^{(T)}\) the following linear vector space [1]:

$$\begin{aligned} {\mathbb {R}}^{(T)}:=\{\lambda =(\lambda _t)_{t\in T} : \lambda _t=0\;\text{ for } \text{ all }\;t\in T\;\text{ except } \text{ for } \text{ finitely } \text{ many }\;\lambda _t\ne 0\}. \end{aligned}$$

The nonnegative cone of \({\mathbb {R}}^{(T)}\) is denoted by

$$\begin{aligned} {\mathbb {R}}^{(T)}_+:=\{\lambda =(\lambda _t)_{t\in T}\in {\mathbb {R}}^{(T)} : \lambda _t\geqslant 0,\;t\in T\}. \end{aligned}$$

It is easy to see that \({\mathbb {R}}^{(T)}_+\) is a convex cone of \({\mathbb {R}}^{(T)}\). For \(\lambda \in {\mathbb {R}}_+^{(T)}\), the supporting set corresponding to \(\lambda \) is defined by \(T(\lambda ):=\{t\in T: \lambda _t>0\}\), which is a finite subset of T.

Let Z be a linear vector space. For \(\lambda \in {\mathbb {R}}^{(T)}\) and \(\{z_t\}_{t\in T}\subset Z\), we set

$$\begin{aligned} \sum _{t\in T}\lambda _t z_t :=\left\{ \begin{array}{ll} \sum _{t\in T(\lambda )}\lambda _t z_t,&{}\quad \text{ if }\; T(\lambda )\ne \emptyset ,\\ 0,&{}\quad \text{ if }\; T(\lambda )=\emptyset . \end{array} \right. \end{aligned}$$
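Operationally, an element of \({\mathbb {R}}^{(T)}\) can be stored by its finitely many nonzero coordinates. The following minimal Python sketch (an illustration only; the dict-based representation and the names `supporting_set` and `weighted_sum` are ours, not from the paper) mirrors the two definitions above:

```python
# Illustrative sketch (not from the paper): represent lambda in R^(T) as a dict
# storing only the finitely many nonzero coordinates; T itself may be infinite.
def supporting_set(lam):
    """T(lambda): the indices t with lambda_t > 0."""
    return {t for t, c in lam.items() if c > 0}

def weighted_sum(lam, z):
    """sum_{t in T} lambda_t * z_t, reduced to a finite sum over T(lambda);
    z is a callable returning z_t, and the empty-support case returns 0."""
    support = supporting_set(lam)
    return sum(lam[t] * z(t) for t in support) if support else 0

lam = {0.0: 2.0, 0.5: 1.0}             # lambda_t = 0 for every other t in T = [0, 1]
print(sorted(supporting_set(lam)))      # [0.0, 0.5]
print(weighted_sum(lam, lambda t: t))   # 2.0*0.0 + 1.0*0.5 = 0.5
```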

Let \(f:X\rightarrow {\mathbb {R}}\) be a real-valued function. The function f is said to be directionally differentiable at \(x\in X\) if, for every direction \(d\in X\), the usual one-sided directional derivative

$$\begin{aligned} f'(x;d):=\lim _{t\downarrow 0}\frac{f(x+td)-f(x)}{t} \end{aligned}$$

of f at x in the direction d exists and is finite.

Let \(f:X\rightarrow {\mathbb {R}}\) be a locally Lipschitz function and \(x\in X\). The Clarke generalized directional derivative [23] of f at x in the direction \(d\in X\) is defined by

$$\begin{aligned} f^\circ (x;d):={\mathop {\mathop {\limsup }\limits _{y\rightarrow {x}}}\limits _{t\downarrow 0}}\frac{f(y+td)-f(y)}{t} \end{aligned}$$

and the Clarke generalized gradient [23] of f at x is denoted by

$$\begin{aligned} \partial _C{f(x)}:=\{\xi \in X^*: f^\circ (x;d)\geqslant \langle \xi , d\rangle , \forall \;d\in X\}. \end{aligned}$$

It is well known that

$$\begin{aligned} f^\circ (x;d)=\sup _{\xi \in \partial _C{f(x)}}\langle \xi ,d\rangle ,\;\forall \; d\in X. \end{aligned}$$
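This identity can be checked numerically in simple cases. The sketch below (our own ad hoc approximation, not a rigorous computation) uses \(f(x)=|x|\), for which \(\partial _C f(0)=[-1,1]\) and hence \(f^\circ (0;d)=\sup _{\xi \in [-1,1]}\langle \xi ,d\rangle =|d|\):

```python
# Illustrative numerical sketch (ad hoc): estimate the Clarke generalized
# directional derivative f°(x; d) by taking the max of difference quotients
# over base points y near x with a small step t.
def clarke_dd(f, x, d, radius=1e-4, steps=200):
    t = radius / 100
    return max((f(y + t * d) - f(y)) / t
               for y in (x + radius * i / steps for i in range(-steps, steps + 1)))

# For f(x) = |x| the Clarke gradient at 0 is [-1, 1], so the identity above
# gives f°(0; d) = sup over xi in [-1, 1] of xi * d = |d|.
for d in (1.0, -1.0, 0.5):
    print(d, round(clarke_dd(abs, 0.0, d), 6))  # ≈ |d| in each case
```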

The function f is said to be regular [23] at \(x\in X\) if, for each \(d\in X\), the directional derivative \(f'(x;d)\) exists and coincides with \(f^\circ (x;d)\). When f is convex, \(\partial _C{f(x)}\) coincides with the subdifferential \(\partial {f(x)}\) in the sense of convex analysis.

Let \(E\subseteq X\) be a nonempty subset and \(x\in E\). Denote by \(d_E\) the distance function of E, i.e., \(d_E(x):=\inf \{\Vert x-y\Vert :y\in E\}\). A vector \(v\in X\) is tangent to E at x provided \(d^\circ _E(x;v)=0\). The set of all tangents to E at x, the Clarke tangent cone, is denoted by \(T_C (E;x)\). The Clarke normal cone to E at x is defined by

$$\begin{aligned} N_C(E;x):=\{\xi \in X^*: \langle {\xi ,v}\rangle \leqslant 0,\;\forall \;v\in T_C (E;x)\}. \end{aligned}$$

Let \(E\subseteq X\) be a nonempty closed convex subset. The normal cone to E at \(x\in E\) is defined by

$$\begin{aligned} N(E;x):=\{\xi \in X^*: \langle {\xi ,y-x}\rangle \leqslant 0,\;\forall \;y\in E\}. \end{aligned}$$

Obviously, if E is a closed convex set, then \(N_C(E;x)=N(E;x)\).

The following lemmas will be used in the sequel.

Lemma 2.1

[23, Corollary, p. 52] Let \(E\subseteq X\) be a nonempty subset and \(x\in E\). Suppose that \(f:X\rightarrow {\mathbb {R}}\) is Lipschitz near x and attains a minimum over E at x. Then, \(0\in \partial _C f(x)+N_C(E;x)\).

Lemma 2.2

[23, Proposition 2.3.3] Let \(f_i:X\rightarrow {\mathbb {R}}\), \(i=1,2,\cdots ,n\), be locally Lipschitz functions. Then, for any \(x\in X\),

$$\begin{aligned} \partial _C( f_1+ f_2+\cdots +f_n )(x)\subseteq \partial _C f_1(x)+ \partial _C f_2(x)+\cdots +\partial _C f_n(x). \end{aligned}$$
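The inclusion in Lemma 2.2 may be strict. As a simple illustration (ours, not from [23]), take \(f_1(x)=|x|\) and \(f_2(x)=-|x|\) on \(X={\mathbb {R}}\); then \(f_1+f_2\equiv 0\), while \(\partial _C f_1(0)=\partial _C f_2(0)=[-1,1]\), so

$$\begin{aligned} \partial _C (f_1+f_2)(0)=\{0\}\subsetneq [-2,2]=\partial _C f_1(0)+\partial _C f_2(0). \end{aligned}$$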

Let \(E\subseteq X\) be a nonempty subset. A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be pseudoconvex at \(x\in E\) if, for all \(y\in E\),

$$\begin{aligned} \langle {\xi ,y-x}\rangle \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)\geqslant f(x). \end{aligned}$$

The function f is said to be quasiconvex at \(x\in E\) if, for all \(y\in E\),

$$\begin{aligned} f(y)\leqslant f(x)\;\Rightarrow \;\langle {\xi ,y-x}\rangle \leqslant 0,\;\forall \;\xi \in \partial _C f(x). \end{aligned}$$

Son et al. [22] introduced the following generalized convexity which is a generalization of the convexity and the semiconvexity.

Definition 2.1

Let \(E\subseteq X\) be a nonempty subset and \(\varepsilon \geqslant 0\). A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be \(\varepsilon \)-semiconvex at \(x\in E\) if f is regular at x and

$$\begin{aligned} f'(x;y-x)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant 0 \;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x),\quad \forall \;y\in E. \end{aligned}$$

Remark 2.1

It is worth mentioning that the notion of \(\varepsilon \)-semiconvex function introduced by Son et al. [22] is different from the one introduced by Loridan [20]. We also note that, when \(\varepsilon =0\), the definition of an \(\varepsilon \)-semiconvex function coincides with that of a semiconvex function as defined by Mifflin [24].

In order to obtain our main results, we introduce the following generalized convex functions.

Definition 2.2

Let \(E\subseteq X\) be a nonempty subset and \(\varepsilon \geqslant 0\). A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be

  1. (i)

    \(\varepsilon \)-pseudoconvex of type I at \(x\in E\) if, for all \(y\in E\),

    $$\begin{aligned} \langle {\xi ,y-x}\rangle +\sqrt{\varepsilon }\Vert y-x\Vert \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x). \end{aligned}$$
  2. (ii)

    \(\varepsilon \)-pseudoconvex of type II at \(x\in E\) if, for all \(y\in E\),

    $$\begin{aligned} \langle {\xi ,y-x}\rangle \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x). \end{aligned}$$
  3. (iii)

    \(\varepsilon \)-quasiconvex at \(x\in E\) if, for all \(y\in E\),

    $$\begin{aligned} f(y)\leqslant f(x)\;\Rightarrow \;\langle {\xi ,y-x}\rangle +\sqrt{\varepsilon }\Vert y-x\Vert \leqslant 0,\;\forall \;\xi \in \partial _C f(x). \end{aligned}$$

Remark 2.2

If f is \(\varepsilon \)-semiconvex at x, then it is also \(\varepsilon \)-pseudoconvex of type I at x. But the converse is not true in general.

Example 2.1

Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be defined by

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} x^3+x,&{}\quad \text{ if }\; x\geqslant 0,\\ \frac{3}{2}x,&{}\quad \text{ if }\; x<0. \end{array} \right. \end{aligned}$$

By a simple computation, we have

$$\begin{aligned} f^\circ (0;d)=\left\{ \begin{array}{ll} \frac{3}{2} d,&{}\quad \text{ if }\; d\geqslant 0,\\ d,&{}\quad \text{ if }\;d<0, \end{array} \quad \right. f'(0;d)=\left\{ \begin{array}{ll} d,&{}\quad \text{ if }\; d\geqslant 0,\\ \frac{3}{2}d,&{}\quad \text{ if }\;d<0. \end{array} \right. \end{aligned}$$

Moreover, \(\partial _C f(0)=[1,\frac{3}{2}]\). Let \(\varepsilon \geqslant 0\). It is easy to see that f is \(\varepsilon \)-pseudoconvex of type I at \(x=0\). However, f is not \(\varepsilon \)-semiconvex at \(x=0\) because f is not regular at \(x=0\).
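The non-regularity claimed in Example 2.1 can be checked numerically. In the rough sketch below (our own illustration; the step sizes are ad hoc), the one-sided directional derivative \(f'(0;1)\approx 1\) differs from the Clarke generalized directional derivative \(f^\circ (0;1)\approx \frac{3}{2}\):

```python
# Rough numerical check of Example 2.1 (illustration; step sizes are ad hoc):
# f'(0; 1) = 1 while f°(0; 1) = 3/2, so f is not regular at 0.
def f(x):
    return x**3 + x if x >= 0 else 1.5 * x

def dir_deriv(f, x, d, t=1e-7):
    """One-sided directional derivative f'(x; d) via a small forward step."""
    return (f(x + t * d) - f(x)) / t

def clarke_dd(f, x, d, radius=1e-4, steps=200):
    """Crude surrogate for f°(x; d): max difference quotient near x."""
    t = radius / 100
    return max((f(y + t * d) - f(y)) / t
               for y in (x + radius * i / steps for i in range(-steps, steps + 1)))

print(round(dir_deriv(f, 0.0, 1.0), 4))   # ≈ 1.0
print(round(clarke_dd(f, 0.0, 1.0), 4))   # ≈ 1.5
```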

Remark 2.3

We would like to point out that the definition of \(\varepsilon \)-pseudoconvex of type I is different from the definition of approximate pseudoconvex introduced by Gupta et al. [25]; see Definition 6 in [25]. In Example 2.1, f is \(\varepsilon \)-pseudoconvex of type I but not approximate pseudoconvex at \(x=0\).

Remark 2.4

If f is \(\varepsilon \)-pseudoconvex of type I at x, then it is also \(\varepsilon \)-pseudoconvex of type II at x. But the converse is not true in general.

Example 2.2

Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be defined by

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} -x,&{}\quad \text{ if }\; x\geqslant 0,\\ 2x,&{}\quad \text{ if }\; x<0. \end{array} \right. \end{aligned}$$

By simple computation, we have

$$\begin{aligned} f^\circ (0;d)=\left\{ \begin{array}{ll} 2 d,&{}\quad \text{ if }\; d\geqslant 0,\\ -d,&{}\quad \text{ if }\;d<0, \end{array} \right. \end{aligned}$$

and \(\partial _C f (0)=[-1,2]\). Let \(\varepsilon \geqslant 1\) and take \(\xi \in (0,2]\subseteq \partial _C f(0)\). It is easy to check that f is \(\varepsilon \)-pseudoconvex of type II but not \(\varepsilon \)-pseudoconvex of type I at \(x=0\).
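For Example 2.2, the failure of \(\varepsilon \)-pseudoconvexity of type I at \(x=0\) can be exhibited concretely. The following sketch (our own illustration) takes \(\varepsilon =1\), \(y=-1\) and \(\xi =-1\in \partial _C f(0)=[-1,2]\): the hypothesis of type I holds while its conclusion fails.

```python
# Concrete witness (illustration) that f in Example 2.2 is not
# eps-pseudoconvex of type I at x = 0 for eps = 1: with y = -1 and
# xi = -1 in partial_C f(0) = [-1, 2], the type-I hypothesis holds
# while the type-I conclusion fails.
eps = 1.0
f = lambda x: -x if x >= 0 else 2 * x
x0, y, xi = 0.0, -1.0, -1.0

hypothesis = xi * (y - x0) + eps**0.5 * abs(y - x0) >= 0   # 1 + 1 = 2 >= 0: holds
conclusion = f(y) + eps**0.5 * abs(y - x0) >= f(x0)        # -2 + 1 = -1 >= 0: fails
print(hypothesis, conclusion)  # True False
```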

Remark 2.5

We note that the definition of \(\varepsilon \)-pseudoconvex of type II is different from the definition of approximate pseudoconvex of type I introduced by Bhatia et al. [26]; see Definition 8 in [26]. In Example 2.2, f is \(\varepsilon \)-pseudoconvex of type II but not approximate pseudoconvex of type I at \(x=0\).

We now recall some definitions of approximate solutions. Denote the feasible set of SIP by K, i.e., \(K:=\{x\in A: f_t (x)\leqslant 0,\;\forall \;t\in T\}\).

Definition 2.3

[20] Let \(\varepsilon \geqslant 0\). A point \(x_0\in K\) is said to be

  1. (i)

    an \(\varepsilon \)-minimum for SIP if \(f(x_0)\leqslant f(x)+\varepsilon \) for all \(x\in K\).

  2. (ii)

    an \(\varepsilon \)-quasi-minimum for SIP if \(f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert \) for all \(x\in K\).

  3. (iii)

    a regular \(\varepsilon \)-minimum for SIP if it is an \(\varepsilon \)-minimum and an \(\varepsilon \)-quasi-minimum for SIP.

Remark 2.6

When \(\varepsilon =0\), Definition 2.3 reduces to the usual notion of a minimum. Thus, the study of approximate minima in the case \(\varepsilon >0\) is of interest.

For SIP, denote by \(K_\varepsilon \) the \(\varepsilon \)-feasible set, where \(K_\varepsilon :=\{x\in A:f_t (x)\leqslant \sqrt{\varepsilon }\;\text{ for } \text{ all }\;t\in T\}\). It is easy to see that the set \(K_\varepsilon \) is nonempty and closed.

Definition 2.4

[20] Let \(\varepsilon \geqslant 0\). A point \(x_0\in X\) is said to be an almost \(\varepsilon \)-quasi-minimum for SIP if \(x_0\) satisfies the following conditions: (i) \(x_0\in K_\varepsilon \); (ii) \(f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert \) for all \(x\in K\).

3 Optimality Conditions

In this section, we establish some necessary and sufficient optimality conditions of approximate solutions for nonsmooth semi-infinite programming problems.

To obtain the necessary optimality condition, we consider the following constraint qualification condition:

$$\begin{aligned} (CQ)_{x_0}\quad \quad N_C(K;x_0)\subseteq \bigcup _{\lambda \in B(x_0)}\left( \sum _{t\in T}\lambda _t \partial _C f_t (x_0)\right) +N(A;x_0), \end{aligned}$$

where \(B(x_0):=\{\lambda \in {\mathbb {R}}_+ ^{(T)}: \lambda _t f_t (x_0)=0,\;\forall \;t\in T\}\) and \(x_0\in K\).

First, we obtain the following necessary optimality condition for SIP.

Theorem 3.1

Let \(x_0\) be an \(\varepsilon \)-quasi-minimum for SIP and let the constraint qualification condition \((\textit{CQ})_{x_0}\) hold. Then, there exists \(\lambda \in {\mathbb {R}}_+ ^{(T)}\) such that

$$\begin{aligned} 0\in \partial _C f(x_0)+\sum _{t\in T}\lambda _t \partial _C f_t (x_0)+N(A;x_0)+\sqrt{\varepsilon }B^*,\; f_t (x_0)=0,\;\forall \;t\in T(\lambda ).\nonumber \\ \end{aligned}$$
(3.1)

Proof

Since \(x_0\) is an \(\varepsilon \)-quasi-minimum for SIP, it is a minimum for the following optimization problem:

$$\begin{aligned} \text{ Minimize }\;f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert ,\;\text{ s.t. }\;x\in K. \end{aligned}$$

By Lemmas 2.1 and 2.2,

$$\begin{aligned} 0&\in \partial _C (f+\sqrt{\varepsilon }\Vert \cdot -x_0\Vert )(x_0)+N_C(K;x_0)\\&\subseteq \partial _C f(x_0)+N_C(K;x_0)+\sqrt{\varepsilon }B^*. \end{aligned}$$

This fact together with the condition \((CQ)_{x_0}\) yields the conclusion. The proof is complete.

Remark 3.1

When \(f_t\), \(t\in T\), are convex functions, the constraint qualification condition \((CQ)_{x_0}\) was considered by Dinh et al. [7], who also gave a sufficient condition guaranteeing this condition.

We next formulate some sufficient conditions for an almost \(\varepsilon \)-quasi-minimum for SIP.

Theorem 3.2

Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that

$$\begin{aligned}&0\in \partial _C f(x_0)+\sum _{t\in T}\lambda _t \partial _C f_t (x_0)+N(A;x_0)+\sqrt{\varepsilon }B^*,\nonumber \\&\quad f_t (x_0)\geqslant 0,\;\forall \;t\in T(\lambda ). \end{aligned}$$
(3.2)

If f is \(\varepsilon \)-pseudoconvex of type I at \(x_0\) and \(f_t\), \(t\in T\) is quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.

Proof

Let \((x_0,\lambda )\in K_\varepsilon \times \mathbb { R}_+ ^{(T)}\) be such that (3.2) holds. Then, there exist \(u\in \partial _C f(x_0)\), \(v_t\in \partial _C f_t (x_0)\) for all \(t\in T\), \(w\in N(A;x_0)\) and \(b\in B^*\) such that \(f_t (x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and

$$\begin{aligned} u+\sum _{t\in T}\lambda _t v_t +w+\sqrt{\varepsilon }b=0. \end{aligned}$$
(3.3)

Since \(w\in N(A;x_0)\) and \(b\in B^*\), one has

$$\begin{aligned} \langle {w,x-x_0}\rangle \leqslant 0,\; \;\langle b, x-x_0\rangle \leqslant \Vert x-x_0\Vert ,\;\forall \; x\in A. \end{aligned}$$

This fact together with (3.3) yields, for all \(x\in A\),

$$\begin{aligned} \left\langle u+\sum _{t\in T}\lambda _t v_t,x-x_0\right\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \geqslant 0. \end{aligned}$$

It follows that

$$\begin{aligned} \left\langle u+\sum _{t\in T(\lambda )}\lambda _t v_t,x-x_0\right\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \geqslant 0, \end{aligned}$$

or equivalently,

$$\begin{aligned} \langle u,x-x_0\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \geqslant -\left\langle \sum _{t\in T(\lambda )}\lambda _t v_t,x-x_0\right\rangle . \end{aligned}$$
(3.4)

Note that \(f_t(x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and \(f_t(x)\leqslant 0\) for all \(t\in T\) and \(x\in K\). It follows that \( f_t(x)\leqslant f_t(x_0)\) for all \(x\in K\) and \(t\in T(\lambda )\). Since \(f_t\) is quasiconvex at \(x_0\) and \(v_t\in \partial _C f_t (x_0)\), for each \(t\in T(\lambda )\) we have

$$\begin{aligned} \langle {v_t,x-x_0}\rangle \leqslant 0. \end{aligned}$$
(3.5)

Combining (3.4) and (3.5) yields

$$\begin{aligned} \langle u,x-x_0\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \geqslant 0. \end{aligned}$$

As f is \(\varepsilon \)-pseudoconvex of type I at \(x_0\),

$$\begin{aligned} f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert . \end{aligned}$$

Therefore, \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP. The proof is complete.

Remark 3.2

Theorem 3.2 generalizes Theorem 4.3 of Son et al. [22], one of the main results of [22], from the following two aspects: (i) the convexity of \(f_t\), \(t\in T\) is relaxed to the quasiconvexity at \(x_0\); (ii) the \(\varepsilon \)-semiconvexity of f is relaxed to the \(\varepsilon \)-pseudoconvexity of type I at \(x_0\). It is worth mentioning that the regularity of f plays an important role in the proof of Theorem 4.3 in [22]. However, our proof does not require any regularity conditions.

Since an \(\varepsilon \)-semiconvex function is \(\varepsilon \)-pseudoconvex of type I (see Remark 2.2), we have the following corollary.

Corollary 3.1

Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-semiconvex at \(x_0\) and \(f_t\), \(t\in T\) is quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.

Since a convex function is quasiconvex, we have the following corollary.

Corollary 3.2

[22, Theorem 4.3] Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-semiconvex at \(x_0\) and \(f_t\), \(t\in T\) is convex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.

The following example illustrates a situation where Theorem 3.2 applies but Corollary 3.2 does not.

Example 3.1

Consider the following problem:

$$\begin{aligned} \begin{array}{lll} \text{(SIP) }&{}\quad \text{ Minimize } &{}\quad f(x)\\ &{}\quad \text{ s.t. } &{} \quad f_t (x)\leqslant 0, \quad t\in T=[0,1],\\ &{}&{} \quad x\in A=[-1,1], \end{array} \end{aligned}$$

where

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} 0,&{} \quad \text{ if }\; x\geqslant 0,\\ x,&{} \quad \text{ if }\; x<0, \end{array} \right. \end{aligned}$$

and \(f_t (x):=tx\), for \(x\in {\mathbb {R}}\) and \(t\in T\). By simple calculations,

$$\begin{aligned} f^\circ (0;d)=\left\{ \begin{array}{ll} d,&{} \quad \text{ if }\; d\geqslant 0,\\ 0,&{} \quad \text{ if }\;d<0, \end{array} \quad \right. f'(0;d)=\left\{ \begin{array}{ll} 0,&{} \quad \text{ if }\; d\geqslant 0,\\ d,&{} \quad \text{ if }\;d<0. \end{array} \right. \end{aligned}$$

Let \(\varepsilon =1\). It is easy to check that f is \(\varepsilon \)-pseudoconvex of type I at \(x_0=0\), that each \(f_t\), \(t\in T\), is quasiconvex at \(x_0=0\), and that \(K=[-1,0]\). Moreover, \(\partial _C f (0)=[0,1]\), \(\partial _C f_t (0)=\{t\}\) for all \(t\in T\), and \(N(A;0)=\{0\}\). Let \(\lambda \) satisfy \(\lambda _0=1\) and \(\lambda _t=0\) for all \(t\in T\backslash \{0\}\); then \(T(\lambda )=\{0\}\). We can check that the optimality condition (3.2) corresponding to \((0,\lambda )\) holds. By Theorem 3.2, \(x_0=0\) is an almost \(\varepsilon \)-quasi-minimum for SIP. However, Theorem 4.3 of Son et al. [22] is not applicable, since f is not \(\varepsilon \)-semiconvex at \(x_0=0\) (f is not regular there). We also point out that \(x_0=0\) is not an exact minimum of this problem.
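The conclusion of Example 3.1 can also be verified numerically on a grid. The sketch below (our own illustration; the grid resolution is arbitrary) samples the feasible set K and checks that \(x_0=0\) satisfies the almost \(\varepsilon \)-quasi-minimum conditions of Definition 2.4 for \(\varepsilon =1\):

```python
# Numerical sanity check of Example 3.1 on a grid (illustration, not a proof):
# f(x) = 0 for x >= 0 and f(x) = x for x < 0, f_t(x) = t*x, A = [-1, 1].
eps = 1.0
f = lambda x: 0.0 if x >= 0 else x

def feasible(x, tol=0.0):
    # x in A with f_t(x) <= tol for a sample of indices t in T = [0, 1]
    return -1 <= x <= 1 and all(t * x <= tol for t in (i / 100 for i in range(101)))

K = [x for x in (i / 100 for i in range(-100, 101)) if feasible(x)]
print(min(K), max(K))  # the grid sample of K = [-1, 0]

# x0 = 0 is an almost eps-quasi-minimum: x0 lies in K_eps, and
# f(x0) <= f(x) + sqrt(eps) * |x - x0| for every sampled x in K.
x0 = 0.0
assert feasible(x0, tol=eps ** 0.5)
assert all(f(x0) <= f(x) + eps ** 0.5 * abs(x - x0) + 1e-12 for x in K)
print("x0 = 0 passes the almost eps-quasi-minimum check on the grid")
```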

In the following theorem, we give another sufficient optimality condition of an almost \(\varepsilon \)-quasi-minimum for SIP.

Theorem 3.3

Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-pseudoconvex of type II at \(x_0\) and \(f_t\), \(t\in T\), is \(\varepsilon \)-quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.

Proof

Arguing as in the proof of Theorem 3.2, there exist \(u\in \partial _C f(x_0)\), \(v_t\in \partial _C f_t (x_0)\) for all \(t\in T\), \(w\in N(A;x_0)\) and \(b\in B^*\) such that \(f_t (x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and

$$\begin{aligned} \langle u,x-x_0\rangle \geqslant -\sqrt{\varepsilon }\Vert x-x_0\Vert -\left\langle \sum _{t\in T(\lambda )}\lambda _t v_t,x-x_0\right\rangle . \end{aligned}$$
(3.6)

It is easy to see that \(f_t(x)\leqslant f_t(x_0)\) for all \(x\in K\) and \(t\in T(\lambda )\). By the \(\varepsilon \)-quasiconvexity of \(f_t\) at \(x_0\),

$$\begin{aligned} \langle {v_t,x-x_0}\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \leqslant 0. \end{aligned}$$

It follows that

$$\begin{aligned} \left\langle {\sum _{t\in T(\lambda )}\lambda _t v_t,x-x_0}\right\rangle +\sqrt{\varepsilon }\Vert x-x_0\Vert \leqslant 0. \end{aligned}$$

This fact together with (3.6) yields

$$\begin{aligned} \langle u,x-x_0\rangle \geqslant 0. \end{aligned}$$

Since f is \(\varepsilon \)-pseudoconvex of type II at \(x_0\),

$$\begin{aligned} f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert . \end{aligned}$$

Therefore, the conclusion holds. The proof is complete.

Remark 3.3

When we replace \(f_t (x_0)\geqslant 0\) by \(f_t (x_0)=0\) in (3.2), Theorems 3.2 and 3.3 remain valid. Moreover, when X is a finite-dimensional space, the results obtained in this paper also hold.