1 Introduction

We consider the following infinite programming problem:

$$\begin{aligned} \begin{array}{ll} \text{ minimize } &{} f(x) \\ \text{ subject } \text{ to } &{} g_\alpha (x) \le 0,\quad \alpha \in A, \\ &{} h_\beta (x) = 0,\quad \beta \in B, \end{array} \end{aligned}$$
(IP)

where \(f,g_\alpha ,h_\beta : X \rightarrow {\mathbb {R}}\) for \(\alpha \in A\) and \(\beta \in B\), \(X\) is a real Banach space, and \(A\) and \(B\) are index sets (usually subsets of Euclidean spaces).

We can cite only a few works where the infinite programming problem was studied exactly in the formulation above. See, for instance, Cánovas et al. [4], Dinh et al. [6], Fang et al. [8], Schirotzek [24] and Tapia and Trosset [26], where first order necessary conditions of optimality can be found. In Cánovas et al. [4] the authors consider an infinite programming problem with linear inequalities, but the objective function may be nonsmooth and nonconvex. They illustrate their results with an application to water resource management. Dinh et al. [6] study an infinite programming problem posed in locally convex topological spaces where the defining functions are lower semicontinuous and convex. They obtain optimality conditions and duality and stability theorems. Fang et al. [8] deal with the problem defined on locally convex Hausdorff topological spaces where the involved functions are assumed to be proper convex, but not necessarily lower semicontinuous. They focus on constraint qualifications. Schirotzek [24] considered \(X\) to be a locally convex Hausdorff topological space, but he treated the problem with inequality constraints only. Tapia and Trosset [26] considered \(X\) to be a Hilbert space, and their results are applied to obtain necessary conditions for a problem on probability density estimation. In de Oliveira and Rojas-Medar [20], first and second order necessary conditions of optimality are provided for a multiobjective infinite programming problem. Sufficient conditions are also given by means of generalized convexity assumptions. In de Oliveira and Rojas-Medar [21], the same multiobjective problem is tackled and some results on the scalarization method are presented.

However, (IP) can be formulated in a different way, as follows. We assume that, for each \(x \in X\), the functions \(\alpha \mapsto g_\alpha (x)\) and \(\beta \mapsto h_\beta (x)\) are continuous, and we define \(G : X \rightarrow C(A)\) and \(H : X \rightarrow C(B)\) respectively by \( G(x)(\alpha ) = g_\alpha (x) ~ \text{ and } ~ H(x)(\beta ) = h_\beta (x), \) where \(C(A)\) denotes the space of all real continuous functions defined on \(A\), and similarly for \(C(B)\). Thus (IP) can be reformulated as

$$\begin{aligned} \begin{array}{ll} \text{ minimize } &{} f(x) \\ \text{ subject } \text{ to } &{} G(x) \le \theta _1,\ H(x) = \theta _2, \end{array} \end{aligned}$$

where \(\theta _1\) and \(\theta _2\) denote the zero elements of \(C(A)\) and \(C(B)\), respectively. In this new formulation, the infinite problem can be viewed as an optimization problem between Banach spaces, and for this kind of problem there is a wide bibliography. We cite, for example, Ben-Tal and Zowe [2], Gfrerer [10], Girsanov [11], Guignard [12], Luenberger [17], Mordukhovich and Mou [19], Robinson [22], and Varaiya [27].
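The operator reformulation above can be mimicked numerically. The following Python sketch (the instance, the grid, and all function names are illustrative choices of ours, not taken from the paper) represents \(G(x)\) as a closure, i.e. an element of \(C(A)\), and tests feasibility \(G(x) \le \theta _1\) by a grid approximation of the supremum over \(A\):

```python
import numpy as np

# Illustrative instance (not from the paper): X = R^2, A = [0, 1],
# g_alpha(x) = alpha * x[0] + x[1] - 1.
def g(alpha, x):
    return alpha * x[0] + x[1] - 1.0

def G(x):
    # G(x) is an element of C(A): a continuous function of the index alpha.
    return lambda alpha: g(alpha, x)

def is_feasible(x, grid=np.linspace(0.0, 1.0, 1001), tol=1e-12):
    # G(x) <= theta_1 means sup_{alpha in A} G(x)(alpha) <= 0; we
    # approximate the supremum by a maximum over a fine grid of A.
    return bool(np.max(G(x)(grid)) <= tol)

print(is_feasible(np.array([0.5, 0.25])))  # True: 0.5*alpha - 0.75 <= 0 on [0, 1]
print(is_feasible(np.array([2.0, 0.0])))   # False: the constraint fails near alpha = 1
```

Of course, checking a finite grid cannot rigorously certify feasibility over all of \(A\); the sketch is purely illustrative of the operator viewpoint.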

We should mention that semi-infinite programming is subsumed by infinite programming. For readers interested specifically in mathematical programming defined in finite dimensional spaces with an infinite number of constraints (i.e., semi-infinite programming), we refer to the papers by Hettich and Kortanek [14] and Stein [25] and the references therein.
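To make the relationship concrete, here is a minimal Python sketch (the toy instance and grid size are our own choices, not from the paper) of the standard discretization approach to a semi-infinite program: the index set is replaced by a finite grid, producing an ordinary linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy semi-infinite program (our own illustrative instance):
#   minimize x   subject to   g_alpha(x) = sin(alpha) - x <= 0,  alpha in [0, pi].
# The exact optimal value is x* = 1. Discretizing the index set A = [0, pi]
# by a finite grid turns the problem into an ordinary linear program.
alphas = np.linspace(0.0, np.pi, 101)   # the grid contains the maximizer pi/2
A_ub = -np.ones((alphas.size, 1))       # rows encode -x <= -sin(alpha_i)
b_ub = -np.sin(alphas)

res = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
print(res.x[0])   # ~ 1.0, since the grid contains alpha = pi/2
```

In general the discretized optimum only underestimates the semi-infinite one; here the grid happens to contain the binding index, so the toy problem is solved exactly.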

The necessary optimality conditions obtained in de Oliveira and Rojas-Medar [20], and reproduced here, were established through the unified theory for extremum problems in topological vector spaces. This theory was first developed in Girsanov [11], where first order necessary optimality conditions for an abstract optimization problem are provided. Girsanov’s approach was based on the original ideas of Dubovickiĭ and Miljutin [7], who were the first to suggest that general extremum problems can be treated by a unified functional-analytic approach. This theory was generalized to include second order conditions in Ben-Tal and Zowe [2], and it was the basis of the necessary optimality conditions furnished in [20].

In this work we provide sufficient conditions for global optimality. The results on sufficient conditions are obtained by making use of the notion of invexity. Invex functions were introduced by Hanson [13].

It is well known that a differentiable real-valued function is invex if and only if every stationary point is a global minimizer. See Craven and Glover [5]. It is also well known that every stationary point is a global minimizer when a mathematical programming problem is invex (a problem is said to be invex if the defining functions are invex with the same invexity kernel). See Hanson [13]. Nevertheless, the converse result is not valid. Martin [18] weakened the invexity assumption and showed that if a mathematical programming problem is such that every stationary point is a global minimizer, then it necessarily satisfies this weakened invexity condition, which still retains the sufficiency property of the original notion. Summarizing, Martin showed that every stationary point is a global minimizer if and only if the problem satisfies the weakened notion of invexity, called KT-invexity. We believe it is more appropriate to call it KKT-invexity. KKT-invexity for infinite programming problems was introduced in de Oliveira and Rojas-Medar [20].

In the same spirit, Ivanov [15] defined second order KKT-invexity (for mathematical programming problems). He proved that every point satisfying the second order optimality conditions is a global minimizer if and only if the problem fulfills the second order KKT-invexity condition. Similar results were obtained by Santos et al. [23] for multiobjective mathematical programming problems. Here, we formulate this generalized invexity concept for infinite programming problems and establish a similar result.

Ivanov continued his work on optimality conditions for mathematical programming problems in [16], where higher order necessary conditions of Fritz John and Karush–Kuhn–Tucker types were proved by means of higher order Dini directional derivatives. He also provided sufficient conditions through higher order pseudoinvexity of the objective function and prequasiinvexity of the constraint functions.

This work is organized as follows. In the next section we present some preliminary assumptions and definitions. In Sect. 3 we present first and second order necessary conditions of Fritz John and Karush–Kuhn–Tucker types, and some constraint qualifications are given. Sufficient conditions are exhibited in Sect. 4: in Sect. 4.1 we define first order KKT-invexity for infinite programming problems and show that this concept is a sufficient optimality condition which is necessarily satisfied by any problem with the property that every stationary solution is a global minimizer; in Sect. 4.2 we define second order KKT-invexity and show that an infinite programming problem obeys it if, and only if, every second order stationary solution is an optimal solution.

2 Preliminaries

Here we introduce some useful assumptions and notations.

Let us present the following hypotheses:

(H1):

\(f,\ g_\alpha ,\ \alpha \in A\), and \(h_\beta ,\ \beta \in B\), are Fréchet differentiable;

(H2):

\(f\) and \(g_\alpha ,\ \alpha \in A\), are Fréchet differentiable and \(h_\beta ,\ \beta \in B\), are continuously Fréchet differentiable;

(H3):

\(f,\ g_\alpha ,\ \alpha \in A\), and \(h_\beta ,\ \beta \in B\), are twice Fréchet differentiable;

(H4):

\(\alpha \mapsto g_\alpha (x),\ \alpha \mapsto g_\alpha ^\prime (x)d,\ \beta \mapsto h_\beta (x)\) and \(\beta \mapsto h_\beta ^\prime (x)d\) are continuous for all \(x,d \in X\);

(H5):

\(\alpha \mapsto g_\alpha (x),\ \alpha \mapsto g_\alpha ^\prime (x)d,\ \alpha \mapsto g_\alpha ^{\prime \prime }(x)(d,z),\ \beta \mapsto h_\beta (x),\ \beta \mapsto h_\beta ^\prime (x)d\) and \(\beta \mapsto h_\beta ^{\prime \prime }(x)(d,z)\) are continuous for all \(x,d,z \in X\).

We assume throughout this paper that \(A\) and \(B\) are compact Hausdorff spaces. We denote by \(M(A)\) and \(M(B)\) the sets of all real Radon measures on \(A\) and \(B\), respectively. We denote by \(F\) the set of all feasible solutions of (IP):

$$\begin{aligned} F = \{ x \in X : g_\alpha (x) \le 0,\ \alpha \in A,\ h_\beta (x) = 0,\ \beta \in B \}, \end{aligned}$$

by \(A(\bar{x})\) the index set of the active constraints at \(\bar{x} \in F\):

$$\begin{aligned} A(\bar{x}) = \{ \alpha \in A : g_\alpha (\bar{x}) = 0 \}, \end{aligned}$$

and when (H1) holds, by \(D(\bar{x})\) the set of critical directions at \(\bar{x}\):

$$\begin{aligned} D(\bar{x}) = \{ d \in X : f^\prime (\bar{x})d \le 0,\ g_\alpha ^\prime (\bar{x})d \le 0,\ \alpha \in A,\ h_\beta ^\prime (\bar{x})d = 0,\ \beta \in B \}. \end{aligned}$$

If \(h_\beta \) is Fréchet differentiable at \(\bar{x}\) and \(\beta \mapsto h_\beta ^\prime (\bar{x})z\) is continuous for every \(z \in X\), we denote

$$\begin{aligned} R(\bar{x}) = \{ \varphi \in C(B), ~ \varphi : \beta \mapsto h_\beta ^\prime (\bar{x})z, ~ z \in X \}. \end{aligned}$$

Observe that \(R(\bar{x})\) is the range of \(H^\prime (\bar{x})\), where \(H : X \rightarrow C(B)\) is given by \(H(x)(\beta ) = h_\beta (x)\) for \(\beta \in B\).

3 First and second order necessary conditions

In this section, we state necessary optimality conditions of Fritz John and Karush–Kuhn–Tucker types. The omitted proofs can be found in de Oliveira and Rojas-Medar [20], where these conditions were established for a multiobjective infinite programming problem. Indeed, the results below follow from de Oliveira and Rojas-Medar [20] as a particular case by taking the number of objective functions \(p=1\).

Theorem 1

Suppose that assumptions (H3) and (H5) are valid. Let \(\bar{x} \in F\) be an optimal solution of (IP). Assume that \(R(\bar{x})\) is closed. Then for every \(d \in D(\bar{x})\) there correspond a real number \(\lambda \ge 0\), a positive Radon measure \(\mu \in M(A)\) and a signed Radon measure \(\nu \in M(B)\), not all zero, such that

$$\begin{aligned}&\lambda f^\prime (\bar{x})z + \int _A g_\alpha ^\prime (\bar{x})z \mu (d \alpha ) + \int _B h_\beta ^\prime (\bar{x})z \nu (d \beta ) = 0\quad \forall ~ z \in X, \end{aligned}$$
(1)
$$\begin{aligned}&\int _A g_\alpha (\bar{x}) \mu (d \alpha ) = 0, \end{aligned}$$
(2)
$$\begin{aligned}&\lambda f^{\prime \prime }(\bar{x})(d,d) + \int _A g_\alpha ^{\prime \prime }(\bar{x})(d,d) \mu (d \alpha ) + \int _B h_\beta ^{\prime \prime }(\bar{x})(d,d) \nu (d \beta ) \ge 0, \end{aligned}$$
(3)
$$\begin{aligned}&\lambda f^\prime (\bar{x})d = 0, \end{aligned}$$
(4)
$$\begin{aligned}&\int _A g_\alpha ^\prime (\bar{x})d \mu (d \alpha ) = 0. \end{aligned}$$
(5)

Proof

See Theorem 4.2 in [20]. \(\square \)

Remark 1

In the theorem above one sees that the Lagrange multipliers for infinite programming problems are represented by Radon measures. Radon measures, by definition, are outer regular on Borel sets and inner regular on open sets. Moreover, it is possible to identify \(L^1(E)\), \(E=A\) or \(E=B\), with a subspace of \(M(E)\). In fact, if \(\ell \) denotes the Lebesgue measure on \(E\), the measure \(d\rho _\phi = \phi \, d\ell \), \(\phi \in L^1(E)\), is a Radon measure such that \(\Vert \rho _\phi \Vert = \Vert \phi \Vert \), so that \(\phi \mapsto \rho _\phi \) is an isometric embedding of \(L^1(E)\) into \(M(E)\). From the Radon–Nikodym Theorem one finds that the range of this isometry is exactly the set of all measures which are absolutely continuous with respect to \(\ell \). Therefore, if both the measures \(\mu \) and \(\nu \) in Theorem 1 happen to be absolutely continuous with respect to \(\ell \), then they can be represented by functions in \(L^1(E)\), namely, their Radon–Nikodym derivatives with respect to \(\ell \). In this case, the integrals above can be computed as Lebesgue integrals. For instance,

$$\begin{aligned}&\lambda f^\prime (\bar{x})z + \int _A g_\alpha ^\prime (\bar{x})z \mu (d \alpha ) + \int _B h_\beta ^\prime (\bar{x})z \nu (d \beta ) \\&\qquad = \lambda f^\prime (\bar{x})z + \int _A g_\alpha ^\prime (\bar{x})z \frac{d\mu }{d\ell }(\alpha )\, d\alpha + \int _B h_\beta ^\prime (\bar{x})z \frac{d\nu }{d\ell }(\beta )\, d\beta . \end{aligned}$$
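Assuming \(E=[0,1]\) with a density \(\phi (\alpha )=\alpha \) of our own choosing, the identification in the remark can be illustrated numerically in Python: pairing a continuous integrand with \(\rho _\phi \) reduces to an ordinary (here, trapezoidal) integral, and \(\Vert \rho _\phi \Vert = \Vert \phi \Vert _{L^1}\).

```python
import numpy as np

# Illustrative instance of Remark 1 (our own choice): E = [0, 1] with Lebesgue
# measure, density phi(alpha) = alpha in L^1(E), and a continuous integrand u.
a = np.linspace(0.0, 1.0, 20001)
phi = a
u = np.exp(a)   # stands for a continuous function such as alpha -> g_alpha'(xbar)z

def trapz(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

pairing = trapz(u * phi, a)        # integral of u with respect to rho_phi = phi d(ell)
norm_rho = trapz(np.abs(phi), a)   # total variation ||rho_phi|| = ||phi||_{L^1}

print(pairing)    # ~ 1.0 (the exact value of the integral of alpha*exp(alpha))
print(norm_rho)   # ~ 0.5
```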

Determining which additional conditions should be imposed on the data of the problem in order to guarantee in advance the absolute continuity of the measures \(\mu \) and \(\nu \) with respect to \(\ell \) is a topic for future work. Another interesting topic we intend to study is the sensitivity analysis of the multipliers.

The last theorem gives Fritz John type optimality conditions. In the absence of further assumptions, conditions (1)–(5) may degenerate, that is, they may hold with \(\lambda = 0\). In that case the information about the objective function is lost, and the conditions become useless. In order to guarantee that \(\lambda > 0\), we need to impose so-called constraint qualifications.

Definition 1

We say that \(\bar{x} \in F\) is a second order Karush–Kuhn–Tucker solution of (IP) if, given \(d \in D(\bar{x})\), there exist a scalar \(\lambda > 0\), a positive Radon measure \(\mu \in M(A)\) and a signed Radon measure \(\nu \in M(B)\) such that (1)–(5) hold.

Definition 2

Suppose that (H3) is satisfied. We say that the constraints of (IP) satisfy the CQ(\(d\)) condition for \(d \in X\) at \(\bar{x} \in F\) if

  1. (i)

    \(R(\bar{x}) = C(B)\);

  2. (ii)

    there exists \(\hat{z} \in X\) such that

    $$\begin{aligned}&g_\alpha ^\prime (\bar{x})\hat{z} + g_\alpha ^{\prime \prime }(\bar{x})(d,d) < 0,\quad \alpha \in A, \\&h_\beta ^\prime (\bar{x})\hat{z} + h_\beta ^{\prime \prime }(\bar{x})(d,d) = 0,\quad \beta \in B. \end{aligned}$$

Corollary 1

If condition CQ(\(d\)) is satisfied for every \(d \in D(\bar{x})\) at \(\bar{x}\), then \(\bar{x}\) is a second order Karush–Kuhn–Tucker solution.

Proof

It follows directly from Theorem 1 and Corollary 8.3 in Ben-Tal and Zowe [2]. \(\square \)

Definition 3

We say that \(\bar{x} \in F\) is a first order Karush–Kuhn–Tucker solution (or simply a Karush–Kuhn–Tucker solution) of (IP) if there exist a scalar \(\lambda > 0\), a positive Radon measure \(\mu \in M(A)\) and a signed Radon measure \(\nu \in M(B)\) such that (1)–(2) hold.

We now analyze the first order conditions separately. Let us present an additional constraint qualification. Namely, we introduce a Mangasarian–Fromovitz type constraint qualification for (IP).

Definition 4

Suppose that (H1) is satisfied. We say that the constraints of (IP) satisfy the Mangasarian–Fromovitz type constraint qualification at \(\bar{x} \in F\) if

  1. (i)

    there does not exist a nonzero signed measure \(\nu \in M(B)\) such that

    $$\begin{aligned} \int _B h_\beta ^\prime (\bar{x})z \nu (d \beta ) = 0\quad \forall ~ z \in X; \end{aligned}$$
  2. (ii)

    there exists \(\hat{z} \in X\) such that

    $$\begin{aligned}&g_\alpha ^\prime (\bar{x})\hat{z} < 0,\quad \alpha \in A(\bar{x}), \\&h_\beta ^\prime (\bar{x})\hat{z} = 0,\quad \beta \in B. \end{aligned}$$

Remark 2

Let \(\bar{x} \in F\) and \(\bar{d}=0 \in D(\bar{x})\). If \(R(\bar{x})\) is closed, it follows from the Hahn–Banach Theorem and the Riesz Representation Theorem that the nonexistence of a nonzero signed measure \(\nu \in M(B)\) such that

$$\begin{aligned} \int _B h_\beta ^\prime (\bar{x})z \nu (d \beta ) = 0\quad \forall ~ z \in X \end{aligned}$$

is equivalent to \(R(\bar{x}) = C(B)\), that is, \(H^\prime (\bar{x})\) being onto (observe that in finite dimensions, \(R(\bar{x})\) is always closed, so that this equivalence always holds true). Therefore, when \(R(\bar{x})\) is closed, CQ(0) is exactly the Mangasarian-Fromovitz constraint qualification defined above.

Corollary 2

Suppose that assumptions (H2) and (H4) are valid. Let \(\bar{x}\) be an optimal solution of (IP). Assume that \(R(\bar{x})\) is closed. If the constraints of (IP) satisfy the Mangasarian–Fromovitz type constraint qualification at \(\bar{x}\), then \(\bar{x}\) is a Karush–Kuhn–Tucker solution.

Proof

See Theorem 4.8 in [20]. \(\square \)

4 Sufficient conditions

In this section we present sufficient optimality conditions under generalized convexity assumptions. Initially, we show that every first order Karush–Kuhn–Tucker solution is an optimal solution when the defining functions are invex with a common invexity kernel. Then we carry out for (IP) a study similar to the one Martin [18] made for classical mathematical programming problems. Martin introduced a weaker notion of invexity, called KKT-invexity, and showed that this notion is not only sufficient but also necessary for the validity of the property that every stationary point (first order Karush–Kuhn–Tucker solution) is a global minimizer of the problem. The same kind of study was carried out by Ivanov [15] for second order Karush–Kuhn–Tucker solutions. Like Martin, Ivanov worked with mathematical programming problems in finite dimensions. We generalize Ivanov’s results to (IP).

4.1 First order sufficient conditions

The definition of KKT-invexity given here is slightly different from the one provided in de Oliveira and Rojas-Medar [20]; it is closer to Martin’s definition. In fact, in [20], instead of requiring the invexity of the objective function, as Martin did and as we do in Definition 6 below, we required its pseudo-invexity. It is well known that, in finite dimensions, invexity and pseudo-invexity are equivalent (see Ben-Israel and Mond [1], for example). The equivalence between the KKT-invexity of [20] and the KKT-invexity given here will be clear once Theorem 3 below is established.

Definition 5

Let \(X\) be a normed space. We say that a Fréchet differentiable function \(f : X \rightarrow {\mathbb {R}}\) is invex at \(\bar{x} \in X\) if there exists a map \(\eta : X \times X \rightarrow X\) such that

$$\begin{aligned} f(x) - f(\bar{x}) \ge f^\prime (\bar{x}) \eta (x,\bar{x})\quad \forall ~ x \in X. \end{aligned}$$

We say that \(f\) is invex if \(f\) is invex at each \(\bar{x} \in X\).
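Definition 5 and the Craven–Glover characterization can be probed numerically. The following Python sketch (the instances are chosen by us for illustration, not taken from the paper) verifies the invexity inequality for \(f(x)=x^2\) with kernel \(\eta (x,\bar{x})=x-\bar{x}\), and exhibits the stationary, non-minimizing point that rules out invexity of \(f(x)=x^3\):

```python
import numpy as np

# Instances chosen for illustration (not from the paper).
# (a) f(x) = x^2 is invex with kernel eta(x, xbar) = x - xbar:
#         f(x) - f(xbar) >= f'(xbar) * eta(x, xbar)
#     reduces to (x - xbar)^2 >= 0, checked here on random samples.
rng = np.random.default_rng(0)
x, xbar = rng.normal(size=1000), rng.normal(size=1000)
lhs = x**2 - xbar**2
rhs = 2.0 * xbar * (x - xbar)
print(bool(np.all(lhs >= rhs - 1e-12)))   # True

# (b) f(x) = x^3 is not invex: xbar = 0 is stationary (f'(0) = 0) but not a
#     global minimizer (f(-1) < f(0)), so by the Craven--Glover
#     characterization no kernel eta can exist at xbar = 0.
f = lambda t: t**3
print(f(-1.0) < f(0.0))                   # True
```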

Theorem 2

Suppose that assumption (H1) is valid. Let \(\bar{x} \in F\) be a Karush-Kuhn-Tucker solution. Assume that \(f,\ g_\alpha ,\ \alpha \in A(\bar{x}),\ h_\beta \) and \(-h_\beta ,\ \beta \in B\), are invex at \(\bar{x}\) with a common \(\eta \). Then \(\bar{x}\) is a global optimal solution of (IP).

Proof

See Theorem 4.9 in [20]. \(\square \)

We now introduce the notion of KKT-invexity for (IP) and establish a result similar to that of Martin [18].

Definition 6

Assume that (H1) is valid. We say that (IP) is Karush–Kuhn–Tucker invex (or KKT-invex) if there exists a map \(\eta : X \times X \rightarrow X\) such that

$$\begin{aligned} f(x) - f(\bar{x})&\ge f^\prime (\bar{x}) \eta (x,\bar{x}) \\ 0&\ge g_\alpha ^\prime (\bar{x}) \eta (x,\bar{x}),\quad \alpha \in A(\bar{x}), \\ 0&= h_\beta ^\prime (\bar{x}) \eta (x,\bar{x}),\quad \beta \in B, \end{aligned}$$

for all \(x,\bar{x} \in F\).

Remark 3

We remark that if there are no equality constraints, \(X\) is finite dimensional and \(A\) is a finite set, then the definition above coincides exactly with the definition given by Martin. This definition is thus a genuine generalization of Martin’s.

Notice that if the functions \(f,\ g_\alpha ,\ \alpha \in A(\bar{x}),\ h_\beta \) and \(-h_\beta ,\ \beta \in B\), are invex with a common kernel \(\eta \) at every \(\bar{x} \in F\), then (IP) is clearly KKT-invex.

The next example provides a KKT-invex problem which is not invex. In addition, it has the property that every Karush–Kuhn–Tucker solution is a global optimal solution. This shows that, although invexity is sufficient for such a property, the property may hold without the problem being invex.

Example 1

Let \(X = C([0,1]) \times C([0,1])\). Consider the problem

$$\begin{aligned} \begin{array}{ll} \text{ minimize } &{} \displaystyle {f(x,y) = \int _0^1 [1-\exp (-y(t))]dt} \\ \text{ subject } \text{ to } &{} g_\alpha (x,y) = -y(\alpha ) \le 0,\ \alpha \in [0,1], \\ &{} h_\beta (x,y) = \displaystyle {1+ \int _0^\beta (-x(t)+y(t)^2)dt - x(\beta ) = 0,\ \beta \in [0,1].} \end{array} \end{aligned}$$

Set \((\hat{x}(t),\hat{y}(t)) = (\exp (-t),0),~t\in [0,1]\). It is easy to see that \((\hat{x},\hat{y})\) is a feasible Karush–Kuhn–Tucker solution with \(\hat{\lambda } =1\), \(\hat{\mu }\) the Lebesgue measure and \(\hat{\nu }\) the null measure. Defining \(\eta = (\eta _1,\eta _2)\), with values in \(X\), as

$$\begin{aligned} \eta _1(x,y,\bar{x},\bar{y})(t)\!&= \! 2\exp (-t)\int _0^t\bar{y}(s)[\exp (s)-\exp (s-y(s)+\bar{y}(s))]ds,~t\in [0,1], \\ \eta _2(x,y,\bar{x},\bar{y})(t)&= 1 - \exp (-y(t)+\bar{y}(t)),~t\in [0,1], \end{aligned}$$

it is easily seen that the problem is KKT-invex, so that \((\hat{x},\hat{y})\) is an optimal solution.
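The feasibility of \((\hat{x},\hat{y})\) can be checked numerically; the following Python sketch uses simple trapezoidal quadrature on a uniform grid (a numerical illustration, not part of the proof):

```python
import numpy as np

# Check (by trapezoidal quadrature on a uniform grid) that
# (xhat, yhat) = (exp(-t), 0) is feasible in Example 1:
#   g_alpha(xhat, yhat) = -yhat(alpha) = 0 <= 0, and
#   h_beta(xhat, yhat) = 1 + (exp(-beta) - 1) - exp(-beta) = 0.
t = np.linspace(0.0, 1.0, 10001)
xhat, yhat = np.exp(-t), np.zeros_like(t)

def cumtrapz(y, x):
    # cumulative trapezoidal integral, starting at 0
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * np.diff(x) / 2.0)))

h = 1.0 + cumtrapz(-xhat + yhat**2, t) - xhat   # h_beta at every grid point beta
print(float(np.max(np.abs(h))))                 # essentially zero (quadrature error)

# Global optimality is also visible directly: the constraints g force y >= 0,
# hence the integrand 1 - exp(-y(t)) of f is nonnegative on the feasible set,
# so f >= 0 = f(xhat, yhat).
```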

However, this problem is not invex. In fact, if one assumes the existence of a map \(\eta = (\eta _1,\eta _2)\) such that both the objective and the constraint functions are invex with this common kernel, then

$$\begin{aligned} -y(\alpha ) + \bar{y}(\alpha ) = g_\alpha (x,y) - g_\alpha (\bar{x},\bar{y}) \ge g_\alpha ^\prime (\bar{x},\bar{y})\eta (x,y,\bar{x},\bar{y}) = -\eta _2(x,y,\bar{x},\bar{y})(\alpha ), \end{aligned}$$

for all \(\alpha \in [0,1]\), so that

$$\begin{aligned}&f(x,y) - f(\bar{x},\bar{y}) - \int _0^1 \exp (-\bar{y}(t))[y(t)-\bar{y}(t)]dt\\&\quad \ge f(x,y) - f(\bar{x},\bar{y}) - \int _0^1\exp (-\bar{y}(t))\eta _2(x,y,\bar{x},\bar{y})(t)\,dt \ge 0 \end{aligned}$$

for all \((x,y),(\bar{x},\bar{y}) \in X\). A contradiction follows for the particular choice \(y(t)=0\), \(\bar{y}(t)=t,\ t \in [0,1]\), and \(x,\bar{x}\) arbitrary.
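The contradiction can be confirmed numerically: a Python sketch (trapezoidal quadrature, for illustration only) evaluates the left-hand side for \(y=0\), \(\bar{y}(t)=t\) and obtains \(1-3/e<0\).

```python
import numpy as np

# Evaluate (by trapezoidal quadrature) the left-hand side of the inequality
# above for the particular choice y(t) = 0, ybar(t) = t (x and xbar are
# irrelevant, since f does not depend on its first argument).
t = np.linspace(0.0, 1.0, 10001)
y, ybar = np.zeros_like(t), t

def trapz(v, x):
    return float(np.sum((v[1:] + v[:-1]) * np.diff(x)) / 2.0)

f_y = trapz(1.0 - np.exp(-y), t)          # f(x, y) = 0
f_ybar = trapz(1.0 - np.exp(-ybar), t)    # f(xbar, ybar) = 1/e
linear = trapz(np.exp(-ybar) * (y - ybar), t)   # = 2/e - 1

value = f_y - f_ybar - linear
print(value)   # ~ 1 - 3/e < 0, contradicting the required inequality ">= 0"
```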

Now let us see that \((\hat{x}(t),\hat{y}(t)) = (\exp (-t),0),~t\in [0,1]\), is the only Karush–Kuhn–Tucker solution. Assume that \((\tilde{x},\tilde{y})\) is a Karush–Kuhn–Tucker solution with \(\tilde{y}(t) > 0, ~ t \in [0,1]\). Then there exist a scalar \(\tilde{\lambda } > 0\) (which we may normalize to \(\tilde{\lambda }=1\)), a positive measure \(\tilde{\mu } \in M([0,1])\) and a signed measure \(\tilde{\nu } \in M([0,1])\) such that (1)–(2) hold. Since \(\tilde{y}(t) > 0, ~ t \in [0,1]\), it follows from (2) that \(\tilde{\mu }=0\). By choosing \(\tilde{w} \in C([0,1])\) such that \(\tilde{w}(t) > 0, ~ t \in [0,1]\), setting \(\tilde{z}(t) = 2\int _0^t\exp (s-t)\tilde{y}(s)\tilde{w}(s)ds \in C([0,1])\) and substituting in (1), a contradiction follows. Therefore, in this problem, every Karush–Kuhn–Tucker solution is an optimal solution.

Theorem 3

Suppose that assumptions (H2) and (H4) are valid. Assume that the constraints of (IP) satisfy the Mangasarian–Fromovitz type constraint qualification at each \(\bar{x} \in F\) and that \(R(\bar{x})\) is closed for each \(\bar{x} \in F\). Then, every Karush–Kuhn–Tucker solution is a global optimal solution if, and only if, (IP) is Karush–Kuhn–Tucker invex.

Proof

Observe that Theorem 3 is not a direct consequence of Theorem 4.11 in [20], since here the invexity of the objective function is required in the definition of KKT-invexity, while in [20] its pseudo-invexity is required. Although not identical, the proof is very similar and will not be included. \(\square \)

4.2 Second order sufficient conditions

Now we generalize Ivanov’s definition of second order KKT-invexity to infinite programming problems, as well as his results concerning this concept of generalized invexity. Owing to the infinite-dimensional setting, the proof techniques are not all the same as in Ivanov [15].

Definition 7

Assume that (H3) is valid. We say that (IP) is second order Karush–Kuhn–Tucker invex (or second order KKT-invex) if there exist maps \(d,\eta : X \times X \rightarrow X\) and \(\omega : X \times X \rightarrow {\mathbb {R}}\), with \(d(x,y) \in D(y)\) and \(\omega (x,y) \ge 0\), obeying the following conditions

$$\begin{aligned} f(x) - f(y)&\ge f^\prime (y) \eta (x,y) + \omega (x,y) f^{\prime \prime }(y)(d(x,y),d(x,y)), \\ 0&\ge g_\alpha ^\prime (y) \eta (x,y) + \omega (x,y) g_\alpha ^{\prime \prime }(y)(d(x,y),d(x,y)),\quad \alpha \in A(y), \\ 0&= h_\beta ^\prime (y) \eta (x,y) + \omega (x,y) h_\beta ^{\prime \prime }(y)(d(x,y),d(x,y)),\quad \beta \in B, \end{aligned}$$

for all \(x,y \in F\).

Remark 4

Observe that this definition is the natural extension for infinite programming of the corresponding one given by Ivanov [15] for mathematical programming. Moreover, this notion generalizes the first order Karush–Kuhn–Tucker invexity defined in the last section. Indeed, by taking \(d(x,y)=0\in X\) or \(\omega (x,y)=0\) in the definition above we see that every KKT-invex infinite programming problem is also second order KKT-invex.

Proposition 1

Assume that (H3) holds. If \(\bar{x}\in F\) is a second order Karush–Kuhn–Tucker solution of (IP), then for each fixed \(d \in D(\bar{x})\), the system

$$\begin{aligned} \left\{ \begin{array}{l} f^{\prime }(\bar{x})\eta +\omega f^{\prime \prime }(\bar{x})(d,d)<0, \\ g_{\alpha }^{\prime }(\bar{x})\eta +\omega g_{\alpha }^{\prime \prime }(\bar{x})(d,d)\le 0,\quad \alpha \in A(\bar{x}), \\ h_{\beta }^{\prime }(\bar{x})\eta +\omega h_{\beta }^{\prime \prime }(\bar{x})(d,d) = 0,\quad \beta \in B, \end{array} \right. \end{aligned}$$
(6)

does not have any solution \((\eta ,\omega ) \in X \times {\mathbb {R}}_{+}\).

Proof

Let \(\bar{x}\in F\) be a second order Karush–Kuhn–Tucker solution of (IP). Let us suppose, on the contrary, that there exists \(d\in D(\bar{x})\) such that system (6) does have a solution \((\eta ,\omega ) \in X \times {\mathbb {R}}_{+}\).

Since \(\bar{x}\) is a second order Karush–Kuhn–Tucker solution, there exist a scalar \(\lambda > 0\), a positive measure \(\mu \in M(A)\) and a signed measure \(\nu \in M(B)\) satisfying (1)–(5). Noting that (2) implies that \(\mu \) is concentrated on \(A(\bar{x})\) (because \(g_\alpha (\bar{x}) < 0\) for \(\alpha \notin A(\bar{x})\) and \(\mu \) is positive), it follows that

$$\begin{aligned}&\lambda f^{\prime }(\bar{x})\eta +\int _{A(\bar{x})}g_{\alpha }^{\prime }(\bar{x})\eta \mu (d \alpha )+\int _{B}h_{\beta }^{\prime }(\bar{x})\eta \nu (d \beta ) + \lambda \omega f^{\prime \prime }(\bar{x})(d,d)\nonumber \\&\quad +\int _{A(\bar{x})}\omega g_{\alpha }^{\prime \prime }(\bar{x})(d,d) \mu (d \alpha ) + \int _{B}\omega h_{\beta }^{\prime \prime }(\bar{x})(d,d) \nu (d \beta ) \ge 0. \end{aligned}$$
(7)

On the other hand, from (6) one has

$$\begin{aligned}&\lambda f^{\prime }(\bar{x})\eta +\lambda \omega f^{\prime \prime }(\bar{x})(d,d)+\int _{A(\bar{x})}[g_{\alpha }^{\prime }(\bar{x})\eta +\omega g_{\alpha }^{\prime \prime }(\bar{x})(d,d)] \mu (d \alpha ) \nonumber \\&\quad +\int _{B}[h_{\beta }^{\prime }(\bar{x})\eta +\omega h_{\beta }^{\prime \prime }(\bar{x})(d,d)] \nu (d \beta ) < 0, \end{aligned}$$

which contradicts (7). \(\square \)

We set out to show that the converse of Proposition 1 holds true. Consider the auxiliary infinite programming problem stated below:

$$\begin{aligned} \begin{array}{ll} \mathrm {minimize} &{} \varphi (\eta ,\omega ) \\ \mathrm {subject~to} &{} \psi _{\alpha }(\eta ,\omega ) \le 0,\quad \alpha \in A(\bar{x}) \cup \{\alpha _{0}\}, \\ &{} \gamma _{\beta }(\eta ,\omega ) = 0,\quad \beta \in B, \end{array} \end{aligned}$$
(AP)

where \(\varphi ,\psi _\alpha ,\gamma _\beta : X \times {\mathbb {R}} \rightarrow {\mathbb {R}}\), \(\alpha \in A(\bar{x}) \cup \{\alpha _{0}\},\ \beta \in B\), are defined as

$$\begin{aligned}&\varphi (\eta ,\omega ) = f^{\prime }(\bar{x})\eta +\omega f^{\prime \prime }(\bar{x})(d,d), \\&\psi _{\alpha }(\eta ,\omega ) = \left\{ \begin{array}{l@{\quad }l} g_{\alpha }^{\prime }(\bar{x})\eta +\omega g_{\alpha }^{\prime \prime }(\bar{x})(d,d),&{} \alpha \in A(\bar{x}), \\ -\omega ,&{} \alpha =\alpha _{0}, \end{array} \right. \\&\gamma _{\beta }(\eta ,\omega ) = h_{\beta }^{\prime }(\bar{x})\eta +\omega h_{\beta }^{\prime \prime }(\bar{x})(d,d),\ \beta \in B, \end{aligned}$$

where \(\alpha _0 \notin A(\bar{x})\), \(d \in X\) is an arbitrary fixed direction and \(\bar{x} \in F\) is an arbitrary fixed point.

It is easy to see that \((\bar{\eta },\bar{\omega }) = (0,0)\) is an optimal solution of (AP) when (6) does not have any solution. Let us verify that the assumptions of Corollary 1 are satisfied.

Lemma 1

Assume that (H3) is valid. Let \(\bar{x} \in F\). Assume that the constraints of (IP) satisfy the CQ(\(d\)) condition at \(\bar{x}\) for a given direction \(d \in D(\bar{x})\). Then the constraints of (AP) also satisfy the CQ(\(d\)) condition for every \((d_1,d_2) \in D(\bar{\eta },\bar{\omega })\) at \((\bar{\eta },\bar{\omega })\).

Proof

Since CQ(\(d\)) is satisfied, one has \(R(\bar{x})=C(B)\) and the existence of a point \(\hat{z} \in X\) such that

$$\begin{aligned}&g_\alpha '(\bar{x})\hat{z} + g_\alpha ''(\bar{x})(d,d) < 0,\quad \alpha \in A, \\&h_\beta '(\bar{x})\hat{z} + h_\beta ''(\bar{x})(d,d) = 0,\quad \beta \in B. \end{aligned}$$

Let \(\varphi \in C(B)\). Then there exists \(\tilde{z} \in X\) such that \(\varphi (\beta ) = h_\beta ^\prime (\bar{x})\tilde{z}\) for all \(\beta \in B\). For \((\tilde{z}_1,\tilde{z}_2) = (\tilde{z},0) \in X \times {\mathbb {R}}\), it follows that

$$\begin{aligned} \gamma _\beta ^\prime (\bar{\eta },\bar{\omega })(\tilde{z}_1,\tilde{z}_2) = h_{\beta }^{\prime }(\bar{x})\tilde{z}_1 +\tilde{z}_2 h_{\beta }^{\prime \prime }(\bar{x})(d,d) = h_{\beta }^{\prime }(\bar{x})\tilde{z} = \varphi (\beta ) \quad \forall ~ \beta \in B, \end{aligned}$$

so that \(C(B) \subset R(\bar{\eta },\bar{\omega })\). Thus, \(R(\bar{\eta },\bar{\omega }) = C(B)\).

Let \((d_1,d_2) \in D(\bar{\eta },\bar{\omega })\). Taking \((\hat{z}_1,\hat{z}_2) = (\hat{z},1) \in X \times {\mathbb {R}}\), one obtains

$$\begin{aligned}&\psi _\alpha ^\prime (\bar{\eta },\bar{\omega })(\hat{z}_1,\hat{z_2}) + \psi _\alpha ^{\prime \prime }(\bar{\eta },\bar{\omega })((d_1,d_2),(d_1,d_2))\\&\quad = \left\{ \begin{array}{l@{\quad }l}g_\alpha '(\bar{x})\hat{z} + g_\alpha ''(\bar{x})(d,d) < 0,&{} \alpha \in A(\bar{x}), \\ -1 < 0,&{} \alpha = \alpha _0, \end{array}\right. \\&\gamma _\beta ^\prime (\bar{\eta },\bar{\omega })(\hat{z}_1,\hat{z}_2) +\gamma _\beta ^{\prime \prime }(\bar{\eta },\bar{\omega })((d_1,d_2),(d_1,d_2))\\&\quad = h_\beta '(\bar{x})\hat{z} + h_\beta ''(\bar{x})(d,d) = 0,\ \beta \in B. \end{aligned}$$

Therefore the constraints of (AP) satisfy CQ(\(d\)) for arbitrary \((d_1,d_2) \in D(\bar{\eta },\bar{\omega })\).

\(\square \)

Now we state and prove the converse of Proposition 1.

Proposition 2

Suppose that (H3) and (H5) are valid. Assume that CQ(\(d\)) is satisfied for every \(d \in D(\bar{x})\) at \(\bar{x}\), for all \(\bar{x}\in F\). Let \(\bar{x} \in F\) be such that system (6) does not have any solution \((\eta ,\omega ) \in X \times {\mathbb {R}}_{+}\), for each \(d \in D(\bar{x})\). Then \(\bar{x}\) is a second order Karush–Kuhn–Tucker solution of (IP).

Proof

Let \(d \in D(\bar{x})\) be an arbitrarily fixed critical direction. It is easily verified that \((\bar{\eta },\bar{\omega })=(0,0) \in X \times {\mathbb {R}}\) is an optimal solution of (AP) when (6) does not have any solution \((\eta ,\omega ) \in X \times {\mathbb {R}}_{+}\). By Lemma 1 we know that the constraints of (AP) satisfy CQ(\(d\)) for every \((d_1,d_2) \in D(\bar{\eta },\bar{\omega })\). Therefore we can apply Corollary 1 and get that \((\bar{\eta },\bar{\omega })\) is a second order Karush–Kuhn–Tucker solution of (AP). For \((d,0) \in X \times {\mathbb {R}}\) we have that

$$\begin{aligned} \varphi ^{\prime }(\bar{\eta },\bar{\omega })(d,0)&= f^{\prime }(\bar{x})d \le 0, \\ \psi _{\alpha }^{\prime }(\bar{\eta },\bar{\omega })(d,0)&= \left\{ \begin{array}{l@{\quad }l} g_{\alpha }^{\prime }(\bar{x})d\le 0,&{} \alpha \in A(\bar{x}), \\ 0,&{} \alpha =\alpha _{0}, \end{array} \right. \\ \gamma _{\beta }^{\prime }(\bar{\eta },\bar{\omega })(d,0)&= h_{\beta }^{\prime }(\bar{x})d = 0,~\beta \in B. \end{aligned}$$

Hence, \((d,0) \in X \times {\mathbb {R}}\) is a critical direction for (AP) at \((\bar{\eta },\bar{\omega })=(0,0)\), that is, \((d,0) \in D(\bar{\eta },\bar{\omega })\). Let us denote \(\delta = (d,0)\). Then, as \((\bar{\eta },\bar{\omega })\) is a second order Karush–Kuhn–Tucker solution, there exist a scalar \(\lambda >0\), a positive Radon measure \(\tilde{\mu } \in M(A(\bar{x}) \cup \{\alpha _{0}\})\) and a signed Radon measure \(\nu \in M(B)\) satisfying

$$\begin{aligned}&\lambda \varphi ^{\prime }(\bar{\eta },\bar{\omega })(z_{1},z_{2}) + \int _{A(\bar{x}) \cup \{\alpha _{0}\}} \psi _{\alpha }^{\prime }(\bar{\eta },\bar{\omega })(z_{1},z_{2})\,\tilde{\mu }(d\alpha ) \nonumber \\&\quad \quad +\int _{B}\gamma _{\beta }^{\prime }(\bar{\eta },\bar{\omega })(z_{1},z_{2}) \nu (d \beta ) = 0\quad \forall ~ (z_{1},z_{2}) \in X \times {\mathbb {R}}, \end{aligned}$$
(8)
$$\begin{aligned}&\int _{A(\bar{x}) \cup \{\alpha _0\}} \psi _{\alpha }(\bar{\eta },\bar{\omega }) \,\tilde{\mu }(d\alpha ) = 0, \end{aligned}$$
(9)
$$\begin{aligned}&\lambda \varphi ^{\prime \prime }(\bar{\eta },\bar{\omega })(\delta ,\delta ) + \int _{A(\bar{x}) \cup \{\alpha _{0}\}} \psi _{\alpha }^{\prime \prime }(\bar{\eta },\bar{\omega })(\delta ,\delta ) \,\tilde{\mu }(d\alpha ) \nonumber \\&\quad \quad + \int _{B} \gamma _{\beta }^{\prime \prime }(\bar{\eta },\bar{\omega })(\delta ,\delta ) \nu (d \beta ) \ge 0, \end{aligned}$$
(10)
$$\begin{aligned}&\lambda \varphi ^{\prime }(\bar{\eta },\bar{\omega })\delta = 0, \end{aligned}$$
(11)
$$\begin{aligned}&\int _{A(\bar{x}) \cup \{\alpha _{0}\}} \psi _{\alpha }^{\prime }(\bar{\eta },\bar{\omega }) \delta \,\tilde{\mu }(d\alpha ) = 0. \end{aligned}$$
(12)

In particular, letting \((z_1,z_2) = (z,0) \in X \times {\mathbb {R}}\) in (8), we obtain

$$\begin{aligned} \lambda f^{\prime }(\bar{x})z + \int _{A(\bar{x})} g_{\alpha }^{\prime }(\bar{x})z \,\tilde{\mu }(d\alpha ) + \int _{B} h_{\beta }^{\prime }(\bar{x})z \nu (d \beta ) = 0\quad \forall z\in X. \end{aligned}$$
(13)

Now set \((z_1,z_2) = (d,1) \in X \times {\mathbb {R}}\) in (8) to get

$$\begin{aligned} 0&= \lambda [f^{\prime }(\bar{x})d+f^{\prime \prime }(\bar{x})(d,d)] + \int _{A(\bar{x})}[g_{\alpha }^{\prime }(\bar{x})d+g_{\alpha }^{\prime \prime }(\bar{x})(d,d)]\,\tilde{\mu }(d\alpha ) \\&+ \int _{\{\alpha _{0}\}} (-1)\,\tilde{\mu }(d\alpha ) + \int _{B}[h_{\beta }^{\prime }(\bar{x})d+h_{\beta }^{\prime \prime }(\bar{x})(d,d)] \nu (d \beta ) \\&= \lambda f^{\prime }(\bar{x})d + \int _{A(\bar{x})} g_{\alpha }^{\prime }(\bar{x})d\,\tilde{\mu }(d\alpha ) + \int _{B} h_{\beta }^{\prime }(\bar{x})d \,\nu (d \beta ) \\&+\,\lambda f^{\prime \prime }(\bar{x})(d,d) + \int _{A(\bar{x})} g_{\alpha }^{\prime \prime }(\bar{x})(d,d)\,\tilde{\mu }(d\alpha ) + \int _{B} h_{\beta }^{\prime \prime }(\bar{x})(d,d) \nu (d \beta ) \\&-\, \tilde{\mu }(\{\alpha _{0}\}). \end{aligned}$$

Taking (13) into account, we get

$$\begin{aligned}&\lambda f^{\prime \prime }(\bar{x})(d,d) + \int _{A(\bar{x})} g_{\alpha }^{\prime \prime }(\bar{x})(d,d)\,\tilde{\mu }(d\alpha ) + \int _{B} h_{\beta }^{\prime \prime }(\bar{x})(d,d) \nu (d \beta ) \nonumber \\&\quad = \tilde{\mu }(\{\alpha _{0}\}) \ge 0, \end{aligned}$$
(14)

where the inequality holds since \(\tilde{\mu }\) is a positive Radon measure. From (11) we have

$$\begin{aligned} 0 = \lambda \varphi ^{\prime }(\bar{\eta },\bar{\omega })\delta = \lambda f^{\prime }(\bar{x})d. \end{aligned}$$
(15)

As a consequence of (12) we obtain

$$\begin{aligned} 0&= \int _{A(\bar{x}) \cup \{\alpha _{0}\}} \psi _{\alpha }^{\prime }(\bar{\eta },\bar{\omega })\delta \,\tilde{\mu }(d\alpha ) = \int _{A(\bar{x})} g_{\alpha }^{\prime }(\bar{x})d\,\tilde{\mu }(d\alpha ) + \int _{\{\alpha _{0}\}} 0\,\tilde{\mu }(d\alpha ) \nonumber \\&= \int _{A(\bar{x})} g_{\alpha }^{\prime }(\bar{x})d\,\tilde{\mu }(d\alpha ). \end{aligned}$$
(16)

It is clear that \(g_{\alpha }(\bar{x}) = 0\) for all \(\alpha \in A(\bar{x})\), so that

$$\begin{aligned} \int _{A(\bar{x})} g_{\alpha }(\bar{x})\,\tilde{\mu }(d\alpha ) = 0. \end{aligned}$$
(17)

Define

$$\begin{aligned} \mu (K)=\left\{ \begin{array}{l@{\quad }l} \tilde{\mu }(K),&{}\text{ if }~K \in {\mathcal {B}}(A(\bar{x})), \\ 0,&{} \text{ if }~K \in {\mathcal {B}}(A){\setminus } {\mathcal {B}}(A(\bar{x})). \end{array} \right. \end{aligned}$$

Then \(\mu \) is a positive Radon measure on \(A\). We have that

$$\begin{aligned}&\int _{A(\bar{x})} g_{\alpha }^{\prime }(\bar{x})z\,\tilde{\mu }(d\alpha ) = \int _{A} g_{\alpha }^{\prime }(\bar{x})z \mu (d \alpha ), \end{aligned}$$
(18)
$$\begin{aligned}&\int _{A(\bar{x})} g_{\alpha }^{\prime \prime }(\bar{x})(d,d) \,\tilde{\mu }(d\alpha ) = \int _{A}g_{\alpha }^{\prime \prime }(\bar{x})(d,d) \mu (d \alpha ), \end{aligned}$$
(19)
$$\begin{aligned}&\int _{A(\bar{x})} g_{\alpha }(\bar{x})\,\tilde{\mu }(d\alpha ) = \int _{A} g_{\alpha }(\bar{x}) \mu (d \alpha ). \end{aligned}$$
(20)

From (13)–(20) we see that \(\bar{x}\) is a second order Karush–Kuhn–Tucker solution of (IP). \(\square \)
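The zero-extension of \(\tilde{\mu }\) used in (18)–(20) can be illustrated on a discrete analogue, where Radon measures reduce to nonnegative weights on a finite index set; the index names and weights below are illustrative only, not data from the paper.

```python
# Discrete analogue of the zero-extension in the proof of Proposition 2:
# a "measure" on the active set A(x_bar) is a dict of nonnegative weights,
# and mu extends mu_tilde by assigning weight 0 to the inactive indices.
A = ["a1", "a2", "a3", "a4"]          # full index set A (illustrative)
A_active = ["a1", "a3"]               # active index set A(x_bar) (illustrative)

mu_tilde = {"a1": 2.0, "a3": 0.5}     # positive "Radon measure" on A(x_bar)
mu = {a: mu_tilde.get(a, 0.0) for a in A}   # zero-extension to all of A

# For any integrand phi, integrating against mu over A equals integrating
# against mu_tilde over A(x_bar) -- the discrete counterpart of (18)-(20).
phi = {"a1": 1.5, "a2": -7.0, "a3": 4.0, "a4": 3.0}

lhs = sum(phi[a] * mu_tilde[a] for a in A_active)   # integral over A(x_bar)
rhs = sum(phi[a] * mu[a] for a in A)                # integral over A
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```

Because the extension vanishes off the active set, every integrand, first or second order, produces the same value on both sides, which is exactly what (18)–(20) assert.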

The following proposition presents an alternative characterization of second order KKT-invexity (Definition 7).

Proposition 3

Assume that (H3) holds. (IP) is second order KKT-invex if, and only if, for each pair \(x,y\in F\) with \(f(y)<f(x)\) there exist a direction \(d(x,y)\in D(x)\) and maps \(\eta : X \times X \rightarrow X\) and \(\omega : X \times X \rightarrow {\mathbb {R}}_+\) such that

$$\begin{aligned}&f^{\prime }(x)\eta (x,y)+\omega (x,y)f^{\prime \prime }(x)(d(x,y),d(x,y))<0, \end{aligned}$$
(21)
$$\begin{aligned}&g_{\alpha }^{\prime }(x)\eta (x,y)+\omega (x,y)g_{\alpha }^{\prime \prime }(x)(d(x,y),d(x,y)) \le 0,~\alpha \in A(x), \end{aligned}$$
(22)
$$\begin{aligned}&h_{\beta }^{\prime }(x)\eta (x,y)+\omega (x,y)h_{\beta }^{\prime \prime } (x)(d(x,y),d(x,y))=0,~\beta \in B. \end{aligned}$$
(23)

Proof

Let \(x,y\in F\) with \(f(y)<f(x)\). It is straightforward to verify that a second order KKT-invex problem obeys (21)–(23).

Let us establish the converse. Suppose that, given \(x,y\in F\) with \(f(y)<f(x)\), there exist \(d(x,y)\in D(x)\), \(\omega (x,y) \ge 0\) and \(\eta (x,y) \in X\) satisfying (21)–(23). Assume, to the contrary, that (IP) is not second order KKT-invex, that is, there exist \(x,y\in F\) such that at least one of the relations

$$\begin{aligned} f(y)-f(x)&\ge f^{\prime }(x)u + v f^{\prime \prime }(x)(z,z), \end{aligned}$$
(24)
$$\begin{aligned} 0&\ge g_{\alpha }^{\prime }(x)u + v g_{\alpha }^{\prime \prime }(x)(z,z), ~\alpha \in A(x), \end{aligned}$$
(25)
$$\begin{aligned} 0&= h_{\beta }^{\prime }(x)u + v h_{\beta }^{\prime \prime }(x)(z,z), ~\beta \in B, \end{aligned}$$
(26)

fails for every triple \((u,v,z) \in X \times {\mathbb {R}}_{+} \times D(x)\). Choosing \(\bar{u}=0 \in X\), \(\bar{v} =1 \ge 0\) and \(\bar{z}=0 \in D(x)\), we see that \((\bar{u},\bar{v},\bar{z})\) satisfies (25) and (26). Hence (24) must fail, and so \(f(y)<f(x)\). By hypothesis, there exist \(d(x,y) \in D(x)\), \(\eta (x,y) \in X\) and \(\omega (x,y) \ge 0\) such that (21)–(23) hold. Fix an arbitrary \(t>0\) and take \(\hat{u}=t\eta (x,y) \in X\), \(\hat{v}=t\omega (x,y) \ge 0\) and \(\hat{z}=d(x,y) \in D(x)\). Then (22) and (23) imply that \((\hat{u},\hat{v},\hat{z})\) satisfies (25) and (26). Therefore (24) must be violated, that is,

$$\begin{aligned} f(y)-f(x) < t [f^{\prime }(x)\eta (x,y)+\omega (x,y) f^{\prime \prime }(x)(d(x,y),d(x,y))] \end{aligned}$$

for all \(t>0\), which is impossible since, by (21), the term in brackets is negative and hence the right-hand side tends to \(-\infty \) as \(t \rightarrow \infty \). Thus (IP) is second order KKT-invex. \(\square \)
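The scaling step above relies on the system (21)–(23) being positively homogeneous in \((\eta ,\omega )\): multiplying a solution by any \(t>0\) preserves the signs of all three relations. A toy finite-dimensional sketch, with made-up numbers standing in for the derivative terms, illustrates this:

```python
# Toy illustration (not data from the paper) of the scaling step in the
# proof of Proposition 3. The numbers stand in for f'(x)eta, f''(x)(d,d),
# and the corresponding constraint terms; the system (21)-(23) is
# positively homogeneous in (eta, omega), so scaling by t > 0 preserves it.
f_lin, f_quad = -3.0, 1.0      # stand-ins for f'(x)eta and f''(x)(d,d)
g_lin, g_quad = -1.0, 0.5      # one active inequality constraint
h_lin, h_quad = 4.0, -4.0      # one equality constraint

eta, omega = 1.0, 1.0          # a solution of (21)-(23) for this data
assert f_lin * eta + omega * f_quad < 0       # (21): strict inequality
assert g_lin * eta + omega * g_quad <= 0      # (22)
assert h_lin * eta + omega * h_quad == 0      # (23)

# (u, v) = (t*eta, t*omega) with z = d still satisfies (25) and (26),
# while the right-hand side of the violated (24) scales linearly in t.
for t in (0.5, 1.0, 10.0):
    u, v = t * eta, t * omega
    assert g_lin * u + v * g_quad <= 0        # (25) preserved
    assert h_lin * u + v * h_quad == 0        # (26) preserved
print("scaling preserved")
```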

We are now able to state the main result of this section.

Theorem 4

Suppose that (H3) and (H5) hold. Assume that CQ(\(d\)) is satisfied for every \(d \in D(\bar{x})\) at \(\bar{x}\), for all \(\bar{x} \in F\). Then, (IP) is second order KKT-invex if, and only if, every second order Karush–Kuhn–Tucker solution is an optimal solution of (IP).

Proof

Assume that (IP) is second order KKT-invex, but there exists a second order Karush–Kuhn–Tucker solution \(x \in F\) which is not an optimal solution of (IP), so that there exists \(y \in F\) with \(f(y)<f(x)\). By Proposition 3, there exist a direction \(d(x,y) \in D(x)\) and maps \(\eta : X \times X \rightarrow X\) and \(\omega : X \times X \rightarrow {\mathbb {R}}_+\) such that

$$\begin{aligned} \left\{ \begin{array}{l} f^{\prime }(x)\eta (x,y)+\omega (x,y)f^{\prime \prime }(x)(d(x,y),d(x,y))<0, \\ g_{\alpha }^{\prime }(x)\eta (x,y)+\omega (x,y)g_{\alpha }^{\prime \prime }(x)(d(x,y),d(x,y)) \le 0,~\alpha \in A(x), \\ h_{\beta }^{\prime }(x)\eta (x,y)+\omega (x,y)h_{\beta }^{\prime \prime }(x)(d(x,y),d(x,y))=0,~\beta \in B, \end{array} \right. \end{aligned}$$

which contradicts Proposition 1.

Conversely, assume that every second order Karush–Kuhn–Tucker solution is an optimal solution of (IP).

Let us arbitrarily fix \(x \in F\) and \(d \in D(x)\). Consider the following systems:

$$\begin{aligned} \left\{ \begin{array}{l} f^{\prime }(x)\eta +\omega f^{\prime \prime }(x)(d,d)<0, \\ g_{\alpha }^{\prime }(x)\eta +\omega g_{\alpha }^{\prime \prime }(x)(d,d) \le 0,~\alpha \in A(x), \\ h_{\beta }^{\prime }(x)\eta +\omega h_{\beta }^{\prime \prime }(x)(d,d)=0,~\beta \in B, \end{array} \right. \end{aligned}$$
(S1)

whose variable is \((\eta ,\omega ) \in X \times {\mathbb {R}}_+\), and

$$\begin{aligned} \left\{ \begin{array}{l} f(y)-f(x)<0, \\ g_{\alpha }(y)\le 0,~\alpha \in A, \\ h_{\beta }(y)=0,~\beta \in B, \end{array} \right. \end{aligned}$$
(S2)

whose variable is \(y\in X\). We claim that, if (S2) has a solution \(y\in X\), then (S1) has a solution \((\eta ,\omega ) \in X \times {\mathbb {R}}_{+}\). Indeed, if (S2) has a solution \(y\in X\), then \(x \in F\) is not an optimal solution of (IP), so that, by assumption, \(x\) is not a second order Karush–Kuhn–Tucker solution. From Proposition 2, there exists \(d\in D(x)\) such that system (6) has a solution \((\eta ,\omega )\), that is, (S1) has a solution. This proves the claim.

Therefore, if (IP) were not second order KKT-invex, then, by Proposition 3, there would exist \(x,y \in F\) with \(f(y)<f(x)\) such that (21)–(23) fail for every \(d\in D(x)\), \(\eta \in X\) and \(\omega \ge 0\), so that (S1) has no solution. We would then have that (S2) has a solution while (S1) does not, contradicting the claim above. Thus, (IP) is second order KKT-invex. \(\square \)

The next example provides a second order KKT-invex problem which is not KKT-invex.

Example 2

Let \(X = C([0,1]) \times C([0,1])\). Consider the problem

$$\begin{aligned} \begin{array}{ll} \text{ minimize } &{} \displaystyle {f(x,y) = \int _0^1 [1-x(t)]y(t)dt} \\ \text{ subject } \text{ to } &{} g_\alpha (x,y) = [y(\alpha )]^2 - 2y(\alpha ) \le 0,\ \alpha \in [0,1], \\ &{} h_\beta (x,y) = \displaystyle {x(\beta ) - \int _0^\beta y(t)dt = 0,\ \beta \in [0,1].} \end{array} \end{aligned}$$

Set, for each \(t\in [0,1]\), \((\bar{x}(t),\bar{y}(t)) = (0,0)\), \((\hat{x}(t),\hat{y}(t)) = (2t,2)\) and \((\tilde{x}(t),\tilde{y}(t)) = (t,1)\). It is easy to see that

  • \((\bar{x},\bar{y})\) is a second order Karush–Kuhn–Tucker solution with \(\bar{\lambda } =2\), \(\bar{\mu } = \ell \) and \(\bar{\nu } = 0\), and \(f(\bar{x},\bar{y}) = 0\);

  • \((\hat{x},\hat{y})\) is a second order Karush–Kuhn–Tucker solution with \(\hat{\lambda } =1/2\), \(\hat{\mu } = \ell /4\) and \(\hat{\nu } = \ell \), and \(f(\hat{x},\hat{y}) = 0\);

  • \((\tilde{x},\tilde{y})\) is a first order Karush–Kuhn–Tucker solution with \(\tilde{\lambda } =1\), \(\tilde{\mu } = 0\) and \(\tilde{\nu } = \ell \), and \(f(\tilde{x},\tilde{y}) = 1/2\).

As \(f(x,y) \ge 0\) for every feasible solution \((x,y)\), it follows that \((\bar{x},\bar{y})\) and \((\hat{x},\hat{y})\) are optimal solutions, and \((\tilde{x},\tilde{y})\) is not an optimal solution. Thus, from Theorem 3, one sees that the problem is not KKT-invex. It is not difficult to show that \((\bar{x},\bar{y})\) and \((\hat{x},\hat{y})\) are the only second order Karush–Kuhn–Tucker solutions. Therefore, from Theorem 4, one sees that the problem is second order KKT-invex.
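Example 2 lends itself to a quick numerical sanity check. The sketch below (not part of the original argument; the grid size is an arbitrary choice) verifies feasibility of the three candidate solutions and their stated objective values by the trapezoidal rule on a uniform grid.

```python
import numpy as np

# Numerical sanity check of Example 2: verify feasibility and objective
# values of (x_bar, y_bar) = (0, 0), (x_hat, y_hat) = (2t, 2) and
# (x_til, y_til) = (t, 1) by quadrature on a uniform grid over [0, 1].
n = 10_001
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

def trap(v):
    """Trapezoidal rule on the uniform grid (exact for affine integrands)."""
    return float(np.sum((v[1:] + v[:-1]) / 2.0) * dt)

def f(x, y):
    """Objective f(x, y) = integral_0^1 [1 - x(t)] y(t) dt."""
    return trap((1.0 - x) * y)

def feasible(x, y, tol=1e-9):
    g = y ** 2 - 2.0 * y                                   # g_alpha values
    # h_beta(x, y) = x(beta) - integral_0^beta y, via cumulative trapezoid
    Y = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2.0) * dt))
    h = x - Y
    return g.max() <= tol and np.abs(h).max() <= tol

bar_x, bar_y = np.zeros(n), np.zeros(n)        # (x_bar, y_bar)
hat_x, hat_y = 2.0 * t, 2.0 * np.ones(n)       # (x_hat, y_hat)
til_x, til_y = t, np.ones(n)                   # (x_til, y_til)

for name, x, y in [("bar", bar_x, bar_y), ("hat", hat_x, hat_y),
                   ("tilde", til_x, til_y)]:
    print(name, feasible(x, y), round(f(x, y), 6))
```

Since every integrand involved is affine in \(t\), the trapezoidal rule reproduces \(f(\bar{x},\bar{y}) = f(\hat{x},\hat{y}) = 0\) and \(f(\tilde{x},\tilde{y}) = 1/2\) up to floating-point rounding.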