1 Introduction

Variational and extremal principles of modern variational analysis have been widely recognized as fundamental tools for dealing with theoretical and numerical issues arising in optimization theory and its applications; see, e.g., the books [13, 14, 21] and the references therein. Despite numerous successful applications of variational principles and techniques to various classes of constrained optimization problems, some important areas remain largely underinvestigated, although advanced methods of variational analysis appear to be highly appropriate and promising for them. Among such areas we mention broad classes of constrained problems in stochastic and semi-infinite programming. We refer the reader to [6, 22] for fundamental aspects of these disciplines and to [2, 3, 7, 14, 15, 16, 17, 20] for some publications that apply variational analysis and generalized differentiation to problems of such types.

In this paper we study optimization problems given in the form

$$\begin{aligned} \begin{array}{c} \text{ minimize } \;h(x)\; \text{ subject } \text{ to }\\ x\in M(\omega )\;\text { for almost all }\;\omega \in \varOmega , \end{array} \end{aligned}$$
(1)

where \((\varOmega ,{\mathcal {A}},\mu )\) is a \(\sigma \)-finite measure space, where \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) is a measurable multifunction with closed values, and where \(h:{\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}:=(-\infty ,\infty ]\) is a lower semicontinuous (l.s.c.) extended-real-valued function. The framework of (1) is quite general and includes, among other classes, robust optimization problems, bilevel programs, and semi-infinite programs with some uncertainties in the data of \(M(\omega )\). It is obvious that problem (1) can be equivalently written in the unconstrained format:

$$\begin{aligned} \text{ minimize } \;h(x)+\delta _{M_\cap }(x)\; \text{ over } \;x\in {\mathbb {R}}^n, \end{aligned}$$
(2)

where \(M_\cap \) is the essential intersection of M defined by

$$\begin{aligned} M_{\cap }:=\big \{x\in {\mathbb {R}}^n\big |\;x\in M(\omega )\;\text { for almost all } \;\omega \in \varOmega \big \}, \end{aligned}$$
(3)

and where \(\delta _\varTheta (x)\) stands for the indicator function of the set \(\varTheta \), which is equal to 0 if \(x\in \varTheta \) and to \(\infty \) otherwise. Note that the unconstrained problem (2) is intrinsically nonsmooth, even when h is a smooth function. Typically, necessary optimality conditions for local minimizers of (2) are formulated as

$$\begin{aligned} 0\in \partial h(x)+N(x;M_\cap ) \end{aligned}$$

via appropriate subdifferential and normal cone notions under suitable qualification conditions. To proceed efficiently in this direction, we have to select adequate subdifferential and normal cone constructions and to be able to calculate (or at least to estimate from above) the normal cone to sets of type (3). To the best of our knowledge, this has not been done in the literature, except for the cases where \(\varOmega \) consists of finitely or countably many points.
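As a simple illustration of the framework (1)–(3), take, for instance, \(\varOmega :=[0,1]\) with the Lebesgue measure, \(n=1\), \(M(\omega ):=[\omega ,\infty )\), and \(h(x):=x\). Then

$$\begin{aligned} M_{\cap }=\big \{x\in {\mathbb {R}}\big |\;x\ge \omega \;\text { for almost all }\;\omega \in [0,1]\big \}=[1,\infty ), \end{aligned}$$

the (unique) minimizer in (1) is \(\bar{x}=1\), and the above optimality format reads as \(0\in \partial h(1)+N(1;M_\cap )=\{1\}+(-\infty ,0]\).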

The main goals of this paper are to establish efficient calculus rules for the regular and limiting normal cones (see Sect. 2 for the definitions) to the set \(M_\cap \) from (3) generated by measurable multifunctions and then to apply the obtained results to deriving necessary optimality conditions in general constrained problems of stochastic and semi-infinite programming. These issues happen to be very challenging, and we accomplish our goals by establishing new extremal principles for measurable multifunctions that are certainly of independent interest, besides the applications presented below. Our developments in this direction follow the lines of [18] (see also [14]), where the notion of extremality and appropriate versions of the extremal principle were given for countable systems of sets. In the case of finitely many sets, these notions and results reduce to those originating in [10], which have since been extensively developed and applied in variational analysis and optimization; see, e.g., [13, 14] with comprehensive commentaries and references therein. Note that the sequential extremal principle obtained below for measurable multifunctions is new even for systems of countably many sets. The latter setting corresponds to (3), where the set \(\varOmega \) consists of countably many points with the measure \(\mu \) being atomic at these points. The case of an arbitrary measure \(\mu \) and a \(\mu \)-measurable multifunction M in (3) defined on an arbitrary set \(\varOmega \) allows us to cover in the framework of (1) general problems of stochastic programming, which has never been done before, and also to significantly extend the applications of [19] from countable to general constraint systems in nonsmooth and nonconvex semi-infinite programming.

The rest of the paper is organized as follows. In Sect. 2 we present some constructions and preliminaries from variational analysis and generalized differentiation that are widely used below.

Section 3 is devoted to the introduction and study of new concepts of extremality for measurable multifunctions and deriving extremal principles for them. We establish two extremal principles that play crucial roles in deriving the subsequent calculus rules and applications. The first extremal principle addresses general measurable multifunctions with closed values and is expressed in the sequential/approximating form via regular normals at nearby random points. The second principle concerns measurable cone-valued multifunctions extremal at the origin and is given in the exact form, i.e., it is expressed in terms of the limiting normal cone exactly at the origin in \(L^p(\varOmega ,{\mathbb {R}}^n)\), \(1\le p<\infty \), as the extremal point. The statements of both extremal principles involve integrals over \(\varOmega \) with respect to the given measure on \(\varOmega \).

In Sect. 4 we develop a variational approach, based on employing the obtained extremal principles and related variational results, to derive integral representations and upper estimates of regular and limiting normals to essential intersections of measurable multifunctions, with the main results obtained here for cone-valued measurable mappings.

The next Sect. 5 extends this approach to evaluating the normal cones to essential intersections (3) of arbitrary measurable multifunctions with closed values in finite-dimensional spaces by involving, in addition, an appropriate extension of the so-called conical hull intersection property (CHIP) to measurable multifunctions, which is introduced and studied in that section. A typical calculus rule of this type is given by

$$\begin{aligned} N(\bar{x};M_\cap )\subset \mathrm{cl}\Big (\int _{\varOmega } N \big (\bar{x};M(\omega )\big )d\mu (\omega )\Big ) \end{aligned}$$

in terms of the closure of the Aumann integral of set-valued mappings. The obtained calculus results are crucial for the subsequent applications.

Section 6 is devoted to applications of the results developed above to general problems of stochastic programming. First we derive necessary optimality conditions for nonsmooth and nonconvex stochastic programs with random constraints described by measurable set-valued mappings. Then we specify these conditions in the case of stochastic programs with inequality constraints under appropriate constraint qualifications. All the obtained qualification and optimality conditions are expressed in terms of limiting normals and subgradients calculated precisely at the local minimizer in question.

Section 7 concerns general problems of semi-infinite programming with nonsmooth and nonconvex data and index sets given by an arbitrary metric space. Similarly to Sect. 6, we derive pointwise necessary optimality conditions for such problems considering first programs with set-valued constraints and then specifying the results in the case of infinite inequality systems.

2 Preliminaries from variational analysis

In this section we present some preliminaries from variational analysis and generalized differentiation that are broadly used in what follows. Our notation and terminology are standard; see, e.g., [14, 21]. Recall that \({\mathbb {B}}\) stands for the closed unit ball of the finite-dimensional Euclidean space in question, that \({\mathbb {B}}_r(x):=x+r{\mathbb {B}}\) for \(x\in {\mathbb {R}}^n\) and \(r>0\), and that \({\mathbb {N}}:=\{1,2,\ldots \}\). Given a nonempty set \(\varTheta \subset {\mathbb {R}}^n\), we use the symbols \(\mathrm{int}\,\varTheta \), \(\mathrm{ri}\,\varTheta \), \(\mathrm{cl}\,\varTheta \), \(\mathrm{co}\,\varTheta \), and \(\mathrm{cone}\,\varTheta \) to denote the interior, relative interior, closure, convex hull, and conic hull of \(\varTheta \), respectively. The symbol \(^*\) indicates the duality correspondence. In particular, \(\varTheta ^*:=\{v\in {\mathbb {R}}^n|\;\langle v,x\rangle \le 0\; \text{ for } \text{ all } \;x\in \varTheta \}\), and \(A^*\) stands for the matrix transposition (adjoint operator). The distance function of \(\varTheta \) is denoted by \(d_\varTheta (x):=\inf _{u\in \varTheta }\Vert x-u\Vert \) for all \(x\in {\mathbb {R}}^n\).

Given further a set-valued mapping \(F:{\mathbb {R}}^n\rightrightarrows {\mathbb {R}}^m\), define the (Painlevé–Kuratowski) outer limit of F as \(x\rightarrow \bar{x}\) by

$$\begin{aligned} \mathop {\mathrm{Lim}\,\mathrm{sup}}_{x\rightarrow \bar{x}}F(x):=\big \{y\in {\mathbb {R}}^m\big |\;\exists \,x_k\rightarrow \bar{x}, \;y_k\rightarrow y\; \text{ with } \;y_k\in F(x_k),\;k\in {\mathbb {N}}\big \}. \end{aligned}$$

In this paper we use the following collections of generalized normals to arbitrary sets. The (Fréchet) regular normal cone to \(\varTheta \) at \(\bar{x}\in \varTheta \) is defined by

$$\begin{aligned} \widehat{N}(\bar{x};\varTheta ):=\Big \{x^*\in {\mathbb {R}}^n\Big |\;\limsup _{x{\mathop {\rightarrow }\limits ^{\varTheta }}\bar{x}} \frac{\langle x^*,x-\bar{x}\rangle }{\Vert x-\bar{x}\Vert }\le 0\Big \}, \end{aligned}$$
(4)

where the symbol \(x{\mathop {\rightarrow }\limits ^{\varTheta }}\bar{x}\) means that \(x\rightarrow \bar{x}\) with \(x\in \varTheta \). The (Mordukhovich) basic/limiting normal cone to \(\varTheta \) at \(\bar{x}\in \varTheta \) is defined by

$$\begin{aligned} N(\bar{x};\varTheta ):=\mathop {\mathrm{Lim}\,\mathrm{sup}}_{x{\mathop {\rightarrow }\limits ^{\varTheta }}\bar{x}}\widehat{N}(x;\varTheta ). \end{aligned}$$
(5)

Recall the well-known duality relation \(\widehat{N}(\bar{x};\varTheta )=T^*(\bar{x};\varTheta )\) between (4) and the (Bouligand–Severi) tangent/contingent cone to \(\varTheta \) at \(\bar{x}\) given by

$$\begin{aligned} T(\bar{x};\varTheta ):=\mathop {\mathrm{Lim}\,\mathrm{sup}}\limits _{\tau \downarrow 0}\frac{\varTheta -\bar{x}}{\tau }. \end{aligned}$$

Note that, due to its nonconvexity, the limiting normal cone (5) cannot be dual to any tangential approximation of \(\varTheta \) at \(\bar{x}\). Nevertheless, the normal cone (5) and the associated subdifferential and coderivative constructions for functions and mappings enjoy comprehensive calculus rules based on variational/extremal principles of variational analysis; see [13, 14, 21]. The set \(\varTheta \) is called normally regular at \(\bar{x}\in \varTheta \) if \(\widehat{N}(\bar{x};\varTheta )=N(\bar{x};\varTheta )\).
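To illustrate constructions (4) and (5), consider, for instance, the nonconvex set \(\varTheta :=\{(x_1,x_2)\in {\mathbb {R}}^2|\;x_2\ge -|x_1|\}\) at \(\bar{x}=(0,0)\). A direct computation gives us

$$\begin{aligned} \widehat{N}(\bar{x};\varTheta )=\{(0,0)\}\;\text { while }\;N(\bar{x};\varTheta )=\big \{t(1,-1)\big |\;t\ge 0\big \}\cup \big \{t(-1,-1)\big |\;t\ge 0\big \}, \end{aligned}$$

so this set is not normally regular at \(\bar{x}\), while every convex set is normally regular at each of its points.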

Let \(f:{\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) be an extended-real-valued function with the domain \(\mathrm{dom}\,f:=\{x\in {\mathbb {R}}^n|\;f(x)<\infty \}\) and the epigraph \(\mathrm{epi}\,f:=\{(x,\alpha )\in {\mathbb {R}}^{n+1}|\;\alpha \ge f(x)\}\). The (Fréchet) regular subdifferential of f at \(\bar{x}\in \mathrm{dom}\,f\) is given by

$$\begin{aligned} \widehat{\partial } f(\bar{x}):=\big \{x^*\in {\mathbb {R}}^n\big |\;(x^*,-1)\in \widehat{N}\big ((\bar{x},f(\bar{x}));\mathrm{epi}\,f\big )\big \}. \end{aligned}$$
(6)

Using now the limiting normal cone (5), we define the limiting subdifferential constructions known as the (Mordukhovich) basic subdifferential and singular subdifferential of f at \(\bar{x}\in \mathrm{dom}\,f\), respectively:

$$\begin{aligned} \partial f(\bar{x}):=\big \{x^*\in {\mathbb {R}}^n\big |\;(x^*,-1)\in N \big ((\bar{x},f(\bar{x}));\mathrm{epi}\,f\big )\big \}, \end{aligned}$$
(7)
$$\begin{aligned} \partial ^\infty f(\bar{x}):=\big \{x^*\in {\mathbb {R}}^n\big |\;(x^*,0) \in N\big ((\bar{x},f(\bar{x}));\mathrm{epi}\,f\big )\big \}. \end{aligned}$$
(8)

The construction \(\widehat{\partial }^\infty f(\bar{x})\) is defined similarly to (8) by using \(\widehat{N}\) therein.

If f is convex, then (6) and (7) reduce to the subdifferential of convex analysis. If f is l.s.c. around \(\bar{x}\), then the condition \(\partial ^\infty f(\bar{x})=\{0\}\) fully characterizes the local Lipschitz continuity of f around this point. We refer the reader to the books [13, 14, 21] and the bibliographies therein for various results and applications of the subdifferential constructions (6)–(8) including full calculi for the limiting ones (7) and (8).
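For instance, for \(f(x):=|x|\) we have \(\partial f(0)=[-1,1]\) and \(\partial ^\infty f(0)=\{0\}\), in accordance with the Lipschitz continuity of this function around the origin, while for the non-Lipschitzian function \(f:=\delta _{[0,\infty )}\) we get

$$\begin{aligned} \partial \delta _{[0,\infty )}(0)=\partial ^\infty \delta _{[0,\infty )}(0)=N\big (0;[0,\infty )\big )=(-\infty ,0]. \end{aligned}$$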

Next we consider a complete \(\sigma \)-finite measure space \(({\varOmega },{\mathcal {A}},\mu )\) with \(\mu (\varOmega )>0\). For any \(p\in [1,\infty ]\), denote by \(\Vert \cdot \Vert _p\) the norm of the classical Lebesgue space \(L^p(\varOmega ,{\mathbb {R}}^n)\). A set-valued mapping \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) is said to be measurable if for every open set \(U\subset {\mathbb {R}}^n\) the inverse image \(M^{-1}(U)\) is measurable, i.e., \(M^{-1}(U)\in {\mathcal {A}}\). The essential intersection \(M_{\cap }\) of M was defined in (3). Recall also that the (Aumann) integral of M over \(A\in {\mathcal {A}}\) is given by

$$\begin{aligned} \int _A M(\omega )d\mu (\omega ):=\left\{ \int _A x^*(\omega )d \mu (\omega )\Bigg |x^*\in {L}^1({\varOmega },{\mathbb {R}}^n)\text { and } x^*(\omega )\in M(\omega )\text { a.e.}\right\} . \end{aligned}$$

Let us now formulate two known results on subdifferentiation of integral functionals needed in what follows. The first result is classical in convex analysis of integral functionals; see, e.g., [21, Chapter 14]. A mapping \(f:\varOmega \times {\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) is called a normal integrand if it is \({\mathcal {A}}\otimes {\mathcal {B}} ({\mathbb {R}}^n)\)-measurable (where \({\mathcal {B}}({\mathbb {R}}^n)\) is the Borel \(\sigma \)-algebra, i.e., the \(\sigma \)-algebra generated by all open sets of \({\mathbb {R}}^n\)), and if \(f_\omega :=f(\omega ,\cdot )\) is l.s.c. for every \(\omega \in {\varOmega }\). If in addition \(f_\omega \) is convex for all \(\omega \in \varOmega \), then it is said to be a convex normal integrand.

Proposition 2.1

(generalized Leibniz rule for convex integrals) Given a convex normal integrand \(f:\varOmega \times {\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\), define the integral \({E}_{f}(x):=\int _\varOmega f(\omega ,x)d\mu (\omega )\). If \(\bar{x}\) is a point where \({E}_{f}\) is continuous, then we have

$$\begin{aligned} \partial {E}_{f}(\bar{x})=\int _\varOmega \partial f_\omega (\bar{x})d\mu (\omega ). \end{aligned}$$
(9)

Hence \({E}_{f}\) is differentiable at \(\bar{x}\) if the right-hand side of (9) is a singleton.
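As a simple illustration of (9), let \(\varOmega :=[0,1]\) with the Lebesgue measure and \(f(\omega ,x):=|x-\omega |\) for \(x\in {\mathbb {R}}\). Then \({E}_{f}\) is finite and continuous on \({\mathbb {R}}\) with \({E}_{f}(x)=\frac{x^2}{2}+\frac{(1-x)^2}{2}\) on \([0,1]\), and for every \(\bar{x}\in (0,1)\) we have

$$\begin{aligned} \int _\varOmega \partial f_\omega (\bar{x})d\mu (\omega )=\Big \{\int _0^{\bar{x}}1\,d\omega +\int _{\bar{x}}^1(-1)\,d\omega \Big \}=\{2\bar{x}-1\}=\big \{\nabla {E}_{f}(\bar{x})\big \}. \end{aligned}$$

The right-hand side of (9) is a singleton here, since the kink of \(f_\omega \) at \(x=\omega \) occurs only on the null set \(\{\omega =\bar{x}\}\), and so \({E}_{f}\) is indeed differentiable at \(\bar{x}\).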

The second result has been recently established in [4]; it provides a sequential evaluation of regular subgradients of integral functionals involving nonconvex normal integrands.

Proposition 2.2

(sequential subdifferentiation of nonconvex integral functionals) Let \(\mu \) be a finite measure on \(\varOmega \), and let \(f:\varOmega \times {\mathbb {R}}^n\rightarrow [0,\infty ]\) be a normal integrand. Take \(p,q\in [1,\infty ]\) with \(1/p+1/q=1\). Then for every \(x^*\in \widehat{\partial }{E}_{f}(\bar{x})\) with \(\bar{x}\in \mathrm{dom}\,{E}_{f}\) there exist sequences \(y_k\in {\mathbb {R}}^n\), \(x_k\in {L}^p(\varOmega ,{\mathbb {R}}^n )\), and \(x_k^*\in {L}^q(\varOmega ,{\mathbb {R}}^n)\) such that the following hold as \(k\rightarrow \infty \):

(i):

\(x_k^*(\omega )\in \widehat{\partial }f(\omega ,x_k(\omega ))\) a.e., \(\Vert \bar{x}-y_k\Vert \rightarrow 0\), \(\Vert \bar{x}-x_k(\cdot )\Vert _p\rightarrow 0\);

(ii):

\(\displaystyle \int _{\varOmega }\Vert x_k^*(\omega ) \Vert \cdot \Vert x_k(\omega )-y_k\Vert d\mu (\omega )\rightarrow 0\), \(\displaystyle \int _{\varOmega }\langle x_k^*(\omega ),x_k(\omega )-\bar{x}\rangle d\mu (\omega )\rightarrow 0\);

(iii):

\(\displaystyle \int _{\varOmega }x_k^*(\omega )d\mu (\omega )\rightarrow x^*\), \(\displaystyle \int _{\varOmega }|f(\omega ,x_k(\omega ))-f(\omega ,\bar{x})|d\mu (\omega )\rightarrow 0\).

The final result of this section provides simple subdifferential relations concerning the distance functions to cones.

Proposition 2.3

(subdifferentiation of distance functions for cones) Let \(K\subset {\mathbb {R}}^n\) be a closed cone. Then we have the inclusions

$$\begin{aligned} \widehat{\partial } d_K(\bar{x})\subset \partial d_K(0)\; \text{ for } \text{ all } \;\bar{x}\in {\mathbb {R}}^n\; \text{ and } \;\widehat{N}(\bar{x};K)\subset N(0;K) \; \text{ for } \text{ all } \;\bar{x}\in K. \end{aligned}$$

Proof

Picking \(x^*\in \widehat{\partial } d_K(\bar{x})\) gives us

$$\begin{aligned} \liminf _{x\rightarrow \bar{x}}\frac{d_K(x)-d_K(\bar{x})-\langle x^*,x-\bar{x}\rangle }{\Vert x-\bar{x}\Vert }\ge 0. \end{aligned}$$

Since the mapping \(x\mapsto d_K(x)\) is positively homogeneous, for every \(s>0\) we get

$$\begin{aligned} \frac{d_K(sx)-d_K(s\bar{x})-\langle x^*,sx-s\bar{x}\rangle }{\Vert sx-s\bar{x}\Vert } =\frac{d_K(x)-d_K(\bar{x})-\langle x^*,x-\bar{x}\rangle }{\Vert x-\bar{x}\Vert }, \end{aligned}$$

which implies by denoting \(\tilde{x}:=sx\) the following inequality:

$$\begin{aligned} \liminf _{\tilde{x}\rightarrow s\bar{x}}\frac{d_K(\tilde{x})-d_K(s\bar{x}) -\langle x^*,\tilde{x}-s\bar{x}\rangle }{\Vert \tilde{x}-s\bar{x}\Vert }\ge 0 \end{aligned}$$

that ensures in turn that \(x^*\in \hat{\partial }d_K(s\bar{x})\). By passing to the limit \(s\rightarrow 0\), we readily arrive at \(x^*\in \partial d_K(0)\).

The second claimed inclusion follows from the relationships between the regular and limiting subdifferentials of the distance function and the corresponding normal cones; see, e.g., [13, Corollary 1.96 and Theorem 1.97]. \(\square \)
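For instance, for the closed cone \(K:=\{(t,0)\in {\mathbb {R}}^2|\;t\ge 0\}\) and any \(\bar{x}=(t,0)\) with \(t>0\) we have

$$\begin{aligned} \widehat{N}(\bar{x};K)=\{0\}\times {\mathbb {R}}\subset \big \{x^*\in {\mathbb {R}}^2\big |\;x^*_1\le 0\big \}=N(0;K), \end{aligned}$$

and the latter inclusion is strict, reflecting the fact that the limiting normal cone at the origin collects regular normals at all nearby points of K.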

3 Extremal principles for measurable set-valued mappings

The concept of extremality for finitely many sets and the extremal principle for them were first formulated by Kruger and Mordukhovich [10]; see also [12], where this notion was coined, and [13, 14] for further developments, references, and applications. Such an extremal principle, formulated via the limiting normal cone (5), can be viewed as a far-reaching variational counterpart of the classical separation theorem in the case of nonconvex sets. Various extensions of this extremal principle to countably many sets can be found in [9, 14, 18, 19].

Following the line of [18], we introduce a new notion of extremality for measurable mappings and obtain an extremal principle for this notion.

Definition 3.1

(local extremality for set-valued mappings) Consider a measure space \((\varOmega ,{\mathcal {A}},\mu )\) and a measurable set-valued mapping \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\), and let \(M_\cap \) be taken from (3). Then \(M(\cdot )\) is said to be locally extremal at \(\bar{x}\in M_\cap \) in \(L^p(\varOmega ,{\mathbb {R}}^n)\) with some \(p\in (1,\infty )\) if there exist a sequence of functions \(a_k\in L^p(\varOmega ,{\mathbb {R}}^n)\) with \(\Vert a_k(\cdot )\Vert _p\rightarrow 0\) as \(k\rightarrow \infty \) and an (open) neighborhood U of \(\bar{x}\) such that for all \(k\in {\mathbb {N}}\) we have

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e. }}\big (M(\omega )-a_k(\omega )\big )\cap U=\emptyset , \end{aligned}$$
(10)

where the notation in the left-hand side of (10) means that

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e.} }\big (M(\omega )-a_k(\omega )\big ) \cap U:=\big \{x\in U\big |\;x\in M(\omega )-a_k(\omega )\text { a.e.} \big \}. \end{aligned}$$
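To illustrate Definition 3.1, consider, for instance, \(\varOmega :=[0,1]\) with the Lebesgue measure and the measurable multifunction

$$\begin{aligned} M(\omega ):=\left\{ \begin{array}{cc} (-\infty ,0]&{}\text { if }\;\omega \in [0,1/2],\\ \,[0,\infty )&{}\text { if }\;\omega \in (1/2,1], \end{array}\right. \;\text { so that }\;M_\cap =\{0\}. \end{aligned}$$

Taking \(a_k(\omega ):=k^{-1}\) on \([0,1/2]\) and \(a_k(\omega ):=0\) on \((1/2,1]\) gives us \(\Vert a_k(\cdot )\Vert _p\rightarrow 0\), while \(M(\omega )-a_k(\omega )\) equals \((-\infty ,-k^{-1}]\) on \([0,1/2]\) and \([0,\infty )\) on \((1/2,1]\), whose essential intersection is empty. Thus \(M(\cdot )\) is locally extremal at \(\bar{x}=0\), and the nonoverlapping condition (11) imposed in Theorem 3.2 below holds with any neighborhood U of the origin. The conclusions of that theorem are realized here with \(x_k(\omega )\equiv 0\) and \(x_k^*(\omega )=1\) on \([0,1/2]\), \(x_k^*(\omega )=-1\) on \((1/2,1]\).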

The crucial result of this paper establishes necessary conditions for extremality in the sense of Definition 3.1. This novel extremal principle for measurable multifunctions is basic for the subsequent applications to generalized differential calculus of integral functionals derived via a variational approach, as well as to necessary conditions for general constrained problems of stochastic and semi-infinite programming. It is expressed in terms of sequences and involves regular normals to the values of the given set-valued mapping \(M(\cdot )\). Note that the extremal principle of the following theorem is new even for the case of countably many sets considered in [14, 18], where this result was not established. In the case of finitely many sets, the obtained sequential extremal principle can be equivalently reduced to the exact one given in [10, 13, 14].

Theorem 3.2

(sequential extremal principle) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a closed-valued measurable multifunction with respect to a finite measure \(\mu \). Assume that M is locally extremal at \(\bar{x}\) in \(L^p(\varOmega ,{\mathbb {R}}^n)\) with some \(p\in (1,\infty )\), and that the following nonoverlapping condition holds at \(\bar{x}\): there exists a neighborhood U of \(\bar{x}\) such that

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e.}}M(\omega )\cap U=\{\bar{x}\}. \end{aligned}$$
(11)

Then we get the sequential extremal principle in \(L^p(\varOmega ,{\mathbb {R}}^n)\) meaning that there exist sequences of \(x_k^*\in L^q(\varOmega ,{\mathbb {R}}^n)\) and \(x_k\in L^p(\varOmega ,{\mathbb {R}}^n)\) satisfying the conditions \(x_k^*(\omega )\in \widehat{N}(x_k(\omega );M(\omega ))\) a.e., \(\Vert x_k(\cdot )-\bar{x}\Vert _p\rightarrow 0\) as \(k\rightarrow \infty \),

$$\begin{aligned} \int _\varOmega x_k^*(\omega )d\mu (\omega )=0,\;\text { and }\;\Vert x_k^*\Vert _q=1 \; \text{ for } \text{ all } \;k\in {\mathbb {N}}, \end{aligned}$$

where \(\frac{1}{p}+\frac{1}{q}=1\). Furthermore, we can find \(\varepsilon _k\downarrow 0\) such that

$$\begin{aligned} \Vert x_k(\omega )-\bar{x}\Vert \le 2\Vert a_k(\omega )\Vert + \varepsilon _k \;\text { a.e. }\;\omega \in \varOmega ,\;k\in {\mathbb {N}}. \end{aligned}$$
(12)

Proof

For each \(k\in {\mathbb {N}}\) define the function

$$\begin{aligned} \varphi _k(x):=\int _{\varOmega }d^p_{M(\omega )}\big (x+a_k(\omega )\big ) d\mu (\omega )+\delta _{\mathrm{cl}\,U}(x),\quad x\in {\mathbb {R}}^n, \end{aligned}$$
(13)

where \(a_k\in L^p(\varOmega ,{\mathbb {R}}^n)\) and the neighborhood U of \(\bar{x}\) are taken from Definition 3.1. We also assume that U is the one for which the nonoverlapping condition (11) holds and, without loss of generality, that U is bounded. Let us split the subsequent proof into six claims.

Claim 1

For each \(k\in {\mathbb {N}}\) the function \(\varphi _k\) from (13) is proper, l.s.c., and attains its minimum on \({\mathbb {R}}^n\).

To verify the claim, observe first that, since \(\bar{x}\in M(\omega )\) for a.e. \(\omega \in \varOmega \), we have

$$\begin{aligned} \varphi _k(\bar{x})\le \int _{\varOmega }\Vert a_k(\omega )\Vert ^p d\mu (\omega )<\infty . \end{aligned}$$
(14)

Furthermore, for fixed \(k\in {\mathbb {N}}\) and for any sequence \(z_j\rightarrow x\) we get by using Fatou’s lemma that

$$\begin{aligned} \varphi _k(x)&=\int _{\varOmega }d_{M(\omega )}^p\big (x+a_k(\omega )\big ) d\mu (\omega )+\delta _{\mathrm{cl}\,U}(x)\\&\le \liminf _{j\rightarrow \infty }\left( \int _{\varOmega } d^p_{M(\omega )} \big (z_j+a_k(\omega )\big )d\mu (\omega )\right) +\liminf _{j\rightarrow \infty } \delta _{\mathrm{cl}\,U}( z_j)\\&\le \liminf _{j\rightarrow \infty }\varphi _k(z_j), \end{aligned}$$

which shows that the function \(\varphi _k\) is proper and l.s.c. Since U is bounded, it follows that \(\varphi _k\) attains its minimum on \({\mathbb {R}}^n\).

Claim 2

Let \(\hat{x}_k\) be a minimizer of \(\varphi _k\). Then \(\varphi _k(\hat{x}_k)>0\), \(\hat{x}_k\rightarrow \bar{x}\), and \(\varphi _k(\hat{x}_k)\rightarrow 0\) as \(k\rightarrow \infty \).

Indeed, due to the construction of \(\varphi _k\) in (13) we have \(\hat{x}_k\in \mathrm{cl}\,U\) for all \(k\in {\mathbb {N}}\), which yields the boundedness of \(\{\hat{x}_k\}\). Moreover, it follows from the extremality condition (10) that \(\varphi _k(\hat{x}_k)>0\) for all \(k\in {\mathbb {N}}\), since otherwise we would have \(\hat{x}_k\in M(\omega )-a_k(\omega )\) for almost all \(\omega \in \varOmega \), a contradiction. Considering now a cluster point \(\hat{x}\) of \(\{\hat{x}_k\}\), we assume without relabeling that \(\hat{x}_k\rightarrow \hat{x}\) and \(a_k(\omega )\rightarrow 0\) a.e. (recall that \(\Vert a_k(\cdot )\Vert _p\rightarrow 0\)) as \(k\rightarrow \infty \); therefore

$$\begin{aligned} d_{M(\omega )}^p(\hat{x})=\liminf _{k\rightarrow \infty }d^p_{M(\omega )} \big (\hat{x}_{k}+a_{k}(\omega )\big ). \end{aligned}$$

Hence, by employing Fatou’s lemma again, we get

$$\begin{aligned} \int _{\varOmega }d_{M(\omega )}^p(\hat{x})d\mu (\omega )&\le \int _{\varOmega }\liminf _{k\rightarrow \infty }d^p_{M(\omega )}\big (\hat{x}_{k} +a_{k}(\omega )\big )d\mu (\omega )\\&\le \liminf _{k\rightarrow \infty }\int _{\varOmega }d^p_{M(\omega )} \big (\hat{x}_{k}+a_{k}(\omega )\big )d\mu (\omega )\le \liminf _{k\rightarrow \infty } \varphi _{k}(\hat{x}_{k})\\&\le \liminf _{k\rightarrow \infty }\varphi _k(\bar{x})\le \lim _{k\rightarrow \infty }\int _{\varOmega } \Vert a_k(\omega )\Vert ^p d\mu (\omega )=0, \end{aligned}$$

where in the last line we used (14) and the fact that \(\Vert a_k(\cdot )\Vert _p \rightarrow 0\). This implies that \(\varphi _{k}(\hat{x}_k)\rightarrow 0\) and \(\hat{x}\in M(\omega )\) for almost all \(\omega \in \varOmega \), which ensures by the nonoverlapping condition (11) that \(\hat{x}=\bar{x}\). From now on we suppose without loss of generality that \(\hat{x}_k\in U\) for all \(k\in {\mathbb {N}}\).

Claim 3

There exists a sequence of measurable selections \(x_k(\omega )\in M(\omega )\) such that for all \(k\in {\mathbb {N}}\) we have

$$\begin{aligned} \Vert \hat{x}_k+a_k(\omega )-x_k(\omega )\Vert =d_{M(\omega )}\big (\hat{x}_k +a_k(\omega )\big )\;\text { for a.e. }\;\omega \in \varOmega , \end{aligned}$$
(15)

\(x_k\in L^p(\varOmega ,{\mathbb {R}}^n)\), and \(\Vert x_k(\cdot )-\bar{x}\Vert _p\rightarrow 0\) as \(k\rightarrow \infty \) with estimate (12).

Indeed, it follows from, e.g., [21, Theorem 14.37] that for each \(k\in {\mathbb {N}}\) there exists a measurable selection \(x_k(\omega )\in M(\omega )\) satisfying (15). Furthermore

$$\begin{aligned} \Vert \bar{x}-x_k(\omega )\Vert&\le \Vert \hat{x}_k+a_k(\omega )- x_k(\omega )\Vert +\Vert \hat{x}_k-\bar{x}\Vert +\Vert a_k(\omega )\Vert \\&=d_{M(\omega )}\big (\hat{x}_k+a_k(\omega )\big )+\Vert \hat{x}_k-\bar{x}\Vert +\Vert a_k(\omega )\Vert \\&\le \Vert \hat{x}_k+a_k(\omega )-\bar{x}\Vert +\Vert \hat{x}_k-\bar{x}\Vert +\Vert a_k(\omega )\Vert \\&\le 2\Vert \hat{x}_k-\bar{x}\Vert +2\Vert a_k(\omega )\Vert \; \text{ for } \text{ almost } \text{ all } \;\omega \in \varOmega . \end{aligned}$$

This readily yields estimate (12) considering \(\varepsilon _k:=2\Vert \hat{x}_k-\bar{x}\Vert \), and also ensures that

$$\begin{aligned} \int _{\varOmega }\Vert \bar{x}-x_k(\omega )\Vert ^p d\mu (\omega ) \le 2^{2p-1} \left( \Vert \hat{x}_k-\bar{x}\Vert ^p\mu (\varOmega )+\int _{\varOmega }\Vert a_k(\omega )\Vert ^p d\mu (\omega )\right) , \end{aligned}$$

which verifies the claimed properties of the measurable selections \(x_k(\omega )\).

Claim 4

For each \(k\in {\mathbb {N}}\) the function

$$\begin{aligned} \psi _k(x):=\int _{\varOmega }\psi _{\omega ,k}(x)d\mu (\omega ) +\delta _{\mathrm{cl}U}(x)\; \text{ with } \;\psi _{\omega ,k}(x) :=\Vert x+a_k(\omega )-x_k(\omega )\Vert ^p \end{aligned}$$

admits a minimizer \(\hat{x}_k\) over the whole space \({\mathbb {R}}^n\).

To verify this claim, observe that

$$\begin{aligned} \psi _k(x)\ge \varphi _k(x)\ge \varphi _k(\hat{x}_k)=\psi _k(\hat{x}_k) \; \text{ for } \text{ all } \;x\in {\mathbb {R}}^n, \end{aligned}$$

which tells us that \(\hat{x}_k\) is a minimizer of \(\psi _k\) on \({\mathbb {R}}^n\).

Claim 5

For every \(k\in {\mathbb {N}}\) there exists a measurable selection \(u^*_k(\omega )\in \partial \psi _{\omega ,k}(\hat{x}_k)\) such that \(u_k^*(\cdot )\in L^q(\varOmega ,{\mathbb {R}}^n)\),

$$\begin{aligned} \int _{\varOmega }u^*_k(\omega )d\mu (\omega )=0,\;\text { and } \;\int _{\varOmega }\Vert u^*_k(\omega )\Vert ^q d\mu (\omega )>0. \end{aligned}$$
(16)

To verify it, recall from Claim 4 that \(\hat{x}_k\) is a minimizer of the function \(\psi _k\) defined therein. Employing then Proposition 2.1 and the subdifferential Fermat rule, taking into account that \(\hat{x}_k\in U\), gives us a measurable selection \(u^*_k(\omega )\in \partial \psi _{\omega ,k}(\hat{x}_k)\) such that \(\int _{\varOmega }u^*_k(\omega )d\mu (\omega )=0\). Define further the set

$$\begin{aligned} A_k:=\big \{\omega \in \varOmega \big |\;d_{M(\omega )}(\hat{x}_k+a_k(\omega ))>0\big \} \end{aligned}$$

and deduce from Claim 2 that \(\mu (A_k)>0\). Moreover, we have

$$\begin{aligned} u^*_k(\omega )=p\Vert \hat{x}_k+a_k(\omega )-x_k(\omega )\Vert ^{p-1} \frac{\hat{x}_k+a_k(\omega )-x_k(\omega )}{d_{M(\omega )} \big (\hat{x}_k+a_k(\omega )\big )}\;\text { for a.e. }\;\omega \in A_k. \end{aligned}$$

On the other hand, \(u_k^*(\omega )=0\) for almost all \(\omega \in \varOmega \backslash A_k\), which yields

$$\begin{aligned} \int _{\varOmega }\Vert u^*_k(\omega )\Vert ^q d\mu (\omega )=p^q\varphi _k(\hat{x}_k). \end{aligned}$$

Consequently, we get \(u^*_k\in L^q(\varOmega ,{\mathbb {R}}^n)\), and hence (16) holds.

Claim 6

Define \(x_k^*(\omega ):=\frac{u_k^*(\omega )}{\Vert u_k^*\Vert _q}\), \(k\in {\mathbb {N}}\). Then

$$\begin{aligned} x^*_k(\omega )\in \widehat{N}\big (x_k(\omega );M(\omega )\big ) \text { a.e. on }\;\varOmega \;\text { and }\int _{\varOmega }x^*_k(\omega )d\mu (\omega )=0. \end{aligned}$$
(17)

Indeed, it follows from (15) that \(\hat{x}_k+a_k(\omega )-x_k(\omega )\in \widehat{N}\big (x_k(\omega );M(\omega )\big )\) a.e. on \(\varOmega \); see, e.g., the statement and proof in [14, Theorem 1.6, Step 1]. Since \(\widehat{N}\big (x_k(\omega );M(\omega )\big )\) is a cone, we get \(x^*_k(\omega )\in \widehat{N}\big (x_k(\omega );M(\omega )\big )\) a.e. on \(\varOmega \). Furthermore, (16) tells us that the function \(x^*_k\) is well-defined and satisfies the second part of (17). This completes the proof of the theorem. \(\square \)

Next we consider measurable multifunctions with cone values and define for them another notion of extremality, which extends the one from [18] formulated for countable systems of cones.

Definition 3.3

(conic extremality at the origin) Let \((\varOmega ,{\mathcal {A}},\mu )\) be a measure space, and let \(\varLambda :\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with cone values. We say that \(\varLambda (\cdot )\) is extremal at the origin in \(L^p(\varOmega ,{\mathbb {R}}^n)\) with some \(p\in (1,\infty )\) if there exists \(a(\cdot )\in L^p(\varOmega ,{\mathbb {R}}^n)\) such that

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e. }}\big (\varLambda (\omega )-a(\omega )\big ) :=\big \{x\in {\mathbb {R}}^n\big |\;x\in \varLambda (\omega )-a(\omega )\text { a.e.}\big \}=\emptyset . \end{aligned}$$

The following result provides an extension of [18, Theorem 4.2] from countable set systems to measurable multifunctions. In contrast to Theorem 3.2, we now obtain the result in terms of the limiting normal cone (5) calculated exactly at the extremal point \(\bar{x}=0\); this motivates the name of the result.

Theorem 3.4

(exact extremal principle for cone-valued multifunctions) Let \(\varLambda :\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction defined on a finite measure space and taking closed cone values. Assume that \(\varLambda \) is extremal at \(0\in L^p(\varOmega ,{\mathbb {R}}^n)\) with some \(p\in (1,\infty )\), and that the nonoverlapping condition

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e.}}\;\varLambda (\omega )=\{0\} \end{aligned}$$

is fulfilled. Then the (exact) conic extremal principle holds in \(L^p(\varOmega ,{\mathbb {R}}^n)\) with \(\frac{1}{p} +\frac{1}{q} =1\), i.e., there exists \(x^*(\cdot )\in L^q(\varOmega ,{\mathbb {R}}^n)\) such that \(x^*(\omega )\in N(0;\varLambda (\omega ))\) for almost all \(\omega \in \varOmega \) together with the equalities

$$\begin{aligned} \int _\varOmega x^*(\omega )d\mu (\omega )=0\;\text { and }\;\int _\varOmega \Vert x^*(\omega )\Vert ^q d\mu (\omega )=1. \end{aligned}$$
(18)

Furthermore, we can find \(w(\cdot )\in L^p(\varOmega ,{\mathbb {R}}^n)\) for which

$$\begin{aligned} x^*(\omega )\in \widehat{N}\big (w(\omega );\varLambda (\omega )\big ) \text { a.e. }\omega \in \varOmega . \end{aligned}$$

Proof

Let us show that the conic extremality of the mapping \(\varLambda \) imposed in Theorem 3.4 implies that \(\varLambda \) is locally extremal at the origin in the sense of Definition 3.1. Indeed, take \(\alpha _k\downarrow 0\) as \(k\rightarrow \infty \) and define \(a_k(\omega ):=\alpha _k a(\omega )\). Then for all \(k\in {\mathbb {N}}\) we get the relationship

$$\begin{aligned} \bigcap _{\omega \in \varOmega \text { a.e.}}\big (\varLambda (\omega )-a_k(\omega )\big )=\emptyset , \end{aligned}$$

which verifies the claim. Applying now Theorem 3.2 to \(\varLambda \), together with Proposition 2.3, gives us \(x^*(\omega )\in \widehat{N}(x_k(\omega );\varLambda (\omega ))\subset N(0;\varLambda (\omega ))\) such that the conditions in (18) are satisfied. This completes the proof. \(\square \)

It is easy to see that the case of countably many cones in [18, Theorem 4.2] and [14, Theorem 2.9] follows from Theorem 3.4 with \(p=2\) by considering the measure space \(({\mathbb {N}},{\mathcal {P}}({\mathbb {N}}),\mu )\), where \({\mathcal {P}}({\mathbb {N}})\) denotes the power set of \({\mathbb {N}}\), and where \(\mu \) is the atomic measure given by \(\mu (\{m\}):=(2^m)^{-1}\), \(m\in {\mathbb {N}}\). Note that the proofs in [14, 18] are significantly different from the one given above.
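In this countable setting, with \(\varLambda _m:=\varLambda (m)\) and \(x^*_m:=x^*(m)\), the conclusions of Theorem 3.4 read as \(x^*_m\in N(0;\varLambda _m)\) for all \(m\in {\mathbb {N}}\) together with the series conditions

$$\begin{aligned} \sum _{m=1}^{\infty }2^{-m}x^*_m=0\;\text { and }\;\sum _{m=1}^{\infty }2^{-m}\Vert x^*_m\Vert ^2=1. \end{aligned}$$

Since each \(N(0;\varLambda _m)\) is a cone, the rescaled elements \(2^{-m}x^*_m\in N(0;\varLambda _m)\) sum up to zero while not all of them vanish, which agrees with the form of the conic extremal principle for countable systems of cones.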

Remark 3.5

(on the nonoverlapping condition) The nonoverlapping condition was introduced in [18] for developing extremal principles for countably many sets. It is needed to bypass the intrinsic infinite dimensionality of essential intersections. Observe that this condition is not overly restrictive because, as shown in the next section, while proving calculus rules we can construct a family of sets that automatically satisfies the nonoverlapping property.

In what follows we are going to focus on applications of the sequential extremal principle for measurable multifunctions established in Theorem 3.2 while planning to present various applications of the conic extremal principle from Theorem 3.4 in our subsequent work; cf. some developments in [14, 18, 19] for the case of countably many sets.

4 Normals to essential intersections via optimization

The major goal of this section is to obtain efficient upper estimates and exact formulas for generalized normals to essential intersections (3) for measurable multifunctions by using a variational approach, which is mainly based on the extremal principle established above. Some of the results obtained here concern cone-valued mappings, and then they will be used in the next section in connection with the conical hull intersection property (CHIP).

We begin with presenting an auxiliary result employed in what follows. It provides sequential optimality conditions for problems of type (2).

Lemma 4.1

(sequential optimality conditions) Let \(\bar{x}\) locally minimize an l.s.c. function \(h:{\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) subject to \(x\in M(\omega )\) a.e., where \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) is a closed-valued measurable multifunction on a \(\sigma \)-finite measure space \((\varOmega ,{\mathcal {A}},\mu )\). Then for any \(p,q\in [1,\infty ]\) with \(1/p+1/q=1\) there exist sequences \(y_k,z_k,z_k^*\in {\mathbb {R}}^n\), \(x_k(\cdot )\in L^p(\varOmega ,{\mathbb {R}}^n)\), and \(x_k^*(\cdot )\in L^q({\varOmega },{\mathbb {R}}^n)\) satisfying the conditions

$$\begin{aligned}&z_k^*\in \widehat{\partial } h(z_k),\;x_k^*(\omega )\in \widehat{N} \big (x_k(\omega );M(\omega )\big )\ \mathrm{a.e.},\\&z_k\overset{h}{\rightarrow }\bar{x},\;\Vert y_k-\bar{x}\Vert \rightarrow 0,\;\Vert x_k(\cdot )-\bar{x}\Vert _p\rightarrow 0,\\&\displaystyle \int _{\varOmega }\Vert x_k^*(\omega )\Vert \cdot \Vert x_k(\omega )-y_k\Vert d\mu (\omega ) \rightarrow 0,\;z_k^*+\int _\varOmega x_k^*(\omega )d\mu (\omega )\rightarrow 0, \end{aligned}$$

where the symbol \(z_k\overset{h}{\rightarrow }\bar{x}\) means that \(z_k\rightarrow \bar{x}\) with \(h(z_k)\rightarrow h(\bar{x})\) as \(k\rightarrow \infty \).

Proof

Assume without loss of generality that the measure \(\mu \) is finite. We can always view \(\varOmega \) as a subset of a strictly larger set, e.g., of the collection of all its subsets. Indeed, Cantor's theorem (see, e.g., [8, Theorem 161]) tells us that the cardinality of the latter set is strictly larger than that of \(\varOmega \), and so there exists \(\omega _0\notin \varOmega \). Picking such a point \(\omega _0\), define the measure space \((\widetilde{\varOmega },\widetilde{{\mathcal {A}}},\widetilde{\mu })\) as follows: \(\widetilde{\varOmega }:=\varOmega \cup \{\omega _0\}\) and \(\widetilde{{\mathcal {A}}}\) is the \(\sigma \)-algebra generated by \({\mathcal {A}}\cup \{\{\omega _0\}\}\), which is nothing else than \(\widetilde{{\mathcal {A}}}={\mathcal {A}}\cup \{A\cup \{\omega _0\}|\;A\in {\mathcal {A}}\}\), with the measure

$$\begin{aligned} \widetilde{\mu }(A):=\left\{ \begin{array}{lc} \mu (A)&{}\text { if }\;\omega _0\notin A,\\ \mu (A\backslash \{\omega _0\})+1&{}\text { if }\;\omega _0\in A. \end{array}\right. \end{aligned}$$

Define now the integrand \(f:\widetilde{\varOmega }\times {\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) by

$$\begin{aligned} f(\omega ,x):=\left\{ \begin{array}{cc} h(x)&{}\text { if }\;\omega =\omega _0,\\ \delta _{M(\omega )}(x)&{} \text { if }\;\omega \ne \omega _0 \end{array}\right. \end{aligned}$$

and consider the function \({E}_{f}(x):=\int _{\widetilde{\varOmega }} f(\omega ,x)d\widetilde{\mu }(\omega )\) for which \({E}_{f}(x)=h(x)\) if \( x\in M(\omega )\) a.e. and \({E}_{f}(x)=\infty \) otherwise. It is easy to see that \(\bar{x}\) is a local minimizer of \({E}_{f}\). Thus the subdifferential Fermat rule yields \(0\in \widehat{\partial }{E}_{f}(\bar{x})\). Applying finally Proposition 2.2 completes the proof of the lemma. \(\square \)

Now we are ready to derive integral upper estimates of regular and limiting normals to the essential intersection \(M_\cap \) by using the closure operation.

Theorem 4.2

(upper estimates of normals via integral closures) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed cone values. Then

$$\begin{aligned} N(x;M_{\cap })\subset \mathrm{cl}\,\left( \int _\varOmega N\big (0;M(\omega )\big )d \mu (\omega )\right) \; \text{ for } \text{ all } \;x\in {\mathbb {R}}^n. \end{aligned}$$
(19)

Proof

To justify (19), let us first verify the inclusion with the regular normal cone on the left-hand side. Indeed, take any \(x^*\in \widehat{N}(x;M_{\cap })\) with \(x\in {\mathbb {R}}^n\) and, for every \(\varepsilon >0\), find by definition (4) a number \(\eta \in (0,\varepsilon )\) such that the function

$$\begin{aligned} y\mapsto -\langle x^*,y-x\rangle +\varepsilon \Vert y-x\Vert +\delta _{{\mathbb {B}}_\eta (x)}(y) +\delta _{M_{\cap }}(y) \end{aligned}$$

attains its minimum at x. Applying Lemma 4.1 gives us sequences \(v_k^*\in {\mathbb {B}}\) and \(x_k^*(\omega )\in \widehat{N}(x_k(\omega );M(\omega ))\) a.e. for which \(\Vert -x^*+\varepsilon v_k^*+\int _\varOmega x_k^*(\omega )d\mu (\omega )\Vert \rightarrow 0\) as \(k\rightarrow \infty \). Since the sets \(M(\omega )\) are cones, we deduce from Proposition 2.3 that \(x_k^*(\omega )\in N(0;M(\omega ))\) a.e. It therefore follows that

$$\begin{aligned} x^*\in \int _\varOmega N\big (0;M(\omega )\big )d\mu (\omega )+2\varepsilon {\mathbb {B}}. \end{aligned}$$

Taking into account that \(\varepsilon >0\) was chosen arbitrarily, we arrive at the claimed inclusion (19). The regular normal cone therein can be clearly replaced by the limiting one by definition (5). \(\square \)

The next result, which is a consequence of Theorem 4.2 and basic convex analysis, establishes the normal regularity of \(M_\cap \) and gives us the precise formulas for calculating the normal cone and its relative interior under the normal regularity assumption imposed on \(M(\omega )\) for almost all \(\omega \in \varOmega \).

Corollary 4.3

(precise formulas for normals under normal regularity) Let be a measurable multifunction with closed cone values. Assume that \(M(\omega )\) is normally regular at the origin for a.e. \(\omega \in \varOmega \). Then the set \(M_\cap \) is normally regular at the origin, and we have the equalities

$$\begin{aligned} N(0;M_{\cap })&=\mathrm{cl}\,\left( \int _\varOmega N \big (0;M(\omega )\big )d\mu (\omega )\right) , \end{aligned}$$
(20)
$$\begin{aligned} \mathrm{ri}\,\left( N(0;M_{\cap })\right)&=\mathrm{ri}\,\left( \int _\varOmega N \big (0;M(\omega )\big )d\mu (\omega )\right) . \end{aligned}$$
(21)

Proof

Take \(u^*\in \int _\varOmega N(0;M(\omega ))d\mu (\omega )\) and find an integrable selection \(x^*(\omega )\in N(0;M(\omega ))\) a.e. with \(u^*=\int _\varOmega x^*(\omega )d\mu (\omega )\). It follows from the assumed normal regularity of \(M(\omega )\) a.e. and the definition of \(M_\cap \) that \(u^*\in \widehat{N}(0;M_\cap )\), and so \(\int _\varOmega N(0;M(\omega ))d\mu (\omega )\subset \widehat{N}(0;M_{\cap })\). Applying Theorem 4.2 yields

$$\begin{aligned} \widehat{N}(0;M_{\cap })=N(0;M_{\cap })=\mathrm{cl}\,\left( \int _\varOmega N \big (0;M(\omega )\big )d\mu (\omega )\right) , \end{aligned}$$

which verifies the normal regularity of \(M_{\cap }\) at 0 together with (20). Since the set \(\int _\varOmega N\big (0;M(\omega )\big )d\mu (\omega )\) is convex, the relative interior formula (21) follows from (20) due to the classical fact of convex analysis that the relative interior of a convex set coincides with that of its closure. \(\square \)
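The following simple example illustrates formula (20) and shows that the closure operation therein cannot be dropped in general. Let \(\varOmega :=[0,\pi /2]\) with the Lebesgue measure, and let \(M(\omega ):=\{x\in {\mathbb {R}}^2|\;\langle a(\omega ),x\rangle \le 0\}\) with \(a(\omega ):=(\cos \omega ,\sin \omega )\). Each \(M(\omega )\) is a closed convex cone, hence normally regular at the origin with \(N(0;M(\omega ))=\{ta(\omega )|\;t\ge 0\}\), and we easily get \(M_\cap =-{\mathbb {R}}^2_+\) and \(N(0;M_\cap )={\mathbb {R}}^2_+\), where \({\mathbb {R}}^2_+:=\{x\in {\mathbb {R}}^2|\;x_1\ge 0,\;x_2\ge 0\}\). On the other hand, a direct calculation of the Aumann integral gives us

$$\begin{aligned} \int _\varOmega N\big (0;M(\omega )\big )d\mu (\omega )=\left\{ \int _0^{\pi /2}t(\omega )a(\omega )d\omega \Bigg |\;t\in L^1\big ([0,\pi /2]\big ),\;t(\omega )\ge 0\text { a.e.}\right\} =\{0\}\cup \mathrm{int}\,{\mathbb {R}}^2_+, \end{aligned}$$

which is not a closed set, while its closure equals \({\mathbb {R}}^2_+=N(0;M_\cap )\) in accordance with (20).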

Our further intention is to find verifiable conditions that allow us to drop the closure operation in the normal cone evaluations of type (19). It is done below by using the extremal principle for measurable set-valued mappings established in Section 3. First we present the following lemma, which holds for general closed-valued measurable multifunctions.

Lemma 4.4

(sequential optimality conditions for strict minimizers) Let \(\bar{x}\in {\mathbb {R}}^n\) be a strict local minimizer of the optimization problem from Lemma 4.1, and let \(p\in (1,\infty )\). Then at least one of the following alternative conditions holds:

(i):

either there exist sequences of functions \(x_k(\cdot )\in L^p(\varOmega ,{\mathbb {R}}^n)\) with \(\Vert x_k(\cdot )-\bar{x}\Vert _\infty \rightarrow 0\) and vectors \(y_k\in {\mathbb {R}}^n\) with \(\Vert y_k-\bar{x}\Vert \rightarrow 0\) as \(k\rightarrow \infty \) such that \(0\in \widehat{\partial } h(y_k)+\int _{\varOmega }\widehat{N} (x_k(\omega );M(\omega ))d\mu (\omega )\), \(k\in {\mathbb {N}}\);

(ii):

or there exist sequences of functions \(x_k(\cdot )\) and vectors \(y_k\) as in (i), and also sequences of nonzero adjoint functions \(x^*_k(\cdot )\in L^q(\varOmega ,{\mathbb {R}}^n)\) with \(1/p+1/q=1\) and \(x_k^*(\omega )\in \widehat{N}(x_k(\omega );M(\omega ))\) for a.e. \(\omega \in \varOmega \) as well as vectors \(u_k^*\in \widehat{\partial }^\infty h(y_k)\) such that \(u_k^*+\int _{\varOmega }x_k^*(\omega )d\mu (\omega ) =0\) for all \(k\in {\mathbb {N}}\).

Proof

Assume without loss of generality that the measure \(\mu \) is finite and pick \(\omega _0\notin \varOmega \). Then we construct a new measure space \((\widetilde{\varOmega },\widetilde{{\mathcal {A}}},\widetilde{\mu })\) exactly as in the proof of Lemma 4.1. Define further the measurable multifunction \(\widetilde{M}:\widetilde{\varOmega }\rightrightarrows {\mathbb {R}}^{n+1}\) on the new measure space by

$$\begin{aligned} \widetilde{M}(\omega ):=\left\{ \begin{array}{cc} \mathrm{epi}\,h&{}\text { if }\;\omega =\omega _0,\\ M(\omega )\times (-\infty ,h(\bar{x})]&{}\text { if }\;\omega \ne \omega _0 \end{array}\right. \end{aligned}$$

and take a neighborhood V of \(\bar{x}\) on which \(\bar{x}\) is the unique minimizer in the optimization problem under consideration.

Let us check that the mapping \(\widetilde{M}\) is locally extremal at \((\bar{x},h(\bar{x}))\) in \(L^p(\widetilde{\varOmega },{\mathbb {R}}^{n+1})\) in the sense of Definition 3.1. Indeed, denote \(U:=V\times {\mathbb {R}}\) and consider the measurable functions \(a_k(\omega ):=-(0,k^{-1} \mathbb {1}_{\{\omega =\omega _0\}}(\omega ))\) a.e., where \(\mathbb {1}_{A}(\omega )\) is the characteristic function of the set A, i.e., it equals 1 on A and 0 outside of A. Then we have that \(\Vert a_k\Vert _p=k^{-1} \rightarrow 0\). It is also easy to verify that \(\bigcap _{\omega \in \widetilde{\varOmega }\text { a.e.}}(\widetilde{M} (\omega )-a_k(\omega ))\cap U=\emptyset \), \(k\in {\mathbb {N}}\), which justifies the claim.

Next we show that the nonoverlapping condition (11) holds for \(\widetilde{M}\) with U defined above. Take any \((x,\alpha )\in \widetilde{M}_{\cap }\cap U\) and observe that \(\alpha \ge h(x)\) due to \((x,\alpha )\in M(\omega _0)=\mathrm{epi}\,h\). On the other hand, we have \(x\in M_{\cap }\cap U\) and \(\alpha \le h(\bar{x})\). Since \(\bar{x}\) is a strict minimizer of our problem, it implies that \((x,\alpha )=(\bar{x},h(\bar{x}))\), which readily verifies (11).

Applying now Theorem 3.2 gives us sequences \((x_k,\alpha _k)\in L^p(\widetilde{\varOmega },{\mathbb {R}}^{n+1})\) and \((x_k^*,\alpha _k^*)\in L^q(\widetilde{\varOmega },{\mathbb {R}}^{n+1})\) such that \(\Vert (x_k,\alpha _k)-(\bar{x},h(\bar{x}))\Vert _p\rightarrow 0\) as \(k\rightarrow \infty \), \((x_k^*(\omega ),\alpha _k^*(\omega ))\in \widehat{N} ((x_k(\omega ),\alpha _k(\omega ));\widetilde{M}(\omega ))\) a.e. on \(\widetilde{\varOmega }\), and

$$\begin{aligned} (x_k^*(\omega _0),\alpha ^*_k(\omega _0))+\int _{\varOmega } \big (x_k^*(\omega ),\alpha ^*_k(\omega )\big )d\mu (\omega )&=0, \end{aligned}$$
(22)
$$\begin{aligned} \Vert x_k^*(\omega _0)\Vert ^q+\Vert \alpha _k^*(\omega _0)\Vert ^q+\int _{\varOmega } \big (\Vert x_k^*(\omega )\Vert ^q+\Vert \alpha _k^*(\omega )\Vert ^q\big )d\mu (\omega )&=1. \end{aligned}$$
(23)

Note that estimate (12) of Theorem 3.2 actually ensures the stronger convergence \(\Vert (x_k,\alpha _k)-(\bar{x},h(\bar{x}))\Vert _\infty \rightarrow 0\) as \(k\rightarrow \infty \) due to the above choice of the sequence \(\{a_k(\omega )\}\) in the extremality definition.

It follows from the constructions of \(\widetilde{M}\) that \(x_k^*(\omega )\in \widehat{N}(x_k(\omega );M(\omega ))\) and \(\alpha ^*_k(\omega )\ge 0\) a.e. on \(\varOmega \). Thus (22) yields \(\alpha ^*_k(\omega _0)\le 0\) for all \(k\in {\mathbb {N}}\). Furthermore

$$\begin{aligned} \widehat{N}\big ((x_k(\omega _0),h(x_k(\omega _0)));\widetilde{M} (\omega _0)\big )=\widehat{N}\big ((x_k(\omega _0),h(x_k(\omega _0)));\mathrm{epi}\,h\big ), \quad k\in {\mathbb {N}}. \end{aligned}$$

Supposing that \(\alpha _k^*(\omega _0)=0\) for infinitely many k gives us \(u^*_k:=x^*_k(\omega _0)\in \widehat{\partial }^\infty h(y_k)\) with \(y_k:=x_k(\omega _0)\rightarrow \bar{x}\) as \(k\rightarrow \infty \). Using (22) and (23) yields

$$\begin{aligned} u_k^*+\int _{\varOmega }x_k^*(\omega )d\mu (\omega )=0\; \text{ and } \;\Vert u_k^*\Vert ^q+\int _{\varOmega }\Vert x_k^*(\omega )\Vert ^qd\mu (\omega )=1 \end{aligned}$$

for all \(k\in {\mathbb {N}}\), which verifies assertion (ii) in this case. In the remaining case where \(\alpha ^*_k(\omega _0)<0\) for infinitely many k we get \(|\alpha _k^*(\omega _0)|^{-1}x_k^*(\omega _0)\in \widehat{\partial } h(y_k)\). Then (22) readily ensures the fulfillment of assertion (i). \(\square \)

To proceed further with dismissing the closure operation in the normal cone representations by employing Lemma 4.4, we need some qualification conditions for measurable multifunctions. Let us introduce two of them in the case of arbitrary closed-valued multifunctions.

Definition 4.5

(normal qualification conditions) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed values. We say that:

(i):

The regular normal qualification condition holds for M at \(\bar{x}\in M_\cap \) if there exists \(\varepsilon >0\) such that for every measurable function \(x(\cdot )\) with \(x(\omega )\in {\mathbb {B}}(\bar{x},\varepsilon )\) for a.e. \(\omega \in \varOmega \) we have

$$\begin{aligned} \left[ \int _{\varOmega }x^*(\omega )d\mu (\omega )=0,\;x^*(\omega ) \in \widehat{N}\big (x(\omega );M(\omega )\big )\right] \Longrightarrow \big [x^*(\omega )=0\big ]. \end{aligned}$$
(24)
(ii):

The limiting normal qualification condition holds for M at \(\bar{x}\in M_\cap \) if

$$\begin{aligned} \left[ \int _{\varOmega }x^*(\omega )d\mu (\omega )=0,\;x^*(\omega )\in N \big (\bar{x};M(\omega )\big )\text { a.e. }\right] \Longrightarrow \big [x^*(\omega )=0\;\text { a.e. }\big ]. \end{aligned}$$

Both qualification conditions of Definition 4.5 are new, while the limiting one is a natural extension of that in [18, Definition 3.11] and [14, Definition 8.69] given for countably many sets, which extends in turn the standard normal qualification condition of variational analysis [13, 14, 21] for finite systems.

It is easy to see that the limiting qualification condition in Definition 4.5(ii) implies the regular one in (i) if the set \(\varOmega \) is finite. It also happens when M is cone-valued and \(\bar{x}=0\). Indeed, we can deduce the latter directly from the second inclusion of Proposition 2.3.

Let us present useful sufficient conditions for the validity of the limiting normal qualification condition from Definition 4.5(ii) that also imply the one in (24) in the conic case of our main interest in this section.

Proposition 4.6

(sufficient conditions for normal qualification) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed values, let \(\bar{x}\in M_\cap \), and assume that \(\bigcap _{\omega \in \varOmega \text { a.e.}}\mathrm{int}(M(\omega ))\ne \emptyset \). Assume further that either M is convex-valued, or the sets \(M(\omega )\) are cones which are normally regular at \(\bar{x}=0\) for a.e. \(\omega \in \varOmega \). Then \(M(\cdot )\) satisfies the limiting normal qualification condition at \(\bar{x}\).

Proof

Considering the case of convex-valued mappings, take \(x^*(\omega )\in N(\bar{x};M(\omega ))\) with \(\int _{\varOmega }x^*(\omega )d\mu (\omega )=0\). Fix any \(x\in \bigcap _{\omega \in \varOmega \text { a.e.}}\mathrm{int}(M(\omega ))\) and \(A\in {\mathcal {A}}\) and then get by the convexity of \(M(\omega )\) that

$$\begin{aligned} 0\ge \int _{A}\big \langle x^*(\omega ),x-\bar{x}\big \rangle d\mu (\omega ) =-\int _{A^c}\big \langle x^*(\omega ),x-\bar{x}\big \rangle d\mu (\omega )\ge 0, \end{aligned}$$

where \(A^c\) stands for the complement of A. Since \(A\in {\mathcal {A}}\) was chosen arbitrarily, it shows that \(\langle x^*(\omega ),x-\bar{x}\rangle =0\) for almost all \(\omega \in \varOmega \).

For any set \(A\in {\mathcal {A}}\) with \(\mu (A)=0\) and for any \(\omega \in \widehat{\varOmega }:=\varOmega \backslash A\) with \(x\in \mathrm{int}M(\omega )\) we get the relationships

$$\begin{aligned} \langle x^*(\omega ),x-\bar{x}\rangle =0\; \text{ and } \;x^*(\omega ) \in N\big (\bar{x};M(\omega )\big ). \end{aligned}$$

Furthermore, for each selected \(\omega \) there exists a number \(r_{\omega }>0\) such that \({\mathbb {B}}(x,r_{\omega })\subset M(\omega )\). Thus \(\langle x^*(\omega ),h\rangle \le \langle x^*(\omega ),\bar{x}-x\rangle =0\) whenever \(h\in {\mathbb {B}}(0,r_{\omega })\), which implies in turn that \(x^*(\omega )=0\) for almost all \(\omega \in \varOmega \) and hence verifies the claimed limiting normal qualification condition for convex-valued mappings. The proof for the case of cone-valued multifunctions is similar. \(\square \)
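The interiority assumption of Proposition 4.6 is essential. Consider, for instance, \(\varOmega :=[0,1]\) with the Lebesgue measure and the convex-valued mapping given by \(M(\omega ):=\{x\in {\mathbb {R}}^2|\;x_2\ge 0\}\) for \(\omega \in [0,1/2]\) and \(M(\omega ):=\{x\in {\mathbb {R}}^2|\;x_2\le 0\}\) for \(\omega \in (1/2,1]\), so that \(\bigcap _{\omega \in \varOmega \text { a.e.}}\mathrm{int}(M(\omega ))=\emptyset \). Then the limiting normal qualification condition fails at \(\bar{x}=0\in M_\cap \), since the function

$$\begin{aligned} x^*(\omega ):=\left\{ \begin{array}{cc} (0,-1)&{}\text { if }\;\omega \in [0,1/2],\\ (0,1)&{}\text { if }\;\omega \in (1/2,1] \end{array}\right. \end{aligned}$$

satisfies \(x^*(\omega )\in N(0;M(\omega ))\) a.e. on \(\varOmega \) and \(\int _\varOmega x^*(\omega )d\mu (\omega )=0\) while being nonzero almost everywhere.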

Finally in this section, we derive desired representations of the regular normal cone to essential intersections and its interior without using the closure operation. The first part of this theorem holds for general measurable mappings, while the second one addresses cone-valued multifunctions.

Theorem 4.7

(normal cone formulas without closure) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed values satisfying the regular normal qualification condition (24) at \(\bar{x}\in M_\cap \). Then the following hold:

(i):

Take any \(x^*\in \widehat{N}(\bar{x};M_\cap )\) for which there is \(\varepsilon >0\) with

$$\begin{aligned} \langle x^*,x-\bar{x}\rangle <0\; \text{ whenever } \;x\in M_\cap \cap {\mathbb {B}}(\bar{x},\varepsilon )\backslash \{\bar{x}\}. \end{aligned}$$
(25)

Then there exists a measurable selection \(x(\omega )\in M(\omega )\cap {\mathbb {B}}(\bar{x},\varepsilon )\) such that

$$\begin{aligned} x^*\in \int _{\varOmega }\widehat{N}\big (x(\omega );M(\omega )\big )d\mu (\omega ). \end{aligned}$$
(26)
(ii):

If the values of \(M(\cdot )\) are closed cones, then we have

$$\begin{aligned} \mathrm{int}\widehat{N}(0;M_{\cap })\subset \int _{\varOmega }N \big (0;M(\omega )\big )d\mu (\omega ). \end{aligned}$$
(27)

Proof

To verify (i), observe that for the vector \(x^*\) satisfying the assumptions therein we get that the function \(h(x):=-\langle x^*,x\rangle \) attains its strict local minimum at \(\bar{x}\) subject to the constraints \(x\in M(\omega )\) a.e. on \(\varOmega \). Then Lemma 4.4 gives us the two alternative conditions. It is easy to see that the second among them is ruled out by the imposed regular normal qualification condition (24). Thus we arrive at the necessary condition in Lemma 4.4(i), which reduces to inclusion (26) in the case of the selected function h(x).

To verify now assertion (ii), we show first that the inclusion

$$\begin{aligned} \mathrm{int}\widehat{N}(0;M_{\cap })\subset \bigcap _{\varepsilon >0} \bigcup _{x\in L^\infty (\varOmega ,{\mathbb {B}}(0,\varepsilon ))}\int _{\varOmega } \widehat{N}\big (x(\omega );M(\omega )\big )d\mu (\omega ) \end{aligned}$$
(28)

holds for any closed cone-valued measurable multifunction. To proceed, pick any \(\varepsilon >0\) and \(x^*\in \mathrm{int}\widehat{N}(0;M_{\cap })\), and then find \(r>0\) such that \({\mathbb {B}}(x^*,r)\subset \widehat{N}(0;M_{\cap })\). Fixing \(x\in M_\cap \backslash \{\bar{x}\}\) and defining \(u^*:=r\frac{x}{\Vert x\Vert }\), we get from the above constructions that \(x^*+u^*\in \widehat{N}(0;M_{\cap })\) and therefore

$$\begin{aligned} \langle x^*,x\rangle =\langle x^*+u^*,x\rangle -\langle u^*,x\rangle \le -r\Vert x\Vert <0. \end{aligned}$$

It tells us that (25) holds, which yields (26) with some measurable selection \(x(\omega )\in {\mathbb {B}}(\bar{x},\varepsilon )\) a.e. on \(\varOmega \) by assertion (i) established above. This clearly justifies (28). Furthermore, it follows from Proposition 2.3 that

$$\begin{aligned} \widehat{N}\big (x(\omega );M(\omega )\big )\subset N \big (0;M(\omega )\big )\;\text { for a.e. }\;\omega \in \varOmega \end{aligned}$$
(29)

due to the cone-valuedness assumption on \(M(\cdot )\). Hence (29) implies that the right-hand side of (28) is included in the right-hand side of (27), and thus we complete the proof of the theorem. \(\square \)

Note that assertion (ii) of Theorem 4.7 is a counterpart of formula (21) in Corollary 4.3 obtained without imposing any regularity condition.
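In particular, for the cone-valued system \(M(\omega ):=\{x\in {\mathbb {R}}^2|\;\langle (\cos \omega ,\sin \omega ),x\rangle \le 0\}\), \(\omega \in [0,\pi /2]\), considered after Corollary 4.3, the regular normal qualification condition (24) can be checked directly at the origin, and inclusion (27) reads as

$$\begin{aligned} \mathrm{int}\,\widehat{N}(0;M_{\cap })=\mathrm{int}\,{\mathbb {R}}^2_+\subset \{0\}\cup \mathrm{int}\,{\mathbb {R}}^2_+=\int _{\varOmega }N\big (0;M(\omega )\big )d\mu (\omega ), \end{aligned}$$

showing that the inclusion in (27) may be strict.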

5 Normals to essential intersections via CHIP

In this section we extend the major normal cone formulas obtained in Sect. 4 for cone-valued multifunctions to a general class of closed-valued multifunctions on measure spaces. To furnish this, we first introduce and investigate the so-called CHIP (conical hull intersection property) for measurable multifunctions, which has been studied in the literature under this name for the classes of finitely many convex sets (see, e.g., [1, 5] and the references therein) as well as countably many convex [11] and nonconvex [14, 19] sets.

Definition 5.1

(CHIP for measurable multifunctions) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction on the measure space \((\varOmega ,{\mathcal {A}},\mu )\). We say that the measurable CHIP (conical hull intersection property) holds for \(M(\cdot )\) with respect to \((\varOmega ,{\mathcal {A}},\mu )\) at \(\bar{x}\in M_\cap \) if

$$\begin{aligned} T(\bar{x};M_{\cap })=\bigcap _{\omega \in \varOmega \text { a.e.}}T\big (\bar{x};M({\omega })\big ). \end{aligned}$$
(30)

When no confusion arises about the measure space, we simply say that the measurable CHIP holds for \(M(\cdot )\) at \(\bar{x}\).

It is important to mention that the measurable CHIP holds automatically at the origin for multifunctions whose values are closed cones, since in this case \(T(0;M(\omega ))=M(\omega )\) and \(T(0;M_{\cap })=M_{\cap }\).

Let us present some sufficient conditions for the fulfillment of CHIP. The following new property postulates a certain uniformity over the set of tangential directions. Having in mind applications to semi-infinite programming in Sect. 7, we consider below arbitrary index sets, not just measure spaces.

Definition 5.2

(tangential stability) We say that a set \(\varTheta \subset {\mathbb {R}}^n\) is tangentially stable at \(\bar{x}\in \varTheta \) with respect to some \(U\subset {\mathbb {R}}^n\) if

$$\begin{aligned} T(\bar{x};\varTheta )\cap U\subset (\varTheta -\bar{x}). \end{aligned}$$
(31)

A family of sets \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\subset {\mathbb {R}}^n\) is uniformly tangentially stable at a common point \(\bar{x}\) if there exists an (open) neighborhood U of zero such that the sets \(\varTheta _t\) are tangentially stable at \(\bar{x}\) with respect to U for all \(t\in {\mathcal {T}}\). In the case where \({\mathcal {T}}\) is a measure space, \(\{\varTheta _t\}_{t\in {\mathcal {T}} }\) is (uniformly) almost everywhere tangentially stable at \(\bar{x}\) provided that the previous property holds for almost all \(t\in {\mathcal {T}}\).
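Observe, as a simple illustration, that the parabolic set from the two-set example given after Definition 5.1 is not tangentially stable at the origin:

$$\begin{aligned} \varTheta :=\big \{x\in {\mathbb {R}}^2\big |\;x_2\ge x_1^2\big \},\quad T(0;\varTheta )=\big \{v\in {\mathbb {R}}^2\big |\;v_2\ge 0\big \},\quad (\varepsilon ,0)\in \big (T(0;\varTheta )\cap U\big )\backslash (\varTheta -0) \end{aligned}$$

for every neighborhood U of zero and every sufficiently small \(\varepsilon >0\), and so (31) fails for any choice of U.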

Note that the tangential stability property (31) holds for \(\varTheta \) at \(\bar{x}\) if either \(\bar{x}\in \mathrm{int}\varTheta \), or \(\varTheta \) is a cone with \(\bar{x}=0\), or \(\varTheta \) is the complement of an open convex set. The next lemma establishes the validity of CHIP under the uniform tangential stability of set systems.

Lemma 5.3

(tangential stability implies CHIP) Consider a family of closed sets \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\) with \(\bar{x}\in \bigcap _{t\in {\mathcal {T}}}\varTheta _t\) and assume that the system \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\) is uniformly tangentially stable at \(\bar{x}\). Then we have

$$\begin{aligned} T\Big (\bar{x};\bigcap _{t\in {\mathcal {T}}}\varTheta _t\Big ) =\bigcap _{t\in {\mathcal {T}}}T(\bar{x};\varTheta _t). \end{aligned}$$
(32)

If \({{\mathcal {T}}}\) is a measure space and the family \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\) is almost everywhere tangentially stable at \(\bar{x}\), then (32) holds with \(t\in {{\mathcal {T}}}\) a.e. therein.

Proof

It is sufficient to verify the nontrivial inclusion “\(\supset \)” in (32), since the opposite one always holds. By the assumed uniform tangential stability of \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\) we can take an open neighborhood U of zero such that (31) holds for all \(t\in {\mathcal {T}}\). Taking into account that the set \(\bigcap _{t\in {\mathcal {T}}}T(\bar{x};\varTheta _t)\) is a cone, this clearly yields the relationships

$$\begin{aligned} \bigcap _{t\in {\mathcal {T}}}T(\bar{x};\varTheta _t)&=T\left( 0;\bigcap _{t\in {\mathcal {T}}}T(\bar{x};\varTheta _t)\right) =T\left( 0;\left( \bigcap _{t\in {\mathcal {T}}}T(\bar{x};\varTheta _t)\right) \bigcap U\right) \\&\subset T\left( 0;\bigcap _{t\in {\mathcal {T}}}\left( \varTheta _t-\bar{x}\right) \right) =T\left( \bar{x};\bigcap _{t\in {\mathcal {T}}}\varTheta _t\right) , \end{aligned}$$

which ensure in turn the claimed CHIP of the family \(\{\varTheta _t\}_{t\in {{\mathcal {T}}}}\). \(\square \)

The following consequence of Lemma 5.3 is useful for applications to optimization problems with inequality constraints.

Corollary 5.4

(CHIP for infinite inequality systems) Let \(\varTheta _t:=\{x\in {\mathbb {R}}^n|\;f(t,x)\le 0\}\) with an arbitrary index set \({{\mathcal {T}}}\), where \(f(t,x):=\langle a(t),x\rangle -b(t)\) with \(a:{\mathcal {T}}\rightarrow {\mathbb {R}}^n\) and \(b:{\mathcal {T}}\rightarrow {\mathbb {R}}\). Taking \(\bar{x}\in {\mathbb {R}}^n\) and the collection of active indexes \({{\mathcal {T}}}_f:=\{t\in {\mathcal {T}}| \;f(t,\bar{x})=0\}\), suppose that \(\bar{x}\in \mathrm{int}\bigcap _{t\in {\mathcal {T}}\backslash {\mathcal {T}}_f}\{x\in {\mathbb {R}}^n|\;f(t,x)<0\}\). Then CHIP (32) holds for \(\{\varTheta _t\}_{t\in {{\mathcal {T}}}}\) at \(\bar{x}\). If \({{\mathcal {T}}}\) is a measure space with \(a(\cdot )\) and \(b(\cdot )\) being measurable on it, then the measurable CHIP is satisfied with \(t\in {{\mathcal {T}}}\) a.e. therein.

Proof

To verify the CHIP (32), it is sufficient to show by Lemma 5.3 that the system \(\{\varTheta _t\}_{t\in {\mathcal {T}}}\) is uniformly tangentially stable at \(\bar{x}\). Indeed, consider the open set \(U:=\mathrm{int}\left( \bigcap _{t\in {\mathcal {T}}\backslash {\mathcal {T}}_f} \{x\in {\mathbb {R}}^n| \;f(t,x)<0\}-\bar{x}\right) \) and observe that for every \(t\in {\mathcal {T}}_f\) the set \(\varTheta _t-\bar{x}\) is a cone, which implies that \(T(\bar{x};\varTheta _t)=\varTheta _t-\bar{x}\) and hence \(T(\bar{x};\varTheta _t)\cap U\subset \varTheta _t-\bar{x}\). Furthermore, for all \(t\notin {\mathcal {T}}_f\) the point \(\bar{x}\) is an interior point of \(\varTheta _t\), and so \(T(\bar{x};\varTheta _t)={\mathbb {R}}^n\), which gives us \(T(\bar{x};\varTheta _t)\cap U=U\subset \varTheta _t-\bar{x}\). \(\square \)
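As an elementary illustration of Corollary 5.4 (the data are chosen here only for concreteness), let \({{\mathcal {T}}}:=[0,1]\cup \{2\}\) with

$$\begin{aligned} f(t,x):=x_1+tx_2\;\text { for }\;t\in [0,1],\qquad f(2,x):=x_2-1,\qquad \bar{x}=(0,0)\in {\mathbb {R}}^2. \end{aligned}$$

Then \({{\mathcal {T}}}_f=[0,1]\), the sets \(\varTheta _t-\bar{x}=\varTheta _t\) are half-space cones for all active indexes, and \(\bar{x}\in \mathrm{int}\,\{x|\;x_2<1\}\). Thus Corollary 5.4 yields CHIP (32), which can also be verified directly:

$$\begin{aligned} T\Big (\bar{x};\bigcap _{t\in {{\mathcal {T}}}}\varTheta _t\Big )=\big \{x\in {\mathbb {R}}^2\big |\;x_1\le 0,\;x_1+x_2\le 0\big \}=\bigcap _{t\in {{\mathcal {T}}}}T\big (\bar{x};\varTheta _t\big ). \end{aligned}$$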

The next theorem presents major counterparts for general measurable multifunctions of the normal cone formulas from Theorem 4.2 and Corollary 4.3 obtained above for cone-valued multifunctions.

Theorem 5.5

(normal cone evaluations for measurable multifunctions) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed values, and let \(\bar{x}\in M_{\cap }\) for its essential intersection (3). Assume that the measurable CHIP holds for \(M(\cdot )\) at \(\bar{x}\). Then we have the upper estimate

$$\begin{aligned} \widehat{N}(\bar{x};M_\cap )\subset \mathrm{cl}\,\left( \int _\varOmega N \big (\bar{x};M(\omega )\big )d\mu (\omega )\right) . \end{aligned}$$
(33)

If in addition \(M(\omega )\) is normally regular at \(\bar{x}\) for almost all \(\omega \in \varOmega \), then inclusion (33) holds as equality, and we also get

$$\begin{aligned} \mathrm{ri}\,\left( \widehat{N}(\bar{x};M_\cap )\right) =\mathrm{ri}\,\left( \int _\varOmega N \big (\bar{x};M(\omega )\big )d\mu (\omega )\right) . \end{aligned}$$
(34)

Proof

It follows from the definitions that \(\widehat{N}(\bar{x};M_\cap )=\widehat{N}(0;T(\bar{x};M_\cap ))\). Furthermore, the imposed measurable CHIP yields the fulfillment of (30). Hence, by the duality between \(T(\cdot ;\varTheta )\) and \(\widehat{N}(\cdot ;\varTheta )\), we get

$$\begin{aligned} \widehat{N}\big (\bar{x};M_\cap \big )=\widehat{N}\big (0;T(\bar{x};M_\cap )\big ) =\widehat{N}\Big (0;\bigcap _{\omega \in \varOmega \text { a.e.}}T\big (\bar{x};M(\omega )\big )\Big ). \end{aligned}$$
(35)

Then applying Theorem 4.2 to the cones in (35) gives us the inclusion

$$\begin{aligned} \widehat{N}\Big (0;\bigcap _{\omega \in \varOmega \text { a.e. }} T \big (\bar{x};M(\omega )\big )\Big )\subset \mathrm{cl}\,\left( \int _\varOmega N \big (0;T(\bar{x};M(\omega ))\big )d\mu (\omega )\right) , \end{aligned}$$

which yields (33) due to \(N(0;T(\bar{x};M(\omega )))\subset N(\bar{x};M(\omega ))\) for all \(\omega \in \varOmega \).

Proceeding now in the case where \(M(\omega )\) is normally regular at \(\bar{x}\) for almost all \(\omega \in \varOmega \), we can easily observe that the tangent cone \(T(\bar{x};M(\omega ))\) is also normally regular at the origin for almost all \(\omega \in \varOmega \). Applying then Corollary 4.3 together with (30) and (35) tells us that

$$\begin{aligned} \widehat{N}\big (\bar{x};M_\cap )&=\widehat{N}\big (0;T(\bar{x};M_\cap )\big ) =\mathrm{cl}\,\Bigg (\int _\varOmega \widehat{N}\big (0;T(\bar{x}; M(\omega ))\big )d\mu (\omega )\Bigg )\\&=\mathrm{cl}\,\Bigg (\int _\varOmega \widehat{N}\big (\bar{x};M(\omega )\big )d\mu (\omega )\Bigg ), \end{aligned}$$

which justifies the equality in (33). The relative interior formula (34) is verified similarly by employing Corollary 4.3. \(\square \)
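The following example, with the data chosen only for illustration, confirms the equality in (33) and simultaneously shows that the closure operation therein cannot be dropped in general. Let \(\varOmega :=[0,1]\) be endowed with the Lebesgue measure \(\mu \), and let

$$\begin{aligned} M(\omega ):=\big \{x\in {\mathbb {R}}^2\big |\;x_1+\omega x_2\le 0\big \},\quad \bar{x}=0,\quad M_\cap =\big \{x\big |\;x_1\le 0,\;x_1+x_2\le 0\big \}. \end{aligned}$$

Each \(M(\omega )\) is a closed convex cone, and hence it is normally regular at \(\bar{x}\) with \(N(\bar{x};M(\omega ))={\mathbb {R}}_+(1,\omega )\), while the measurable CHIP holds automatically. The integrable selections of \(N(\bar{x};M(\cdot ))\) are of the form \(\omega \mapsto \lambda (\omega )(1,\omega )\) with integrable \(\lambda \ge 0\), and a direct calculation gives us

$$\begin{aligned} \int _\varOmega N\big (\bar{x};M(\omega )\big )d\mu (\omega )=\Big \{\Big (\int _0^1\lambda (\omega )\,d\mu (\omega ),\int _0^1\lambda (\omega )\omega \,d\mu (\omega )\Big )\Big |\;\lambda \ge 0\Big \}=\big \{(0,0)\big \}\cup \big \{y\in {\mathbb {R}}^2\big |\;y_1>y_2>0\big \}, \end{aligned}$$

which is not closed, while its closure \(\{y|\;y_1\ge y_2\ge 0\}\) coincides with \(\widehat{N}(\bar{x};M_\cap )\) in accordance with the equality in (33).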

Finally, we use CHIP and the conic result of Theorem 4.7(ii) to dispense with the closure operation in the normal cone estimate for general multifunctions.

Proposition 5.6

(normal cone estimate without closure) Let \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) be a measurable multifunction with closed values, and let \(\bar{x}\in M_{\cap }\). Suppose that the measurable CHIP holds for \(M(\cdot )\) at \(\bar{x}\). Then we have the interior estimate

$$\begin{aligned} \mathrm{int}\widehat{N}(\bar{x};M_\cap )\subset \int _\varOmega N \big (\bar{x};M(\omega )\big )d\mu (\omega ). \end{aligned}$$

Proof

Observe first that \(\widehat{N}(\bar{x};M_\cap )=\widehat{N}(0;T(\bar{x};M_\cap ))\). Then using the assumed measurable CHIP and applying Theorem 4.7(ii) to the cone-valued mapping \(\omega \mapsto T(\bar{x};M(\omega ))\) gives us

$$\begin{aligned} \mathrm{int}\,\widehat{N}\big (0;T(\bar{x};M_\cap )\big )\subset \int _{\varOmega }N \big (0; T(\bar{x}; M(\omega )) \big )d\mu (\omega )\subset \int _{\varOmega }N\big (\bar{x};M(\omega )\big )d\mu (\omega ), \end{aligned}$$

which verifies the claimed upper estimate. \(\square \)
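To illustrate the estimate of Proposition 5.6, recall the cone-valued data \(M(\omega )=\{x|\;x_1+\omega x_2\le 0\}\) on \(\varOmega =[0,1]\) used in the example after the proof of Theorem 5.5. In that setting we have

$$\begin{aligned} \mathrm{int}\widehat{N}(\bar{x};M_\cap )=\big \{y\in {\mathbb {R}}^2\big |\;y_1>y_2>0\big \}\subset \int _\varOmega N\big (\bar{x};M(\omega )\big )d\mu (\omega ), \end{aligned}$$

although the integral on the right-hand side is not closed.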

6 Applications to stochastic programming

In this section we consider the stochastic optimization problem with the random constraint sets formulated in (1). Suppose in what follows that \(M:\varOmega \rightrightarrows {\mathbb {R}}^n\) is a measurable multifunction with closed values and that \(h:{\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) is an l.s.c. function around the reference point.

Based on the normal cone formulas for the essential intersection of M obtained in Sects. 4 and 5, we are now ready to derive new necessary optimality conditions for local minimizers of (1).

Theorem 6.1

(necessary optimality conditions for general stochastic programs) For a local minimizer \(\bar{x}\in M_\cap \) of (1) the following hold:

(i):

Assume that \(M_\cap \) is normally regular at \(\bar{x}\), that \(M(\cdot )\) satisfies the measurable CHIP at \(\bar{x}\), and that the qualification condition

$$\begin{aligned} \big (-\partial ^\infty h(\bar{x})\big )\cap \mathrm{cl}\,\Big (\int _{\varOmega }N \big (\bar{x};M(\omega )\big )d\mu (\omega )\Big )=\{0\} \end{aligned}$$
(36)

is satisfied. Then we have the necessary optimality condition

$$\begin{aligned} 0\in \partial h(\bar{x})+\mathrm{cl}\,\Big (\int _{\varOmega }N\big (\bar{x}; M(\omega )\big )d\mu (\omega )\Big ). \end{aligned}$$
(37)
(ii):

Assume that the sets \(M(\omega )\) are cones for a.e. \(\omega \in \varOmega \) and that \(\bar{x}=0\). Then the fulfillment of (36) ensures that (37) holds at this point.

Proof

It follows from basic variational analysis (see, e.g., [13, Proposition 5.3]) that \(0\in \partial h(\bar{x})+N(\bar{x};M_{\cap })\) if \((-\partial ^\infty h(\bar{x}))\cap N(\bar{x};M_{\cap })=\{0\}\). Employing now Theorem 5.5 under the assumptions made in (i), we get

$$\begin{aligned} N(\bar{x};M_\cap )=\widehat{N}(\bar{x};M_{\cap })\subset \mathrm{cl}\,\Big (\int _{\varOmega }N\big (\bar{x};M(\omega )\big )d\mu (\omega )\Big ), \end{aligned}$$

which clearly ensures the fulfillment of condition (37) if (36) holds.

To verify (ii) in the case of cone values, we proceed similarly by using the second statement of Theorem 4.2 instead of Theorem 5.5. Note that in this case neither CHIP nor normal regularity assumptions are needed. \(\square \)

Let us discuss the obtained estimates and their consequences.

Remark 6.2

(discussions on optimality conditions) Observe the following:

(a):

The qualification condition (36) holds automatically if h is locally Lipschitzian around \(\bar{x}\) due to the characterization \(\partial ^\infty h(\bar{x})=\{0\}\) of this property.

(b):

The result of Theorem 6.1(i) clearly yields those from [19, Theorem 4.2] and [14, Theorem 8.77] for problem (1) with countably many geometric constraints. Moreover, we significantly extend the previous developments in the case of countable constraints by dropping the normal qualification condition from Definition 4.5(ii) imposed in [14, 19]. The result of Theorem 6.1(ii) for cone-valued multifunctions without the normal regularity has never been observed earlier even for countably many constraints.

(c):

The result of Theorem 4.7(i) allows us, under the additional assumption therein, to avoid the closure operation in both assertions of Theorem 6.1.

(d):

Finally, let us mention that the normal regularity of \(M_\cap \) and the normal qualification condition (36) can be replaced by the assumption that h is Fréchet differentiable at \(\bar{x}\). Indeed, this follows by using [13, Proposition 1.107] instead of [13, Proposition 5.3] in the derivation of the necessary optimality conditions.

We conclude this section by specifying the results of Theorem 6.1 to the class of stochastic optimization problems with random inequality constraints

$$\begin{aligned} M(\omega ):=\big \{x\in {\mathbb {R}}^n\big |\;f(\omega ,x)\le 0\big \}, \end{aligned}$$
(38)

where \(f:\varOmega \times {\mathbb {R}}^n\rightarrow \overline{{\mathbb {R}}}\) is a normal integrand such that \(f_\omega (\cdot ):=f(\omega ,\cdot )\) is locally Lipschitzian around \(\bar{x}\) for almost all \(\omega \in \varOmega \).

Corollary 6.3

(necessary optimality conditions for stochastic programs with inequality constraints) Let \(\bar{x}\) be a local minimizer of problem (1) with the inequality constraints (38). The following assertions hold:

(i):

In addition to the normal regularity and CHIP assumptions of Theorem 6.1, impose the qualification conditions

$$\begin{aligned}&0\notin \partial f_\omega (\bar{x})\;\text { for almost all }\;\omega \in \varOmega _{f} :=\big \{\omega \in \varOmega \big |\;f(\omega ,\bar{x})=0\big \}, \end{aligned}$$
(39)
$$\begin{aligned}&\big (-\partial ^\infty h(\bar{x})\big )\cap \mathrm{cl}\,\Big (\int _{\varOmega _f}\mathrm{cone} \big (\partial f_\omega (\bar{x})\big )d\mu (\omega )\Big )=\{0\}. \end{aligned}$$
(40)

Then we have the necessary optimality condition

$$\begin{aligned} 0\in \partial h(\bar{x})+\mathrm{cl}\,\Big (\int _{\varOmega _f}\mathrm{cone} \big (\partial f_\omega (\bar{x})\big )d \mu (\omega )\Big ). \end{aligned}$$
(41)
(ii):

Assume that \(f(\omega ,\lambda x)\le \lambda f(\omega ,x)\) for all \(\lambda \ge 0\), all \(x\in {\mathbb {R}}^n\), and a.e. \(\omega \in \varOmega \). Then the fulfillment of (40) ensures that (41) holds at \(\bar{x}=0\).

Proof

It follows from (39) by [21, Proposition 10.3] that

$$\begin{aligned} N\big (\bar{x};M(\omega )\big )\subset {\mathbb {R}}_{+}\partial f_\omega (\bar{x}) \;\text { for almost all }\omega \in \varOmega \;\text { with }\;f_\omega (\bar{x})=0. \end{aligned}$$

If \(f_\omega (\bar{x})<0\), we get by the continuity of \(f_\omega \) that \(N(\bar{x};M(\omega ))=\{0\}\). Thus

$$\begin{aligned} \int _{\varOmega }N\big (\bar{x};M(\omega )\big )d\mu (\omega )\subset \mathrm{cl}\,\Big (\int _{\varOmega _f}\mathrm{cone}\big (\partial f_\omega (\bar{x})\big )d\mu (\omega )\Big ), \end{aligned}$$

which allows us to deduce assertion (i) from Theorem 6.1(i). To verify assertion (ii), it is sufficient to observe that the additional assumption therein ensures that the sets \(M(\omega )\) in (38) are cones, and then apply Theorem 6.1(ii). \(\square \)
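For an illustration of Corollary 6.3 (the data below are chosen here only for this purpose), let \(\varOmega :=[0,1]\) with the Lebesgue measure \(\mu \), \(f(\omega ,x):=x_1+\omega x_2\), and \(h(x):=-x_1-x_2\) on \({\mathbb {R}}^2\). Then \(M_\cap =\{x|\;x_1\le 0,\;x_1+x_2\le 0\}\), the point \(\bar{x}=0\) is a global minimizer of (1), \(\varOmega _f=[0,1]\), and \(\partial f_\omega (\bar{x})=\{(1,\omega )\}\), so the qualification conditions (39) and (40) are satisfied (the latter due to the smoothness of h). The necessary optimality condition (41) is confirmed by the integrable selections \(\omega \mapsto k(1,\omega )\) of \(\mathrm{cone}(\partial f_\omega (\bar{x}))\) supported on \([1-1/k,1]\):

$$\begin{aligned} -\nabla h(\bar{x})=(1,1)=\lim _{k\rightarrow \infty }\int _{1-1/k}^{1}k\,(1,\omega )\,d\mu (\omega )\in \mathrm{cl}\,\Big (\int _{\varOmega _f}\mathrm{cone}\big (\partial f_\omega (\bar{x})\big )d\mu (\omega )\Big ). \end{aligned}$$

Observe that \((1,1)\) does not belong to the integral above without the closure, since any integrable \(\lambda \ge 0\) with \(\int _0^1\lambda (\omega )\,d\omega =\int _0^1\lambda (\omega )\omega \,d\omega \) must vanish almost everywhere; this shows that the closure operation in (41) is essential. Note also that, since \(f(\omega ,\cdot )\) is linear, assertion (ii) of Corollary 6.3 applies here directly at \(\bar{x}=0\).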

Note that the normal regularity assumption on the mapping (38) can be replaced by the subdifferential/lower regularity of \(f_\omega \) at \(\bar{x}\) (see [13, 21]) and that sufficient conditions ensuring the CHIP assumption for inequality constraints are given in Corollary 5.4.

7 Applications to semi-infinite programming

The concluding section of the paper is devoted to applications of the results obtained above to general problems of semi-infinite programming given by

$$\begin{aligned} \text{ minimize } \;h(x)\; \text{ subject } \text{ to } \;x\in M(t) \; \text{ for } \text{ all } \;t\in {{\mathcal {T}}}, \end{aligned}$$
(42)

where \(M:{{\mathcal {T}}}\rightrightarrows {\mathbb {R}}^n\) is a multifunction with closed values, and where the index set \({{\mathcal {T}}}\) is a metric space. The conventional setting of (42) concerns linear and convex problems with inequality constraints defined on compact sets \({{\mathcal {T}}}\), while more recently various classes of semi-infinite programs with inequality constraints on noncompact sets have also been under consideration; see, e.g., [3, 6, 11, 14, 19] and the references therein. Problems of type (42) with countable set constraints were studied in [14, 19].

Note that, in contrast to problem (1) from the previous section, program (42) does not explicitly contain any measure. However, we can associate with (42) a measure space constructed as follows. For a closed set \(A\subset {\mathcal {T}}\), let \({\mathcal {B}}(A)\) be the Borel \(\sigma \)-algebra on A. We say that a measure on \({\mathcal {B}}(A)\) is strictly positive if every nonempty open subset of A has strictly positive measure, and we denote by \(\mathfrak {M}_+(A)\) the set of all finite strictly positive measures on \({\mathcal {B}}(A)\). For simplicity, we confine ourselves to the case where M is outer semicontinuous, i.e., \(\mathop {\mathrm{Lim}\,\mathrm{sup}}_{s\rightarrow t}M(s)\subset M(t)\) for all \(t\in {{\mathcal {T}}}\).
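To illustrate this construction (the particular cases below are mentioned only for orientation), if \({{\mathcal {T}}}\) is a countable set equipped with the discrete metric, then \(\nu \in \mathfrak {M}_+({{\mathcal {T}}})\) if and only if

$$\begin{aligned} \nu \big (\{t\}\big )>0\;\text { for all }\;t\in {{\mathcal {T}}}\quad \text {and}\quad \sum _{t\in {{\mathcal {T}}}}\nu \big (\{t\}\big )<\infty , \end{aligned}$$

which corresponds to problems with countably many set constraints, while for \({{\mathcal {T}}}=[0,1]\) the Lebesgue measure provides an example of a finite strictly positive measure on \({\mathcal {B}}([0,1])\).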

The next theorem presents general necessary optimality conditions for nonsmooth and nonconvex semi-infinite programs of type (42) with infinitely many set constraints indexed via arbitrary metric spaces.

Theorem 7.1

(necessary optimality conditions for semi-infinite programs with set constraints) Let \(\bar{x}\) be a local minimizer of problem (42), where the cost function \(h(\cdot )\) is locally Lipschitzian around \(\bar{x}\).

(i):

Assume that the set \(\bigcap _{t\in {\mathcal {T}}}M(t)\) is normally regular at \(\bar{x}\) and that for each dense set \(A\subset {{\mathcal {T}}}\) the CHIP condition

$$\begin{aligned} T\Big (\bar{x};\bigcap _{t\in A}M(t)\Big )=\bigcap _{t\in A}T\big (\bar{x};M(t)\big ) \end{aligned}$$
(43)

is satisfied. Then for every measure \(\nu \in \mathfrak {M}_+({{\mathcal {T}}})\) we have

$$\begin{aligned} 0\in \partial h\big (\bar{x}\big )+\mathrm{cl}\Big (\int _{{{\mathcal {T}}}}N \big (\bar{x};M(t)\big )d\nu (t)\Big ). \end{aligned}$$
(44)
(ii):

Assume that the set M(t) is a cone for each \(t\in {{\mathcal {T}}}\) and that \(\bar{x}=0\). Then the optimality condition (44) holds at the origin for every measure \(\nu \in \mathfrak {M}_+({{\mathcal {T}}})\).

Proof

Denote by \(({{\mathcal {T}}},{\mathcal {A}},\mu )\) the completion of \(({{\mathcal {T}}},{\mathcal {B}}({{\mathcal {T}}}),\nu )\) and consider the following optimization problem of type (1) from Sect. 6:

$$\begin{aligned} \text{ minimize } \;h(x)\; \text{ subject } \text{ to } \;x\in M_{\cap }. \end{aligned}$$
(45)

Let us check that \(\bar{x}\) is a local minimizer of (45) and that all the assumptions of Theorem 6.1 are satisfied for (45). Indeed, the imposed outer semicontinuity of \(M(\cdot )\) ensures that the distance function \(t\mapsto d_{M(t)}(x)\) is l.s.c. on \({{\mathcal {T}}}\) for all \(x\in {\mathbb {R}}^n\), which yields the measurability of \(M(\cdot )\) with respect to \(({{\mathcal {T}}},{\mathcal {A}},\mu )\) by employing [21, Theorems 14.2 and 14.8].

It is easy to see that \(M_{\cap }\) contains the feasible set of (42). Furthermore, if \(x\in M_{\cap }\), then there exists a set \(A\in {\mathcal {A}}\) such that \(\mu ({{\mathcal {T}}}\backslash A)=0\) and \(x\in M(s)\) for all \(s\in A\). Observe that A is dense in \({{\mathcal {T}}}\); otherwise there would exist a nonempty open set U with \(A\cap U=\emptyset \), and hence \(\mu (U)\le \mu ({{\mathcal {T}}}\backslash A)=0\), which contradicts the strict positivity of \(\mu \). In particular, for every \(t\in {{\mathcal {T}}}\) there exists a sequence \(t_k\rightarrow t\) as \(k\rightarrow \infty \) with \(t_k\in A\) and \(x\in M(t_k)\) for all \(k\in {\mathbb {N}}\), and then the outer semicontinuity of \(M(\cdot )\) yields \(x\in M(t)\). This shows that \(x\in M(t)\) for all \(t\in {{\mathcal {T}}}\) if and only if \(x\in M_{\cap }\). It follows also from the above arguments that \(\bar{x}\) is a local minimizer of (45).

To verify now assertion (i) of the theorem, we are going to apply Theorem 6.1(i) to problem (45). As follows from the above, \(M_\cap \) is normally regular at \(\bar{x}\). Furthermore, the qualification condition (36) holds due to the imposed Lipschitz continuity of \(h(\cdot )\) around \(\bar{x}\); see Remark 6.2(a). To apply the result of Theorem 6.1(i), it remains to show that the assumed CHIP (43) yields the validity of the measurable CHIP for \(M(\cdot )\) at \(\bar{x}\) with respect to \(\mu \). To proceed, pick any \(v\in \bigcap _{t\in {{\mathcal {T}}}\;\mu \text {-a.e.}}T(\bar{x};M(t))\) and find a dense set \(A\subset {{\mathcal {T}}}\) of full measure such that \(v\in \bigcap _{t\in A}T(\bar{x};M(t))\). Hence (43) tells us that \(v\in T(\bar{x};\bigcap _{t\in A}M(t))\). It follows from the outer semicontinuity of \(M(\cdot )\) that

$$\begin{aligned} \bigcap _{t\in A}M(t)=\bigcap _{t\in {{\mathcal {T}}}}M(t), \; \text{ and } \text{ so } \;\bigcap _{t\in {{\mathcal {T}}}\;\mu \text {-a.e.}}T\big (\bar{x};M(t)\big )\subset T(\bar{x};M_\cap ), \end{aligned}$$

which justifies the measurable CHIP for \(M(\cdot )\) at \(\bar{x}\). Using finally the necessary optimality condition (37) of Theorem 6.1(i), we arrive at (44) and thus complete the proof of assertion (i). Assertion (ii) of the theorem follows directly from Theorem 6.1(ii) and the arguments above. \(\square \)

Remark 7.2

(Fréchet differentiable costs) It easily follows from the proof of Theorem 7.1 that the normal regularity of \(\bigcap _{t\in {\mathcal {T}}}M(t)\) at \(\bar{x}\) therein is not needed if the cost function h is Fréchet differentiable at \(\bar{x}\); see Remark 6.2(d).

Next we consider semi-infinite programs with inequality constraints:

$$\begin{aligned} \text{ minimize } \;h(x)\; \text{ subject } \text{ to } \;x \in M(t) :=\big \{x\big |\;f(t,x)\le 0\big \},\;t\in {{\mathcal {T}}}, \end{aligned}$$
(46)

where \(h:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) is continuously differentiable while \(f:{{\mathcal {T}}}\times {\mathbb {R}}^n\rightarrow {\mathbb {R}}\) is continuous with respect to t and continuously differentiable with respect to x.

Theorem 7.3

(optimality conditions for semi-infinite programs with inequality constraints) Let \(\bar{x}\) be a local minimizer of (46) such that

$$\begin{aligned} \bar{x}\in \mathrm{int}\Big (\bigcap _{t\in {{\mathcal {T}}}\backslash {{\mathcal {T}}}_f} \big \{x\in {\mathbb {R}}^n\big |\;f(t,x)<0\big \}\Big )\; \text{ with } \;{{\mathcal {T}}}_f:=\big \{t\in {{\mathcal {T}}}\big |\;f(t,\bar{x})=0\big \}, \end{aligned}$$

that \(\nabla _x f(t,\bar{x})\ne 0\) for all \(t\in {{\mathcal {T}}}_f\), and that the mapping \(t\mapsto \nabla _x f(t,\bar{x})\) is continuous on \({{\mathcal {T}}}_f\). Furthermore, suppose that the CHIP assumption (43) is satisfied at \(\bar{x}\) with \(A:={{\mathcal {T}}}_f\) therein. Then we have

$$\begin{aligned} 0\in \nabla h(\bar{x})+\mathrm{cl}\Big (\int _{{{\mathcal {T}}}_f}\mathrm{cone} \big \{\nabla _x f(t,\bar{x})\big \}d\nu (t)\Big )\; \text{ for } \text{ any } \;\nu \in \mathfrak {M}_{+}({{\mathcal {T}}}_f). \end{aligned}$$
(47)

Proof

Without loss of generality, from now on we consider problem (46) for \(t\in {{\mathcal {T}}}_f\) only; this is possible due to the imposed interiority assumption on the inactive constraints. Applying to this problem the results of Theorem 7.1 and Remark 7.2, we observe that all the corresponding assumptions can be easily verified with the exception of CHIP (43) over the set \({{\mathcal {T}}}_f\). Thus it remains to check that the imposed CHIP assumption yields the fulfillment of CHIP (43) for any dense subset \(A\subset {{\mathcal {T}}}_f\). Indeed, it follows from [21, Exercise 6.7] that

$$\begin{aligned} T\big (\bar{x};M(t)\big )=\big \{w\in {\mathbb {R}}^n\big |\; \big \langle \nabla _x f(t,\bar{x}),w\big \rangle \in T \big (f(t,\bar{x});{\mathbb {R}}_{-}\big )\big \},\quad t\in {{\mathcal {T}}}_f, \end{aligned}$$

which implies in turn the representation

$$\begin{aligned} T\big (\bar{x};M(t)\big )=\big \{w\in {\mathbb {R}}^n\big |\; \big \langle \nabla _x f(t,\bar{x}),w\big \rangle \le 0\big \}, \end{aligned}$$

and hence \(\bigcap _{t\in {{\mathcal {T}}}_f}T(\bar{x};M(t)) =\{w|\;\langle \nabla _x f(t,\bar{x}),w\rangle \le 0\; \text{ for } \text{ all } \;t\in {{\mathcal {T}}}_f\}\). Taking now any dense set \(A\subset {{\mathcal {T}}}_f\), we have that \(\bigcap _{t\in A}M(t)=\bigcap _{t\in {{\mathcal {T}}}_f}M(t)\). Furthermore, the continuity of \(t\mapsto \nabla _x f(t,\bar{x})\) ensures that

$$\begin{aligned}&T\Big (\bar{x};\bigcap _{t\in A} M(t)\Big )=T\Big (\bar{x}; \bigcap _{t\in {{\mathcal {T}}}_f}M(t)\Big )=\bigcap _{t\in {{\mathcal {T}}}_f}T \big (\bar{x};M(t)\big )\\&\quad =\big \{w\in {\mathbb {R}}^n\big |\;\big \langle \nabla _x f(t,\bar{x}),w \big \rangle \le 0\; \text{ for } \text{ all } \;t\in A\big \}=\bigcap _{t\in A}T \big (\bar{x};M(t)\big ), \end{aligned}$$

which verifies the CHIP assumption of Theorem 7.1. Applying finally Theorem 7.1 to (46) and arguing as in the proof of Corollary 6.3, we arrive at the claimed necessary optimality condition (47) and thus complete the proof of the theorem. \(\square \)

To conclude, let us present a useful consequence of Theorem 7.3, where the CHIP assumption is automatically satisfied.

Corollary 7.4

(optimality conditions for semi-infinite programs with linear inequality constraints) Let \(\bar{x}\) be a local minimizer of the problem:

$$\begin{aligned} \text{ minimize } \;h(x)\; \text{ subject } \text{ to } \;\langle a(t),x\rangle \le b(t)\; \text{ for } \text{ all } \;t\in {{\mathcal {T}}}, \end{aligned}$$

where \(a:{{\mathcal {T}}}\rightarrow {\mathbb {R}}^n\) and \(b:{{\mathcal {T}}}\rightarrow {\mathbb {R}}\) are continuous functions with \(a(t)\ne 0\) for all \(t\in {{\mathcal {T}}}_f\) from Theorem 7.3 with \(f(t,x):=\langle a(t),x\rangle -b(t)\). Assume that \(\bar{x}\in \mathrm{int}\bigcap _{t\in {{\mathcal {T}}} \backslash {{\mathcal {T}}}_f}\{x\in {\mathbb {R}}^n|\;\langle a(t),x\rangle <b(t)\}\). Then we have

$$\begin{aligned} 0\in \nabla h(\bar{x})+\mathrm{cl}\Big (\int _{{{\mathcal {T}}}_f}\mathrm{cone} \big \{a(t)\big \}d\nu (t)\Big )\; \text{ for } \text{ any } \;\nu \in \mathfrak {M}_{+}({{\mathcal {T}}}_f). \end{aligned}$$

Proof

It follows directly from Theorem 7.3, where the normal regularity assumption is dispensed with due to the Fréchet differentiability of h (see Remarks 6.2(d) and 7.2), and where the CHIP assumption holds by Corollary 5.4. \(\square \)
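For a simple illustration of Corollary 7.4 (the data are chosen here only for concreteness), consider the program

$$\begin{aligned} \text{ minimize } \;h(x):=-x_1-x_2\; \text{ subject } \text{ to } \;\langle (\cos t,\sin t),x\rangle \le 0\; \text{ for } \text{ all } \;t\in [0,\pi /2]. \end{aligned}$$

The feasible set is the nonpositive orthant of \({\mathbb {R}}^2\), the point \(\bar{x}=0\) is its unique global minimizer, all the indexes are active with \({{\mathcal {T}}}_f=[0,\pi /2]\), and \(a(t)=(\cos t,\sin t)\ne 0\) therein. Taking, e.g., \(\nu \) as the Lebesgue measure on \([0,\pi /2]\) and the selections \(t\mapsto (\sqrt{2}\sin r)^{-1}(\cos t,\sin t)\) of \(\mathrm{cone}\{a(t)\}\) supported on \([\pi /4-r,\pi /4+r]\) with any fixed \(0<r<\pi /4\), we get

$$\begin{aligned} \int _{\pi /4-r}^{\pi /4+r}\frac{(\cos t,\sin t)}{\sqrt{2}\sin r}\,d\nu (t)=(1,1)=-\nabla h(\bar{x}), \end{aligned}$$

and hence the necessary optimality condition of Corollary 7.4 is satisfied at \(\bar{x}\) for this choice of \(\nu \), in this case even without the closure operation.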

We refer the reader to [3, 6, 11, 14,15,16,17] and the bibliographies therein for various qualification conditions that make it possible to avoid the closure operation in necessary optimality conditions of type (47) for particular forms of semi-infinite programs with inequality constraints.