1 Introduction

Optimization problems in which both minimization and maximization processes are considered are known in the area of mathematical programming as minimax problems. This kind of optimization problem is an important research focus in mathematics, economics, and computer science (e.g., Chebyshev’s theory of best approximation [8], multiobjective optimization [10], game theory [5, 30]). According to Demyanov and Malozemov [8], minimax programming problems can be split into two classes, each of independent interest: discrete minimax problems and continuous minimax problems. The best approximation in Chebyshev’s sense is a nonsmooth discrete minimax problem (see [8, p. 1]), while a matrix game leads to a continuous minimax problem (see [8, p. 3], [30]).

Optimal value functions of parametric optimization problems play a crucial role in optimal control and in the study of optimization problems. Investigations of the sensitivity of parametric optimization problems are vital in optimization and variational analysis. They allow us to understand, via generalized derivatives/subdifferentials, how the optimal value function behaves when the parameters appearing in the problem under consideration are perturbed. Studies of differentiability properties of optimal value functions in parametric mathematical programming are usually classified as studies on sensitivity analysis of optimization problems. For nonconvex optimization problems, Mordukhovich and his co-authors in [26, 20, Chapter 1], [25], Huong et al. [11], Jourani [17], and Penot and his co-workers in [16, 28, Chapter 4] have derived formulas for computing or estimating the Fréchet (regular) subdifferential, the Mordukhovich (limiting) subdifferential, and the G-subdifferential of optimal value functions. Meanwhile, by using sum rules known as Moreau-Rockafellar theorems and appropriate regularity conditions, the papers [1, 2, 11] and the works [21, Chapter 3], [22, 23, Chapter 4], [24] have provided formulas for computing the subdifferential in the sense of convex analysis of optimal value functions of convex optimization problems.

To solve optimization problems, one often uses optimality conditions, in particular, necessary optimality conditions. Necessary optimality conditions help us solve problems through manual calculations and are useful as stopping criteria in algorithms. It turns out that the theory of optimality conditions is strongly linked with sensitivity analysis; see, e.g., [4, Chapter 4] and [29, Chapter 3]. In fact, various results concerning optimality conditions were obtained as by-products of research on sensitivity analysis.

In this paper, we are interested in discrete minimax problems. We aim at studying optimality conditions and sensitivity analysis of parametric nonconvex minimax programming problems. A major part of the literature on minimax programming problems concerns optimality conditions as well as duality results in both the differentiable and nondifferentiable cases; see, e.g., [6, 8, Chapters 3, 4, and 6], [3, 9, 31, 32] and the references therein. Namely, Chuong and Kim [6] and Zhong and Jin [32] have studied optimality conditions and duality in nondifferentiable minimax programming and vector optimization in an Asplund space setting and a finite-dimensional space setting, respectively, by using Mordukhovich subdifferentials and some advanced tools of variational analysis. In these necessary conditions, the Lipschitz continuity of the functions involved is the key assumption. Meanwhile, Bao et al. [3] have considered minimax optimization problems with inequality, equality, and geometric constraints in the setting of nondifferentiable and non-Lipschitz functions in Asplund spaces. In detail, necessary optimality conditions in terms of upper and/or lower subdifferentials of both cost and constraint functions, as well as necessary optimality conditions in the fuzzy form, have been given.

Concerning the study of sensitivity analysis in parametric minimax programming problems, the results in [7] are, to the best of our knowledge, the first ones. Namely, the authors in [7] obtained a differential expansion of the optimal value function in the neighborhood of a certain point (Theorems 1 and 2) under some suitable conditions. Apart from [7], we have not found any other works investigating the sensitivity analysis of parametric minimax programming problems.

Qualification conditions are sufficient conditions for the validity of fundamental calculus rules in variational analysis, convex analysis, and optimization theory. For instance, the well-known Moreau-Rockafellar theorem in convex analysis supplies the formula for computing the subdifferential of the sum of two proper convex functions under the condition that one of these functions is continuous at a point belonging to the domain of the other. Meanwhile, in variational analysis, Lipschitz continuity guarantees the sum rule for Mordukhovich subdifferentials. Metric qualification conditions, which are formulated through distance functions, were first studied and developed in the papers by Ioffe, Penot, Jourani and Thibault [12, 16, 18] and recently by Huong et al. [11] for the nonconvex case. According to Ioffe [14], the metric qualification condition is even less restrictive than the weakest qualification conditions used in convex analysis. The relationship between the metric qualification condition and other qualification conditions (e.g., the calmness qualification condition, the subregularity qualification condition, the metric regularity condition, the uniform lower semicontinuity) was examined in [14, 15, 19]. These conditions play vital roles in deriving intersection rules for normal cones; see, e.g., [11, 13, 27, 28, Theorem 6.41]. The latter rules in turn play an important role in the study of optimality conditions and sensitivity analysis of parametric optimization problems.

As mentioned above, we will study optimality conditions as well as sensitivity analysis of parametric nonconvex minimax programming problems in this paper. Our main tools are the sum rule for subdifferentials and the subdifferentiation of maximum functions under two kinds of qualification conditions: the metric qualification condition and the Lipschitz continuity, among other necessary conditions. More precisely, a formula for estimating the Mordukhovich subdifferential of maximum functions is presented in [20, Theorem 3.46] by employing the Lipschitz continuity of the component functions. Meanwhile, using the metric qualification condition to investigate the Mordukhovich subdifferential of maximum functions has not been addressed in the literature. So we present here a detailed proof for the latter (Proposition 3.3). For estimating or computing the sum rule for subdifferentials of nondifferentiable and non-Lipschitz functions in infinite-dimensional spaces, the sequentially normally compact (SNC) property of sets together with the normal qualification condition are the main ingredients. However, it has been shown in [27, Proposition 3.7] that the metric qualification condition is strictly weaker than the normal qualification condition. In other words, the metric qualification condition is a very weak condition stated in terms of some distance estimates and without any additional compactness assumptions. We then use these results to characterize necessary optimality conditions for the problem under consideration (Theorems 3.1 and 3.2). We also design an example to show that Theorem 3.1, where the metric qualification conditions are required, is applicable while Theorem 3.2, based on the Lipschitz continuity, is not. Moreover, it is worth emphasizing that the study of sensitivity for parametric nonconvex minimax programming problems has not received much attention since the first work by Darkhovskii and Levitin [7] in 1976. Given the wide applications of the class of minimax programming problems, we hope that the contributions of this paper (Theorems 3.1, 3.2, 4.1, and 4.2) will help to enrich the results on optimality conditions as well as sensitivity analysis of minimax programming problems.

The remainder of the paper is organized as follows. Section 2 presents the problem formulation and some auxiliary concepts in variational analysis from the books [20, 28]. Optimality conditions are established in Sect. 3. These results are then applied to multiobjective optimization problems. Section 4 is devoted to the study of differential stability of the parametric nonconvex minimax programming problem under two kinds of qualification conditions: the metric qualification condition and the Lipschitz continuity, among other necessary conditions. The last section provides some concluding remarks.

Throughout the paper, we use the standard notations in variational analysis; see e.g., [20]. The topological dual spaces of Banach spaces \((X, \Vert \cdot \Vert _X)\) and \((Y, \Vert \cdot \Vert _Y)\) are denoted, respectively, by \(X^*\) and \(Y^*\). We write \(\mathbb {B} (x, r)\) and \(\mathbb {B} _{X^*}\) for the open ball centered at x with radius \(r>0\) and the closed unit ball of \(X^*\), respectively. The notation \(x_k^*\rightarrow x^*\) means the norm convergence to \(x^*\) of the sequence \(\{x_k^*\}_{k\in \mathbb {N}}\) with \(\mathbb {N}:=\{1,2,\ldots \}\), while \(x_k^* \xrightarrow {w^*} x^*\) indicates the convergence to \(x^*\) of \(\{x_k^*\}_{k\in \mathbb {N}}\) in the weak\(^*\) topology of \(X^*\).

2 Preliminaries

Let X and Y be two Banach spaces and let m be a positive integer. We consider the parametric nonconvex minimax programming problem

$$\begin{aligned} \min _{y\in G(x)}\, \max _{k\in K}\varphi _k(x,y) \end{aligned}$$
(\(P_x\))

depending on the parameter x, where \(G:X \rightrightarrows Y\) is a given multifunction, \(K:=\{1, \ldots , m\}\), \(\varphi _k:X\times Y\rightarrow \overline{\mathbb {R}}: = \mathbb {R}\cup \{- \infty , + \infty \}\), \(k\in K\), are proper extended-real-valued functions. Here and subsequently, we denote

$$\begin{aligned} \varphi (x,y):=\max _{k\in K}\varphi _k(x,y) \end{aligned}$$

for all \((x, y)\in X\times Y\). Then the optimal value function \(\mu :X\rightarrow \overline{\mathbb {R}}\) of (\(P_x\)) is

$$\begin{aligned} \mu (x)=\inf _{y\in G(x)} \varphi (x,y), \end{aligned}$$
(2.1)

where by convention, \(\mu (x)\) is defined to be \(+\infty \) if \(x\notin \text{ dom }\,G\). We define the solution map \(M: X \rightrightarrows Y\) of (\(P_x\)) by

$$\begin{aligned} M(x):=\{y\in G(x)\,|\, \mu (x)=\varphi (x,y)\}. \end{aligned}$$
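As a quick numerical illustration of \(\varphi \), \(\mu \), and M, the following Python sketch approximates them on a grid for a made-up instance (two smooth component functions and the hypothetical interval map \(G(x)=[x-2,x+2]\); none of these data come from the paper):

```python
import numpy as np

# A toy instance of (P_x) with made-up data: m = 2 component functions
# and G(x) = [x - 2, x + 2]. The paper allows extended-real-valued
# phi_k and a general multifunction G between Banach spaces.
phi_k = [lambda x, y: (y - x) ** 2,
         lambda x, y: abs(y) - 1.0]

def phi(x, y):
    """phi(x, y) = max_{k in K} phi_k(x, y)."""
    return max(f(x, y) for f in phi_k)

def mu_and_M(x, grid, tol=1e-8):
    """Grid approximation of mu(x) = inf_{y in G(x)} phi(x, y) and of M(x)."""
    feasible = grid[(grid >= x - 2.0) & (grid <= x + 2.0)]  # G(x)
    values = np.array([phi(x, y) for y in feasible])
    mu = values.min()
    M = feasible[values <= mu + tol]  # approximate solution set
    return mu, M

grid = np.linspace(-5.0, 5.0, 10001)
mu, M = mu_and_M(0.5, grid)
print(f"mu(0.5) ~ {mu:.4f}, M(0.5) ~ {M}")  # here mu(0.5) = 0 at y = 0.5
```

For this instance one finds \(\mu (0.5)=0\) and \(M(0.5)=\{0.5\}\), since \((y-0.5)^2\ge 0\) with equality only at \(y=0.5\), where \(|y|-1<0\).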

Definition 2.1

For a given parameter x, a point \({\bar{y}}\in G(x)\) is said to be a local optimal solution of problem (\(P_x\)) if there exists a neighborhood U of \({\bar{y}}\) such that

$$\begin{aligned} \varphi (x, {\bar{y}})\le \varphi (x,y)\ \ \forall y\in U\cap G(x). \end{aligned}$$
(2.2)

If the inequality in (2.2) holds for every \(y\in G(x)\), then \({\bar{y}}\) is called a global optimal solution (or simply, optimal solution) of (\(P_x\)). The latter means that \({\bar{y}}\in M(x)\).

In this paper, we aim to investigate optimality conditions and sensitivity analysis of parametric minimax programming problem (\(P_x\)). To do that, we need some concepts and results from the books [20, 28].

For a set-valued map \(F:X \rightrightarrows X^*\), the limiting construction

$$\begin{aligned} \mathop {\textrm{Lim}\,\textrm{sup}}\limits _{x\rightarrow {\bar{x}}} F(x)\!:=\! \left\{ \! x^*\in X^* \!\mid \! \exists x_k \!\rightarrow \bar{x},\, \ x_k^* \xrightarrow {w^*} x^* \ \text{ with }\ x^*_k\in F(x_k), \forall k\in \mathbb {N} \right\} \end{aligned}$$

is known as the sequential Painlevé-Kuratowski upper limit of F as \(x\rightarrow {\bar{x}}\) with respect to the norm topology of X and the weak\(^*\) topology of \(X^*\).
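For instance, for the map \(F:\mathbb {R} \rightrightarrows \mathbb {R}\) given by \(F(x)=\{1\}\) for \(x>0\), \(F(x)=\{-1\}\) for \(x<0\), and \(F(0)=\{0\}\), one readily checks that \(\mathop {\textrm{Lim}\,\textrm{sup}}\limits _{x\rightarrow 0} F(x)=\{-1,0,1\}\); in particular, the upper limit can be strictly larger than \(F({\bar{x}})\).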

If \(\varOmega \subset X\) is a given subset, the notation \(u \xrightarrow {\varOmega }x\) means that \(u\rightarrow x\) and \(u\in \varOmega \).

Definition 2.2

(See [20, Definition 1.1]) Let \(\varOmega \) be a nonempty subset of X.

(i) For any \(x \in \varOmega \) and \(\varepsilon \ge 0\), the set of \(\varepsilon \)-normals to \(\varOmega \) at x is defined by

$$\begin{aligned} {\widehat{N}}_\varepsilon (x;\varOmega ):=\Big \{ x^*\in X^*\mid \limsup \limits _{u \xrightarrow {\varOmega }x} \dfrac{\langle x^*, u-x \rangle }{\Vert u-x\Vert } \le \varepsilon \Big \}. \end{aligned}$$

The set \({\widehat{N}}(x;\varOmega ):={\widehat{N}}_0(x;\varOmega )\) is called the Fréchet normal cone or the regular normal cone to \(\varOmega \) at x. If \(x \not \in \varOmega \), we put \({\widehat{N}}_\varepsilon (x;\varOmega )=\emptyset \) for all \(\varepsilon \ge 0.\)

(ii) Let \({\bar{x}} \in \varOmega \). The set

$$\begin{aligned} N({\bar{x}}; \varOmega ) :=\mathop {\textrm{Lim}\,\textrm{sup}}_{x \rightarrow {\bar{x}},\varepsilon \downarrow 0}\, {\widehat{N}}_\varepsilon (x; \varOmega ), \end{aligned}$$

is called the Mordukhovich normal cone or the limiting normal cone to \(\varOmega \) at \({\bar{x}}\). We put \(N({\bar{x}};\varOmega )=\emptyset \) if \({\bar{x}} \not \in \varOmega .\)

It is clear that \({\widehat{N}}(x; \varOmega ) \subset N(x; \varOmega )\) for all \(\varOmega \subset X\) and \(x \in \varOmega \). In the case where \(\varOmega \) is convex, the Mordukhovich normal cone and the Fréchet normal cone coincide with the normal cone of convex analysis, i.e.,

$$\begin{aligned} N({\bar{x}}; \varOmega )={\widehat{N}}({\bar{x}}; \varOmega )=\{x^*\in X^* \mid \langle x^*, x-{\bar{x}}\rangle \le 0, \ \forall x\in \varOmega \}, \end{aligned}$$

(see [28, Exercise 6, p. 174 and Proposition 6.6]).
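For instance, for the convex set \(\varOmega =\mathbb {R}^2_+\subset \mathbb {R}^2\) this formula gives

$$\begin{aligned} N((0,a); \mathbb {R}^2_+)=(-\infty , 0]\times \{0\} \ \text{ for }\ a>0 \quad \text{ and }\quad N((0,0); \mathbb {R}^2_+)=-\mathbb {R}^2_+, \end{aligned}$$

while \(N({\bar{x}}; \varOmega )=\{(0,0)\}\) at every interior point \({\bar{x}}\) of \(\varOmega \).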

Let \(f: X\rightarrow \overline{\mathbb {R}}\) be an extended-real-valued function. The domain and epigraph of f are given, respectively, by

$$\begin{aligned} \mathrm{{dom}}\, f:=\{ x \in X \mid f (x)< \infty \} \end{aligned}$$

and

$$\begin{aligned} \mathrm{{epi}}\, f:=\{ (x, \alpha ) \in X \times \mathbb {R}\mid \alpha \ge f (x)\}. \end{aligned}$$

One says that f is proper if \(\text{ dom }\,f\) is nonempty, and if \(f(x)>-\infty \) for all \(x\in X\). Recall that f is lower semicontinuous (l.s.c.) at some \({\bar{x}} \in X\) if for every real number \(r < f({\bar{x}})\) there exists some member V of the family \({\mathcal {N}}({\bar{x}})\) of neighborhoods of \({\bar{x}}\) such that \(r< f(v)\) for all \(v\in V\). We say that f is l.s.c. around \({\bar{x}}\) when it is l.s.c. at any point of some neighborhood of \({\bar{x}}\). If f is l.s.c. at every \(x \in X,\) f is said to be l.s.c. on X. We observe that f is automatically lower semicontinuous at \({\bar{x}}\) when \(f ({\bar{x}}) = -\infty \); when \(f ({\bar{x}}) = +\infty \) the lower semicontinuity of f means that the values of f remain as large as required, provided one stays in some small neighborhood of \({\bar{x}}\).

The upper semicontinuity (u.s.c.) of f is defined symmetrically from the lower semicontinuity of \(-f\). Obviously, the function f is continuous at \({\bar{x}}\) iff it is both lower semicontinuous and upper semicontinuous at \({\bar{x}}\).

Following [28, Proposition 1.15], f is lower semicontinuous iff \(\text{ epi }\,f\) is a closed set in \(X \times \mathbb {R}\).

Definition 2.3

(See [20, Definition 1.77]) Let \(f: X\rightarrow \overline{\mathbb {R}}\) be an extended-real-valued function and let \({\bar{x}} \in X\). Suppose that \(f(\bar{x})\) is finite.

(i) The set

$$\begin{aligned} \partial f(\bar{ x}):=\{x^* \in X^*\mid (x^*, -1) \in N( (\bar{x}, f(\bar{ x})); \mathrm{{epi}}\, f)\} \end{aligned}$$

is said to be the Mordukhovich subdifferential or the limiting subdifferential of f at \(\bar{x}\).

(ii) The set

$$\begin{aligned} \partial ^\infty f(\bar{ x}):=\{x^* \in X^*\mid (x^*, 0) \in N( (\bar{x}, f(\bar{ x})); \mathrm{{epi}}\, f)\} \end{aligned}$$

is called the singular subdifferential of f at \(\bar{x}\).

One puts \(\partial f (\bar{x}): =\partial ^\infty f(\bar{x}):=\emptyset \) when \(f({\bar{x}})\in \{- \infty , +\infty \}\).

Recall that the function f is Lipschitz continuous around \({\bar{x}}\) (cf. [20, p. 19]) if there are a neighborhood U of \({\bar{x}}\) and a constant \(\ell \ge 0\) such that

$$\begin{aligned} { | f(x)-f(y)|} \le \ell \Vert x-y\Vert , \ \forall x, y \in U. \end{aligned}$$

Since \(\partial ^\infty f({\bar{x}})=\{0\}\) if f is Lipschitz continuous around \({\bar{x}}\) (see [20, Corollary 1.81]), the singular subdifferential turns out to be useful for the study of non-Lipschitzian functions. Namely, it is utilized in establishing appropriate qualification conditions for subdifferential calculus rules as in [20, Chapter 3].

If f is convex, the Mordukhovich subdifferential and the subdifferential of convex analysis coincide, i.e.,

$$\begin{aligned} \partial f({\bar{x}})=\{x^*\in X^*\mid f(x)-f({\bar{x}}) \ge \langle x^*, x-{\bar{x}}\rangle , \ \forall x\in X\}, \end{aligned}$$

(see [28, Proposition 6.17(b)]).
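As a standard illustration, for \(f(x)=|x|\) on \(X=\mathbb {R}\) this formula yields

$$\begin{aligned} \partial f(0)=\{x^*\in \mathbb {R}\mid |x| \ge x^* x, \ \forall x\in \mathbb {R}\}=[-1,1], \end{aligned}$$

while \(\partial f({\bar{x}})=\{1\}\) for \({\bar{x}}>0\) and \(\partial f({\bar{x}})=\{-1\}\) for \({\bar{x}}<0\).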

Consider the indicator function \(\delta _\varOmega : X \rightarrow \mathbb {R}\cup \{+\infty \}\) defined by \(\delta _\varOmega (x) =~0\) for \(x\in \varOmega \) and \(\delta _\varOmega (x) =+\infty \) otherwise. We have the following relation between the Mordukhovich normal cone and the Mordukhovich subdifferential of the indicator function (see [20, Proposition 1.79]):

$$\begin{aligned} N({\bar{x}};\varOmega )=\partial \delta _\varOmega (\bar{x})=\partial ^\infty \delta _\varOmega (\bar{x}), \ \forall {\bar{x}}\in \varOmega . \end{aligned}$$

Let \(F: X \rightrightarrows Y\) be a set-valued map between Banach spaces. The graph and the domain of F are given respectively by the formulas

$$\begin{aligned}{} & {} {\text{ gph }\,} F:=\{ (x,y) \in X \times Y \mid y \in F(x)\}, \\{} & {} \quad \mathrm{{dom}}\, F:=\{x \in X \mid F(x) \not = \emptyset \}. \end{aligned}$$

Recall that F is closed if \(\text{ gph }\,F\) is a closed subset of \(X\times Y\).

Equipping the product space \(X\times Y\) with the norm \(\Vert (x,y)\Vert :=\Vert x\Vert +\Vert y\Vert \), by the above notion of the Mordukhovich normal cone, one can define the concept of the Mordukhovich coderivative (also called the limiting coderivative) of set-valued maps as follows.

Definition 2.4

(See [20, p. 40, 41]) The Mordukhovich coderivative of F at \(({\bar{x}}, {\bar{y}}) \in {\text{ gph }\,}F\) is the multifunction \( D^* F({\bar{x}}, {\bar{y}}): Y^* \rightrightarrows X^*\) given by

$$\begin{aligned} D^* F({\bar{x}}, {\bar{y}})(y^*):=\left\{ x^* \in X^* \mid (x^*, -y^*) \in N ( ({\bar{x}}, {\bar{y}}); {\text{ gph }\,} F)\right\} , \ \forall y^* \in Y^*. \end{aligned}$$

If \(({\bar{x}}, {\bar{y}}) \notin {\text{ gph }\,} F\) then we accept the convention that the set \(D^* F({\bar{x}}, {\bar{y}})(y^*)\) is empty for any \(y^* \in Y^*\).

Given a nonempty subset \(\varOmega \) of a Banach space X, the distance function to \(\varOmega \) is given by

$$\begin{aligned} d(x,\varOmega ):=\inf \limits _{y\in \varOmega } \Vert x-y\Vert , \quad x \in X. \end{aligned}$$

Clearly, \(d(\cdot ,\varOmega ): X \rightarrow \mathbb {R}\) is a Lipschitz continuous function with modulus one.
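For example, for \(\varOmega =[a,b]\subset \mathbb {R}\) one has

$$\begin{aligned} d(x,[a,b])=\max \{a-x,\, 0,\, x-b\}, \quad x \in \mathbb {R}, \end{aligned}$$

which is indeed Lipschitz continuous with modulus one.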

We now recall the concept of the metric qualification condition.

Definition 2.5

Let \(C_i\), \(i=1,2,\ldots , m\), be nonempty subsets of a Banach space X and let \({\bar{x}} \in C:=\bigcap \limits _{i=1}^m C_i \). We say that the sets \(C_1,C_2,\ldots ,C_m\) satisfy the metric qualification condition (MQC) at \({\bar{x}}\) if there exist two numbers \(\gamma >0\) and \(\delta >0\) such that

$$\begin{aligned} d(x, C) \le \gamma \big [d(x, C_1)+d(x, C_2)+\ldots +d(x, C_m) \big ], \quad \forall x\in \mathbb {B}(\bar{x}, \delta ). \end{aligned}$$
(MQC)

The qualification condition (MQC) is the key factor for the main results of this paper. It has appeared in several works under different names. For instance, in [27] the authors named (MQC) the metric inequality, while it is called the linear coherence condition in [28, Theorems 4.75 and 6.41].
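To see that (MQC) is a genuine restriction, consider \(X=\mathbb {R}^2\) with the Euclidean norm and the closed convex sets \(C_1:=\{(t,s)\mid s\le 0\}\) and \(C_2:=\{(t,s)\mid s\ge t^2\}\), so that \(C=C_1\cap C_2=\{(0,0)\}\). For \(x=(t,0)\) one has

$$\begin{aligned} d(x, C)=|t|, \quad d(x, C_1)=0, \quad d(x, C_2)\le t^2, \end{aligned}$$

and hence no \(\gamma >0\) can satisfy the required inequality as \(t\rightarrow 0\); that is, (MQC) fails at (0, 0).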

We next present the intersection rule for normal cones under the metric qualification condition (MQC). It appeared in [28, Theorem 6.41], in [27, Theorem 3.8, (i) implies (iii)], and in [11, Proposition 3.6 and Theorem 3.1]. Let us notice that the assumption that X is an Asplund space is indispensable, as shown by a counterexample in [27, p. 202]. Recall that a Banach space X is Asplund if every continuous convex function defined on an open convex subset W of X is Fréchet differentiable on a dense \(G_\delta \) subset D of W (see [28, Definition 3.96]).

Theorem 2.1

Suppose that X is an Asplund space and \(C_i\), \(i=1,2, \ldots , m\), are closed. If the condition (MQC) is satisfied at \({\bar{x}}\in C=\cap _{i=1}^m C_i\), then

$$\begin{aligned} N({\bar{x}}; C) \subset N({\bar{x}}; C_1) + N({\bar{x}}; C_2)+\ldots +N({\bar{x}}; C_m) . \end{aligned}$$

We end this section with relevant concepts and results from [20].

Definition 2.6

(See [20, Definition 1.63] and [25]) Let \(M: X \rightrightarrows Y\) be the solution map of the parametric optimization problem (\(P_x\)). One says that M is \(\mu \)-inner semicontinuous at \(({\bar{x}}, {\bar{y}})\in {\text{ gph }\,}M\) if for every sequence \(x_k \xrightarrow {\mu } {\bar{x}}\) (i.e., \(x_k \rightarrow {\bar{x}}\) with \(\mu (x_k)\rightarrow \mu ({\bar{x}})\)) there exists a sequence \(y_k \in M(x_k)\) converging to \({\bar{y}}\) as \(k\rightarrow \infty \).

The property of the solution map considered in the above definition extends the corresponding notion in [20, Definition 1.63] and adapts it to the solution map M of (\(P_x\)). The only difference is that the condition \(x_k \rightarrow {\bar{x}}\) in [20] is replaced by the condition \(x_k \xrightarrow {\mu } {\bar{x}}\), which makes the resulting property weaker. This does not cause any effect on the conclusions of [20, Theorem 1.108], as observed in [25].

Definition 2.7

(i) A subset \(\varOmega \) in a Banach space X is called sequentially normally compact (SNC) at \({\bar{x}}\) if for any sequences \(\varepsilon _k \downarrow 0,\ x_k \xrightarrow {\varOmega } {\bar{x}}\) and \(x^*_k \in {\widehat{N}}_{\varepsilon _k} (x_k; \varOmega )\) one has

$$\begin{aligned} \big [ x_k^* \xrightarrow {w^*} 0\big ] \Longrightarrow \big [ \Vert x_k^*\Vert \rightarrow 0\big ]\ \;\text{ as }\ k \rightarrow \infty . \end{aligned}$$

(ii) A function \(f: X \rightarrow \overline{\mathbb {R}}\) is called sequentially normally epi-compact (SNEC) at \({\bar{x}}\) if its epigraph is SNC at \(({\bar{x}}, f ({\bar{x}}))\).
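Let us note that in finite-dimensional spaces the weak\(^*\) convergence in \(X^*\) coincides with the norm convergence, so every nonempty set is SNC and every extended-real-valued function is SNEC; these properties become restrictive only in infinite dimensions.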

3 Optimality conditions

3.1 Optimality conditions for parametric minimax programming problems

Hereafter, unless otherwise specified, all spaces under consideration are Asplund spaces.

The first result of this section gives a useful representation of the coderivative for the intersection

$$\begin{aligned} (F_1 \cap F_2 \cap \ldots \cap F_m)(x):= F_1(x) \cap F_2(x)\cap \ldots \cap F_m(x) \end{aligned}$$

of set-valued maps, which is important for applications to the subdifferentiation of maximum functions.

Proposition 3.1

Let \(F_i: X \rightrightarrows Y\), \(i = 1, 2,\ldots ,m\) be closed. Assume that the sets \(\text{ gph }\,F_1\), \(\text{ gph }\,F_2,\ldots ,\text{ gph }\,F_m\) satisfy the metric qualification condition (MQC) at \(({\bar{x}}, {\bar{y}})\in \text{ gph }\,F_1 \cap \text{ gph }\,F_2\cap \ldots \cap \text{ gph }\,F_m\). Then, for any \(y^*\in Y^*\) we have

$$\begin{aligned} D^*\bigg (\bigcap _{i=1}^mF_i\bigg ) ({\bar{x}}, {\bar{y}}) (y^*) \subset \bigcup \limits _{\sum \limits _{i=1}^m y_i^*=y^*} \left[ \sum \limits _{i=1}^m D^* F_i ({\bar{x}}, {\bar{y}})(y_i^*) \right] . \end{aligned}$$

Proof

Fix \(y^* \in Y^*\) and take any \(x^*\in D^*(\cap _{i=1}^mF_i) ({\bar{x}}, {\bar{y}}) (y^*)\), where \({\bar{y}} \in (\cap _{i=1}^m F_i )({\bar{x}})\). By the definition of the coderivative, we have

$$\begin{aligned} (x^*,-y^*) \in N\big (({\bar{x}},{\bar{y}}); \text{ gph }\,(F_1\cap F_2\cap \ldots \cap F_m)\big ). \end{aligned}$$

Clearly, \(\text{ gph }\,(F_1 \cap F_2\cap \ldots \cap F_m) = \text{ gph }\,(F_1) \cap \text{ gph }\,(F_2)\cap \ldots \cap \text{ gph }\,(F_m)\). Under the validity of (MQC), we apply Theorem 2.1 to get

$$\begin{aligned} (x^*,-y^*) \in \sum \limits _{i=1}^m N\big (({\bar{x}},{\bar{y}}); \text{ gph }\,F_i\big ). \end{aligned}$$

The latter means that we can find \((x_i^*,- y_i^*) \in N(({\bar{x}},{\bar{y}}); \text{ gph }\,F_i)\), \(i=1,2,\ldots ,m\) such that \(x^*=x_1^*+x_2^*+\ldots +x_m^*\) and \(-y^*=-y_1^*-y_2^*-\ldots -y_m^*.\) Therefore \(x^* \in D^*F_1({\bar{x}}, {\bar{y}})(y_1^*) + D^*F_2({\bar{x}}, {\bar{y}})(y_2^*)+\ldots + D^*F_m({\bar{x}}, {\bar{y}})(y_m^*)\) and \(y^* = \sum \limits _{i=1}^m y_i^*,\) which justify the claimed inclusion. \(\square \)

Proposition 3.2

Let \(F_i: X \rightrightarrows Y\), \(i = 1, 2,\ldots ,m\) be set-valued maps. Assume that \(({\bar{x}}, {\bar{y}})\in \cap _{i=1}^m(\text{ gph }\,F_i)\) and \(({\bar{x}}, {\bar{y}})\in \textrm{int}\,(\text{ gph }\,F_i)\), \(i=1,2,\ldots , m-1\). Then, for any \(y^*\in Y^*\), one has

$$\begin{aligned} D^*\bigg (\bigcap _{i=1}^mF_i\bigg ) ({\bar{x}}, {\bar{y}}) (y^*) = D^* F_m ({\bar{x}}, {\bar{y}})(y^*). \end{aligned}$$

Proof

Clearly, \(\text{ gph }\,(F_1 \cap F_2\cap \ldots \cap F_m) = \text{ gph }\,F_1 \cap \text{ gph }\,F_2\cap \ldots \cap \text{ gph }\,F_m\). Hence,

$$\begin{aligned} N\big (({\bar{x}},{\bar{y}}); \text{ gph }\,(F_1\cap F_2\cap \ldots \cap F_m)\big )= N\big (({\bar{x}},{\bar{y}}); \cap _{i=1}^m\text{ gph }\,F_i \big ). \end{aligned}$$

Since \(({\bar{x}}, {\bar{y}})\in \textrm{int}\,(\text{ gph }\,F_i)\), \(i=1,2,\ldots , m-1\), there exists \(\delta >0\) such that \( \mathbb {B}( ({\bar{x}}, {\bar{y}}); \delta ) \subset \text{ gph }\,F_i\) for all \(i\in \{1,2,\ldots , m-1\}\). Hence,

$$\begin{aligned} \mathbb {B}( ({\bar{x}}, {\bar{y}}); \delta ) \!\cap \! \big (\cap _{i=1}^m\text{ gph }\,F_i\big )&\!=\! \big [ \mathbb {B}( ({\bar{x}}, {\bar{y}}); \delta ) \!\cap \! \text{ gph }\,F_1\cap ... \cap \text{ gph }\,F_{m-1}\big ]\cap \text{ gph }\,F_m\\&= \mathbb {B}( ({\bar{x}}, {\bar{y}}); \delta ) \cap \text{ gph }\,F_m. \end{aligned}$$

This implies that

$$\begin{aligned} \widehat{N}_\varepsilon ( (x,y); \cap _{i=1}^m\text{ gph }\,F_i)= \widehat{N}_\varepsilon ( (x,y); \text{ gph }\,F_m), \end{aligned}$$

for any \(\varepsilon \ge 0\) and \((x,y)\in \cap _{i=1}^m\text{ gph }\,F_i\) satisfying \((x,y)\in \mathbb {B}( ({\bar{x}}, {\bar{y}}); \delta )\). Consequently,

$$\begin{aligned} N\big (({\bar{x}},{\bar{y}}); \cap _{i=1}^m\text{ gph }\,F_i\big )=N\big (({\bar{x}},{\bar{y}}); \text{ gph }\,F_m\big ). \end{aligned}$$

Therefore, \( D^*\big (\bigcap _{i=1}^mF_i\big ) ({\bar{x}}, {\bar{y}}) (y^*) = D^* F_m ({\bar{x}}, {\bar{y}})(y^*)\) for any \(y^*\in Y^*\). The proof is complete. \(\square \)

Given \(f: X \rightarrow \overline{\mathbb {R}}\), we associate with it the epigraphical multifunction \(E_f:X \rightrightarrows \mathbb {R}\) defined by

$$\begin{aligned} E_f (x):= \{r \in \mathbb {R} \mid f(x)\le r\}. \end{aligned}$$

Note that \({\text{ gph }\,} E_f = \text{ epi }\,f\). Thus, for every \({\bar{x}}\) where f is finite, we can equivalently define the Mordukhovich and singular subdifferentials of f at \({\bar{x}}\) through the coderivative of \(E_f:\)

$$\begin{aligned} \partial f ({\bar{x}}) =D^* E_f ({\bar{x}}, f({\bar{x}})) (1) \ \text{ and }\ \partial ^\infty f ({\bar{x}}) =D^* E_f ({\bar{x}}, f({\bar{x}})) (0). \end{aligned}$$
(3.1)
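To illustrate (3.1), take again \(f(x)=|x|\) on \(X=\mathbb {R}\). Then \(\text{ epi }\,f=\{(x,\alpha )\in \mathbb {R}^2 \mid \alpha \ge |x|\}\) is a closed convex cone with \(N((0,0); \text{ epi }\,f)=\{(x^*,\alpha ^*)\mid \alpha ^*\le -|x^*|\}\), so that

$$\begin{aligned} \partial f (0) =D^* E_f (0,0) (1)=[-1,1] \ \text{ and }\ \partial ^\infty f (0) =D^* E_f (0,0) (0)=\{0\}, \end{aligned}$$

the latter being consistent with [20, Corollary 1.81], since f is Lipschitz continuous.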

The following proposition contains results for computing Mordukhovich and singular subdifferentials of the maximum function \(\varphi \) in Asplund spaces. Given \(({\bar{x}}, {\bar{y}})\in X \times Y\), we define the sets

$$\begin{aligned}{} & {} I({\bar{x}}, {\bar{y}}):= \{k\in K \mid \varphi _k({\bar{x}}, {\bar{y}})= \varphi ({\bar{x}}, {\bar{y}}) \},\\{} & {} \quad {\varLambda } ({\bar{x}}, {\bar{y}}):= \left\{ (\lambda _1,\ldots ,\lambda _m) \mid \lambda _k \ge 0, \ \sum \limits _{k\in K} \lambda _k =1, \lambda _k (\varphi _k ({\bar{x}}, {\bar{y}})- \varphi ({\bar{x}}, {\bar{y}}))=0 \right\} , \\{} & {} \quad \bar{\varLambda } ({\bar{x}}, {\bar{y}}):= \left\{ (\lambda _1,\ldots ,\lambda _m) \mid \ \sum \limits _{k\in K} \lambda _k =1, \lambda _k (\varphi _k ({\bar{x}}, {\bar{y}})- \varphi ({\bar{x}}, {\bar{y}}))=0 \right\} , \end{aligned}$$

where we recall that \(\varphi (x,y):=\max \limits _{k\in K}\varphi _k(x,y)\).

Proposition 3.3

Let \(({\bar{x}}, {\bar{y}})\in X\times Y\) be given. Let \(\varphi _k\) be l.s.c. for \(k\in I({\bar{x}}, {\bar{y}})\) and be u.s.c. at \(({\bar{x}}, {\bar{y}})\) for \(k \in K {\setminus } I({\bar{x}}, {\bar{y}})\). Suppose that there are \(\gamma >0\) and \(\delta >0\) such that

$$\begin{aligned} \begin{aligned}&d(((x,y),r), \text{ epi }\,\bar{\varphi }) \\&\, \le \gamma \!\sum \limits _{k\in I({\bar{x}},{\bar{y}})} \! d(((x,y),r), \text{ epi }\,\varphi _k), \, \forall (x,y)\in \mathbb {B}(({\bar{x}}, {\bar{y}}), \delta ), r \in \mathbb {B}(\varphi ({\bar{x}},\bar{y}), \delta ), \end{aligned} \end{aligned}$$
(3.2)

where \(\bar{\varphi }(x, y):=\max \limits _{k\in I({\bar{x}}, {\bar{y}})}\varphi _k(x,y)\). Then, one has

$$\begin{aligned}&\partial \varphi ({\bar{x}}, {\bar{y}}) \subset \bigcup \bigg \{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \circ \partial \varphi _k ({\bar{x}}, {\bar{y}}), \ (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\bigg \}, \nonumber \\&\partial ^\infty \varphi ({\bar{x}}, {\bar{y}})\subset \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \partial ^\infty \varphi _k ({\bar{x}},{\bar{y}}), \end{aligned}$$
(3.3)

where \(\lambda \circ \partial \varphi ({\bar{x}},{\bar{y}})\) is defined as \(\lambda \partial \varphi ({\bar{x}},{\bar{y}})\) when \(\lambda >0\) and as \(\partial ^\infty \varphi ({\bar{x}},{\bar{y}}) \) when \(\lambda = 0.\)

Proof

Let \((x^*, y^*)\in \partial \varphi ({\bar{x}}, {\bar{y}})\). By (3.1), one has

$$\begin{aligned} (x^*, y^*)\in D^*E_{\varphi }(({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(1). \end{aligned}$$

Clearly, \(E_{\varphi }( x, y)=\bigcap _{k\in K} E_{\varphi _k}(x, y)\) for all \((x, y)\in X\times Y\). Observe that \((({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))\in \textrm{int}\,( \text{ gph }\,E_{\varphi _k})\) for any \(k\notin I({\bar{x}}, {\bar{y}})\) due to the upper semicontinuity assumption. By Proposition 3.2, we have

$$\begin{aligned} \begin{aligned} D^*E_\varphi (({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(1)&= D^*\big (\cap _{k\in I({\bar{x}}, {\bar{y}})} E_{\varphi _k}\big ) (({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(1)\\ {}&=D^*E_{\bar{\varphi }} (({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(1). \end{aligned} \end{aligned}$$
(3.4)

It follows from (3.2) that \(\text{ gph }\,E_{\varphi _k}\), \(k\in I({\bar{x}}, {\bar{y}})\), satisfy the metric qualification condition (MQC) at \(({\bar{x}}, {\bar{y}})\). In addition, since \(\varphi _k\), \(k\in I({\bar{x}}, {\bar{y}})\), are l.s.c., it holds that \(\text{ epi }\,\varphi _k\), \(k\in I({\bar{x}}, {\bar{y}})\), are closed. In other words, \(\text{ gph }\,E_{\varphi _k}\), \(k\in I({\bar{x}}, {\bar{y}})\), are closed. By (3.4) and Proposition 3.1, we have

$$\begin{aligned} \begin{aligned} D^*E_{\varphi }(({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))&(1)=D^*E_{\bar{\varphi }} (({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(1)\\&\subset \!\!\! \bigcup _{\sum \limits _{k\!\in \! I({\bar{x}}, {\bar{y}})}\lambda _k\!=\!1} \left[ \sum \limits _{k\in I({\bar{x}}, {\bar{y}})}\!\!D^*E_{\varphi _k}(({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(\lambda _k)\right] . \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} \partial \varphi ({\bar{x}}, {\bar{y}}) \subset \bigcup \bigg \{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} D^*E_{\varphi _k}(({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(\lambda _k), \ (\lambda _1, \ldots ,\lambda _m)\in \bar{\varLambda }({\bar{x}},{\bar{y}})\bigg \}. \end{aligned}$$

So, there exist \((\lambda _1, \ldots ,\lambda _m)\in \bar{\varLambda }({\bar{x}},{\bar{y}})\) and \((x^*_k, y^*_k)\in D^*E_{\varphi _k}(({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}}))(\lambda _k)\), \(k\in I({\bar{x}}, {\bar{y}})\), satisfying

$$\begin{aligned} (x^*, y^*)=\sum _{k\in I({\bar{x}}, {\bar{y}})}(x^*_k, y^*_k). \end{aligned}$$

For each \(k\in I({\bar{x}}, {\bar{y}})\), we have

$$\begin{aligned} ((x^*_k, y^*_k), -\lambda _k)\in N((({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,E_{\varphi _k})= N((({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi _k). \end{aligned}$$

By [20, Proposition 1.76], \(\lambda _k\ge 0\) for all \(k\in I({\bar{x}}, {\bar{y}})\), i.e., \((\lambda _1, \ldots ,\lambda _m)\in ~{\varLambda }({\bar{x}},{\bar{y}})\). If \(\lambda _k>0\), then \(\Big (\dfrac{1}{\lambda _k}(x^*_k, y^*_k), -1\Big )\in N((({\bar{x}}, {\bar{y}}), \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi _k)\) and hence \(\dfrac{1}{\lambda _k}(x^*_k, y^*_k)\in \partial \varphi _k ({\bar{x}}, {\bar{y}})\), or, equivalently, \((x^*_k, y^*_k)\in \lambda _k \partial \varphi _k ({\bar{x}}, {\bar{y}})\). Otherwise, \((x^*_k, y^*_k)\in \partial ^\infty \varphi _k ({\bar{x}}, {\bar{y}})\). Therefore \((x^*_k, y^*_k)\in \lambda _k \circ \partial \varphi _k ({\bar{x}}, {\bar{y}})\) for all \(k\in I({\bar{x}}, \bar{y})\) and (3.3) is proved.

The proof for the singular subdifferential of \(\varphi \) is quite similar to that of (3.3), so it is omitted. \(\square \)

We are now in a position to give necessary optimality conditions for local optimal solutions of problem (\(P_x\)). By using the metric qualification condition, we obtain the first result as follows.

Theorem 3.1

(Necessary optimality condition I) Let \({\bar{x}} \in X\). Assume that \(\varphi _k({\bar{x}},\cdot )\), \(k\in I({\bar{x}}, {\bar{y}}),\) are l.s.c. and \(G({\bar{x}})\) is a closed set. If \({\bar{y}}\in Y\) is a local optimal solution of problem \((P_{{\bar{x}}})\) and the following conditions hold:

(i) \(\text{ epi }\,\bar{\varphi }({\bar{x}}, \cdot )\) and \(G({\bar{x}})\times \mathbb {R}\) satisfy the metric qualification condition (MQC) at \(({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}}))\), where \(\bar{\varphi }(x, y):=\max \limits _{k\in I({\bar{x}}, {\bar{y}})}\varphi _k(x,y)\);

(ii) \(\varphi _k({\bar{x}},\cdot )\), \(k \notin I({\bar{x}}, {\bar{y}})\), are u.s.c. at \({\bar{y}}\) and the sets \(\text{ epi }\,\varphi _k ({\bar{x}}, \cdot )\), \(k\in I({\bar{x}}, {\bar{y}})\), satisfy the metric qualification condition (MQC) at \(({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}}))\).

Then, one has

$$\begin{aligned} 0\in \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \circ \partial _y \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} +N({\bar{y}};G({\bar{x}})). \end{aligned}$$
(3.5)

Proof

Let \({\bar{x}}\in X\) be given and let \({\bar{y}}\) be a local optimal solution of \((P_{{\bar{x}}})\). Consider the function \(\psi : Y \rightarrow \overline{\mathbb {R}}\) given by \({\psi (y)}:= \varphi ({\bar{x}}, y) + \delta _{G({\bar{x}})}( y)\), where \(\delta _{G({\bar{x}})}(\cdot )\) is the indicator function of the set \(G({\bar{x}})\), i.e., \(\delta _{G({\bar{x}})}(y)=0\) for \(y\in G({\bar{x}})\) and \(\delta _{G({\bar{x}})}(y)=+\infty \) for \(y\notin G({\bar{x}})\). It is not difficult to see that \({\bar{y}}\) is a local optimal solution of \((P_{{\bar{x}}})\) if and only if the function \(\psi \) attains a local minimum at this point. By Proposition 1.114 in [20], one has

$$\begin{aligned} 0 \in \partial {\psi }({\bar{y}})=\partial \big (\varphi ({\bar{x}}, \cdot )+\delta _{G({\bar{x}})}(\cdot )\big )({\bar{y}}). \end{aligned}$$
(3.6)

This is equivalent to

$$\begin{aligned} (0,-1)\in N \big (({\bar{y}}, \psi ({\bar{y}})); \text{ epi }\,\psi \big )= N \big (\big ({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})+ \delta _{G({\bar{x}})}({\bar{y}})\big ); \text{ epi }\,\big (\varphi ({\bar{x}}, \cdot )+ \delta _{G({\bar{x}})}\big )\big ). \end{aligned}$$

It is not hard to show that \( \text{ epi }\,(\varphi ({\bar{x}}, \cdot )+ \delta _{G({\bar{x}})}) =\text{ epi }\,\varphi ({\bar{x}}, \cdot ) \cap (G({\bar{x}}) \times \mathbb {R})\) and \(\varphi ({\bar{x}}, {\bar{y}})+ \delta _{G({\bar{x}})}({\bar{y}})= \varphi ({\bar{x}},{\bar{y}}). \) Consequently, we have

$$\begin{aligned} (0,-1)&\in N \big (({\bar{y}}, \varphi ({\bar{x}},{\bar{y}})); \text{ epi }\,\varphi ({\bar{x}}, \cdot ) \cap (G({\bar{x}}) \times \mathbb {R})\big )\\&=N \big (({\bar{y}}, \varphi ({\bar{x}},{\bar{y}})); \text{ epi }\,\bar{\varphi }({\bar{x}}, \cdot ) \cap (G({\bar{x}}) \times \mathbb {R})\big ), \end{aligned}$$

where the last equality holds due to the upper semicontinuity of \(\varphi _k({\bar{x}}, \cdot )\) for \(k\notin I({\bar{x}}, {\bar{y}})\).

Under the assumption (i), we apply Theorem 2.1 to get

$$\begin{aligned} \begin{aligned} (0,-1)&\in N \big (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\bar{\varphi }({\bar{x}}, \cdot ) \big ) + N\big (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); G({\bar{x}}) \times \mathbb {R}\big )\\ {}&=N \big (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi ({\bar{x}}, \cdot ) \big ) + N\big (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); G({\bar{x}}) \times \mathbb {R}\big ). \end{aligned} \end{aligned}$$

Thus, we can find

$$\begin{aligned} (y^*_1,\alpha ^*_1) \in N \big (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi ({\bar{x}}, \cdot ) \big ) \end{aligned}$$
(3.7)

and

$$\begin{aligned} (y^*_2,\alpha ^*_2) \in N(({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); G({\bar{x}}) \times \mathbb {R}) \end{aligned}$$
(3.8)

such that \((0,-1) = (y^*_1,\alpha ^*_1)+ (y^*_2,\alpha ^*_2).\) This means that

$$\begin{aligned}&y_1^*=-y_2^* \end{aligned}$$
(3.9)
$$\begin{aligned}&\alpha _1^*+\alpha _2^*=-1. \end{aligned}$$
(3.10)

On one hand, from (3.8) we get \(y_2^*\in N({\bar{y}}; G({\bar{x}}))\) and \(\alpha _2^* \in N(\varphi ({\bar{x}}, {\bar{y}}); \mathbb {R})=~\{0\}.\) On the other hand, from (3.7), (3.9) and (3.10) we obtain

$$\begin{aligned} (y_1^*, -1)\in N (({\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi ({\bar{x}}, \cdot ) ). \end{aligned}$$

The latter means that \(y_1^*\in \partial _y \varphi ({\bar{x}}, {\bar{y}}).\) Therefore,

$$\begin{aligned} 0\in \partial _y \varphi ({\bar{x}}, {\bar{y}}) + N({\bar{y}}; G({\bar{x}})). \end{aligned}$$
(3.11)

We now estimate \( \partial _y \varphi ({\bar{x}}, {\bar{y}})\). By the assumption (ii), we apply Proposition 3.3 to obtain

$$\begin{aligned} \partial _y \varphi ({\bar{x}}, {\bar{y}}) \subset \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \circ \partial _y \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} . \end{aligned}$$
(3.12)

Therefore, by combining (3.11) and (3.12) we derive (3.5), which completes the proof. \(\square \)

The next theorem provides a necessary condition for local optimal solutions of problem (\(P_x\)) under assumptions related to the Lipschitz continuity.

Theorem 3.2

(Necessary optimality condition II) Let \({\bar{x}}\in X\). Suppose that \(G({\bar{x}})\) is closed and \(\varphi _k({\bar{x}}, \cdot )\), \(k\in K\), are Lipschitz continuous around \({\bar{y}}\in Y\). If \({\bar{y}}\) is a local optimal solution of \((P_{{\bar{x}}})\), then

$$\begin{aligned} 0\in \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \partial _y \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} +N({\bar{y}}; G({\bar{x}})). \end{aligned}$$
(3.13)

Proof

We will use the proof scheme of Theorem 3.1 to prove this theorem.

Let \({\bar{x}}\in X\) be given and suppose that \({\bar{y}}\) is a local optimal solution of \((P_{{\bar{x}}})\). We first consider the function \(\psi := \varphi ({\bar{x}}, \cdot )+\delta _{G({\bar{x}})}(\cdot )\). By the same arguments as before, we arrive at (3.6). Since \(\varphi _k({\bar{x}}, \cdot ), k\in K,\) are Lipschitz continuous around \({\bar{y}}\), it follows that each \(\varphi _k({\bar{x}}, \cdot )\) is SNEC at \({\bar{y}}\) (see [20, p. 121]) and \(\varphi ({\bar{x}}, \cdot )\) is Lipschitz continuous around \({\bar{y}}\). Moreover, as \(G({\bar{x}})\) is closed, the function \(\delta _{G({\bar{x}})}\) is l.s.c. on Y. Thus, by applying the sum rule for the Mordukhovich subdifferential [20, Theorem 3.36] one gets

$$\begin{aligned} 0 \in \partial {\psi }({\bar{y}})\subset \partial _y\varphi ({\bar{x}}, {\bar{y}}) +\partial \delta _{G({\bar{x}})}({\bar{y}}). \end{aligned}$$

On one hand, we have \( \partial \delta _{G({\bar{x}})}({\bar{y}})= N({\bar{y}}; G({\bar{x}}))\). On the other hand, \(\varphi _k({\bar{x}},\cdot )\) are l.s.c. around \( {\bar{y}}\) for \(k \in I({\bar{x}}, {\bar{y}})\) and u.s.c. at \( {\bar{y}}\) for \(k \notin I({\bar{x}}, {\bar{y}})\) due to the Lipschitz continuity of \(\varphi _k({\bar{x}},\cdot ), k\in K\). By Theorem 3.46 (ii) in [20], one has

$$\begin{aligned} \partial _y\varphi ({\bar{x}}, {\bar{y}})= \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \partial _y \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} . \end{aligned}$$

Therefore, we obtain (3.13). \(\square \)

Remark 3.1

In [6], the authors study necessary optimality conditions for minimax programming problems where the constraint set is given by finitely many inequalities and equalities, under Lipschitz continuity and SNC assumptions. Here, we consider parametric minimax programming problems and obtain two theorems on necessary optimality conditions for the problem in question. We use the metric qualification condition in the first one, and the Lipschitz continuity assumption in the second one. The relationship between these two conditions is still unclear. Interestingly, we have found a simple example in which Theorem 3.1 can be applied, but Theorem 3.2 cannot (see Example 3.1 below).

Remark 3.2

In the Asplund space setting, Bao et al. [3] have considered minimax optimization problems with inequality, equality, and geometric constraints in the setting of nondifferentiable and non-Lipschitz functions. More precisely, necessary optimality conditions in terms of upper and/or lower subdifferentials of both cost and constraint functions, as well as necessary optimality conditions in the fuzzy form, have been given. The qualification conditions used in that paper are the SNC properties and the normal qualification condition

$$\begin{aligned} \partial ^\infty _y \varphi ({\bar{x}}, {\bar{y}}) \cap \big (-N({\bar{y}}; G({\bar{x}}))\big )=\{0\}. \end{aligned}$$
(3.14)

It is worth noting that the metric qualification condition (MQC) is strictly weaker than (3.14), as proved in [27, Proposition 3.7].

We now establish a sufficient condition for optimal solutions of (\(P_x\)) by using the convexity in the next theorem.

Theorem 3.3

(Sufficient optimality condition) Assume that \({\bar{x}}\in X\), \({\bar{y}}\in G({\bar{x}})\), and \(({\bar{x}}, {\bar{y}})\) satisfies condition (3.5). If \(\varphi _k({\bar{x}}, \cdot )\), \(k\in K\), are convex functions and \(G({\bar{x}})\) is a convex set, then \({\bar{y}}\) is a global optimal solution of \((P_{{\bar{x}}})\), that is, \({\bar{y}}\in M({\bar{x}})\).

Proof

Since \(({\bar{x}}, {\bar{y}})\) satisfies condition (3.5), there exist \((\lambda _1, \ldots , \lambda _m)\in {\varLambda }({\bar{x}}, {\bar{y}})\) and \(y_k^*\in \lambda _k\circ \partial _y\varphi _k ({\bar{x}}, {\bar{y}})\), \(k\in I(\bar{ x}, \bar{y})\), such that

$$\begin{aligned} -\sum _{k\in I({\bar{x}}, {\bar{y}})} y^*_k\in N({\bar{y}}; G({\bar{x}})). \end{aligned}$$
(3.15)

Suppose on the contrary that \({\bar{y}}\) is not a global optimal solution of \((P_{{\bar{x}}})\). Then, there exists \({\hat{y}}\in G({\bar{x}})\) such that

$$\begin{aligned} \varphi (\bar{ x}, \bar{y})>\varphi ({\bar{x}}, {\hat{y}}). \end{aligned}$$
(3.16)

By the convexity of \(G({\bar{x}})\) and (3.15), we have

$$\begin{aligned} \left\langle \sum _{k\in I({\bar{x}}, {\bar{y}})} y^*_k, {\hat{y}}-{\bar{y}}\right\rangle \ge 0. \end{aligned}$$
(3.17)

For each \(k\in I(\bar{x}, \bar{y})\), we consider the following two cases.

Case 1. \(\lambda _k>0\). Since \(y_k^*\in \lambda _k\circ \partial _y\varphi _k ({\bar{x}}, {\bar{y}})=\lambda _k\partial _y\varphi _k ({\bar{x}}, {\bar{y}})\), one has

$$\begin{aligned} \frac{y^*_k}{\lambda _k}\in \partial _y\varphi _k ({\bar{x}}, {\bar{y}}). \end{aligned}$$

By the convexity of \(\varphi _k({\bar{x}}, \cdot )\), we have

$$\begin{aligned} \varphi _k({\bar{x}}, {\hat{y}})-\varphi _k(\bar{ x}, \bar{y})\ge \left\langle \frac{y^*_k}{\lambda _k}, {\hat{y}}-\bar{y}\right\rangle , \end{aligned}$$

or, equivalently,

$$\begin{aligned} \lambda _k[\varphi _k({\bar{x}}, {\hat{y}})-\varphi _k(\bar{ x}, \bar{y})]\ge \left\langle y^*_k, {\hat{y}}-\bar{y}\right\rangle . \end{aligned}$$
(3.18)

Case 2. \(\lambda _k=0\). In this case, \(y^*_k\in \partial ^{\infty }_y\varphi _k (\bar{ x}, \bar{y})\). By (3.16), \({\hat{y}}\in \text{ dom }\,\varphi ({\bar{x}}, \cdot )\). Furthermore, by [2, Proposition 4.2], one has

$$\begin{aligned} \partial ^{\infty }_y\varphi _k (\bar{ x}, \bar{y})=N({\bar{y}}; \text{ dom }\,\varphi ({\bar{x}}, \cdot )). \end{aligned}$$

Hence, we have

$$\begin{aligned} \langle y^*_k, {{\hat{y}}}-{\bar{y}}\rangle \le 0. \end{aligned}$$
(3.19)

Combining (3.17) and (3.19), we arrive at

$$\begin{aligned} \left\langle \sum _{k\in I({\bar{x}}, {\bar{y}}), \lambda _k>0} y^*_k, {\hat{y}}-{\bar{y}}\right\rangle \ge -\left\langle \sum _{k\in I({\bar{x}}, {\bar{y}}), \lambda _k=0} y^*_k, {\hat{y}}-{\bar{y}}\right\rangle \ge 0. \end{aligned}$$

This and (3.18) imply that

$$\begin{aligned} \sum _{k\in I({\bar{x}}, {\bar{y}}), \lambda _k>0}\lambda _k[\varphi _k({\bar{x}}, {\hat{y}})-\varphi _k(\bar{ x}, \bar{y})]\ge 0, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \sum _{k\in K}\lambda _k[\varphi _k({\bar{x}}, {\hat{y}})-\varphi _k(\bar{ x}, \bar{y})]\ge 0. \end{aligned}$$
(3.20)

On the other hand, since \((\lambda _1, \ldots , \lambda _m)\in {\varLambda }({\bar{x}}, {\bar{y}})\), we have

$$\begin{aligned} \sum _{k\in K}\lambda _k\varphi _k(\bar{ x}, \bar{y})=\varphi ({\bar{x}}, {\bar{y}}) \ \ \text {and}\ \ \varphi ({\bar{x}}, {{\hat{y}}})\ge \sum _{k\in K}\lambda _k\varphi _k(\bar{ x}, {\hat{y}}). \end{aligned}$$

Now, taking (3.20) into account, we arrive at

$$\begin{aligned} \varphi ({\bar{x}}, {{\hat{y}}})\ge \varphi ({\bar{x}}, {\bar{y}}), \end{aligned}$$

which contradicts (3.16). The proof is complete. \(\square \)

Remark 3.3

Sufficient optimality conditions for minimax programming problems are investigated in [6] as well, by imposing L-invexity-infineness assumptions. Although the class of L-invex-infine functions contains some nonconvex functions, this property is quite difficult to check.

Let us see an example to illustrate the results of this subsection.

Example 3.1

Let \(X=Y=\mathbb {R}\) and equip \(X\times Y\) with the norm \(\Vert (x,y)\Vert :=|x|+|y|\) for any \((x,y)\in \mathbb {R}^2\). Let \(C_1=\{0\} \times (-\infty , 0]\) and \(C_2=\{0\} \times [0,+\infty )\) be two closed convex subsets of \(X \times Y\). The corresponding indicator functions associated with \(C_1\) and \(C_2\) are

$$\begin{aligned} \varphi _1 (x,y) := \delta _{C_1} (x,y) ={\left\{ \begin{array}{ll} 0 &{} \text{ if } \ (x,y)\in C_1,\\ +\infty &{} \text{ otherwise }, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varphi _2 (x,y) := \delta _{C_2} (x,y) ={\left\{ \begin{array}{ll} 0 &{} \text{ if } \ (x,y)\in C_2,\\ +\infty &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

Consider the problem (\(P_x\)) with

$$\begin{aligned} \varphi (x,y)&=\max \{\varphi _1(x,y), \varphi _2(x,y)\} = {\left\{ \begin{array}{ll} 0 &{} \text{ if } \ x=y=0,\\ +\infty &{} \text{ otherwise }, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} G(x)= {\left\{ \begin{array}{ll} \{y\in \mathbb {R} \mid y \ge x-2 \} &{} \text{ if } \ x \ge 0,\\ \emptyset &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

Let \({\bar{x}}={\bar{y}}=0\). Clearly, \(\varphi _k({\bar{x}}, \cdot )\), \(k=1,2\), are l.s.c., and \(G({\bar{x}})\) is a closed set. It is worth emphasizing that \(\varphi _k({\bar{x}},\cdot )\), \(k=1,2\), are not Lipschitz continuous around \({\bar{y}}\), so Theorem 3.2 is not applicable. We now check the assumptions of Theorem 3.1.

One has \(\text{ epi }\,\varphi ({\bar{x}},\cdot )=\{0\}\times [0, +\infty )\) and \(G({\bar{x}})\times \mathbb {R}=[-2, +\infty ) \times \mathbb {R}\). Then, \(\text{ epi }\,\varphi ({\bar{x}},\cdot ) \cap (G({\bar{x}})\times \mathbb {R})= \text{ epi }\,\varphi ({\bar{x}},\cdot ) \), and hence, for any \((y, \alpha )\in \mathbb {R}\times \mathbb {R}\), we have

$$\begin{aligned} d\big ((y, \alpha ), \text{ epi }\,\varphi ({\bar{x}},\cdot ) \cap (G({\bar{x}})\times \mathbb {R})\big ) \le d\big ((y, \alpha ), \text{ epi }\,\varphi ({\bar{x}},\cdot ) \big ) + d\big ((y, \alpha ), G({\bar{x}}) \times \mathbb {R} \big ). \end{aligned}$$

In other words, the sets \(\text{ epi }\,\varphi ({\bar{x}},\cdot )\) and \(G({\bar{x}})\times \mathbb {R}\) satisfy the metric qualification condition (MQC) at (0, 0) with \(\delta \) arbitrary and \(\gamma =1\). Similarly, it is not difficult to show that \(\text{ epi }\,\varphi _1({\bar{x}},\cdot )\) and \(\text{ epi }\,\varphi _2({\bar{x}},\cdot )\) satisfy the metric qualification condition (MQC) at (0, 0) with \(\delta \) arbitrary and \(\gamma =1\). Moreover, \(\varphi _k({\bar{x}},\cdot )\), \(k=1,2\), are convex functions and G is convex. Then, the necessary optimality condition in Theorem 3.1 is also sufficient. By a simple computation, we get \(\partial _y \varphi (0,0)= \partial ^\infty _y \varphi (0,0)=\mathbb {R}\) and \(N({\bar{y}}; G({\bar{x}})) =\{0\}.\) Consequently, \({\bar{y}}=0\) is a global optimal solution of \((P_{{\bar{x}}})\).
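The distance estimates claimed above can also be checked numerically. The following Python sketch is our own verification script: the \(\ell ^1\) distances to the sets involved are computed by hand (each set is a product of intervals, so the distance splits into one-dimensional pieces), and (MQC) with \(\gamma =1\) is then tested at random sample points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-computed l1 (sum-norm) distances to the sets of Example 3.1.
def d_epi_phi(y, a):    # epi phi(xbar,.) = {0} x [0, +inf)
    return abs(y) + max(-a, 0.0)

def d_GxR(y, a):        # G(xbar) x R = [-2, +inf) x R
    return max(-2.0 - y, 0.0)

def d_epi_phi1(y, a):   # epi phi_1(xbar,.) = (-inf, 0] x [0, +inf)
    return max(y, 0.0) + max(-a, 0.0)

def d_epi_phi2(y, a):   # epi phi_2(xbar,.) = [0, +inf) x [0, +inf)
    return max(-y, 0.0) + max(-a, 0.0)

for _ in range(100_000):
    y, a = rng.uniform(-10.0, 10.0, size=2)
    # (MQC) for epi phi(xbar,.) and G(xbar) x R; their intersection
    # equals epi phi(xbar,.) itself, so gamma = 1 works trivially:
    assert d_epi_phi(y, a) <= d_epi_phi(y, a) + d_GxR(y, a)
    # (MQC) for epi phi_1(xbar,.) and epi phi_2(xbar,.); their
    # intersection is again {0} x [0, +inf):
    assert d_epi_phi(y, a) <= d_epi_phi1(y, a) + d_epi_phi2(y, a)

print("(MQC) with gamma = 1 verified at all sampled points")
```

Of course, the sampling does not replace the elementary proof: both inequalities hold identically, since \(|y|=\max \{y,0\}+\max \{-y,0\}\).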

3.2 Applications to multiobjective optimization problems

This section is devoted to applying results on optimality conditions of the minimax programming problem to a multiobjective optimization problem. More precisely, we employ the necessary as well as sufficient optimality conditions obtained for the minimax programming problem in the previous subsection to derive the corresponding ones for a multiobjective optimization problem.

Consider the following parametric multiobjective optimization problem:

$$\begin{aligned} \mathrm{Min}_{\,\mathbb {R}^m_+}\big \{ \big (\varphi _1(x,y), \ldots , \varphi _m(x,y)\big ) \mid y\in G(x) \big \} \end{aligned}$$
(\(MOP_x\))

depending on the parameter x, where \(\mathbb {R}^m_+\) is the nonnegative orthant of \(\mathbb {R}^m\).

Definition 3.1

Let \(x\in X\) and \({\bar{y}}\in G(x)\). One says that \({\bar{y}}\) is a weakly Pareto solution of problem (\(MOP_x\)) if there is no \(y\in G(x)\) such that

$$\begin{aligned} \varphi _k(x,y)<\varphi _k(x,{\bar{y}}), \ \ \forall k\in K. \end{aligned}$$

We will start with a necessary condition for weakly Pareto solutions of problem (\(MOP_x\)).

Theorem 3.4

Let \({\bar{x}}\in X\) and \({\bar{y}}\in G({\bar{x}})\). Assume that all assumptions of Theorem 3.1 are satisfied. If \({\bar{y}}\) is a weakly Pareto solution of (\(MOP_x\)), then there exists \((\alpha _1, \ldots , \alpha _m)\in \mathbb {R}^m_+\), \(\sum \limits _{k\in K} \alpha _k =1\) such that

$$\begin{aligned} 0\in \sum _{k\in K} \alpha _k\circ \partial _y\varphi _k({\bar{x}}, {\bar{y}})+N({\bar{y}}; G({\bar{x}})). \end{aligned}$$
(3.21)

Proof

For each \(y\in Y\) and \(k\in K\), we put

$$\begin{aligned} \widetilde{\varphi }_k({\bar{x}}, y):=\varphi _k({\bar{x}}, y)-\varphi _k({\bar{x}}, {\bar{y}}). \end{aligned}$$

We will show that \({\bar{y}}\) is a local optimal solution of the following minimax programming problem:

$$\begin{aligned} \min _{y\in G({\bar{x}})}\, \max _{k\in K} \widetilde{\varphi }_k({\bar{x}}, y). \end{aligned}$$
(3.22)

To see this, define the function \(\widetilde{\varphi }({\bar{x}}, \cdot ):=\max _{k\in K} \widetilde{\varphi }_k({\bar{x}}, \cdot )\). Since \({\bar{y}}\) is a weakly Pareto solution of (\(MOP_x\)), there is no \(y\in G({\bar{x}})\) such that

$$\begin{aligned} \varphi _k({\bar{x}}, y)<\varphi _k({\bar{x}}, {\bar{y}}) \ \ \forall k\in K. \end{aligned}$$

The latter means that, for each \(y\in G({\bar{x}})\), there exists \(k_0\in K\) such that

$$\begin{aligned} \varphi _{k_0}({\bar{x}}, y)\ge \varphi _{k_0}({\bar{x}}, {\bar{y}}), \end{aligned}$$

or, equivalently,

$$\begin{aligned} \widetilde{\varphi }({\bar{x}}, y)\ge \widetilde{\varphi }({\bar{x}}, {\bar{y}})=0 \end{aligned}$$

for all \(y\in G({\bar{x}})\). Hence, \({\bar{y}}\) is a local optimal solution of problem (3.22). We can now apply Theorem 3.1 to problem (3.22), where \(\widetilde{\varphi }_k\) and \(\widetilde{\varphi }\) play the corresponding roles of \(\varphi _k\) and \(\varphi \). Then we find \((\alpha _1, \ldots , \alpha _m)\in \mathbb {R}^m_+\) with \(\sum \limits _{k\in K}\alpha _k =1\) satisfying

$$\begin{aligned} 0\in \sum _{k\in K} \alpha _k\circ \partial _y\widetilde{\varphi }_k({\bar{x}}, {\bar{y}})+N({\bar{y}}; G({\bar{x}})). \end{aligned}$$

Clearly, \(\partial _y\widetilde{\varphi }_k({\bar{x}}, {\bar{y}})=\partial _y\varphi _k({\bar{x}}, {\bar{y}})\) for all \(k\in K\) and we therefore get (3.21). \(\square \)

The necessary condition for weakly Pareto solutions of problem (\(MOP_x\)) under the Lipschitz continuity is given in the next theorem.

Theorem 3.5

Let \({\bar{x}}\in X\) and \({\bar{y}}\in G({\bar{x}})\). Assume that all assumptions of Theorem 3.2 are satisfied. If \({\bar{y}}\) is a weakly Pareto solution of (\(MOP_x\)), then there exists \((\alpha _1, \ldots , \alpha _m)\in \mathbb {R}^m_+\), \(\sum \limits _{k\in K} \alpha _k =1\) such that

$$\begin{aligned} 0\in \sum _{k\in K} \alpha _k \partial _y\varphi _k({\bar{x}}, {\bar{y}})+N({\bar{y}}; G({\bar{x}})). \end{aligned}$$

Proof

Following the proof scheme in Theorem 3.4 and applying Theorem 3.2 instead of Theorem 3.1 we derive the assertion of the theorem. \(\square \)

We close this section with a sufficient condition for optimal solutions of problem (\(MOP_x\)) based on convexity.

Theorem 3.6

Suppose that \({\bar{x}}\in X\) and \({\bar{y}}\in G({\bar{x}})\). Assume further that \(\varphi _k({\bar{x}}, \cdot )\), \(k\in K\), are convex functions and G is a convex multifunction. If there exists \((\alpha _1, \ldots , \alpha _m)\in \mathbb {R}^m_+\), \(\sum \limits _{k\in K} \alpha _k=1\) satisfying condition (3.21), then \({\bar{y}}\) is a weakly Pareto solution of problem (\(MOP_x\)).

Proof

Similar to the proof of Theorem 3.4, we put

$$\begin{aligned} \widetilde{\varphi }_k({\bar{x}}, y):=\varphi _k({\bar{x}}, y)-\varphi _k({\bar{x}}, {\bar{y}}), \ \ k\in K,\, y\in Y \end{aligned}$$

and

$$\begin{aligned} \widetilde{\varphi }({\bar{x}}, \cdot ):=\max _{k\in K} \widetilde{\varphi }_k({\bar{x}}, \cdot ). \end{aligned}$$

Clearly, \(\widetilde{\varphi }_k({\bar{x}}, \cdot )\), \(k\in K\), are convex functions. Thus, we apply the sufficient optimality conditions in Theorem 3.3 to conclude that \({\bar{y}}\) is a global optimal solution of problem (3.22). This means that

$$\begin{aligned} \widetilde{\varphi }({\bar{x}}, y)\ge 0 \ \ \forall y\in G({\bar{x}}), \end{aligned}$$

or, equivalently,

$$\begin{aligned} \max _{k\in K}\{\varphi _k({\bar{x}}, y)-\varphi _k({\bar{x}}, {\bar{y}})\}\ge 0 \ \ \forall y\in G({\bar{x}}). \end{aligned}$$

Hence, there is no \(y\in G({\bar{x}})\) such that

$$\begin{aligned} \varphi _k({\bar{x}}, y)<\varphi _k({\bar{x}}, {\bar{y}}) \ \ \forall k\in K. \end{aligned}$$

Consequently, \({\bar{y}}\) is a weakly Pareto solution of problem (\(MOP_x\)). \(\square \)

4 Sensitivity analysis

In this section, we will present upper estimates for the Mordukhovich subdifferential of the optimal value function \(\mu \) of (\(P_x\)). These estimates are established via the Mordukhovich subdifferentials of \(\varphi _k\) and the Mordukhovich normal cone of \(\text{ gph }\,G.\)

Theorem 4.1

Consider the constrained problem (\(P_x\)). Suppose that G has closed graph, \(\varphi _k\), \(k\in I({\bar{x}}, {\bar{y}}),\) are l.s.c., the optimal value function \(\mu \) given by (2.1) is finite at \(\bar{x} \in \mathrm{{dom}}\, M\), and \({\bar{y}}\in M({\bar{x}})\). Suppose further that:

(i) The solution map M is \(\mu \)-inner semicontinuous at \(({\bar{x}}, {\bar{y}}) \in \text{ gph }\,M\);

(ii) \(\text{ epi }\,\bar{\varphi }\) and \({\text{ gph }\,} G \times \mathbb {R}\) satisfy the condition (MQC) at \(({\bar{x}}, {\bar{y}},\varphi ({\bar{x}}, {\bar{y}}))\), where \(\bar{\varphi }(x, y):=\max \limits _{k\in I({\bar{x}}, {\bar{y}})}\varphi _k(x,y)\);

(iii) \(\varphi _k\), \(k \notin I({\bar{x}}, {\bar{y}})\), are u.s.c. at \(({\bar{x}}, {\bar{y}})\) and the sets \(\text{ epi }\,\varphi _k\), \(k\in I({\bar{x}}, {\bar{y}})\), satisfy the condition (MQC) at \(({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}}))\).

Then, for every \(x^*\in \partial \mu (\bar{x})\), it is necessary that

$$\begin{aligned} \begin{aligned} (x^*,0)\in \bigcup&\left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \circ \partial \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} \\ {}&\quad \quad \quad +N(({\bar{x}}, {\bar{y}});\text{ gph }\,G). \end{aligned} \end{aligned}$$
(4.1)

Proof

Since the solution map M is \(\mu \)-inner semicontinuous at \(({\bar{x}},{\bar{y}})\), applying [20, Theorem 1.108] to the unconstrained problem with objective function \(\varphi + \delta _{\text{ gph }\,G}\) yields the inclusion

$$\begin{aligned} \partial \mu ({\bar{x}}) \subset \{ x^* \in X^* \mid (x^*,0) \in \partial (\varphi + \delta _{\text{ gph }\,\, G})({\bar{x}}, {\bar{y}})\}. \end{aligned}$$
(4.2)

We now prove

$$\begin{aligned} \partial (\varphi + \delta _{\text{ gph }\,\, G})({\bar{x}}, {\bar{y}})\subset \partial \varphi ({\bar{x}}, {\bar{y}}) + \partial \delta _{\text{ gph }\,\, G} ({\bar{x}}, {\bar{y}}) \end{aligned}$$
(4.3)

under the validity of condition (ii). Indeed, take any \((u^*,0)\in \partial (\varphi + \delta _{\text{ gph }\,G})({\bar{x}}, {\bar{y}})\); in view of (4.2), only elements of this form are needed. This means that

$$\begin{aligned} (u^*, 0,-1)\in N \bigg (\big ({\bar{x}}, {\bar{y}}, (\varphi +\delta _{\text{ gph }\,G})({\bar{x}}, {\bar{y}})\big ); \text{ epi }\,(\varphi +\delta _{\text{ gph }\,G}) \bigg ). \end{aligned}$$
(4.4)

As observed in the proof of Theorem 3.1, one has \( (\varphi +\delta _{\text{ gph }\,G})({\bar{x}}, {\bar{y}})= \varphi ({\bar{x}}, {\bar{y}})\) and \( \text{ epi }\,(\varphi +\delta _{\text{ gph }\,G})=\text{ epi }\,\varphi \cap (\text{ gph }\,G \times \mathbb {R}).\) Hence, (4.4) can be rewritten as

$$\begin{aligned} \begin{aligned} (u^*,0,-1)&\in N \big (\big ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})\big ); \text{ epi }\,\varphi \cap (\text{ gph }\,G \times \mathbb {R}) \big )\\ {}&=N \big (\big ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})\big ); \text{ epi }\,\bar{\varphi }\cap (\text{ gph }\,G \times \mathbb {R}) \big ). \end{aligned} \end{aligned}$$
(4.5)

Note that \(\text{ epi }\,\bar{\varphi }= \bigcap \limits _{k\in I({\bar{x}}, {\bar{y}})} \text{ epi }\,\varphi _k\). By conditions (ii), (iii) and Theorem 2.1, we have that

$$\begin{aligned} N \big (\big ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})\big ); \text{ epi }\,\bar{\varphi }\cap (\text{ gph }\,G\times \mathbb {R}) \big )&\subset N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\bar{\varphi })\\ {}&\quad +N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,G \times \mathbb {R})\\ {}&= N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi )\\ {}&\quad + N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,G \times \mathbb {R}). \end{aligned}$$

Combining this with (4.5) gives

$$\begin{aligned} (u^*,0,-1)\in N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi ) + N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,G \times \mathbb {R}). \end{aligned}$$

Thus, we can find a triple \((u^*_1,v_1^*,\alpha _1)\in N \big ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi \big ) \) and a triple \( (u^*_2,v_2^*,\alpha _2)\in N \big ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,G \times \mathbb {R} \big ) \) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} u^*=u_1^*+u^*_2;\\ 0=v_1^*+v_2^*;\\ -1=\alpha _1+\alpha _2. \end{array}\right. } \end{aligned}$$
(4.6)

As \(N \big ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ gph }\,G \times \mathbb {R} \big ) =N ( ({\bar{x}}, {\bar{y}}); \text{ gph }\,G) \times N(\varphi ({\bar{x}}, {\bar{y}}); \mathbb {R})\) and \(N(\varphi ({\bar{x}}, {\bar{y}}); \mathbb {R})=\{0\}\), it follows that \(\alpha _2=0\). Hence, \(\alpha _1=-1\) by the last equation of (4.6). Consequently, \((u_1^*,v_1^*, -1) \in N ( ({\bar{x}}, {\bar{y}}, \varphi ({\bar{x}}, {\bar{y}})); \text{ epi }\,\varphi )\) and \( (u_2^*, -v_1^*) \in N ( ({\bar{x}}, {\bar{y}}); \text{ gph }\,G).\) In other words, \( (u_1^*, v_1^*) \in \partial \varphi ({\bar{x}}, {\bar{y}})\) and \((u_2^*, -v_1^*) \in \partial \delta _{\text{ gph }\,G} ({\bar{x}}, {\bar{y}}).\) Combining the latter with the equations of (4.6), one gets

$$\begin{aligned} (u^*, 0) \in \partial \varphi ({\bar{x}}, {\bar{y}}) + \partial \delta _{\text{ gph }\,G} ({\bar{x}}, {\bar{y}}), \end{aligned}$$

which proves (4.3).

In view of (4.3), we can rewrite (4.2) as

$$\begin{aligned} \partial \mu ({\bar{x}}) \subset \{ x^* \in X^* \mid (x^*,0) \in \partial \varphi ({\bar{x}}, {\bar{y}}) + \partial \delta _{\text{ gph }\,\, G} ({\bar{x}}, {\bar{y}})\}. \end{aligned}$$

On the one hand, under condition (iii), we get

$$\begin{aligned} \partial \varphi ({\bar{x}}, {\bar{y}})= \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \circ \partial \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} \end{aligned}$$

from Proposition 3.3. On the other hand, \(\partial \delta _{\text{ gph }\,\, G} ({\bar{x}}, {\bar{y}})= N(( {\bar{x}}, {\bar{y}}); \text{ gph }\,G).\) Therefore, we arrive at (4.1). \(\square \)
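To see estimate (4.1) at work, consider a toy instance of our own (not part of the results above): \(X=Y=\mathbb {R}\), \(\varphi _1(x,y)=y\), \(\varphi _2(x,y)=-y\), \(G(x)=[x,+\infty )\), and \({\bar{x}}=1\). Then \(\mu (x)=x\) near \({\bar{x}}\), \(M({\bar{x}})=\{1\}\), only \(\varphi _1\) is active at \(({\bar{x}}, {\bar{y}})=(1,1)\), and \(N((1,1); \text{ gph }\,G)=\{t(1,-1): t\ge 0\}\). Estimate (4.1) then reads

$$\begin{aligned} (x^*,0)\in \{(0,1)\}+\{t(1,-1): t\ge 0\}, \end{aligned}$$

which forces \(t=1\) and \(x^*=1\), in agreement with \(\partial \mu ({\bar{x}})=\{\mu '({\bar{x}})\}=\{1\}\). Here the normal-cone term carries the dependence of the constraint on the parameter.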

Theorem 4.2

Consider the constrained problem (\(P_x\)). Let the optimal value function \(\mu \) given by (2.1) be finite at \({\bar{x}}\in \text{ dom }\,M\) and let \({\bar{y}} \in M({\bar{x}})\). Suppose that G has closed graph, \(\varphi _k\) are Lipschitz continuous around \(({\bar{x}}, {\bar{y}})\) for \(k\in K\), and the solution map M is \(\mu \)-inner semicontinuous at \(({\bar{x}}, {\bar{y}}) \in \text{ gph }\,M\). Then \(x^*\in \partial \mu (\bar{x})\) only if

$$\begin{aligned} \begin{aligned} (x^*,0)\in&\bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \partial \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} \\ {}&\quad \quad \quad +N(({\bar{x}}, {\bar{y}});\text{ gph }\,G). \end{aligned} \end{aligned}$$

Proof

In the same manner as in the proof of Theorem 4.1, we also obtain (4.2). Since \(\varphi _k\) are Lipschitz continuous around \(({\bar{x}},{\bar{y}})\), so is \(\varphi \). It follows that \(\varphi \) is SNEC at \(({\bar{x}}, {\bar{y}})\) (see [20, p. 121]). In addition, \(\delta _{\text{ gph }\,G}\) is l.s.c. because of the closedness of \(\text{ gph }\,G\). So, by using the sum rule for the Mordukhovich subdifferential [20, Theorem 3.36], one gets

$$\begin{aligned} \partial (\varphi + \delta _{\text{ gph }\,\, G})({\bar{x}}, {\bar{y}}) \subset \partial \varphi ({\bar{x}}, {\bar{y}}) + \partial \delta _{\text{ gph }\,\, G} ({\bar{x}}, {\bar{y}}). \end{aligned}$$

Consequently,

$$\begin{aligned} \partial \mu ({\bar{x}}) \subset \{ x^* \in X^* \mid (x^*,0) \in \partial \varphi ({\bar{x}}, {\bar{y}}) + \partial \delta _{\text{ gph }\,\, G} ({\bar{x}}, {\bar{y}})\}. \end{aligned}$$

We note that \( \partial \delta _{\text{ gph }\,G} ({\bar{x}}, {\bar{y}})= N(( {\bar{x}}, {\bar{y}}); \text{ gph }\,G)\). Meanwhile, the Lipschitz continuity of \(\varphi _k\), \(k\in K\), around \(({\bar{x}}, {\bar{y}})\) implies that \(\varphi _k\) are l.s.c. around \(({\bar{x}}, {\bar{y}})\) for \(k\in I({\bar{x}}, {\bar{y}})\) and u.s.c. at \(({\bar{x}}, {\bar{y}})\) for \(k\notin I({\bar{x}}, {\bar{y}})\). Thus, by applying Theorem 3.46 (ii) in [20], we obtain

$$\begin{aligned} \partial \varphi ({\bar{x}}, {\bar{y}}) = \bigcup \left\{ \sum \limits _{k\in I({\bar{x}},{\bar{y}})} \lambda _k \partial \varphi _k ({\bar{x}}, {\bar{y}}), (\lambda _1, \ldots ,\lambda _m)\in {\varLambda }({\bar{x}},{\bar{y}})\right\} . \end{aligned}$$

Therefore, the proof is complete. \(\square \)
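For a smooth toy illustration of the maximum rule just used (again our own example, not drawn from the problem data), take \(\varphi _1(x,y)=x+y\) and \(\varphi _2(x,y)=x-y\), so that both functions are active at \(({\bar{x}}, 0)\). The formula yields

$$\begin{aligned} \partial \varphi ({\bar{x}}, 0)=\{\lambda _1(1,1)+\lambda _2(1,-1): \lambda _1, \lambda _2\ge 0,\ \lambda _1+\lambda _2=1\}=\{(1,s): |s|\le 1\}, \end{aligned}$$

which is exactly the subdifferential of \((x,y)\mapsto x+|y|\) at \(({\bar{x}}, 0)\).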

5 Concluding remarks

In this paper, optimality conditions and sensitivity analysis for parametric nonconvex minimax programming problems are investigated. Namely, we establish necessary optimality conditions by using the Mordukhovich subdifferential and derive upper estimates for the Mordukhovich subdifferential of the optimal value function \(\mu \). Both groups of results rely on upper estimates for the Mordukhovich subdifferential of the maximum function. The results on optimality conditions are then applied to parametric multiobjective optimization problems, and an example is given to illustrate them.