Abstract
In this paper, we develop sum and chain rules for the generalized contingent derivative of set-valued mappings. We then apply them to sensitivity analysis and optimality conditions for some particular optimization problems. Our results extend several recent ones in the literature.
1 Introduction and notation
The concept of derivative plays an important role in optimality conditions. One of the first and most popular derivatives of set-valued mappings is the contingent derivative; see [13]. In recent decades, many other kinds of generalized derivatives have been proposed, together with applications to optimality conditions and duality. Each of them is compatible with certain classes of problems, but not all. Recently, a higher-order generalized contingent derivative for set-valued mappings, inspired by the contingent derivative, was introduced and applied to optimality conditions in [15]. The advantage of this derivative is that the corresponding optimality conditions are established without assumptions of convexity and domination. Some properties of the generalized contingent derivative were also discussed there. However, its calculus rules have not been provided yet.
The above observation motivates us to study calculus rules for the generalized contingent derivative so that it can be employed in practice. Although these results differ from the calculus of the contingent derivative in the primal approach (see [13]) and of coderivatives in the dual approach (see [10, 11]), it turns out that the generalized contingent derivative enjoys some fundamental calculus rules. We then consider relationships between these rules and their applications to several topics in set-valued optimization. In detail, we apply sum rules to sensitivity analysis in parameterized optimization. Moreover, optimality conditions for weakly efficient solutions of some particular optimization problems are obtained by virtue of chain and sum rules.
The organization of the paper is as follows. The rest of this section is devoted to giving some preliminaries and notations needed for the paper. In Sect. 2, sum rules and chain rules for the generalized contingent derivative are established. We obtain in Sect. 3 applications of these rules to sensitivity analysis and optimality conditions in set-valued optimization.
Let X, Y be normed spaces, and let C be a closed pointed convex cone in Y. For \(A\subseteq X\), \(\hbox {int}A\) and \(\hbox {cl}A\) denote its interior and closure, respectively (resp., for short). For \(A\subseteq X\), we also recall the following cone
The domain, image, and graph of a given set-valued mapping \(F: X\rightarrow 2^Y\) are denoted by, resp,
Let \(\mathrm{int}C\ne \emptyset \). A point \((x_0,y_0)\in \mathrm{gr}F\) is said to be a weakly efficient solution of F if \((F(X)-y_0)\cap -\mathrm{int}C=\emptyset \).
Definition 1.1
([15]) The mth-order generalized contingent set of a subset K in X at \(x_0\in K\) with respect to \(u_1,\dots ,u_{m-1}\in X\) is defined by
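The displayed definition is missing from this version. Judging from the second-order expressions that appear in the proofs below (memberships of the form \(x+h_n(t_nu_1+t_n^2u_n)\in K\)), a plausible reconstruction of the definition from [15] is:

```latex
G\text{-}T^m_K(x_0,u_1,\dots,u_{m-1})
  = \bigl\{\, u \in X :\ \exists\, t_n \to 0^+,\ \exists\, h_n > 0,\ \exists\, u_n \to u,\ \forall n,\
    x_0 + h_n\bigl(t_n u_1 + t_n^2 u_2 + \cdots + t_n^{m-1} u_{m-1} + t_n^m u_n\bigr) \in K \,\bigr\}.
```

For \(m=2\) this matches the membership relations used in the proofs of Propositions 2.1 and 2.5; the general-\(m\) pattern is an extrapolation and should be checked against [15].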
Definition 1.2
([15]) The mth-order generalized contingent derivative of F at \((x_0,y_0)\) with respect to \((u_i,v_i)\) is the set-valued mapping \(G{\text{- }}D^mF(x_0,y_0,u_1,v_1,\dots ,\)\(u_{m-1},v_{m-1}): X\rightarrow 2^Y\) defined by
The above definitions can be expressed equivalently by
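The equivalent sequential expression is also missing here; for \(m=2\), the characterization consistent with the sequences constructed in the proofs of Sect. 2 is:

```latex
v \in G\text{-}D^{2}F(x_0,y_0,u_1,v_1)(u)
\;\Longleftrightarrow\;
\exists\, t_n \to 0^+,\ \exists\, h_n > 0,\ \exists\, (u_n,v_n) \to (u,v):\
y_0 + h_n\bigl(t_n v_1 + t_n^2 v_n\bigr)
\in F\bigl(x_0 + h_n(t_n u_1 + t_n^2 u_n)\bigr)\ \ \forall n.
```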
When \(m=1\), Definitions 1.1 and 1.2 coincide with the radial cone and the radial derivative introduced in [14], resp. Note that the radial cone carries global information; hence, the corresponding radial derivative is considered suitable for global optimal solutions. To extract more information in optimality conditions, a kind of higher-order radial derivative was proposed in [1, 2] as follows. The mth-order radial derivative of F at \((x_0,y_0)\) with respect to \((u_i,v_i)\), \(i=1,\dots ,m-1\), is
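The display defining this derivative is missing; based on [1, 2], its sequential form can be sketched as follows (note that \(t_n>0\) is not required to tend to \(0\), which is what gives the derivative its global character — this is a reconstruction, not the authors' exact formula):

```latex
D^m_RF(x_0,y_0,u_1,v_1,\dots,u_{m-1},v_{m-1})(u)
  = \bigl\{\, v \in Y :\ \exists\, t_n > 0,\ \exists\, (u_n,v_n) \to (u,v),\ \forall n,\
    y_0 + t_n v_1 + \cdots + t_n^{m-1} v_{m-1} + t_n^m v_n
    \in F\bigl(x_0 + t_n u_1 + \cdots + t_n^{m-1} u_{m-1} + t_n^m u_n\bigr) \,\bigr\}.
```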
Since the generalized contingent derivative has a global character, it can be considered as another kind of higher-order radial derivative. When \(m=1\), \(G{\text{- }}D^1F(x_0,y_0)(u)\) and \(D^1_RF(x_0,y_0)(u)\) coincide, but for higher orders these derivatives are different. One can check that \(G{\text{- }}D^mF(x_0,y_0,u_1,v_1,\dots ,u_{m-1},v_{m-1})(u)\) is empty if one of the conditions \((u_1,v_1)\in \mathrm{gr}G{\text{- }}D^1F(x_0,y_0),\dots , (u_{m-1},v_{m-1})\in \mathrm{gr}G{\text{- }}D^{m-1}F(x_0,y_0,u_1,v_1,\dots ,u_{m-2},v_{m-2})\) is violated. However, this property is not valid for \(D^m_RF(x_0,y_0,u_1,v_1,\dots ,u_{m-1},v_{m-1})(u)\). Indeed, the simple example with \(X=Y={\mathbb {R}}\) and \(F:X\rightarrow 2^Y\) defined by \(F(x)=\{0\}\) for \(x\in \{0,2\}\) shows that \((2,-2)\not \in \mathrm{gr}D^1_RF((0,0)) (=\mathrm{gr}G{\text{- }}D^1F((0,0)))\) and \(G{\text{- }}D^2F((0,0),(2,-2))(0)=\emptyset \), while \(2\in D^2_RF((0,0),(2,-2))(0)\).
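Assuming the usual sequential form of the radial derivative from [1, 2] (with \(t_n>0\) not forced to tend to \(0\)), the last claim of this example can be verified directly:

```latex
% gr F = \{(0,0),(2,0)\}; take the constant sequences
t_n = 1,\qquad u_n = 0,\qquad v_n = 2.
% Then
x_0 + t_n u_1 + t_n^2 u_n = 0 + 2 + 0 = 2,\qquad
y_0 + t_n v_1 + t_n^2 v_n = 0 - 2 + 2 = 0 \in F(2),
```

so \(2\in D^2_RF((0,0),(2,-2))(0)\), even though \((2,-2)\notin \mathrm{gr}D^1_RF((0,0))\); the choice \(t_n=1\) is unavailable for \(G{\text{- }}D^2F\), where \(t_n\rightarrow 0^+\) is required.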
With the above remark, Corollary 4.1 in [15] should be stated for global weakly efficient solutions instead of local ones. On the other hand, from the proof of Theorem 4.1 in [15], we can see that Corollary 4.1, which was deduced from Theorem 4.1, may be invalid for local weakly efficient solutions. Indeed, in the last part of the proof of Theorem 4.1, the authors showed the existence of a sequence \(\{{\overline{x}}_n\}\) in K for their conclusions, but this sequence may not be contained in a given neighborhood U (\(U\varsubsetneq \mathrm{dom}F\)) of \(x_0\).
To illustrate the above assertion, we consider the following example.
Example 1.3
Suppose that \(X=Y={\mathbb {R}}\), \(C={\mathbb {R}}_+\), and \(F:X\rightarrow Y\) is defined by
Let \((x_0,y_0)=(0,0)\). We can see that \((x_0,y_0)\) is a local weakly efficient solution of F with respect to the neighborhood \((-1,1)\) of \(x_0\). However, the conclusion of Corollary 4.1 in [15] does not hold since \(-1\in G{\text{- }}D^1F(x_0,y_0)(1)\). The reason is that \((x_0,y_0)\) is not a global weakly efficient solution.
Several examples and other properties of the higher-order generalized contingent derivative were discussed in [15].
2 Calculus rules
The generalized contingent derivative has been proved to be a useful concept for establishing optimality conditions without convexity assumptions. However, its calculus rules have not been provided yet. In this section, we develop chain and sum rules for this derivative. Without loss of generality, we consider only second-order generalized contingent derivatives.
Let \(G:X\rightarrow 2^Y\) and \(F:Y\rightarrow 2^Z\). The following result gives us an inclusion for the chain rule of \(F \circ G\), where \( (F \circ G)(x): = \bigcup \nolimits _{y \in G(x)} {F(y)}\).
Proposition 2.1
Let \((x, z) \in \mathrm{gr}(F \circ G)\), \(y \in R(x, z)\), where \(R(x,z):=G(x)\cap F^{-1}(z)\), and \((u_1,v_1,w_1)\in X\times Y\times Z\). Suppose that, for \((u,w)\in X\times Z\),
Then,
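The displayed inclusion is missing in this version; since the proof below derives \(w\in G{\text{- }}D^{2}(F \circ G)(x, z,u_1,w_1)(u)\) from \(w\in G{\text{- }}D^{2}F(y, z,v_1,w_1)[G{\text{- }}D^2G(x, y,u_1,v_1)(u)]\), the conclusion should read:

```latex
G\text{-}D^{2}F(y,z,v_1,w_1)\bigl[G\text{-}D^{2}G(x,y,u_1,v_1)(u)\bigr]
\subseteq G\text{-}D^{2}(F\circ G)(x,z,u_1,w_1)(u).
```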
Proof
Let \(w\in G{\text{- }}D^{2}F(y, z,v_1,w_1)[G{\text{- }}D^2G(x, y,u_1,v_1)(u)]\), i.e., there exists \({\overline{y}}\in G{\text{- }}D^2G(x, y,u_1,v_1)(u)\) such that \({\overline{y}}\in G{\text{- }}D^{2}F^{-1}(z,y,w_1,v_1)(w)\). It follows from (1) that \({\overline{y}}\in G{\text{- }}{D}^{2}R((x,z), y,(u_1,w_1),v_1)(u,w)\), i.e., there exist \(t_n\rightarrow 0^+\), \(h_n >0\), \((u_n, {\overline{y}}_n,w_n)\rightarrow (u,{\overline{y}},w)\) with
which implies that \(w\in G{\text{- }}D^{2}(F \circ G)(x, z,u_1,w_1)(u)\). \(\square \)
The following example shows that the assumption (1) in Proposition 2.1 is essential.
Example 2.2
Let \(X=Y=Z={{\mathbb {R}}}\), and \(F:Y\rightarrow 2^Z,\; G: X\rightarrow 2^Y\) be defined by
Then,
Direct calculations yield
Thus, \(G{\text{- }}{D}^1F(0,1)[G{\text{- }}{D}^1G(0,0)(1)]=\{1/4,1/2\},\) and
The reason is that \(G{\text{- }}D^1G(0,0)(1)\cap G{\text{- }}D^1F^{-1}(1,0)(1/4)\not \subseteq G{\text{- }}D^1R((0,1),0)(1,1/4)\).
On the other hand, by similar calculations, we get
Thus,
and
Definition 2.3
Let \(F:X\rightarrow 2^Y\), \((x,y)\in \mathrm{gr}F\), and \((u_1,v_1)\in X\times Y\). The asymptotic second-order radial derivative of F at (x, y) in the direction \((u_1,v_1)\) is the set-valued mapping \(D^{p(2)}_RF(x,y,u_1,v_1):X\rightarrow 2^Y\) defined by

By taking \((u_1,v_1)=(0,0)\), \(D^{p(2)}_RF(x,y,0,0)\) reduces to the first-order radial derivative \(D_R^1F(x,y)\) (\(\equiv G{\text{- }}D^1F(x,y)\)).
Via the asymptotic second-order radial derivative, we get the converse inclusion for the chain rule in Proposition 2.1 as follows.
Proposition 2.4
Let \((x, z) \in \mathrm{gr}(F \circ G)\), \(y\in R(x,z)\), where R is defined as in Proposition 2.1, and \((u_1,v_1,w_1)\in X\times Y\times Z\). Suppose that Y is finite dimensional and
Then,
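The display labelled (3) is missing; since the proof below derives \(qv\in G{\text{- }}D^{2}G(x, y,u_1,v_1)(u)\) and \(w\in G{\text{- }}D^{2}F({y}, z,v_1,w_1)(qv)\) from \(w\in G{\text{- }}D^2(F \circ G)(x,z,u_1,w_1)(u)\), the inclusion should read:

```latex
G\text{-}D^{2}(F\circ G)(x,z,u_1,w_1)(u)
\subseteq G\text{-}D^{2}F(y,z,v_1,w_1)\bigl[G\text{-}D^{2}G(x,y,u_1,v_1)(u)\bigr].
```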
If, additionally, (1) holds for \(y\in R(x,z)\) with respect to \((u_1,v_1,w_1)\), then (3) becomes an equality.
Proof
Let \(w\in G{\text{- }}D^2(F \circ G)(x,z,u_1,w_1)(u)\), i.e., there exist \(t_n\rightarrow 0^+\), \(h_n >0\), and \((u_n,w_n)\rightarrow (u,w)\) such that
Thus, there exists \(y_{n}\in G(x+h_n(t_nu_1 + t_n^2 u_n))\) with \(z+h_n(t_nw_1+ t_n^2 w_n)\in F(y_n)\), i.e., \(y_n\in R(x+h_n(t_nu_1 + t_n^2 u_n), z+h_n(t_nw_1+ t_n^2 w_n))\). By setting
it is easy to see that
and \(v_n\) has a subsequence converging to some v with \(||v||=1\). Without loss of generality, we assume \(v_n\rightarrow v\).
If \(\dfrac{t_n}{||k_n||}\rightarrow 0^+\), it follows from (4) that
With \(l_n:=t_nh_n\), \(s_n:=||k_n||\), \({\overline{u}}_n:=\dfrac{t_n}{||k_n||}u_n\), and \({\overline{w}}_n:=\dfrac{t_n}{||k_n||}w_n\), we have \({\overline{u}}_n\rightarrow 0\), \({\overline{w}}_n\rightarrow 0\), and
i.e., \(v\in D^{p(2)}_R R((x,z),y,(u_1,w_1),v_1)(0,0)\), which contradicts (2). Thus, we may assume that \(\dfrac{||k_n||}{t_n}\) tends to some \(q \ge 0\). From (4), one gets
which implies that \(qv\in G{\text{- }}D^{2}G(x, y,u_1,v_1)(u)\) and \(w\in G{\text{- }}D^{2}F({y}, z,v_1,w_1)(qv)\). Hence, (3) is fulfilled. If, additionally, (1) holds, then it follows from Proposition 2.1 that (3) becomes an equality. \(\square \)
For Example 2.2, we can check that all assumptions of Proposition 2.4 are fulfilled and
In some existing results, the authors also used conditions similar to (2) to establish calculus rules for other kinds of generalized derivatives, such as coderivatives in [9], contingent epiderivatives in [7], and variational sets of type 1 in [3]. These results (in the form of equalities) were stated only for the first order, while the equality expression in this paper is established for the second order.
For sum rules of \(M, N : X\rightarrow 2^Y\), we have
Proposition 2.5
Let \((x, z) \in \mathrm{gr}(M+N)\), \(y \in S(x, z)\), where \(S(x,z) : = M(x)\cap (z -N(x))\), and \((u_1,v_1,w_1)\in X\times Y \times Y\). Suppose that, for \((u,v)\in X\times Y\),
Then,
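The displayed conclusion is missing; matching the proof below, which ends with \(v\in G{\text{- }}D^{2}(M +N)(x, z,u_1,v_1+w_1)(u)\), it should read:

```latex
G\text{-}D^{2}M(x,y,u_1,v_1)(u) + G\text{-}D^{2}N(x,z-y,u_1,w_1)(u)
\subseteq G\text{-}D^{2}(M+N)(x,z,u_1,v_1+w_1)(u).
```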
Proof
Let \(v\in G{\text{- }}D^{2}M(x, y,u_1,v_1)(u) + G{\text{- }}D^{2}N(x, z - y,u_1,w_1)(u)\), i.e., there exists \({\overline{y}}\in G{\text{- }}D^{2}M(x, y,u_1,v_1)(u)\) such that \({\overline{y}}\in v- G{\text{- }}D^{2}N(x, z - y,u_1,w_1)(u)\). By (5), we get that \({\overline{y}}\in G{\text{- }}D^{2}S((x, z), y,(u_1,v_1+w_1),v_1)(u,v)\), i.e., there exist \(t_n\rightarrow 0^+\), \(h_n>0\), and \((u_n,{\overline{y}}_n,v_n)\rightarrow (u,{\overline{y}},v)\) such that
so \(v\in G{\text{- }}D^{2}(M +N)(x, z,u_1,v_1+w_1)(u).\)\(\square \)
To illustrate Proposition 2.5, we consider the following examples.
Example 2.6
Let \(X=Y={{\mathbb {R}}}\) and \(M, N: X\rightarrow 2^Y\) be given by
Then,
Choose \(x=0\), \(z=1\), \(y=0\in S(x,z)\) and \(u=v=0\). Then,
Thus,
Direct calculations show that the conclusion of Proposition 2.5 holds since
Example 2.7
Let \(X={\mathbb {R}}\), and \(Y=l^2:=\left\{ (y_i)_{i\in {\mathbb {N}}}: y_i\in {\mathbb {R}},\ \sum \nolimits _{i = 1}^{ + \infty } {y_i^2 } < + \infty \right\} \). Consider \(F,G: X\rightarrow 2^Y\) defined by
and
Then,
and S(x, z) is the set \(\{(y_i)_{i\in {\mathbb {N}}}\in Y: x^2\le y_1\le z_1-x^4,\ y_i=0,\ \forall i\in {\mathbb {N}}{\setminus } \{1\}\}\) if \((x,z)\in \{(x,z)\in X\times Y: z_1 - x^4\ge x^2,\ z_i=0,\ \forall i\in {\mathbb {N}}{\setminus }\{1\}\}\), and is empty otherwise. Let \((x_0,y_0)=(0,0)\) and \((u_1,v_1,w_1)=(1,0,0)\). By direct calculations, we get
It is easy to check that
and
To get an equality for the sum rule, we propose the following result.
Proposition 2.8
Let \((x, z) \in \mathrm{gr}(M + N)\), \(y\in S(x,z)\), where S is defined as in Proposition 2.5, and \((u_1,v_1,w_1)\in X\times Y\times Y\). Suppose that Y is finite dimensional and
Then,
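The display labelled (7) is missing; since the proof below derives \(qv\in G{\text{- }}D^{2}M(x, y,u_1,v_1)(u)\) and \(w - qv \in G{\text{- }}D^2N(x,z-y,u_1,w_1)(u)\) from \(w\in G{\text{- }}D^2(M+N)(x,z,u_1,v_1+w_1)(u)\), the inclusion should read:

```latex
G\text{-}D^{2}(M+N)(x,z,u_1,v_1+w_1)(u)
\subseteq G\text{-}D^{2}M(x,y,u_1,v_1)(u) + G\text{-}D^{2}N(x,z-y,u_1,w_1)(u).
```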
If, additionally, (5) holds for \(y\in S(x,z)\) with respect to \((u_1,v_1,w_1)\), then (7) becomes an equality.
Proof
Let \(w\in G{\text{- }}D^2(M+N)(x,z,u_1,v_1+w_1)(u)\), i.e., there exist \(t_n\rightarrow 0^+\), \(h_n >0\), \((u_n,w_n)\rightarrow (u,w)\) with
which implies that there is a sequence \(\{y_n\}\) with \(y_n\in M(x+h_n(t_nu_1 + t_n^2u_n))\) and \(y_n\in z+h_n(t_n(v_1+w_1)+t_n^2w_n)-N(x+h_n(t_nu_1 + t_n^2u_n))\). Thus, \(y_n\in S(x+h_n(t_nu_1 + t_n^2u_n),z+h_n(t_n(v_1+w_1)+t_n^2w_n))\). By setting
then \(v_n\) (taking its subsequence if necessary) converges to v with \(||v||=1\) and
If \(||k_n||/t_n\rightarrow q\) for some \(q \ge 0\), then we get
It follows from the definition of S that \(qv\in G{\text{- }}D^{2}M(x, y,u_1,v_1)(u)\), \(w - qv \in G{\text{- }}D^2N(x,z-y,u_1,w_1)(u)\), and we are done. Hence, it is enough to prove that the sequence \(\{||k_n||/t_n\}\) (or its subsequence) is convergent. Suppose to the contrary, i.e., \(||k_n||/t_n \rightarrow +\infty \), it follows from (8) that
It is easy to see that \((t_n/||k_n||)u_n\rightarrow 0\) and \((t_n/||k_n||)w_n\rightarrow 0\), so \(v\in D_R^{p(2)}S((x,z),y,(u_1,v_1+w_1),v_1)(0,0)\), which contradicts (6).
The rest of the proof follows from Proposition 2.5. \(\square \)
3 Applications
3.1 Sensitivity analysis
Suppose that \(F: P\times X \rightarrow 2^Z\) and \(N: X\rightarrow 2^Z\) are set-valued mappings between normed spaces, and K is a subset of X. Let
When K is convex, \(N(x)\) is the normal cone to K at x, and p is a parameter, the mapping M is known as the solution mapping of a parameterized variational inequality.
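The display defining M (labelled (9)) is missing from this version. For a parameterized variational inequality (generalized equation), the standard solution mapping matching this description would be as follows; this is a reconstruction, not necessarily the authors' exact formula:

```latex
M(p,z) := \{\, x \in K :\ z \in F(p,x) + N(x) \,\}.
```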
The solution mapping (9) was studied in [8] and in [3] in terms of the contingent derivatives and variational sets of type 1, resp, but only for the first order. We now apply sum rules of the second-order generalized contingent derivative to get the second-order sensitivity analysis for (9).
Let \(N_K: P\times X\rightarrow 2^Z\) be defined by
Then, by setting \(Q:=F + N_K\), we get a relationship between M and Q as follows
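The displays here are missing; presumably \(N_K(p,x):=N(x)\) for \(x\in K\) (and \(\emptyset \) otherwise), and, consistent with the proof of Proposition 3.1 (where \(x\in G{\text{- }}D^{2}M(\cdot )(p,z)\) yields \(z\in G{\text{- }}D^{2}Q(\cdot )(p,x)\)), relation (10) should read:

```latex
x \in M(p,z) \iff z \in Q(p,x), \qquad \text{where } Q := F + N_K.
```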
Let \({\hat{S}}: P\times X\times Z\rightarrow 2^Z\) be defined by
The following theorem gives us a relationship between the second-order generalized contingent derivative of M and those of \(F,N_K\).
Proposition 3.1
For the solution mapping M(p, z) given by (9), let \(x_0\in M(p_0,z_0)\), \(y_0\in {\hat{S}}(p_0,x_0,z_0)\), \((p_1,u_1)\in P\times X\) and \(v_1,w_1\in Z\). Suppose that Z is finite dimensional and the following condition is satisfied
Then,
If, additionally,
then (11) becomes an equality.
Proof
It follows from (10) that
Let \(x\in G{\text{- }}D^{2}M((p_0,z_0),x_0,(p_1,v_1+w_1),u_1)(p,z)\). Then, \(z\in G{\text{- }}D^{2}Q((p_0,x_0),z_0,(p_1,u_1),v_1+w_1)(p,x)\). By a proof similar to that of Proposition 2.8, we have
which implies (11).
We now suppose, additionally, that (12) holds and x belongs to the set on the right-hand side of (11). Then, there exists \(z\in Z\) satisfying
By Proposition 2.5, we get that \(z\in G{\text{- }}D^{2}Q((p_0,x_0),z_0,(p_1,u_1),v_1+w_1)(p,x)\). It follows from (13) that \(x\in G{\text{- }}D^{2}M((p_0,z_0),\)\(x_0,(p_1,v_1+w_1),u_1)(p,z)\). \(\square \)
3.2 Optimality conditions for particular optimization problems
In the rest of the paper, we apply chain and sum rules in Sect. 2 to establish optimality conditions for weakly efficient solutions of two particular optimization problems.
Suppose that X, Y are normed spaces, and Y is partially ordered by a closed pointed convex cone C. For a given set-valued map \(F: X\rightarrow 2^Y\), we denote \(F_+(x):= F(x) + C\).
Lemma 3.2
Let \(F: X\rightarrow 2^Y\), \((x_0,y_0)\in \mathrm{gr}F\), and \((u_1,v_1)\in X\times Y\). Suppose that \((x_0,y_0)\) is a weakly efficient solution of F and \((u_1,v_1)\in X\times (-C)\). Then, for \(u\in X\),
If (14) holds for \((u_1,v_1)\in \{0\}\times C\), then \((x_0,y_0)\) is a weakly efficient solution of F.
Proof
If \((x_0,y_0)\) is a weakly efficient solution of F, the proof of (14) is similar to that of Theorem 4.1 in [15].
Suppose that (14) holds for \((u_1,v_1)\in \{0\}\times C\). Then, by Proposition 3.2 in [15], we get that \((F(X)-y_0)\cap -\mathrm{int}C=\emptyset \), i.e., \((x_0,y_0)\) is a weakly efficient solution of F. \(\square \)
Let \(F:X\rightarrow 2^Y\) and \(G: X\rightarrow 2^X\). Consider the following problem
The above problem can be expressed as the unconstrained problem Min \((F\circ G)(x)\). Optimality conditions for weakly efficient solutions of (\(\mathrm{{P_1}}\)) are established as follows.
Proposition 3.3
For the problem \(\mathrm{{(P_1)}}\), let \((x_0,z_0)\in \mathrm{gr}(F\circ G)\), \(y_0\in R(x_0,z_0)\), where \(R(x,z):=G(x)\cap F^{-1}_+(z)\), and \((u_1,v_1,w_1)\in X\times X\times Y\).
(i) (Necessary condition) Suppose that \((x_0,z_0)\) is a weakly efficient solution, and (1) in Proposition 2.1 holds for \(((x_0,z_0),y_0)\) with respect to \((u_1,v_1,w_1)\in X\times X\times (-C)\). Then,
(ii) (Sufficient condition) Assume that X is finite dimensional and (2) is fulfilled for \(((x_0,z_0),y_0)\) with respect to \((u_1,v_1,w_1)\in \{0\}\times X\times C\). Then, \((x_0,z_0)\) is a weakly efficient solution if (15) holds.
Proof
It follows from Propositions 2.1, 2.4, and Lemma 3.2. \(\square \)
For the application of sum rule, we consider the problem as follows
where \(g: X\rightarrow Y\). We set \(S:=\{x\in X\mid g(x)\in -C\}\) (the feasible set) and define \(G:X\rightarrow 2^Y\) by
For an arbitrary positive s, consider the following unconstrained set-valued optimization problem
We now give optimality conditions for \(\mathrm{{(P_C)}}\) in the following proposition.
Proposition 3.4
For the problem \(\mathrm{{(P_C)}}\), let \((x_0,z_0)\in \mathrm{gr}(F + sG)\), \(y_0\in S(x_0,z_0)\), where \(S(x,z):=F_+(x)\cap (z-(sG)_+(x))\), and \((u_1,v_1,w_1)\in X\times Y\times Y\).
(i) (Necessary condition) Suppose that \((x_0,z_0)\) is a weakly efficient solution and (5) in Proposition 2.5 holds for \(((x_0,z_0),y_0)\) with respect to \((u_1,v_1,w_1)\in X\times (-C)\times (-C)\). Then,
(ii) (Sufficient condition) Assume that Y is finite dimensional and (6) is fulfilled for \(((x_0,z_0),y_0)\) with respect to \((u_1,v_1,w_1)\in \{0\}\times C\times C\). Then, \((x_0,z_0)\) is a weakly efficient solution if (16) holds.
Proof
It follows from Propositions 2.5, 2.8, and Lemma 3.2. \(\square \)
Optimality conditions for weakly efficient solutions of the above problems were established in terms of other generalized derivatives, such as contingent epiderivatives in [7], variational sets in [3], radial derivatives in [1, 4], and radial-contingent derivatives in [6]. In those papers, the authors used some concepts in the assumptions of their results, such as the proto-variational set in [3], the proto-radial set and proto-radial derivative in [1], the radial semi-derivative in [6] (inspired by the semi-differentiability proposed in [12]), and the epi-Lipschitz-like property in [5]. These assumptions are not directly comparable with ours. However, in this paper, we have obtained not only necessary but also sufficient optimality conditions for the second order, while the above-mentioned results give only necessary optimality conditions.
4 Perspectives
In [10, 11], Mordukhovich introduced other kinds of generalized derivatives, called coderivatives, following the dual approach, while the derivative employed in our results is based on the primal one. Thus, studying connections between our results and those using coderivatives may be a promising development.
References
Anh, N.L.H., Khanh, P.Q.: Higher-order optimality conditions in set-valued optimization using radial sets and radial derivatives. J. Glob. Optim. 56, 519–536 (2013)
Anh, N.L.H., Khanh, P.Q.: Higher-order optimality conditions for proper efficiency in nonsmooth optimization using radial sets and radial derivatives. J. Glob. Optim. 58, 693–709 (2014)
Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Variational sets: calculus and applications to nonsmooth vector optimization. Nonlinear Anal. Theory Methods Appl. 74, 2358–2379 (2011)
Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Higher-order radial derivatives and optimality conditions in nonsmooth vector optimization. Nonlinear Anal. Theory Methods Appl. 74, 7365–7379 (2011)
Borwein, J.M.: Epi-Lipschitz-like sets in Banach space: theorems and examples. Nonlinear Anal. Theory Methods Appl. 11, 1207–1217 (1987)
Diem, H.T.H., Khanh, P.Q., Tung, L.T.: On higher-order sensitivity analysis in nonsmooth vector optimization. J. Optim. Theory Appl. 162, 463–488 (2014)
Jahn, J., Khan, A.A.: Some calculus rules for contingent epiderivatives. Optimization 52, 113–125 (2003)
Li, S.J., Meng, K.W., Penot, J.P.: Calculus rules for derivatives of multimaps. Set-Valued Anal. 17, 21–39 (2009)
Mordukhovich, B.S.: Generalized differential calculus for nonsmooth and set-valued mappings. J. Math. Anal. Appl. 183, 250–288 (1994)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, Volume I: Basic Theory. Springer, Berlin (2006)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, Volume II: Applications. Springer, Berlin (2006)
Penot, J.P.: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22, 529–551 (1984)
Rockafellar, R.T., Wets, R.J.B.: Variational Analysis, 3rd edn. Springer, Berlin (2009)
Taa, A.: Set-valued derivatives of multifunctions and optimality conditions. Numer. Funct. Anal. Optim. 19, 121–140 (1998)
Wang, Q.L., Li, S.J., Teo, K.L.: Higher-order optimality conditions for weakly efficient solutions in nonconvex set-valued optimization. Optim. Lett. 4, 425–437 (2010)
Acknowledgements
This research was funded by Vietnam National University Hochiminh City (VNU-HCM) under Grant Number B2018-28-02. We are thankful to the anonymous referee for his useful comments to improve the manuscript.
Anh, N.L.H., Thoa, N.T. Calculus rules of the generalized contingent derivative and applications to set-valued optimization. Positivity 24, 81–94 (2020). https://doi.org/10.1007/s11117-019-00667-3
Keywords
- Generalized contingent derivative
- Sum rule
- Chain rule
- Set-valued optimization
- Optimality conditions
- Sensitivity analysis