Abstract
The primary goal of this paper is to present a unified approach to convex and Clarke generalized differentiation theory, and to shed new light on both, using concepts and techniques from Mordukhovich's developments. We show that the concepts and techniques used by Mordukhovich are important not only to his generalized differentiation theory itself but also to many other aspects of nonsmooth analysis. In particular, they can be used to derive convex subdifferential calculus rules as well as many important calculus rules for Clarke subdifferentials.
1 Introduction
For centuries, differential calculus has served as an indispensable tool for science and technology, but the rise of more complex models requires new tools to deal with nondifferentiable data. Geometric properties of convex functions and sets were topics of study for accomplished mathematicians such as Fenchel and Minkowski in the early twentieth century. However, the era of nonsmooth/variational analysis did not begin until the 1960s, when Moreau and Rockafellar independently introduced a concept called the subgradient for convex functions and, together with many other mathematicians, started developing the theory of generalized differentiation for convex functions and sets. The choice to start with convex functions and sets comes from the fact that they have several interesting properties that are important for applications to optimization. The theory is now called convex analysis, a mature field serving as the mathematical foundation for convex optimization.
The beauty and tremendous applications of convex analysis motivated the search for a new theory to deal with broader classes of functions and sets where convexity is not assumed. This work was initiated by Clarke, a student of Rockafellar. In the early 1970s, Clarke began to develop a generalized differentiation theory for the class of Lipschitz continuous functions. This theory revolves around a notion called generalized gradient, a concept that has led to numerous works, developments, and applications.
The theory of generalized differentiation for nonsmooth functions relies on the interaction between the analytic and geometric properties of functions and sets. In the mid 1970s, the work of Mordukhovich brought this key idea to a very high level of generality and beauty, as reflected in two important objects of his theory of generalized differentiation: nonsmooth functions and set-valued mappings. The generalized derivative notions for nonsmooth functions and set-valued mappings introduced by Mordukhovich are now known as the Mordukhovich subdifferential and the Mordukhovich coderivative, respectively. Mordukhovich’s generalized differentiation theory is effective for many applications, especially to optimization theory. In spite of the nonconvexity of the Mordukhovich generalized derivative constructions, they possess well-developed calculus rules in many important classes of Banach spaces including reflexive spaces.
Our main goal in this paper is to develop a unified approach for generalized differential calculus of convex subdifferentials and Clarke subdifferentials. We show that the concepts and techniques used by Mordukhovich are important, not only to his generalized differentiation theory itself, but also to many other aspects of nonsmooth analysis.
The paper is organized as follows. In Section 2, we introduce important concepts of nonsmooth analysis that are used throughout the paper. In Section 3, we obtain a subdifferential formula in the sense of convex analysis for the optimal value function under convexity, and then use it to derive many important subdifferential rules of convex analysis. Section 4 is devoted to deriving coderivative and subdifferential calculus rules in the sense of Clarke using Mordukhovich’s constructions.
2 Definitions and Preliminaries
In this section, we present basic concepts and results of nonsmooth analysis to be used in the next sections.
Let X be a Banach space. For a function φ : X → ℝ which is Lipschitz continuous around \(\overline {x} \in X\) with Lipschitz modulus ℓ ≥ 0, the Clarke generalized directional derivative of φ at \(\overline {x}\) in the direction v ∈ X is defined by
$$\varphi^{\circ}(\overline{x}; v):=\limsup_{x\to\overline{x},\; t\downarrow 0}\frac{\varphi(x+tv)-\varphi(x)}{t}. $$
The generalized directional derivatives of φ at \(\overline {x}\) are employed to define the Clarke subdifferential of the function at this point:
$$\partial_{C}\varphi(\overline{x}):=\{x^{*}\in X^{*}\;|\;\langle x^{*}, v\rangle\leq \varphi^{\circ}(\overline{x}; v)\ \text{for all}\ v\in X\}. $$
Given a nonempty closed set Ω and given a point \(\overline {x}\in \Omega \), the Clarke normal cone to Ω at \(\overline {x}\) is a subset of the dual space X ∗ defined by
$$N_{C}(\overline{x};\Omega):=\text{cl}^{*}\Big[\bigcup_{\lambda\geq 0}\lambda\,\partial_{C}\,\text{dist}(\overline{x};\Omega)\Big], $$
where dist(⋅;Ω) is the distance function to Ω with the representation
$$\text{dist}(x;\Omega):=\inf\{\|x-\omega\|\;|\;\omega\in\Omega\}. $$
With the definition of Clarke normal cones to nonempty closed sets in hand, the Clarke subdifferential \(\partial _{C}\varphi (\overline {x})\) for a lower semicontinuous extended-real-valued function φ : X → (−∞, ∞] at \(\overline {x}\in ~\text {dom}~\varphi := \{x\in X\; |\; \varphi (x)< \infty \}\) can be defined in terms of Clarke normal cones to the epigraph of the function by
$$\partial_{C}\varphi(\overline{x}):=\{x^{*}\in X^{*}\;|\;(x^{*},-1)\in N_{C}((\overline{x},\varphi(\overline{x}));\text{epi}\,\varphi)\}, $$
where epi φ := {(x, λ) ∈ X × ℝ | λ ≥ φ(x)}.
Similarly, the Clarke singular subdifferential of φ at \(\overline {x}\) is defined by
$$\partial^{\infty}_{C}\varphi(\overline{x}):=\{x^{*}\in X^{*}\;|\;(x^{*},0)\in N_{C}((\overline{x},\varphi(\overline{x}));\text{epi}\,\varphi)\}. $$
Another important normal cone structure of nonsmooth analysis is called the Fréchet normal cone to Ω at \(\overline {x} \in \Omega \) and is defined by
$$\widehat N(\overline{x};\Omega):=\Big\{x^{*}\in X^{*}\;\Big|\;\limsup_{x\xrightarrow{\Omega}\overline{x}}\frac{\langle x^{*}, x-\overline{x}\rangle}{\|x-\overline{x}\|}\leq 0\Big\}. $$
We also define the set of Fréchet ε-normals to Ω at \(\overline {x}\) by
$$\widehat N_{\varepsilon}(\overline{x};\Omega):=\Big\{x^{*}\in X^{*}\;\Big|\;\limsup_{x\xrightarrow{\Omega}\overline{x}}\frac{\langle x^{*}, x-\overline{x}\rangle}{\|x-\overline{x}\|}\leq \varepsilon\Big\}. $$
If \(\overline {x}\notin \Omega \), we put \(\widehat N(\overline {x}; \Omega ):=\emptyset \) and \(\widehat N_{\varepsilon }(\overline {x}; \Omega ):=\emptyset \).
The Mordukhovich normal cone to Ω at \(\overline {x}\) is defined in terms of the Fréchet ε-normals to the set around \(\overline {x}\) using the Kuratowski upper limit:
$$N(\overline{x};\Omega):=\mathop{\text{Lim}\,\text{sup}}_{x\to\overline{x},\;\varepsilon\downarrow 0}\widehat N_{\varepsilon}(x;\Omega). $$
If X is an Asplund space (see [3]) and Ω is locally closed around \(\overline {x}\), then the limiting normal cone can be equivalently represented as
$$N(\overline{x};\Omega)=\mathop{\text{Lim}\,\text{sup}}_{x\to\overline{x}}\widehat N(x;\Omega). $$
The Mordukhovich subdifferential and singular subdifferential of an extended-real-valued lower semicontinuous function φ : X → (−∞, ∞] at \(\overline {x}\in \text {dom}\,\varphi \) are then, respectively, defined by
$$\partial\varphi(\overline{x}):=\{x^{*}\in X^{*}\;|\;(x^{*},-1)\in N((\overline{x},\varphi(\overline{x}));\text{epi}\,\varphi)\}\quad\text{and}\quad \partial^{\infty}\varphi(\overline{x}):=\{x^{*}\in X^{*}\;|\;(x^{*},0)\in N((\overline{x},\varphi(\overline{x}));\text{epi}\,\varphi)\}. $$
It is well-known that when φ is a convex function, both subdifferential constructions reduce to the subdifferential in the sense of convex analysis
$$\partial\varphi(\overline{x})=\{x^{*}\in X^{*}\;|\;\langle x^{*}, x-\overline{x}\rangle\leq \varphi(x)-\varphi(\overline{x})\ \text{for all}\ x\in X\}. $$
Moreover, if the set Ω is convex, both normal cone structures reduce to the normal cone in the sense of convex analysis
$$N(\overline{x};\Omega)=\{x^{*}\in X^{*}\;|\;\langle x^{*}, x-\overline{x}\rangle\leq 0\ \text{for all}\ x\in\Omega\}. $$
The relation between the Mordukhovich normal cone and subdifferential constructions can also be represented by
$$N(\overline{x};\Omega)=\partial\delta(\overline{x};\Omega), $$
where δ(⋅; Ω) is the indicator function associated with Ω given by δ(x; Ω) = 0 if x ∈ Ω, and δ(x; Ω) = +∞ otherwise. Similar relations also hold true for Clarke and Fréchet normal cones and subdifferentials.
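For a simple one-dimensional illustration of these relations (ours, added for concreteness), take Ω = [0, ∞) ⊆ ℝ and \(\overline{x} = 0\):

```latex
% Omega = [0, \infty) at \bar{x} = 0: directly from the definitions,
\[
N(0; [0,\infty)) = \{x^{*} \in \mathbb{R} \mid x^{*} x \le 0
  \ \text{for all}\ x \ge 0\} = (-\infty, 0],
\]
% while for the indicator function \delta(\cdot\,; [0,\infty)) one computes
\[
\partial \delta(\cdot\,; [0,\infty))(0) = (-\infty, 0],
\]
% so the normal cone coincides with the subdifferential of the indicator.
```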
Let F : X ⇉ Y be a set-valued mapping between two Banach spaces X and Y. That means F(x) is a subset of Y for every x ∈ X. The domain and graph of F are given, respectively, by
$$\text{dom}\,F:=\{x\in X\;|\;F(x)\neq\emptyset\}\quad\text{and}\quad \text{gph}\,F:=\{(x,y)\in X\times Y\;|\;y\in F(x)\}. $$
We say that F has convex graph if its graph is a convex set in X × Y. Similarly, we say that F has closed graph if its graph is a closed set in X × Y. It is easy to see that if A : X → Y is a bounded linear mapping, then A has closed convex graph.
Let us continue with generalized derivative concepts for set-valued mappings introduced by Mordukhovich. The set-valued mapping \(D^{*}F(\overline {x},\overline {y}): Y^{*}\rightrightarrows X^{*}\) defined by
$$D^{*}F(\overline{x},\overline{y})(y^{*}):=\{x^{*}\in X^{*}\;|\;(x^{*},-y^{*})\in N((\overline{x},\overline{y});\text{gph}\,F)\},\quad y^{*}\in Y^{*}, $$
is called the Mordukhovich coderivative of F at \((\overline {x},\overline {y})\). By convention, \(D^{*}F(\overline {x},\overline {y})(y^{*}) = \emptyset \) for all \((\overline {x},\overline {y})\notin \text {gph}\, F\) and y ∗ ∈ Y ∗. When F is single-valued, one writes \(D^{*}F (\overline {x})\) instead of \(D^{*}F(\overline {x},\overline {y})\), where \(\overline {y}=F(\overline {x})\). The corresponding Clarke coderivative is similarly defined by
$$D^{*}_{C}F(\overline{x},\overline{y})(y^{*}):=\{x^{*}\in X^{*}\;|\;(x^{*},-y^{*})\in N_{C}((\overline{x},\overline{y});\text{gph}\,F)\}. $$
In general, one has
$$D^{*}F(\overline{x},\overline{y})(y^{*})\subseteq D^{*}_{C}F(\overline{x},\overline{y})(y^{*})\quad\text{for all}\ y^{*}\in Y^{*}, $$
where the inclusion holds as equality if F has convex graph.
The following theorem, which establishes the relation between the Clarke and Mordukhovich generalized differentiation constructions, will play an important role in this paper; see [3] for the definitions and proof.
Theorem 1
Let X be an Asplund space. The following assertions hold:
-
(i)
Let Ω ⊆ X be a closed set. Then
$$N_{C}(\overline{x};\Omega) = \text{cl}^{*}\text{co}\,N(\overline{x};\Omega). $$ -
(ii)
Let φ : X → (−∞, ∞] be lower semicontinuous and let \(\overline {x} \in ~\text {dom}\, \varphi \). Then
$$\partial_{C}\varphi(\overline{x}) = \text{cl}^{*}\text{co}[\partial\varphi(\overline{x})+\partial^{\infty}\varphi(\overline{x})]. $$
In particular, if φ is Lipschitz continuous around \(\overline {x}\), then
$$\partial_{C} \varphi(\overline{x}) = \text{cl}^{*}\text{co}\, \partial \varphi(\overline{x}). $$
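A standard one-dimensional example (ours, added for illustration) shows how Theorem 1(ii) convexifies a nonconvex subdifferential. For φ(x) = −|x| on ℝ, which is Lipschitz continuous with modulus 1:

```latex
% phi(x) = -|x| on R at \bar{x} = 0:
\[
\partial\varphi(0) = \{-1, 1\},
\qquad
\partial_{C}\varphi(0) = \mathrm{cl}^{*}\mathrm{co}\,\partial\varphi(0) = [-1, 1].
\]
% The Mordukhovich subdifferential is a nonconvex two-point set, while the
% Clarke subdifferential is exactly its closed convex hull.
```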
3 Convex Subdifferential Calculus via Mordukhovich Subgradients
Our goal in this section is to develop a unified approach for convex subdifferential calculus in infinite dimensions. We show that the concepts and techniques of nonsmooth analysis involving Mordukhovich subdifferential and coderivative constructions can be employed to shed new light on a fundamental subject of convex analysis: convex subdifferential calculus.
Let us start with some well-known examples of computing coderivatives of convex set-valued mappings. We provide the details for the convenience of the reader. Throughout this section, we assume X, Y, and Z are Banach spaces unless otherwise stated.
Example 1
Let K be a nonempty convex set in Y. Define F : X ⇉ Y by F(x) = K for every x ∈ X. Then
$$D^{*}F(\overline{x},\overline{y})(y^{*})=\begin{cases}\{0\} & \text{if}\ -y^{*}\in N(\overline{y};K),\\ \emptyset & \text{otherwise.}\end{cases} $$
For any \((\overline {x},\overline {y})\in X \times K\), one has \(N((\overline {x},\overline {y}); ~\text {gph}\, F) = \{0\}\times N(\overline {y}; K)\). Thus, for y ∗ ∈ Y ∗,
$$x^{*}\in D^{*}F(\overline{x},\overline{y})(y^{*})\ \text{if and only if}\ x^{*}=0\ \text{and}\ -y^{*}\in N(\overline{y};K). $$
In particular, if K = Y, then
$$D^{*}F(\overline{x},\overline{y})(y^{*})=\begin{cases}\{0\} & \text{if}\ y^{*}=0,\\ \emptyset & \text{otherwise.}\end{cases} $$
The example below shows that the coderivative concept is a generalization of the adjoint mapping well known in functional analysis.
Example 2
Let F : X ⇉ Y be given by F(x) = {A(x) + b}, where A : X → Y is a bounded linear mapping and b ∈ Y. For \((\overline {x},\overline {y})\in \text {gph}\, F\) with \(\overline {y}=A(\overline {x})+b\), one has
$$D^{*}F(\overline{x},\overline{y})(y^{*})=\{A^{*}y^{*}\}\quad\text{for every}\ y^{*}\in Y^{*}. $$
Let us first prove that
$$(x^{*},y^{*})\in N((\overline{x},\overline{y});\text{gph}\,F)\ \text{if and only if}\ x^{*}=-A^{*}y^{*}. $$
Obviously, gph F is a convex set. By the definition, \((x^{*},y^{*})\in N((\overline {x},\overline {y}); ~\text {gph} F)\) if and only if
$$\langle x^{*}, x-\overline{x}\rangle+\langle y^{*}, y-\overline{y}\rangle\leq 0\quad\text{for all}\ (x,y)\in\text{gph}\,F. $$(1)
In our case, we have
$$\langle y^{*}, y-\overline{y}\rangle=\langle y^{*}, A(x)-A(\overline{x})\rangle=\langle A^{*}y^{*}, x-\overline{x}\rangle. $$
Thus, Eq. (1) is equivalent to \(\langle x^{*}+A^{*}y^{*}, x-\overline {x}\rangle \leq 0\) for all x ∈ X, which holds if and only if x ∗ = −A ∗ y ∗. As a result, \(x^{*}\in D^{*}F(\overline {x}, \overline {y})(y^{*})\) if and only if x ∗ = A ∗ y ∗.
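In finite dimensions, this computation can be verified numerically. The following self-contained Python sketch (our illustration, not from the paper; `matvec`, `transpose`, and `dot` are ad hoc helpers) checks that x* = −A^T y* makes (x*, y*) a normal to gph F for F(x) = {Ax + b}, by evaluating the defining inequality along arbitrary directions:

```python
# Sketch (not from the paper): for F(x) = {Ax + b}, a pair (xs, ys) belongs to
# the convex normal cone N((xbar, ybar); gph F) exactly when xs = -A^T ys,
# because <xs, x - xbar> + <ys, A(x - xbar)> <= 0 must hold for every x.

def matvec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1.0, 2.0], [0.0, -1.0]]
ys = [2.0, 5.0]                                # an arbitrary dual vector y*
xs = [-t for t in matvec(transpose(A), ys)]    # x* = -A^T y*

# The normal cone inequality <xs, d> + <ys, A d> <= 0 for d = x - xbar
# holds with equality in every direction d, confirming x* = -A^T y*.
samples = [[1.0, 0.0], [0.0, 1.0], [3.0, -7.0], [-2.5, 4.0]]
residuals = [dot(xs, d) + dot(ys, matvec(A, d)) for d in samples]
print(max(abs(r) for r in residuals))  # → 0.0
```

Consequently \(x^{*}\in D^{*}F(\overline{x},\overline{y})(y^{*})\) corresponds to \(x^{*}=A^{*}y^{*}\), the adjoint mapping, as the example states.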
The example below establishes the relationship between subdifferential and coderivative.
Example 3
Let f : X → (−∞, ∞] be a convex function. Define
$$F(x):=[f(x),\infty)=\{t\in\mathbb{R}\;|\;t\geq f(x)\},\quad x\in X. $$
Then gph F = epi f is a convex set. For \(\overline {y}=f(\overline {x})\in \mathbb R\), one has
$$D^{*}F(\overline{x},\overline{y})(\lambda)=\begin{cases}\lambda\,\partial f(\overline{x}) & \text{if}\ \lambda>0,\\ \partial^{\infty}f(\overline{x}) & \text{if}\ \lambda=0,\\ \emptyset & \text{if}\ \lambda<0.\end{cases} $$
First, we have the following
$$\partial f(\overline{x})=\{\mathit{v}\in X^{*}\;|\;(\mathit{v},-1)\in N((\overline{x},\overline{y});\text{epi}\,f)\}\quad\text{and}\quad \partial^{\infty}f(\overline{x})=\{\mathit{v}\in X^{*}\;|\;(\mathit{v},0)\in N((\overline{x},\overline{y});\text{epi}\,f)\}. $$
For λ > 0, by the definition, \(\mathit {v}\in D^{*}F(\overline {x},\overline {y})(\lambda )\) if and only if \((\mathit {v}, -\lambda )\in N((\overline {x},\overline {y}); ~\text {gph}\, F)\), which is equivalent to \((\frac {\mathit {v}}{\lambda }, -1)\in N((\overline {x},\overline {y}); ~\text {epi}\, f)\) or \(v\in \lambda \partial f(\overline {x})\). Similarly, for λ = 0, \(\mathit {v}\in D^{*}F(\overline {x},\overline {y})(0)\) if and only if \((\mathit {v}, 0)\in N((\overline {x},\overline {y}); \text {epi}\, f)\) or \(\mathit {v}\in \partial ^{\infty } f(\overline {x})\). Observe that if \((\mathit {v}, -\upmu )\in N((\overline {x},f(\overline {x})); \text {epi}\, f)\), then μ ≥ 0. Thus, the last part of the formula follows.
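As a concrete case of this example (ours, added for illustration), take f(x) = |x| on ℝ, so that F(x) = [|x|, ∞) and gph F = epi |·|. At (0, 0):

```latex
% f(x) = |x|: \partial f(0) = [-1, 1] and \partial^{\infty} f(0) = \{0\}
% (f is Lipschitz), so the formula above yields
\[
D^{*}F(0,0)(\lambda) =
\begin{cases}
[-\lambda, \lambda] & \text{if } \lambda > 0,\\
\{0\} & \text{if } \lambda = 0,\\
\emptyset & \text{if } \lambda < 0,
\end{cases}
\]
% the empty value for \lambda < 0 reflecting that every epigraphical normal
% (v, -\mu) has \mu \ge 0.
```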
Let F : X ⇉ Y be a set-valued mapping between Banach spaces, and let φ : X × Y → (−∞, ∞] be an extended-real-valued function. The optimal value function built on F and φ is given by
$$\upmu(x):=\inf\{\varphi(x,y)\;|\;y\in F(x)\}. $$(2)
We adopt the convention that inf ∅ = ∞. Thus, μ is an extended-real-valued function. Under convexity assumptions on F and φ, we will show that convex subdifferentials of the optimal value function μ can be represented as an equality in terms of convex subdifferentials of the function φ and coderivatives of the mapping F. Note that the result cannot be derived directly from [3].
The following obvious proposition guarantees that μ(x) > −∞, which is the standing assumption for this section.
Proposition 1
Consider the optimal value function μ given by Eq. (2). Suppose that φ(x, ⋅) is bounded below on F(x). Then μ(x) > −∞. In particular, suppose that there exist b ∈ ℝ and a function g : X → (− ∞, ∞] such that for any (x, y) ∈ X × Y with y ∈ F (x), one has
$$\varphi(x, y)\geq g(x)+b. $$
Then μ(x) > − ∞ for all x ∈ X.
Proposition 2
Suppose that F has convex graph and φ is a convex function. Then the optimal value function μ defined by Eq. (2) is a convex function.
Proof
Fix x 1, x 2 ∈ dom μ and λ ∈ (0, 1). For any ε > 0, by the definition, there exist y i ∈ F(x i ) such that
$$\varphi(x_{i}, y_{i})<\upmu(x_{i})+\varepsilon\quad\text{for}\ i=1,2. $$
It follows that
$$\lambda\varphi(x_{1}, y_{1})<\lambda\upmu(x_{1})+\lambda\varepsilon\quad\text{and}\quad (1-\lambda)\varphi(x_{2}, y_{2})<(1-\lambda)\upmu(x_{2})+(1-\lambda)\varepsilon. $$
Adding these inequalities and applying the convexity of φ yield
$$\varphi(\lambda x_{1}+(1-\lambda)x_{2}, \lambda y_{1}+(1-\lambda)y_{2})\leq \lambda\varphi(x_{1},y_{1})+(1-\lambda)\varphi(x_{2},y_{2})<\lambda\upmu(x_{1})+(1-\lambda)\upmu(x_{2})+\varepsilon. $$
Since gph F is convex,
$$\lambda(x_{1}, y_{1})+(1-\lambda)(x_{2}, y_{2})\in\text{gph}\,F. $$
Therefore, λ y 1 + (1 − λ)y 2 ∈ F(λ x 1 + (1 − λ)x 2), and hence
$$\upmu(\lambda x_{1}+(1-\lambda)x_{2})\leq \varphi(\lambda x_{1}+(1-\lambda)x_{2}, \lambda y_{1}+(1-\lambda)y_{2})<\lambda\upmu(x_{1})+(1-\lambda)\upmu(x_{2})+\varepsilon. $$
Letting ε →0, we derive the convexity of μ. □
The optimal value function (2) can be used as a convex function generator in the sense that many operations that preserve convexity can be reduced to this function.
Proposition 3
Consider the optimal value function ( 2 ), where F : X ⇉ Y has convex graph and φ : X × Y → (−∞, ∞] is a convex function. Let
$$S(x):=\{y\in F(x)\;|\;\upmu(x)=\varphi(x,y)\}. $$(3)
Assume that \(\upmu (\overline {x})<\infty \) and \(S(\overline {x})\neq \emptyset \) . For any \(\overline {y}\in S(\overline {x})\) , one has
$$\bigcup_{(u,\mathit{v})\in\partial\varphi(\overline{x},\overline{y})}\left[u+D^{*}F(\overline{x},\overline{y})(\mathit{v})\right]\subseteq \partial\upmu(\overline{x}). $$(4)
Proof
Fix any w that belongs to the left side. Then there exists \((u,\mathit {v})\in \partial \varphi (\overline {x},\overline {y})\) such that
$$\mathit{w}-u\in D^{*}F(\overline{x},\overline{y})(\mathit{v}). $$
Thus, \((\mathit {w}-u, -\mathit {v})\in N((\overline {x},\overline {y}); ~\text {gph}\, F)\). Then
$$\langle \mathit{w}-u, x-\overline{x}\rangle-\langle \mathit{v}, y-\overline{y}\rangle\leq 0\quad\text{for all}\ (x,y)\in\text{gph}\,F. $$
This implies
$$\langle \mathit{w}, x-\overline{x}\rangle\leq \langle u, x-\overline{x}\rangle+\langle \mathit{v}, y-\overline{y}\rangle\leq \varphi(x,y)-\varphi(\overline{x},\overline{y})=\varphi(x,y)-\upmu(\overline{x}) $$
whenever y ∈ F(x). It follows that
$$\langle \mathit{w}, x-\overline{x}\rangle\leq \upmu(x)-\upmu(\overline{x})\quad\text{for all}\ x\in X. $$
Therefore, \(\mathit {w}\in \partial \upmu (\overline {x})\). □
The opposite inclusion in Eq. (4) also holds true under the sequential normal compactness and qualification conditions presented below.
A nonempty closed subset Ω of a Banach space is said to be sequentially normally compact (SNC) at \(\overline {x}\in \Omega \) if for \(x_{k}\xrightarrow {\Omega }\overline {x}\) and \(x^{*}_{k}\in N(x_{k}; \Omega )\), the following implication holds:
$$\left[x^{*}_{k}\xrightarrow{\mathit{w}^{*}}0\right]\Longrightarrow \left[\|x^{*}_{k}\|\to 0\ \text{as}\ k\to\infty\right]. $$
In this definition, \(x_{k}\xrightarrow {\Omega }\overline {x}\) means that \(x_{k}\to \overline {x}\) and x k ∈ Ω for every k. Obviously, every subset of a finite dimensional Banach space is SNC. An extended-real-valued function φ : X → (−∞, ∞] is called sequentially normally epi-compact (SNEC) at \(\overline {x}\) if its epigraph is SNC at \((\overline {x}, \varphi (\overline {x}))\).
Let us give a simple proof in the proposition below that every convex set with nonempty interior is SNC at any point of the set.
Proposition 4
Let Ω be a convex subset of a Banach space with nonempty interior. Then Ω is SNC at any point \(\overline {x}\in \Omega \).
Proof
Let \(\overline {u}\in ~\text {int}\,\Omega \) and let δ > 0 satisfy \(\mathbb {B}(\overline {u}; 2\delta )\subseteq \Omega \). Fix sequences {x k } and \(\{{x^{*}_{k}}\}\) with \(x_{k}\xrightarrow {\Omega }\overline {x}\) and \(x^{*}_{k}\in N(x_{k}; \Omega )\), and \(x^{*}_{k}\xrightarrow {w^{*}}0\). Choose k 0 such that \(\|x_{k}-\overline {x}\|<\delta \) for all k ≥ k 0. It is not hard to see that for any e ∈ 𝔹, one has
$$\overline{u}+2\delta e\in\Omega,\quad\text{and hence}\quad \langle x^{*}_{k}, \overline{u}+2\delta e-x_{k}\rangle\leq 0. $$
Thus,
$$2\delta\langle x^{*}_{k}, e\rangle\leq \langle x^{*}_{k}, x_{k}-\overline{u}\rangle\quad\text{for all}\ e\in\mathbb{B}. $$
It follows that
$$2\delta\|x^{*}_{k}\|\leq \langle x^{*}_{k}, x_{k}-\overline{x}\rangle+\langle x^{*}_{k}, \overline{x}-\overline{u}\rangle\leq \delta\|x^{*}_{k}\|+\langle x^{*}_{k}, \overline{x}-\overline{u}\rangle\quad\text{for}\ k\geq k_{0}, $$
so \(\delta \|x^{*}_{k}\|\leq \langle x^{*}_{k}, \overline {x}-\overline u\rangle \to 0\). Therefore, \(\|x^{*}_{k}\|\to 0\), and hence Ω is SNC at \(\overline {x}\). □
Theorem 2
Let X and Y be Asplund spaces. Consider the optimal value function ( 2 ), where F : X ⇉ Y has closed convex graph and φ : X × Y → (−∞, ∞] is a lower semicontinuous convex function. Assume that \(\upmu (\overline {x})<\infty \) and \(S(\overline {x})\neq \emptyset \) , where S is the solution mapping ( 3 ). For any \(\overline {y}\in S(\overline {x})\) , one has
$$\partial\upmu(\overline{x})=\bigcup_{(u,\mathit{v})\in\partial\varphi(\overline{x},\overline{y})}\left[u+D^{*}F(\overline{x},\overline{y})(\mathit{v})\right] $$
under the qualification condition:
$$\partial^{\infty}\varphi(\overline{x},\overline{y})\cap\left(-N((\overline{x},\overline{y});\text{gph}\,F)\right)=\{(0,0)\}, $$(5)
and either φ is SNEC at \((\overline {x}, \overline {y})\) or gph F is SNC at this point.
Proof
By Proposition 3, we only need to prove the inclusion ⊆. Fix any \(\mathit {w}\in \partial \upmu (\overline {x})\) and \(\overline {y}\in S(\overline {x})\). Then
$$\langle \mathit{w}, x-\overline{x}\rangle\leq \upmu(x)-\upmu(\overline{x})\quad\text{for all}\ x\in X. $$
Thus, for any (x, y) ∈ X × Y, one has
$$\langle (\mathit{w},0), (x,y)-(\overline{x},\overline{y})\rangle\leq \varphi(x,y)+\delta((x,y);\text{gph}\,F)-\left[\varphi(\overline{x},\overline{y})+\delta((\overline{x},\overline{y});\text{gph}\,F)\right]. $$
Let f(x, y):=φ(x, y) + δ((x, y); gph F). By the subdifferential sum rule (see [3, Theorem 3.36]), one has
$$(\mathit{w},0)\in\partial f(\overline{x},\overline{y})\subseteq \partial\varphi(\overline{x},\overline{y})+N((\overline{x},\overline{y});\text{gph}\,F) $$
under the qualification condition (5). Thus,
$$(\mathit{w},0)=(u_{1},\mathit{v}_{1})+(u_{2},\mathit{v}_{2}), $$
where \((u_{1},\mathit {v}_{1})\in \partial \varphi (\overline {x},\overline {y})\) and \((u_{2}, v_{2})\in N((\overline {x},\overline {y}); ~\text {gph}\, F)\). Then v 2 = − v 1, and hence \((u_{2}, -\mathit {v}_{1})\in N((\overline {x},\overline {y}); \text {gph}\, F)\). It follows that \(u_{2}\in D^{*}F(\overline {x},\overline {y})(\mathit {v}_{1})\) and
$$\mathit{w}=u_{1}+u_{2}\in u_{1}+D^{*}F(\overline{x},\overline{y})(\mathit{v}_{1}), $$
where \((u_{1}, \mathit {v}_{1})\in \partial \varphi (\overline {x},\overline {y})\). □
A function g : ℝn → (−∞, ∞] is called nondecreasing componentwise if the following implication holds:
$$\left[x_{i}\leq y_{i}\ \text{for all}\ i=1,\ldots,n\right]\Longrightarrow g(x_{1},\ldots,x_{n})\leq g(y_{1},\ldots,y_{n}). $$
Proposition 5
Let f i : X → ℝ for i = 1, … , n be convex functions and let h : X → ℝn be defined by h(x) = (f 1 (x), … , f n (x)). Suppose that g : ℝn → (−∞, ∞] is a convex function that is nondecreasing componentwise. Then g ∘ h : X → (−∞, ∞] is a convex function.
Proof
Define the set-valued mapping F : X ⇉ ℝn by
$$F(x):=[f_{1}(x),\infty)\times\cdots\times[f_{n}(x),\infty). $$
Then
$$\text{gph}\,F=\{(x,y)\in X\times\mathbb{R}^{n}\;|\;f_{i}(x)\leq y_{i}\ \text{for}\ i=1,\ldots,n\}. $$
Since f i is convex for i = 1, … , n, gph F is convex. Define φ : X × ℝn → (−∞, ∞] by φ(x, y) = g(y). Since g is nondecreasing componentwise, it is obvious that
$$(g\circ h)(x)=\inf\{\varphi(x,y)\;|\;y\in F(x)\}=\inf_{y\in F(x)}\varphi(x,y). $$
Thus, g ∘ h is convex by Proposition 2. □
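Proposition 5 is easy to sanity-check numerically. The Python sketch below (our illustration; the particular choices of g, f 1, f 2 are ad hoc) samples the convexity inequality for the composition with g = max, which is convex and nondecreasing componentwise:

```python
# Sketch (not from the paper): g(y1, y2) = max(y1, y2) is convex and
# nondecreasing componentwise, and f1, f2 below are convex, so by
# Proposition 5 the composition x -> max(f1(x), f2(x)) is convex.
import random

def f1(x): return x * x
def f2(x): return abs(x - 1.0)
def comp(x): return max(f1(x), f2(x))

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.random()
    mid = lam * a + (1 - lam) * b
    # convexity: comp(lam*a + (1-lam)*b) <= lam*comp(a) + (1-lam)*comp(b)
    assert comp(mid) <= lam * comp(a) + (1 - lam) * comp(b) + 1e-12
print("convexity inequality verified on all samples")
```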
Proposition 6
Let X be an Asplund space. Let f : X → ℝ be a convex function and let φ : ℝ → (−∞, ∞] be a nondecreasing convex function. Let \(\overline {x}\in X\) and let \(\overline {y}:=f(\overline {x})\) . Then
$$\partial(\varphi\circ f)(\overline{x})=\bigcup\left\{\lambda\,\partial f(\overline{x})\;|\;\lambda\in\partial\varphi(\overline{y})\right\} $$
under the assumption that \(\partial ^{\infty }\varphi (\overline {y}) = \{0\}\) or \(0\notin \partial f(\overline {x})\).
Proof
It has been proved in Proposition 5 that φ ∘ f is a convex function. Define
$$F(x):=[f(x),\infty),\quad x\in X. $$
Since φ is a nondecreasing function, one has
$$(\varphi\circ f)(x)=\inf\{\varphi(y)\;|\;y\in F(x)\}. $$
By Theorem 2,
$$\partial(\varphi\circ f)(\overline{x})=\bigcup\left\{D^{*}F(\overline{x},\overline{y})(\lambda)\;|\;\lambda\in\partial\varphi(\overline{y})\right\}. $$
Since φ is nondecreasing, λ ≥ 0 for every \(\lambda \in \partial \varphi (\overline {y})\). By Example 3,
$$\partial(\varphi\circ f)(\overline{x})=\bigcup\left\{\lambda\,\partial f(\overline{x})\;|\;\lambda\in\partial\varphi(\overline{y})\right\}. $$
Note that the condition \(\partial ^{\infty }\varphi (\overline {y}) = \{0\}\) or \(0\notin \partial f(\overline {x})\) guarantees the qualification condition (5), and φ is automatically SNEC at \(\overline {y}\) since its epigraph is a subset of the finite-dimensional space ℝ2. □
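A concrete instance of Proposition 6 (ours, added for illustration): take φ(t) = e^t, which is convex, nondecreasing, and differentiable, so ∂φ(ȳ) = {e^{ȳ}} and ∂^∞φ(ȳ) = {0}:

```latex
% Chain rule for phi(t) = e^t and a convex f: X -> R:
\[
\partial (e^{f})(\overline{x})
  = \bigcup\{\lambda\,\partial f(\overline{x}) \mid \lambda \in \partial\varphi(f(\overline{x}))\}
  = e^{f(\overline{x})}\,\partial f(\overline{x}).
\]
% For f(x) = |x| on R this gives \partial(e^{|x|})(0) = e^{0}[-1,1] = [-1,1].
```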
The same approach can be applied to the general composition in Proposition 7. A simplified version of the proposition below in finite dimensions can be found in [2]. Note that our result is new in infinite dimensions and more general in finite dimensions. Moreover, our proof is much simpler than the proof in [2].
Proposition 7
Let X be an Asplund space. Let \(f_{i}: X\to \mathbb R\) for i = 1, … , n be convex functions that are locally Lipschitz around \(\overline {x}\) , and let \(h: X\to \mathbb {R}\,^{n}\) be defined by h(x) = (f 1 (x),…,f n (x)).
Suppose that \(g: \mathbb R^{n}\to (-\infty , \infty ]\) is a convex function that is nondecreasing componentwise. Then g ∘ h : X → (−∞, ∞] is a convex function. Moreover, for any \(\overline {x}\in ~\text {dom}\, (g\circ h)\) , one has
$$\partial(g\circ h)(\overline{x})=\bigcup\left\{\sum_{i=1}^{n}\lambda_{i}\partial f_{i}(\overline{x})\;\Big|\;(\lambda_{1},\ldots,\lambda_{n})\in\partial g(h(\overline{x}))\right\} $$
under the condition that whenever \((\lambda _{1}, \ldots , \lambda _{n})\in \partial ^{\infty } g(h(\overline {x})), x^{*}_{i}\in \partial f_{i}(\overline {x}) \;~\text {for }i=1,\ldots ,n\) , the following implication holds
$$\left[\sum_{i=1}^{n}\lambda_{i}x^{*}_{i}=0\right]\Longrightarrow \left[\lambda_{i}=0\ \text{for all}\ i=1,\ldots,n\right]. $$
Proof
Define the set-valued mapping F : X ⇉ ℝn by
$$F(x):=[f_{1}(x),\infty)\times\cdots\times[f_{n}(x),\infty). $$
Then
$$\text{gph}\,F=\{(x,y)\in X\times\mathbb{R}^{n}\;|\;f_{i}(x)\leq y_{i}\ \text{for}\ i=1,\ldots,n\}. $$
The set gph F is convex since f i is convex for i = 1,…,n. Define \(\varphi : X\times \mathbb R^{n}\to (-\infty , \infty ]\) by φ(x, y) = g(y). Since g is nondecreasing componentwise, it is obvious that
$$(g\circ h)(x)=\inf\{\varphi(x,y)\;|\;y\in F(x)\}. $$
Define
$$\Omega_{i}:=\{(x, y_{1},\ldots,y_{n})\in X\times\mathbb{R}^{n}\;|\;f_{i}(x)\leq y_{i}\},\quad i=1,\ldots,n. $$
Then
$$\text{gph}\,F=\bigcap_{i=1}^{n}\Omega_{i}. $$
Using the Lipschitz continuity of f i for i = 1,…,n and the structure of the set Ω i , one can apply the intersection rule (see [3, Corollary 3.5]), to get that: \((x^{*}, -\lambda _{1}, \ldots , -\lambda _{n})\in N((\overline {x}, f_{1}(\overline {x}), \ldots , f_{n}(\overline {x})); \text {gph}\, F)\) if and only if
$$(x^{*}, -\lambda_{1},\ldots,-\lambda_{n})\in \sum_{i=1}^{n}N((\overline{x}, f_{1}(\overline{x}),\ldots,f_{n}(\overline{x}));\Omega_{i}). $$
If this is the case, then
$$x^{*}=\sum_{i=1}^{n}x^{*}_{i}, $$
where \((x^{*}_{i}, -\lambda _{i})\in N((\overline {x}, f_{i}(\overline {x})); \text {epi}\, f_{i})\). Using Theorem 2, one has that \(x^{*}\in \partial (g\circ h)(\overline {x})\) if and only if there exists \((\lambda _{1}, \ldots , \lambda _{n})\in \partial g(h(\overline {x}))\) such that
$$(x^{*}, -\lambda_{1},\ldots,-\lambda_{n})\in N((\overline{x}, f_{1}(\overline{x}),\ldots,f_{n}(\overline{x}));\text{gph}\,F). $$
This is equivalent to the fact that \(x^{*}=\sum _{i=1}^{n}x^{*}_{i}\), where \(x^{*}_{i}\in \lambda _{i}\partial f_{i}(\overline {x})\). In other words,
$$x^{*}\in\sum_{i=1}^{n}\lambda_{i}\partial f_{i}(\overline{x}), $$
where \((\lambda _{1}, \ldots , \lambda _{n})\in \partial g(h(\overline {x}))\) and \(x^{*}_{i}\in \partial f_{i}(\overline {x})\) for i = 1,…,n. The proof is now complete. □
Proposition 8
Let X and Y be Asplund spaces. Let φ : X × Y → (−∞, ∞] be an extended-real-valued function, and let K be a nonempty convex subset of Y. Define
$$\upmu(x):=\inf\{\varphi(x,y)\;|\;y\in K\} $$
and
$$S(x):=\{y\in K\;|\;\upmu(x)=\varphi(x,y)\}. $$
Suppose that for every x, φ(x,⋅) is bounded below on K. Let \(\overline {x}\in X\) with \(\upmu (\overline {x})<\infty \) and suppose that \(S(\overline {x})\neq \emptyset \) . Then for every \(\overline {y}\in S(\overline {x})\) , one has
$$\partial\upmu(\overline{x})=\left\{u\in X^{*}\;|\;(u,\mathit{v})\in\partial\varphi(\overline{x},\overline{y})\ \text{for some}\ \mathit{v}\ \text{with}\ -\mathit{v}\in N(\overline{y};K)\right\} $$
under the qualification condition
$$\left[(0,\mathit{v})\in\partial^{\infty}\varphi(\overline{x},\overline{y})\ \text{with}\ -\mathit{v}\in N(\overline{y};K)\right]\Longrightarrow \mathit{v}=0. $$
In particular, if K = Y, then this qualification condition holds automatically and
$$\partial\upmu(\overline{x})=\left\{u\in X^{*}\;|\;(u,0)\in\partial\varphi(\overline{x},\overline{y})\right\}. $$
Proof
The results follow directly from Example 1 and Theorem 2 applied to the constant mapping F(x) ≡ K. □
Proposition 9
Let X and Y be Asplund spaces. Let B : Y → X be an affine mapping with B(y) = A(y) + b, where A is a bounded linear mapping and b ∈ X. Let φ : Y → (−∞, ∞] be a convex function so that for every x ∈ X, φ is bounded below on B −1 (x). Define
$$\upmu(x):=\inf\{\varphi(y)\;|\;y\in B^{-1}(x)\}=\inf\{\varphi(y)\;|\;B(y)=x\} $$
and
$$S(x):=\{y\in B^{-1}(x)\;|\;\upmu(x)=\varphi(y)\}. $$
Fix \(\overline {x}\in X\) with \(\upmu (\overline {x})<\infty \) and \(S(\overline {x})\neq \emptyset \) . For any \(\overline {y}\in S(\overline {x})\) , one has
$$\partial\upmu(\overline{x})=\{x^{*}\in X^{*}\;|\;A^{*}x^{*}\in\partial\varphi(\overline{y})\}. $$
Proof
Let us apply Theorem 2 for F(x) = B −1(x) (preimage) and φ(x, y) ≡ φ(y). Then
$$D^{*}F(\overline{x},\overline{y})(y^{*})=\{x^{*}\in X^{*}\;|\;A^{*}x^{*}=y^{*}\}. $$
Thus,
$$\partial\upmu(\overline{x})=\bigcup\left\{D^{*}F(\overline{x},\overline{y})(\mathit{v})\;|\;\mathit{v}\in\partial\varphi(\overline{y})\right\}. $$
It follows that
$$\partial\upmu(\overline{x})=\{x^{*}\in X^{*}\;|\;A^{*}x^{*}\in\partial\varphi(\overline{y})\}. $$
It is not hard to verify that the qualification condition (5) is satisfied automatically in this case. □
Proposition 10
Let X be an Asplund space. Let f i : X → (−∞, ∞] for i = 1, 2 be convex functions. Define the infimal convolution of f 1 and f 2 by
$$(f_{1}\oplus f_{2})(x):=\inf\{f_{1}(x_{1})+f_{2}(x_{2})\;|\;x_{1}+x_{2}=x\}. $$
Suppose that (f 1 ⊕ f 2 )(x) > − ∞ for all x ∈ X and let \(\overline {x}\in ~\text {dom}\, f_{1}\oplus f_{2}\) . Fix \(\overline {x}_{1}, \overline {x}_{2}\in X\) such that \(\overline {x}=\overline {x}_{1}~+~\overline {x}_{2}\) and \((f_{1}\oplus f_{2})(\overline {x}) = f_{1}(\overline {x}_{1})~+~f_{2}(\overline {x}_{2})\) . Then
$$\partial(f_{1}\oplus f_{2})(\overline{x})=\partial f_{1}(\overline{x}_{1})\cap \partial f_{2}(\overline{x}_{2}). $$
Proof
Let us apply Proposition 9 for φ : X × X → (−∞, ∞] with φ(y 1, y 2) = f 1(y 1) + f 2(y 2) and A : X × X → X with A(y 1, y 2) = y 1 + y 2. Then A ∗(v) = (v, v) for any v ∈ X ∗ and \(\partial \varphi (\overline {x}_{1}, \overline {x}_{2}) = \partial f_{1}(\overline {x}_{1})\times \partial f_{2}(\overline {x}_{2})\). So \(\mathit {v}\in \partial (f_{1}\oplus f_{2})(\overline {x})\) if and only if \(A^{*}(\mathit {v}) = (\mathit {v},\mathit {v})\in \partial \varphi (\overline {x}_{1}, \overline {x}_{2})\), i.e., \(\mathit {v}\in \partial f_{1}(\overline {x}_{1})\cap \partial f_{2}(\overline {x}_{2}).\) □
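To illustrate Proposition 10 with a concrete pair (our example, not from the paper): for f 1 = |·| and f 2 = (·)²/2 on ℝ, the infimal convolution f 1 ⊕ f 2 is the classical Huber function, and the intersection formula returns its derivative. The Python sketch below checks the function values by brute force:

```python
# Sketch (not from the paper): (f1 ⊕ f2)(x) = inf_y { |y| + (x - y)^2 / 2 }
# equals the Huber function: x^2/2 for |x| <= 1 and |x| - 1/2 otherwise.
# With xbar2 = clip(x, -1, 1) and xbar1 = x - xbar2, Proposition 10 gives
# ∂(f1 ⊕ f2)(x) = ∂f1(xbar1) ∩ ∂f2(xbar2) = {clip(x, -1, 1)}.

def f1(y): return abs(y)
def f2(y): return 0.5 * y * y

def inf_conv(x, step=1e-3, span=10.0):
    """Brute-force the infimum over a fine grid of y values."""
    n = int(2 * span / step)
    return min(f1(y) + f2(x - y) for y in (-span + k * step for k in range(n + 1)))

def huber(x):
    return 0.5 * x * x if abs(x) <= 1 else abs(x) - 0.5

for x in (-3.0, -0.5, 0.0, 0.7, 2.0):
    assert abs(inf_conv(x) - huber(x)) < 1e-3
print("infimal convolution matches the Huber function")
```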
Remark 1
The optimal value function (2) covers many other convex operations. Thus, based on Theorem 2, it is possible to derive many other convex subdifferential calculus rules. Some examples are given below.
-
(i)
Let f 1, f 2 : X → (−∞, ∞] be convex functions. Define
$$\varphi(x,y) = f_{1}(x)+y, $$and F(x) = [f 2(x), ∞). For any x ∈ X, one has
$$f_{1}(x)+f_{2}(x) = \inf_{y\in F(x)}\varphi(x,y). $$ -
(ii)
Let B : X → Y be an affine function, and let f : Y → (−∞, ∞] be a convex function. Define F(x) = {B(x)} and φ(x, y) = f(y) for x ∈ X and y ∈ Y. Then
$$(f\circ B)(x) = \inf_{y\in F(x)}\varphi(x,y). $$ -
(iii)
Let f 1, f 2 : X → (−∞, ∞] be convex functions. Define F(x) = [f 1(x), ∞) × [f 2(x), ∞) and
$$\varphi(y_{1}, y_{2}) = \max\{y_{1}, y_{2}\}=\frac{|y_{1}-y_{2}|+y_{1}+y_{2}}{2}. $$Then
$$\max\{f_{1}, f_{2}\}(x) = \inf_{(y_{1}, y_{2})\in F(x)}\varphi(y_{1}, y_{2}). $$
4 Convexified Coderivative and Subdifferential Calculus
Throughout this section, we assume that all Banach spaces under consideration are reflexive. Under this assumption, the definition of sequential normal compactness can be rewritten using weak sequential convergence. A subset Ω ⊆ X is sequentially normally compact (or shortly SNC) at \(\overline {x}\in \Omega \) if and only if, for any sequences involved, we have the implication
$$\left[x_{k}\xrightarrow{\Omega}\overline{x},\;x^{*}_{k}\in N(x_{k};\Omega),\;x^{*}_{k}\xrightarrow{\mathit{w}}0\right]\Longrightarrow \left[\|x^{*}_{k}\|\to 0\ \text{as}\ k\to\infty\right]. $$
A subset Ω in the product space X × Y is said to be partially sequentially normally compact (PSNC) at \((\overline {x},\overline {y})\in \Omega \) (with respect to X) if and only if for any sequences (x k , y k ) ⊆ Ω and \(\{{(x^{*}_{k},y^{*}_{k})}\}\subseteq X^{*}\times Y^{*}\) such that \((x^{*}_{k},y^{*}_{k})\in \widehat N((x_{k},y_{k});\Omega )\), \(x^{*}_{k}\xrightarrow {\mathit {w}}0\) and \(y^{*}_{k}\xrightarrow {\|\cdot \|} 0\), we have \(x^{*}_{k}\xrightarrow {\|\cdot \|}0\).
Accordingly, a set-valued mapping G : X ⇉ Y is SNC (PSNC) at \((\overline {x},\overline {y})\in ~\text {gph}\, G\) if and only if its graph is SNC (PSNC) at this point, and an extended-real-valued function φ : X → (−∞, ∞] is sequentially normally epi-compact (SNEC) at \(\overline {x}\in ~\text {dom}\,\varphi \) if and only if its epigraph is SNC at \((\overline {x},\varphi (\overline {x}))\). These properties and their partial variants are comprehensively studied and applied in [3, 4].
For the purposes of this paper, we need the following modifications of the above properties.
Definition 1
-
(i)
A set Ω ⊆ X is sequentially convexly normally compact (SCNC) at \(\overline {x}\in \Omega \) if and only if we have the implication
$$ \left[x_{k}\stackrel{\Omega}{\to}\overline{x},\;x^{*}_{k}\stackrel{{w}}{\to}0,\;x^{*}_{k}\in\text{co} N(x_{k};\Omega)\right]\quad\Longrightarrow\quad \left[\|x^{*}_{k}\|\to 0\;\mbox{ as }\;k\to\infty\right] $$(7)for any sequences involved in Eq. (7). A mapping G : X ⇉ Y is SCNC at \((\overline {x},\overline {y})\in ~\text {gph}\, G\) if and only if its graph is SCNC at this point. A function φ : X → (−∞, ∞] is sequentially convexly epi-compact (SCNEC) at \(\overline {x}\in ~\text {dom}\,\varphi \) if and only if its epigraph is SCNC at \((\overline {x},\varphi (\overline {x}))\).
-
(ii)
A subset Ω of the product space X × Y is said to be partially sequentially convexly normally compact (PSCNC) at \((\overline {x},\overline {y})\in \Omega \) with respect to X if and only if for any sequences \((x_{k}, y_{k})\xrightarrow {\Omega } (\overline {x}, \overline {y})\), \(\{{(x^{*}_{k},y^{*}_{k})}\}\subseteq X^{*}\times Y^{*}\) with \((x^{*}_{k},y^{*}_{k})\in \text {co}N((x_{k},y_{k});\Omega )\), \(x^{*}_{k}\xrightarrow {\mathit {w}}0\) and \(y^{*}_{k}\xrightarrow {\|\cdot \|} 0\), we have \(x^{*}_{k}\xrightarrow {\|\cdot \|}0\). A mapping \(G: X\rightrightarrows Y\) is PSCNC at \((\overline {x},\overline {y})\in \text {gph}\, G\) if and only if its graph is PSCNC at this point.
It is easy to check that the SCNC property holds at every point of a convex set with nonempty interior. Let us extend this result to a broad class of nonconvex sets. Given Ω ⊆ X with \(\overline {x}\in \Omega \), recall from [6] that v ∈ X is a hypertangent to Ω at \(\overline {x}\) if for some δ > 0 we have
$$x+t\mathit{w}\in\Omega\quad\text{for all}\ x\in\Omega\cap\mathbb{B}(\overline{x};\delta),\ \mathit{w}\in\mathbb{B}(\mathit{v};\delta),\ t\in(0,\delta). $$
From the definition, we see that if Ω ⊆ X admits a hypertangent at \(\overline {x}\), then Ω is SCNC at this point. Moreover, if φ : X → (−∞, ∞] is locally Lipschitz around \(\overline {x}\in ~\text {dom}\, \varphi \), then it is SCNEC at this point; see [5] for more details.
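For instance (our illustration), a closed half-space admits hypertangents at every point, and is therefore SCNC everywhere. Let Ω = {x ∈ X | ⟨a, x⟩ ≤ 0} with a ≠ 0 and pick v with ⟨a, v⟩ < 0. Choosing δ > 0 so small that ⟨a, w⟩ < 0 for all w ∈ 𝔹(v; δ), we get for every x ∈ Ω ∩ 𝔹(x̄; δ) and t ∈ (0, δ):

```latex
\[
\langle a, x + t\mathit{w}\rangle
  = \langle a, x\rangle + t\,\langle a, \mathit{w}\rangle < 0,
\]
% hence x + tw \in \Omega, so v is a hypertangent to \Omega at \bar{x}.
```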
Lemma 1
Let X be a reflexive space and let Ω be a subset of X with \(\overline {x}\in \Omega \) . Then
$$N_{C}(\overline{x};\Omega)=\text{cl}\,\text{co}\,N(\overline{x};\Omega), $$
where cl stands for the norm closure.
Proof
By [3, Theorem 3.57], we have
$$N_{C}(\overline{x};\Omega)=\text{cl}^{*}\text{co}\,N(\overline{x};\Omega). $$
Since in reflexive spaces the weak ∗ topology and the weak topology on X ∗ coincide, we obtain by the celebrated Mazur theorem that
$$\text{cl}^{*}\text{co}\,N(\overline{x};\Omega)=\text{cl}\,\text{co}\,N(\overline{x};\Omega), $$
which completes the proof. □
Let us start with a new sum rule for Clarke coderivatives that allows us to obtain many calculus rules for Clarke subdifferentials and normal cones. Given \(F_{i}: X\rightrightarrows Y, i=1,2\), we define a multifunction \(S: X\times Y\rightrightarrows Y\times Y\) by
$$S(x,y):=\{(y_{1}, y_{2})\in Y\times Y\;|\;y_{1}\in F_{1}(x),\; y_{2}\in F_{2}(x),\; y_{1}+y_{2}=y\}. $$
S is said to be inner semicontinuous at \((\overline {x},\overline {y}, \overline {y}_{1},\overline {y}_{2})\in ~\text {gph}\, S\) if for any sequence {(x k , y k )} converging to \((\overline {x},\overline {y})\) with S(x k , y k ) ≠ ∅ for each \(k\in \mathbb {N}\), there exists (y 1k , y 2k ) ∈ S(x k , y k ) such that {(y 1k , y 2k )} contains a subsequence converging to \((\overline {y}_{1},\overline {y}_{2})\).
In the theorem below, we show that under the inner semicontinuity of S, the convexified coderivative of a sum of two set-valued mappings can be represented in terms of the coderivative of each set-valued mapping. However, this does not hold true under the so-called inner semicompactness; see [3, Definition 1.63]. We use the fact that every bounded sequence in a reflexive Banach space has a subsequence that is weakly convergent, and that every closed convex set is weakly closed.
Theorem 3
Let \(F_{i}: X\rightrightarrows Y\) for i=1,2 be closed-graph mappings and \((\overline {x},\overline {y})\in ~\text {gph}\, (F_{1}+F_{2})\) . Fix \((\overline {y}_{1},\overline {y}_{2}) \in S(\overline {x},\overline {y})\) such that S is inner semicontinuous at \((\overline {x},\overline {y}, \overline {y}_{1},\overline {y}_{2})\) . Assume that either F 1 is PSCNC at \((\overline {x},\overline {y}_{1})\) or F 2 is PSCNC at \((\overline {x},\overline {y}_{2})\) , and that {F 1 ,F 2 } satisfies the qualification condition
$$D^{*}_{C}F_{1}(\overline{x},\overline{y}_{1})(0)\cap\left(-D^{*}_{C}F_{2}(\overline{x},\overline{y}_{2})(0)\right)=\{0\}. $$
Then
$$D^{*}_{C}(F_{1}+F_{2})(\overline{x},\overline{y})(y^{*})\subseteq D^{*}_{C}F_{1}(\overline{x},\overline{y}_{1})(y^{*})+D^{*}_{C}F_{2}(\overline{x},\overline{y}_{2})(y^{*})\quad\text{for all}\ y^{*}\in Y^{*}. $$
Proof
Define
$$\Omega_{1}:=\{(x,y_{1},y_{2})\in X\times Y\times Y\;|\;y_{1}\in F_{1}(x)\}\quad\text{and}\quad \Omega_{2}:=\{(x,y_{1},y_{2})\in X\times Y\times Y\;|\;y_{2}\in F_{2}(x)\}. $$
Following the proof of [3, Theorem 3.10], one has
$$(x^{*},-y^{*})\in N((\overline{x},\overline{y});\text{gph}\,(F_{1}+F_{2}))\Longrightarrow (x^{*},-y^{*},-y^{*})\in N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1}\cap\Omega_{2}). $$
It follows that
$$(x^{*},-y^{*})\in \text{co}\,N((\overline{x},\overline{y});\text{gph}\,(F_{1}+F_{2}))\Longrightarrow (x^{*},-y^{*},-y^{*})\in \text{co}\,N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1}\cap\Omega_{2}). $$
Under the assumptions made, one can apply [3, Theorem 3.4] to obtain the following inclusion:
$$N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1}\cap\Omega_{2})\subseteq N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1})+N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{2}). $$
This implies
$$\text{co}\,N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1}\cap\Omega_{2})\subseteq \text{co}\,N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1})+\text{co}\,N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{2}). $$
By the definition of Ω1 and Ω2,
$$N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1})=\left\{(x^{*},y^{*}_{1},0)\;|\;(x^{*},y^{*}_{1})\in N((\overline{x},\overline{y}_{1});\text{gph}\,F_{1})\right\} $$
and
$$N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{2})=\left\{(x^{*},0,y^{*}_{2})\;|\;(x^{*},y^{*}_{2})\in N((\overline{x},\overline{y}_{2});\text{gph}\,F_{2})\right\}. $$
Now fix any \(x^{*} \in D^{*}_{C}(F_{1}+F_{2})(\overline {x},\overline {y})(y^{*})\). Then
$$(x^{*},-y^{*})\in N_{C}((\overline{x},\overline{y});\text{gph}\,(F_{1}+F_{2}))=\text{cl}\,\text{co}\,N((\overline{x},\overline{y});\text{gph}\,(F_{1}+F_{2})). $$
Let us show that
$$x^{*}\in D^{*}_{C}F_{1}(\overline{x},\overline{y}_{1})(y^{*})+D^{*}_{C}F_{2}(\overline{x},\overline{y}_{2})(y^{*}). $$
Using Lemma 1, there exists a sequence \(\{{(x^{*}_{k},-y^{*}_{k})}\}\) in \(\text {co} N((\overline {x},\overline {y});\text {gph}\,(F_{1}+F_{2}))\) such that \((x^{*}_{k},-y^{*}_{k}) \to (x^{*},-y^{*})\), and hence \((x^{*}_{k},-y^{*}_{k},-y^{*}_{k}) \to (x^{*},-y^{*},-y^{*})\).
Because \((x^{*}_{k},-y^{*}_{k},-y^{*}_{k}) \in \text {co} N((\overline {x},\overline {y}_{1},\overline {y}_{2});\Omega _{1}\cap \Omega _{2})\), using the above results, there exist sequences \(\{{(x^{*}_{1k},-y^{*}_{k},0)}\}\) in \(\text {co}N((\overline {x},\overline {y}_{1},\overline {y}_{2});\Omega _{1})\) and \(\{{(x^{*}_{2k},0,-y^{*}_{k})}\}\) in \(\text {co} N((\overline {x},\allowbreak \overline {y}_{1},\overline {y}_{2});\Omega _{2})\) such that
$$x^{*}_{1k}+x^{*}_{2k}=x^{*}_{k}\quad\text{for every}\ k\in\mathbb{N}. $$
Without loss of generality, we assume that F 1 is PSCNC at \((\overline {x},\overline {y}_{1})\).
We assume by contradiction that \(\{{x^{*}_{1k}}\}\) is not bounded, so we can extract a subsequence, without relabeling, such that \(\|x^{*}_{1k}\|\rightarrow \infty \). Then
$$\left(\frac{x^{*}_{1k}}{\|x^{*}_{1k}\|}, \frac{-y^{*}_{k}}{\|x^{*}_{1k}\|}, 0\right)\in \text{co}\,N((\overline{x},\overline{y}_{1},\overline{y}_{2});\Omega_{1}),\quad\text{with}\quad \frac{-y^{*}_{k}}{\|x^{*}_{1k}\|}\xrightarrow{\|\cdot\|}0. $$
The bounded sequence \(\{z^{*}_{k}\}\) defined by \(z^{*}_{k}:=\frac {x^{*}_{1k}}{\|x^{*}_{1k}\|}\) has a weakly convergent subsequence, say, \(z^{*}_{k}\xrightarrow {w} z^{*}\). Then \((z^{*},0,0) \in N_{C}((\overline {x},\overline {y}_{1},\overline {y}_{2});\Omega _{1})\), and it is clear that \((z^{*},0,0)\in (-N_{C}((\overline {x},\overline {y}_{1},\overline {y}_{2});\Omega _{2}))\), which implies \((z^{*},0)\in N_{C}((\overline {x},\overline {y}_{1});\text{gph}\,F_{1}) \bigcap (-N_{C}((\overline {x},\overline {y}_{2});\text{gph}\,F_{2}))\). It follows that
$$z^{*}\in D^{*}_{C}F_{1}(\overline{x},\overline{y}_{1})(0)\cap\left(-D^{*}_{C}F_{2}(\overline{x},\overline{y}_{2})(0)\right)=\{0\}, $$
so z ∗ = 0. Because F 1 is PSCNC at \((\overline {x},\overline {y}_{1})\), one has that \(\|z_{k}^{*}\|\rightarrow 0\). This is a contradiction since \(\|{z^{*}_{k}}\|=1\) for every \(k \in \mathbb {N}\).
Thus, \(\{{x^{*}_{1k}}\}\) is bounded, so we can extract weakly convergent subsequences. Suppose that \(x^{*}_{1k}\xrightarrow {\mathit {w}}x^{*}_{1}\) and \(x^{*}_{2k}\xrightarrow {\mathit {w}}x^{*}_{2}\).
Using the fact that \(N_{C}((\overline {x},\overline {y}_{i}); ~\text {gph}\,F_{i}) = {\text {cl}}\,\text {co}\, N((\overline {x},\overline {y}_{i}); ~\text {gph}\,F_{i})\), one obtains
$$(x^{*}_{i},-y^{*})\in N_{C}((\overline{x},\overline{y}_{i});\text{gph}\,F_{i})\quad\text{for}\ i=1,2. $$
By definition, \(x_{i}^{*} \in D^{*}_{C}F_{i}(\overline {x},\overline {y}_{i})(y^{*})\) for i = 1,2. So we have
$$x^{*}=x^{*}_{1}+x^{*}_{2}\in D^{*}_{C}F_{1}(\overline{x},\overline{y}_{1})(y^{*})+D^{*}_{C}F_{2}(\overline{x},\overline{y}_{2})(y^{*}). $$
The theorem has been proved. □
The PSCNC property holds automatically in finite dimensions, so we obtain the following sum rule for Clarke coderivatives in finite dimensions.
Corollary 1
Let \(F_{i}: \mathbb R^{m}\rightrightarrows \mathbb R^{n}\), i = 1,2, be closed-graph mappings with \((\overline {x},\overline {y})\in ~\text {gph}\, (F_{1}+F_{2})\) . Assume that S is inner semicontinuous at \((\overline {x},\overline {y}, \overline {y}_{1},\overline {y}_{2})\) and that \(\{F_{1},F_{2}\}\) satisfies the qualification condition
Then
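To make the statement concrete in the smooth case: for a single-valued, continuously differentiable mapping the Clarke coderivative reduces to the adjoint Jacobian, \(D^{*}_{C}F(\overline {x})(y^{*}) = \{\nabla F(\overline {x})^{*}y^{*}\}\), so the sum rule then holds with equality by linearity of the Jacobian. A minimal numerical sketch in Python (the test mappings and the finite-difference helper are our own illustration, not from the paper):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    # forward-difference Jacobian of f: R^m -> R^n at x
    fx = np.asarray(f(x), dtype=float)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / eps
    return J

def coderivative(f, x, y_star):
    # for a smooth single-valued f, D*_C f(x)(y*) = {J_f(x)^T y*}
    return jacobian(f, x).T @ y_star

# two illustrative smooth mappings R^2 -> R^2 (our own choice)
f1 = lambda x: np.array([x[0] ** 2 + x[1], x[0] * x[1]])
f2 = lambda x: np.array([np.sin(x[0]), x[1] ** 2])

x_bar = np.array([0.5, -1.0])
y_star = np.array([1.0, 2.0])

lhs = coderivative(lambda z: f1(z) + f2(z), x_bar, y_star)
rhs = coderivative(f1, x_bar, y_star) + coderivative(f2, x_bar, y_star)
print(np.allclose(lhs, rhs, atol=1e-4))  # True: equality in the smooth case
```

In the genuinely set-valued or nonsmooth case the coderivative is no longer a singleton and only the inclusion of Corollary 1 survives.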
Next, we consider
where F : X ⇉ Y and \(\Delta (x;\Omega ) := \{0\}\subseteq Y\) if x ∈ Ω and \(\Delta (x;\Omega ) := \emptyset \) otherwise.
Proposition 11
Let Ω and gph F be closed with \(\overline {x}\in \Omega \) and \((\overline {x},\overline {y})\in ~\text {gph}\, F\) such that either F is PCSNC at \((\overline {x},\overline {y})\) or Ω is SCNC at \(\overline {x}\) . Assume that
Then
Proof
Let us apply Theorem 3 with \(F_{1} = F\) and \(F_{2} = \Delta (\cdot ;\Omega )\). It is not hard to see that for any (x, y) ∈ gph (F+Δ(⋅;Ω)), we have S(x, y) = {(y,0)}, which implies that S is inner semicontinuous at \((\overline {x},\overline {y},\overline {y},0)\). We also see that gph Δ(⋅;Ω) = Ω×{0}, so \(F_{2}\) is PCSNC at \((\overline {x},0)\) under the assumption that Ω is SCNC at \(\overline {x}\). We have \(N((\overline {x},0); \text {gph}\, F_{2}) = N((\overline {x},0); \Omega \times \{0\}) = N(\overline {x}; \Omega ) \times Y^{*}\) and, similarly, \(N_{C}((\overline {x},0); \text {gph}\, F_{2}) = N_{C}((\overline {x},0); \Omega \times \{0\}) = N_{C}(\overline {x};\Omega ) \times Y^{*}\). Therefore, \(D^{*}_{C}F_{2}(\overline {x},0)(y^{*}) = N_{C}(\overline {x};\Omega )\) for every \(y^{*}\in Y^{*}\). The rest follows directly from Theorem 3. □
Corollary 2
Let Ω 1 and Ω 2 be two closed subsets of X and \(\overline {x}\in \Omega _{1}\cap \Omega _{2}\) . Assume that Ω 1 or Ω 2 is SCNC at \(\overline {x}\) and the qualification condition
is satisfied. Then
Proof
This is a special case of Proposition 11 with \(F \equiv \Delta (\cdot ; \Omega _{1})\) and Ω = Ω2. □
Let us show that the qualification condition in Eq. (12) is weaker than a similar condition involving the Clarke tangent cone; see [1]. Let Ω be a nonempty closed subset of a Banach space X. For \(\overline {x}\in \Omega \), the Clarke tangent cone \(T(\overline {x}; \Omega )\) consists of all v ∈ X such that, whenever \(t_{k}\downarrow 0\) and \(x_{k}\xrightarrow {\Omega }\overline {x}\), there exists a sequence \(w_{k}\to v\) with \(x_{k}+t_{k}w_{k}\in \Omega \) for all k.
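As a concrete illustration (a standard example, not taken from this paper), consider \(\Omega = \{(x,y)\in \mathbb {R}^{2}\;|\; y\ge -|x|\}\) at \(\overline {x}=(0,0)\). The contingent cone at the origin is Ω itself, but the Clarke tangent cone is convex and strictly smaller:

```latex
\[
T\big((0,0);\Omega\big) \;=\; \{(v,w)\in\mathbb{R}^{2} \;:\; w \ge |v|\}.
\]
```

Indeed, if \(w\ge |v|\), then for every \((x,y)\in \Omega \) and t > 0 one has \(y+tw \ge -|x|+t|v| \ge -|x+tv|\) by the triangle inequality, so \((x,y)+t(v,w)\in \Omega \); on the other hand, a boundary direction such as (1,−1) fails the definition when \(x_{k}\to (0,0)\) along the opposite boundary ray.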
Proposition 12
Let Ω 1 and Ω 2 be closed subsets of X and \(\overline {x}\in \Omega _{1}\cap \Omega _{2}\) . Suppose that
Then
Proof
Fix any \(x^{*}\in N_{C}(\overline {x};\Omega _{1})\cap (-N_{C}(\overline {x};\Omega _{2}))\). By the assumption, choose \(\mathit {v}\in T(\overline {x};\Omega _{1})\) and δ > 0 such that \(\mathit {v}+2\delta \mathbb {B}\subseteq T(\overline {x};\Omega _{2})\). Then we have \(\langle x^{*},v\rangle \le 0\), and \(\langle -x^{*},v+\delta e\rangle \le 0\) for any \(e\in \mathbb {B}\). It follows that
\[
\delta \langle x^{*},-e\rangle \le \langle x^{*},v\rangle
\]
for any \(e\in \mathbb B\), and hence \(\delta \|x^{*}\| \le \langle x^{*},v\rangle \le 0\). Therefore \(x^{*} = 0\). □
Remark 2
Note that the converse of the above proposition does not hold in general. For example, in \(\mathbb {R}^{3}\), we consider \(\overline {x}=(0,0,0)\), \(\Omega _{1}=\{{(0,0,z)\;|\;z \in \mathbb {R}\,}\}\) and \(\Omega _{2}=\{{(x,y,0)\;|\;x,y\in \mathbb {R}\,}\}\). We have \(N_{C}(\overline {x};\Omega _{1}) = \{{(x,y,0)\;|\;x,y\in \mathbb {R}\,}\}\) and \(N_{C}(\overline {x};\Omega _{2}) = \{{(0, 0, z)\;|\;z \in \mathbb {R}\,}\}\). Thus,
but \(T(\overline {x};\Omega _{1})\cap ~\text {int}\, T(\overline {x};\Omega _{2}) = \emptyset \) since \(~\text {int}\, T(\overline {x};\Omega _{2}) = ~\text {int}\, T(\overline {x};\Omega _{1}) = \emptyset \). Therefore, we obtain a stronger finite-dimensional version of Corollary 2.9.8 in [1].
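Since both sets in this example are linear subspaces, the Clarke normal cone at the origin reduces to the orthogonal complement, so the qualification condition of Corollary 2 can be verified with elementary linear algebra. A minimal numerical sketch in Python (numpy; the helper names are our own):

```python
import numpy as np

def orth_complement(basis):
    # rows of `basis` span a subspace of R^n; return an orthonormal
    # basis (as rows) of its orthogonal complement via the SVD null space
    A = np.atleast_2d(basis).astype(float)
    _, s, vt = np.linalg.svd(A)
    rank = int((s > 1e-12).sum())
    return vt[rank:]

def subspace_intersection_dim(U, V):
    # dim(U ∩ V) = dim U + dim V - dim(U + V), subspaces given by rows
    dim_sum = np.linalg.matrix_rank(np.vstack([U, V]))
    return U.shape[0] + V.shape[0] - dim_sum

# Remark 2: Omega1 = z-axis, Omega2 = xy-plane in R^3
omega1 = np.array([[0.0, 0.0, 1.0]])
omega2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

N1 = orth_complement(omega1)   # N_C(0; Omega1) = xy-plane
N2 = orth_complement(omega2)   # N_C(0; Omega2) = z-axis (= -N2, a subspace)

print(subspace_intersection_dim(N1, N2))  # 0: the normal cones meet only at 0
```
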
Definition 2
We say that an extended-real-valued function φ : X → (−∞, ∞] is lower regular at \(\overline {x}\in \text {dom}\, \varphi \) if \(\hat \partial \varphi (\overline {x}) = \partial _{C}\varphi (\overline {x})\), where \(\hat \partial \varphi (\overline {x})\) is the Fréchet subdifferential of φ at \(\overline {x}\) defined by
It is clear that any convex function is lower regular.
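For instance (a standard one-dimensional computation, not taken from this paper), let φ(x) = |x| on \(\mathbb {R}\). Since φ is convex, the Fréchet subdifferential at 0 coincides with the subdifferential of convex analysis, and one checks directly that

```latex
\[
\hat\partial\varphi(0) \;=\; \{\,x^{*}\in\mathbb{R} \;:\; x^{*}x \le |x| \ \text{for all } x\in\mathbb{R}\,\} \;=\; [-1,1] \;=\; \partial_{C}\varphi(0),
\]
```

so |⋅| is lower regular at \(\overline {x}=0\).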
Now we apply the coderivative calculus results of Theorem 3 to obtain calculus rules for the Clarke subdifferential in reflexive Banach spaces.
Theorem 4
Let \(\varphi _{i}: X\to (-\infty ,\infty ]\) for i = 1,…,m be l.s.c. around \(\overline {x}\) and finite at this point. Assume that all (except possibly one) of the \(\varphi _{i}\), i = 1,…,m, are SCNEC at \((\overline {x},\varphi _{i}(\overline {x}))\) and
Then
and
The equality in Eq. ( 16 ) holds if all \(\varphi _{i}\) for i = 1,…,m are lower regular at \(\overline {x}\).
Proof
We first consider the case where m = 2. Let us consider \(F_{1}(x) = [\varphi _{1}(x),\infty )\) and \(F_{2}(x) = [\varphi _{2}(x),\infty )\). Obviously, \(\text {gph}\, F_{i} = \text {epi}\,\varphi _{i}\) for i = 1,2 and \(\text {gph}\,(F_{1}+F_{2}) = \text {epi}\,(\varphi _{1}+\varphi _{2})\). At the point \((\overline {x},\overline {y})\) where \(\overline {y}=\varphi _{1}(\overline {x})+\varphi _{2}(\overline {x})\), we have \(S(\overline {x},\overline {y}) = \{(\varphi _{1}(\overline {x}),\varphi _{2}(\overline {x}))\}\), and S is inner semicontinuous at \((\overline {x},\overline {y},\varphi _{1}(\overline {x}),\varphi _{2}(\overline {x}))\). Indeed, take any sequence \(\{(x_{k},y_{k})\}\) converging to \((\overline {x},\overline {y})\) with \(S(x_{k},y_{k})\neq \emptyset \), and fix \((\lambda _{1k},\lambda _{2k})\in S(x_{k},y_{k})\). Then
Since \(\varphi _{1}\) and \(\varphi _{2}\) are lower semicontinuous,
We can see that \(\{\lambda _{1k}\}\) and \(\{\lambda _{2k}\}\) are bounded sequences. Let \(\lambda _{1}:=\liminf _{k\to \infty } \lambda _{1k}\) and \(\lambda _{2}:=\liminf _{k\to \infty } \lambda _{2k}\). Since \(\lambda _{1}\geq \varphi _{1}(\overline {x})\), \(\lambda _{2}\geq \varphi _{2}(\overline {x})\), and \(\lambda _{1}+\lambda _{2}\leq \lim _{k\to \infty }(\lambda _{1k}+\lambda _{2k})=\overline {y}=\varphi _{1}(\overline {x})+\varphi _{2}(\overline {x})\), we see that \(\lambda _{1}=\varphi _{1}(\overline {x})\) and \(\lambda _{2}=\varphi _{2}(\overline {x})\). Applying the same argument to arbitrary subsequences shows that the full sequences converge to these limits, so S is inner semicontinuous at \((\overline {x},\overline {y},\varphi _{1}(\overline {x}),\varphi _{2}(\overline {x}))\).
Because \(D^{*}_{C} F_{i}(\overline {x},\varphi _{i}(\overline {x}))(0) = \partial _{C}^{\infty }\varphi _{i}(\overline {x})\), the qualification condition (10) holds. Applying Theorem 3, we have
and
which imply Eqs. (16) and (17) for m = 2.
Let us prove that Eq. (16) holds as an equality under the regularity assumptions. Assume that \(\varphi _{i}\) for i = 1,2 are lower regular at \(\overline {x}\). Then
which implies the equality. The proof for m > 2 follows easily by induction. □
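To see that the inclusion in Eq. (16) can be strict when lower regularity fails (a standard one-dimensional example, not taken from this paper), take \(\varphi _{1}(x) = |x|\) and \(\varphi _{2}(x) = -|x|\) on \(\mathbb {R}\). Then \(\varphi _{1}+\varphi _{2}\equiv 0\) and

```latex
\[
\partial_{C}(\varphi_{1}+\varphi_{2})(0) \;=\; \{0\}
\;\subsetneq\;
[-1,1] + [-1,1] \;=\; \partial_{C}\varphi_{1}(0) + \partial_{C}\varphi_{2}(0).
\]
```

Here \(\varphi _{2}\) is not lower regular at 0, since \(\hat \partial \varphi _{2}(0)=\emptyset \) while \(\partial _{C}\varphi _{2}(0)=[-1,1]\).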
Proposition 13
Let \(\varphi _{i}: X\to \mathbb R\) satisfy
Then
Proof
Fix any \(x^{*}\in \partial _{C}^{\infty } \varphi _{1}(\overline {x})\cap (-\partial _{C}^{\infty } \varphi _{2}(\overline {x}))\). Then \((x^{*}, 0)\in N_{C}((\overline {x},\varphi _{1}(\overline {x}));\text {epi}\, \varphi _{1})\) and \((-x^{*}, 0)\in N_{C}((\overline {x},\varphi _{2}(\overline {x}));\text {epi}\, \varphi _{2})\). Fix an element v ∈ X such that \(\varphi _{1}^{\circ }(\overline {x};\mathit {v})<\infty \) and \(\mathit {v}\in \text {int}\,\{ \mathit {v}_{1}\in X\; |\; {\varphi _{2}}^{\circ }(\overline {x};\mathit {v}_{1})<\infty \}\), which is nonempty by assumption. Let us show that \(x^{*} = 0\). From the proof of [1, Theorem 2.9.5], one finds \(\beta \in \mathbb {R}\,\) such that \((\mathit {v},\beta )\in \text {int}\, T((\overline {x}, \varphi _{2}(\overline {x})); \text {epi}\, \varphi _{2})\). From [1, Theorem 2.9.1], if \(\gamma \in \mathbb {R}\,\) is fixed such that \(\gamma > \varphi _{1}^{\circ } (\overline {x};\mathit {v})\), then \((\mathit {v}, \gamma )\in T((\overline {x}, \varphi _{1}(\overline {x}));\text {epi}\, \varphi _{1})\). Thus, there exists δ > 0 with
Then
whenever \(\|(e_{1}, e_{2})\| \le 1\). It follows that \(\langle x^{*},(\delta /2)e_{1}\rangle \le 0\) whenever \(\|e_{1}\|\le 1\), which implies \(x^{*} = 0\). □
The proposition below shows that the SCNEC condition for extended-real-valued functions holds under the directional Lipschitz condition; see [1, Definition 2.9.2].
Proposition 14
Assume that φ:X→ (−∞, ∞] is directionally Lipschitz at \(\overline {x}\in ~\text {dom}\, \varphi \) . Then φ is SCNEC at \((\overline {x}, \varphi (\overline {x}))\).
Proof
Assume that φ is directionally Lipschitz at \(\overline {x}\). Then by [1, Proposition 2.9.3], there exist v ∈ X and \(\beta \in \mathbb R\) such that (v, β) is a hypertangent to epi φ at \((\overline {x},\varphi (\overline {x}))\). Therefore, epi φ is SCNC at this point. □
Theorem 5
Let G : X ⇉ Y, F : Y ⇉ Z be closed-graph mappings with \(\overline {z}\in (F\circ G)(\overline {x})\) , and
Given \(\overline {y}\in S(\overline {x},\overline {z})\) , assume that S is inner semicontinuous at \((\overline {x},\overline {z},\overline {y})\) , that either F is PCSNC at \((\overline {y},\overline {z})\) or G is PCSNC at \((\overline {x},\overline {y})\) , and that the qualification condition
is fulfilled. Then
for any \(z^{*} \in Z^{*}\).
Proof
Consider the set-valued mapping Φ : X × Y ⇉ Z defined as follows:
Using [3, Theorem 1.64], because S is inner semicontinuous at \((\overline {x},\overline {z},\overline {y})\), we have
Thus,
Therefore,
It follows from Proposition 11 that
Take \(x^{*} \in D^{*}_{C}(F\circ G)(\overline {x},\overline {z})(z^{*})\). Then \((x^{*},0) \in D^{*}_{C}\Phi (\overline {x},\overline {y},\overline {z})(z^{*})\). Since F depends only on the variable y, there exists y ∗ ∈ Y ∗ such that \((y^{*},-z^{*}) \in N_{C}((\overline {y},\overline {z}); \text {gph}\, F)\) and \((x^{*},-y^{*}) \in N_{C}((\overline {x},\overline {y});\text {gph}\, G)\). Then \(y^{*} \in D^{*}_{C}F(\overline {y},\overline {z})(z^{*})\) and \(x^{*} \in D^{*}_{C}G(\overline {x},\overline {y})(y^{*})\). Therefore, \(x^{*} \in D_{C}^{*} G(\overline {x} ,\overline {y} ) \circ D_{C}^{*}F(\overline {y},\overline {z} )(z^{*} )\). The theorem has been proved. □
Corollary 3
Let F : X ⇉ Y be a closed-graph mapping, and let Ω ⊆ Y be a closed set. For \((\overline {x},\overline {y}) \in ~\text {gph}\, F\) and \(\overline {y} \in \Omega \) , define
Assume that S(x) := F(x) ∩ Ω is inner semicontinuous at \((\overline {x},\overline {y})\) , and that either F is PCSNC at \((\overline {x},\overline {y})\) or Ω is SCNC at \(\overline {y}\) . Under the qualification condition
one has
Proof
This follows from Theorem 5 applied to \(G_{1}: X\rightrightarrows Y\) and \(F_{1}: Y \rightrightarrows \mathbb {R}\), where G 1 = F and F 1(⋅) = Δ(⋅;Ω). Then gph F 1 = Ω × {0}. Obviously, if Ω is SCNC at \(\overline {y}\), then \(F_{1}\) is PCSNC at \((\overline {y},0)\). Applying Theorem 5, we obtain the result. □
Theorem 6
Let f := g ∘ F, where F : X → Y is a strictly differentiable mapping and g : Y → (−∞, ∞] is an extended-real-valued function. Assume that g is l.s.c. around \(F(\overline {x})\) , and that
Then
and
The first inclusion holds as an equality if g is lower regular at \(F(\overline {x})\).
Proof
Let us consider \(E_{g}(x) = [g(x), \infty )\). Obviously,
Observe that \(D^{*}_{C}F(\overline {x})(y^{*}) = \{\nabla F(\overline {x})^{*}y^{*}\}\) for every \(y^{*}\in Y^{*}\). The result then follows directly from Theorem 5 with \(z^{*} = 1\) and \(z^{*} = 0\). □
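For a quick numerical illustration of Theorem 6 (our own example, not from the paper: F(x) = x₁ − x₂² with g = |⋅|, at a point where F vanishes), the predicted Clarke subdifferential is \(\nabla F(\overline {x})^{*}[-1,1]\), so the Clarke generalized directional derivative of f = g ∘ F should equal \(|\nabla F(\overline {x})\cdot v|\). A Monte-Carlo sketch in Python:

```python
import numpy as np

def clarke_dirderiv(f, x, v, t=1e-5, radius=1e-5, samples=2000, seed=0):
    # Monte-Carlo estimate of the Clarke generalized directional derivative
    # f°(x; v) = limsup_{y -> x, t ↓ 0} (f(y + t v) - f(y)) / t
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(samples):
        y = x + radius * rng.standard_normal(x.size)
        best = max(best, (f(y + t * v) - f(y)) / t)
    return best

# f = g ∘ F with g = |.| and smooth F(x) = x0 - x1^2; F(x_bar) = 0, so
# Theorem 6 predicts ∂_C f(x_bar) = ∇F(x_bar)^T [-1, 1], and hence
# f°(x_bar; v) = |∇F(x_bar) · v|
F = lambda x: x[0] - x[1] ** 2
f = lambda x: abs(F(x))
x_bar = np.array([1.0, 1.0])      # F(x_bar) = 0
grad_F = np.array([1.0, -2.0])    # ∇F(x_bar)

v = np.array([0.3, 0.7])
predicted = abs(grad_F @ v)       # = 1.1
estimate = clarke_dirderiv(f, x_bar, v)
print(abs(estimate - predicted) < 1e-2)  # True: estimate matches the chain rule
```

The sampling radius and step size trade bias against noise; they are illustrative choices, not part of the theory.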
References
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms. I. Fundamentals. Springer, Berlin (1993)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory. Springer, Berlin (2006)
Mordukhovich, B.S., Nam, N.M.: An Easy Path to Convex Analysis and Applications. Morgan & Claypool Publishers (2014)
Mordukhovich, B.S., Nam, N.M., Phan, H.: Variational analysis of marginal functions with applications to bilevel programming. J. Optim. Theory Appl. 152, 557–586 (2012)
Rockafellar, R.T.: Directionally Lipschitzian functions and subdifferential calculus. Proc. Lond. Math. Soc. 39, 331–355 (1979)
Dedicated to Professor Boris Mordukhovich on the occasion of his 65th birthday
The research of Nguyen Mau Nam was partially supported by the Simons Foundation under grant #208785.
Nam, N.M., Hoang, N.D. & Rector, R.B. A Unified Approach to Convex and Convexified Generalized Differentiation of Nonsmooth Functions and Set-Valued Mappings. Vietnam J. Math. 42, 479–497 (2014). https://doi.org/10.1007/s10013-014-0073-3