Abstract
We introduce a robust optimization model consisting of a family of perturbation functions giving rise to certain pairs of dual optimization problems in which the dual variable depends on the uncertainty parameter. The interest of our approach is illustrated by some examples, including uncertain conic optimization and infinite optimization via discretization. The main results characterize desirable robust duality relations (such as robust zero duality gap) via formulas involving the epsilon-minima or the epsilon-subdifferentials of the objective function. The two extreme cases, namely, the usual perturbational duality (without uncertainty) and the duality for the supremum of functions (vanishing duality parameter), are analyzed in detail.
1 Introduction
Duality theory was one of Jonathan Borwein’s favorite research topics. Indeed, 14 of his papers include the term “duality” in their titles. The present article, dedicated to Jon’s vast contribution to the subject, will refer only to four works of his, all of these related to optimization problems posed in locally convex Hausdorff topological vector spaces.
Duality theorems were provided in [3] for the minimum of arbitrary families of convex programs; the quasi-relative interior constraint qualification was introduced in [6] in order to obtain duality theorems for various optimization problems where the standard Slater condition fails; the same CQ was immediately used, in [5], to obtain duality theorems for convex optimization problems with constraints given by linear operators having finite-dimensional range together with a conical convex constraint; finally, quite recently, in [4], duality theorems for the minimization of the finite sum of convex functions were established, using conditions which involve the \(\varepsilon \)-subdifferential of the given functions.
In this paper, we consider a family of perturbation functions
and where X and \(Y_{u},\) \(u\in U,\) are given locally convex Hausdorff topological vector spaces (briefly, lcHtvs), the index set U is called the uncertainty set of the family, X is its decision space, and each \(Y_{u}\) is a parameter space. Note that our model includes a parameter space \(Y_{u}\) depending on \(u\in U,\) which is a novelty with respect to the “classical” robust duality scheme (see [21] and the references therein, where a single parameter space Y is considered), allowing us to cover a wider range of applications, including uncertain optimization problems under linear perturbations of the objective function. The significance of our approach is illustrated throughout the paper by relevant cases drawn from deterministic optimization with linear perturbations, uncertain optimization without perturbations, uncertain conic optimization, and infinite optimization. The antecedents of the paper are described in the paragraphs devoted to the first two cases in Section 2.
We associate with each family \(\left\{ F_{u}:u\in U\right\} \) of perturbation functions corresponding optimization problems whose definitions involve continuous linear functionals on the decision and the parameter spaces. We denote by \(0_{X},\) \({0}_{_{X}}^{*},\) \(0_{u},\) and \(0_{u}^{*},\) the null vectors of X, its topological dual \(X^{*},\) \(Y_{u},\) and its topological dual \(Y_{u}^{*},\) respectively. The optimal value of a minimization (maximization, respectively) problem \(\mathrm {(}\)P\(\mathrm {)}\) is denoted by \(\inf \mathrm {(}\)P\(\mathrm {)}\) (\(\sup \mathrm {(}\)P\(\mathrm {)}\)); in particular, we write \(\min \mathrm {(}\)P\(\mathrm {)}\) (\(\max \mathrm {(}\)P\(\mathrm {)}\)) whenever the optimal value of \(\mathrm {(}\)P\(\mathrm {)}\) is attained. We adopt the usual convention that \(\inf \mathrm {(P)=+\infty }\) (\(\sup \mathrm {(}\)P\(\mathrm {)=-\infty }\)) when the problem \(\mathrm {(P)}\) has no feasible solution. The associated optimization problems are the following:
\(\bullet \) Linearly perturbed uncertain problems: for each \((u,x^{*})\in U\times X^{*},\)
$$\begin{aligned} (P_{u})_{x^{*}}:\quad \quad \inf _{x\in X}\left\{ {{F_{u}}(x,{0_{u}})-\left\langle {x^{*},x}\right\rangle }\right\} . \end{aligned}$$
\(\bullet \) Robust counterpart of \(\left\{ (P_{u})_{x^{*}}\right\} _{u\in U}:\)
$$\begin{aligned} (\mathrm {RP})_{x^{*}}: \quad \quad \inf _{x\in X}\left\{ {\mathop {\sup }\limits _{u\in U}{F_{u}}(x,{0_{u}})-\left\langle {x^{*},x}\right\rangle }\right\} . \end{aligned}$$
Denoting by \(F_{u}^{*}:X^{*}\times Y_{u}^{*}\rightarrow \overline{\mathbb {R}}\), where \(\overline{\mathbb {R}}:=\mathbb {R}\cup \{\pm \infty \}\), the Fenchel conjugate of \(F_{u}\), namely,
$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*}):=\mathop {\sup }\limits _{(x,y_{u})\in X\times Y_{u}}\left\{ \left\langle x^{*},x\right\rangle +\left\langle y_{u}^{*},y_{u}\right\rangle -F_{u}(x,y_{u})\right\} , \end{aligned}$$
we now introduce the corresponding dual problems:
\(\bullet \) Perturbational dual of \((P_{u})_{x^{*}}\):
$$\begin{aligned} \mathrm {(}{\text {D}}_{u}\mathrm {)}{_{{x^{*}}}:}\quad \quad \mathop {\sup }\limits _{y_{u}^{*}\in Y_{u}^{*}}-F_{u}^{*}({x^{*}},y_{u}^{*}). \end{aligned}$$
Obviously,
$$\begin{aligned} \sup \, (D_{u})_{x^{*}}\le \inf \, (P_{u})_{x^{*}} \le \inf \, (\mathrm {RP})_{x^{*}},\quad \forall u\in U. \end{aligned}$$
\(\bullet \) Optimistic dual of \((\mathrm {RP})_{x^{*}}\):
$$\begin{aligned} (\mathrm {ODP})_{x^{*}}:\quad \quad \sup _{(u,y_{u}^{*})\in \Delta }-F_{u}^{*}(x^{*},y_{u}^{*}), \end{aligned}$$
where \(\Delta :=\left\{ {\left( {u,y_{u}^{*}}\right) :u\in U,\ y_{u}^{*}\in Y_{u}^{*}}\right\} \) is the disjoint union of the spaces \({Y_{u}^{*}}\). We have
$$\begin{aligned} \sup \, (\mathrm {ODP})_{x^{*}} =\mathop {\sup }\limits _{u\in U}\, \sup \, (D_{u})_{x^{*}} \le \inf \, (\mathrm {RP})_{x^{*}}. \end{aligned}$$
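As a finite-dimensional illustration of the chain above, the following numerical sketch evaluates both sides of the inequality on a grid. The data are hypothetical, chosen only for this purpose: \(U=\{-1,1\}\), \(X=Y_u=\mathbb{R}\), and \(F_u(x,y)=(x-u)^2+uy^2\).

```python
import numpy as np

# Numerical check of  sup (ODP)_{x*} <= inf (RP)_{x*}  at x* = 0 for the
# hypothetical data U = {-1, 1}, F_u(x, y) = (x - u)^2 + u*y^2.
U = [-1.0, 1.0]
xs = np.linspace(-3.0, 3.0, 601)      # grid for the decision variable x
ys = np.linspace(-3.0, 3.0, 601)      # grid for the perturbation y
ystars = np.linspace(-3.0, 3.0, 601)  # grid for the dual variable y*

# inf (RP)_0 = inf_x sup_u F_u(x, 0); here sup_u (x - u)^2 = (|x| + 1)^2
rp = min(max((x - u) ** 2 for u in U) for x in xs)

# F_u^*(0, y*) = sup_x {-(x - u)^2} + sup_y {y* y - u y^2}  (separable)
odp = -np.inf
for u in U:
    sup_x = np.max(-(xs - u) ** 2)
    sup_y = np.max(ystars[:, None] * ys[None, :] - u * ys[None, :] ** 2, axis=1)
    odp = max(odp, np.max(-(sup_x + sup_y)))
```

On this instance the grid computation gives \(\sup(\mathrm{ODP})_{0}=0<1=\inf(\mathrm{RP})_{0}\), so weak duality holds with a positive gap.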
We are interested in the following desirable robust duality properties:
\(\bullet \) Robust duality is said to hold at \(x^{*}\) if \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{x^{*}}}\),
\(\bullet \) Strong robust duality at \(x^{*}\) means \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\),
\(\bullet \) Reverse strong robust duality at \(x^{*}\) means \(\min {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{x^{*}}}\),
\(\bullet \) Min-max robust duality at \(x^{*}\) means \(\min {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\).
Each of the above desirable properties is said to be stable when it holds for any \(x^{*}\in X^{*}\). The main results of this paper characterize these properties in terms of formulas involving the \(\varepsilon \)-minimizers and \(\varepsilon \)-subdifferentials of the objective function of the robust counterpart problem (RP)\(_{0_{X}^{*}}\), namely, the function
Theorem 1 characterizes robust duality at a given point \(x^{*}\in X^{*}\) via a formula for the inverse mapping of the \(\varepsilon \)-subdifferential at \(x^{*}\), without any convexity assumption. The same is done in Theorem 2 to characterize strong robust duality. In the case when a primal optimal solution exists, we give a formula for the exact minimizers of \(p-x^{*}\) characterizing reverse strong (resp. min-max) robust duality at \(x^{*}\); see Theorem 3 (resp. Theorem 4). We show that stable robust duality gives rise to a formula for the \(\varepsilon \)-subdifferential of p (Theorem 5; see also Theorem 1). The same is done for stable strong robust duality (Theorem 6). A formula for the exact subdifferential of p is provided in relation to robust duality at appropriate points (Theorem 7). The simplest possible formula for the exact subdifferential of p (the so-called Basic Robust Qualification condition) is studied in detail in Theorem 8. All the results from Sections 1–8 are specified for the two extreme cases (the case with no uncertainty and the one with no perturbations), namely, Cases 1 and 2 in Section 2 (for the sake of brevity, we do not give the specifications for Cases 3 and 4). It is worth noticing the generality of the mentioned results (as they do not require any assumption on the functions involved) and the fully self-contained nature of their proofs. The use of convexity in the data will be addressed in a forthcoming paper.
2 Special Cases and Applications
In this section, we make explicit the meaning of robust duality for the general model introduced in Section 1, composed of a family of perturbation functions together with its corresponding optimization problems. We do this by exploring the extreme case with no uncertainty, the extreme case with no perturbations, and two other significant situations. In all these cases, we propose ad hoc families of perturbation functions that allow us to apply the duality results to given optimization problems, either recovering variants of well-known formulas for conjugate functions or proposing new ones.
Let us recall the robust duality formula, \(\inf {(\mathrm {RP})_{x^{*}}} = \sup {(\mathrm {ODP})_{x^{*}}},\) i.e.,
We first study the two extreme cases: the case with no uncertainty and the one with no perturbations.
Case 1. The case with no uncertainty: Deterministic optimization with linear perturbations deals with parametric problems of the form:
where \(f:X\rightarrow \mathbb {R}_{\infty }\) (i.e., \(f\in (\mathbb {R}_{\infty })^{X}\)) is the nominal objective function and the parameter is \({x^{*}\in X}^{*}.\) Taking a singleton uncertainty set \(U=\left\{ u_{0}\right\} ,\) \(Y_{u_{0}}=Y\) and \(F_{u_{0}}=F\ \)such that \({F\left( {x,{0_{Y}}}\right) ={f}(x)}\) for all \(x\in X,\) (1) reads
which is the fundamental perturbational duality formula [7, 24, 28]. Stable and strong robust duality theorems are given in [9] (see also [11] and [20] for infinite optimization problems).
Case 2. The case with no perturbations: Uncertain optimization without perturbations deals with families of problems of the form
where \(f_{u}\in (\mathbb {R}_{\infty })^{X},\) \(u\in U\), and \(x^{*}\in X^{*}\). The absence of perturbation is realized by taking \(F_{u}\) such that \(F_{u} (x, y_{u}) = f_{u} (x)\) for all \(u\in U\), \(x\in X\) and \(y_{u}\in Y_{u}\). Assuming dom \(f_{u} \ne \emptyset \) we have
Then (1) reads
which amounts, for \(x^{*}={0}_{_{X}}^{*},\) to the \(\inf -\sup \) duality in robust optimization, also called the robust infimum (recall that any constrained optimization problem can be reduced to an unconstrained one by adding the indicator function of the feasible set to the objective function):
Robust duality theorems without perturbations are given in [27] for a special class of uncertain non-convex optimization problems while [11] provides robust strong duality theorems for uncertain convex optimization problems which are expressed in terms of the closedness of suitable sets regarding the vertical axis of \(X^{*}{\times }~\mathbb {R}.\)
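The \(\inf\)-\(\sup\) duality just described can be probed numerically. The sketch below uses the hypothetical family \(f_u(x)=(x-u)^2\), \(U=\{-1,1\}\), for which the robust infimum and the optimistic dual value differ, showing that robust duality at \(0_{X}^{*}\) may fail without further assumptions:

```python
import numpy as np

# Case 2 with the hypothetical family f_u(x) = (x - u)^2, U = {-1, 1}:
# robust infimum  inf_x sup_u f_u(x)  vs  optimistic dual  sup_u inf_x f_u(x).
U = [-1.0, 1.0]
xs = np.linspace(-3.0, 3.0, 601)

f = lambda u, x: (x - u) ** 2
inf_sup = min(max(f(u, x) for u in U) for x in xs)  # robust counterpart value
sup_inf = max(min(f(u, x) for x in xs) for u in U)  # optimistic dual value
```

Here the robust infimum equals 1 (attained at \(x=0\)) while the optimistic dual value is 0, a duality gap of 1.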
Case 3. Conic optimization problem with uncertain constraints: Consider the uncertain problem
where, for each \(u\in U\), \(S_{u}\) is an ordering convex cone in \({Y}_{u},\) \(H_{u}:X\rightarrow {Y}_{u}\), and \(f\in (\mathbb {R}_{\infty })^{X}\). Denote by \({S_{u}^{+}:=}\left\{ {y_{u}^{*}\in Y}_{u}^{*}:\left\langle {y_{u}^{*},y_{u}}\right\rangle \ge 0,\forall {y_{u}\in }S_{u}\right\} \) the dual cone of \(S_{u}\).
Problems of this type arise, for instance, in the production planning of firms producing n commodities from uncertain amounts of resources by means of technologies which depend on the available resources (e.g., the technology differs when the energy is supplied by either fuel gas or a liquid fuel). The problem associated with each parameter \(u\in U\) consists of maximizing the cash-flow \(c\left( x_{1},\ldots ,x_{n}\right) \) of the total production, with \(x_{i}\) denoting the production level of the i-th commodity, \(i=1,\ldots ,n.\) The decision vector \(x=\left( x_{1},\ldots ,x_{n}\right) \) must satisfy a linear inequality system \(A_{u}x\le b_{u},\) where the matrix of technical coefficients \(A_{u}\) is \(m_{u}\times n\) and \(b_{u}\in \mathbb {R}^{m_{u}}\) for some \(m_{u}\in \mathbb {N}.\) Denoting by \(\mathrm {i}_{\mathbb {R}_{+}^{n}}\) the indicator function of \(\mathbb {R}_{+}^{n}\) (i.e., \(\mathrm {i}_{\mathbb {R}_{+}^{n}}(x)=0\) when \(x\in \mathbb {R}_{+}^{n},\) and \(\mathrm {i}_{\mathbb {R}_{+}^{n}}(x)=+\infty \) otherwise), the uncertain production planning problem can be formulated as
with the space \(Y_{u}=\mathbb {R}^{m_{u}}\) depending on the uncertain parameter u.
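When the uncertainty set U is finite, the robust counterpart of this production planning problem is itself an ordinary linear program obtained by stacking the scenario constraints \(A_u x\le b_u\). The sketch below illustrates this reduction with entirely hypothetical data for two scenarios (all matrices and cash-flow coefficients are invented for the example):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-scenario production planning instance:
# maximize c.x  s.t.  A_u x <= b_u for all u in U = {1, 2},  x >= 0.
c = np.array([3.0, 2.0])                       # unit cash-flows (assumed data)
A = {1: np.array([[1.0, 1.0], [2.0, 1.0]]),    # technology, scenario 1
     2: np.array([[1.0, 2.0], [1.0, 1.0]])}    # technology, scenario 2
b = {1: np.array([4.0, 6.0]), 2: np.array([6.0, 5.0])}

# The robust counterpart enforces every scenario's constraints at once,
# so we stack them into one ordinary LP (linprog minimizes, hence -c).
A_rob = np.vstack([A[u] for u in A])
b_rob = np.concatenate([b[u] for u in A])
res = linprog(-c, A_ub=A_rob, b_ub=b_rob, bounds=[(0, None)] * 2)
x_rob, val = res.x, -res.fun
```

For this data the robust optimal plan is \(x=(2,2)\) with cash-flow 10, feasible under both technologies simultaneously.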
For each \(u\in U\), define the perturbation function
On the one hand, \((\mathrm {RP})_{0_{X}^{*}}\) collapses to the robust counterpart of \(\mathrm {(P)}\) in the sense of robust conic optimization with uncertain constraints:
On the other hand, it is easy to check that
\((\mathrm {ODP})_{0_{X}^{*}}\) is nothing else than the optimistic dual in the sense of uncertain conic optimization:
(a special case when \(Y_{u}=Y\), \(S_{u}=S\) for all \(u\in U\) is studied in [12, p. 1097] and [21]). Thus,
\(\bullet \) Robust duality holds at \(0_{X}^{*}\) means that \(\inf {(\mathrm {RP})} =\sup {(\mathrm {ODP})}, \)
\(\bullet \) Strong robust duality holds at \(0_{X}^{*}\) means that
Conditions for having such an equality are provided in [12, Theorem 6.3], [13, Corollaries 5, 6], for the particular case \(Y_{u}=Y\) for all \(u\in U\).
Strong robust duality and uncertain Farkas lemma: We focus again on the case where \(Y_{u}=Y\) and \(S_{u}=S\) for all \(u\in U\). For a given \(r\in \mathbb {R}\), let us consider the following statements:
-
(i)
\(H_{u}(x)\in -S,\;\forall u\in U\quad \Longrightarrow \quad f(x)\ge r\),
-
(ii)
\(\exists u\in U,\exists {y_{u}^{*}}\in S^{+}\) such that \(f(x)+\left\langle {y_{u}^{*}},H_{u}(x)\right\rangle \ge r,\;\forall x\in X.\)
Then, it is true that the strong robust duality holds at \(0_{X}^{*}\) if and only if \(\left[ (i)\Longleftrightarrow (ii)\right] \) for each \(r\in \mathbb {R},\) which can be seen as an uncertain Farkas lemma. For details see [12, Theorem 3.2] (also [13, Corollary 5 and Theorem 1] ).
It is worth noticing that, returning to problem \(\mathrm {(P)}\), a given robust feasible solution \(\overline{x}\) is a minimizer if and only if \(f(\overline{x})\le f(x)\) for any robust feasible solution x. Hence, a robust (uncertain) Farkas lemma (with \(r=f(\bar{x})\)) automatically yields an optimality test for \((\mathrm {P})\). Robust conic optimization problems are studied in [2] and [25].
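In the simplest finite-dimensional linear setting (U a singleton, \(S=\mathbb{R}^{m}_{+}\)), the equivalence \(\mathrm{(i)}\Leftrightarrow\mathrm{(ii)}\) reduces to ordinary LP duality. The following sketch, with hypothetical data \(f(x)=\langle c,x\rangle\), \(H(x)=Ax-b\) on \(X=\mathbb{R}^2\), checks numerically that the largest r admissible in (i) coincides with the largest r admissible in (ii):

```python
import numpy as np
from scipy.optimize import linprog

# Sanity check of (i) <=> (ii) with U a singleton, S = R^3_+, and
# hypothetical data f(x) = <c, x>, H(x) = A x - b.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])

# (i) holds exactly for r <= min { <c, x> : A x <= b }:
r_primal = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2).fun

# (ii) asks for lam >= 0 with <c, x> + <lam, A x - b> >= r for all x,
# i.e. c + A^T lam = 0 and -<b, lam> >= r; the best such r is
r_dual = -linprog(b, A_eq=A.T, b_eq=-c, bounds=[(0, None)] * 3).fun
```

For this bounded feasible instance both thresholds equal 0, as LP duality predicts, so strong robust duality holds at \(0_{X}^{*}\).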
Case 4. Discretizing infinite optimization problems: Let \(f\in (\mathbb {R}_{\infty })^{X}\) and \(g_{t}\in \mathbb {R}^{X}\;\)for all\(\;t\in T\) (a possibly infinite index set). Consider the set U of non-empty finite subsets of T, interpreted as admissible perturbations of T, and the parametric optimization problem
Consider the parameter space \(Y_{S}:={\mathbb {R}^{S}}\) (depending on S) and the perturbation function \({F_{S}}:X\times {\mathbb {R}^{S}}\rightarrow \mathbb {R}_{\infty }\) such that, for any \(x\in X\) and \({\mu :=({\mu _{s}})_{s\in S}\in \mathbb {R}^{S}},\)
We now interpret the problems associated with the family of perturbation functions \(\left\{ {F_{S}:S\in U}\right\} .\) One has \(Y_{S}^{*}={\mathbb {R}^{S}}\) and
The robust counterpart at \(0_{X}^{*},\)
is a general infinite optimization problem while the optimistic dual at \(0_{X}^{*}\) is
or, equivalently, the Lagrange dual of \((\mathrm {RP})_{0_{X}^{*}},\) i.e.,
where, for each \(\lambda =\left( {\lambda _{t}}\right) _{t\in T}\in \mathbb {R}_{+}^{(T)}\) (the subspace of \(\mathbb {R}^{T}\) formed by the functions \(\lambda \) whose support, supp\(\lambda :=\left\{ t\in T:{{\lambda _{t}\ne 0}}\right\} ,\) is finite),
Following [14, Section 8.3], we say that \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable if there exists a sequence \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) such that
and it is reducible if there exists \(S\in U\) such that
Obviously, \(\inf {(\mathrm {RP})_{0_{X}^{*}}} =-\infty \) entails that \((\mathrm {RP})_{0_{X}^{*}}\) is reducible which, in turn, implies that \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable.
Discretizable and reducible problems are important in practice. Indeed, on the one hand, discretization methods generate sequences \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) satisfying (5) when \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable; discretization methods for linear and nonlinear semi-infinite programs have been reviewed in [15, Subsection 2.3] and [23], while a hard infinite optimization problem has recently been solved via discretization in [22]. On the other hand, when the robust counterpart (a hard semi-infinite program when the uncertainty set is infinite) of a given uncertain optimization problem is reducible, replacing it by a finite subproblem often allows one to obtain the desired tractable reformulation (see, e.g., [1] and [8]).
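A minimal numerical sketch of the discretization scheme (5), under the assumption of a concrete linear semi-infinite program whose feasible set is the unit disk (the data are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear semi-infinite program solved by discretization:
#   inf  -2 x1 - x2   s.t.  x1 cos t + x2 sin t <= 1  for all t in [0, 2*pi).
# The feasible set is the unit disk, so the optimal value is -sqrt(5).
def discretized_value(n):
    S = 2.0 * np.pi * np.arange(n) / n          # finite subset S_r of T
    A = np.column_stack([np.cos(S), np.sin(S)])
    res = linprog([-2.0, -1.0], A_ub=A, b_ub=np.ones(n),
                  bounds=[(None, None)] * 2)
    return res.fun

# Nested grids S_4 ⊂ S_8 ⊂ ... give nondecreasing values converging
# from below to the semi-infinite value -sqrt(5).
vals = [discretized_value(n) for n in (4, 8, 16, 64, 256)]
```

Since each finite grid yields a relaxation, the discretized values increase monotonically toward the semi-infinite optimal value, in line with (5).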
Example 1
(Discretizing linear infinite optimization problems) Consider the problems introduced in Case 4 above, with \(f(\cdot ):=\left\langle c^{*},\cdot \right\rangle \) and \({g_{t}}(\cdot ):=\left\langle a_{t}^{*},\cdot \right\rangle -b_{t},\) where \(c^{*},a_{t}^{*}\in X^{*}\) and \(b_{t}\in \mathbb {R},\) for all \(t\in T.\) Then, \((\mathrm {RP})_{0_{X}^{*}}\) collapses to the linear infinite programming problem
whose feasible set we denote by A. So, \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\inf _{x\in X}\left\{ \left\langle c^{*},x\right\rangle +i_{A}\left( x\right) \right\} .\;\) We assume that \(A\ne \emptyset .\)
Given \(S\in U\) and \({{{\mu ,\lambda }\in }}\mathbb {R}^{S},\)
and
Hence, \((\mathrm {ODP})_{0_{X}^{*}}\) collapses to the so-called Haar dual problem [16] of \((\mathrm {RP})_{0_{X}^{*}},\)
i.e.,
From (8), if \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\max {(\mathrm {ODP})_{0_{X}^{*}}} \in \mathbb {R},\) then there exist \(S\in U\) and \({{{\lambda }\in }}\mathbb {R}_{+}^{S}\) such that
Let \(A_{S}:=\left\{ x\in X:{\ \left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}\right\} .\) Given \(x\in A_{S},\) from (9),
Since
so that \((\mathrm {RP})_{0_{X}^{*}}\) is reducible. Conversely, assume that (10) holds with \(\inf {(\mathrm {RP})_{0_{X}^{*}}}\in \mathbb {R}\) and that \(\mathop {\mathrm {cone}}\left\{ \left( a_{t}^{*},b_{t}\right) :t\in T\right\} +\mathbb {R}_{+}\left( 0_{X}^{*},1\right) \) is weak\(^{*}\)-closed. Since \(\inf {(\mathrm {RP})_{0_{X}^{*}}}\le \left\langle c^{*},x\right\rangle \) is a consequence of the system \(\left\{ {\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}\right\} ,\) by the nonhomogeneous Farkas lemma in lcHtvs [10] and the closedness assumption, there exist \(\lambda \in \mathbb {R}_{+}^{S}\) and \(\mu \in \mathbb {R}_{+}\) such that
which implies that \(\mu =0\) and \(\inf {(\mathrm {RP})_{0_{X}^{*}}} =\max {(\mathrm {ODP})_{0_{X}^{*}}}\). The closedness assumption holds when X is finite dimensional (guaranteeing that any finitely generated convex cone in \(X^{*}\times \mathbb {R}\) is closed). So, as proved in [14, Theorem 8.3], a linear semi-infinite program \((\mathrm {RP})_{0_{X}^{*}}\) is reducible if and only if (10) holds if and only if \(\inf \mathrm {(RP)}_{0_{X}^{*}}=\max {(\mathrm {ODP})_{0_{X}^{*}}}\).
We now assume that \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\sup {(\mathrm {ODP})_{0_{X}^{*}}} \in \mathbb {R}.\) By (8), there exist sequences \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) and \(\left( {\lambda }^{r}\right) _{r\in \mathbb {N}},\) with \({\lambda }^{r}{\in }\mathbb {R}_{+}^{S_{r}}\) for all \(r\in \mathbb {N},\) such that
Denote \(v_{r}:=-{{{\sum \limits _{s\in S_{r}}{{\lambda _{s}^{r}b_{s}.}}}}}\) Then,
with \(\lim _{r}v_{r}=\inf {(\mathrm {RP})_{0_{X}^{*}}}.\) Let \(A_{r}:=\left\{ x\in X:\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S_{r}\right\} ,\) \(r\in \mathbb {N}.\) Given \(x\in A_{r},\) from (11),
Since \(v_{r}\le \left\langle c^{*},x\right\rangle \) for all \(x\in A_{r}, \)
Thus,
i.e., \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable. Once again, the converse is true in linear semi-infinite programming [14, Corollary 8.2.1], but not in linear infinite programming.
3 Robust Conjugate Duality
We now turn back to the general perturbation function \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty },\) \(u\in U\), and let \(\Delta :=\left\{ (u,y_{u}^{*}):u\in U,y_{u}^{*}\in Y_{u}^{*}\right\} \) be the disjoint union of the spaces \(Y_{u}^{*}\). Recall that
Define \(p\in \overline{\mathbb {R}}^{X}\) and \(q\in \overline{\mathbb {R}}^{X^*}\) such that
One then has
and hence,
\(\bullet \) Weak robust duality always holds:
$$\begin{aligned} p^{*}(x^{*})\le q^{**}(x^{*})\le q(x^{*}),\text { for all }x^{*}\in X^{*}. \end{aligned}$$(16)
\(\bullet \) Robust duality at \(x^{*}\) means
$$\begin{aligned} p^{*}(x^{*})=q(x^{*}). \end{aligned}$$(17)
Robust duality at \(x^{*}\) also holds when either \(p^{*}(x^{*})=+\infty \) or \(q(x^{*})=-\infty .\)
As an illustration, consider Case 4 with linear data, as in Example 1. Then, \(p\left( x\right) =\left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) ,\) \(\mathop {\mathrm {dom}}p=A,\) and so
Similarly, from (7),
\(\mathop {\mathrm {dom}}q=c^{*}+\mathop {\mathrm {cone}}\left\{ a_{t}^{*}:t\in T\right\} \) and
3.1 Basic Lemmas
Let us introduce the necessary notations. Given a lcHtvs Z, an extended real-valued function \(h\in \overline{\mathbb {R}}^{Z}\), and \(\varepsilon \in \mathbb {R}_{+}\), the set of \(\varepsilon \)-minimizers of h is defined by
or, equivalently,
Note that \(\varepsilon \)-\(\mathop {\mathrm {argmin}} h\ne \emptyset \) when \(\inf _{Z}h\in \mathbb {R}\) and \(\varepsilon >0.\) Various calculus rules involving \(\varepsilon \)-\(\mathop {\mathrm {argmin}}\) have been given in [26].
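For instance, for the hypothetical choice \(h(x)=x^{2}\) on \(Z=\mathbb{R}\), one has \(\inf_{Z}h=0\), so the \(\varepsilon\)-minimizers form the interval \([-\sqrt{\varepsilon},\sqrt{\varepsilon}]\); the following sketch recovers this set on a grid:

```python
import numpy as np

# eps-argmin of the hypothetical example h(x) = x**2 on Z = R:
# inf h = 0, hence  eps-argmin h = { x : x**2 <= eps } = [-sqrt(eps), sqrt(eps)].
eps = 0.04
xs = np.linspace(-2.0, 2.0, 4001)       # grid with step 0.001
h = xs ** 2
eps_argmin = xs[h <= h.min() + eps]     # grid approximation of the set
lo, hi = eps_argmin.min(), eps_argmin.max()
# lo, hi approximate -sqrt(eps) = -0.2 and sqrt(eps) = 0.2
```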
The \(\varepsilon \)-subdifferential of h at a point \(a\in Z\) is the set (see, for instance, [19])
It can be checked that if \(h\in \overline{\mathbb {R}}^{Z}\) is convex and \(h(a)\in \mathbb {R}\), then \(\partial ^{\varepsilon }h(a)\ne \emptyset \) for all \(\varepsilon >0\) if and only if h is lower semi-continuous at a.
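The \(\varepsilon\)-subdifferential can be computed in closed form for simple data. For the hypothetical example \(h(z)=z^{2}\) at a point a, the defining inequality \(\langle z^{*},z-a\rangle\le h(z)-h(a)+\varepsilon\) for all z is equivalent to \(h^{*}(z^{*})\le \langle z^{*},a\rangle-h(a)+\varepsilon\); since \(h^{*}(z^{*})=z^{*2}/4\), this reduces to \((z^{*}/2-a)^{2}\le\varepsilon\), i.e., the interval \([2a-2\sqrt{\varepsilon},\,2a+2\sqrt{\varepsilon}]\). A grid check:

```python
import numpy as np

# eps-subdifferential of the hypothetical example h(z) = z**2 at a.
# Membership test: h*(z*) = z*^2/4 <= z* a - a^2 + eps,
# i.e. (z*/2 - a)^2 <= eps.
a, eps = 1.0, 0.25
zstars = np.linspace(-4.0, 6.0, 10001)  # grid with step 0.001
member = zstars ** 2 / 4.0 - zstars * a + a ** 2 <= eps + 1e-12
lo, hi = zstars[member].min(), zstars[member].max()
# expected interval: [2a - 2*sqrt(eps), 2a + 2*sqrt(eps)] = [1.0, 3.0]
```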
The inverse of the set-valued mapping \(\partial ^{\varepsilon }h:Z\rightrightarrows Z^{*}\) is denoted by \(M^{\varepsilon }h:Z^{*}\rightrightarrows Z.\) For each \((\varepsilon ,z^{*})\in \mathbb {R}_{+}\times Z^{*}\), we have
Denoting by \(\partial ^{\varepsilon }h^{*}(z^{*})\) the \(\varepsilon \)-subdifferential of \(h^{*}\) at \(z^{*}\in Z^{*}\), namely,
where \(h^{**}(z):=\sup \limits _{z^{*}\in Z^{*}}\{\langle z^{*},z\rangle -h^{*}(z^{*})\}\) is the biconjugate of h, we have
with equality if and only if \(h=h^{**}.\)
For each \(\varepsilon \in \mathbb {R}_{+}\), we consider the set-valued mapping \(S^{\varepsilon }:X^{*}\rightrightarrows X\) as follows:
If \(q(x^{*})=-\infty \), then \(S^{\varepsilon }(x^{*})=p^{-1}(\mathbb {R}).\) If \(q(x^{*})=+\infty ,\) then \(S^{\varepsilon }(x^{*})=\emptyset .\)
Since \(p^{*}\le q\), it is clear that
Lemma 1
Assume that \(\mathop {\mathrm {dom}} p\ne \emptyset \). Then, for each \(x^{*}\in X^{*} \), the next statements are equivalent:
\(\mathrm {(i)}\) Robust duality holds at \(x^{*}\) , i.e., \(p^{*}(x^{*})=q(x^{*})\),
\(\mathrm {(ii)} \) \(\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}),\quad \forall \varepsilon \ge 0\),
\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0:\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}),\quad \forall \varepsilon \in ]0,\bar{\varepsilon }[\).
Proof
\([\mathrm {(i)}\Rightarrow \mathrm {(ii)}]\) By definition
By \(\mathrm {(i)}\) we thus have \(\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}).\)
\([\mathrm {(ii)}\Rightarrow \mathrm {(iii)}]\) This is obvious.
\([\mathrm {(iii)}\Rightarrow \mathrm {(i)}]\) Since \(p^{*}(x^{*})\le q(x^{*})\), \(\mathrm {(i)}\) holds if \(p^{*}(x^{*})=+\infty .\) Moreover, since \(\mathop {\mathrm {dom}}p\ne \emptyset \), one has \(p^{*}(x^{*})\ne -\infty .\) Let now \(p^{*}(x^{*})\in \mathbb {R}\). In order to get a contradiction, assume that \(p^{*}(x^{*})\not =q(x^{*})\). Then \(p^{*}(x^{*})<q(x^{*})\) and there exists \(\varepsilon \in ]0,\bar{\varepsilon }[\) such that \(p^{*}(x^{*})+\varepsilon <q(x^{*})\). Since \(\inf _{x\in X}\left\{ p(x)-\left\langle x^{*},x\right\rangle \right\} =-p^{*}(x^{*})\in \mathbb {R}\) and \(\varepsilon >0,\) we have \(\varepsilon -\mathop {\mathrm {argmin}}\nolimits (p-x^{*})\ne \emptyset .\) Let us pick \(x\in (M^{\varepsilon }p)(x^{*})=\varepsilon -\mathop {\mathrm {argmin}}\nolimits (p-x^{*}).\) By \(\mathrm {(iii)}\), we have \(x\in S^{\varepsilon }(x^{*})\) and
which contradicts \(p^{*}(x^{*})+\varepsilon < q(x^{*})\). \(\square \)
For each \(\varepsilon \in \mathbb {R}_{+}\), let us introduce now the following set-valued mapping \(J^{\varepsilon }:U\rightrightarrows X\):
with the aim of making explicit the set \(S^{\varepsilon }(x^{*})\). To this purpose, given \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\), let us introduce the set-valued mapping \(A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}:X^{*}\rightrightarrows X\) such that
Lemma 2
For each \(x^{*}\in X^{*}\), \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\), one has
Proof
Let \(x\in J^{\varepsilon _{1}}(u)\) be such that \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*}).\) Then we have \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\) and \(F_{u}(x,0_{u})\in \mathbb {R}\). Moreover
implying \(p(x)\in \mathbb {R}\) and, by (15),
that means \(x\in S^{\varepsilon _{1}+\varepsilon _{2}}(x^{*})\). \(\square \)
Lemma 3
Assume that
Then, for each \(x^{*}\in X^{*},\varepsilon \in \mathbb {R}_{+},\eta >0 \), one has
Proof
Let \(x\in p^{-1}(\mathbb {R})\) be such that \(x\in S^{\varepsilon }(x^{*})\), i.e.,
We then have, for any \(\eta >0\),
and, by definition of q and p, there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that
Since \(p(x)\in \mathbb {R}\), \(F_{u}^{*}(x^{*},y_{u}^{*})\ne +\infty \). In fact, by (22), \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\). Similarly, \(F_{u}(x,0_{u})\in \mathbb {R}\). Setting
we get \(\alpha _{1}\in \mathbb {R}_{+}\), \(\alpha _{2}\in \mathbb {R}.\) Actually \(\alpha _{2}\ge 0\) since, by definition of conjugate,
i.e., if \(z=x\) and \(y_{u}=0_{u},\)
so that
Then, by (23), \(0\le \alpha _{1}+\alpha _{2}\le \varepsilon +\eta \). Consequently, there exist \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\) such that \(\alpha _{1}\le \varepsilon _{1}\), \(\alpha _{2}\le \varepsilon _{2}\), \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon +\eta \). Now \(\alpha _{1}\le \varepsilon _{1}\) means that \(x\in J^{\varepsilon _{1}}(u)\) and \(\alpha _{2}\le \varepsilon _{2}\) means that \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\), and we have \(x\in A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*})\). \(\square \)
For each \(x^{*}\in X^{*}\), \(\varepsilon \in \mathbb {R}_{+}\), let us define
3.2 Robust Duality
We now can state the main result on characterizations of the robust conjugate duality.
Theorem 1
\(\mathbf {(Robust\,duality)}\) Assume that \(\mathop {\mathrm {dom}} p\ne \emptyset \). Then for each \(x^{*}\in X^{*} \), the next statements are equivalent:
\(\mathrm {(i)}\) \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\),
\(\mathrm {(ii)} \) \(\left( M^{\varepsilon }p\right) (x^{*})=\mathcal {A}^{\varepsilon }(x^{*}),\quad \forall \varepsilon \ge 0\),
\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }>0: \ \left( M^{\varepsilon }p\right) (x^{*})=\mathcal {A}^{\varepsilon }(x^{*}),\quad \forall \varepsilon \in ]0,\bar{\varepsilon }[\).
Proof
We firstly claim that if \(\mathop {\mathrm {dom}} p\ne \emptyset \) then for each \(x^* \in X^*\), \(\varepsilon \in \mathbb {R}_+\), it holds:
Indeed, as \(\mathop {\mathrm {dom}} p\ne \emptyset \), (22) holds. It then follows from Lemma 3 that \(S^\varepsilon (x^*)\subset \mathcal {A}^\varepsilon (x^*)\). On the other hand, for each \(\eta > 0\), one has, by Lemma 2,
Taking the intersection over all \(\eta > 0\) we get
and (24) follows. Taking into account that (i) means \(p^{*}(x^{*})=q(x^{*})\), the conclusion now follows from (24) and Lemma 1. \(\square \)
For the deterministic optimization problem with linear perturbations (i.e., the non-uncertain case, where U is a singleton), the next result is a direct consequence of Theorem 1.
Corollary 1
\(\mathbf {(Robust\,duality\, for\, Case\, 1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\) be such that \(\mathop {\mathrm {dom}}F(\cdot ,0_{Y})\ne \emptyset \). Then, for each \(x^{*}\in X^{*},\) the fundamental duality formula (2) holds, i.e.,
if and only if any of the (equivalent) conditions (ii) or (iii) in Theorem 1 holds, where
Proof
Let \(F_{u}=F:X\times Y\rightarrow \mathbb {R}_{\infty },\;p=F(\cdot ,0_{Y})\). In this case, one has,
and \(\mathcal {A}^{\varepsilon }(x^{*})\) will take the form (25). The conclusion follows from Theorem 1. \(\square \)
For the uncertain optimization problem without perturbations, the following result is a consequence of Theorem 1.
Corollary 2
\(\mathbf {(Robust\, duality\, for\, Case\, 2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\) be a family of extended real-valued functions and let \(p=\sup _{u\in U}f_{u}\) be such that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, for each \(x^{*}\in X^{*},\) the \(\inf -\sup \) duality in robust optimization (4) holds, i.e.,
if and only if any of the (equivalent) conditions (ii) or (iii) in Theorem 1 holds, where
with
Proof
Let \(F_{u}(x,y_{u})=f_{u}(x),\) for all \(u\in U\) and let \(p=\mathop {\sup }\limits _{u\in U}{f_{u}}\). Then, by (21),
Moreover, recalling (3), for each \(u\in U\) such that \(\mathop {\mathrm {dom}}f_{u}\ne \emptyset \), \((x^{*},y_{u}^{*})\in X^{*}\times Y_{u}^{*}\), and \(\varepsilon \ge 0\),
Finally, for each \((x^{*},\varepsilon )\in X^{*}\times \mathbb {R}_{+} \), \(\mathcal {A}^{\varepsilon }({x^{*}})\) takes the form as in (26). The conclusion now follows from Theorem 1. \(\square \)
4 Strong Robust Duality
We retain the notations in Section 3 and consider the robust problem \((\mathrm {RP})_{x^{*}}\) and its robust dual problem \((\mathrm {ODP}) _{x^{*}}\) given in (12) and (13), respectively. Let p and q be the functions defined by (14) and recall the relations in (15), that is,
In this section, we establish characterizations of strong robust duality at \(x^{*}\). Recall that strong robust duality at \(x^*\) means that \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}}\), which is the same as
For this we need a technical lemma; but first, given \(x^* \in X^*\), \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), and \(\varepsilon \ge 0\), let us introduce the set
Lemma 4
Assume that \(\mathop {\mathrm {dom}}F_{u}\ne \emptyset \) for all \(u\in U,\) and let \(x^{*}\in X^{*}\) be such that
Then there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that
Proof
By Lemma 2 we have \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\subset S^{\varepsilon }(x^{*})\). Conversely, let \(x\in S^{\varepsilon }(x^{*})\). By the exactness of q at \(x^{*}\), there exist \(u\in U\) and \(y_{u}^{*}\in Y_{u}^{*}\) such that
Since \(p(x)\in \mathbb {R}\) and \(\mathop {\mathrm {dom}}F_{u}\ne \emptyset \), for all \(u\in U,\) we have \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\), \(F_{u}(x,0_{u})\in \mathbb {R}\),
Consequently, there exist \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\) such that \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon ,\)
that is, \(x\in J^{\varepsilon _{1}}(u)\ \text {and }(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*}) \). Thus, \(x\in A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*})\subset B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\), since \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon .\) \(\square \)
Theorem 2
\(\mathbf {(Strong\,robust\,duality)}\) Assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \) and let \(x^{*}\in X^{*}\). The next statements are equivalent:
\(\mathrm {(i)} \) \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}}\),
\(\mathrm {(ii)}\) \(\exists u\in U,\ \exists y_{u}^{*}\in Y_{u}^{*}:\) \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}),\forall \varepsilon \ge 0\),
\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }>0,\ \exists u\in U,\exists y_{u}^{*}\in Y_{u}^{*}:\) \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}),\forall \varepsilon \in ]0,\bar{\varepsilon }[\).
Proof
Observe firstly that (i) means that
As \(\mathop {\mathrm {dom}}p\ne \emptyset \), (22) holds, and then by Lemmas 1 and 4, \(\mathrm {(i)}\) implies the remaining conditions, which are equivalent to each other, and also that \(\mathrm {(iii)}\) implies \(p^{*}(x^{*})=q(x^{*})\).
We now prove that \(\mathrm {(iii)}\) implies \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})\). Assume by contradiction that there exists \(\varepsilon >0\) such that \(q(x^{*})+\varepsilon <F_{u}^{*}(x^{*},y_{u}^{*})\); without loss of generality one can take \(\varepsilon \in \left] 0,\bar{\varepsilon }\right[ \), where \(\bar{\varepsilon }>0\) is as in (iii). Then, by \(\mathrm {(iii)}\), \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}).\)
Pick \(x\in \left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}).\) Then, there are \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\), \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon \), \(x\in J^{\varepsilon _{1}}(u)\) and \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\). In other words,
It now follows from (29)–(30) that
which contradicts the fact that \(p^{*}(x^{*})=q(x^{*})\). \(\square \)
In deterministic optimization with linear perturbations we get the next consequence from Theorem 2.
Corollary 3
\(\mathbf {(Strong\, robust\, duality\, for\, Case\, 1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, for each \(x^{*}\in X^{*}\), the strong duality for \(\mathrm {(P)}_{x^{*}}\) in Case 1 holds at \(x^{*}\), i.e.,
if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 2 holds with \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) being replaced by
Proof
It is worth observing that we are in the non-uncertainty case (i.e., U is a singleton), and that the set \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) reads as in (31) for each \((x^{*},y^{*})\in X^{*}\times Y^{*}\), \(\varepsilon \ge 0\). The conclusion follows from Theorem 2. \(\square \)
In the non-perturbation case, Theorem 2 gives rise to
Corollary 4
\(\mathbf {(Strong\,robust\, duality\, for\, Case\, 2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(x^{*}\in X^{*}\), and \(p=\sup \limits _{u\in U}f_{u}\) such that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, the robust duality formula
holds if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 2 holds with \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) being replaced by
Proof
Let \(F_{u}(x,y_{u})=f_{u}(x)\), \(p=\mathop {\sup }\limits _{u\in U}{f_{u}}\), and, from (27) and (28) (see the proof of Corollary 2),
which in our situation, collapses to the set \(B_{u}^{\varepsilon }(x^{*}) \) defined by (32). The conclusion now follows from Theorem 2. \(\square \)
5 Reverse Strong and Min-Max Robust Duality
Given \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\) for each \(u\in U\), \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\), and \(x^{*}\in X^{*}\), we assume in this section that the problem \((\mathrm {RP})_{x^{*}}\) is finite-valued and admits an optimal solution or, in other words, that \(\mathrm {argmin}(p-x^{*})=(M^{0}p)(x^{*})\ne \emptyset \). For convenience, we set
Theorem 3
\(\mathbf {(Reverse\,strong\, robust\, duality)}\) Let \(x^{*}\in X^{*}\) be such that \((Mp)(x^{*})\ne \emptyset \) and let \(\mathcal {A}(x^*)\) be as in (33). The next statements are equivalent:
\(\mathrm {(i)}\) \(\min {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\),
\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).
Proof
Since \((Mp)(x^{*}){\ne } \emptyset \), \(\mathop {\mathrm {dom}}p{\ne } \emptyset \). It follows from Theorem 1 that \(\left[ {\mathrm {(i)}\Longrightarrow \mathrm {(ii)}}\right] \). For the converse, let us pick \(x\in (Mp)(x^{*})\). Then by (ii), for each \(\eta >0 \) there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\) such that \(\varepsilon _{1}+\varepsilon _{2}=\eta \), \(x\in J^{\varepsilon _{1}}(u)\), \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\) and we have
Since \(\eta >0\) is arbitrary we get \(q(x^{*})\le p^{*}(x^{*})\), which, together with the weak duality (see (16)), yields \(q(x^{*}) = \langle x^{*},x\rangle -p(x) = p^{*}(x^{*}) \), i.e., (i) holds and we are done. \(\square \)
In the deterministic case we obtain from Theorem 3:
Corollary 5
\(\mathbf {(Reverse\,strong\,robust\,duality\,for\,Case\,1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(x^{*}\in X^{*}\), \(p=F(\cdot ,0_{Y})\), and
Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:
\(\mathrm {(i)}\) \(\mathop {\min }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}} \right) -\left\langle {{x^{*}},x}\right\rangle }\right\} =\mathop {\sup } \limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) \),
\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).
Corollary 6
\(\mathbf {(Reverse\,strong\,robust\,duality\,for\,Case\,2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x^{*}\in X^{*}\), and
where
Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:
\(\mathrm {(i)}\) \(\Big (\sup \limits _{u\in U}f_{u}\Big )^{*}(x^{*})=\inf \limits _{u\in U}f_{u}^{*}(x^{*}),\) with attainment at the first member,
\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).
Now, for each \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), \(x^{*}\in X^{*},\) we set
and
Theorem 4
\(\mathbf {(Min\text {-}max\, robust\, duality)}\) Let \(x^{*}\in X^{*}\) be such that \((Mp)(x^{*})\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)}\) \(\min {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\),
\(\mathrm {(ii)}\) \(\exists u\in U,\) \(\exists y_{u}^{*}\in Y_{u}^{*}:\) \(\ (Mp)(x^{*})=B_{(u,y_{u}^{*})}(x^{*})\),
where \(B_{(u,y_{u}^{*})}(x^{*})\) is the set defined in (34).
Proof
By Theorem 2 we know that \([\mathrm {(i)}\Longrightarrow \mathrm {(ii)}]\). We now prove that \([\mathrm {(ii)}\Longrightarrow \mathrm {(i)}]\). Pick \(x\in (Mp)(x^{*})\) which is non-empty by assumption. Then by \(\mathrm {(ii)}\), \(x\in B_{(u,y_{u}^{*})}(x^{*})\), which yields \(x\in J(u)\) and \((x,0_{u})\in (MF_{u})(x^{*},y_{u}^{*})\). Hence,
which means that \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=\langle x^{*},x\rangle -p(x)=p^{*}(x^{*})\) and \(\mathrm {(i)}\) follows. \(\square \)
Corollary 7
\(\mathbf {(Min\text {-}max~robust~duality~for~Case~1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(x^{*}\in X^{*}\), \(p=F(\cdot ,0_{Y})\), and for each \(y^{*}\in Y^{*}\),
Assume that \((Mp)(x^{*})\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)} \) \(\mathop {\min }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} = \mathop {\max }\limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) \),
\(\mathrm {(ii)} \) \(\exists y^{*}\in Y^{*}\): \(\ (Mp)(x^{*}) = B_{y^{*}}(x^{*})\).
Corollary 8
\(\mathbf {(Min\text {-}max~robust~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p{=}\sup \limits _{u\in U}f_{u}\), \(x^{*}\in X^{*}\), and for each \(u\in U\),
where \(J(u)=\{x\in p^{-1}(\mathbb {R})\,:\,f_{u}(x)=p(x)\}\). Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:
\(\mathrm {(i)} \) \(\Big (\sup \limits _{u \in U} f_u\Big )^*(x^*) =\min \limits _{u \in U} f_u^*(x^*)\), with attainment at the first member,
\(\mathrm {(ii)} \) \(\exists u \in U\): \(\ (Mp)(x^{*}) = B_u(x^{*})\).
6 Stable Robust Duality
Let us first recall some notations. Given \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty }\), \(u\in U,\ p=\mathop {\sup }\limits _{u\in U}{F_{u}}(\cdot ,{0_{u}})\) and \(q=\mathop {\inf }\limits _{{\scriptstyle u\in U \atop \scriptstyle y_{u}^{*}\in Y_{u}^{*}}}F_{u}^{*}(\cdot ,y_{u}^{*})\). Recall that \(p^{*}(x^{*})\le q(x^{*})\) for each \(x^{*}\in X^{*}\). Stable robust duality means that \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\) for all \(x^{*}\in X^{*}\), or equivalently, that \(p^{*}(x^{*})=q(x^{*})\) for all \(x^{*}\in X^{*}\).
Theorem 1 says that, if \(\mathop {\mathrm {dom}}p\ne \emptyset ,\) then stable robust duality holds if and only if for each \(\varepsilon \ge 0\) the set-valued mappings \(M^{\varepsilon }p,\ \mathcal {A}^\varepsilon :X^{*}\rightrightarrows X\) coincide, where, for each \(x^{*}\in X^{*}\),
Consequently, stable robust duality holds if and only if for each \(\varepsilon \ge 0\), the inverse set-valued mappings
coincide. Recall that \((M^{\varepsilon }p)^{-1}(x)\) is nothing but the \(\varepsilon \)-subdifferential \(\partial ^{\varepsilon }p(x)\) of p at x.
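Explicitly, for each \(x\in p^{-1}(\mathbb {R})\) and \(\varepsilon \ge 0\),

\[
x^{*}\in (M^{\varepsilon }p)^{-1}(x)=\partial ^{\varepsilon }p(x)\iff p(z)\ge p(x)+\langle x^{*},z-x\rangle -\varepsilon \quad \forall z\in X\iff x\in (M^{\varepsilon }p)(x^{*}).
\]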
Let us now make explicit \((\mathcal {A}^{\varepsilon })^{-1}\). To this end we need to introduce for each \(\varepsilon \ge 0\) the (\(\varepsilon \)-active indexes) set-valued mapping \(I^{\varepsilon }:X\rightrightarrows U\) with
We observe that \(I^{\varepsilon }\) is nothing but the inverse of the set-valued mapping \(J^{\varepsilon }:U\rightrightarrows X\) defined in (21).
Lemma 5
For each \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\) one has
where \({{\mathrm {proj}_{X^{*}}^{u}:}}X^{*}\times Y_{u}^{*}\longrightarrow X^{*}\) is the projection mapping \(\mathrm {proj}_{X^{*}}^{u}(x^{*},y_{u}^{*})=x^{*}.\)
Proof
Let \((\varepsilon ,x,x^{*})\in \mathbb {R}_{+}\times X\times X^{*}\). One has
\(\square \)
Now, for each \((\varepsilon , x) \in \mathbb {R}_+ \times X\), let us set
Applying Theorem 1 and Lemma 5 we obtain:
Theorem 5
\(\mathbf {(Stable~robust~duality)}\) Assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)} \) \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\) for all \(x^* \in X^*\),
\(\mathrm {(ii)} \) \(\partial ^\varepsilon p(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\),
\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }> 0\): \(\partial ^\varepsilon p(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \ ]0, \bar{\varepsilon }[\ \times X\).
Corollary 9
\(\mathbf {(Stable~robust~duality~for~Case~1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\) be such that \(\mathop {\mathrm {dom}}F(\cdot ,0_{Y})\ne \emptyset \). Let \({{\mathrm {proj}_{X^{*}}:}}X^{*}\times Y^{*}\longrightarrow X^{*}\) be the projection mapping \(\mathrm {proj}_{X^{*}}(x^{*},y^{*})=x^{*}. \) Then, the next statements are equivalent:
\(\mathrm {(i)} \) \(\inf \limits _{x \in X} \Big \{ F(x, 0_Y) - \langle x^*, x\rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*), \ \forall x^* \in X^*\),
\(\mathrm {(ii)}\) \((\partial ^{\varepsilon }p)(x)=\bigcap _{\eta >0}\mathrm {proj}_{X^{*}}(\partial ^{{\varepsilon +\eta }}{F})(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X\),
\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0\): \((\partial ^{\varepsilon }p)(x)=\bigcap _{\eta >0}\mathrm {proj}_{X^{*}}(\partial ^{{\varepsilon + \eta }}{F})(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in ]0,\bar{\varepsilon }[\times X\).
Proof
Let \(U=\{u_{0}\}\) and \(F=F_{u_{0}}:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(Y=Y_{u_{0}}\), \(p=F(\cdot ,0_{Y})\). Then for each \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\),
and
The conclusion now follows from (36)–(37) and Theorem 5. \(\square \)
Remark 1
Condition \(\mathrm {(ii)}\) in Corollary 9 was stated in [17, Theorem 4.3] for all \((\varepsilon ,x)\in ]0,+\infty [\times \mathbb {R},\) which is equivalent.
Corollary 10
\(\mathbf {(Stable~robust~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X},\) \(p=\sup \limits _{u\in U}f_{u}\), and assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)} \) \(\Big (\sup \limits _{u \in U} f_u\Big )^*(x^*) =\inf \limits _{u \in U} f_u^*(x^*)\), \(\forall x^* \in X^*\),
\(\mathrm {(ii)} \) \((\partial ^\varepsilon p)(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\),
\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0\): \((\partial ^{\varepsilon }p)(x)=C^{\varepsilon }(x),\ \forall (\varepsilon ,x)\in ]0,\bar{\varepsilon }[\times X\),
where \(C^{\varepsilon }(x)\) is the set
Proof
Let \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\ \)be such that \(F_{u}(x,y_{u})=f_{u}(x)\) for all \(u\in U\). Then for any \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\),
and \(C^{\varepsilon }(x)\) reads as in (38). The conclusion now follows from Theorem 5. \(\square \)
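As a concrete illustration of the weak duality inequality \(p^{*}\le q\) underlying Corollary 10, the following numerical sketch (with a hypothetical finite uncertainty set \(U=\{-1,0,1\}\) and \(f_{u}(x)=(x-u)^{2}\), all data chosen for illustration only) approximates \(p^{*}\) on a grid and compares it with \(\inf _{u}f_{u}^{*}\); the inequality is strict at \(x^{*}=0\), so condition (i) fails for this family.

```python
import numpy as np

# Case 2 (no perturbation): f_u(x) = (x - u)^2 for u in a finite
# uncertainty set U; p = sup_u f_u is the robust objective.
# Weak robust duality states  p*(x*) <= q(x*) := inf_u f_u*(x*).
U = [-1.0, 0.0, 1.0]
xs = np.linspace(-20.0, 20.0, 80001)       # grid standing in for X = R

def p(x):
    return max((x - u) ** 2 for u in U)

def p_conj(x_star):
    # numerical Fenchel conjugate: sup_x { <x*, x> - p(x) } over the grid
    return np.max(x_star * xs - np.maximum.reduce([(xs - u) ** 2 for u in U]))

def f_conj(u, x_star):
    # exact conjugate of x |-> (x - u)^2:  u x* + (x*)^2 / 4
    return u * x_star + x_star ** 2 / 4

for x_star in (-3.0, -0.5, 0.0, 1.0, 4.0):
    q = min(f_conj(u, x_star) for u in U)
    assert p_conj(x_star) <= q + 1e-6      # weak duality p* <= q
# at x* = 0 the inequality is strict: p*(0) = -1 < 0 = q(0),
# so robust (zero-gap) duality fails at that point
assert abs(p_conj(0.0) + 1.0) < 1e-3
assert min(f_conj(u, 0.0) for u in U) == 0.0
```

Here \(p(x)=(|x|+1)^{2}\), whose conjugate equals \(-1\) on \([-2,2]\), while \(q(0)=0\).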
7 Stable Strong Robust Duality
We retain all the notations used in Sections 3–6. Given \((\varepsilon , u) \in \mathbb {R}_+ \times U\) and \(y_u^* \in Y_u^*\), we have introduced in Section 4 the set-valued mapping \(B_{(u,y_{u}^{*})}^{\varepsilon } : X^*\rightrightarrows X\) defined by
Let us now define \(B^\varepsilon : X^*\rightrightarrows X\) by setting
Lemma 6
For each \((\varepsilon , x) \in \mathbb {R}_+ \times X\) we have
Proof
\(x^* \in (B^\varepsilon )^{-1} (x)\) means that there exist \(u \in U\), \(y_u^* \in Y_u^*\), \(\varepsilon _1 \ge 0\), \(\varepsilon _2 \ge 0\), such that \(\varepsilon _1 + \varepsilon _2 = \varepsilon \), \(x \in J^{\varepsilon _1} (u) \), and \((x, 0_u) \in (M^{\varepsilon _2}F_u)(x^*, y_u^*)\), or, equivalently, \(u \in I^{\varepsilon _1} (x)\) and \((x^*, y_u^*) \in (\partial ^{\varepsilon _2}F_u)(x, 0_u)\). In other words, there exist \(u \in U\), \(y_u^* \in Y_u^*\) such that \(x \in B^\varepsilon _{(u, y_u^*)} (x^*)\), that is, \(x \in B^\varepsilon (x^*)\). \(\square \)
For each \(\varepsilon \ge 0\) let us introduce the set-valued mapping \(D^{\varepsilon }:=(B^{\varepsilon })^{-1}\). Lemma 6 now reads
Note that
and that \(D^{\varepsilon }(x)=\emptyset \) whenever \(p(x)\not \in \mathbb {R}\).
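Combining Lemma 6 with the definitions of \(I^{\varepsilon _{1}}\) and of the \(\varepsilon \)-subdifferential, \(D^{\varepsilon }\) admits the explicit description

\[
D^{\varepsilon }(x)=\bigcup _{\substack{\varepsilon _{1},\varepsilon _{2}\ge 0\\ \varepsilon _{1}+\varepsilon _{2}=\varepsilon }}\;\bigcup _{u\in I^{\varepsilon _{1}}(x)}\mathrm {proj}_{X^{*}}^{u}\big( (\partial ^{\varepsilon _{2}}F_{u})(x,0_{u})\big) ,
\]

with \(\mathrm {proj}_{X^{*}}^{u}\) the projection mapping of Lemma 5.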
We now provide a characterization of stable strong robust duality in terms of \(\varepsilon \)-subdifferential formulas.
Theorem 6
\(\mathbf {(Stable~strong~robust~duality)}\) Assume that \(\mathop {\mathrm {dom}} p \ne \emptyset \), and let \(D^\varepsilon \) be as in (39). The next statements are equivalent:
\(\mathrm {(i)} \) \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}} = \max \limits _{{u \in U \atop y_u^* \in Y_u^*}} - F_u^*(x^*, y_u^*),\ \ \forall x^* \in X^*\),
\(\mathrm {(ii)} \) \(\partial ^\varepsilon p(x) = D^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\).
Proof
\([\mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial ^\varepsilon p(x)\). Then \(x \in (M^\varepsilon p)(x^*)\). Since strong robust duality holds at \(x^*\), Theorem 2 says that there exist \(u \in U\), \(y_u^* \in Y_u^*\) such that \(x \in B^\varepsilon _{(u, y_u^*)}(x^*) \subset B^\varepsilon (x^*)\), and finally \(x^* \in D^\varepsilon (x)\) by Lemma 6. Thus \(\partial ^\varepsilon p(x) \subset D^\varepsilon (x)\).
Now, let \(x^{*}\in D^{\varepsilon }(x)\). By Lemma 6 we have \(x\in B^{\varepsilon }(x^{*})\) and there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that \(x\in B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\). By Lemma 2 and the definition of \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) we have \(x\in S^{\varepsilon }(x^{*})\), and, by (20), \(x\in (M^{\varepsilon }p)(x^{*})\) which means that \(x^{*}\in \partial ^{\varepsilon }p(x)\), and hence, \(D^{\varepsilon }(x)\subset \partial ^{\varepsilon }p(x)\). Thus (ii) follows.
\([\mathrm {(ii)}\Longrightarrow \mathrm {(i)}]\) If \(p^{*}(x^{*})=+\infty \) then \(q(x^{*})=+\infty \), so \(p^{*}(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=+\infty \) for all \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), and (i) holds. Assume that \(p^{*}(x^{*})\in \mathbb {R}\) and pick \(x\in p^{-1}(\mathbb {R})\), which is possible since \(\mathop {\mathrm {dom}}p\ne \emptyset \). Let \(\varepsilon :=p(x)+p^{*}(x^{*})-\langle x^{*},x\rangle \). Then \(\varepsilon \ge 0\) and \(x^{*}\in \partial ^{\varepsilon }p(x)\). By (ii), \(x^{*}\in D^{\varepsilon }(x)\) and hence, there exist \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\) such that \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon \), \(u\in I^{\varepsilon _{1}}(x)\), \((x^{*},y_{u}^{*})\in (\partial ^{\varepsilon _{2}}F_{u})(x,0_{u})\). We have
and finally, \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=p^{*}(x^{*})\), which is (i). \(\square \)
Next, as usual, we give two consequences of Theorem 6 for the non-uncertainty and non-parametric cases.
Corollary 11
\(\mathbf {(Stable\,\, strong\,\, duality\, for\,\, Case\,\, 1)}\) Let \(F:X{\times } Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)} \) \(\inf \limits _{x \in X} \Big \{ F(x, 0_Y) - \langle x^*, x\rangle \Big \} = \max \limits _{ y^* \in Y^ *} - F^*(x^*, y^*), \ \forall x^* \in X^*\),
\(\mathrm {(ii)}\) \(\partial ^{\varepsilon }p(x)=\mathrm {proj}_{X^{*}}(\partial ^{\varepsilon }F)(x,{0_{y}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X.\)
Proof
This is the non-uncertainty case (i.e., the uncertainty set is a singleton) of the general problem \((\mathrm {RP})_{x^{*}}\), with \(U=\{u_{0}\}\) and \(F_{u_{0}}=F:X\times Y\rightarrow \mathbb {R}_{\infty }\). We have from (37),
The conclusion now follows from Theorem 6. \(\square \)
Corollary 12
\(\mathbf {(Stable~strong~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), and \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:
\(\mathrm {(i)} \) \((\sup \limits _{u \in U} f_u)^*(x^*) = \min \limits _{ u\in U} f_u^*(x^*), \ \forall x^* \in X^*\),
\(\mathrm {(ii)}\)\(\partial ^{\varepsilon }p(x)=D^{\varepsilon }(x),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X\), where
and
Proof
In this non-parametric situation, let \(F_{u}(x,y_{u})=f_{u}(x)\). It is easy to see that in this case, the set \(D^{\varepsilon }(x)\) can be expressed as in (42), and the conclusion follows from Theorem 6. \(\square \)
8 Exact Subdifferential Formulas: Robust Basic Qualification Condition
Given \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty },\ u\in U\), as usual, we let \(p=\mathop {\sup }\limits _{u\in U}{F_{u}}(\cdot ,{0_{u}})\),
\( q:=\mathop {\inf }\limits _{\left( {u,y_{u}^{*}}\right) \in \Delta }\;F_{u}^{*}(\cdot ,y_{u}^{*})\). Again, we consider the robust problem \((\mathrm {RP})_{x^{*}}\) and its robust dual problem \((\mathrm {ODP})_{x^{*}}\) given in (12) and (13), respectively. Note that reverse strong robust duality holds at \(x^{*}\) precisely when, for some \(\bar{x}\in X\),
Now, let us set, for each \(x\in X\),
where \(I^{\varepsilon _{1}}(x)\) is defined as in (35) and
Lemma 7
For each \(x \in X\), it holds
Proof
The first inclusion is easy to check. Now let \(x^* \in C(x)\). For each \(\eta > 0\) there exist \((\varepsilon _1, \varepsilon _2 ) \in \mathbb {R}_+^2\), \(u \in I^{\varepsilon _1}(x)\), and \(y_u^* \in Y_u^*\) such that \(\varepsilon _1 + \varepsilon _2 = \eta \) and \((x^*, y_u^*) \in (\partial ^{\varepsilon _2} F_u)(x, 0_u)\). We then have \(F_u^*(x^*, y_u^*) + F_u(x, 0_u) - \langle x^*, x\rangle \le \varepsilon _2\), \(p(x) \le F_u(x, 0_u) + \varepsilon _1\) (as \(u \in I^{\varepsilon _1}(x)\)), and \(p^*(x^*) \le q(x^*) \le F^*_u(x^*, y^*_u)\). Consequently,
Since \(\eta > 0\) is arbitrary we get \(p^*(x^*) + p(x) -\langle x^*, x\rangle \le 0\), which means that \(x^* \in \partial p (x)\). The proof is complete. \(\square \)
Theorem 7
Let \(x \in p^{-1}(\mathbb {R})\) and C(x) be as in (45). The next statements are equivalent:
\(\mathrm {(i)} \) \(\partial p (x) = C(x)\),
\(\mathrm {(ii)} \) Reverse strong robust duality holds at each \(x^* \in \partial p (x) \),
\(\mathrm {(iii)} \) Robust duality holds at each \(x^* \in \partial p (x)\).
Proof
\([ \mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial p (x)\). We have \(x^* \in C(x) = (\mathcal {A})^{-1} (x)\) (see Lemma 5 with \(\varepsilon = 0\)). Then \(x \in \mathcal {A} (x^*)= S(x^*)\) (see (24) with \(\varepsilon = 0\)), and therefore,
which means that reverse strong robust duality holds at \(x^*\) (see (43)).
\([ \mathrm {(ii)} \Longrightarrow \mathrm {(iii)}]\) is obvious.
\([ \mathrm {(iii)} \Longrightarrow \mathrm {(i)}]\) By Lemma 7 it suffices to check that the inclusion “\(\subset \)” holds. Let \(x^* \in \partial p (x)\). We have \(x \in (Mp)(x^*)\). Since robust duality holds at \(x^*\), Theorem 1 (with \(\varepsilon = 0\)) says that \(x \in \mathcal {A} (x^*) \). Thus, \(x^* \in \mathcal {A} ^{-1} (x)\), and, by Lemma 5, \(x^* \in C(x)\). \(\square \)
In the deterministic and the non-parametric cases, we get the next results from Theorem 7.
Corollary 13
Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:
\(\mathrm {(i)} \) \(\partial p (x) = \bigcap \limits _{\eta > 0 } \mathrm {proj}_{X^*} (\partial ^\eta F)(x, 0_Y)\),
\(\mathrm {(ii)} \) \(\min \limits _{z \in X} \Big \{ F(z, 0_Y) - \langle x^*, z \rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*) , \ \forall x^* \in \partial p (x)\),
\(\mathrm {(iii)} \) \(\inf \limits _{z \in X} \Big \{ F(z, 0_Y) - \langle x^*, z \rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*) , \ \forall x^* \in \partial p (x)\).
Proof
Let \(F_{u}=F:X\times Y\rightarrow \mathbb {R}_{\infty }\) and \(p=F(\cdot ,0_{Y})\). We then have
(see Corollary 9) and the conclusion follows directly from Theorem 7. \(\square \)
Corollary 14
Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:
\(\mathrm {(i)} \) \(\partial \left( \sup \limits _{u \in U} f_u\right) (x) = C(x)\),
\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - p(z) \Big \} = \inf \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\),
\(\mathrm {(iii)} \)\(\left( \sup \limits _{u \in U} f_u\right) ^*(x^*) = \inf \limits _{u \in U } f_u^*(x^*), \ \forall x^* \in \partial p (x)\),
where
Proof
Let \(F_u(x, y_u) = f_u(x)\). Then it is easy to see that in this case, C(x) can be expressed as in (47). The conclusion now follows from Theorem 7. \(\square \)
Let us come back to the general case and consider the most simple subdifferential formula one can expect for the robust objective function \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\):
where the set of active indexes at x, I(x), is defined by (46).
In Case 3 we have
\(I(x)=U\) for each \(x\in p^{-1}(\mathbb {R})\), and (48) reads
which has been called Basic Robust Subdifferential Condition (BRSC) in [8] (see [18, page 307] for the deterministic case). More generally, let us introduce the following terminology:
Definition 1
Given \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\) for each \(u\in U\), and \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\), we will say that Basic Robust Subdifferential Condition holds at a point \(x\in p^{-1}(\mathbb {R})\) if (48) is satisfied, that is \(\partial p(x)=D(x)\).
Recall that, in Example 1, \(p\left( x\right) =\left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) ,\) where \(A=p^{-1}(\mathbb {R})\) is the feasible set of the linear system. So, given \(x\in A,\) \(\partial p(x)\) is the sum of \(c^{*}\) with the normal cone of A at x, i.e., Basic Robust Subdifferential Condition (at x) asserts that such a cone can be expressed in some prescribed way.
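For the record, writing \(N_{A}(x)\) for the normal cone of the convex set A at x, this reads

\[
\partial p(x)=c^{*}+N_{A}(x),\qquad N_{A}(x)=\left\{ x^{*}\in X^{*}\,:\,\langle x^{*},z-x\rangle \le 0\ \ \forall z\in A\right\} .
\]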
Theorem 8
Let \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:
\(\mathrm {(i)}\) Basic Robust Subdifferential Condition holds at x,
\(\mathrm {(ii)}\) Min-max robust duality holds at each \(x^{*}\in \partial p(x)\),
\(\mathrm {(iii)}\) Strong robust duality holds at each \(x^{*}\in \partial p(x)\).
Proof
\([\mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial p (x)\). We have \(x^* \in D(x)\) and, by (44), there exist \(u \in I(x)\) (i.e., \(p(x) = F_u(x, 0_u)\)) and \(y_u^*\in Y_u^*\) such that \((x^*, y_u^*) \in (\partial F_u)(x, 0_u)\). Then,
It follows that
and min-max robust duality holds at \(x^*\).
\([\mathrm {(ii)} \Longrightarrow \mathrm {(iii)}]\) It is obvious.
\([\mathrm {(iii)} \Longrightarrow \mathrm {(i)}]\) By Lemma 7, it suffices to check that \(\partial p (x) \subset D(x)\). Let \(x^* \in \partial p (x)\). We have \(x \in (M p) (x^*)\). Since strong robust duality holds at \(x^*\), Theorem 2 says that there exist \(u \in U\), \(y_u^* \in Y_u^*\) such that \(x \in B^0_{(u, y_u^*)} (x^*)\), that means (see (34))
and by (44), \(x^* \in D(x)\). \(\square \)
As usual, Theorem 8 gives us corresponding results for the two extreme cases: non-uncertainty and non-perturbation cases.
Corollary 15
Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:
\(\mathrm {(i)} \) \(\partial p (x) = \mathrm {proj}_{X^*} (\partial F) (x, 0_Y)\),
\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - F(z, 0_Y) \Big \} = \min \limits _{y^*\in Y^*} F^*(x^*, y^*), \ \forall x^* \in \partial p (x)\),
\(\mathrm {(iii)} \)\(p ^*(x^*) = \min \limits _{y^* \in Y^*} F^*(x^*, y^*), \ \forall x^* \in \partial p (x)\).
Proof
In this case we have, by (41), \(D(x)=\mathrm {proj}_{X^{*}}(\partial F)(x,0_{Y}) \) and the conclusion is a direct consequence of Theorem 8. \(\square \)
Corollary 16
Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:
\(\mathrm {(i)} \) \(\partial p (x) = \bigcup \limits _{u \in I(x)} \partial f_u (x) \),
\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - p(z) \Big \} = \min \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\),
\(\mathrm {(iii)} \) \((\sup \limits _{u \in U} f_u) ^*(x^*) = \min \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\).
Proof
In this non-parametric case, let \(F_{u}(x,y_{u})=f_{u}(x)\), \(p=\sup \limits _{u\in U}f_{u}\). We have
and Theorem 8 applies. \(\square \)
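A minimal sketch, with hypothetical data, of how the failure of the subdifferential formula in Corollary 16(i) goes hand in hand with the failure of (iii): take \(U=\{1,2\}\), \(f_{1}(x)=x\), \(f_{2}(x)=-x\), so \(p=|\cdot |\). At \(x=0\) both indexes are active, yet \(\bigcup _{u\in I(0)}\partial f_{u}(0)=\{-1,1\}\subsetneq [-1,1]=\partial p(0)\), and consistently \(p^{*}(x^{*})<\min _{u}f_{u}^{*}(x^{*})\) for every \(x^{*}\) in the interior of \([-1,1]\).

```python
# Case 2 sanity check of Corollary 16 with U = {1, 2},
# f_1(x) = x, f_2(x) = -x, so p(x) = |x| (illustrative data only).
# Condition (i) fails at x = 0, and so does (iii) on int [-1, 1].

INF = float("inf")

def p_conj(x_star):
    # conjugate of |x| is the indicator function of [-1, 1]
    return 0.0 if -1.0 <= x_star <= 1.0 else INF

def f_conj(u, x_star):
    # conjugate of the linear function a_u * x is the indicator of {a_u}
    a = {1: 1.0, 2: -1.0}[u]
    return 0.0 if x_star == a else INF

for x_star in (-1.0, -0.5, 0.0, 0.5, 1.0):
    q = min(f_conj(u, x_star) for u in (1, 2))
    assert p_conj(x_star) <= q                 # weak duality always holds
# strict gap at every subgradient x* in the interior of [-1, 1] = dp(0)
gap_points = [t / 10 for t in range(-9, 10)]
assert all(p_conj(t) < min(f_conj(u, t) for u in (1, 2)) for t in gap_points)
```

Equality \(p^{*}(x^{*})=\min _{u}f_{u}^{*}(x^{*})\) survives only at \(x^{*}=\pm 1\), exactly the points of \(\bigcup _{u\in I(0)}\partial f_{u}(0)\).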
References
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Bertsimas, D., Sim, M.: Tractable approximations to robust conic optimization problems. Math. Program. 107B, 5–36 (2006)
Borwein, J.M.: A strong duality theorem for the minimum of a family of convex programs. J. Optim. Theory Appl. 31, 453–472 (1980)
Borwein, J.M., Burachik, R.S., Yao, L.: Conditions for zero duality gap in convex programming. J. Nonlinear Convex Anal. 15, 167–190 (2014)
Borwein, J.M., Lewis, A.S.: Partially finite convex programming. I. Quasi relative interiors and duality theory. Math. Program. 57B, 15–48 (1992)
Borwein, J.M., Lewis, A.S.: Practical conditions for Fenchel duality in infinite dimensions. In: Fixed Point Theory and Applications, pp. 83–89, Pitman Research Notes in Mathematics Series, vol. 252, Longman Scientific and Technology, Harlow (1991)
Bot, R.I.: Conjugate Duality in Convex Optimization. Springer, Berlin (2010)
Bot, R.I., Jeyakumar, V., Li, G.Y.: Robust duality in parametric convex optimization. Set-Valued Var. Anal. 21, 177–189 (2013)
Burachik, R.S., Jeyakumar, V., Wu, Z.-Y.: Necessary and sufficient condition for stable conjugate duality. Nonlinear Anal. 64, 1998–2006 (2006)
Chu, Y.C.: Generalization of some fundamental theorems on linear inequalities. Acta Math Sinica 16, 25–40 (1966)
Dinh, N., Goberna, M.A., López, M.A., Volle, M.: A unifying approach to robust convex infinite optimization duality. J. Optim. Theory Appl. 174, 650–685 (2017)
Dinh, N., Mo, T.H., Vallet, G., Volle, M.: A unified approach to robust Farkas-type results with applications to robust optimization problems. SIAM J. Optim. 27, 1075–1101 (2017)
Dinh, N., Long, D.H.: Complete characterizations of robust strong duality for robust optimization problems. Vietnam J. Math. 46, 293–328 (2018). https://doi.org/10.1007/s10013-018-0283-1
Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998)
Goberna, M.A., López, M.A.: Recent contributions to linear semi-infinite optimization. 4OR-Q. J. Oper. Res. 15, 221–264 (2017)
Goberna, M.A., López, M.A., Volle, M.: Primal attainment in convex infinite optimization duality. J. Convex Anal. 21, 1043–1064 (2014)
Grad, S.-M.: Closedness type regularity conditions in convex optimization and beyond. Front. Appl. Math. Stat. 16, September (2016). https://doi.org/10.3389/fams.2016.00014
Hiriart-Urruty, J.-B., Lemarechal, C.: Convex Analysis and Minimization Algorithms I. Springer, Berlin (1993)
Hiriart-Urruty, J.-B., Moussaoui, M., Seeger, A., Volle, M.: Subdifferential calculus without qualification conditions, using approximate subdifferentials: a survey. Nonlinear Anal. 24, 1727–1754 (1995)
Jeyakumar, V., Li, G.Y.: Stable zero duality gaps in convex programming: complete dual characterisations with applications to semidefinite programs. J. Math. Anal. Appl. 360, 156–167 (2009)
Li, G.Y., Jeyakumar, V., Lee, G.M.: Robust conjugate duality for convex optimization under uncertainty with application to data classification. Nonlinear Anal. 74, 2327–2341 (2011)
Lindsey, M., Rubinstein, Y.A.: Optimal transport via a Monge-Ampère optimization problem. SIAM J. Math. Anal. 49, 3073–3124 (2017)
López, M.A., Still, G.: Semi-infinite programming. European J. Oper. Res. 180, 491–518 (2007)
Rockafellar, R.T.: Conjugate Duality and Optimization. CBMS Lecture Notes Series No. 162. SIAM, Philadelphia (1974)
Vera, J.R.: Geometric measures of convex sets and bounds on problem sensitivity and robustness for conic linear optimization. Math. Program. 147A, 47–79 (2014)
Volle, M.: Calculus rules for global approximate minima and applications to approximate subdifferential calculus. J. Global Optim. 5, 131–157 (1994)
Wang, Y., Shi, R., Shi, J.: Duality and robust duality for special nonconvex homogeneous quadratic programming under certainty and uncertainty environment. J. Global Optim. 62, 643–659 (2015)
Zălinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, River Edge (2002)
Acknowledgements
This research was supported by the Vietnam National University - HCM city, Vietnam, project B2019-28-02, by PGC2018-097960-B-C22 of the Ministerio de Ciencia, Innovación y Universidades (MCIU), the Agencia Estatal de Investigación (AEI), and the European Regional Development Fund (ERDF), and by the Australian Research Council, Project DP180100602.
© 2020 Springer Nature Switzerland AG
Dinh, N., Goberna, M.A., López, M.A., Volle, M. (2020). Characterizations of Robust and Stable Duality for Linearly Perturbed Uncertain Optimization Problems. In: Bailey, D., et al. From Analysis to Visualization. JBCC 2017. Springer Proceedings in Mathematics & Statistics, vol 313. Springer, Cham. https://doi.org/10.1007/978-3-030-36568-4_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36567-7
Online ISBN: 978-3-030-36568-4