We introduce a robust optimization model consisting of a family of perturbation functions giving rise to certain pairs of dual optimization problems in which the dual variable depends on the uncertainty parameter. The interest of our approach is illustrated by several examples, including uncertain conic optimization and infinite optimization via discretization. The main results characterize desirable robust duality relations (such as robust zero duality gap) by formulas involving the \(\varepsilon \)-minima or the \(\varepsilon \)-subdifferentials of the objective function. The two extreme cases, namely the usual perturbational duality (without uncertainty) and the duality for the supremum of functions (without perturbations), are analyzed in detail.

1 Introduction

Duality theory was one of Jonathan Borwein’s favorite research topics. Indeed, 14 of his papers include the term “duality” in their titles. The present article, dedicated to Jon’s vast contribution to the subject, will refer only to four of his works, all of them related to optimization problems posed in locally convex Hausdorff topological vector spaces.

Duality theorems were provided in [3] for the minimum of arbitrary families of convex programs; the quasi-relative interior constraint qualification was introduced in [6] in order to obtain duality theorems for various optimization problems where the standard Slater condition fails; the same CQ was immediately used, in [5], to obtain duality theorems for convex optimization problems with constraints given by linear operators having finite-dimensional range together with a conical convex constraint; finally, quite recently, in [4], duality theorems for the minimization of the finite sum of convex functions were established, using conditions which involve the \(\varepsilon \)-subdifferential of the given functions.

In this paper, we consider a family of perturbation functions

$$\begin{aligned} F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }:=\mathbb {R}\cup \{+\infty \},\text { with }u\in U, \end{aligned}$$

and where X and \(Y_{u},\) \(u\in U,\) are given locally convex Hausdorff topological vector spaces (briefly, lcHtvs); the index set U is called the uncertainty set of the family, X is its decision space, and each \(Y_{u}\) is a parameter space. Note that our model includes a parameter space \(Y_{u}\) depending on \(u\in U,\) which is a novelty with respect to the “classical” robust duality scheme (see [21] and references therein, where a unique parameter space Y is considered), allowing us to cover a wider range of applications, including uncertain optimization problems under linear perturbations of the objective function. The significance of our approach is illustrated throughout the paper by relevant cases drawn from deterministic optimization with linear perturbations, uncertain optimization without perturbations, uncertain conic optimization, and infinite optimization. The antecedents of the paper are described in the paragraphs devoted to the first two cases in Section 2.

We associate with each family \(\left\{ F_{u}:u\in U\right\} \) of perturbation functions corresponding optimization problems whose definitions involve continuous linear functionals on the decision and parameter spaces. We denote by \(0_{X},\) \({0}_{_{X}}^{*},\) \(0_{u},\) and \(0_{u}^{*}\) the null vectors of X, its topological dual \(X^{*},\) \(Y_{u},\) and its topological dual \(Y_{u}^{*},\) respectively. The optimal value of a minimization (respectively, maximization) problem \(\mathrm {(P)}\) is denoted by \(\inf \mathrm {(P)}\) (respectively, \(\sup \mathrm {(P)}\)); in particular, we write \(\min \mathrm {(P)}\) (\(\max \mathrm {(P)}\)) whenever the optimal value of \(\mathrm {(P)}\) is attained. We adopt the usual convention that \(\inf \mathrm {(P)}=+\infty \) (\(\sup \mathrm {(P)}=-\infty \)) when the problem \(\mathrm {(P)}\) has no feasible solution. The associated optimization problems are the following:

  • Linearly perturbed uncertain problems: for each \((u,x^{*})\in U\times X^{*},\)

    $$\begin{aligned} (P_{u})_{x^{*}}:\quad \quad \inf _{x\in X}\left\{ {{F_{u}}(x,{0_{u}})-\left\langle {x^{*},x}\right\rangle }\right\} . \end{aligned}$$
  • Robust counterpart of \(\left\{ (P_{u})_{x^{*}}\right\} _{u\in U}:\)

    $$\begin{aligned} (\mathrm {RP})_{x^{*}}: \quad \quad \inf _{x\in X}\left\{ {\mathop {\sup }\limits _{u\in U}{F_{u}}(x,{0_{u}})-\left\langle {x^{*},x}\right\rangle }\right\} . \end{aligned}$$

Denoting by \(F_{u}^{*}:X^{*}\times Y_{u}^{*}\rightarrow \overline{\mathbb {R}}\), where \(\overline{\mathbb {R}}:=\mathbb {R\cup \{\pm \infty \}}\), the Fenchel conjugate of \(F_{u}\), namely,

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*}):=\sup \limits _{(x,y_{u})\in X\times Y_{u}}\Big \{\langle x^{*},x\rangle +\langle y_{u}^{*},y_{u}\rangle -F_{u}(x,y_{u})\Big \},\quad (x^{*},y_{u}^{*})\in X^{*}\times Y_{u}^{*}, \end{aligned}$$

we now introduce the corresponding dual problems:

  • Perturbational dual of \((P_{u})_{x^{*}}\):

    $$\begin{aligned} \mathrm {(}{\text {D}}_{u}\mathrm {)}{_{{x^{*}}}:}\quad \quad \mathop {\sup }\limits _{y_{u}^{*}\in Y_{u}^{*}}-F_{u}^{*}({x^{*}},y_{u}^{*}). \end{aligned}$$

    Obviously,

    $$\begin{aligned} \sup \, (D_{u})_{x^{*}}\le \inf \, (P_{u})_{x^{*}} \le \inf \, (\mathrm {RP})_{x^{*}} ,\forall u\in U. \end{aligned}$$
  • Optimistic dual of \((\mathrm {RP})_{x^{*}}\):

$$\begin{aligned} (\mathrm {ODP})_{x^{*}}:\quad \quad \sup _{(u,y_{u}^{*})\in \Delta }-F_{u}^{*}(x^{*},y_{u}^{*}), \end{aligned}$$

    where \(\Delta :=\left\{ {\left( {u,y_{u}^{*}}\right) :u\in U,\ y_{u}^{*}\in Y_{u}^{*}}\right\} \) is the disjoint union of the spaces \({Y_{u}^{*}} \). We have

    $$\begin{aligned} \sup \, (\mathrm {ODP})_{x^{*}} =\mathop {\sup }\limits _{u\in U} (D_{u})_{x^{*}} \le \inf \, (\mathrm {RP})_{x^{*}}. \end{aligned}$$
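The above chain of inequalities can be checked numerically on a toy instance. The following sketch (invented data, not from the paper) takes \(X=\mathbb{R}\), a two-point uncertainty set, no perturbation (so \(F_{u}(x,y_{u})=f_{u}(x)\), as in Case 2 of Section 2), and \(f_{u}(x)=(x-u)^{2}\); all four optimal values are approximated on a grid.

```python
# A minimal numerical sketch (invented data): X = R, U = {-1, 1},
# F_u(x, y_u) = f_u(x) with f_u(x) = (x - u)^2, evaluated at x* = 0.
import numpy as np

U = [-1.0, 1.0]
xs = np.linspace(-5.0, 5.0, 20001)       # grid standing in for X = R
x_star = 0.0

def f(u, x):
    return (x - u) ** 2

# inf (P_u)_{x*} and sup (D_u)_{x*} = -f_u*(x*) for each scenario u
inf_P = {u: float(np.min(f(u, xs) - x_star * xs)) for u in U}
sup_D = {u: float(-np.max(x_star * xs - f(u, xs))) for u in U}

# inf (RP)_{x*}: minimize the worst case sup_u f_u(x) - <x*, x>
inf_RP = float(np.min(np.max([f(u, xs) for u in U], axis=0) - x_star * xs))

# sup (ODP)_{x*} = sup over the disjoint union, here sup_u sup (D_u)_{x*}
sup_ODP = max(sup_D.values())

for u in U:
    assert sup_D[u] <= inf_P[u] + 1e-9 <= inf_RP + 2e-9
assert sup_ODP <= inf_RP + 1e-9
# Here sup(ODP) = 0 while inf(RP) = 1: weak duality holds, with a gap.
```

In this instance \(\sup (\mathrm{ODP})_{0_X^*}=0<1=\inf (\mathrm{RP})_{0_X^*}\), so the inequalities can be strict; the duality properties defined next single out the situations where they are equalities.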

We are interested in the following desirable robust duality properties:

\(\bullet \) Robust duality is said to hold at \(x^{*}\) if \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{x^{*}}}\),

\(\bullet \) Strong robust duality at \(x^{*}\) means \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\),

\(\bullet \) Reverse strong robust duality at \(x^{*}\) means \(\min {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{x^{*}}}\),

\(\bullet \) Min-max robust duality at \(x^{*}\) means \(\min {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\).

Each of the above desirable properties is said to be stable when it holds for any \(x^{*}\in X^{*}\). The main results of this paper characterize these properties in terms of formulas involving the \(\varepsilon \)-minimizers and \(\varepsilon \)-subdifferentials of the objective function of the robust counterpart problem (RP)\(_{0_{X}^{*}}\), namely, the function

$$\begin{aligned} p:=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u}). \end{aligned}$$

Theorem 1 characterizes robust duality at a given point \(x^{*}\in X^{*}\) through a formula for the inverse mapping of the \(\varepsilon \)-subdifferential at \(x^{*}\), without any convexity assumption. The same is done in Theorem 2 to characterize strong robust duality. In the case when a primal optimal solution exists, we give a formula for the exact minimizers of \(p-x^{*}\) characterizing reverse strong (respectively, min-max) robust duality at \(x^{*}\); see Theorem 3 (respectively, Theorem 4). We show that stable robust duality gives rise to a formula for the \(\varepsilon \)-subdifferential of p (Theorem 5; see also Theorem 1). The same is done for stable strong robust duality (Theorem 6). A formula for the exact subdifferential of p is provided in relation to robust duality at appropriate points (Theorem 7). The simplest possible formula for the exact subdifferential of p (the so-called Basic Robust Qualification condition) is studied in detail in Theorem 8. All the mentioned results are specified for the two extreme cases (the case with no uncertainty and the case without perturbations), namely Cases 1 and 2 in Section 2 (for the sake of brevity, we do not give the specifications for Cases 3 and 4). It is worth noticing the generality of these results (they do not require any assumption on the involved functions) and the fully self-contained character of their proofs. The use of convexity in the data will be addressed in a forthcoming paper.

2 Special Cases and Applications

In this section, we make explicit the meaning of robust duality for the general model introduced in Section 1, composed of a family of perturbation functions together with its corresponding optimization problems. We do this by exploring the extreme case with no uncertainty, the extreme case without perturbations, and two other significant situations. In all these cases, we propose ad hoc families of perturbation functions that allow us to apply the duality results to given optimization problems, either recovering variants of well-known formulas for conjugate functions or proposing new ones.

Let us recall the robust duality formula, \(\inf {(\mathrm {RP})_{x^{*}}} = \sup {(\mathrm {ODP})_{x^{*}}},\) i.e.,

$$\begin{aligned} \;\;\mathop {\inf }\limits _{x\in X}\mathop {\sup }\limits _{u\in U}\left\{ {{F_{u}}\left( {x,{0_{u}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} =\mathop {\sup }\limits _{\left( {u,y_{^{u}}^{*}}\right) \in \Delta }-{F_{u}^{*}}\left( {{x^{*}},{y_{u}^{*}}}\right) . \end{aligned}$$
(1)

We first study the two extreme cases: the case with no uncertainty and the case with no perturbations.

Case 1. The case with no uncertainty: Deterministic optimization with linear perturbations deals with parametric problems of the form:

$$\begin{aligned} (\mathrm {P})_{{x^{*}}}:\quad \quad {\mathop {\inf }\limits _{x\in X}}\left\{ {{f}(x)-\left\langle {{x^{*}},x}\right\rangle }\right\} {,} \end{aligned}$$

where \(f:X\rightarrow \mathbb {R}_{\infty }\) (i.e., \(f\in (\mathbb {R}_{\infty })^{X}\)) is the nominal objective function and the parameter is \({x^{*}\in X}^{*}.\) Taking a singleton uncertainty set \(U=\left\{ u_{0}\right\} ,\) \(Y_{u_{0}}=Y\) and \(F_{u_{0}}=F\ \)such that \({F\left( {x,{0_{Y}}}\right) ={f}(x)}\) for all \(x\in X,\) (1) reads

$$\begin{aligned} \mathop {\inf }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} =\mathop {\sup }\limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) , \end{aligned}$$
(2)

which is the fundamental perturbational duality formula [7, 24, 28]. Stable and strong robust duality theorems are given in [9] (see also [11] and [20] for infinite optimization problems).

Case 2. The case with no perturbations: Uncertain optimization without perturbations deals with families of problems of the form

$$\begin{aligned} (\mathrm {P})_{x^{*}}: \quad \quad \left\{ \mathop {\inf }\limits _{x\in X}\left\{ {f_{u}}(x)-\left\langle {{x^{*}},x}\right\rangle \right\} \right\} _{u\in U}, \end{aligned}$$

where \(f_{u}\in (\mathbb {R}_{\infty })^{X},\) \(u\in U\), and \(x^{*}\in X^{*}\). The absence of perturbations is modeled by taking \(F_{u}\) such that \(F_{u}(x,y_{u})=f_{u}(x)\) for all \(u\in U\), \(x\in X\), and \(y_{u}\in Y_{u}\). Assuming \(\mathop {\mathrm {dom}}f_{u}\ne \emptyset \) for all \(u\in U\), we have

$$\begin{aligned} F_{u}^{*}\left( {x^{*},y_{u}^{*}}\right) =\left\{ \begin{array}{ll} {f_{u}^{*}\left( {x^{*}}\right) ,} &{} \mathrm {if}\;\;{y_{u}^{*}} =0_{u}^{*}, \\ {+\infty ,} &{} \mathrm {if}\;\;y_{u}^{*}\ne 0_{u}^{*}. \end{array} \right. \end{aligned}$$
(3)
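Indeed, (3) is immediate from the definition of the conjugate: since \(F_{u}\) does not depend on its second argument, the supremum splits as

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})=\sup \limits _{x\in X}\left\{ \langle x^{*},x\rangle -f_{u}(x)\right\} +\sup \limits _{y_{u}\in Y_{u}}\langle y_{u}^{*},y_{u}\rangle =f_{u}^{*}(x^{*})+\sup \limits _{y_{u}\in Y_{u}}\langle y_{u}^{*},y_{u}\rangle , \end{aligned}$$

and the last supremum equals 0 if \(y_{u}^{*}=0_{u}^{*}\) and \(+\infty \) otherwise, since a nonzero continuous linear functional is unbounded above on the whole space; the assumption \(\mathop {\mathrm {dom}}f_{u}\ne \emptyset \) rules out the indeterminate sum \(-\infty +\infty \).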

Then (1) reads

$$\begin{aligned} {\left( {\mathop {\sup }\limits _{u\in U}{f_{u}}}\right) ^{*}}({x^{*}})=\mathop {\inf }\limits _{u\in U}f_{u}^{*}({x^{*}}), \end{aligned}$$
(4)

which amounts, for \(x^{*}={0}_{_{X}}^{*},\) to the \(\inf \)-\(\sup \) duality in robust optimization, also called robust infimum (recall that any constrained optimization problem can be reduced to an unconstrained one by adding the indicator function of the feasible set to the objective function):

$$\begin{aligned} \inf _{x\in X}\sup _{u\in U}f_{u}(x)=\sup _{u\in U}\inf _{x\in X}f_{u}(x). \end{aligned}$$
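Whether this robust infimum equality actually holds depends on the family \(\{f_{u}\}_{u\in U}\). A grid-based sketch with invented data: for \(f_{u}(x)=x^{2}-u\) the same scenario is worst for every x and the two sides coincide, while for \(f_{u}(x)=(x-u)^{2}\) there is a positive duality gap.

```python
# Sketch comparing inf-sup and sup-inf for two invented families on U = {0, 1}.
import numpy as np

xs = np.linspace(-5.0, 5.0, 20001)   # grid standing in for X = R
U = [0.0, 1.0]

def robust_values(f):
    """Return (inf_x sup_u f_u(x), sup_u inf_x f_u(x)) approximated on the grid."""
    vals = np.array([f(u, xs) for u in U])
    return float(np.min(np.max(vals, axis=0))), float(np.max(np.min(vals, axis=1)))

lhs1, rhs1 = robust_values(lambda u, x: x ** 2 - u)      # worst scenario u = 0 for all x
lhs2, rhs2 = robust_values(lambda u, x: (x - u) ** 2)    # worst scenario depends on x

assert abs(lhs1 - rhs1) < 1e-9       # equality: the robust infimum property holds
assert lhs2 - rhs2 > 0.2             # strict gap (here inf-sup = 1/4, sup-inf = 0)
```

The second family shows that (4) is a genuine property rather than an identity, which is why the paper characterizes it rather than assumes it.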

Robust duality theorems without perturbations are given in [27] for a special class of uncertain non-convex optimization problems, while [11] provides robust strong duality theorems for uncertain convex optimization problems, expressed in terms of the closedness, regarding the vertical axis, of suitable subsets of \(X^{*}\times \mathbb {R}.\)

Case 3. Conic optimization problem with uncertain constraints: Consider the uncertain problem

$$\begin{aligned} (\mathrm {P}): \ \ \ \ \ \ \ \ \left\{ {\inf _{x\in X}f(x)\;}\;{\text {s.t.}}{\;{H_{u}}(x)\in -{S_{u}}}\right\} _{u\in U}, \end{aligned}$$

where, for each \(u\in U\), \(S_{u}\) is an ordering convex cone in \({Y}_{u},\) \(H_{u}:X\rightarrow {Y}_{u}\), and \(f\in (\mathbb {R}_{\infty })^{X}\). Denote by \({S_{u}^{+}:=}\left\{ {y_{u}^{*}\in Y}_{u}^{*}:\left\langle {y_{u}^{*},y_{u}}\right\rangle \ge 0,\forall {y_{u}\in }S_{u}\right\} \) the dual cone of \(S_{u}\).

Problems of this type arise, for instance, in the production planning of firms producing n commodities from uncertain amounts of resources by means of technologies which depend on the available resources (e.g., the technology differs when the energy is supplied by either fuel gas or a liquid fuel). The problem associated with each parameter \(u\in U\) consists of maximizing the cash-flow \(c\left( x_{1},\ldots ,x_{n}\right) \) of the total production, with \(x_{i}\) denoting the production level of the i-th commodity, \(i=1,\ldots ,n.\) The decision vector \(x=\left( x_{1},\ldots ,x_{n}\right) \) must satisfy a linear inequality system \(A_{u}x\le b_{u},\) where the matrix of technical coefficients \(A_{u}\) is \(m_{u}\times n\) and \(b_{u}\in \mathbb {R}^{m_{u}},\) for some \(m_{u}\in \mathbb {N}\). Denoting by \(\mathrm {i}_{\mathbb {R}_{+}^{n}}\) the indicator function of \(\mathbb {R}_{+}^{n}\) (i.e., \(\mathrm {i}_{\mathbb {R}_{+}^{n}}(x)=0\) when \(x\in \mathbb {R}_{+}^{n},\) and \(\mathrm {i}_{\mathbb {R}_{+}^{n}}(x)=+\infty \) otherwise), the uncertain production planning problem can be formulated as

$$\begin{aligned} (\mathrm {P}): \ \ \ \ \ \ \ \ \left\{ {\inf _{x\in \mathbb {R}^{n}}f(x)=-c(x)+}\mathrm {i}_{\mathbb {R}_{+}^{n}}{(x)\;}\;{\text {s.t.}}{\;A_{u}x-b_{u}\in -}\mathbb {R}_{+}^{m_{u}}\right\} _{u\in U}, \end{aligned}$$

with the space \(Y_{u}=\mathbb {R}^{m_{u}}\) depending on the uncertain parameter u.

For each \(u\in U\), define the perturbation function

$$\begin{aligned} {F_{u}}(x,{y_{u}})=\left\{ {\begin{array}{ll} f{(x),} &{} {\mathrm {if}\;{H_{u}}(x)+{y_{u}}\in -{S_{u}}}, \\ {+\infty ,} &{} {\mathrm {else}}. \end{array} }\right. \end{aligned}$$

On the one hand, \((\mathrm {RP})_{0_{X}^{*}}\) collapses to the robust counterpart of \(\mathrm {(P)}\) in the sense of robust conic optimization with uncertain constraints:

$$\begin{aligned} (\mathrm {RP}): \;\;\;\;\;\inf _{x\in X}f(x)\;\;{\text {s.t.}}\ \ \;{H_{u}}(x)\in -{S_{u}},\;\;\forall u\in U. \end{aligned}$$

On the other hand, it is easy to check that

$$\begin{aligned} F_{u}^{*}({x^{*}},{y}_{u}^{*})=\left\{ {\begin{array}{ll} {{{\left( {f+y_{u}^{*}\circ {H_{u}}}\right) }^{*}}({x^{*}}),} &{} {\mathrm {if}\;y_{u}^{*}\in S_{u}^{+},} \\ {\ +\infty ,} &{} {\mathrm {else,}} \end{array} }\right. \end{aligned}$$
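To verify this formula (assuming \(\mathop {\mathrm {dom}}f\ne \emptyset \)), write \(y_{u}=-s-H_{u}(x)\) with \(s\in S_{u}\) in the definition of the conjugate, so that the supremum splits as

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})=\sup \limits _{x\in X}\left\{ \langle x^{*},x\rangle -f(x)-\langle y_{u}^{*},H_{u}(x)\rangle \right\} +\sup \limits _{s\in S_{u}}\langle y_{u}^{*},-s\rangle , \end{aligned}$$

and, \(S_{u}\) being a cone, the last supremum equals 0 when \(y_{u}^{*}\in S_{u}^{+}\) and \(+\infty \) otherwise.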

Hence, \((\mathrm {ODP})_{0_{X}^{*}}\) is nothing else than the optimistic dual in the sense of uncertain conic optimization:

$$\begin{aligned} (\mathrm {ODP}): \quad \quad \sup \limits _{u\in U,{y}_{u}^{*}\in S_{u}^{+}}\mathop {\inf }\limits _{x\in X}\left\{ {f(x)+\left\langle {y_{u}^{*},{H_{u}}(x)}\right\rangle }\right\} \end{aligned}$$

(a special case when \(Y_{u}=Y\), \(S_{u}=S\) for all \(u\in U\) is studied in [12, p. 1097] and [21]). Thus,

\(\bullet \) Robust duality at \(0_{X}^{*}\) means that \(\inf {(\mathrm {RP})} =\sup {(\mathrm {ODP})}, \)

\(\bullet \) Strong robust duality at \(0_{X}^{*}\) means that

$$\begin{aligned} \inf \left\{ {f(x):{H_{u}}(x)\in -{S_{u}},\forall u\in U}\right\} =\max \limits _{{u\in U \atop {y_{u}^{*}}\in S_{u}^{+}}}\inf \limits _{x\in X}\left\{ {f(x)+\left\langle {y_{u}^{*},{H_{u}}(x)}\right\rangle }\right\} . \end{aligned}$$

Conditions for having such an equality are provided in [12, Theorem 6.3], [13, Corollaries 5, 6], for the particular case \(Y_{u}=Y\) for all \(u\in U\).

Strong robust duality and uncertain Farkas lemma: We focus again on the case where \(Y_{u}=Y\) and \(S_{u}=S\) for all \(u\in U\). For a given \(r\in \mathbb {R}\), let us consider the following statements:

  1. (i)

    \(H_{u}(x)\in -S,\;\forall u\in U\quad \Longrightarrow \quad f(x)\ge r\),

  2. (ii)

    \(\exists u\in U,\exists {y_{u}^{*}}\in S^{+}\) such that \(f(x)+\left\langle {y_{u}^{*}},H_{u}(x)\right\rangle \ge r,\;\forall x\in X.\)

Then strong robust duality holds at \(0_{X}^{*}\) if and only if \(\left[ \mathrm{(i)}\Longleftrightarrow \mathrm{(ii)}\right] \) for each \(r\in \mathbb {R}\), which can be seen as an uncertain Farkas lemma. For details, see [12, Theorem 3.2] (also [13, Corollary 5 and Theorem 1]).

Returning to problem \(\mathrm {(P)}\), note that a given robust feasible solution \(\overline{x}\) is a minimizer if and only if \(f(\overline{x})\le f(x)\) for every robust feasible solution x. So, a robust (uncertain) Farkas lemma (with \(r=f(\bar{x})\)) automatically yields an optimality test for \((\mathrm {P})\). Robust conic optimization problems are studied in [2] and [25].

Case 4. Discretizing infinite optimization problems: Let \(f\in (\mathbb {R}_{\infty })^{X}\) and \(g_{t}\in \mathbb {R}^{X}\;\)for all\(\;t\in T\) (a possibly infinite index set). Consider the set U of non-empty finite subsets of T,  interpreted as admissible perturbations of T,  and the parametric optimization problem

$$\begin{aligned} (\mathrm {P}):\quad \left\{ \inf _{x\in X}f(x)\;\mathrm {s.t.}\quad g_{t}(x)\le 0,\quad \forall t\in S\right\} _{S\in U}. \end{aligned}$$

Consider the parameter space \(Y_{S}:={\mathbb {R}^{S}}\) (depending on S) and the perturbation function \({F_{S}}:X\times {\mathbb {R}^{S}}\rightarrow \mathbb {R}_{\infty }\) such that, for any \(x\in X\) and \(\mu :=({\mu _{s}})_{s\in S}\in \mathbb {R}^{S},\)

$$\begin{aligned} {F_{S}}\left( {x,{\mu }}\right) =\left\{ {\begin{array}{ll} {f(x),} &{} {\mathrm {if}\;{g_{s}}(x)\le -{\mu _{s}},\;\forall s\in S,} \\ {+\infty ,} &{} {\mathrm {else}}. \end{array} }\right. \end{aligned}$$

We now interpret the problems associated with the family of perturbation functions \(\left\{ {F_{S}:S\in U}\right\} .\) One has \(Y_{S}^{*}={\mathbb {R}^{S}}\) and

$$\begin{aligned} F_{S}^{*}\left( {{x^{*}},{\lambda }}\right) =\left\{ {\begin{array}{ll} {{\left( {f+\sum \limits _{s\in S}{{\lambda _{s}g_{s}}}}\right) ^{*}}({x^{*}}),} &{} {\mathrm {if}\;{\lambda }\in \mathbb {R}_{+}^{S},} \\ {+\infty ,} &{} {\mathrm {else}}. \end{array} }\right. \end{aligned}$$

The robust counterpart at \(0_{X}^{*},\)

$$\begin{aligned} (\mathrm {RP})_{0_{X}^{*}}:\quad \inf f(x)\;\;\mathrm {s.t.}\ \;\;{g_{t}}(x)\le 0\ \text { for all }t\in T, \end{aligned}$$

is a general infinite optimization problem while the optimistic dual at \(0_{X}^{*}\) is

$$\begin{aligned} (\mathrm {ODP})_{0_{X}^{*}}:\quad \sup \limits _{S\in U,{{\lambda }\in \mathbb {R}_{+}^{S}}}\left\{ \mathop {\inf }\limits _{x\in X}\left( {f(x)+\sum \limits _{s\in S}{{\lambda _{s}g_{s}}(x)}}\right) \right\} , \end{aligned}$$

or, equivalently, the Lagrange dual of \((\mathrm {RP})_{0_{X}^{*}},\) i.e.,

$$\begin{aligned} (\mathrm {ODP})_{0_{{X}^{*}}}:\quad \quad \sup \limits _{{\lambda }\in \mathbb {R}_{+}^{(T)}}\left\{ \mathop {\inf }\limits _{x\in X}\left( {f(x)+\sum \limits _{t\in T}{{\lambda _{t}g_{t}}(x)}}\right) \right\} , \end{aligned}$$

where, for each \(\lambda =\left( {\lambda _{t}}\right) _{t\in T}\in \mathbb {R}_{+}^{(T)}\) (the subspace of \(\mathbb {R}^{T}\) formed by the functions \(\lambda \) whose support, supp\(\lambda :=\left\{ t\in T:{{\lambda _{t}\ne 0}}\right\} ,\) is finite),

$$\begin{aligned} \sum \limits _{t\in T}{{\lambda _{t}g_{t}}(x):}=\left\{ \begin{array}{ll} \sum \limits _{t\in \text {supp}\lambda }{{\lambda _{t}g_{t}}(x),} &{} {\mathrm {if}\;\lambda \ne 0,} \\ 0, &{} \mathrm {if}\;\lambda =0. \end{array} \right. \end{aligned}$$

Following [14, Section 8.3], we say that \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable if there exists a sequence \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) such that

$$\begin{aligned} \inf {(\mathrm {RP})_{0_{X}^{*}}} = \lim _{r}\inf \left\{ f(x):{g_{t}}(x)\le 0,\ \forall t\in S_{r}\right\} , \end{aligned}$$
(5)

and it is reducible if there exists \(S\in U\) such that

$$\begin{aligned} \inf {(\mathrm {RP})_{0_{X}^{*}}} =\inf \left\{ f(x):{g_{t}}(x)\le 0,\ \forall t\in S\right\} . \end{aligned}$$

Obviously, \(\inf {(\mathrm {RP})_{0_{X}^{*}}} =-\infty \) entails that \((\mathrm {RP})_{0_{X}^{*}}\) is reducible which, in turn, implies that \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable.

Discretizable and reducible problems are important in practice. Indeed, on the one hand, discretization methods generate sequences \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) satisfying (5) when \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable; discretization methods for linear and nonlinear semi-infinite programs have been reviewed in [15, Subsection 2.3] and [23], while a hard infinite optimization problem has recently been solved via discretization in [22]. On the other hand, when the robust counterpart (a hard semi-infinite program when the uncertainty set is infinite) of a given uncertain optimization problem is reducible, replacing it by a finite subproblem often yields the desired tractable reformulation (see, e.g., [1] and [8]).
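A tiny discretization sketch (invented data): take \(X=\mathbb{R}\), \(f(x)=x\), \(T=[0,2\pi ]\), and \(g_{t}(x)=\sin t-x\), so that \((\mathrm {RP})_{0_{X}^{*}}\) asks for the smallest upper bound of the sine function and has optimal value 1. Uniform grids \(S_{r}\) yield lower bounds converging to this value, i.e., (5) holds.

```python
import math

def discretized_value(r):
    # Value of inf x s.t. sin(t) - x <= 0 for t in S_r = {2*pi*k/r : k = 0..r}:
    # the finite subproblem reduces to max_{t in S_r} sin(t).
    return max(math.sin(2 * math.pi * k / r) for k in range(r + 1))

values = [discretized_value(r) for r in (3, 5, 9, 10_000)]
assert all(v <= 1.0 for v in values)         # each finite subproblem is a relaxation
assert abs(values[-1] - 1.0) < 1e-6          # the values converge to inf(RP) = 1
```

Here the problem is discretizable but not reducible by a grid missing \(t=\pi /2\): every such finite subproblem has value strictly below 1.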

Example 1

(Discretizing linear infinite optimization problems) Consider the problems introduced in Case 4 above, with \(f:=\left\langle c^{*},\cdot \right\rangle \) and \({g_{t}}:=\left\langle a_{t}^{*},\cdot \right\rangle -b_{t},\) where \(c^{*},a_{t}^{*}\in X^{*}\) and \(b_{t}\in \mathbb {R},\) for all \(t\in T.\) Then, \((\mathrm {RP})_{0_{X}^{*}}\) collapses to the linear infinite programming problem

$$\begin{aligned} (\mathrm {RP})_{0_{X}^{*}}:\;\;\;\inf \quad \left\langle c^{*},x\right\rangle \;\;\mathrm {s.t.}\ \ \left\langle a_{t}^{*},x\right\rangle \le b_{t},\ \forall t\in T, \end{aligned}$$

whose feasible set we denote by A. So, \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\inf _{x\in X}\left\{ \left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) \right\} .\) We assume that \(A\ne \emptyset .\)

Given \(S\in U\) and \(\mu ,\lambda \in \mathbb {R}^{S},\)

$$\begin{aligned} {F_{S}}\left( {x,{\mu }}\right) =\left\{ {\begin{array}{ll} \left\langle c^{*},x\right\rangle {,} &{} {\mathrm {if}\;\ \left\langle a_{s}^{*},x\right\rangle \le b_{s}-{\mu _{s}},\;\forall s\in S,} \\ {+\infty ,} &{} {\mathrm {else,}} \end{array} }\right. \end{aligned}$$
(6)

and

$$\begin{aligned} F_{S}^{*}\left( {{x^{*}},{\lambda }}\right) =\left\{ \begin{array}{ll} \sum \limits _{s\in S}\lambda _{s}b_{s}, &{} \mathrm {if}\;\sum \limits _{s\in S}\lambda _{s}a_{s}^{*}=x^{*}-c^{*}\ \mathrm {and}\ \lambda _{s}\ge 0,\ \forall s\in S, \\ +\infty , &{} \mathrm {else.} \end{array} \right. \end{aligned}$$
(7)

Hence, \((\mathrm {ODP})_{0_{X}^{*}}\) collapses to the so-called Haar dual problem [16] of \((\mathrm {RP})_{0_{X}^{*}},\)

$$\begin{aligned} (\mathrm {ODP})_{0_{X}^{*}}:\quad \sup \left\{ -\sum \limits _{t\in \mathop {\mathrm {supp}}\lambda }\lambda _{t}b_{t}\,:\,-\sum \limits _{t\in \mathop {\mathrm {supp}}\lambda }\lambda _{t}a_{t}^{*}=c^{*},\ \lambda \in \mathbb {R}_{+}^{(T)}\right\} , \end{aligned}$$

i.e.,

$$\begin{aligned} \sup {(\mathrm {ODP})_{0_{X}^{*}}} =-\inf _{S\in U,\,\lambda \in \mathbb {R}_{+}^{S}}\left\{ \sum \limits _{s\in S}\lambda _{s}b_{s}\,:\,\sum \limits _{s\in S}\lambda _{s}a_{s}^{*}=-c^{*}\right\} . \end{aligned}$$
(8)

From (8), if \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\max {(\mathrm {ODP})_{0_{X}^{*}}} \in \mathbb {R},\) then there exist \(S\in U\) and \({{{\lambda }\in }}\mathbb {R}_{+}^{S}\) such that

$$\begin{aligned} \sum \limits _{s\in S}\lambda _{s}\left( a_{s}^{*},b_{s}\right) =-\left( c^{*},\inf {(\mathrm {RP})_{0_{X}^{*}}}\right) . \end{aligned}$$
(9)

Let \(A_{S}:=\left\{ x\in X:{\ \left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}\right\} .\) Given \(x\in A_{S},\) from (9),

$$\begin{aligned} 0\ge \sum \limits _{s\in S}\lambda _{s}\left( \left\langle a_{s}^{*},x\right\rangle -b_{s}\right) =-\left\langle c^{*},x\right\rangle +\inf {(\mathrm {RP})_{0_{X}^{*}}}. \end{aligned}$$

Since

$$\begin{aligned} \inf {(\mathrm {RP})_{0_{X}^{*}}}\le \left\langle c^{*},x\right\rangle ,\quad \forall x\in A_{S}, \end{aligned}$$

while \(A\subset A_{S}\) gives \(\inf \left\{ \left\langle c^{*},x\right\rangle :x\in A_{S}\right\} \le \inf {(\mathrm {RP})_{0_{X}^{*}}},\) we get

$$\begin{aligned} \inf {(\mathrm {RP})_{0_{X}^{*}}}=\inf \left\{ \left\langle c^{*},x\right\rangle :{\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}\right\} , \end{aligned}$$
(10)

so that \((\mathrm {RP})_{0_{X}^{*}}\) is reducible. Conversely, assume that (10) holds with \(\inf {(\mathrm {RP})_{0_{X}^{*}}}\in \mathbb {R}\) and that \(\mathop {\mathrm {cone}}\left\{ \left( {{{{{{a_{t}^{*},}b_{t}}}}}}\right) :t\in T\right\} +\mathbb {R}_{+}\left( 0_{X}^{*},1\right) \) is weak\(^{*}\)-closed. Since \(\inf {(\mathrm {RP})_{0_{X}^{*}}}\le \left\langle c^{*},x\right\rangle \) is a consequence of \(\left\{ {\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}\right\} ,\) by the nonhomogeneous Farkas lemma in lcHtvs [10] and the closedness assumption, there exist \(\lambda \in \mathbb {R}_{+}^{S}\) and \(\mu \in \mathbb {R}_{+}\) such that

$$\begin{aligned} {-}\left( c^{*},\inf {(\mathrm {RP})_{0_{X}^{*}}}\right) ={{{\sum \limits _{s\in S}{{\lambda _{s}}}}}}\left( {{{{{{a_{s}^{*},}b_{s}}}}}}\right) {+}\mu \left( 0_{X}^{*},1\right) , \end{aligned}$$

which implies that \(\mu =0\) and \(\inf {(\mathrm {RP})_{0_{X}^{*}}} =\max {(\mathrm {ODP})_{0_{X}^{*}}}\). The closedness assumption holds when X is finite dimensional (guaranteeing that any finitely generated convex cone in \(X^{*}\times \mathbb {R}\) is closed). So, as proved in [14, Theorem 8.3], a linear semi-infinite program \((\mathrm {RP})_{0_{X}^{*}}\) is reducible if and only if (10) holds if and only if \(\inf \mathrm {(RP)}_{0_{X}^{*}}=\max {(\mathrm {ODP})_{0_{X}^{*}}}\).

We now assume that \(\inf {(\mathrm {RP})_{0_{X}^{*}}}=\sup {(\mathrm {ODP})_{0_{X}^{*}}} \in \mathbb {R}.\) By (8), there exist sequences \(\left( S_{r}\right) _{r\in \mathbb {N}}\subset U\) and \(\left( \lambda ^{r}\right) _{r\in \mathbb {N}},\) with \(\lambda ^{r}\in \mathbb {R}_{+}^{S_{r}}\) and \(\sum \nolimits _{s\in S_{r}}\lambda _{s}^{r}a_{s}^{*}=-c^{*}\) for all \(r\in \mathbb {N},\) such that

$$\begin{aligned} \lim _{r}\sum \limits _{s\in S_{r}}\lambda _{s}^{r}b_{s}=-\sup {(\mathrm {ODP})_{0_{X}^{*}}}. \end{aligned}$$

Denote \(v_{r}:=-\sum \nolimits _{s\in S_{r}}\lambda _{s}^{r}b_{s}.\) Then,

$$\begin{aligned} \sum \limits _{s\in S_{r}}\lambda _{s}^{r}\left( a_{s}^{*},b_{s}\right) =-\left( c^{*},v_{r}\right) , \end{aligned}$$
(11)

with \(\lim _{r}v_{r}=\inf {(\mathrm {RP})_{0_{X}^{*}}}.\) Let \(A_{r}:=\left\{ x\in X:\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S_{r}\right\} ,\) \(r\in \mathbb {N}.\) Given \(x\in A_{r},\) from (11),

$$\begin{aligned} 0\ge {{{\sum \limits _{s\in S_{r}}{{\lambda _{s}^{r}}}}}}\left( {\left\langle a_{s}^{*},x\right\rangle -{{{{b_{s}}}}}}\right) ={-}\left\langle c^{*},x\right\rangle +v_{r}. \end{aligned}$$

Since \(v_{r}\le \left\langle c^{*},x\right\rangle \) for all \(x\in A_{r}, \)

$$\begin{aligned} v_{r}\le \inf \left\{ \left\langle c^{*},x\right\rangle :{\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}_{r}\right\} \le \inf {(\mathrm {RP})_{0_{X}^{*}}}. \end{aligned}$$

Thus,

$$\begin{aligned} \lim _{r}\inf \left\{ \left\langle c^{*},x\right\rangle :{\left\langle a_{s}^{*},x\right\rangle \le b_{s},\;\forall s\in S}_{r}\right\} =\inf {(\mathrm {RP})_{0_{X}^{*}}}, \end{aligned}$$

i.e., \((\mathrm {RP})_{0_{X}^{*}}\) is discretizable. Once again, the converse is true in linear semi-infinite programming [14, Corollary 8.2.1], but not in linear infinite programming.

3 Robust Conjugate Duality

We now turn back to the general perturbation function \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty },\) \(u\in U\), and let \(\Delta :=\left\{ (u,y_{u}^{*}):u\in U,y_{u}^{*}\in Y_{u}^{*}\right\} \) be the disjoint union of the spaces \(Y_{u}^{*}\). Recall that

$$\begin{aligned} (\mathrm {RP})_{x^{*}}:\quad \inf _{x\in X}\left\{ {\mathop {\sup }\limits _{u\in U}{F_{u}}\left( {x,{0_{u}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} , \end{aligned}$$
(12)
$$\begin{aligned} (\mathrm {ODP})_{{x^{*}}}:\quad \ \ \sup \limits _{(u,{y}_{u}^{*})\in \Delta }-F_{u}^{*}\left( {{x^{*}},y_{u}^{*}} \right) . \end{aligned}$$
(13)

Define \(p\in \overline{\mathbb {R}}^{X}\) and \(q\in \overline{\mathbb {R}}^{X^*}\) such that

$$\begin{aligned} p:=\mathop {\sup }\limits _{u\in U}{F_{u}}(\cdot ,{0_{u}})\ \ \ \text {and} \ \ \ q:=\mathop {\inf }\limits _{\left( {u,y_{u}^{*}}\right) \in \Delta }\;F_{u}^{*}(\cdot ,y_{u}^{*}). \end{aligned}$$
(14)

One then has

$$\begin{aligned} \left\{ {\begin{array}{l} {{p^{*}}({x^{*}})=-\inf {{(\mathrm {RP})}_{{x^{*}}}},\quad q({x^{*}})=-\sup {(\mathrm {ODP)}_{{x^{*}}} }} \\ {{q^{*}}=\mathop {\sup }\limits _{\left( {u,y_{u}^{*}}\right) \in \Delta }{{\left( {F_{u}^{*}\left( \cdot {,y_{u}^{*}}\right) }\right) }^{*}}=\mathop {\sup }\limits _{u\in U}F_{u}^{**}\left( \cdot {,{0_{u}}}\right) \le p},\end{array} }\right. \end{aligned}$$
(15)

and hence,

  • Weak robust duality always holds

    $$\begin{aligned} p^{*}(x^{*})\le q^{**}(x^{*})\le q(x^{*}),\text { for all }x^{*}\in X^{*}. \end{aligned}$$
    (16)
  • Robust duality at \(x^{*}\) means

    $$\begin{aligned} p^{*}(x^{*})=q(x^{*}). \end{aligned}$$
    (17)

Robust duality at \(x^{*}\) also holds when either \(p^{*}(x^{*})=+\infty \) or \(q(x^{*})=-\infty .\)

As an illustration, consider Case 4 with linear data, as in Example 1. Then, \(p\left( x\right) =\left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) ,\) \(\mathop {\mathrm {dom}}p=A,\) and so

$$\begin{aligned} p^{*}\left( 0_{X}^{*}\right) =\sup _{x\in X}\left( -p\left( x\right) \right) =-\inf _{x\in X}\left\{ \left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) \right\} =-\inf {(\mathrm {RP})_{0_{X}^{*}}}. \end{aligned}$$

Similarly, from (7),

$$\begin{aligned} q\left( {{x^{*}}}\right) =\inf _{S\in U,\,\lambda \in \mathbb {R}_{+}^{S}}\left\{ \sum \limits _{s\in S}\lambda _{s}b_{s}\,:\,\sum \limits _{s\in S}\lambda _{s}a_{s}^{*}=x^{*}-c^{*}\right\} , \end{aligned}$$

\(\mathop {\mathrm {dom}}q=c^{*}+\mathop {\mathrm {cone}}\left\{ a_{t}^{*}:t\in T\right\} \) and

$$\begin{aligned} q\left( 0_{X}^{*}\right) =\inf _{S\in U,\,\lambda \in \mathbb {R}_{+}^{S}}\left\{ \sum \limits _{s\in S}\lambda _{s}b_{s}\,:\,\sum \limits _{s\in S}\lambda _{s}a_{s}^{*}=-c^{*}\right\} =-\sup {(\mathrm {ODP})_{0_{X}^{*}}}. \end{aligned}$$
(18)

3.1 Basic Lemmas

Let us introduce the necessary notations. Given a lcHtvs Z, an extended real-valued function \(h\in \overline{\mathbb {R}}^{Z}\), and \(\varepsilon \in \mathbb {R}_{+}\), the set of \(\varepsilon \)-minimizers of h is defined by

$$\begin{aligned} \varepsilon -\text {argmin }h:=\left\{ \begin{array}{ll} \{z\in Z\,:\,h(z)\le \inf _{Z}h+\varepsilon \}, &{} \mathrm {if} \;\;\inf \limits _{Z}h\in \mathbb {R}, \\ \emptyset , &{} \mathrm {if}\;\;\inf \limits _{Z}h\not \in \mathbb {R},\end{array}\right. \end{aligned}$$

or, equivalently,

$$\begin{aligned} \varepsilon -\text {argmin }h=\{z\in h^{-1}(\mathbb {R})\,:\,h(z)\le \inf _{Z}h+\varepsilon \}. \end{aligned}$$

Note that \(\varepsilon -\)argmin \(h\ne \emptyset \) when \(\inf _{Z}h\in \mathbb {R}\) and \(\varepsilon >0.\) Various calculus rules involving \(\varepsilon -\)argmin have been given in [26].
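As a finite-dimensional sketch (our own toy example, assuming a grid discretization of \(Z=\mathbb {R}\)), the set of \(\varepsilon \)-minimizers of \(h(z)=z^{2}\) is exactly \([-\sqrt{\varepsilon },\sqrt{\varepsilon }]\), which a grid approximation recovers:

```python
import numpy as np

# Grid sketch of eps-argmin for h(z) = z^2 (our own toy example):
# since inf h = 0 is finite, eps-argmin h = [-sqrt(eps), sqrt(eps)].
def eps_argmin(h, grid, eps):
    vals = h(grid)
    m = vals.min()                    # finite infimum on the grid
    return grid[vals <= m + eps]

grid = np.linspace(-2.0, 2.0, 4001)   # step 0.001
sol = eps_argmin(lambda z: z**2, grid, eps=0.25)
```

With `eps=0.25` the result approximates the interval \([-0.5,0.5]\) up to the grid spacing.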

The \(\varepsilon \)-subdifferential of h at a point \(a\in Z\) is the set (see, for instance, [19])

$$\begin{aligned} \partial ^{\varepsilon }h(a):= & {} \left\{ \begin{array}{ll} \{z^{*}\in Z^{*}\,:\,h(z)\ge h(a)+\langle z^{*},z-a\rangle -\varepsilon ,\forall z\in Z\}, &{} \mathrm {if}\;\;h(a)\in \mathbb {R}, \\ \emptyset , &{} \mathrm {if}\;\;h(a)\not \in \mathbb {R}, \end{array} \right. \\= & {} \Big \{z^{*}\in (h^{*})^{-1}(\mathbb {R})\,:\,h^{*}(z^{*})+h(a)\le \langle z^{*},a\rangle +\varepsilon \Big \}. \end{aligned}$$

It can be checked that if \(h\in \overline{\mathbb {R}}^{Z}\) is convex and \(h(a)\in \mathbb {R}\), then \(\partial ^{\varepsilon }h(a)\ne \emptyset \) for all \(\varepsilon >0\) if and only if h is lower semi-continuous at a.
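As a worked example (our own, not from the text), for \(h(z)=z^{2}\) one has \(h^{*}(z^{*})=z^{*2}/4\), and the conjugate characterization above gives \(\partial ^{\varepsilon }h(a)=[2a-2\sqrt{\varepsilon },\,2a+2\sqrt{\varepsilon }]\); this can be checked directly:

```python
import math

# eps-subdifferential of h(z) = z^2 via the conjugate characterization
# (our own toy example): h*(z*) = z*^2/4, so
#   z* in eps-subdiff h(a)  <=>  z*^2/4 + a^2 <= z* a + eps
#                           <=>  (z*/2 - a)^2 <= eps,
# i.e. eps-subdiff h(a) = [2a - 2*sqrt(eps), 2a + 2*sqrt(eps)].
def in_eps_subdiff(zstar, a, eps):
    hstar = lambda zs: zs * zs / 4.0      # conjugate of z^2
    return hstar(zstar) + a * a <= zstar * a + eps

a, eps = 1.0, 0.09
lo = 2 * a - 2 * math.sqrt(eps)           # left endpoint, ~1.4
hi = 2 * a + 2 * math.sqrt(eps)           # right endpoint, ~2.6
```

Note that the exact subgradient \(2a\) always belongs to \(\partial ^{\varepsilon }h(a)\), as expected from \(\partial h(a)\subset \partial ^{\varepsilon }h(a)\).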

The inverse of the set-valued mapping \(\partial ^{\varepsilon }h:Z\rightrightarrows Z^{*}\) is denoted by \(M^{\varepsilon }h:Z^{*}\rightrightarrows Z.\) For each \((\varepsilon ,z^{*})\in \mathbb {R}_{+}\times Z^{*}\), we have

$$\begin{aligned} \Big (\partial ^{\varepsilon }h\Big )^{-1}(z^{*})=\Big (M^{\varepsilon }h\Big )(z^{*})=\varepsilon -\text {argmin }(h-z^{*}). \end{aligned}$$

Denoting by \(\partial ^{\varepsilon }h^{*}(z^{*})\) the \(\varepsilon \)-subdifferential of \(h^{*}\) at \(z^{*}\in Z^{*}\), namely,

$$\begin{aligned} \partial ^{\varepsilon }h^{*}(z^{*})=\Big \{z\in (h^{**})^{-1}(\mathbb {R})\,:\,h^{**}(z)+h^{*}(z^{*})\le \langle z^{*},z\rangle +\varepsilon \Big \}, \end{aligned}$$

where \(h^{**}(z):=\sup \limits _{z^{*}\in Z^{*}}\{\langle z^{*},z\rangle -h^{*}(z^{*})\}\) is the biconjugate of h, we have

$$\begin{aligned} (M^{\varepsilon }h)(z^{*})\subset (\partial ^{\varepsilon }h^{*})(z^{*}),\ \forall (\varepsilon ,z^{*})\in \mathbb {R}_{+}\times Z^{*}, \end{aligned}$$

with equality if and only if \(h=h^{**}.\)

For each \(\varepsilon \in \mathbb {R}_{+}\), we consider the set-valued mapping \(S^{\varepsilon }:X^{*}\rightrightarrows X\) as follows:

$$\begin{aligned} \begin{array}{ll} S^{\varepsilon }(x^{*})&:=\left\{ x\in p^{-1}(\mathbb {R})\,:\,p(x)-\langle x^{*},x\rangle \le -q(x^{*})+\varepsilon \right\} . \end{array} \end{aligned}$$
(19)

If \(q(x^{*})=-\infty \), then \(S^{\varepsilon }(x^{*})=p^{-1}(\mathbb {R}).\) If \(q(x^{*})=+\infty ,\) then \(S^{\varepsilon }(x^{*})=\emptyset .\)

Since \(p^{*}\le q\), it is clear that

$$\begin{aligned} S^{\varepsilon }(x^{*})\subset (M^{\varepsilon }p)(x^{*}),\ \forall \varepsilon \ge 0,\ \forall x^{*}\in X^{*}. \end{aligned}$$
(20)

Lemma 1

Assume that \(\mathop {\mathrm {dom}} p\ne \emptyset \). Then, for each \(x^{*}\in X^{*} \), the next statements are equivalent:

\(\mathrm {(i)}\)   Robust duality holds at \(x^{*}\) ,  i.e., \(p^{*}(x^{*})=q(x^{*})\),

\(\mathrm {(ii)} \) \(\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}),\quad \forall \varepsilon \ge 0\),

\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0:\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}),\quad \forall \varepsilon \in ]0,\bar{\varepsilon }[\).

Proof

\([\mathrm {(i)}\Rightarrow \mathrm {(ii)}]\) By definition

$$\begin{aligned} \left( M^{\varepsilon }p\right) (x^{*})= & {} \varepsilon -\mathop {\mathrm {argmin}}\nolimits (p-x^{*}) \\= & {} \left\{ x\in p^{-1}(\mathbb {R})\,:\,p(x)-\langle x^{*},x\rangle \le -p^{*}(x^{*})+\varepsilon \right\} . \end{aligned}$$

By \(\mathrm {(i)}\) we thus have \(\left( M^{\varepsilon }p\right) (x^{*})=S^{\varepsilon }(x^{*}).\)

\([\mathrm {(ii)}\Rightarrow \mathrm {(iii)}]\) This implication is obvious.

\([\mathrm {(iii)}\Rightarrow \mathrm {(i)}]\) Since \(p^{*}(x^{*})\le q(x^{*})\), \(\mathrm {(i)}\) holds if \(p^{*}(x^{*})=+\infty .\) Moreover, since \(\mathop {\mathrm {dom}}p\ne \emptyset \), one has \(p^{*}(x^{*})\ne -\infty .\) Let now \(p^{*}(x^{*})\in \mathbb {R}\). In order to get a contradiction, assume that \(p^{*}(x^{*})\not =q(x^{*})\). Then \(p^{*}(x^{*})<q(x^{*})\) and there exists \(\varepsilon \in ]0,\bar{\varepsilon }[\) such that \(p^{*}(x^{*})+\varepsilon <q(x^{*})\). Since \(\inf _{x\in X}\left\{ p(x)-\left\langle x^{*},x\right\rangle \right\} =-p^{*}(x^{*})\in \mathbb {R}\) and \(\varepsilon >0,\) we have \(\varepsilon -\mathop {\mathrm {argmin}}\nolimits (p-x^{*})\ne \emptyset .\) Let us pick \(x\in (M^{\varepsilon }p)(x^{*})=\varepsilon -\mathop {\mathrm {argmin}}\nolimits (p-x^{*}).\) By \(\mathrm {(iii)}\), we have \(x\in S^{\varepsilon }(x^{*})\) and

$$\begin{aligned} -p^{*}(x^{*})\le p(x)-\langle x^{*}, x\rangle \le -q(x^{*})+\varepsilon , \end{aligned}$$

which contradicts \(p^{*}(x^{*})+\varepsilon < q(x^{*})\). \(\square \)

For each \(\varepsilon \in \mathbb {R}_{+}\), let us introduce now the following set-valued mapping \(J^{\varepsilon }:U\rightrightarrows X\):

$$\begin{aligned} J^{\varepsilon }(u):=\left\{ x\in p^{-1}(\mathbb {R})\,:\,p(x)\le F_{u}(x,0_{u})+\varepsilon \right\} , \end{aligned}$$
(21)

with the aim of making explicit the set \(S^{\varepsilon }(x^{*})\). To this purpose, given \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\), let us introduce the set-valued mapping \(A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}:X^{*}\rightrightarrows X\) such that

$$\begin{aligned} A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*}):=\Big \{x\in J^{\varepsilon _{1}}(u)\,:\,(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\Big \}. \end{aligned}$$

Lemma 2

For each \(x^{*}\in X^{*}\), \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\), one has

$$\begin{aligned} A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*})\ \subset \ S^{\varepsilon _{1}+\varepsilon _{2}}(x^{*}). \end{aligned}$$

Proof

Let \(x\in J^{\varepsilon _{1}}(u)\) be such that \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*}).\) Then we have \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\) and \(F_{u}(x,0_{u})\in \mathbb {R}\). Moreover

$$\begin{aligned} F_{u}(x,0_{u})+\varepsilon _{1}\ge p(x)\ge F_{u}(x,0_{u})\in \mathbb {R}\text {,} \end{aligned}$$

implying \(p(x)\in \mathbb {R}\) and, by (15),

$$\begin{aligned} p(x)-\langle x^{*},x\rangle\le & {} F_{u}(x,0_{u})-\langle x^{*},x\rangle +\varepsilon _{1}\le -F_{u}^{*}(x^{*},y_{u}^{*})+\varepsilon _{1}+\varepsilon _{2} \\\le & {} -q(x^{*})+\varepsilon _{1}+\varepsilon _{2}, \end{aligned}$$

that means \(x\in S^{\varepsilon _{1}+\varepsilon _{2}}(x^{*})\). \(\square \)

Lemma 3

Assume that

$$\begin{aligned} \mathop {\mathrm {dom}}F_{u}\ne \emptyset , \forall u\in U. \end{aligned}$$
(22)

Then, for each \(x^{*}\in X^{*},\varepsilon \in \mathbb {R}_{+},\eta >0 \), one has

$$\begin{aligned} {S^{\varepsilon }}({x^{*}})\ \ \subset \ \ \mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0} }A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*}). \end{aligned}$$

Proof

Let \(x\in p^{-1}(\mathbb {R})\) be such that \(x\in S^{\varepsilon }(x^{*})\), i.e.,

$$\begin{aligned} p(x)-\langle x^{*},x\rangle \le -q(x^{*})+\varepsilon . \end{aligned}$$

We then have, for any \(\eta >0\),

$$\begin{aligned} q(x^{*})<\langle x^{*},x\rangle -p(x)+\varepsilon +\eta \end{aligned}$$

and, by definition of q and p,  there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})\le \langle x^{*},x\rangle -p(x)+\varepsilon +\eta \le \langle x^{*},x\rangle -F_{u}(x,0_{u})+\varepsilon +\eta . \end{aligned}$$
(23)

Since \(p(x)\in \mathbb {R}\), \(F_{u}^{*}(x^{*},y_{u}^{*})\ne +\infty \). In fact, by (22), \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\). Similarly, \(F_{u}(x,0_{u})\in \mathbb {R}\). Setting

$$\begin{aligned} \alpha _{1}:=p(x)-F_{u}(x,0_{u}),\ \alpha _{2}:=F_{u}^{*}(x^{*},y_{u}^{*})+F_{u}(x,0_{u})-\langle x^{*},x\rangle , \end{aligned}$$

we get \(\alpha _{1}\in \mathbb {R}_{+}\), \(\alpha _{2}\in \mathbb {R}.\) Actually \(\alpha _{2}\ge 0\) since, by definition of conjugate,

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})=\sup _{z\in X,y_{u}\in Y_{u}}\left\{ \langle x^{*},z\rangle +\langle y_{u}^{*},y_{u}\rangle -F_{u}(z,y_{u})\right\} , \end{aligned}$$

in particular, taking \(z=x\) and \(y_{u}=0_{u},\)

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})\ge \langle x^{*},x\rangle -F_{u}(x,0_{u}), \end{aligned}$$

so that

$$\begin{aligned} F_{u}^{*}(x^{*},y_{u}^{*})+F_{u}(x,0_{u})-\langle x^{*},x\rangle \ge 0. \end{aligned}$$

Then, by (23), \(0\le \alpha _{1}+\alpha _{2}\le \varepsilon +\eta \). Consequently, there exist \(\varepsilon _{1},\varepsilon _{2}\in \mathbb {R}_{+}\) such that \(\alpha _{1}\le \varepsilon _{1}\), \(\alpha _{2}\le \varepsilon _{2}\), \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon +\eta \). Now \(\alpha _{1}\le \varepsilon _{1}\) means that \(x\in J^{\varepsilon _{1}}(u)\) and \(\alpha _{2}\le \varepsilon _{2}\) means that \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\), and we have \(x\in A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*})\). \(\square \)

For each \(x^{*}\in X^{*}\), \(\varepsilon \in \mathbb {R}_{+}\), let us define

$$\begin{aligned} \mathcal {A}^{\varepsilon }(x^{*}):= & {} \bigcap \limits _{\eta>0}\mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*}) \\= & {} \bigcap _{\eta >0}\mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}}{{\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}}}\Big \{x\in J^{\varepsilon _{1}}(u)\,:\,(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\Big \} . \end{aligned}$$

3.2 Robust Duality

We can now state the main result characterizing robust conjugate duality.

Theorem 1

\(\mathbf {(Robust\,duality)}\) Assume that \(\mathop {\mathrm {dom}} p\ne \emptyset \). Then for each \(x^{*}\in X^{*} \), the next statements are equivalent:

\(\mathrm {(i)}\)   \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\),

\(\mathrm {(ii)} \) \(\left( M^{\varepsilon }p\right) (x^{*})=\mathcal {A}^{\varepsilon }(x^{*}),\quad \forall \varepsilon \ge 0\),

\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }>0: \ \left( M^{\varepsilon }p\right) (x^{*})=\mathcal {A}^{\varepsilon }(x^{*}),\quad \forall \varepsilon \in ]0,\bar{\varepsilon }[\).

Proof

We firstly claim that if \(\mathop {\mathrm {dom}} p\ne \emptyset \) then for each \(x^* \in X^*\), \(\varepsilon \in \mathbb {R}_+\), it holds:

$$\begin{aligned} S^\varepsilon (x^*) \ = \ \mathcal {A}^\varepsilon (x^*). \end{aligned}$$
(24)

Indeed, as \(\mathop {\mathrm {dom}} p\ne \emptyset \), (22) holds. It then follows from Lemma 3 that \(S^\varepsilon (x^*) \subset \mathcal {A}^\varepsilon (x^*)\). On the other hand, for each \(\eta > 0\), one has, by Lemma 2,

$$\begin{aligned} \mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}} A^{(\varepsilon _1, \varepsilon _2)}_{(u, y_u^*)}(x^*) \ \subset \ S^{\varepsilon + \eta } (x^*). \end{aligned}$$

Taking the intersection over all \(\eta > 0\) we get

$$\begin{aligned} \mathcal {A}^\varepsilon (x^*) \subset \bigcap \limits _{\eta > 0} S^{\varepsilon + \eta } (x^*) = S^\varepsilon (x^*), \end{aligned}$$

and (24) follows. Taking into account the fact that (i) means \(p^{*}(x^{*})=q(x^{*})\), the conclusion now follows from (24) and Lemma 1. \(\square \)

For the deterministic optimization problem with linear perturbations (i.e., non-uncertain case where U is a singleton), the next result is a direct consequence of Theorem 1.

Corollary 1

\(\mathbf {(Robust\,duality\, for\, Case\, 1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\) be such that \(\mathop {\mathrm {dom}}F(\cdot ,0_{Y})\ne \emptyset \). Then, for each \(x^{*}\in X^{*},\) the fundamental duality formula (2) holds, i.e.,

$$\begin{aligned} \mathop {\inf }\limits _{x\in X}\left\{ {F(x,{0_{Y}})-\left\langle {{x^{*} },x}\right\rangle }\right\} =\mathop {\sup }\limits _{y^{*}\in {Y^{*}}}-F^{*}(x^{*},y^{*}), \end{aligned}$$

if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 1 holds, where

$$\begin{aligned} \mathcal {A}^{\varepsilon }(x^{*})=\bigcap \limits _{\eta >0}\bigcup \limits _{y^{*}\in Y^{*}}\Big \{x\in X\,:\,\left( x,0_{Y}\right) \in \left( M^{\varepsilon +\eta }F\right) (x^{*},y^{*})\Big \}. \end{aligned}$$
(25)

Proof

Let \(F_{u}=F:X\times Y\rightarrow \mathbb {R}_{\infty },\;p=F(\cdot ,0_{Y})\). In this case, one has

$$\begin{aligned} J^{\varepsilon }(u)=\left\{ x\in X\,:\,F(x,0_{Y})\in \mathbb {R}\right\} ,\ \forall \varepsilon \ge 0, \end{aligned}$$

and \(\mathcal {A}^{\varepsilon }(x^{*})\) takes the form (25). The conclusion follows from Theorem 1. \(\square \)

For uncertain optimization problems without perturbations, the following result is a consequence of Theorem 1.

Corollary 2

\(\mathbf {(Robust\, duality\, for\, Case\, 2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\) be a family of extended real-valued functions and let \(p=\sup _{u\in U}f_{u}\) be such that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, for each \(x^{*}\in X^{*}, \) the \(\inf -\sup \) duality in robust optimization (4) holds, i.e.,

$$\begin{aligned} {\left( {\mathop {\sup }\limits _{u\in U}{f_{u}}}\right) ^{*}}({x^{*}} )=\mathop {\inf }\limits _{u\in U}f_{u}^{*}({x^{*}}) \end{aligned}$$

if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 1 holds, where

$$\begin{aligned} \mathcal {A}^{\varepsilon }({x^{*}})=\bigcap \limits _{\eta >0}\;\bigcup \limits _{u\in U}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}\Big ( J^{\varepsilon _{1}}(u)\cap (M^{\varepsilon _{2}}f_{u})(x^{*})\Big ), \end{aligned}$$
(26)

with

$$\begin{aligned} J^{\varepsilon _{1}}(u)=\big \{x\in p^{-1}(\mathbb {R})\,:\,f_{u}(x)\ge p(x)-\varepsilon _{1}\big \}. \end{aligned}$$

Proof

Let \(F_{u}(x,y_{u})=f_{u}(x),\) for all \(u\in U\) and let \(p=\mathop {\sup }\limits _{u\in U}{f_{u}}\). Then, by (21),

$$\begin{aligned} {J^{\varepsilon }}(u)=\left\{ x\in p^{-1}(\mathbb {R})\,:\,f_{u}(x)\ge p(x)-\varepsilon \right\} ,\ \forall \varepsilon \ge 0. \end{aligned}$$
(27)

Moreover, recalling (3), for each \(u\in U\) such that \(\mathop {\mathrm {dom}}f_{u}\ne \emptyset \), \((x^{*},y_{u}^{*})\in X^{*}\times Y_{u}^{*}\), and \(\varepsilon \ge 0\),

$$\begin{aligned} {\left( {{M^{\varepsilon }}{F_{u}}}\right) \left( {{x^{*}},y_{u}^{*}}\right) =\left\{ {\begin{array}{ll} {\left( {{M^{\varepsilon }}{f_{u}}}\right) \left( {x^{*}}\right) ,} &{} {\mathrm {if}\ \ y_{u}^{*}=0_{u}^{*},} \\ \emptyset , &{} \text {else}. \end{array} \;}\right. } \end{aligned}$$
(28)

Finally, for each \((x^{*},\varepsilon )\in X^{*}\times \mathbb {R}_{+} \), \(\mathcal {A}^{\varepsilon }({x^{*}})\) takes the form (26). The conclusion now follows from Theorem 1. \(\square \)
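A toy instance where the \(\inf -\sup \) duality of Corollary 2 does hold (our own example, with the conjugates computed by hand and the suprema approximated on a grid): \(f_{u}(x)=x^{2}+u\) with \(U=[0,1]\) gives \(p(x)=x^{2}+1\), \(f_{u}^{*}(x^{*})=x^{*2}/4-u\), and \(q(x^{*})=x^{*2}/4-1=p^{*}(x^{*})\).

```python
import numpy as np

# Toy family (our own): f_u(x) = x^2 + u, u in U = [0,1] (sampled).
# Then p(x) = sup_u f_u(x) = x^2 + 1 and f_u*(x*) = x*^2/4 - u, so
# q(x*) = inf_u f_u*(x*) = x*^2/4 - 1 = p*(x*): robust duality holds.
xs = np.linspace(-10.0, 10.0, 20001)          # grid for the sup defining p*
us = np.linspace(0.0, 1.0, 11)                # sample of U
xstar = 2.0
p_star = np.max(xstar * xs - (xs**2 + 1.0))   # grid value of p*(2)
q_val = np.min(xstar**2 / 4.0 - us)           # q(2) over the sampled u
```

Both quantities agree (they equal 0 at \(x^{*}=2\)), up to the grid resolution.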

4 Strong Robust Duality

We retain the notations in Section 3 and consider the robust problem \((\mathrm {RP})_{x^{*}}\) and its robust dual problem \((\mathrm {ODP}) _{x^{*}}\) given in (12) and (13), respectively. Let p and q be the functions defined by (14) and recall the relations in (15), that is,

$$\begin{aligned} \left\{ {\begin{array}{l} {{p^{*}}({x^{*}})=-\inf {{(\mathrm {RP})}_{{x^{*}}}},\quad q({x^{*}})=-\sup {(\mathrm {ODP)}_{{x^{*}}}}} \\ {{q^{*}}=\mathop {\sup }\limits _{\left( {u,y_{u}^{*}}\right) \in \Delta }{{\left( {F_{u}^{*}\left( \cdot {,y_{u}^{*}}\right) }\right) }^{*}}=\mathop {\sup }\limits _{u\in U}F_{u}^{**}\left( \cdot {,{0_{u}}}\right) \le p}.\end{array} }\right. \end{aligned}$$

In this section we establish characterizations of strong robust duality at \(x^{*}\). Recall that strong robust duality at \(x^*\) means that \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}}\), which is the same as

$$\begin{aligned} \exists (u,y_{u}^{*})\in \Delta :p^{*}(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*}). \end{aligned}$$

For this we need a technical lemma. First, given \(x^* \in X^*\), \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), and \(\varepsilon \ge 0\), let us introduce the set

$$\begin{aligned} B_{(u,y_{u}^{*})}^{\varepsilon }({x^{*}})= & {} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}} =\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}} A^{(\varepsilon _1, \varepsilon _2)}_{(u, y_u^*)}(x^*) \\= & {} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}} =\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\Big \{x\in J^{\varepsilon _1}(u)\, :\, (x,0_{u}) \in ( M^{ \varepsilon _{2}} F_{u}) (x^*,y_u^*)\Big \}. \end{aligned}$$

Lemma 4

Assume that \(\mathop {\mathrm {dom}}F_{u}\ne \emptyset \), for all \(u\in U,\) holds and let \(x^{*}\in X^{*}\) be such that

$$\begin{aligned} q(x^{*})=\min \limits _{ \begin{array}{l} u \in U \\ y_u^* \in Y_u^* \end{array}}F_{u}^{*}(x^{*},y_{u}^{*}). \end{aligned}$$

Then there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that

$$\begin{aligned} S^{\varepsilon }(x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}),\ \forall \varepsilon \ge 0. \end{aligned}$$

Proof

By Lemma 2 we have \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\subset S^{\varepsilon }(x^{*})\). Conversely, let \(x\in S^{\varepsilon }(x^{*})\). By the exactness of q at \(x^{*}\), there exist \(u\in U\) and \(y_{u}^{*}\in Y_{u}^{*}\) such that

$$\begin{aligned} p(x)-\langle x^{*},x\rangle \le -F_{u}^{*}(x^{*},y_{u}^{*})+\varepsilon . \end{aligned}$$

Since \(p(x)\in \mathbb {R}\) and \(\mathop {\mathrm {dom}}F_{u}\ne \emptyset \), for all \(u\in U,\) we have \(F_{u}^{*}(x^{*},y_{u}^{*})\in \mathbb {R}\), \(F_{u}(x,0_{u})\in \mathbb {R}\),

$$\begin{aligned} \Big (p(x)-F_{u}(x,0_{u})\Big )+\Big (F_{u}(x,0_{u})+F_{u}^{*}(x^{*},y_{u}^{*})-\langle x^{*},x\rangle \Big )\le \varepsilon . \end{aligned}$$

Consequently, there exist \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\) such that \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon ,\)

$$\begin{aligned} p(x)-F_{u}(x,0_{u})\le \varepsilon _{1}\ \text {and}\ F_{u}(x,0_{u})+F_{u}^{*}(x^{*},y_{u}^{*})-\langle x^{*},x\rangle \le \varepsilon _{2}, \end{aligned}$$

that is, \(x\in J^{\varepsilon _{1}}(u)\ \text {and }(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*}) \). Thus, \(x\in A_{(u,y_{u}^{*})}^{(\varepsilon _{1},\varepsilon _{2})}(x^{*})\subset B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\), since \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon .\) \(\square \)

Theorem 2

\(\mathbf {(Strong\,robust\,duality)}\) Assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \) and let \(x^{*}\in X^{*}\). The next statements are equivalent:

\(\mathrm {(i)} \)  \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}}\),

\(\mathrm {(ii)}\) \(\exists u\in U,\ \exists y_{u}^{*}\in Y_{u}^{*}:\) \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}),\forall \varepsilon \ge 0\),

\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }>0,\ \exists u\in U,\exists y_{u}^{*}\in Y_{u}^{*}:\) \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}),\forall \varepsilon \in ]0,\bar{\varepsilon }[\).

Proof

Observe firstly that (i) means that

$$\begin{aligned} p^{*}(x^{*})=q(x^{*})=\min \limits _{\begin{array}{l} u \in U \\ y_u^* \in Y_u^* \end{array}}F_{u}^{*}(x^{*},y_{u}^{*}). \end{aligned}$$

As \(\mathop {\mathrm {dom}}p\ne \emptyset \), (22) holds, and then by Lemmas 1 and 4, \(\mathrm {(i)}\) implies the remaining conditions, which are equivalent to each other, and also that \(\mathrm {(iii)}\) implies \(p^{*}(x^{*})=q(x^{*})\).

We now prove that \(\mathrm {(iii)}\) implies \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})\). Assume by contradiction that there exists \(\varepsilon >0\) such that \(q(x^{*})+\varepsilon <F_{u}^{*}(x^{*},y_{u}^{*})\); without loss of generality one can take \(\varepsilon \in \left] 0,\bar{\varepsilon }\right[ ,\) where \(\bar{\varepsilon }>0\) is the one given by (iii). Then, by \(\mathrm {(iii)}\), \(\left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}).\)

Pick \(x\in \left( M^{\varepsilon }p\right) (x^{*})=B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*}).\) Then, there are \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\), \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon \), \(x\in J^{\varepsilon _{1}}(u)\) and \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\). In other words,

$$\begin{aligned}&p(x)\le F_{u}(x,0_{u})+\varepsilon _{1}, \end{aligned}$$
(29)
$$\begin{aligned}&F_{u}^{*}(x^{*},y_{u}^{*})+F_{u}(x,0_{u})\le \langle x^{*},x\rangle +\varepsilon _{2}. \end{aligned}$$
(30)

It now follows from (29)–(30) that

$$\begin{aligned} p^{*}\left( x^{*}\right)\ge & {} \langle x^{*},x\rangle -p\left( x\right) \ge \langle x^{*},x\rangle -F_{u}(x,0_{u})-\varepsilon _{1} \\\ge & {} \langle x^{*},x\rangle +F_{u}^{*}(x^{*},y_{u}^{*})-\langle x^{*},x\rangle -\varepsilon _{2}-\varepsilon _{1}=F_{u}^{*}(x^{*},y_{u}^{*})-\varepsilon >q(x^{*}), \end{aligned}$$

which contradicts the fact that \(p^{*}(x^{*})=q(x^{*})\). \(\square \)

In deterministic optimization with linear perturbations we get the next consequence from Theorem 2.

Corollary 3

\(\mathbf {(Strong\, robust\, duality\, for\, Case\, 1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, for each \(x^{*}\in X^{*}\), the strong duality for \(\mathrm {(P)}_{x^{*}}\) in Case 1 holds at \(x^{*}\), i.e.,

$$\begin{aligned} \mathop {\inf }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} =\mathop {\max }\limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) , \end{aligned}$$

if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 2 holds with \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) being replaced by

$$\begin{aligned} B_{y^{*}}^{\varepsilon }(x^{*}):=\big \{x\in X\,:\,(x,0_{Y})\in (M^{\varepsilon }F)(x^{*},y^{*})\big \}. \end{aligned}$$
(31)

Proof

It is worth observing that we are in the non-uncertain case (i.e., U is a singleton), and that the set \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) takes the form (31) for each \((x^{*},y^{*})\in X^{*}\times Y^{*}\), \(\varepsilon \ge 0\). The conclusion follows from Theorem 2. \(\square \)

In the non-perturbation case, Theorem 2 gives rise to

Corollary 4

\(\mathbf {(Strong\,robust\, duality\, for\, Case\, 2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(x^{*}\in X^{*}\), and let \(p=\sup \limits _{u\in U}f_{u}\) be such that \(\mathop {\mathrm {dom}}p\ne \emptyset \). Then, the robust duality formula

$$\begin{aligned} \Big (\sup \limits _{u\in U}f_{u}\Big )^{*}(x^{*})=\min \limits _{u\in U}f_{u}^{*}(x^{*}) \end{aligned}$$

holds if and only if one of the (equivalent) conditions (ii) or (iii) in Theorem 2 holds with \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) being replaced by

$$\begin{aligned} B_{u}^{\varepsilon }(x^{*}):=\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\Big (J^{\varepsilon _{1}}(u)\cap (M^{\varepsilon _{2}}f_{u})(x^{*})\Big ). \end{aligned}$$
(32)

Proof

Let \(F_{u}(x,y_{u})=f_{u}(x)\) and \(p=\mathop {\sup }\limits _{u\in U}{f_{u}}\). Then, from (27) and (28) (see the proof of Corollary 2),

$$\begin{aligned} B_{(u,y_{u}^{*})}^{\varepsilon }({x^{*}})=\left\{ \begin{array}{ll} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\Big ( J^{\varepsilon _{1}}(u)\cap (M^{\varepsilon _{2}}f_{u})(x^{*})\Big ) , &{} \mathrm {if}\ \ y_{u}^{*}=0_{u}^{*}, \\ \ \ \ \ \ \emptyset , &{} \text {else},\end{array} \right. \end{aligned}$$

which, in our situation, collapses to the set \(B_{u}^{\varepsilon }(x^{*}) \) defined by (32). The conclusion now follows from Theorem 2. \(\square \)

5 Reverse Strong and Min-Max Robust Duality

Given \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\) for each \(u\in U\), \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\), and \(x^{*}\in X^{*}\), we assume in this section that the problem \((\mathrm {RP})_{x^{*}}\) is finite-valued and admits an optimal solution or, in other words, that \(\mathrm {argmin}(p-x^{*})=(M^{0}p)(x^{*})\ne \emptyset \). For convenience, we set

$$\begin{aligned} (Mp)(x^{*}):=(M^{0}p)(x^{*}),\ \ S(x^{*}):=S^{0}(x^{*}),\ \mathrm {and} \end{aligned}$$
$$\begin{aligned} \mathcal {A}(x^{*}):=\mathcal {A}^{0}(x^{*})=\bigcap _{\eta >0}\mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}}{{\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}}} \!\!\!\!\Big \{x\in J^{\varepsilon _{1}}(u)\,:\,(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\Big \}. \end{aligned}$$
(33)

Theorem 3

\(\mathbf {(Reverse\,strong\, robust\, duality)}\) Let \(x^{*}\in X^{*}\) be such that \((Mp)(x^{*})\ne \emptyset \) and let \(\mathcal {A}(x^*)\) be as in (33). The next statements are equivalent:

\(\mathrm {(i)}\)   \(\min {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\),

\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).

Proof

Since \((Mp)(x^{*}){\ne } \emptyset \), \(\mathop {\mathrm {dom}}p{\ne } \emptyset \). It follows from Theorem 1 that \(\left[ {\mathrm {(i)}\Longrightarrow \mathrm {(ii)}}\right] \). For the converse, let us pick \(x\in (Mp)(x^{*})\). Then by (ii), for each \(\eta >0 \) there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\) such that \(\varepsilon _{1}+\varepsilon _{2}=\eta \), \(x\in J^{\varepsilon _{1}}(u)\), \((x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\) and we have

$$\begin{aligned} q(x^{*})\le & {} F_{u}^{*}(x^{*},y_{u}^{*})\le \langle x^{*},x\rangle -F_{u}(x,0_{u})+\varepsilon _{2} \\\le & {} \langle x^{*},x\rangle -p(x)+\varepsilon _{1}+\varepsilon _{2}\le p^{*}(x^{*})+\eta . \end{aligned}$$

Since \(\eta >0\) is arbitrary we get \(q(x^{*})\le p^{*}(x^{*})\), which, together with the weak duality (see (16)), yields \(q(x^{*}) = \langle x^{*},x\rangle -p(x) = p^{*}(x^{*}) \), i.e., (i) holds and we are done. \(\square \)
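A toy instance of Theorem 3 (our own data, not from the text): taking the full interval \(U=[-1,1]\) and \(f_{u}(x)=ux\) in Case 2 gives \(p(x)=|x|\), and at \(x^{*}=0\) the primal minimum is attained at \(x=0\) while \(q(0)=\inf _{u}f_{u}^{*}(0)=0=p^{*}(0)\) (take \(u=0\in U\)):

```python
import numpy as np

# Toy reverse-strong-duality check (our own data): f_u(x) = u*x with
# U = [-1, 1], so p(x) = sup_u u*x = |x|.  At x* = 0:
#   (Mp)(0) = argmin |x| = {0} is non-empty, and
#   q(0) = inf_u f_u*(0) = 0 = p*(0), attained by choosing u = 0 in U,
# so min(RP)_0 = sup(ODP)_0, i.e. statement (i) of Theorem 3 holds.
xs = np.linspace(-3.0, 3.0, 6001)
p = np.abs(xs)
Mp0 = xs[np.isclose(p, p.min(), atol=1e-12)]   # argmin(p - <0, .>)
p_star_0 = -p.min()                            # p*(0) = -inf p
q_0 = 0.0                                      # value of inf_u f_u*(0)
```

Contrast this with the earlier two-point family \(U=\{-1,1\}\), where \(q(0)=+\infty \): enlarging U to the whole interval restores duality.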

In the deterministic case we obtain from Theorem 3:

Corollary 5

\(\mathbf {(Reverse\,strong\,robust\,duality\,for\,Case\,1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(x^{*}\in X^{*}\), \(p=F(\cdot ,0_{Y})\), and

$$\begin{aligned} \mathcal {A}(x^{*})=\bigcap _{\eta >0}\bigcup \limits _{y^{*}\in Y^{*}}\Big \{x\in X\,:\,(x,0_{Y})\in (M^{\eta }F)(x^{*},y^{*})\Big \}. \end{aligned}$$

Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:

\(\mathrm {(i)}\)   \(\mathop {\min }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}} \right) -\left\langle {{x^{*}},x}\right\rangle }\right\} =\mathop {\sup } \limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) \),

\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).

Corollary 6

\(\mathbf {(Reverse\,strong\,robust\,duality\,for\,Case\,2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x^{*}\in X^{*}\), and

$$\begin{aligned} \mathcal {A}(x^{*}):=\bigcap _{\eta >0}\mathop {\displaystyle \bigcup }\limits _{u\in U}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}\Big (J^{\varepsilon _{1}}(u)\cap (M^{\varepsilon _{2}}f_{u})(x^{*})\Big ), \end{aligned}$$

where

$$\begin{aligned} J^{\varepsilon _{1}}(u)=\Big \{x\in p^{-1}(\mathbb {R})\,:\,f_{u}(x)\ge p(x)-\varepsilon _{1}\Big \}. \end{aligned}$$

Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:

\(\mathrm {(i)}\)   \(\Big (\sup \limits _{u\in U}f_{u}\Big )^{*}(x^{*})=\inf \limits _{u\in U}f_{u}^{*}(x^{*}),\) with attainment at the first member,

\(\mathrm {(ii)} \) \((Mp)(x^{*}) = \mathcal {A} (x^{*})\).

Now, for each \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), \(x^{*}\in X^{*},\) we set

$$\begin{aligned} J(u):=J^{0}(u)=\Big \{x\in p^{-1}(\mathbb {R})\,:\,F_{u}(x,0_{u})=p(x)\Big \}, \end{aligned}$$
$$\begin{aligned} (MF_{u})(x^{*},y_{u}^{*}):=(M^{0}F_{u})(x^{*},y_{u}^{*})= \mathrm {argmin}\Big (F_{u}-\langle x^{*},\cdot \rangle -\langle y_{u}^{*},\cdot \rangle \Big ), \end{aligned}$$

and

$$\begin{aligned} B_{(u,y_{u}^{*})}(x^{*}):=B_{(u,y_{u}^{*})}^{0}(x^{*})=\Big \{x\in J(u)\,:\,(x,0_{u})\in (MF_{u})(x^{*},y_{u}^{*})\Big \}. \end{aligned}$$
(34)

Theorem 4

\(\mathbf {(Min\text {-}max\, robust\, duality)}\) Let \(x^{*}\in X^{*}\) be such that \((Mp)(x^{*})\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)}\)   \(\min {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{x^{*}}}\),

\(\mathrm {(ii)}\) \(\exists u\in U,\) \(\exists y_{u}^{*}\in Y_{u}^{*}:\) \(\ (Mp)(x^{*})=B_{(u,y_{u}^{*})}(x^{*})\),

where \(B_{(u,y_{u}^{*})}(x^{*})\) is the set defined in (34).

Proof

By Theorem 2 we know that \([\mathrm {(i)}\Longrightarrow \mathrm {(ii)}]\). We now prove that \([\mathrm {(ii)}\Longrightarrow \mathrm {(i)}]\). Pick \(x\in (Mp)(x^{*})\) which is non-empty by assumption. Then by \(\mathrm {(ii)}\), \(x\in B_{(u,y_{u}^{*})}(x^{*})\), which yields \(x\in J(u)\) and \((x,0_{u})\in (MF_{u})(x^{*},y_{u}^{*})\). Hence,

$$\begin{aligned} q(x^{*})\le & {} F_{u}^{*}(x^{*},y_{u}^{*})\le \langle x^{*},x\rangle -F_{u}(x,0_{u}) \\\le & {} \langle x^{*},x\rangle -p(x)\le p^{*}(x^{*})\le q(x^{*}), \end{aligned}$$

which means that \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=\langle x^{*},x\rangle -p(x)=p^{*}(x^{*})\) and \(\mathrm {(i)}\) follows. \(\square \)

Corollary 7

\(\mathbf {(Min\text {-}max~robust~duality~for~Case~1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(x^{*}\in X^{*}\), \(p=F(\cdot ,0_{Y})\), and for each \(y^{*}\in Y^{*}\),

$$\begin{aligned} B_{y^{*}}(x^{*}):=\Big \{x\in X\,:\,(x,0_{Y})\in (MF)(x^{*},y^{*})\Big \}. \end{aligned}$$

Assume that \((Mp)(x^{*})\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\mathop {\min }\limits _{x\in X}\left\{ {F\left( {x,{0_{Y}}}\right) -\left\langle {{x^{*}},x}\right\rangle }\right\} = \mathop {\max }\limits _{{y^{*}}\in {Y^{*}}}-{F^{*}}\left( {{x^{*}},{y^{*}}}\right) \),

\(\mathrm {(ii)} \) \(\exists y^{*}\in Y^{*}:\ (Mp)(x^{*}) = B_{y^{*}}(x^{*})\).

Corollary 8

\(\mathbf {(Min\text {-}max~robust~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p{=}\sup \limits _{u\in U}f_{u}\), \(x^{*}\in X^{*}\), and for each \(u\in U\),

$$\begin{aligned} B_{u}(x^{*}):=J(u)\cap (Mf_{u})(x^{*}), \end{aligned}$$

where \(J(u)=\{x\in p^{-1}(\mathbb {R})\,:\,f_{u}(x)=p(x)\}\). Assume that \((Mp)(x^{*})\ne \emptyset \). Then the next statements are equivalent:

\(\mathrm {(i)} \)   \(\Big (\sup \limits _{u \in U} f_u\Big )^*(x^*) =\min \limits _{u \in U} f_u^*(x^*)\), with attainment at the first member,

\(\mathrm {(ii)} \) \(\exists u \in U\) such that \((Mp)(x^{*}) = B_u(x^{*})\).
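The equivalence in Corollary 8 can be observed on a finite-dimensional toy instance, where all suprema and infima are attained and can be computed by enumeration. The following sketch uses data of our own choosing (X a finite grid, \(U=\{1,2\}\), \(f_1(x)=x\), \(f_2(x)=-x\), so \(p=|\cdot |\)): statements (i) and (ii) hold together at \(x^{*}=1\) and fail together at \(x^{*}=0\).

```python
# Toy check of Corollary 8 (Case 2). All data are illustrative assumptions:
# X is a finite grid, U = {1, 2}, f_1(x) = x, f_2(x) = -x, p = max(f_1, f_2).
X = [-1.0, -0.5, 0.0, 0.5, 1.0]
f = {1: lambda x: x, 2: lambda x: -x}
p = lambda x: max(fu(x) for fu in f.values())

def conj(g, xs):  # conjugate g*(x*) = max_x {x* x - g(x)} over the grid X
    return max(xs * x - g(x) for x in X)

def argmin_set(g, xs):  # (Mg)(x*) = argmin_x {g(x) - x* x}
    m = min(g(x) - xs * x for x in X)
    return {x for x in X if abs(g(x) - xs * x - m) < 1e-12}

def B(u, xs):  # B_u(x*) = J(u) ∩ (M f_u)(x*), with J(u) = {x : f_u(x) = p(x)}
    J = {x for x in X if abs(f[u](x) - p(x)) < 1e-12}
    return J & argmin_set(f[u], xs)

xs = 1.0
# (i): (sup_u f_u)*(x*) = min_u f_u*(x*) (attainment is automatic: X is finite)
assert conj(p, xs) == min(conj(f[u], xs) for u in f)   # both equal 0
# (ii): (Mp)(x*) = B_u(x*) holds for u = 1
assert argmin_set(p, xs) == B(1, xs)
# At x* = 0 robust duality fails: conj(p, 0) = 0 < 1 = min_u conj(f_u, 0),
# and accordingly no index u satisfies (ii).
assert conj(p, 0.0) < min(conj(f[u], 0.0) for u in f)
assert all(argmin_set(p, 0.0) != B(u, 0.0) for u in f)
```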

6 Stable Robust Duality

Let us first recall some notation. Given \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty }\), \(u\in U\), we set \(p=\mathop {\sup }\limits _{u\in U}{F_{u}}(\cdot ,{0_{u}})\) and \(q=\mathop {\inf }\limits _{{\scriptstyle u\in U \atop \scriptstyle y_{u}^{*}\in Y_{u}^{*}}}F_{u}^{*}(\cdot ,y_{u}^{*})\). Recall that \(p^{*}(x^{*})\le q(x^{*})\) for each \(x^{*}\in X^{*}\). Stable robust duality means that \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\) for all \(x^{*}\in X^{*}\), or equivalently,

$$\begin{aligned} p^{*}(x^{*})=q(x^{*}),\ \forall x^{*}\in X^{*}. \end{aligned}$$
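Before proceeding, the identity \(p^{*}=q\) on all of \(X^{*}\) can be watched at work on a small discretized instance. The sketch below uses Case 2 data of our own choosing, \(f_{u}(x)=ux-u^{2}/2\) on matching finite grids \(X=U\) (so that \(p(x)=x^{2}/2\) on the grid and both conjugates are computed exactly by enumeration), and checks \(p^{*}(x^{*})=q(x^{*})\) at every grid point \(x^{*}\).

```python
# Toy check of stable robust duality p*(x*) = q(x*) for all x*.
# Assumption (our own data): Case 2 with f_u(x) = u*x - u**2/2, X = U finite grids.
X = U = [-1.0, -0.5, 0.0, 0.5, 1.0]
p = lambda x: max(u * x - u * u / 2 for u in U)

def conj(g, xs):  # g*(x*) = max_x {x* x - g(x)} over the grid X
    return max(xs * x - g(x) for x in X)

for xs in X:
    pstar = conj(p, xs)
    q = min(conj(lambda x, u=u: u * x - u * u / 2, xs) for u in U)
    # stable robust duality holds here, with common value x*^2/2
    assert abs(pstar - q) < 1e-12 and abs(pstar - xs * xs / 2) < 1e-12
```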

Theorem 1 says that, if \(\mathop {\mathrm {dom}}p\ne \emptyset ,\) then stable robust duality holds if and only if for each \(\varepsilon \ge 0\) the set-valued mappings \(M^{\varepsilon }p,\ \mathcal {A}^\varepsilon :X^{*}\rightrightarrows X\) coincide, where, for each \(x^{*}\in X^{*}\),

$$\begin{aligned} (M^{\varepsilon }p)(x^{*}):=&\ \varepsilon \text {-}\mathrm {argmin}(p-x^{*}), \\ \mathcal {A}^{\varepsilon }(x^{*}):=&\bigcap _{\eta >0}\mathop {\displaystyle \bigcup }\limits _{{u\in U \atop y_{u}^{*}\in Y_{u}^{*}}}{{\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\ge 0,\;{\varepsilon _{2}}\ge 0}}}}\Big \{x\in J^{\varepsilon _{1}}(u)\,:\,(x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\Big \}. \end{aligned}$$

Consequently, stable robust duality holds if and only if for each \(\varepsilon \ge 0\), the inverse set-valued mappings

$$\begin{aligned} (M^{\varepsilon }p)^{-1},\ (\mathcal {A}^{\varepsilon })^{-1}:X\rightrightarrows X^{*}, \end{aligned}$$

coincide. Recall that \((M^{\varepsilon }p)^{-1}(x)\) is nothing but \(\partial ^{\varepsilon }p(x)\), the \(\varepsilon \)-subdifferential of p at x.

Let us now make explicit \((\mathcal {A}^{\varepsilon })^{-1}\). To this end we need to introduce, for each \(\varepsilon \ge 0\), the \(\varepsilon \)-active index set-valued mapping \(I^{\varepsilon }:X\rightrightarrows U\) given by

$$\begin{aligned} \ I^{\varepsilon }(x)=\left\{ \begin{array}{ll} \Big \{u\in U\,:\,F_{u}(x,0_{u})\ge p(x)-\varepsilon \Big \}, &{} \mathrm {if}\quad p(x)\in \mathbb {R}, \\ \ \ \ \emptyset , &{} \mathrm {if}\quad p(x)\not \in \mathbb {R}.\end{array} \right. \end{aligned}$$
(35)

We observe that \(I^{\varepsilon }\) is nothing but the inverse of the set-valued mapping \(J^{\varepsilon }:U\rightrightarrows X\) defined in (21).
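To make (35) concrete, here is a tiny sketch with data of our own choosing (\(U=\{1,2\}\), \(f_1(x)=x\), \(f_2(x)=-x\), so \(p=|\cdot |\) is finite everywhere): \(I^{\varepsilon }(x)\) collects the indices whose value at x falls within \(\varepsilon \) of the supremum.

```python
# Toy illustration of the ε-active index mapping (35).
# Assumption (our own data): U = {1, 2}, f_1(x) = x, f_2(x) = -x, p = |x|.
def I(eps, x):
    p = abs(x)  # p(x) = max(x, -x), finite for every x in this toy example
    return {u for u, fu in ((1, x), (2, -x)) if fu >= p - eps}

assert I(0.0, 0.5) == {1}       # only f_1 is active at x = 0.5
assert I(0.0, 0.0) == {1, 2}    # both branches are active at the kink x = 0
assert I(1.0, 0.5) == {1, 2}    # f_2(0.5) = -0.5 >= 0.5 - 1, so u = 2 is 1-active
```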

Lemma 5

For each \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\) one has

$$\begin{aligned} {(\mathcal {A}^{\varepsilon })^{-1}}(x)=\bigcap \limits _{\eta >0}{\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}{\bigcup \limits _{u\in {I^{\varepsilon _{1}}}(x)}\mathrm {proj}_{X^{*}}^{u}{\partial ^{{\varepsilon _{2}}}}{F_{u}}(x,{0_{u}})}}, \end{aligned}$$

where \({{\mathrm {proj}_{X^{*}}^{u}:}}X^{*}\times Y_{u}^{*}\longrightarrow X^{*}\) is the projection mapping \(\mathrm {proj}_{X^{*}}^{u}(x^{*},y_{u}^{*})=x^{*}.\)

Proof

Let \((\varepsilon ,x,x^{*})\in \mathbb {R}_{+}\times X\times X^{*}\). One has

$$\begin{aligned} x^{*}\in (\mathcal {A}^{\varepsilon })^{-1}(x)\Leftrightarrow & {} x\in \mathcal {A}^{\varepsilon }(x^{*}) \\\Leftrightarrow & {} \left\{ \begin{array}{l} \forall \eta>0,\exists u\in U,\exists y_{u}^{*}\in Y_{u}^{*},\exists (\varepsilon _{1},\varepsilon _{2})\in \mathbb {R}_{+}^{2} \quad \mathrm {such\ that} \\ \varepsilon _{1}+\varepsilon _{2}=\varepsilon +\eta ,\quad x\in J^{\varepsilon _{1}}(u)\ \mathrm {and}\ (x,0_{u})\in (M^{\varepsilon _{2}}F_{u})(x^{*},y_{u}^{*})\end{array} \right. \\\Leftrightarrow & {} \left\{ \begin{array}{l} \forall \eta>0,\exists u\in U,\exists y_{u}^{*}\in Y_{u}^{*},\exists (\varepsilon _{1},\varepsilon _{2})\in \mathbb {R}_{+}^{2} \quad \mathrm {such\ that\ } \\ \varepsilon _{1}+\varepsilon _{2}=\varepsilon +\eta ,\quad u\in I^{\varepsilon _{1}}(x),\ \mathrm {and}\ (x^{*},y_{u}^{*})\in \Big (\partial ^{\varepsilon _{2}}F_{u}\Big )(x,0_{u}) \end{array} \right. \\\Leftrightarrow & {} \left\{ \begin{array}{l} \forall \eta>0,\exists u\in U,\exists (\varepsilon _{1},\varepsilon _{2})\in \mathbb {R}_{+}^{2} \quad \mathrm {such\ that} \\ \mathrm {\ }\varepsilon _{1}+\varepsilon _{2}=\varepsilon +\eta ,\quad u\in I^{\varepsilon _{1}}(x),\ \mathrm {and}\ x^{*}\in \mathrm {proj}_{X^{*}}^{u}\Big (\partial ^{\varepsilon _{2}}F_{u}\Big )(x,0_{u}) \end{array} \right. \\\Leftrightarrow & {} x^{*}\in \bigcap \limits _{\eta >0}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in {I^{\varepsilon _{1}}}(x)}\mathrm {proj}_{X^{*}}^{u}(\partial ^{{\varepsilon _{2}}}{F_{u}})(x,{0_{u}}). \end{aligned}$$

\(\square \)

Now, for each \((\varepsilon , x) \in \mathbb {R}_+ \times X\), let us set

$$\begin{aligned} C^\varepsilon (x) := \bigcap \limits _{\eta >0} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}\mathrm {proj}^u _{X^{*}}(\partial ^{\varepsilon _{2}}F_{u})(x,{0_{u}}). \end{aligned}$$
(36)

Applying Theorem 1 and Lemma 5 we obtain:

Theorem 5

\(\mathbf {(Stable~robust~duality)}\) Assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\inf {(\mathrm {RP})_{x^{*}}}=\sup {(\mathrm {ODP})_{{x^{*}}}}\) for all \(x^* \in X^*\),

\(\mathrm {(ii)} \) \(\partial ^\varepsilon p(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\),

\(\mathrm {(iii)} \) \(\exists \bar{\varepsilon }> 0\): \(\partial ^\varepsilon p(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \ ]0, \bar{\varepsilon }[\ \times X\).

Corollary 9

\(\mathbf {(Stable~robust~duality~for~Case~1)}\) Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\) be such that \(\mathop {\mathrm {dom}}F(\cdot ,0_{Y})\ne \emptyset \). Let \({{\mathrm {proj}_{X^{*}}:}}X^{*}\times Y^{*}\longrightarrow X^{*}\) be the projection mapping \(\mathrm {proj}_{X^{*}}(x^{*},y^{*})=x^{*}. \) Then, the next statements are equivalent:

\(\mathrm {(i)} \)  \(\inf \limits _{x \in X} \Big \{ F(x, 0_Y) - \langle x^*, x\rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*), \ \forall x^* \in X^*\),

\(\mathrm {(ii)}\) \((\partial ^{\varepsilon }p)(x)=\bigcap _{\eta >0}\mathrm {proj}_{X^{*}}(\partial ^{{\varepsilon +\eta }}{F})(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X\),

\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0\): \((\partial ^{\varepsilon }p)(x)=\bigcap _{\eta >0}\mathrm {proj}_{X^{*}}(\partial ^{{\varepsilon + \eta }}{F})(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in ]0,\bar{\varepsilon }[\times X\).

Proof

Let \(U=\{u_{0}\}\) and \(F=F_{u_{0}}:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(Y=Y_{u_{0}}\), \(p=F(\cdot ,0_{Y})\). Then for each \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\),

$$\begin{aligned} I^{\varepsilon }(x)=\left\{ \begin{array}{ll} \{u_{0}\},\ \ &{} \mathrm {if}\quad p(x)\in \mathbb {R}, \\ \emptyset , &{} \mathrm {if}\quad p(x)\not \in \mathbb {R},\end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}} =\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)} \mathrm {proj}_{X^{*}}^{u}(\partial ^{\varepsilon _{2}}F_{u})(x,{0_{u}})= \mathrm {proj}_{X^{*}}\Big (\partial ^{\varepsilon }F\Big )(x,0_{Y}). \end{aligned}$$
(37)

The conclusion now follows from (36)–(37) and Theorem 5. \(\square \)

Remark 1

Condition \(\mathrm {(ii)}\) in Corollary 9 was stated in [17, Theorem 4.3] for all \((\varepsilon ,x)\in ]0,+\infty [\times X\), which is equivalent.

Corollary 10

\(\mathbf {(Stable~robust~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X},\) \(p=\sup \limits _{u\in U}f_{u}\), and assume that \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)} \)  \(\Big (\sup \limits _{u \in U} f_u\Big )^*(x^*) =\inf \limits _{u \in U} f_u^*(x^*)\), \(\forall x^* \in X^*\),

\(\mathrm {(ii)} \) \((\partial ^\varepsilon p)(x) = C^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\),

\(\mathrm {(iii)}\) \(\exists \bar{\varepsilon }>0\): \((\partial ^{\varepsilon }p)(x)=C^{\varepsilon }(x),\ \forall (\varepsilon ,x)\in ]0,\bar{\varepsilon }[\times X\),

where \(C^{\varepsilon }(x)\) is the set

$$\begin{aligned} C^{\varepsilon }(x)=\bigcap \limits _{\eta >0}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon +\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}(\partial ^{\varepsilon _{2}}f_{u})(x),\ \ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X. \end{aligned}$$
(38)

Proof

Let \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\) be such that \(F_{u}(x,y_{u})=f_{u}(x)\) for all \((x,y_{u})\in X\times Y_{u}\) and all \(u\in U\). Then for any \((\varepsilon ,x)\in \mathbb {R}_{+}\times X\),

$$\begin{aligned} I^{\varepsilon }(x)=\left\{ \begin{array}{ll} \Big \{u\in U\,:\,f_{u}(x)\ge p(x)-\varepsilon \Big \}, &{} \mathrm {if}\quad p(x)\in \mathbb {R}, \\ \ \ \emptyset , &{} \mathrm {if}\quad p(x)\not \in \mathbb {R},\end{array} \right. \end{aligned}$$
$$\begin{aligned} (\partial ^{\varepsilon }F_{u})(x,0_{u})=(\partial ^{\varepsilon }f_{u})(x)\times \{0_{u}^{*}\},\ \ \ \forall (u,\varepsilon ,x)\in U\times \mathbb {R}_{+}\times X, \end{aligned}$$

and \(C^{\varepsilon }(x)\) reads as in (38). The conclusion now follows from Theorem 5. \(\square \)

7 Stable Strong Robust Duality

We retain all the notations used in Sections 3–6. Given \((\varepsilon , u) \in \mathbb {R}_+ \times U\) and \(y_u^* \in Y_u^*\), we have introduced in Section 4 the set-valued mapping \(B_{(u,y_{u}^{*})}^{\varepsilon } : X^*\rightrightarrows X\) defined by

$$\begin{aligned} B_{(u,y_{u}^{*})}^{\varepsilon }({x^{*}}) =\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\Big \{x\in J^{\varepsilon _1}(u)\, :\, (x,0_{u}) \in ( M^{ \varepsilon _{2}} F_{u}) (x^*,y_u^*)\Big \}. \end{aligned}$$

Let us now define \(B^\varepsilon : X^*\rightrightarrows X\) by setting

$$\begin{aligned} B^\varepsilon (x^*) := \bigcup \limits _{{u \in U \atop y_u^* \in Y_u^*}} B_{(u,y_{u}^{*})}^{\varepsilon }(x^*) , \ \ \forall x^* \in X^*. \end{aligned}$$

Lemma 6

For each \((\varepsilon , x) \in \mathbb {R}_+ \times X\) we have

$$\begin{aligned} (B^\varepsilon )^{-1} (x) = \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}} \bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}\mathrm {proj}^u_{X^{*}}(\partial ^{\varepsilon _{2}}F_{u})(x,{0_{u}}). \end{aligned}$$

Proof

\(x^* \in (B^\varepsilon )^{-1} (x)\) means that \(x \in B^\varepsilon (x^*)\), that is, there exist \(u \in U\), \(y_u^* \in Y_u^*\), \(\varepsilon _1 \ge 0\), \(\varepsilon _2 \ge 0\) such that \(\varepsilon _1 + \varepsilon _2 = \varepsilon \), \(x \in J^{\varepsilon _1} (u) \), and \((x, 0_u) \in (M^{\varepsilon _2}F_u)(x^*, y_u^*)\), or, equivalently, \(u \in I^{\varepsilon _1} (x)\) and \((x^*, y_u^*) \in (\partial ^{\varepsilon _2}F_u)(x, 0_u)\). The latter amounts to \(x^* \in \mathrm {proj}^u_{X^{*}}(\partial ^{\varepsilon _{2}}F_{u})(x,0_{u})\) for some \(u \in I^{\varepsilon _1}(x)\) with \(\varepsilon _1 + \varepsilon _2 = \varepsilon \), which is precisely the announced formula. \(\square \)

For each \(\varepsilon \ge 0\) let us introduce the set-valued mapping \(D^{\varepsilon }:=(B^{\varepsilon })^{-1}\). Lemma 6 now reads

$$\begin{aligned} D^{\varepsilon }(x)\ =\ \bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}\mathrm {proj}_{X^{*}}^{u}(\partial ^{\varepsilon _{2}}F_{u})(x,{0_{u}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X. \end{aligned}$$
(39)

Note that

$$\begin{aligned} C^{\varepsilon }(x)=\bigcap \limits _{\eta >0}D^{\varepsilon +\eta }(x),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X, \end{aligned}$$
(40)

and that \(D^{\varepsilon }(x)=\emptyset \) whenever \(p(x)\not \in \mathbb {R}\).

We now provide a characterization of stable strong robust duality in terms of \(\varepsilon \)-subdifferential formulas.

Theorem 6

\(\mathbf {(Stable~strong~robust~duality)}\) Assume that \(\mathop {\mathrm {dom}} p \ne \emptyset \), and let \(D^\varepsilon \) be as in (39). The next statements are equivalent:

\(\mathrm {(i)} \)  \(\inf {(\mathrm {RP})_{x^{*}}}=\max {(\mathrm {ODP})_{{x^{*}}}} = \max \limits _{{u \in U \atop y_u^* \in Y_u^*}} - F_u^*(x^*, y_u^*),\ \ \forall x^* \in X^*\),

\(\mathrm {(ii)} \) \(\partial ^\varepsilon p(x) = D^\varepsilon (x), \ \forall (\varepsilon , x) \in \mathbb {R}_+ \times X\).

Proof

\([\mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial ^\varepsilon p(x)\). Then \(x \in (M^\varepsilon p)(x^*)\). Since strong robust duality holds at \(x^*\), Theorem 2 says that there exist \(u \in U\), \(y_u^* \in Y_u^*\) such that \(x \in B^\varepsilon _{(u, y_u^*)}(x^*) \subset B^\varepsilon (x^*)\), and finally \(x^* \in D^\varepsilon (x)\) by Lemma 6. Thus \(\partial ^\varepsilon p(x) \subset D^\varepsilon (x)\).

Now, let \(x^{*}\in D^{\varepsilon }(x)\). By Lemma 6 we have \(x\in B^{\varepsilon }(x^{*})\) and there exist \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\) such that \(x\in B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\). By Lemma 2 and the definition of \(B_{(u,y_{u}^{*})}^{\varepsilon }(x^{*})\) we have \(x\in S^{\varepsilon }(x^{*})\), and, by (20), \(x\in (M^{\varepsilon }p)(x^{*})\) which means that \(x^{*}\in \partial ^{\varepsilon }p(x)\), and hence, \(D^{\varepsilon }(x)\subset \partial ^{\varepsilon }p(x)\). Thus (ii) follows.

\([\mathrm {(ii)}\Longrightarrow \mathrm {(i)}]\) If \(p^{*}(x^{*})=+\infty \) then \(q(x^{*})=+\infty \) and one has \(p^{*}(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=+\infty \) for all \(u\in U\), \(y_{u}^{*}\in Y_{u}^{*}\), and (i) holds. Assume that \(p^{*}(x^{*})\in \mathbb {R}\) and pick \(x\in p^{-1}(\mathbb {R})\), which is non-empty as \(\mathop {\mathrm {dom}}p\ne \emptyset \) and \(p^{*}(x^{*})\in \mathbb {R}\). Let \(\varepsilon :=p(x)+p^{*}(x^{*})-\langle x^{*},x\rangle \). Then \(\varepsilon \ge 0\) and we have \(x^{*}\in \partial ^{\varepsilon }p(x)\). By (ii), \(x^{*}\in D^{\varepsilon }(x)\) and hence, there exist \(\varepsilon _{1}\ge 0,\varepsilon _{2}\ge 0\), \(u\in U\), and \(y_{u}^{*}\in Y_{u}^{*}\) such that \(\varepsilon _{1}+\varepsilon _{2}=\varepsilon \), \(u\in I^{\varepsilon _{1}}(x)\), \((x^{*},y_{u}^{*})\in (\partial ^{\varepsilon _{2}}F_{u})(x,0_{u})\). We have

$$\begin{aligned} q(x^{*})\le & {} F_{u}^{*}(x^{*},y_{u}^{*})\le \langle x^{*},x\rangle -F_{u}(x,0_{u})+\varepsilon _{2} \\\le & {} \langle x^{*},x\rangle -p(x)+\varepsilon _{1}+\varepsilon _{2}=p^{*}(x^{*})\ \ \ \ \ \ \mathrm {(by\ definition\ of\ }\varepsilon ) \\\le & {} q(x^{*}), \end{aligned}$$

and finally, \(q(x^{*})=F_{u}^{*}(x^{*},y_{u}^{*})=p^{*}(x^{*})\), which is (i). \(\square \)

Next, as usual, we give two consequences of Theorem 6 for the non-uncertainty and non-parametric cases.

Corollary 11

\(\mathbf {(Stable\,\, strong\,\, duality\, for\,\, Case\,\, 1)}\) Let \(F:X{\times } Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)} \)  \(\inf \limits _{x \in X} \Big \{ F(x, 0_Y) - \langle x^*, x\rangle \Big \} = \max \limits _{ y^* \in Y^ *} - F^*(x^*, y^*), \ \forall x^* \in X^*\),

\(\mathrm {(ii)}\) \(\partial ^{\varepsilon }p(x)=\mathrm {proj}_{X^{*}}(\partial ^{\varepsilon }F)(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X.\)

Proof

This is the non-uncertainty case (i.e., the uncertainty set is a singleton) of the general problem \((\mathrm {RP})_{x^{*}}\), with \(U=\{u_{0}\}\) and \(F_{u_{0}}=F:X\times Y\rightarrow \mathbb {R}_{\infty }\). We have from (37),

$$\begin{aligned} D^{\varepsilon }(x)=\mathrm {proj}_{X^{*}}(\partial ^{\varepsilon }F)(x,{0_{Y}}),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X. \end{aligned}$$
(41)

The conclusion now follows from Theorem 6. \(\square \)

Corollary 12

\(\mathbf {(Stable~strong~duality~for~Case~2)}\) Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), and \(\mathop {\mathrm {dom}}p\ne \emptyset \). The next statements are equivalent:

\(\mathrm {(i)} \)  \((\sup \limits _{u \in U} f_u)^*(x^*) = \min \limits _{ u\in U} f_u^*(x^*), \ \forall x^* \in X^*\),

\(\mathrm {(ii)}\)\(\partial ^{\varepsilon }p(x)=D^{\varepsilon }(x),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X\), where

$$\begin{aligned} D^{\varepsilon }(x)=\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\varepsilon \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}(\partial ^{\varepsilon _{2}}f_{u})(x),\ \forall (\varepsilon ,x)\in \mathbb {R}_{+}\times X, \end{aligned}$$
(42)

and

$$\begin{aligned} I^{\varepsilon }(x)=\left\{ \begin{array}{ll} \Big \{u\in U\,:\,f_{u}(x)\ge p(x)-\varepsilon \Big \} &{} \mathrm {if}\quad p(x)\in \mathbb {R}, \\ \ \ \emptyset &{} \mathrm {if}\quad p(x)\not \in \mathbb {R}. \end{array} \right. \end{aligned}$$

Proof

In this non-parametric situation, let \(F_{u}(x,y_{u})=f_{u}(x)\). It is easy to see that in this case, the set \(D^{\varepsilon }(x)\) can be expressed as in (42), and the conclusion follows from Theorem 6. \(\square \)

8 Exact Subdifferential Formulas: Robust Basic Qualification Condition

Given \({F_{u}}:X\times {Y_{u}}\rightarrow \mathbb {R}_{\infty },\ u\in U\), as usual we let \(p=\mathop {\sup }\limits _{u\in U}{F_{u}}(\cdot ,{0_{u}})\) and \( q:=\mathop {\inf }\limits _{\left( {u,y_{u}^{*}}\right) \in \Delta }\;F_{u}^{*}(\cdot ,y_{u}^{*})\). Again, we consider the robust problem \((\mathrm {RP})_{x^{*}}\) and its robust dual problem \((\mathrm {ODP})_{x^{*}}\) given in (12) and (13), respectively. Note that reverse strong robust duality at \(x^{*}\) means that, for some \(\bar{x}\in X\),

$$\begin{aligned} -p^{*}(x^{*})=\min {(\mathrm {RP})_{x^{*}}}= & {} \sup _{u\in U}F_{u}(\bar{x},0_{u})-\langle x^{*},\bar{x}\rangle \nonumber \\= & {} p(\bar{x})-\langle x^{*},\bar{x}\rangle =\sup {(\mathrm {ODP})_{x^{*}}}=-q(x^{*}). \end{aligned}$$
(43)

Now, let us set, for each \(x\in X\),

$$\begin{aligned} D(x):=&D^{0}(x)=\bigcup \limits _{u\in I(x)}\mathrm {proj}^u_{X^{*}}(\partial F_{u})(x,{0_{u}}), \end{aligned}$$
(44)
$$\begin{aligned} C(x):=&C^{0}(x)=\bigcap \limits _{\eta >0}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}\mathrm {proj}^u_{X^{*}}(\partial ^{\varepsilon _{2}}F_{u})(x,{0_{u}}), \end{aligned}$$
(45)

where \(I^{\varepsilon _{1}}(x)\) is defined as in (35) and

$$\begin{aligned} I(x):=\left\{ \begin{array}{ll} \left\{ u\in U:F_{u}(x,0_{u})=p(x)\right\} , &{} \mathrm {if}\quad p(x)\in \mathbb {R}, \\ \emptyset , &{} \mathrm {if}\quad p(x)\not \in \mathbb {R}. \end{array} \right. \end{aligned}$$
(46)

Lemma 7

For each \(x \in X\), it holds

$$\begin{aligned} D(x) \subset C(x) \subset \partial p (x). \end{aligned}$$

Proof

The first inclusion is easy to check. Now let \(x^* \in C(x)\). For each \(\eta > 0\) there exist \((\varepsilon _1, \varepsilon _2 ) \in \mathbb {R}_+^2\), \(u \in I^{\varepsilon _1}(x)\), and \(y_u^* \in Y_u^*\) such that \(\varepsilon _1 + \varepsilon _2 = \eta \) and \((x^*, y_u^*) \in (\partial ^{\varepsilon _2} F_u)(x, 0_u)\). We then have \(F_u^*(x^*, y_u^*) + F_u(x, 0_u) - \langle x^*, x\rangle \le \varepsilon _2\), \(p(x) \le F_u(x, 0_u) + \varepsilon _1\) (as \(u \in I^{\varepsilon _1}(x)\)), and \(p^*(x^*) \le q(x^*) \le F^*_u(x^*, y^*_u)\). Consequently,

$$\begin{aligned} p^*(x^*) + p(x) - \langle x^*, x\rangle \le F_u^*(x^*, y_u^*) + F_u(x, 0_u) + \varepsilon _1- \langle x^*, x\rangle \le \varepsilon _1 + \varepsilon _2 = \eta . \end{aligned}$$

Since \(\eta > 0\) is arbitrary we get \(p^*(x^*) + p(x) -\langle x^*, x\rangle \le 0\), which means that \(x^* \in \partial p (x)\). The proof is complete. \(\square \)
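The inclusion \(D(x)\subset \partial p(x)\) of Lemma 7 can be sanity-checked numerically on a discretized toy instance. The sketch below uses data of our own choosing (\(f_{u}(x)=ux-u^{2}/2\) on matching finite grids \(X=U\), subdifferentials enumerated over a grid of candidate slopes); in this well-behaved example the inclusion is in fact an equality.

```python
# Numerical sanity check of Lemma 7: D(x) ⊂ ∂p(x).
# Assumption (our own data): Case 2 with f_u(x) = u*x - u**2/2 on finite grids.
X = U = [k / 4 for k in range(-4, 5)]
f = lambda u, x: u * x - u * u / 2
p = lambda x: max(f(u, x) for u in U)

def subdiff(g, x, eps=0.0):
    # grid version of ∂^ε g(x): slopes x* with g(z) >= g(x) + x*(z - x) - ε on X
    grid = [k / 4 for k in range(-8, 9)]
    return {xs for xs in grid
            if all(g(z) >= g(x) + xs * (z - x) - eps - 1e-12 for z in X)}

x = 0.5
active = {u for u in U if abs(f(u, x) - p(x)) < 1e-12}           # I(x)
D = set().union(*(subdiff(lambda z, u=u: f(u, z), x) for u in active))
assert D <= subdiff(p, x)   # Lemma 7: D(x) ⊂ ∂p(x)
```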

Theorem 7

Let \(x \in p^{-1}(\mathbb {R})\) and C(x) be as in (45). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\partial p (x) = C(x)\),

\(\mathrm {(ii)} \) Reverse strong robust duality holds at each \(x^* \in \partial p (x) \),

\(\mathrm {(iii)} \) Robust duality holds at each \(x^* \in \partial p (x)\).

Proof

\([ \mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial p (x)\). We have \(x^* \in C(x) = (\mathcal {A})^{-1} (x)\) (see Lemma 5 with \(\varepsilon = 0\)). Then \(x \in \mathcal {A} (x^*)= S(x^*)\) (see (24) with \(\varepsilon = 0\)), and therefore,

$$\begin{aligned} - p^*(x^*) \le p(x) - \langle x^*, x\rangle \le - q (x^*) \le - p^*(x^*), \end{aligned}$$
$$\begin{aligned} - p^*(x^*) = \min \limits _{z \in X} \{ p(z) - \langle x^*, z\rangle \} = p(x) - \langle x^*, x\rangle = -q(x^*) , \end{aligned}$$

that means that reverse strong robust duality holds at \(x^*\) (see (43)).

\([ \mathrm {(ii)} \Longrightarrow \mathrm {(iii)}]\) is obvious.

\([ \mathrm {(iii)} \Longrightarrow \mathrm {(i)}]\) By Lemma 7 it suffices to check that the inclusion “\(\subset \)” holds. Let \(x^* \in \partial p (x)\). We have \(x \in (Mp)(x^*)\). Since robust duality holds at \(x^*\), Theorem 1 (with \(\varepsilon = 0\)) says that \(x \in \mathcal {A} (x^*) \). Thus, \(x^* \in \mathcal {A} ^{-1} (x)\), and, by Lemma 5, \(x^* \in C(x)\). \(\square \)

In the deterministic and the non-parametric cases, we get the next results from Theorem 7.

Corollary 13

Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\partial p (x) = \bigcap \limits _{\eta > 0 } \mathrm {proj}_{X^*} (\partial ^\eta F)(x, 0_Y)\),

\(\mathrm {(ii)} \)\(\min \limits _{z \in X} \Big \{ F(z, 0_Y) - \langle x^*, z \rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*) , \ \forall x^* \in \partial p (x)\),

\(\mathrm {(iii)} \)\(\inf \limits _{z \in X} \Big \{ F(z, 0_Y) - \langle x^*, z \rangle \Big \} = \sup \limits _{y^* \in Y^*} - F^*(x^*, y^*) , \ \forall x^* \in \partial p (x)\).

Proof

Let \(F_{u}=F:X\times Y\rightarrow \mathbb {R}_{\infty }\) and \(p=F(\cdot ,0_{Y})\). We then have

$$\begin{aligned} C(x)=\bigcap \limits _{\eta >0}\mathrm {proj}_{X^{*}}(\partial ^{\eta }F)(x,0_{Y}),\ \forall x\in X, \end{aligned}$$

(see Corollary 9) and the conclusion follows directly from Theorem 7. \(\square \)

Corollary 14

Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\partial \left( \sup \limits _{u \in U} f_u\right) (x) = C(x)\),

\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - p(z) \Big \} = \inf \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\),

\(\mathrm {(iii)} \)\(\left( \sup \limits _{u \in U} f_u\right) ^*(x^*) = \inf \limits _{u \in U } f_u^*(x^*), \ \forall x^* \in \partial p (x)\),

where

$$\begin{aligned} C(x)=\bigcap \limits _{\eta >0}\bigcup \limits _{{\scriptstyle {\varepsilon _{1}}+{\varepsilon _{2}}=\eta \atop \scriptstyle {\varepsilon _{1}}\geqslant 0,{\varepsilon _{2}}\geqslant 0}}\bigcup \limits _{u\in I^{\varepsilon _{1}}(x)}(\partial ^{\varepsilon _{2}}f_{u})(x),\ \ \forall x\in X. \end{aligned}$$
(47)

Proof

Let \(F_u(x, y_u) = f_u(x)\). Then it is easy to see that in this case, C(x) can be expressed as in (47). The conclusion now follows from Theorem 7. \(\square \)

Let us come back to the general case and consider the simplest subdifferential formula one can expect for the robust objective function \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\):

$$\begin{aligned} \partial p(x)=\displaystyle \bigcup _{u\in I(x)}\mathrm {proj}^u_{X^{*}}\left( \partial F_{u}\right) (x,0_{u}), \end{aligned}$$
(48)

where the set of active indexes at x, I(x), is defined by (46).

In Case 3 we have

$$\begin{aligned} p(x)=\left\{ \begin{array}{ll} f(x), &{} \mathrm {if}\quad H_{u}(x)\in -S_{u},\forall u\in U, \\ +\infty , &{} \mathrm {else},\end{array} \right. \end{aligned}$$

\(I(x)=U\) for each \(x\in p^{-1}(\mathbb {R})\), and (48) reads

$$\begin{aligned} \partial p(x)=\bigcup \limits _{{u\in U,\ z_{u}^{*}\in S_{u}^{+} \atop \langle z_{u}^{*},H_{u}(x)\rangle =0}}\partial (f+z_{u}^{*}\circ H_{u})(x), \end{aligned}$$

which has been called Basic Robust Subdifferential Condition (BRSC) in [8] (see [18, page 307] for the deterministic case). More generally, let us introduce the following terminology:

Definition 1

Given \(F_{u}:X\times Y_{u}\rightarrow \mathbb {R}_{\infty }\) for each \(u\in U\), and \(p=\sup \limits _{u\in U}F_{u}(\cdot ,0_{u})\), we will say that the Basic Robust Subdifferential Condition holds at a point \(x\in p^{-1}(\mathbb {R})\) if (48) is satisfied, that is, \(\partial p(x)=D(x)\).

Recall that, in Example 1, \(p\left( x\right) =\left\langle c^{*},x\right\rangle +\mathrm {i}_{A}\left( x\right) \), where \(A=p^{-1}(\mathbb {R})\) is the feasible set of the linear system. So, given \(x\in A\), \(\partial p(x)\) is the sum of \(c^{*}\) and the normal cone to A at x; i.e., the Basic Robust Subdifferential Condition at x asserts that this normal cone can be expressed in the prescribed form (48).

Theorem 8

Let \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:

\(\mathrm {(i)}\) Basic Robust Subdifferential Condition holds at x,

\(\mathrm {(ii)}\) Min-max robust duality holds at each \(x^{*}\in \partial p(x)\),

\(\mathrm {(iii)}\) Strong robust duality holds at each \(x^{*}\in \partial p(x)\).

Proof

\([\mathrm {(i)} \Longrightarrow \mathrm {(ii)}]\) Let \(x^* \in \partial p (x)\). We have \(x^* \in D(x)\) and, by (44), there exist \(u \in I(x)\) (i.e., \(p(x) = F_u(x, 0_u)\)) and \(y_u^*\in Y_u^*\) such that \((x^*, y_u^*) \in (\partial F_u)(x, 0_u)\). Then,

$$\begin{aligned} p^*(x^*)\ge & {} \langle x^*, x\rangle - p(x) = \langle x^*, x \rangle - F_u(x, 0_u) = F_u^*(x^*, y_u^*) \\\ge & {} q(x^*) \ge p^*(x^*) . \end{aligned}$$

It follows that

$$\begin{aligned} \max \limits _{z \in X} \{ \langle x^*, z\rangle - p(z) \} = \langle x^*, x\rangle - p(x) = F_u^* (x^*, y_u^*) = q(x^*), \end{aligned}$$

and min-max robust duality holds at \(x^*\).

\([\mathrm {(ii)} \Longrightarrow \mathrm {(iii)}]\) It is obvious.

\([\mathrm {(iii)} \Longrightarrow \mathrm {(i)}]\) By Lemma 7, it suffices to check that \(\partial p (x) \subset D(x)\). Let \(x^* \in \partial p (x)\). We have \(x \in (M p) (x^*)\). Since strong robust duality holds at \(x^*\), Theorem 2 says that there exist \(u \in U\), \(y_u^* \in Y_u^*\) such that \(x \in B^0_{(u, y_u^*)} (x^*)\), which means (see (34))

$$\begin{aligned} (x, 0_u) \in (MF_u)(x^*, y_u^*) , \ (x^*, y_u^*) \in (\partial F_u)(x, 0_u), \end{aligned}$$

and by (44), \(x^* \in D(x)\). \(\square \)

As usual, Theorem 8 gives us corresponding results for the two extreme cases: the non-uncertainty case and the non-parametric case.

Corollary 15

Let \(F:X\times Y\rightarrow \mathbb {R}_{\infty }\), \(p=F(\cdot ,0_{Y})\), and \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\partial p (x) = \mathrm {proj}_{X^*} (\partial F) (x, 0_Y)\),

\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - F(z, 0_Y) \Big \} = \min \limits _{y^*\in Y^*} F^*(x^*, y^*), \ \forall x^* \in \partial p (x)\),

\(\mathrm {(iii)} \)\(p ^*(x^*) = \min \limits _{y^* \in Y^*} F^*(x^*, y^*), \ \forall x^* \in \partial p (x)\).

Proof

In this case we have, by (41), \(D(x)=\mathrm {proj}_{X^{*}}(\partial F)(x,0_{Y}) \) and the conclusion is a direct consequence of Theorem 8. \(\square \)

Corollary 16

Let \((f_{u})_{u\in U}\subset \mathbb {R}_{\infty }^{X}\), \(p=\sup \limits _{u\in U}f_{u}\), \(x\in p^{-1}(\mathbb {R})\). The next statements are equivalent:

\(\mathrm {(i)} \)   \(\partial p (x) = \bigcup \limits _{u \in I(x)} \partial f_u (x) \),

\(\mathrm {(ii)} \)\(\max \limits _{z \in X} \Big \{ \langle x^*, z \rangle - p(z) \Big \} = \min \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\),

\(\mathrm {(iii)} \)\((\sup \limits _{u \in U} f_u) ^*(x^*) = \min \limits _{u \in U} f_u^*(x^*), \ \forall x^* \in \partial p (x)\).

Proof

In this non-parametric case, let \(F_{u}(x,y_{u})=f_{u}(x)\), \(p=\sup \limits _{u\in U}f_{u}\). We have

$$\begin{aligned} D(x)=\bigcup _{u\in I(x)}\partial f_{u}(x),\ I(x)=\{u\in U\,:\,f_{u}(x)=p(x)\in \mathbb {R}\} \end{aligned}$$

and Theorem 8 applies. \(\square \)
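As an illustration of Corollary 16, consider toy data of our own choosing: \(f_1(x)=x\), \(f_2(x)=-x\) on a grid, so \(p=|\cdot |\). At the kink \(x=0\) the union \(\bigcup _{u\in I(0)}\partial f_{u}(0)=\{-1,1\}\) is strictly contained in \(\partial p(0)=[-1,1]\), so (i) fails at \(x=0\); correspondingly (iii) fails at \(x^{*}=0\in \partial p(0)\). At \(x=0.5\) only \(u=1\) is active and both (i) and (iii) hold. The sketch below checks the duality side (iii) by exact enumeration on the grid.

```python
# Toy illustration of Corollary 16 (assumption, our own data): f_1(x) = x,
# f_2(x) = -x on a finite grid X, p = |x|. BRSC fails at the kink x = 0 and,
# accordingly, (iii) fails at x* = 0 ∈ ∂p(0) but holds at x* = 1 ∈ ∂p(0.5).
X = [k / 10 for k in range(-10, 11)]
fs = [lambda x: x, lambda x: -x]
p = lambda x: abs(x)
conj = lambda g, xs: max(xs * x - g(x) for x in X)  # conjugate over the grid

# (iii) fails at x* = 0: p*(0) = 0 while min_u f_u*(0) = 1
assert conj(p, 0.0) == 0.0 and min(conj(g, 0.0) for g in fs) == 1.0
# (iii) holds at x* = 1: p*(1) = min_u f_u*(1) = 0
assert conj(p, 1.0) == min(conj(g, 1.0) for g in fs) == 0.0
```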