Abstract
We present an exact formula for the radius of robust feasibility of uncertain linear programs with a compact and convex uncertainty set. The radius of robust feasibility provides a value for the maximal ‘size’ of an uncertainty set under which robust feasibility of the uncertain linear program can be guaranteed. By considering spectrahedral uncertainty sets, we obtain numerically tractable radius formulas for commonly used uncertainty sets of robust optimization, such as ellipsoids, balls, polytopes and boxes. In these cases, we show that the radius of robust feasibility can be found by solving a linearly constrained convex quadratic program or a minimax linear program. The results are illustrated by calculating the radius of robust feasibility of uncertain linear programs for several different uncertainty sets.
1 Introduction
Real-world linear programs involve input data that are often noisy or uncertain due to prediction or measurement errors. For example, a linear program arising in industry or commerce [1, 2] might involve costs, financial returns, and future demands that are unknown at the time of the decision; these quantities must be predicted and replaced with forecasts, which inevitably introduces prediction errors. Similarly, some of the data, such as the composition of raw materials, may be hard to measure exactly and are therefore subject to measurement errors.
If data uncertainty arises, for instance, in the constraints of a linear program, these constraints are likely to be violated once the actual data are realized. Such violations can compromise the usability of the solution. Consequently, the development of optimization methodologies capable of generating robust solutions, immunized against data uncertainty, has become more important than ever in mathematics, commerce and engineering.
The deterministic approach to finding robust solutions of linear programs under data uncertainty was pioneered in the 1970s by Soyster [3], leading to the introduction of what is now called Robust Optimization. Two decades later, in the late 1990s, Ben-Tal and Nemirovski [4, 5] and El Ghaoui [6] provided a highly successful, computationally tractable treatment of the robust optimization approach for linear as well as convex optimization problems under data uncertainty (see [4, 7,8,9,10,11,12,13,14] and other references therein).
The robust optimization approach is based on the principle that the robust counterpart of an uncertain program, in which the uncertain constraints are enforced for all possible parameter realizations within some uncertainty set, has a feasible solution, called a robust feasible solution. Recent research in robust optimization has addressed the issue of guaranteeing robust feasibility by providing formulas for the radius of robust feasibility of uncertain linear and convex programs [15,16,17]. The radius of robust feasibility gives a value for the maximal ‘size’ of an uncertainty set under which robust feasibility can be guaranteed. However, these formulas are valid only for programs with a ball uncertainty set.
The purpose of this paper is to present a new exact formula for the radius of robust feasibility of uncertain linear programs under a general compact and convex uncertainty set. By considering spectrahedral uncertainty sets, we obtain numerically tractable radius formulas for commonly used uncertainty sets of robust optimization, such as ellipsoids, balls, polytopes and boxes. In these cases, we show that the radius of robust feasibility can be calculated by solving a linearly constrained convex quadratic program or a minimax linear program.
Our approach to deriving these formulas was inspired by the elegant work on the consistency radius in linear semi-infinite programming, which guarantees the feasibility of the nominal system under perturbations preserving the number of constraints [18,19,20].
In Sect. 2, we provide a general exact formula for the radius of robust feasibility under a general compact and convex uncertainty set, whereas in Sect. 3 we present numerically tractable formulas for the radius of robust feasibility in the case of spectrahedral uncertainty sets. In the Appendix, we provide the proof of a technical lemma which paves the way to establishing the general exact formula for the radius.
2 An Exact Formula for Radius of Robust Feasibility
This section is devoted to presenting an exact formula for computing the radius of robust feasibility for an uncertain linear program under a compact and convex uncertainty set.
We begin by fixing some definitions and some basic results that will be used throughout the paper. A symmetric \((n\times n)\) matrix A is said to be positive semi-definite, denoted by \(A\succeq 0\), whenever \(x^\top Ax\ge 0\) for all \(x\in {\mathbb {R}}^n,\) where \(n\in {\mathbb {N}}:=\{1,2,\ldots \}\). If \(x^\top Ax> 0\) for all \(x\in {\mathbb {R}}^n\setminus \{0_n\},\) then A is called positive definite, denoted by \(A\succ 0.\)
The following lemma is known as the Schur complement.
Lemma 2.1
(See [5, Lemma 4.2.1]) Let
$$W=\left( \begin{array}{cc} A &{} B^\top \\ B &{} C \\ \end{array} \right) $$
be a symmetric matrix with \((k\times k)\) block A and \((l\times l)\) block C. Assume that \(A \succ 0.\) Then, \(W\succeq 0\) if and only if \(C-BA^{-1}B^\top \succeq 0.\)
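Lemma 2.1 lends itself to a quick numerical check. The sketch below builds \(W\) from arbitrary illustrative blocks (not data from the paper) and compares the two positive semi-definiteness tests via eigenvalues; by the lemma, they agree whenever \(A\succ 0\).

```python
import numpy as np

# Numerical illustration of Lemma 2.1 (Schur complement): for a symmetric
# block matrix W = [[A, B^T], [B, C]] with A positive definite, W >= 0
# holds exactly when the Schur complement C - B A^{-1} B^T >= 0.
rng = np.random.default_rng(0)
k, l = 3, 2
A = 2.0 * np.eye(k)                  # positive definite (k x k) block
B = rng.standard_normal((l, k))      # arbitrary (l x k) block

def is_psd(M, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

def schur_test(C):
    W = np.block([[A, B.T], [B, C]])
    return is_psd(W), is_psd(C - B @ np.linalg.inv(A) @ B.T)

# choosing C so that the Schur complement equals the identity makes W >= 0 ...
C_good = B @ np.linalg.inv(A) @ B.T + np.eye(l)
# ... while shrinking C breaks both conditions simultaneously
C_bad = B @ np.linalg.inv(A) @ B.T - np.eye(l)

print(schur_test(C_good))   # (True, True)
print(schur_test(C_bad))    # (False, False)
```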
Let \({\varOmega }\subset {\mathbb {R}}^n\) be a convex set such that \(0_n\in \mathrm{int}{\varOmega },\) where the notation \(\mathrm{int}{\varOmega }\) stands for the interior of \({\varOmega }\). The function \(\phi _{\varOmega }:{\mathbb {R}}^n\rightarrow {\mathbb {R}}_+:=[0,+\infty [\) given by
$$\phi _{\varOmega }(x):=\inf {\{t>0 \,:\, x\in t{\varOmega }\}} \qquad (1)$$
is called the Minkowski function or gauge of \({\varOmega }.\) The following lemma collects some properties of the Minkowski function that can be found in the literature; see, e.g., [21, Lemma 1.3.13].
Lemma 2.2
Let \({\varOmega }\subset {\mathbb {R}}^n\) be a convex set such that \(0_n\in \mathrm{int}{\varOmega }.\) Then, the following properties hold:
-
(i)
\(\phi _{\varOmega }\) is sublinear and continuous;
-
(ii)
\(\{x\in {\mathbb {R}}^n \,:\, \phi _{\varOmega }(x)\le 1\}=\mathrm{cl\,}{\varOmega },\) where \(\mathrm{cl\,}{\varOmega }\) stands for the closure of \({\varOmega }\); and
-
(iii)
if in addition \({\varOmega }\) is bounded and symmetric (i.e., \(x\in {\varOmega }\Rightarrow -x\in {\varOmega }),\) then \(\phi _{\varOmega }:=||\cdot ||\) is a norm on \({\mathbb {R}}^n\) generated by \({\varOmega }\).
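The gauge definition (1) can be made concrete numerically. For an ellipsoid \({\varOmega }=\{x \,:\, x^\top Qx\le 1\}\) with \(Q\succ 0\) (an illustrative choice, not one of the paper's examples), the gauge has the closed form \(\sqrt{x^\top Qx}\); the sketch below checks this against the defining infimum by bisection, using the monotonicity of the membership test in t.

```python
import numpy as np

# Gauge of the ellipsoid Omega = {x : x^T Q x <= 1} with Q positive
# definite: closed form phi(x) = sqrt(x^T Q x), checked against the
# definition inf{t > 0 : x in t*Omega}.
Q = np.diag([4.0, 1.0])

def phi_closed_form(x):
    return np.sqrt(x @ Q @ x)

def phi_by_bisection(x, lo=1e-12, hi=1e6, iters=80):
    # membership x in t*Omega  <=>  (x/t)^T Q (x/t) <= 1, monotone in t
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (x / mid) @ Q @ (x / mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

x = np.array([0.3, -0.7])
print(phi_closed_form(x))            # sqrt(0.85), about 0.92195
assert abs(phi_closed_form(x) - phi_by_bisection(x)) < 1e-6
# property (ii): phi <= 1 holds on the closed set Omega
assert phi_closed_form(np.array([0.5, 0.0])) <= 1.0
```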
A parametric linear program in the face of data uncertainty in both the objective and the constraints, denoted by \(\mathrm{(PU)}\), can be captured by a family of uncertain linear programs: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program
$$\mathrm{(PU_\alpha )}\qquad \min \limits _{x\in {\mathbb {R}}^n}{\{c^\top x \,:\, a_j^\top x\le b_j,\; j=1,\ldots ,p\}},$$
where \(c, (a_j,b_j), j=1,\ldots ,p,\) are uncertain vectors with \(c\in V^\alpha \) and \((a_j,b_j)\in U_j^{\alpha },\) and \(V^\alpha , U_{j}^\alpha , j=1,\ldots ,p,\) are uncertainty sets that are compact and convex.
The robust counterpart of (PU\(_\alpha \)) is given by
$$\mathrm{(PR_\alpha )}\qquad \min \limits _{x\in {\mathbb {R}}^n}{\Big \{\max \limits _{c\in V^\alpha }c^\top x \,:\, a_j^\top x-b_j\le 0,\;\forall (a_j,b_j)\in U_j^{\alpha }, \; j=1,\ldots ,p\Big \}},$$
where the uncertain constraints are enforced for all possible parameter realizations within the corresponding uncertainty sets. The program (PU\(\alpha \)) is said to be robust feasible if the robust program (PR\(_\alpha \)) is feasible, i.e., \(\{x\in {\mathbb {R}}^n \,:\,a_j^\top x-b_j\le 0,\;\forall (a_j,b_j)\in U_j^{\alpha }, \; j=1,\ldots ,p\}\ne \emptyset .\)
Throughout this paper, the uncertainty sets \(U_{j}^\alpha , j=1,\ldots ,p,\) are given by
$$U_j^{\alpha }:=({{\overline{a}}}_j,{{\overline{b}}}_j)+\alpha Z, \quad j=1,\ldots ,p, \qquad (2)$$
where \(({{\overline{a}}}_j, {{\overline{b}}}_j)\in {\mathbb {R}}^{n+1}, j=1,\ldots ,p,\) are fixed and \(Z\subset {\mathbb {R}}^{n+1}\) is a convex and compact set such that \(0_{n+1}\in \mathrm{int}Z.\) We also assume that the nominal program \(\mathrm{(PU_0)}\) is feasible, i.e., \(\{x\in {\mathbb {R}}^n \,:\,{{\overline{a}}}_j^\top x-{{\overline{b}}}_j\le 0,\; j=1,\ldots ,p \}\ne \emptyset .\)
Following [16, 17], we define the notion of radius of robust feasibility for the parametric uncertain linear program \(\mathrm{(PU)}\) as follows.
Definition 2.1
(Radius of robust feasibility) The radius of robust feasibility for (PU) is given by
$$\rho :=\sup {\{\alpha \in {\mathbb {R}}_+ \,:\, \mathrm{(PR_\alpha )}\text { is feasible}\}}. \qquad (3)$$
The radius \(\rho \) provides a value for the maximal ‘size’ of the uncertainty set under which robust feasibility of our uncertain program is guaranteed.
The following dual characterization of solutions of a semi-infinite linear inequality system, which is useful for our analysis in the sequel, can be found in [22]. Recall that \(\mathrm{conv\,}{\varOmega }\) denotes the convex hull of \({\varOmega }\subset {\mathbb {R}}^n,\) while \(\mathrm{cone\,}{\varOmega }:={\mathbb {R}}_+\mathrm{conv\,}{\varOmega }\) stands for the convex conical hull of \({\varOmega }\cup \{0_n\}\).
Lemma 2.3
(See [22, Corollary 3.1.1]) Let T be an arbitrary index set. Then, it holds that
$$\{x\in {\mathbb {R}}^n \,:\, a_t^\top x\le b_t,\; t\in T\}\ne \emptyset \iff (0_n,1)\notin \mathrm{cl\,cone}{\{(-a_t,-b_t) \,:\, t\in T\}}.$$
The next lemma can be regarded as an extension of [16, Lemma 3], which covers the case of ball uncertainty; its method of proof is similar to that of [15, Lemma 2.2], where the data involve the epigraphs of conjugate functions of convex functions. For the benefit of the reader, a self-contained proof is given in the Appendix.
Lemma 2.4
Let \(\alpha \in {\mathbb {R}}_+, ({{\overline{a}}}_j,{{\overline{b}}}_j)\in {\mathbb {R}}^n\times {\mathbb {R}}, j=1,\ldots ,p,\) and let \(Z\subset {\mathbb {R}}^{n+1}\) be a convex and compact set with \(0_{n+1}\in \mathrm{int}Z.\) Assume that \((0_n,1)\in \mathrm{cl\, cone}\bigg \{\bigcup \limits _{j=1}^p[(-{{\overline{a}}}_j,-{{\overline{b}}}_j)-\alpha Z]\bigg \}.\) Then,
$$(0_n,1)\in \mathrm{cone}\bigg \{\bigcup \limits _{j=1}^p[(-{{\overline{a}}}_j,-{{\overline{b}}}_j)-(\alpha +\varepsilon ) Z]\bigg \}\quad \text {for all }\varepsilon >0.$$
Proof
See Appendix. \(\square \)
The following theorem provides an exact formula for the radius of robust feasibility for (PU) in terms of the Minkowski function of Z. This formula involves the so-called hypographical set of the nominal system \(\{{\overline{a}}_j^\top x-{\overline{b}}_j\le 0, j=1,\ldots ,p\}\) (cf. [16, 18]) defined by
$$H({\overline{a}},{\overline{b}}):=\mathrm{conv}{\{(-{\overline{a}}_j,-{\overline{b}}_j) \,:\, j=1,\ldots ,p\}}+{\mathbb {R}}_+(0_n,-1), \qquad (4)$$
where \({\overline{a}}:=({\overline{a}}_1,\ldots ,\overline{a}_p)\in ({\mathbb {R}}^n)^p\) and \({\overline{b}}:=({\overline{b}}_1,\ldots ,{\overline{b}}_p)\in {\mathbb {R}}^p.\)
Theorem 2.1
(Exact radius formula) Let the nominal program \(\mathrm{(PU_0)}\) be feasible. Then, the radius of robust feasibility for (PU) defined in (3) is given by
$$\rho =\inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \qquad (5)$$
Proof
We first prove that
$$\rho \le \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \qquad (6)$$
Assume on the contrary that (6) is false. It means that there exists \((a,b)\in H({\overline{a}},{\overline{b}})\) such that
$$\phi _{Z}(a,b)<\rho . \qquad (7)$$
By \((a,b)\in H({\overline{a}},{\overline{b}}),\) there exist \(\lambda _k\ge 0, k=1,\ldots , p\) with \(\sum _{k=1}^p\lambda _k=1\) and \(\mu \ge 0\) such that
Let \(\epsilon >0.\) Then, (8) implies that
Besides, by the definition of \(\phi _{Z}\) in (1), for \(\epsilon >0\) as above, there exists \(t_{\epsilon }>0\) such that \((a,b-\epsilon )\in t_\epsilon Z\) and \(t_\epsilon <\phi _{Z}(a,b-\epsilon )+\epsilon .\) It then follows that there exists \(z\in Z\) such that \((a,b-\epsilon )=t_\epsilon z\). Moreover, since Z is convex and \(0_{n+1}\in Z\), we have
Hence, there exists \({\widetilde{z}}\in Z\) such that \((a,b-\epsilon )=\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}},\) and then, by (9), we obtain that
Now, let \(\alpha \in {\mathbb {R}}_+\) be such that (PR\(_\alpha \)) is feasible. Then,
where \(U_j^\alpha :=({\overline{a}}_j,{\overline{b}}_j)+\alpha Z, \;j=1,\ldots ,p.\) According to Lemma 2.3, (11) is equivalent to the following condition
We claim that
Indeed, assume on the contrary that \(\alpha > \phi _{Z}(a,b-\epsilon )+\epsilon .\) Then, \(\alpha >0\) and
because Z is convex and \(0_{n+1}\in Z\). We find \(\widehat{z}\in Z\) such that \(\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}}=\alpha {\widehat{z}}.\) This together with (10) yields
which contradicts (12), and so, our claim in (13) holds. Note that \(\phi _{Z}\) is a continuous function [cf. Lemma 2.2(i)]. By letting \(\epsilon \rightarrow 0\), we obtain from (13) that \(\alpha \le \phi _{Z}(a,b)\), which in turn implies by definition of radius that \(\rho \le \phi _{Z}(a,b).\) This together with (7) gives a contradiction. Consequently, (6) holds.
We now prove that
$$\rho \ge \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \qquad (15)$$
We assume by contradiction that
$$\rho < \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \qquad (16)$$
For each \(\varepsilon >0,\) let \({{\widetilde{\alpha }}}:=\rho +\varepsilon .\) By the definition of \(\rho \), the program (PR\(_\alpha \)) is not feasible at \(\alpha :={{\widetilde{\alpha }}}.\) It means that
where \(U_j^{{{\widetilde{\alpha }}}}:=({\overline{a}}_j,{\overline{b}}_j)+{{\widetilde{\alpha }}} Z, \;j=1,\ldots ,p.\) In view of Lemma 2.3, (17) is equivalent to the following condition
Invoking Lemma 2.4, we assert that
for all \( \varepsilon >0.\) Then, there exist \(\lambda _j\ge 0\) and \(z_j\in Z, j=1,\ldots , p,\) such that
Observe by (18) that \(\sum _{j=1}^p\lambda _j\ne 0\). Putting \({{\widetilde{\lambda }}}_j:=\frac{\lambda _j}{\sum _{j=1}^p\lambda _j}\ge 0, j=1,\ldots , p\), we obtain that \(\sum \limits _{j=1}^p{{\widetilde{\lambda }}}_j= 1\), and then deduce from (18) that
Note that \(z_j\in Z\) for all \(j=1,\ldots ,p,\) and hence, \(z^*:=\sum \limits _{j=1}^p{{\widetilde{\lambda }}}_j z_j\in Z\) due to the convexity of Z. So, it follows from (19) that \(({{\widetilde{\alpha }}}+\varepsilon )z^*\in H({\overline{a}},{\overline{b}}),\) which in turn implies that
where the second and third inequalities in (20) hold by virtue of Lemma 2.2(i, ii). Letting \(\varepsilon \rightarrow 0\) in (20), we arrive at \(\inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}\le \rho ,\) which contradicts (16). So, (15) is valid, which completes the proof of the theorem. \(\square \)
We now focus on the case where the uncertainty set Z in (2) is symmetric (i.e., \(z\in Z \Rightarrow -z\in Z\)). In this case, the radius of robust feasibility for (PU) can be computed by solving a convex programming problem in terms of a specified norm. In what follows, we use the notation
$$\triangle _p:=\bigg \{\lambda =(\lambda _1,\ldots ,\lambda _p)\in {\mathbb {R}}_+^p \,:\, \sum \limits _{j=1}^p\lambda _j=1\bigg \}$$
to denote the simplex in \({\mathbb {R}}^p.\)
Corollary 2.1
(Radius formula with symmetric uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let the uncertainty set Z in (2) be symmetric. Then, we have
$$\rho =\min \limits _{(\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+}{\bigg \Vert \sum \limits _{j=1}^p\lambda _j({\overline{a}}_j,{\overline{b}}_j)+\mu (0_n,1)\bigg \Vert }, \qquad (21)$$
where \(||\cdot ||\) is a norm on \({\mathbb {R}}^{n+1}\) generated by Z, i.e., \(||\cdot ||=\phi _{Z}.\)
Proof
Since Z is symmetric and bounded, we assert by Lemma 2.2 (iii) that \(\phi _{Z}=||\cdot ||\), where \(||\cdot ||\) is a norm on \({\mathbb {R}}^{n+1}.\) Applying Theorem 2.1, we conclude that
Note that \(H({\overline{a}},{\overline{b}})\) is a nonempty and closed set in \({\mathbb {R}}^{n+1}\) and any norm in \({\mathbb {R}}^{n+1}\) is coercive. Therefore, the “inf” in (22) is attained; i.e., we have
For each \((a,b)\in H({\overline{a}},{\overline{b}}),\) there exist \(\lambda _j\ge 0, j=1,\ldots , p\) with \(\sum _{j=1}^p\lambda _j=1\) and \(\mu \ge 0\) such that
or equivalently,
Letting \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\), we obtain that \((\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+.\) Substituting now (24) into (23), we arrive at (21), which completes the proof. \(\square \)
Observe by definition that the radius of robust feasibility for (PU) is always nonnegative, and moreover, it could be zero (see, e.g., [23, Example 4.5] for a particular box uncertainty set). The following proposition presents a necessary and sufficient condition for the radius of robust feasibility for (PU) to be positive.
Proposition 2.1
(Necessary and sufficient condition for the positivity of \(\rho \)) Let the nominal program \(\mathrm{(PU_0)}\) be feasible. Consider the radius of robust feasibility for (PU) defined in (3). Then, it holds that
$$\sup \limits _{(x,y)\in {\mathbb {R}}^n\times {\mathbb {R}}}{\{y \,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+y\le 0,\; j=1,\ldots ,p\}}>0 \iff \rho >0.$$
Proof
Since the nominal program \(\mathrm{(PU_0)}\) is feasible, we assert by Theorem 2.1 that \(\rho = \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}.\)
[Proving \(\Longrightarrow \)] Let \(\sup \limits _{(x,y)\in {\mathbb {R}}^n\times {\mathbb {R}}}{\{y\,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+y\le 0,\; j=1,\ldots ,p\}}>0.\) Then, there exist \({\widehat{x}}\in {\mathbb {R}}^n\) and \(\delta _0>0\) such that
Arguing by contradiction, we suppose that \(\rho =0.\) It means that for each \(k\in {\mathbb {N}}\), there exists \((a_k,b_k)\in H({\overline{a}},{\overline{b}})\) such that
Let \(k\in {\mathbb {N}}\). Then, the relation \((a_k,b_k)\in H({\overline{a}},{\overline{b}})\) implies that there exist \(\lambda ^k_j\ge 0, j=1,\ldots , p\) with \(\sum _{j=1}^p\lambda ^k_j=1\) and \(\mu _k\ge 0\) such that
which entails that
By the definition of \(\phi _{Z}(a_k,b_k)\), we find \(t_{k}>0\) such that \(t_k<\phi _{Z}(a_k,b_k)+\frac{1}{k}\) and \((a_k,b_k)\in t_k Z.\) The latter ensures that there exists \(z_k\in Z\) such that \((a_k,b_k)=t_kz_k\). In addition, since Z is convex and \(0_{n+1}\in Z\), we have
Hence, there exists \({\widetilde{z}}_k\in Z\) such that \((a_k,b_k)=\big (\phi _{Z}(a_k,b_k)+\frac{1}{k}\big ){\widetilde{z}}_k,\) and then, by (28), we obtain that
where \({{\widetilde{\lambda }}}^k_j:=\frac{\lambda ^k_j}{\mu _k+\frac{1}{k}}\ge 0.\) Letting \(\gamma _k:=\sum _{j=1}^p{{\widetilde{\lambda }}}^k_j\), we see that \(\gamma _k>0\) and, by taking subsequences if necessary, we assume that \(\frac{{{\widetilde{\lambda }}}^k_j}{\gamma _k}\rightarrow \lambda _j\ge 0, j=1,\ldots , p\) as \(k\rightarrow \infty \) and \(\sum _{j=1}^p\lambda _j=1.\) On account of (29), we obtain that
Keeping in mind the compactness of Z and the relation in (27), and letting \(k\rightarrow \infty \) in (30), we arrive at \(-\sum _{j=1}^p\lambda _j(-{\overline{a}}_j,-{\overline{b}}_j)\in {\mathbb {R}}_+(0_n,-1)\). Then, we find \(\mu \ge 0\) such that
which implies that
where \(\delta _0\) was given in (26). It together with Lemma 2.3 implies that
which contradicts (26), and thus, our conclusion follows.
[Proving \(\Longleftarrow \)] Let \(\rho >0\) and set \(X(\varepsilon ):=\big \{x\in {\mathbb {R}}^n \,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+\varepsilon \le 0, j=1,\ldots , p\big \}\), where \(\varepsilon >0.\) The proof will be completed if we can show that there exists \(\varepsilon _0>0\) such that \(X(\varepsilon _0)\ne \emptyset .\)
Assume on the contrary that \(X(\varepsilon )=\emptyset \) for all \(\varepsilon >0\). In this circumstance, we invoke Lemma 2.3 to assert that, for each \(\varepsilon >0,\)
Let us consider any \(\varepsilon >0.\) Then, by (31), there exist \(\lambda ^\varepsilon _j\ge 0, j=1,\ldots , p,\) such that
Observe by (32) that \(\sum _{j=1}^p\lambda ^\varepsilon _j\ne 0\). Putting \({{\widetilde{\lambda }}}^\varepsilon _j:=\frac{\lambda ^\varepsilon _j}{\sum _{j=1}^p\lambda ^\varepsilon _j}\ge 0, j=1,\ldots , p\), we obtain that \(\sum \limits _{j=1}^p{{\widetilde{\lambda }}}^\varepsilon _j= 1\), and then deduce from (32) that
Letting \(\varepsilon \rightarrow 0\), we get by (33) that \(0_{n+1}\in H({\overline{a}},{\overline{b}})\) since \(H({\overline{a}},{\overline{b}})\) is a closed set. Hence,
where the last equality holds due to \(0_{n+1}\in Z.\) Now, clearly, (34) contradicts our assumption that \(\rho >0\). So, the proof is complete. \(\square \)
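The positivity criterion of Proposition 2.1 is itself a linear program, so it is easy to check numerically. The sketch below uses illustrative nominal data (the same two nominal constraints as in Example 3.1 of the next section; the variable names are ours).

```python
import numpy as np
from scipy.optimize import linprog

# Positivity test of Proposition 2.1 as an LP:
# rho > 0  iff  sup{y : abar_j^T x - bbar_j + y <= 0, j = 1..p} > 0.
# Illustrative nominal data: (abar_1, bbar_1) = (1, 0), (abar_2, bbar_2) = (0, 1).
abar = np.array([[1.0], [0.0]])    # p x n
bbar = np.array([0.0, 1.0])        # length p
p, n = abar.shape

# decision variables (x, y); maximize y  <=>  minimize -y
c = np.zeros(n + 1)
c[-1] = -1.0
A_ub = np.hstack([abar, np.ones((p, 1))])   # abar_j^T x + y <= bbar_j
res = linprog(c, A_ub=A_ub, b_ub=bbar, bounds=[(None, None)] * (n + 1))
print(-res.fun)    # 1.0 > 0, so the radius is positive for this data
```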
3 Numerically Tractable Radius Formulas for Spectrahedral Uncertainty Sets
In this section, we derive numerically tractable formulas from Theorem 2.1 for calculating the radius of robust feasibility for (PU) in the framework where the uncertainty set Z in (2) is a spectrahedron (see, e.g., [24,25,26]) described by
$$Z:=\big \{z:=(z^1,\ldots ,z^{n+1})\in {\mathbb {R}}^{n+1} \,:\, A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\big \} \qquad (35)$$
with \(A^i, i=0,1,\ldots ,n+1,\) being symmetric \((q\times q)\) matrices.
A remarkable feature of the spectrahedron (35) is that, besides the closedness and convexity of Z, which follow straightforwardly from the definition, its boundedness can be characterized in terms of dual data (cf. [22, Theorem 9.3]). Furthermore, the condition \(0_{n+1}\in \mathrm{int}Z\) implies that the matrix \(A^0\) can be chosen to be positive definite. Conversely, if \(A^0\succ 0\), then \(A^0\) can be congruently transformed to the identity matrix, and in this case, we obtain that \(0_{n+1}\in \mathrm{int}Z\) (see, e.g., [24, Page 252] and [22, Theorem 5.9]). It is also worth mentioning that the spectrahedron (35) covers a broad spectrum of convex sets, such as polyhedra, balls and ellipsoids [24,25,26], which are used to describe uncertainty sets in robust optimization.
With the aid of the spectrahedron (35), we can derive numerically tractable formulas for calculating the radius of robust feasibility for (PU) in commonly used uncertainty sets of robust optimization, such as ellipsoids, balls, polytopes and boxes.
Let us first consider the case where the spectrahedron Z in (35) is an ellipsoid. More concretely, we examine an ellipsoid centered at the origin,
$$Z:=\{z\in {\mathbb {R}}^{n+1} \,:\, z^\top M^{-1}z\le 1\}, \qquad (36)$$
where M is a positive definite symmetric \((n+1)\times (n+1)\) matrix. In this case, as the following corollary shows, the radius of robust feasibility for (PU) can be computed by solving a linearly constrained convex quadratic programming problem.
Corollary 3.1
(Radius formula with ellipsoidal uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (36). Then, the radius of robust feasibility \(\rho \) for (PU) defined in (3) satisfies
$$\rho ^2=\min \limits _{(\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+}{\bigg (\sum \limits _{j=1}^p\lambda _j({\overline{a}}_j,{\overline{b}}_j)+\mu (0_n,1)\bigg )^\top M^{-1}\bigg (\sum \limits _{j=1}^p\lambda _j({\overline{a}}_j,{\overline{b}}_j)+\mu (0_n,1)\bigg )}. \qquad (37)$$
Proof
Let
where \(e_i\in {\mathbb {R}}^{n+1}\) is a vector whose ith component is one and all others are zero. Then, we have
where the second equality in (38) is valid due to the Schur complement (cf. Lemma 2.1). It means that the ellipsoid Z in (36) can be expressed in terms of the spectrahedron (35) (see also,[24]). Since the ellipsoid Z is symmetric, we apply Corollary 2.1 to obtain that
where \(||\cdot ||=\phi _Z.\) Note further that the Minkowski function of the spectrahedron (35) is given by
for \((a,b):=(a^1,\ldots ,a^n,b)\in H({\overline{a}},{\overline{b}}).\) Thus, we arrive at \(||(a,b)||=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) In this setting, for any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \( t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) amounts to \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) In view of (38) with \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\), we see that
Then, for each \((a,b)\in {\mathbb {R}}^{n+1},\)
where we should note that \((a,b)^\top M^{-1}(a,b)\ge 0\) as M is a positive definite matrix.
Now, on account of (40), we assert that the equivalence between (37) and (39) is valid. It completes the proof. \(\square \)
Next, we provide an example which illustrates how the radius of robust feasibility for (PU) can be computed in the case of ellipsoidal uncertainty.
Example 3.1
(Calculating the radius with ellipsoidal uncertainty) Let
Consider a parametric uncertain linear program \(\mathrm{(PU)}\) that is defined as follows: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program
where \(c\in [1,2]\) and \((a_j,b_j)\in U_j^\alpha :=(\overline{a}_j,{\overline{b}}_j)+\alpha Z\) for \(j=1,2\) with \(({\overline{a}}_1, {\overline{b}}_1):=(1,0), ({\overline{a}}_2,{\overline{b}}_2):=(0,1)\in {\mathbb {R}}^2\).
It is easy to check that \(x_0:=-1\) is a robust feasible solution of program (EP\(1_\alpha \)) at \(\alpha =0,\) which is equivalent to saying that \(\mathrm{(PU_0)}\) is feasible, and that Z can be viewed in the form (36) with \(M=\left( \begin{array}{cc} 4 &{} 0 \\ 0&{} 8 \\ \end{array} \right) .\) Applying Corollary 3.1, we obtain that
where \(\lambda :=(\lambda _1,\lambda _2).\) Since \(\lambda _2:=1-\lambda _1\ge 0\), (41) amounts to the following convex quadratic program
Note that, for this setting, we can evaluate the objective function f of problem (42) as follows:
where the equality is attained at \(({\overline{\lambda }}_1,{\overline{\mu }}):=(\frac{1}{3},0).\) It shows that \(f_{\min }=\frac{1}{12}\) and hence confirms that \(\rho ^2=\frac{1}{12}.\) Consequently, we conclude that the radius of robust feasibility is \(\rho =\frac{1}{2\sqrt{3}}\) (Fig. 1).
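The computation in Example 3.1 can be reproduced numerically. The sketch below (variable names are ours) minimizes the squared \(M^{-1}\)-weighted norm of the point of \(H({\overline{a}},{\overline{b}})\) parameterized by \((\lambda _1,\mu )\), with \(M=\mathrm{diag}(4,8)\) and \(\lambda _2=1-\lambda _1\); it recovers \(f_{\min }=\frac{1}{12}\) at \((\lambda _1,\mu )=(\frac{1}{3},0)\) and hence \(\rho =\frac{1}{2\sqrt{3}}\approx 0.2887\).

```python
import numpy as np
from scipy.optimize import minimize

# Numerical cross-check of Example 3.1: minimize the convex quadratic
# f(lambda_1, mu) = lambda_1^2/4 + (1 - lambda_1 + mu)^2/8 over
# lambda_1 in [0, 1], mu >= 0 (squared weighted gauge with M = diag(4, 8)).
Minv = np.diag([0.25, 0.125])
abar = np.array([1.0, 0.0])        # nominal a_1, a_2 (here n = 1)
bbar = np.array([0.0, 1.0])        # nominal b_1, b_2

def f(v):
    lam1, mu = v
    lam = np.array([lam1, 1.0 - lam1])
    ab = np.array([lam @ abar, lam @ bbar + mu])
    return ab @ Minv @ ab

res = minimize(f, x0=[0.5, 0.0], bounds=[(0.0, 1.0), (0.0, None)])
rho = np.sqrt(res.fun)
print(rho)      # about 0.2887 = 1/(2*sqrt(3))
```

Any off-the-shelf QP solver would do here; a bounded quasi-Newton method suffices for this smooth two-variable convex problem.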
Let us now consider the particular case where the matrix M given in (36) is the identity matrix \(I_{n+1}.\) In this setting, the ellipsoid Z becomes the Euclidean unit closed ball
$$Z=\{z\in {\mathbb {R}}^{n+1} \,:\, ||z||_2\le 1\}.$$
Corollary 3.2
(Radius formula with ball uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be the Euclidean unit closed ball in \({\mathbb {R}}^{n+1}\). Then, the radius of robust feasibility \(\rho \) for (PU) defined in (3) satisfies
$$\rho =\min \limits _{(\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+}{\sqrt{\sum \limits _{i=1}^{n}\bigg (\sum \limits _{j=1}^p\lambda _j{\overline{a}}_j^i\bigg )^2+\bigg (\sum \limits _{j=1}^p\lambda _j{\overline{b}}_j+\mu \bigg )^2}}, \qquad (43)$$
where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)
Proof
Letting \(M:=I_{n+1}\), Corollary 3.1 gives us that
It is clear that (44) and (43) are equivalent to each other, which finishes the proof. \(\square \)
Remark 3.1
It is worth observing here that the result obtained in Corollary 3.2 can be viewed as
where the norm \(||\cdot ||\) is given (see (40) with \(M:=I_{n+1}\)) by
Here, the norm \(||\cdot ||_2\) is the Euclidean norm on \({\mathbb {R}}^{n+1}.\) With the norm (46), the formula (45) can be expressed in terms of distance as
$$\rho =d\big (0_{n+1},H({\overline{a}},{\overline{b}})\big ). \qquad (47)$$
Note that the distance function d generated by the Euclidean norm \(||\cdot ||_2\) of \({\varOmega }\subset {\mathbb {R}}^n\) is defined by
$$d(x,{\varOmega }):=\inf \limits _{y\in {\varOmega }}{||x-y||_2},$$
with the convention \(d(x,{\varOmega }):=+\infty \) whenever \({\varOmega }\) is empty, i.e., \(\inf {\emptyset }=+\infty \). The formula (47) was established in [16, Theorem 4], where a direct proof was given.
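The distance interpretation is easy to exercise numerically: assuming the parameterized form of Corollary 2.1 with the Euclidean norm, the radius is the distance from the origin to \(H({\overline{a}},{\overline{b}})\), computed below over the parameterization by \((\lambda ,\mu )\) with the same illustrative nominal data as Example 3.1 (variable names are ours).

```python
import numpy as np
from scipy.optimize import minimize

# Ball-uncertainty radius as a distance: minimize the squared Euclidean
# norm of sum_j lam_j (abar_j, bbar_j) + mu*(0, 1) over the simplex and
# mu >= 0.  Illustrative data: (abar_1, bbar_1) = (1, 0), (abar_2, bbar_2) = (0, 1).
abar = np.array([1.0, 0.0])
bbar = np.array([0.0, 1.0])

def sq_dist(v):
    lam1, mu = v
    lam = np.array([lam1, 1.0 - lam1])
    return (lam @ abar) ** 2 + (lam @ bbar + mu) ** 2

res = minimize(sq_dist, x0=[0.5, 0.0], bounds=[(0.0, 1.0), (0.0, None)])
print(np.sqrt(res.fun))    # about 0.7071 = 1/sqrt(2) for this data
```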
We now deal with the case where the spectrahedron Z in (35) is a cross-polytope defined by
$$Z:=\big \{z:=(z^1,\ldots ,z^{n+1})\in {\mathbb {R}}^{n+1} \,:\, \sum \limits _{i=1}^{n+1}|z^i|\le 1\big \}. \qquad (48)$$
In this case, we obtain a numerically tractable formula for calculating the radius of robust feasibility for (PU) as follows.
Corollary 3.3
(Radius formula with polytopic uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (48). Then, the radius of robust feasibility for (PU) defined in (3) is given by
$$\rho =\min \limits _{(\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+}{\bigg \{\sum \limits _{i=1}^{n}\bigg |\sum \limits _{j=1}^p\lambda _j{\overline{a}}_j^i\bigg |+\bigg |\sum \limits _{j=1}^p\lambda _j{\overline{b}}_j+\mu \bigg |\bigg \}}, \qquad (49)$$
where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)
Proof
Let
where \(D_i, i=1,\ldots , n+1,\) are the \((2^{n}\times 2^{n})\) diagonal matrices with the entries \(\sigma ^i_k=1\) or \(\sigma ^i_k=-1\) in the kth row for \(k=1,\ldots , 2^n\) satisfying \((\sigma ^1_{k_1},\ldots ,\sigma ^{n+1}_{k_1})\ne \pm (\sigma ^1_{k_2},\ldots ,\sigma ^{n+1}_{k_2})\) whenever \(k_1, k_2\in \{1,\ldots , 2^{n}\}, k_1\ne k_2\). Then,
which shows how the cross-polytope Z can be expressed in terms of the spectrahedron (35). Since the cross-polytope Z is symmetric, we get by Corollary 2.1 that
where \(||\cdot ||=\phi _Z.\)
Similar to the proof of Corollary 3.1, we conclude that \(||(a,b)||=\inf \big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) In this circumstance, for any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \(t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) amounts to \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) In view of (50) with \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\), we assert that
Then, for each \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}\), we have
where \(||\cdot ||_1\) is the \(\ell _1\)-norm on \({\mathbb {R}}^{n+1}.\) With the norm (52), we see that (49) coincides with (51), which finishes the proof. \(\square \)
Finally, we examine the case where the spectrahedron Z in (35) is a hypercube/box given by
$$Z:=\big \{z:=(z^1,\ldots ,z^{n+1})\in {\mathbb {R}}^{n+1} \,:\, \max \limits _{1\le i\le n+1}|z^i|\le 1\big \}. \qquad (53)$$
In this setting, the radius of robust feasibility for (PU) can be found by solving a linear minimax program with affine constraints.
Corollary 3.4
(Radius formula with box uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (53). Then, it holds that
$$\rho =\min \limits _{(\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+}{\max \bigg \{\bigg |\sum \limits _{j=1}^p\lambda _j{\overline{a}}_j^1\bigg |,\ldots ,\bigg |\sum \limits _{j=1}^p\lambda _j{\overline{a}}_j^n\bigg |,\bigg |\sum \limits _{j=1}^p\lambda _j{\overline{b}}_j+\mu \bigg |\bigg \}}, \qquad (54)$$
where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)
Proof
Let \(E_i\) be the \((n+1)\times (n+1)\) diagonal matrix with one in the (i, i)th entry and zeros elsewhere, and
Then, we see that
which shows how the box Z can be expressed in terms of the spectrahedron (35). In this case, the formula of Corollary 2.1 becomes
where \(||(a,b)||=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) For any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \( t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) is nothing else but \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) Substituting \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\) into (55), we arrive at
Then, for each \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}\), we have
where the norm \(||\cdot ||_\infty \) is the maximum norm on \({\mathbb {R}}^{n+1}.\) With the norm (57), it is easy to see that (54) is nothing else but (56). It finishes the proof. \(\square \)
Similar to the case of ball uncertainty, the formula (54) can be expressed in terms of distance as
$$\rho =d\big (0_{n+1},H({\overline{a}},{\overline{b}})\big ), \qquad (58)$$
where d is the distance generated by the maximum norm \(||\cdot ||_\infty .\) Note further that (58) has recently been given in [23, Corollary 5.4] by a different approach. More precisely, the radius of robust feasibility in [23, Corollary 5.4] was derived from the so-called radius of robust global error bound.
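The minimax program of Corollary 3.4 can be cast as a linear program by introducing an epigraph variable t. The sketch below assumes the parameterized minimax form over \(\triangle _p\times {\mathbb {R}}_+\) and uses illustrative nominal data (the same two nominal constraints as in Example 3.1; variable names are ours).

```python
import numpy as np
from scipy.optimize import linprog

# Box-uncertainty radius as an LP: minimize t over (lambda, mu, t) with
# |sum_j lambda_j abar_j^i| <= t for each i, |sum_j lambda_j bbar_j + mu| <= t,
# lambda in the simplex and mu >= 0.
abar = np.array([[1.0], [0.0]])   # p x n nominal constraint vectors
bbar = np.array([0.0, 1.0])       # length-p nominal right-hand sides
p, n = abar.shape

# decision variables: x = (lambda_1 .. lambda_p, mu, t)
c = np.zeros(p + 2)
c[-1] = 1.0                       # minimize t
rows, rhs = [], []
for i in range(n):                # +/- (lambda . abar[:, i]) <= t
    for s in (1.0, -1.0):
        r = np.zeros(p + 2); r[:p] = s * abar[:, i]; r[-1] = -1.0
        rows.append(r); rhs.append(0.0)
for s in (1.0, -1.0):             # +/- (lambda . bbar + mu) <= t
    r = np.zeros(p + 2); r[:p] = s * bbar; r[p] = s; r[-1] = -1.0
    rows.append(r); rhs.append(0.0)
A_eq = np.zeros((1, p + 2)); A_eq[0, :p] = 1.0   # sum lambda_j = 1

res = linprog(c, A_ub=np.array(rows), b_ub=rhs, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0.0, None)] * (p + 2))
print(res.fun)    # 0.5 for this data
```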
We close this section with the remark that, for a general spectrahedron of the form (35), we can calculate the radius of robust feasibility for (PU) directly from (5), since in this case the Minkowski function of Z is explicitly given by
$$\phi _{Z}(a,b)=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}}$$
for \((a,b):=(a^1,\ldots ,a^n,b)\in H({\overline{a}},{\overline{b}}).\)
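Numerically, this infimum reduces to a single eigenvalue computation whenever \(A^0\succ 0\): writing \(A^0=LL^\top \) (Cholesky), \(tA^0+B\succeq 0\) holds exactly when \(t\ge -\lambda _{\min }(L^{-1}BL^{-\top })\). A sketch with illustrative matrices (not taken from the paper's examples):

```python
import numpy as np

# Minkowski gauge of a spectrahedron Z = {z : A0 + sum_i z^i A^i >= 0}
# with A0 positive definite, evaluated in closed form via eigenvalues:
# t*A0 + B >= 0  <=>  t >= -lambda_min(L^{-1} B L^{-T}) with A0 = L L^T.
def gauge(A0, As, z):
    B = sum(zi * Ai for zi, Ai in zip(z, As))
    L = np.linalg.cholesky(A0)
    Linv = np.linalg.inv(L)
    lam_min = np.linalg.eigvalsh(Linv @ B @ Linv.T).min()
    return max(0.0, -lam_min)

# sanity check: A0 = I_2 and A1 = diag(1, -1) describe Z = [-1, 1] in R,
# whose gauge is |z|
A0 = np.eye(2)
A1 = np.diag([1.0, -1.0])
print(gauge(A0, [A1], [0.5]))    # 0.5
print(gauge(A0, [A1], [-2.0]))   # 2.0
```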
Let us see the following example, which illustrates how the radius of robust feasibility for (PU) can be directly calculated from (5) when Z is a simple non-symmetric spectrahedral uncertainty set.
Example 3.2
(Calculating the radius with non-symmetric spectrahedral uncertainty) Let
Consider a parametric uncertain linear program \(\mathrm{(PU)}\) that is defined as follows: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program
where \(c\in [-1,2]\) and \((a_j,b_j)\in U_j^\alpha :=(\overline{a}_j,{\overline{b}}_j)+\alpha Z\) for \(j=1,2\) with \(({\overline{a}}_1, {\overline{b}}_1):=(1,0), ({\overline{a}}_2,{\overline{b}}_2):=(0,1)\in {\mathbb {R}}^2\).
It is easy to check that \(x_0:=-1\) is a robust feasible solution of program (EP\(2_\alpha \)) at \(\alpha =0\), which is equivalent to saying that \(\mathrm{(PU_0)}\) is feasible, and that Z can be viewed in the form (35) with
In this case, for any \(t>0\) and \((a,b)\in {\mathbb {R}}^{2},\) we assert that
So, for each \((a,b)\in {\mathbb {R}}^{2},\)
Invoking now Theorem 2.1, we arrive at
Note that for each \((a,b)\in H({\overline{a}},{\overline{b}})=\mathrm{conv}\{(-1,0),(0,-1)\}+{\mathbb {R}}_+(0,-1),\) there exist \( \lambda \in [0,1]\) and \(\mu \ge 0\) such that
Hence, (60) becomes
Observe that, for any \(\mu \ge 0,\)
and therefore, by solving the problem (61), we conclude that the radius of robust feasibility is \(\rho =1\) (Fig. 2).
4 Conclusions
We have provided an exact formula for the radius of robust feasibility of a parametric uncertain linear program \(\mathrm{(PU)}\) with a compact and convex uncertainty set. In the case where the uncertainty set is a spectrahedron, we have obtained tractable radius formulas. In particular, we have shown that the radius of robust feasibility for \(\mathrm{(PU)}\) can be found by solving a linearly constrained convex quadratic program or a minimax linear program for commonly used uncertainty sets of robust optimization such as ellipsoids, balls, polytopes and boxes.
References
Bertsimas, D., Thiele, A.: A robust optimization approach to supply chain management. Oper. Res. 54, 150–168 (2006)
Fabozzi, F.J., Huang, D., Zhou, G.: Robust portfolios: contributions from operations research and finance. Ann. Oper. Res. 176, 191–220 (2010)
Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21, 1154–1157 (1973)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization, Princeton Ser. Appl. Math. Princeton University Press, Princeton (2009)
Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, Philadelphia (2001)
El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)
Beck, A., Ben-Tal, A.: Duality in robust optimization: primal worst equals dual best. Oper. Res. Lett. 37, 1–6 (2009)
Beck, A., Eldar, Y.C., Ben-Tal, A.: Mean-squared error estimation for linear systems with block circulant uncertainty. SIAM J. Matrix Anal. Appl. 29, 712–730 (2007)
Ben-Tal, A., den Hertog, D., Vial, J.-P.: Deriving robust counterparts of nonlinear uncertain inequalities. Math. Program. 149, 265–299 (2015)
Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53, 464–501 (2011)
Bertsimas, D., Brown, D.B.: Constructing uncertainty sets for robust linear optimization. Oper. Res. 57, 1483–1495 (2009)
Gorissen, B.L., Blanc, H., den Hertog, D., Ben-Tal, A.: Technical note-deriving robust and globalized robust solutions of uncertain linear programs with general convex uncertainty sets. Oper. Res. 62, 672–679 (2014)
Jeyakumar, V., Li, G.: Characterizing robust set containments and solutions of uncertain linear programs without qualifications. Oper. Res. Lett. 38, 188–194 (2010)
Jeyakumar, V., Li, G., Vicente-Perez, J.: Robust SOS-convex polynomial programs: exact SDP relaxations. Optim. Lett. 9, 1–18 (2015)
Goberna, M.A., Jeyakumar, V., Li, G., Linh, N.: Radius of robust feasibility formulas for classes of convex programs with uncertain polynomial constraints. Oper. Res. Lett. 44, 67–73 (2016)
Goberna, M.A., Jeyakumar, V., Li, G., Perez, J.-V.: Robust solutions to multi-objective linear programs with uncertain data. Eur. J. Oper. Res. 242, 730–743 (2015)
Goberna, M.A., Jeyakumar, V., Li, G., Perez, J.-V.: Robust solutions of multi-objective linear semi-infinite programs under constraint data uncertainty. SIAM J. Optim. 24, 1402–1419 (2014)
Cánovas, M.J., López, M.A., Parra, J., Toledo, F.J.: Distance to ill-posedness and the consistency value of linear semi-infinite inequality systems. Math. Program. 103A, 95–126 (2005)
Cánovas, M.J., López, M.A., Parra, J., Toledo, F.J.: Distance to ill-posedness for linear inequality systems under block perturbations: convex and infinite-dimensional cases. Optimization 60, 925–946 (2011)
Cánovas, M.J., López, M.A., Parra, J., Todorov, M.I.: Stability and well-posedness in linear semi-infinite programming. SIAM J. Optim. 10, 82–98 (1999)
Schirotzek, W.: Nonsmooth Analysis. Universitext. Springer, Berlin (2007)
Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998)
Chuong, T.D., Jeyakumar, V.: Robust global error bounds for uncertain linear inequality systems with applications. Linear Algebra Appl. 493, 183–205 (2016)
Nie, J.: Semidefinite representability. In: Semidefinite Optimization and Convex Algebraic Geometry, pp. 251–291, MOS–SIAM Ser. Optim., 13, SIAM, Philadelphia (2013)
Ramana, M., Goldman, A.J.: Some geometric results in semidefinite programming. J. Glob. Optim. 7, 33–50 (1995)
Vinzant, C.: What is a spectrahedron? Not. Am. Math. Soc. 61, 492–494 (2014)
Acknowledgements
The authors would like to thank the referees for the valuable comments and suggestions which have improved the final preparation of the paper. Research of the first author was supported by the UNSW Vice-Chancellor’s Postdoctoral Research Fellowship (RG134608/SIR50). Research of the second author was partially supported by a grant from the Australian Research Council.
Communicated by Jean-Pierre Crouzeix.
Appendix
In this section, we give the proof of Lemma 2.4 by exploiting some techniques from [15].
Proof of Lemma 2.4
We assume by contradiction that there exists \(\delta >0\) such that
Applying the separation theorem, we find a nonzero element \((u,v)\in {\mathbb {R}}^n\times {\mathbb {R}}\) such that
Obviously, \(v\le 0\) because \(0_{n+1}\in \mathrm{cone}\Big \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-(\alpha +\delta )Z]\Big \},\) and further, we assert that
Otherwise, there exists \((a_0,b_0)\in \mathrm{cone}\Big \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-(\alpha +\delta )Z]\Big \}\) such that \(\langle (u,v),(a_0,b_0)\rangle < 0.\) Then, \(\lambda (a_0,b_0)\in \mathrm{cone}\Big \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-(\alpha +\delta )Z]\Big \}\) for all \(\lambda >0\), and so,
This fact contradicts (62) and thus, the assertion (63) holds.
By our assumption,
Then, there exists a sequence \(\{(u_k,v_k)\}\subset \mathrm{cone}\Big \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-\alpha Z]\Big \}\) such that
So, for each \(k\in {\mathbb {N}}\), there exist \(z_k^j\in Z\) and \(\lambda _k^j\ge 0, j=1,\ldots ,p,\) such that
We claim that there exists \(\kappa >0\) such that
If this is not the case, then by passing to a subsequence if necessary, we may assume that \(\sum _{j=1}^{p}\lambda _k^j\rightarrow 0\) as \(k\rightarrow \infty .\) Together with (65) and the compactness of Z, this implies that \((u_k,v_k)\rightarrow 0_{n+1}\) as \(k\rightarrow \infty \). This contradicts (64), and so, our claim in (66) holds.
By \(0_{n+1}\in \mathrm{int}Z,\) we find \(\rho >0\) such that \(\rho {\mathbb {B}}_{n+1}\subset Z.\) Note that \(||(u,v)||=\max \limits _{w^*\in {\mathbb {B}}_{n+1}}{\langle (u,v),w^*\rangle },\) and hence, there exists \(w^*\in {\mathbb {B}}_{n+1}\) such that \(||(u,v)||=\langle (u,v),w^*\rangle .\) Now, letting \(z:=\rho w^*,\) it follows that \(z\in \rho {\mathbb {B}}_{n+1}\subset Z\) and that
Taking (65) into account, we see that
where \(\big (\frac{\alpha }{\alpha +\delta } z^j_k+\frac{\delta }{\alpha +\delta } z\big )\in Z\) inasmuch as Z is a convex set. Granting this, (63) and (67) entail that
This together with (66) entails that
Letting \(k\rightarrow \infty \) in (68), we arrive at \(0\le v-\delta \rho \kappa ||(u,v)||\), which is absurd by virtue of \(v\le 0\) and \((u,v)\ne 0_{n+1}.\) So, the proof is complete. \(\square \)
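For clarity, the convexity step invoked in the proof rests on the elementary identity
\[
\alpha z^j_k+\delta z=(\alpha +\delta )\Big (\frac{\alpha }{\alpha +\delta }z^j_k+\frac{\delta }{\alpha +\delta }z\Big ),\qquad \frac{\alpha }{\alpha +\delta }z^j_k+\frac{\delta }{\alpha +\delta }z\in Z,
\]
where membership in Z holds because the two coefficients are nonnegative and sum to one, \(z^j_k, z\in Z\), and Z is convex; consequently, each term \((-{\overline{a}}_j,-{\overline{b}}_j)-\alpha z^j_k-\delta z\) lies in \((-{\overline{a}}_j,-{\overline{b}}_j)-(\alpha +\delta )Z\).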
Chuong, T.D., Jeyakumar, V. An Exact Formula for Radius of Robust Feasibility of Uncertain Linear Programs. J Optim Theory Appl 173, 203–226 (2017). https://doi.org/10.1007/s10957-017-1067-6
Keywords
- Uncertain linear program
- Robust linear optimization
- Spectrahedron
- Robust solution
- Linear matrix inequality