1 Introduction

Real-world linear programs involve input data that are often noisy or uncertain due to prediction or measurement errors. For example, a linear program arising in industry or commerce [1, 2] may involve various costs, financial returns, and future demands that are unknown at the time the decision is made. These quantities have to be predicted and are replaced with their forecasts, which inevitably introduces prediction errors. Similarly, some of the data, such as the composition of raw materials, may be hard to measure exactly; such input data are subject to measurement errors.

If data uncertainty arises, for instance, in the constraints of a linear program, these constraints are likely to be violated once the actual data are realized. Constraint violations can severely affect the usability of the solution. Consequently, the development of optimization methodologies capable of generating robust solutions that are immunized against data uncertainty has become more important than ever in mathematics, commerce and engineering.

The deterministic approach to finding robust solutions of linear programs under data uncertainty was pioneered in the 1970s by Soyster [3], leading to the introduction of what is now called robust optimization. Two decades later, in the late 1990s, Ben-Tal and Nemirovski [4, 5] and El Ghaoui [6] provided a highly successful, computationally tractable treatment of the robust optimization approach for linear as well as convex optimization problems under data uncertainty (see [4, 7,8,9,10,11,12,13,14] and the references therein).

The robust optimization approach is based on the principle that the robust counterpart of an uncertain program, in which the uncertain constraints are enforced for all possible parameter realizations within some uncertainty set, has a feasible solution, called a robust feasible solution. Recent research in robust optimization has addressed the issue of guaranteeing robust feasibility by providing formulas for the radius of robust feasibility of uncertain linear and convex programs [15,16,17]. The radius of robust feasibility gives the maximal ‘size’ of an uncertainty set under which robust feasibility can be guaranteed. However, the known formulas are valid only for programs with a ball uncertainty set.

The purpose of this paper is to present a new exact formula for the radius of robust feasibility of uncertain linear programs under a general compact and convex uncertainty set. By considering spectrahedral uncertainty sets, we obtain numerically tractable radius formulas for commonly used uncertainty sets of robust optimization, such as ellipsoids, balls, polytopes and boxes. In these cases, we show that the radius of robust feasibility can be calculated by solving a linearly constrained convex quadratic program or a minimax linear program.

Our approach to deriving these formulas was inspired by the elegant work on the consistency radius in linear semi-infinite programming, which guarantees feasibility of the nominal system under perturbations preserving the number of constraints [18,19,20].

In Sect. 2, we provide a general exact formula for the radius of robust feasibility under a general compact and convex uncertainty set, whereas in Sect. 3 we present numerically tractable formulas for the radius of robust feasibility in the case of spectrahedral uncertainty sets. In the Appendix, we provide the proof of a technical lemma which paves the way to establishing the general exact formula for the radius.

2 An Exact Formula for Radius of Robust Feasibility

This section is devoted to presenting an exact formula for computing the radius of robust feasibility for an uncertain linear program under a compact and convex uncertainty set.

We begin by fixing some definitions and some basic results that will be used throughout the paper. A symmetric \((n\times n)\) matrix A is said to be positive semi-definite, denoted by \(A\succeq 0\), whenever \(x^\top Ax\ge 0\) for all \(x\in {\mathbb {R}}^n,\) where \(n\in {\mathbb {N}}:=\{1,2,\ldots \}\). If \(x^\top Ax> 0\) for all \(x\in {\mathbb {R}}^n\setminus \{0_n\},\) then A is called positive definite, denoted by \(A\succ 0.\)

The following lemma is known as the Schur complement.

Lemma 2.1

(See [5, Lemma 4.2.1]) Let

$$\begin{aligned} W:= \left( \begin{array}{cc} A &{} \;B^\top \\ B&{} \;C \\ \end{array} \right) \end{aligned}$$

be a symmetric matrix with \((k\times k)\) block A and \((l\times l)\) block C. Assume that \(A \succ 0.\) Then, \(W\succeq 0\) if and only if \(C-BA^{-1}B^\top \succeq 0.\)
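As a numerical illustration of Lemma 2.1 (not part of the original text), the equivalence can be checked on a small instance; the matrices below are assumed toy data, and positive semi-definiteness is tested via the smallest eigenvalue.

```python
import numpy as np

# A numerical illustration of the Schur complement lemma (assumed toy
# matrices): with A > 0, W = [[A, B^T], [B, C]] is PSD exactly when the
# Schur complement C - B A^{-1} B^T is PSD.
def is_psd(M, tol=1e-9):
    """Positive semi-definiteness via the smallest eigenvalue."""
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)

A = np.array([[2.0, 0.0], [0.0, 1.0]])   # (2 x 2) block, positive definite
B = np.array([[1.0, 0.5]])               # (1 x 2) block
C = np.array([[1.0]])                    # (1 x 1) block

W = np.block([[A, B.T], [B, C]])
schur = C - B @ np.linalg.inv(A) @ B.T   # = [[0.25]] here

print(is_psd(W), is_psd(schur))          # True True
```

Both tests agree, as the lemma predicts, since here \(A\succ 0\).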

Let \({\varOmega }\subset {\mathbb {R}}^n\) be a convex set such that \(0_n\in \mathrm{int}{\varOmega },\) where the notation \(\mathrm{int}{\varOmega }\) stands for the interior of \({\varOmega }\). The function \(\phi _{\varOmega }:{\mathbb {R}}^n\rightarrow {\mathbb {R}}_+:=[0,+\infty [\) given by

$$\begin{aligned} \phi _{\varOmega }(x):=\inf {\{t>0\,:\, x\in t{\varOmega }\}},\quad x\in {\mathbb {R}}^n, \end{aligned}$$
(1)

is called the Minkowski function, or gauge, of \({\varOmega }.\) The following lemma collects some properties of the Minkowski function that can be found in the literature; see, e.g., [21, Lemma 1.3.13].

Lemma 2.2

Let \({\varOmega }\subset {\mathbb {R}}^n\) be a convex set such that \(0_n\in \mathrm{int}{\varOmega }.\) Then, the following properties hold:

  1. (i)

    \(\phi _{\varOmega }\) is sublinear and continuous;

  2. (ii)

    \(\{x\in {\mathbb {R}}^n \,:\, \phi _{\varOmega }(x)\le 1\}=\mathrm{cl\,}{\varOmega },\) where \(\mathrm{cl\,}{\varOmega }\) stands for the closure of \({\varOmega }\); and

  3. (iii)

    if, in addition, \({\varOmega }\) is bounded and symmetric (i.e., \(x\in {\varOmega }\Rightarrow -x\in {\varOmega }\)), then \(\phi _{\varOmega }\) is a norm on \({\mathbb {R}}^n\), denoted by \(||\cdot ||\) and called the norm generated by \({\varOmega }\).
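A minimal sketch of the gauge in (1): for the Euclidean ball of radius r (a symmetric, bounded, convex set with the origin in its interior), the infimum in (1) has a closed form, illustrating Lemma 2.2(iii). The helper `gauge_ball` is hypothetical, not from the text.

```python
import numpy as np

# Minkowski gauge (1) for Omega = {z : ||z||_2 <= r}:
#   phi_Omega(x) = inf{t > 0 : x in t*Omega} = ||x||_2 / r,
# which is a norm, as Lemma 2.2(iii) asserts.
def gauge_ball(x, r):
    """Gauge of the Euclidean ball of radius r (hypothetical helper)."""
    return float(np.linalg.norm(x)) / r

x = np.array([3.0, 4.0])
print(gauge_ball(x, 1.0))    # 5.0: for the unit ball the gauge is the norm
print(gauge_ball(x, 2.0))    # 2.5: doubling Omega halves the gauge
```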

A parametric linear program in the face of data uncertainty in both the objective and the constraints, denoted by \(\mathrm{(PU)}\), can be captured by a family of uncertain linear programs: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program

$$\begin{aligned} \mathrm{(PU_\alpha )}\qquad \min _{x\in {\mathbb {R}}^n}{\big \{c^\top x \,:\, a_j^\top x-b_j\le 0,\; j=1,\ldots ,p\big \}}, \end{aligned}$$

where \(c, (a_j,b_j), j=1,\ldots ,p,\) are uncertain vectors and \(V^\alpha , U_{j}^\alpha , j=1,\ldots ,p,\) are uncertainty sets that are compact and convex.

The robust counterpart of (PU\(_\alpha \)) is given by

$$\begin{aligned} \mathrm{(PR_\alpha )}\qquad \min _{x\in {\mathbb {R}}^n}{\Big \{\max _{c\in V^\alpha }c^\top x \,:\, a_j^\top x-b_j\le 0,\;\forall (a_j,b_j)\in U_j^{\alpha }, \; j=1,\ldots ,p\Big \}}, \end{aligned}$$

where the uncertain constraints are enforced for all possible parameter realizations within the corresponding uncertainty sets. The program (PU\(_\alpha \)) is said to be robust feasible if the robust program (PR\(_\alpha \)) is feasible, i.e., \(\{x\in {\mathbb {R}}^n \,:\,a_j^\top x-b_j\le 0,\;\forall (a_j,b_j)\in U_j^{\alpha }, \; j=1,\ldots ,p\}\ne \emptyset .\)

Throughout this paper, the uncertainty sets \(U_{j}^\alpha , j=1,\ldots ,p,\) are given by

$$\begin{aligned} U_j^\alpha :=({{\overline{a}}}_j,{{\overline{b}}}_j)+\alpha Z, \; j=1,\ldots ,p, \end{aligned}$$
(2)

where \(({{\overline{a}}}_j, {{\overline{b}}}_j)\in {\mathbb {R}}^{n+1}, j=1,\ldots ,p,\) are fixed and \(Z\subset {\mathbb {R}}^{n+1}\) is a convex and compact set such that \(0_{n+1}\in \mathrm{int}Z.\) We also assume that the nominal program \(\mathrm{(PU_0)}\) is feasible, i.e., \(\{x\in {\mathbb {R}}^n \,:\,{{\overline{a}}}_j^\top x-{{\overline{b}}}_j\le 0,\; j=1,\ldots ,p \}\ne \emptyset .\)

Following [16, 17], we define the notion of radius of robust feasibility for the parametric uncertain linear program \(\mathrm{(PU)}\) as follows.

Definition 2.1

(Radius of robust feasibility) The radius of robust feasibility for (PU) is given by

$$\begin{aligned} \rho :=\sup {\{\alpha \in {\mathbb {R}}_+ \,:\, \mathrm{(PR_\alpha )} \text{ is feasible}\}}. \end{aligned}$$
(3)

The radius \(\rho \) provides a value for the maximal ‘size’ of the uncertainty set under which robust feasibility of our uncertain program is guaranteed.

The following dual characterization of solutions of a semi-infinite linear inequality system, which is useful for our analysis in the sequel, can be found in [22]. Recall that \(\mathrm{conv\,}{\varOmega }\) denotes the convex hull of \({\varOmega }\subset {\mathbb {R}}^n,\) while \(\mathrm{cone\,}{\varOmega }:={\mathbb {R}}_+\mathrm{conv\,}{\varOmega }\) stands for the convex conical hull of \({\varOmega }\cup \{0_n\}\).

Lemma 2.3

(See [22, Corollary 3.1.1]) Let T be an arbitrary index set. Then, it holds that

$$\begin{aligned} \{x\in {\mathbb {R}}^n \,:\, u^\top _t x-v_t\le 0, t\in T\}\ne \emptyset \Longleftrightarrow (0_n,1)\notin \mathrm{cl\, cone}\{(-u_t,-v_t) \,:\, t\in T\}. \end{aligned}$$
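Lemma 2.3 can be illustrated numerically for a finite index set T (assumed toy data): finitely generated cones are closed, so the criterion reads that the system is consistent iff \((0_n,1)\) does not lie in \(\mathrm{cone}\{(-u_t,-v_t)\}\), and cone membership is itself a feasibility LP. The helper `in_cone` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Cone membership test: target in cone{generators} iff some lambda >= 0
# solves sum_t lambda_t * g_t = target (a feasibility LP).
def in_cone(generators, target):
    A_eq = np.array(generators, dtype=float).T
    res = linprog(c=np.zeros(A_eq.shape[1]), A_eq=A_eq,
                  b_eq=np.array(target, dtype=float))
    return res.status == 0   # linprog's default bounds are lambda >= 0

# Infeasible system x <= -1, -x <= -1: (u, v) in {(1, -1), (-1, -1)}.
print(in_cone([(-1, 1), (1, 1)], (0, 1)))    # True: (0,1) in the cone
# Feasible system x <= 1, -x <= 0: (u, v) in {(1, 1), (-1, 0)}.
print(in_cone([(-1, -1), (1, 0)], (0, 1)))   # False: (0,1) not in the cone
```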

The next lemma can be regarded as an extension of [16, Lemma 3], which covers the case of ball uncertainty; its method of proof is similar to that of [15, Lemma 2.2], where the data involve the epigraphs of conjugate functions of convex functions. For the benefit of the reader, a self-contained proof is given in the Appendix.

Lemma 2.4

Let \(\alpha \in {\mathbb {R}}_+, ({{\overline{a}}}_j,{{\overline{b}}}_j)\in {\mathbb {R}}^n\times {\mathbb {R}}, j=1,\ldots ,p,\) and let \(Z\subset {\mathbb {R}}^{n+1}\) be a convex and compact set with \(0_{n+1}\in \mathrm{int}Z.\) Assume that \((0_n,1)\in \mathrm{cl\, cone}\bigg \{\bigcup \limits _{j=1}^p[(-{{\overline{a}}}_j,-{{\overline{b}}}_j)-\alpha Z]\bigg \}.\) Then,

$$\begin{aligned} (0_n,1)\in \mathrm{cone}\Bigg \{\bigcup _{j=1}^p[(-{{\overline{a}}}_j,-{\overline{b}}_j)-(\alpha +\delta )Z]\Bigg \}\; \text{ for } \text{ all } \; \delta >0. \end{aligned}$$

Proof

See Appendix. \(\square \)

The following theorem provides an exact formula for the radius of robust feasibility for (PU) in terms of the Minkowski function of Z. This formula involves the so-called hypographical set of the nominal system \(\{{\overline{a}}_j^\top x-{\overline{b}}_j\le 0, j=1,\ldots ,p\}\) (cf. [16, 18]) defined by

$$\begin{aligned} H({\overline{a}},{\overline{b}}):=\mathrm{conv\,}\{(-{\overline{a}}_j,-{\overline{b}}_j)\,:\, j=1,\ldots ,p\}+{\mathbb {R}}_+(0_n,-1), \end{aligned}$$
(4)

where \({\overline{a}}:=({\overline{a}}_1,\ldots ,\overline{a}_p)\in ({\mathbb {R}}^n)^p\) and \({\overline{b}}:=({\overline{b}}_1,\ldots ,{\overline{b}}_p)\in {\mathbb {R}}^p.\)

Theorem 2.1

(Exact radius formula) Let the nominal program \(\mathrm{(PU_0)}\) be feasible. Then, the radius of robust feasibility for (PU) defined in (3) is given by

$$\begin{aligned} \rho = \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \end{aligned}$$
(5)

Proof

We first prove that

$$\begin{aligned} {\rho }\le \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \end{aligned}$$
(6)

Assume on the contrary that (6) is false. Then there exists \((a,b)\in H({\overline{a}},{\overline{b}})\) such that

$$\begin{aligned} \rho > \phi _{Z}(a,b). \end{aligned}$$
(7)

By \((a,b)\in H({\overline{a}},{\overline{b}}),\) there exist \(\lambda _k\ge 0, k=1,\ldots , p\) with \(\sum _{k=1}^p\lambda _k=1\) and \(\mu \ge 0\) such that

$$\begin{aligned} (a,b)=\sum _{k=1}^p\lambda _k(-{\overline{a}}_k,-{\overline{b}}_k)+\mu (0_n,-1). \end{aligned}$$
(8)

Let \(\epsilon >0.\) Then, (8) implies that

$$\begin{aligned} (0_n,1)=\sum _{k=1}^p\frac{\lambda _k}{\mu +\epsilon } \big ((-{\overline{a}}_k,-{\overline{b}}_k)-(a,b-\epsilon )\big ). \end{aligned}$$
(9)

Besides, by the definition of \(\phi _{Z}\) in (1), for \(\epsilon >0\) as above, there exists \(t_{\epsilon }>0\) such that \((a,b-\epsilon )\in t_\epsilon Z\) and \(t_\epsilon <\phi _{Z}(a,b-\epsilon )+\epsilon .\) It then follows that there exists \(z\in Z\) such that \((a,b-\epsilon )=t_\epsilon z\). Moreover, since Z is convex and \(0_{n+1}\in Z\), we have

$$\begin{aligned} (a,b-\epsilon )&=\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big )\left( \frac{t_\epsilon }{\phi _{Z}(a,b-\epsilon )+\epsilon }z\right. \\&\quad \left. + \Big (1-\frac{t_\epsilon }{\phi _{Z}(a,b-\epsilon )+\epsilon }\Big )0_{n+1}\right) \in \big (\phi _{Z}(a,b-\epsilon )+\epsilon \big )Z. \end{aligned}$$

Hence, there exists \({\widetilde{z}}\in Z\) such that \((a,b-\epsilon )=\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}},\) and then, by (9), we obtain that

$$\begin{aligned} (0_n,1)=\sum _{k=1}^p\frac{\lambda _k}{\mu +\epsilon } \Big ((-{\overline{a}}_k,-{\overline{b}}_k)-\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}}\Big ). \end{aligned}$$
(10)

Now, let \(\alpha \in {\mathbb {R}}_+\) be such that (PR\(_\alpha \)) is feasible. Then,

$$\begin{aligned} \{x\in {\mathbb {R}}^n \,:\, a_j^\top x-b_j\le 0,\; \forall (a_j,b_j)\in U_j^{\alpha }, \; j=1,\ldots ,p\}\ne \emptyset , \end{aligned}$$
(11)

where \(U_j^\alpha :=({\overline{a}}_j,{\overline{b}}_j)+\alpha Z, \;j=1,\ldots ,p.\) According to Lemma 2.3, (11) is equivalent to the following condition

$$\begin{aligned} (0_n,1)\notin -\mathrm{cl\,cone}\left( \bigcup \limits _{j=1}^pU^\alpha _j\right) . \end{aligned}$$
(12)

We claim that

$$\begin{aligned} \alpha \le \phi _{Z}(a,b-\epsilon )+\epsilon . \end{aligned}$$
(13)

Indeed, assume on the contrary that \(\alpha > \phi _{Z}(a,b-\epsilon )+\epsilon .\) Then, \(\alpha >0\) and

$$\begin{aligned}&\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}}\\&\quad =\alpha \left( \frac{\phi _{Z}(a,b-\epsilon )+\epsilon }{\alpha }{\widetilde{z}}+ \Big (1-\frac{\phi _{Z}(a,b-\epsilon )+\epsilon }{\alpha }\Big )0_{n+1}\right) \in \alpha Z, \end{aligned}$$

because Z is convex and \(0_{n+1}\in Z\). We find \(\widehat{z}\in Z\) such that \(\big (\phi _{Z}(a,b-\epsilon )+\epsilon \big ){\widetilde{z}}=\alpha {\widehat{z}}.\) This together with (10) yields

$$\begin{aligned} (0_n,1)=\sum _{k=1}^p\frac{\lambda _k}{\mu +\epsilon } \big ((-{\overline{a}}_k,-{\overline{b}}_k)-\alpha {\widehat{z}}\big )\in -\mathrm{cone}\left( \bigcup \limits _{j=1}^pU^\alpha _j\right) , \end{aligned}$$
(14)

which contradicts (12), and so, our claim in (13) holds. Note that \(\phi _{Z}\) is a continuous function [cf. Lemma 2.2(i)]. By letting \(\epsilon \rightarrow 0\), we obtain from (13) that \(\alpha \le \phi _{Z}(a,b)\), which in turn implies by definition of radius that \(\rho \le \phi _{Z}(a,b).\) This together with (7) gives a contradiction. Consequently, (6) holds.

We now prove that

$$\begin{aligned} \rho \ge \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \end{aligned}$$
(15)

We assume by contradiction that

$$\begin{aligned} \rho < \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}. \end{aligned}$$
(16)

For each \(\varepsilon >0,\) let \({{\widetilde{\alpha }}}:=\rho +\varepsilon .\) By the definition of \(\rho \), the program (PR\(_\alpha \)) is not feasible at \(\alpha :={{\widetilde{\alpha }}}.\) This means that

$$\begin{aligned} \{x\in {\mathbb {R}}^n \,:\, a_j^\top x-b_j\le 0,\; \forall (a_j,b_j)\in U_j^{{{\widetilde{\alpha }}}}, \; j=1,\ldots ,p\}= \emptyset , \end{aligned}$$
(17)

where \(U_j^{{{\widetilde{\alpha }}}}:=({\overline{a}}_j,{\overline{b}}_j)+{{\widetilde{\alpha }}} Z, \;j=1,\ldots ,p.\) In view of Lemma 2.3, (17) is equivalent to the following condition

$$\begin{aligned} (0_n,1)\in -\mathrm{cl\,cone}\left( \bigcup \limits _{j=1}^pU^{{{\widetilde{\alpha }}}}_j\right) = \mathrm{cl\, cone}\Big \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-{{\widetilde{\alpha }}} Z]\Big \}. \end{aligned}$$

Invoking Lemma 2.4, we assert that

$$\begin{aligned} (0_n,1)\in&\mathrm{cone}\Bigg \{\bigcup \limits _{j=1}^p[(-{\overline{a}}_j,-{\overline{b}}_j)-({{\widetilde{\alpha }}}+\varepsilon ) Z]\Bigg \} \end{aligned}$$

for all \( \varepsilon >0.\) Then, there exist \(\lambda _j\ge 0\) and \(z_j\in Z, j=1,\ldots , p,\) such that

$$\begin{aligned} (0_n,1)=\sum _{j=1}^p\lambda _j\big [(-{\overline{a}}_j,-{\overline{b}}_j)-({{\widetilde{\alpha }}}+\varepsilon ) z_j\big ]. \end{aligned}$$
(18)

Observe by (18) that \(\sum _{j=1}^p\lambda _j\ne 0\). Putting \({{\widetilde{\lambda }}}_j:=\frac{\lambda _j}{\sum _{j=1}^p\lambda _j}\ge 0, j=1,\ldots , p\), we obtain that \(\sum \limits _{j=1}^p{{\widetilde{\lambda }}}_j= 1\), and then deduce from (18) that

$$\begin{aligned} ({{\widetilde{\alpha }}}+\varepsilon )\sum _{j=1}^p{{\widetilde{\lambda }}}_j z_j=\sum _{j=1}^p{{\widetilde{\lambda }}}_j(-{\overline{a}}_j,-{\overline{b}}_j)+\frac{1}{\sum _{j=1}^p\lambda _j}(0_n,-1)\in H({\overline{a}},{\overline{b}}). \end{aligned}$$
(19)

Note that \(z_j\in Z\) for all \(j=1,\ldots ,p,\) and hence, \(z^*:=\sum \limits _{j=1}^p{{\widetilde{\lambda }}}_j z_j\in Z\) due to the convexity of Z. So, it follows from (19) that \(({{\widetilde{\alpha }}}+\varepsilon )z^*\in H({\overline{a}},{\overline{b}}),\) which in turn implies that

$$\begin{aligned} \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}\le \phi _{Z}[({{\widetilde{\alpha }}}+\varepsilon )z^*] \le ({{\widetilde{\alpha }}}+\varepsilon )\phi _{Z}(z^*) \le {{\widetilde{\alpha }}}+\varepsilon =\rho +2\varepsilon , \end{aligned}$$
(20)

where the second and third inequalities in (20) hold by virtue of Lemma 2.2(i, ii). Letting \(\varepsilon \rightarrow 0\) in (20), we arrive at \(\inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}\le \rho ,\) which contradicts (16). So, (15) is valid, which completes the proof of the theorem. \(\square \)

We now focus on the case where the uncertainty set Z in (2) is symmetric (i.e., \(z\in Z \Rightarrow -z\in Z\)). In this case, the radius of robust feasibility for (PU) can be computed by solving a convex programming problem in terms of a specified norm. In what follows, we use the notation

$$\begin{aligned} \triangle _p:=\left\{ \lambda :=(\lambda _1,\ldots ,\lambda _p)\in {\mathbb {R}}^p\,:\, \lambda _j\ge 0, j=1,\ldots , p, \sum _{j=1}^p\lambda _j=1\right\} \end{aligned}$$

to denote the simplex in \({\mathbb {R}}^p.\)

Corollary 2.1

(Radius formula with symmetric uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let the uncertainty set Z in (2) be symmetric. Then, we have

$$\begin{aligned} \rho = \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left| \left| \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right| \right| }, \end{aligned}$$
(21)

where \(||\cdot ||\) is the norm on \({\mathbb {R}}^{n+1}\) generated by Z, i.e., \(||\cdot ||=\phi _{Z}.\)

Proof

Since Z is symmetric and bounded, we assert by Lemma 2.2 (iii) that \(\phi _{Z}=||\cdot ||\), where \(||\cdot ||\) is a norm on \({\mathbb {R}}^{n+1}.\) Applying Theorem 2.1, we conclude that

$$\begin{aligned} \rho = \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{||(a,b)||}. \end{aligned}$$
(22)

Note that \(H({\overline{a}},{\overline{b}})\) is a nonempty and closed set in \({\mathbb {R}}^{n+1}\) and any norm on \({\mathbb {R}}^{n+1}\) is coercive. Therefore, the “inf” in (22) is attained; i.e., we have

$$\begin{aligned} \rho = \min _{(a,b)\in H({\overline{a}},{\overline{b}})}{||(a,b)||}. \end{aligned}$$
(23)

For each \((a,b)\in H({\overline{a}},{\overline{b}}),\) there exist \(\lambda _j\ge 0, j=1,\ldots , p\) with \(\sum _{j=1}^p\lambda _j=1\) and \(\mu \ge 0\) such that

$$\begin{aligned} (a,b)=\sum _{j=1}^p\lambda _j(-{\overline{a}}_j,-{\overline{b}}_j)+\mu (0_n,-1), \end{aligned}$$

or equivalently,

$$\begin{aligned} a=-\sum _{j=1}^p\lambda _j{\overline{a}}_j,\quad b=-\sum _{j=1}^p\lambda _j{\overline{b}}_j-\mu . \end{aligned}$$
(24)

Letting \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\), we obtain that \((\lambda ,\mu )\in \triangle _p\times {\mathbb {R}}_+.\) Substituting now (24) into (23), we arrive at (21), which completes the proof. \(\square \)
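Formula (21) can be checked numerically on a toy instance (assumed data, not from the text): take \(n=1, p=2\), nominal data \(({\overline{a}}_1,{\overline{b}}_1)=(1,0)\) and \(({\overline{a}}_2,{\overline{b}}_2)=(0,1)\), and Z the Euclidean unit ball, so that \(\phi _Z\) is the Euclidean norm; for ball uncertainty, robust feasibility of (PR\(_\alpha \)) amounts to \({\overline{a}}_j^\top x-{\overline{b}}_j+\alpha ||(x,-1)||_2\le 0\) for all j. The sketch compares a brute-force feasibility scan with a grid minimization of (21).

```python
import numpy as np

# Assumed toy instance (n = 1, p = 2) with Z the Euclidean unit ball,
# so phi_Z = ||.||_2 and the robust constraints have a closed form.
a_bar = np.array([1.0, 0.0])    # nominal a_bar_1, a_bar_2
b_bar = np.array([0.0, 1.0])    # nominal b_bar_1, b_bar_2

def robust_feasible(alpha, xs=np.linspace(-5.0, 5.0, 20001)):
    """Brute-force scan: does some x satisfy all robust constraints?"""
    lhs = np.outer(a_bar, xs) - b_bar[:, None] \
        + alpha * np.sqrt(xs**2 + 1.0)[None, :]
    return bool(np.any(np.all(lhs <= 1e-9, axis=0)))

# Formula (21) by grid search over the simplex; mu = 0 is optimal here
# because both b_bar_j are nonnegative.
lam = np.linspace(0.0, 1.0, 10001)
rho = float(np.sqrt((lam * a_bar[0])**2 + ((1 - lam) * b_bar[1])**2).min())

print(round(rho, 4), robust_feasible(rho - 0.01),
      robust_feasible(rho + 0.01))   # 0.7071 True False
```

The feasibility threshold and the formula agree at \(\rho =1/\sqrt{2}\) for this instance.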

Observe by definition that the radius of robust feasibility for (PU) is always nonnegative, and moreover, it could be zero (see, e.g., [23, Example 4.5] for a particular box uncertainty set). The following proposition presents a necessary and sufficient condition for the radius of robust feasibility for (PU) to be positive.

Proposition 2.1

(Necessary and sufficient condition for the positivity of \(\rho \)) Let the nominal program \(\mathrm{(PU_0)}\) be feasible. Consider the radius of robust feasibility for (PU) defined in (3). Then, it holds that

$$\begin{aligned} \sup _{(x,y)\in {\mathbb {R}}^n\times {\mathbb {R}}}{\{y\,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+y\le 0,\; j=1,\ldots ,p\}}>0 \Longleftrightarrow \rho >0. \end{aligned}$$
(25)

Proof

Since the nominal program \(\mathrm{(PU_0)}\) is feasible, we assert by Theorem 2.1 that \(\rho = \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}.\)

[Proving \(\Longrightarrow \)] Let \(\sup \limits _{(x,y)\in {\mathbb {R}}^n\times {\mathbb {R}}}{\{y\,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+y\le 0,\; j=1,\ldots ,p\}}>0.\) Then, there exist \({\widehat{x}}\in {\mathbb {R}}^n\) and \(\delta _0>0\) such that

$$\begin{aligned} {\overline{a}}_j^\top {\widehat{x}}-{\overline{b}}_j+\delta _0\le 0, \quad j=1,\ldots , p. \end{aligned}$$
(26)

Arguing by contradiction, we suppose that \(\rho =0.\) This means that for each \(k\in {\mathbb {N}}\), there exists \((a_k,b_k)\in H({\overline{a}},{\overline{b}})\) such that

$$\begin{aligned} \phi _{Z}(a_k,b_k)<\frac{1}{k}. \end{aligned}$$
(27)

Let \(k\in {\mathbb {N}}\). Then, the relation \((a_k,b_k)\in H({\overline{a}},{\overline{b}})\) implies that there exist \(\lambda ^k_j\ge 0, j=1,\ldots , p\) with \(\sum _{j=1}^p\lambda ^k_j=1\) and \(\mu _k\ge 0\) such that

$$\begin{aligned} (a_k,b_k)=\sum _{j=1}^p\lambda ^k_j(-{\overline{a}}_j,-{\overline{b}}_j)+\mu _k(0_n,-1), \end{aligned}$$

which entails that

$$\begin{aligned} (0_n,1)=\sum _{j=1}^p\frac{\lambda ^k_j}{\mu _k+\frac{1}{k}} \left( \big (-{\overline{a}}_j,-{\overline{b}}_j\big )-\big (a_k,b_k\big )+\left( 0_n,\frac{1}{k}\right) \right) . \end{aligned}$$
(28)

By the definition of \(\phi _{Z}(a_k,b_k)\), we find \(t_{k}>0\) such that \(t_k<\phi _{Z}(a_k,b_k)+\frac{1}{k}\) and \((a_k,b_k)\in t_k Z.\) The latter ensures that there exists \(z_k\in Z\) such that \((a_k,b_k)=t_kz_k\). In addition, since Z is convex and \(0_{n+1}\in Z\), we have

$$\begin{aligned} (a_k,b_k)&=\left( \phi _{Z}(a_k,b_k)+\frac{1}{k}\right) \left( \frac{t_k}{\phi _{Z}(a_k,b_k)+\frac{1}{k}}z_k\right. \\&\quad \left. + \left( 1-\frac{t_k}{\phi _{Z}(a_k,b_k)+\frac{1}{k}}\right) 0_{n+1}\right) \in \Big (\phi _{Z}(a_k,b_k)+\frac{1}{k}\Big )Z. \end{aligned}$$

Hence, there exists \({\widetilde{z}}_k\in Z\) such that \((a_k,b_k)=\big (\phi _{Z}(a_k,b_k)+\frac{1}{k}\big ){\widetilde{z}}_k,\) and then, by (28), we obtain that

$$\begin{aligned} (0_n,1)=\sum _{j=1}^p{{\widetilde{\lambda }}}^k_j \left( (-{\overline{a}}_j,-{\overline{b}}_j)-\left( \phi _{Z}(a_k,b_k)+\frac{1}{k}\right) {\widetilde{z}}_k+\left( 0_n,\frac{1}{k}\right) \right) , \end{aligned}$$
(29)

where \({{\widetilde{\lambda }}}^k_j:=\frac{\lambda ^k_j}{\mu _k+\frac{1}{k}}\ge 0.\) Letting \(\gamma _k:=\sum _{j=1}^p{{\widetilde{\lambda }}}^k_j\), we see that \(\gamma _k>0\) and, by taking subsequences if necessary, we assume that \(\frac{{{\widetilde{\lambda }}}^k_j}{\gamma _k}\rightarrow \lambda _j\ge 0, j=1,\ldots , p\) as \(k\rightarrow \infty \) and \(\sum _{j=1}^p\lambda _j=1.\) On account of (29), we obtain that

$$\begin{aligned}&-\left[ \sum _{j=1}^p\frac{{{\widetilde{\lambda }}}^k_j}{\gamma _k} \left( (-{\overline{a}}_j,-{\overline{b}}_j)-\left( \phi _{Z}(a_k,b_k)+\frac{1}{k}\right) {\widetilde{z}}_k+\left( 0_n,\frac{1}{k}\right) \right) \right] \nonumber \\&\quad =\frac{1}{\gamma _k}(0_n,-1)\in {\mathbb {R}}_+(0_n,-1), \quad \forall k\in {\mathbb {N}}. \end{aligned}$$
(30)

Keeping in mind the compactness of Z and the relation in (27), and letting \(k\rightarrow \infty \) in (30), we arrive at \(-\sum _{j=1}^p\lambda _j(-{\overline{a}}_j,-{\overline{b}}_j)\in {\mathbb {R}}_+(0_n,-1)\). Then, we find \(\mu \ge 0\) such that

$$\begin{aligned} 0_{n+1}=\sum _{j=1}^p\lambda _j(-{\overline{a}}_j,-{\overline{b}}_j)+\mu (0_n,-1), \end{aligned}$$

which implies that

$$\begin{aligned} (0_n,1)&=\sum _{j=1}^p\frac{\lambda _j}{\mu +\delta _0} \big ( -{\overline{a}}_j,-({\overline{b}}_j-\delta _0)\big )\in \mathrm{cone}\big \{\big (-{\overline{a}}_j,\\&-({\overline{b}}_j-\delta _0)\big ) \,:\, j=1,\ldots ,p\big \}, \end{aligned}$$

where \(\delta _0\) was given in (26). It together with Lemma 2.3 implies that

$$\begin{aligned} \big \{x\in {\mathbb {R}}^n\,:\, {\overline{a}}_j^\top x-({\overline{b}}_j-\delta _0)\le 0, j=1,\ldots , p\big \}=\emptyset , \end{aligned}$$

which contradicts (26), and thus, our conclusion follows.

[Proving \(\Longleftarrow \)] Let \(\rho >0\) and set \(X(\varepsilon ):=\big \{x\in {\mathbb {R}}^n \,:\, {\overline{a}}_j^\top x-{\overline{b}}_j+\varepsilon \le 0, j=1,\ldots , p\big \}\), where \(\varepsilon >0.\) The proof will be completed if we can show that there exists \(\varepsilon _0>0\) such that \(X(\varepsilon _0)\ne \emptyset .\)

Assume on the contrary that \(X(\varepsilon )=\emptyset \) for all \(\varepsilon >0\). In this circumstance, we invoke Lemma 2.3 to assert that, for each \(\varepsilon >0,\)

$$\begin{aligned}&(0_n,1) \in \mathrm{cl\,cone}\big \{\big (-{\overline{a}}_j,-({\overline{b}}_j-\varepsilon )\big )\,:\, j=1,\ldots ,p\big \} \nonumber \\&\quad =\mathrm{cone}\big \{\big (-{\overline{a}}_j,-({\overline{b}}_j-\varepsilon )\big ) \,:\, j=1,\ldots ,p\big \}. \end{aligned}$$
(31)

Let us consider any \(\varepsilon >0.\) Then, by (31), there exist \(\lambda ^\varepsilon _j\ge 0, j=1,\ldots , p,\) such that

$$\begin{aligned} (0_n,1)=\sum _{j=1}^p\lambda ^\varepsilon _j\big (-{\overline{a}}_j,-{\overline{b}}_j +\varepsilon \big ). \end{aligned}$$
(32)

Observe by (32) that \(\sum _{j=1}^p\lambda ^\varepsilon _j\ne 0\). Putting \({{\widetilde{\lambda }}}^\varepsilon _j:=\frac{\lambda ^\varepsilon _j}{\sum _{j=1}^p\lambda ^\varepsilon _j}\ge 0, j=1,\ldots , p\), we obtain that \(\sum \limits _{j=1}^p{{\widetilde{\lambda }}}^\varepsilon _j= 1\), and then deduce from (32) that

$$\begin{aligned} (0_n,-\varepsilon )=\sum _{j=1}^p{{\widetilde{\lambda }}}^\varepsilon _j(-{\overline{a}}_j,-{\overline{b}}_j)+\frac{1}{\sum _{j=1}^p\lambda ^\varepsilon _j}(0_n,-1)\in H({\overline{a}},{\overline{b}}). \end{aligned}$$
(33)

Letting \(\varepsilon \rightarrow 0\), we get by (33) that \(0_{n+1}\in H({\overline{a}},{\overline{b}})\), since \(H({\overline{a}},{\overline{b}})\) is a closed set. Hence,

$$\begin{aligned} \rho = \inf \limits _{(a,b)\in H({\overline{a}},{\overline{b}})}{\phi _{Z}(a,b)}\le \phi _{Z}(0_{n+1})=0, \end{aligned}$$
(34)

where the last equality holds due to \(0_{n+1}\in Z.\) Now, clearly, (34) contradicts our assumption that \(\rho >0\). So, the proof is complete. \(\square \)
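The left-hand side of (25) is itself a linear program and can be checked numerically; below we use assumed toy nominal data \(({\overline{a}}_1,{\overline{b}}_1)=(1,0), ({\overline{a}}_2,{\overline{b}}_2)=(0,1)\) with \(n=1, p=2\) (an illustration, not data from the text).

```python
from scipy.optimize import linprog

# Left-hand side of (25): maximize y subject to
# a_bar_j * x - b_bar_j + y <= 0, j = 1, 2, i.e. a strict-feasibility
# (Slater-type) certificate for the nominal system.
res = linprog(c=[0.0, -1.0],                        # maximize y == minimize -y
              A_ub=[[1.0, 1.0], [0.0, 1.0]],        # x + y <= 0,  y <= 1
              b_ub=[0.0, 1.0],
              bounds=[(None, None), (None, None)])  # x and y are free
sup_y = -res.fun
print(sup_y > 0)   # True: by Proposition 2.1, the radius rho is positive
```

Here the supremum equals 1, so Proposition 2.1 certifies \(\rho >0\) for this instance.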

3 Numerically Tractable Radius Formulas for Spectrahedral Uncertainty Sets

In this section, we derive from Theorem 2.1 numerically tractable formulas for calculating the radius of robust feasibility for (PU) in the framework where the uncertainty set Z in (2) is a spectrahedron (see, e.g., [24,25,26]) described by

$$\begin{aligned} Z:=\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1} \,:\, A^0+\sum _{i=1}^{n+1}z^iA^i\succeq 0\big \} \end{aligned}$$
(35)

with \(A^i, i=0,1,\ldots ,n+1\) being symmetric \((q\times q)\) matrices.

A remarkable feature of the spectrahedron (35) is that, besides the closedness and convexity of Z, which follow straightforwardly from the definition, its boundedness can be characterized in terms of dual data (cf. [22, Theorem 9.3]). Furthermore, the condition \(0_{n+1}\in \mathrm{int}Z\) implies that the matrix \(A^0\) can be chosen to be positive definite. Conversely, if \(A^0\succ 0\), then \(A^0\) can be congruently transformed to the identity matrix \(I_q\), and in this case, we obtain that \(0_{n+1}\in \mathrm{int}Z\) (see, e.g., [24, Page 252] and [22, Theorem 5.9]). It is also worth mentioning that the spectrahedral form (35) covers a broad spectrum of convex sets, such as polyhedra, balls and ellipsoids [24,25,26], which are used to describe uncertainty sets in robust optimization.
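As a small illustration (assumed toy data), membership in the spectrahedron (35) reduces to an eigenvalue test; here the unit disc in \({\mathbb {R}}^2\) is written in the form (35) with \(A^0=I_3\), mirroring the Schur-complement construction used for ellipsoids.

```python
import numpy as np

# The unit disc {z in R^2 : ||z||_2 <= 1} as a spectrahedron (35):
# A^0 + z^1 A^1 + z^2 A^2 = [[1, 0, z1], [0, 1, z2], [z1, z2, 1]] >= 0
# iff 1 - z1^2 - z2^2 >= 0 (Schur complement).
A0 = np.eye(3)
A1 = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
A2 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 1., 0.]])

def in_Z(z, tol=1e-9):
    """Check A^0 + z^1 A^1 + z^2 A^2 >= 0 via its smallest eigenvalue."""
    M = A0 + z[0] * A1 + z[1] * A2
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)

print(in_Z([0.6, 0.8]))   # True: on the unit circle, still in Z
print(in_Z([0.8, 0.8]))   # False: ||z||_2 > 1, outside Z
```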

With the aid of the spectrahedron (35), we can derive numerically tractable formulas for calculating the radius of robust feasibility for (PU) in commonly used uncertainty sets of robust optimization, such as ellipsoids, balls, polytopes and boxes.

Let us first consider the case where the spectrahedron Z in (35) is an ellipsoid. More concretely, we examine an ellipsoid centered at the origin,

$$\begin{aligned} Z:=\{z\in {\mathbb {R}}^{n+1}\,:\, z^\top M^{-1}z\le 1\}, \end{aligned}$$
(36)

where M is a positive definite symmetric \((n+1)\times (n+1)\) matrix. In this case, as the following corollary shows, the radius of robust feasibility for (PU) can be computed by solving a linearly constrained convex quadratic programming problem.

Corollary 3.1

(Radius formula with ellipsoidal uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (36). Then, the radius of robust feasibility \(\rho \) for (PU) defined in (3) satisfies

$$\begin{aligned} \rho ^2=\min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left\{ \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) ^\top M^{-1}\left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right\} }. \end{aligned}$$
(37)

Proof

Let

$$\begin{aligned} A^0:= \left( \begin{array}{cc} M &{} 0 \\ 0&{} 1 \\ \end{array} \right) ,\quad A^i:= \left( \begin{array}{cc} 0 &{} e_i \\ e^\top _i&{} 0 \\ \end{array} \right) ,\; i=1,\ldots , n+1, \end{aligned}$$

where \(e_i\in {\mathbb {R}}^{n+1}\) is a vector whose ith component is one and all others are zero. Then, we have

$$\begin{aligned}&\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\,A^0+\sum _{i=1}^{n+1}z^iA^i\succeq 0\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, \left( \begin{array}{cc} M &{} z \\ z^\top &{} 1 \\ \end{array} \right) \succeq 0\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, 1-z^\top M^{-1}z\ge 0\big \}\nonumber \\&\quad =Z, \end{aligned}$$
(38)

where the second equality in (38) is valid due to the Schur complement (cf. Lemma 2.1). This means that the ellipsoid Z in (36) can be expressed in terms of the spectrahedron (35) (see also [24]). Since the ellipsoid Z is symmetric, we apply Corollary 2.1 to obtain that

$$\begin{aligned} \rho =\min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left| \left| \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right| \right| }, \end{aligned}$$
(39)

where \(||\cdot ||=\phi _Z.\) Note further that the Minkowski function of the spectrahedron (35) is given by

$$\begin{aligned} \phi _{Z}(a,b)=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}} \end{aligned}$$

for \((a,b):=(a^1,\ldots ,a^n,b)\in H({\overline{a}},{\overline{b}}).\) Thus, we arrive at \(||(a,b)||=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) In this setting, for any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \( t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) amounts to \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) In view of (38) with \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\), we see that

$$\begin{aligned} t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0 \Leftrightarrow t^2-(a,b)^\top M^{-1}(a,b)\ge 0. \end{aligned}$$

Then, for each \((a,b)\in {\mathbb {R}}^{n+1},\)

$$\begin{aligned} ||(a,b)||&=\inf {\big \{t>0 \,:\, t^2-(a,b)^\top M^{-1}(a,b)\ge 0\big \}}\nonumber \\&=\inf {\big \{t>0 \,:\, t\ge \big ((a,b)^\top M^{-1}(a,b)\big )^{\frac{1}{2}}\big \}}=\big ((a,b)^\top M^{-1}(a,b)\big )^{\frac{1}{2}}, \end{aligned}$$
(40)

where we should note that \((a,b)^\top M^{-1}(a,b)\ge 0\) as M is a positive definite matrix.

Now, on account of (40), the equivalence between (37) and (39) follows. This completes the proof. \(\square \)
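The identity (40) can also be checked numerically. The following sketch (illustrative only; the positive definite matrix M below is a hypothetical choice) assembles the block matrix of (38), computes the Minkowski gauge by bisecting on the positive-semidefiniteness condition, and compares the result with the closed form \(\big ((a,b)^\top M^{-1}(a,b)\big )^{1/2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # so (a, b) lives in R^{n+1}
M = np.diag([4.0, 8.0, 2.0, 5.0])       # hypothetical positive definite M

def is_psd(c, t):
    """PSD test for [[t*M, c], [c^T, t]], i.e. t*A^0 + sum_i c^i A^i >= 0."""
    block = np.block([[t * M, c[:, None]], [c[None, :], np.array([[t]])]])
    return np.linalg.eigvalsh(block)[0] >= -1e-10

def gauge(c, hi=100.0, iters=60):
    """Minkowski gauge inf{t > 0 : block matrix is PSD} via bisection."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if is_psd(c, mid):
            hi = mid
        else:
            lo = mid
    return hi

for _ in range(5):
    c = rng.normal(size=n + 1)
    closed_form = np.sqrt(c @ np.linalg.solve(M, c))
    assert abs(gauge(c) - closed_form) < 1e-6   # agrees with (40)
```

The bisection works because the set of feasible t is an upward-closed interval: once the block matrix is PSD for some t, it stays PSD for all larger t.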

Next, we provide an example which illustrates how the radius of robust feasibility for (PU) can be computed in the case of ellipsoidal uncertainty.

Example 3.1

(Calculating the radius with ellipsoidal uncertainty) Let

$$\begin{aligned} Z:=\left\{ z:=\left( z^1,z^2\right) \in {\mathbb {R}}^2\,:\, \frac{\left( z^1\right) ^2}{4}+\frac{\left( z^2\right) ^2}{8}\le 1\right\} . \end{aligned}$$

Consider a parametric uncertain linear program \(\mathrm{(PU)}\) that is defined as follows: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program

[program (EP\(1_\alpha \)): displayed equation not recovered from the original]

where \(c\in [1,2]\) and \((a_j,b_j)\in U_j^\alpha :=(\overline{a}_j,{\overline{b}}_j)+\alpha Z\) for \(j=1,2\) with \(({\overline{a}}_1, {\overline{b}}_1):=(1,0), ({\overline{a}}_2,{\overline{b}}_2):=(0,1)\in {\mathbb {R}}^2\).

It is easy to check that \(x_0:=-1\) is a robust feasible solution of program (EP\(1_\alpha \)) at \(\alpha =0,\) or, equivalently, that \(\mathrm{(PU_0)}\) is feasible, and that Z can be viewed in the form (36) with \(M=\left( \begin{array}{cc} 4 &{} 0 \\ 0&{} 8 \\ \end{array} \right) .\) Applying Corollary 3.1, we obtain that

$$\begin{aligned} \rho ^2=\min _{(\lambda ,\mu )\in {\triangle _2}\times {{\mathbb {R}}_+}}{\left\{ \frac{1}{4}\lambda _1^2+\frac{1}{8}(\mu +\lambda _2)^2\right\} }, \end{aligned}$$
(41)

where \(\lambda :=(\lambda _1,\lambda _2).\) Since \(\lambda _2:=1-\lambda _1\ge 0\), (41) amounts to the following convex quadratic program

$$\begin{aligned} \rho ^2=\min _{}{\left\{ f(\lambda _1,\mu ):=\frac{3}{8}\lambda _1^2+\frac{1}{8}\mu ^2+\frac{1}{4}\mu (1-\lambda _1)-\frac{1}{4}\lambda _1+\frac{1}{8}\,:\, 0\le \lambda _1\le 1,\; \mu \ge 0 \right\} }. \end{aligned}$$
(42)

Note that, for this setting, we can evaluate the objective function f of problem (42) as follows:

$$\begin{aligned} f(\lambda _1,\mu )=\frac{3}{8}\left( \lambda _1-\frac{1}{3}\right) ^2+\frac{1}{8}\mu ^2+\frac{1}{4}\mu (1-\lambda _1)+\frac{1}{12}\ge \frac{1}{12}, \end{aligned}$$

where the equality is attained at \(({\overline{\lambda }}_1,{\overline{\mu }}):=(\frac{1}{3},0).\) This shows that \(f_{\min }=\frac{1}{12}\) and hence that \(\rho ^2=\frac{1}{12}.\) Consequently, we conclude that the radius of robust feasibility is \(\rho =\frac{1}{2\sqrt{3}}\)  (Fig. 1).

Fig. 1

Ellipsoid Z and sets \(\alpha Z, \alpha \le \rho \)
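The minimization in (42) can be cross-checked numerically; here is a sketch using a plain grid search (illustrative only, not part of the original argument), which recovers the minimum \(\frac{1}{12}\) at \((\lambda _1,\mu )=(\frac{1}{3},0)\):

```python
import numpy as np

# Grid search for the convex quadratic program (42) of Example 3.1.
lam = np.linspace(0.0, 1.0, 1001)
mu = np.linspace(0.0, 2.0, 1001)        # large mu only increases f
L, Mu = np.meshgrid(lam, mu, indexing="ij")
f = 3/8 * L**2 + 1/8 * Mu**2 + 1/4 * Mu * (1 - L) - 1/4 * L + 1/8

i, j = np.unravel_index(np.argmin(f), f.shape)
assert abs(f.min() - 1/12) < 1e-6       # f_min = 1/12, so rho = 1/(2*sqrt(3))
assert abs(L[i, j] - 1/3) < 1e-3 and Mu[i, j] < 1e-9
```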

Let us now consider the particular case where the matrix M given in (36) is the identity matrix \(I_{n+1}.\) In this setting, the ellipsoid Z becomes the Euclidean unit closed ball.

Corollary 3.2

(Radius formula with ball uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let \(Z\) be the Euclidean unit closed ball, i.e., let \(M:=I_{n+1}\) in (36). Then, the radius of robust feasibility \(\rho \) for (PU) defined in (3) satisfies

$$\begin{aligned} \rho ^2= \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}\left\{ {\sum _{i=1}^n\left( \sum _{j=1}^p\lambda _j{\overline{a}}^i_j\right) ^2+\left( \mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) ^2}\right\} , \end{aligned}$$
(43)

where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)

Proof

Letting \(M:=I_{n+1}\), Corollary 3.1 gives us that

$$\begin{aligned} \rho ^2=\min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left\{ \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) ^\top \Big (I_{n+1}\Big )^{-1}\left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right\} }. \end{aligned}$$
(44)

It is clear that (44) and (43) are equivalent to each other, which finishes the proof. \(\square \)

Remark 3.1

It is worth observing here that the result obtained in Corollary 3.2 can be viewed as

$$\begin{aligned} \rho =\min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{ \left| \left| \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right| \right| }, \end{aligned}$$
(45)

where the norm \(||\cdot ||\) is given (see (40) with \(M:=I_{n+1}\)) by

$$\begin{aligned} ||(a,b)||=\left( \sum _{i=1}^n (a^i)^2+b^2\right) ^{\frac{1}{2}}=||(a,b)||_2 \; \text{ for } \; (a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}. \end{aligned}$$
(46)

Here, the norm \(||\cdot ||_2\) is the Euclidean norm on \({\mathbb {R}}^{n+1}.\) With the norm (46), the formula (45) can be expressed in terms of distance as

$$\begin{aligned} \rho = d\big (0_{n+1},H({\overline{a}},{\overline{b}})\big ). \end{aligned}$$
(47)

Note that the distance function d, generated by the Euclidean norm \(||\cdot ||_2\), to a set \({\varOmega }\subset {\mathbb {R}}^n\) is defined by

$$\begin{aligned} d(x,{\varOmega }):=\inf {\{||x-y||_2 \,:\, y\in {\varOmega }\}}, x\in {\mathbb {R}}^n \end{aligned}$$

with the convention \(d(x,{\varOmega }):=+\infty \) whenever \({\varOmega }\) is empty, i.e., \(\inf {\emptyset }=+\infty \). The formula (47) was established in [16, Theorem 4], where a direct proof was given.
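The distance formula (47) can be checked on a small instance. The sketch below (illustrative; it reuses the nominal data of Example 3.1, \(({\overline{a}}_1,{\overline{b}}_1)=(1,0)\), \(({\overline{a}}_2,{\overline{b}}_2)=(0,1)\), as a hypothetical ball-uncertainty instance with \(n=1\), \(p=2\)) evaluates (43) by grid search; the resulting \(\rho \) equals the Euclidean distance \(1/\sqrt{2}\) from the origin to the corresponding set \(H({\overline{a}},{\overline{b}})\):

```python
import numpy as np

# Formula (43) for n = 1, p = 2 with nominal data (1, 0) and (0, 1):
# rho^2 = min over the simplex and mu >= 0 of lam1^2 + (mu + lam2)^2.
lam1 = np.linspace(0.0, 1.0, 1001)
mu = np.linspace(0.0, 1.0, 1001)
L, Mu = np.meshgrid(lam1, mu, indexing="ij")
rho_sq = L**2 + (Mu + (1 - L))**2

rho = np.sqrt(rho_sq.min())
# Distance from 0 to the segment conv{(1,0),(0,1)} (plus a recession ray
# pointing away from the origin) is 1/sqrt(2), matching (47).
assert abs(rho - 1/np.sqrt(2)) < 1e-4
```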

We now deal with the case where the spectrahedron Z in (35) is a cross-polytope defined by

$$\begin{aligned} Z:=\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, \sum _{i=1}^{n+1}|z^i|\le 1\big \}. \end{aligned}$$
(48)

In this case, we obtain a numerically tractable formula for calculating the radius of robust feasibility for (PU) as follows.

Corollary 3.3

(Radius formula with polytopic uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (48). Then, the radius of robust feasibility for (PU) defined in (3) is given by

$$\begin{aligned} \rho = \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left\{ \sum _{i=1}^n\big |\sum _{j=1}^p\lambda _j{\overline{a}}^i_j\big |+\big |\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\big |\right\} }, \end{aligned}$$
(49)

where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)

Proof

Let

$$\begin{aligned} A^0:= \left( \begin{array}{cc} I_{2^n} &{} 0 \\ 0&{} I_{2^{n}} \\ \end{array} \right) ,\quad A^i:= \left( \begin{array}{cc} D_i &{} 0 \\ 0&{} -D_i \\ \end{array} \right) ,\; i=1,\ldots , n+1, \end{aligned}$$

where \(D_i, i=1,\ldots , n+1,\) are the (\(2^{n}\times 2^{n}\)) diagonal matrices whose kth diagonal entry is \(\sigma ^i_k\in \{-1,1\}\) for \(k=1,\ldots , 2^n\), with the sign patterns chosen so that \((\sigma ^1_{k_1},\ldots ,\sigma ^{n+1}_{k_1})\ne \pm (\sigma ^1_{k_2},\ldots ,\sigma ^{n+1}_{k_2})\) whenever \(k_1, k_2\in \{1,\ldots , 2^{n}\}, k_1\ne k_2\). Then,

$$\begin{aligned}&\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\,A^0+\sum _{i=1}^{n+1}z^iA^i\succeq 0\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, 1+\sum \limits _{i=1}^{n+1}\sigma ^i_{k}z^i\ge 0,\nonumber \\&\qquad 1-\sum \limits _{i=1}^{n+1}\sigma ^i_{k}z^i\ge 0, k=1,\ldots , 2^n\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, \sum \limits _{i=1}^{n+1}|z^i|\le 1\big \}=Z, \end{aligned}$$
(50)

which shows how the cross-polytope Z can be expressed in terms of the spectrahedron (35). Since the cross-polytope Z is symmetric, we get by Corollary 2.1 that

$$\begin{aligned} \rho = \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left| \left| \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right| \right| }, \end{aligned}$$
(51)

where \(||\cdot ||=\phi _Z.\)

As in the proof of Corollary 3.1, we conclude that \(||(a,b)||=\inf \big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) In this case, for any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \(t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) amounts to \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) In view of (50) with \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\), we assert that

$$\begin{aligned} t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0 \Leftrightarrow t\ge \sum _{i=1}^n |a^i|+|b|. \end{aligned}$$

Then, for each \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}\), we have

$$\begin{aligned} ||(a,b)||=\inf {\big \{t>0 \,:\, t\ge \sum _{i=1}^n |a^i|+|b|\big \}}=\sum _{i=1}^n |a^i|+|b|=||(a,b)||_1, \end{aligned}$$
(52)

where \(||\cdot ||_1\) is the \(\ell _1\)-norm on \({\mathbb {R}}^{n+1}.\) With the norm (52), we see that (49) coincides with (51), which finishes the proof. \(\square \)
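The reduction from the sign-matrix construction to the \(\ell _1\)-norm can be replayed in code. A sketch (illustrative, for \(n=2\)): since \(A^0=I\) and each \(A^i\) is block-diagonal with \(\pm D_i\) blocks, the matrix \(tA^0+\sum _i c^iA^i\) is diagonal, and positive semidefiniteness reduces to \(t\ge |\sum _i\sigma ^i_k c^i|\) over all representative sign patterns, whose maximum is \(||c||_1\):

```python
import itertools
import numpy as np

n = 2
# One representative from each pair {sigma, -sigma} in {+1,-1}^{n+1}:
# fix sigma^1 = +1 and vary the rest, giving 2^n patterns.
patterns = np.array([(1,) + s for s in itertools.product((1, -1), repeat=n)])

def gauge(c):
    # Diagonal entries of t*A^0 + sum c^i A^i are t +/- (S c)_k, where S
    # stacks the sign patterns; PSD iff t >= max_k |(S c)_k| = ||c||_1.
    return np.abs(patterns @ c).max()

rng = np.random.default_rng(1)
for _ in range(5):
    c = rng.normal(size=n + 1)
    assert abs(gauge(c) - np.abs(c).sum()) < 1e-12   # matches (52)
```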

Finally, we examine the case where the spectrahedron Z in (35) is a hypercube/box given by

$$\begin{aligned} Z:=[-1,1]\times \cdots \times [-1,1]\subset {\mathbb {R}}^{n+1}. \end{aligned}$$
(53)

In this setting, the radius of robust feasibility for (PU) can be found by solving a linear minimax program with affine constraints.

Corollary 3.4

(Radius formula with box uncertainty) Let the nominal program \(\mathrm{(PU_0)}\) be feasible, and let Z be as in (53). Then, it holds

$$\begin{aligned} \rho = \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left\{ \max {\Big \{\big |\sum _{j=1}^p\lambda _j{\overline{a}}^i_j\big |, i=1,\ldots , n, \big |\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\big |\Big \}}\right\} }, \end{aligned}$$
(54)

where \({\overline{a}}_j:=({\overline{a}}_j^1,\ldots ,{\overline{a}}_j^n)\) for \(j=1,\ldots ,p.\)

Proof

Let \(E_i\) be the \((n+1)\times (n+1)\) diagonal matrix with one in the \((i,i)\)th entry and zeros elsewhere, and

$$\begin{aligned} A^0:= \left( \begin{array}{cc} I_{n+1} &{} 0 \\ 0&{} I_{n+1} \\ \end{array} \right) ,\quad A^i:= \left( \begin{array}{cc} E_i &{} 0 \\ 0&{} -E_i \\ \end{array} \right) ,\; i=1,\ldots , n+1. \end{aligned}$$

Then, we see that

$$\begin{aligned}&\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, A^0+\sum _{i=1}^{n+1}z^iA^i\succeq 0\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, 1+z^i\ge 0, 1-z^i\ge 0, i=1,\ldots , n+1\big \}\nonumber \\&\quad =\big \{z:=(z^1,\ldots , z^{n+1})\in {\mathbb {R}}^{n+1}\,:\, |z^i|\le 1, i=1,\ldots , n+1\big \}=Z, \end{aligned}$$
(55)

which shows how the box Z can be expressed in terms of the spectrahedron (35). In this case, the formula of Corollary 2.1 becomes

$$\begin{aligned} \rho = \min _{(\lambda ,\mu )\in {\triangle _p}\times {{\mathbb {R}}_+}}{\left| \left| \left( \sum _{j=1}^p\lambda _j{\overline{a}}_j,\mu +\sum _{j=1}^p\lambda _j{\overline{b}}_j\right) \right| \right| }, \end{aligned}$$
(56)

where \(||(a,b)||=\inf {\big \{t>0 \,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}}\) for \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}.\) For any \(t>0\) and \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1},\) the matrix inequality \( t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\) is nothing else but \( A^0+\sum \limits _{i=1}^{n+1}z^iA^i\succeq 0\) with \(z^i:=\frac{1}{t}a^i, i=1,\ldots , n\) and \(z^{n+1}:=\frac{1}{t}b.\) Substituting \(z:=(z^1,\ldots ,z^{n+1})=\frac{1}{t}(a,b)\) into (55), we arrive at

$$\begin{aligned} t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0 \Leftrightarrow {\left\{ \begin{array}{ll} t\ge |a^i|,\; \; i=1,\ldots ,n,\\ t\ge |b|. \end{array}\right. } \end{aligned}$$

Then, for each \((a,b):=(a^1,\ldots ,a^n,b)\in {\mathbb {R}}^{n+1}\), we have

$$\begin{aligned} ||(a,b)||&=\inf {\big \{t>0\,:\, t\ge \max {\{|a^1|, \ldots ,|a^n|, |b|\}}\big \}}\nonumber \\&=\max {\{|a^1|, \ldots , |a^n|, |b|\}}=||(a,b)||_{\infty }, \end{aligned}$$
(57)

where the norm \(||\cdot ||_\infty \) is the maximum norm on \({\mathbb {R}}^{n+1}.\) With the norm (57), it is easy to see that (54) coincides with (56), which finishes the proof. \(\square \)

Similar to the case of ball uncertainty, the formula (54) can be expressed in terms of distance as

$$\begin{aligned} \rho = d\big (0_{n+1},H({\overline{a}},{\overline{b}})\big ), \end{aligned}$$
(58)

where d is generated by the maximum norm \(||\cdot ||_\infty .\) Note further that (58) has recently been given in [23, Corollary 5.4] by a different approach. More precisely, the radius of robust feasibility in [23, Corollary 5.4] has been derived from the so-called radius of robust global error bound.
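The minimax formula (54) is easy to evaluate numerically. A sketch (illustrative; it reuses the nominal data of Example 3.1, \(n=1\), \(p=2\), \(({\overline{a}}_1,{\overline{b}}_1)=(1,0)\), \(({\overline{a}}_2,{\overline{b}}_2)=(0,1)\), as a hypothetical box-uncertainty instance), where the optimal value \(\frac{1}{2}\) is attained at \((\lambda _1,\mu )=(\frac{1}{2},0)\):

```python
import numpy as np

# Formula (54) for n = 1, p = 2: rho = min over (lam1, mu) of
# max{ |lam1|, |mu + (1 - lam1)| }.
lam1 = np.linspace(0.0, 1.0, 1001)
mu = np.linspace(0.0, 1.0, 1001)
L, Mu = np.meshgrid(lam1, mu, indexing="ij")
rho = np.maximum(np.abs(L), np.abs(Mu + (1 - L))).min()
assert abs(rho - 0.5) < 1e-9    # the l-infinity distance of (58)
```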

We close this section with a remark that, for a general spectrahedron of the form (35), we can calculate the radius of robust feasibility for (PU) directly from (5), since in this case the Minkowski function of Z is explicitly given by

$$\begin{aligned} \phi _{Z}(a,b)=\inf {\big \{t>0\,:\, t A^0+\sum \limits _{i=1}^{n}a^iA^i+bA^{n+1}\succeq 0\big \}} \end{aligned}$$
(59)

for \((a,b):=(a^1,\ldots ,a^n,b)\in H({\overline{a}},{\overline{b}}).\)

Let us consider the following example, which illustrates how the radius of robust feasibility for (PU) can be calculated directly from (5) when Z is a simple non-symmetric spectrahedral uncertainty set.

Example 3.2

(Calculating the radius with non-symmetric spectrahedral uncertainty) Let

$$\begin{aligned} Z:=\left\{ z:=(z^1,z^2)\in {\mathbb {R}}^2\,:\, |z^1+\frac{1}{2}|\le 1, |z^2-\frac{1}{2}|\le 1, |z^1+z^2|\le 1\right\} . \end{aligned}$$

Consider a parametric uncertain linear program \(\mathrm{(PU)}\) that is defined as follows: for each parameter \(\alpha \in {\mathbb {R}}_+,\) we have an uncertain linear program

[program (EP\(2_\alpha \)): displayed equation not recovered from the original]

where \(c\in [-1,2]\) and \((a_j,b_j)\in U_j^\alpha :=(\overline{a}_j,{\overline{b}}_j)+\alpha Z\) for \(j=1,2\) with \(({\overline{a}}_1, {\overline{b}}_1):=(1,0), ({\overline{a}}_2,{\overline{b}}_2):=(0,1)\in {\mathbb {R}}^2\).

It is easy to check that \(x_0:=-1\) is a robust feasible solution of program (EP\(2_\alpha \)) at \(\alpha =0\), or, equivalently, that \(\mathrm{(PU_0)}\) is feasible, and that Z can be viewed in the form (35) with

$$\begin{aligned} A^0&:=I_6,\quad A^1:= \left( \begin{array}{ccccccc} \frac{2}{3} &{}\quad 0 &{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \\ 0&{}\quad 0&{}\quad -2&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 1&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad -1 \end{array} \right) ,\quad \\ A^2&:= \left( \begin{array}{ccccccc} 0 &{}\quad 0 &{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \\ 0&{}\quad 2&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad -\frac{2}{3}&{}\quad 0&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 1&{}\quad 0 \\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad -1 \end{array} \right) . \end{aligned}$$

In this case, for any \(t>0\) and \((a,b)\in {\mathbb {R}}^{2},\) we assert that

$$\begin{aligned} t A^0+aA^1+bA^{2}\succeq 0\Leftrightarrow {\left\{ \begin{array}{ll} t\ge -\frac{2}{3}a,\; t\ge 2a,\\ t\ge -2b,\; t\ge \frac{2}{3}b,\\ t\ge -a-b,\; t\ge a+b. \end{array}\right. } \end{aligned}$$

So, for each \((a,b)\in {\mathbb {R}}^{2},\)

$$\begin{aligned} \phi _{Z}(a,b)&=\inf {\Big \{t>0 \,:\, t A^0+aA^1+bA^{2}\succeq 0\Big \}}\\&=\inf {\Big \{t>0 \,:\, t\ge \max {\big \{-\frac{2}{3}a, 2a,-2b, \frac{2}{3}b , -a-b,a+b\big \}}\Big \}}\\&=\max {\Big \{-\frac{2}{3}a, 2a,-2b, \frac{2}{3}b , -a-b,a+b\Big \}}. \end{aligned}$$
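This closed form of \(\phi _Z\) can be verified against the raw definition \(\inf \{t>0 : tA^0+aA^1+bA^2\succeq 0\}\). A sketch (illustrative only) using the diagonal matrices \(A^1, A^2\) given above with \(A^0=I_6\), comparing a bisection on the PSD condition with the maximum of the six linear forms:

```python
import numpy as np

A1 = np.diag([2/3, 0.0, -2.0, 0.0, 1.0, -1.0])
A2 = np.diag([0.0, 2.0, 0.0, -2/3, 1.0, -1.0])

def phi_direct(a, b, hi=100.0, iters=60):
    """inf{t > 0 : t*I_6 + a*A^1 + b*A^2 >= 0} via bisection."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        psd = np.linalg.eigvalsh(mid * np.eye(6) + a * A1 + b * A2)[0] >= -1e-12
        lo, hi = (lo, mid) if psd else (mid, hi)
    return hi

def phi_closed(a, b):
    """The max of the six linear forms derived in Example 3.2."""
    return max(-2/3 * a, 2 * a, -2 * b, 2/3 * b, -a - b, a + b)

rng = np.random.default_rng(2)
for _ in range(5):
    a, b = rng.normal(size=2)
    assert abs(phi_direct(a, b) - phi_closed(a, b)) < 1e-6
```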

Invoking now Theorem 2.1, we arrive at

$$\begin{aligned} \rho = \inf _{(a,b)\in H({\overline{a}},{\overline{b}})}{\max {\Big \{-\frac{2}{3}a, 2a,-2b, \frac{2}{3}b, -a-b,a+b\Big \}}}. \end{aligned}$$
(60)

Note that for each \((a,b)\in H({\overline{a}},{\overline{b}})=\mathrm{conv}\{(-1,0),(0,-1)\}+{\mathbb {R}}_+(0,-1),\) there exist \( \lambda \in [0,1]\) and \(\mu \ge 0\) such that

$$\begin{aligned} a=-\lambda ,\quad b=\lambda -1-\mu . \end{aligned}$$

Hence, (60) becomes

$$\begin{aligned} \rho = \inf _{(\lambda ,\mu )\in [0,1]\times {\mathbb {R}}_+}{\max {\Big \{\frac{2}{3}\lambda , 2(1-\lambda +\mu ), 1+\mu \Big \}}}. \end{aligned}$$
(61)

Observe that, for any \(\mu \ge 0,\)

$$\begin{aligned} \max {\Big \{\frac{2}{3}\lambda , 2(1-\lambda +\mu ), 1+\mu \Big \}}= {\left\{ \begin{array}{ll} 2(1-\lambda +\mu ) &{}\quad \text{ if } \; 0\le \lambda \le \frac{\mu +1}{2},\\ 1+\mu &{}\quad \text{ if } \; \frac{\mu +1}{2}<\lambda \le 1, \end{array}\right. } \end{aligned}$$

and therefore, by solving the problem (61), we conclude that the radius of robust feasibility is \(\rho =1\) (Fig. 2).

Fig. 2

Parametric sets \(\alpha Z, \alpha \le \rho \)
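The value of the minimax problem (61) can also be confirmed by a plain grid search (an illustrative sketch, not part of the original argument); the minimum 1 is attained, e.g., at \((\lambda ,\mu )=(1,0)\):

```python
import numpy as np

# Grid search for problem (61): the max is always >= 1 + mu >= 1,
# and equals 1 at (lambda, mu) = (1, 0), so rho = 1.
lam = np.linspace(0.0, 1.0, 1001)
mu = np.linspace(0.0, 3.0, 1001)
L, Mu = np.meshgrid(lam, mu, indexing="ij")
val = np.maximum.reduce([2/3 * L, 2 * (1 - L + Mu), 1 + Mu])
assert abs(val.min() - 1.0) < 1e-9
```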

4 Conclusions

We have provided an exact formula for the radius of robust feasibility of a parametric uncertain linear program \(\mathrm{(PU)}\) with a compact and convex uncertainty set. In the case where the uncertainty set is a spectrahedron, we have obtained a tractable radius formula. In particular, we have shown that the radius of robust feasibility for \(\mathrm{(PU)}\) can be found by solving a linearly constrained convex quadratic program or a minimax linear program for commonly used uncertainty sets of robust optimization such as ellipsoids, balls, polytopes and boxes.