1 Introduction

In most real-world problems, problem parameters are uncertain at the optimization or decision-making phase, and stochastic and robust optimization are two different paradigms to address this parameter uncertainty. In a stochastic optimization approach, we model uncertainty using probability distributions and optimize the expected value of the objective. Stochastic optimization was introduced by Dantzig [13] and Beale [1], and has since been extensively studied in the literature. We refer the reader to several textbooks including Infanger [19], Kall and Wallace [20], Prekopa [23], Shapiro [24], Shapiro et al. [25] and the references therein for a comprehensive review of stochastic optimization. While the stochastic optimization approach has its merits, it is by and large computationally intractable even when the constraint and objective functions are linear. Shapiro and Nemirovski [26] give hardness results for two-stage and multi-stage stochastic optimization problems, showing that the multi-stage stochastic optimization problem is computationally intractable even if approximate solutions are desired. Dyer and Stougie [14] show that a multi-stage stochastic optimization problem where the distribution of uncertain parameters in any stage also depends on the decisions in past stages is PSPACE-hard. Moreover, even for stochastic problems that can be solved efficiently, it is often difficult to estimate the probability distributions of the uncertain parameters from historical data.

Robust optimization is another paradigm for optimization under uncertainty that has received significant attention recently (see Ben-Tal and Nemirovski [3, 5], El Ghaoui and Lebret [15], Bertsimas and Sim [10, 11], Goldfarb and Iyengar [17]). In a robust optimization approach, the uncertain parameters are assumed to belong to some uncertainty set, and the goal is to construct a single (static) solution that is feasible for all possible realizations of the uncertain parameters from the set and optimizes the worst-case objective value. We point the reader to the survey by Bertsimas et al. [6] and the book by Ben-Tal et al. [2] and the references therein for an extensive review of the literature on robust optimization. This approach is significantly more tractable: for a large class of problems, the robust problem has the same computational complexity as the corresponding deterministic problem [2, 6]. However, the robust optimization approach is perceived to be highly conservative, since it optimizes over the worst-case realization of the uncertain parameters and the solution cannot adjust to the realization. An adjustable robust approach, where the recourse decisions are selected after the uncertainty is realized, addresses this conservatism; however, computing an optimal adjustable robust solution is computationally intractable even for two-stage linear optimization problems. In fact, Feige et al. [16] show that it is hard to approximate a two-stage fractional set covering problem within a factor of \(\varOmega (\log n)\).

In this paper, we study the performance of static solutions for two-stage adjustable robust linear optimization problems under uncertain constraint and objective coefficients. Approximation bounds for the performance of static solutions in two-stage and multi-stage linear optimization problems have been studied in the literature. Bertsimas and Goyal [7] show that if the uncertainty set is perfectly symmetric (such as a hypercube or an ellipsoid), a static solution gives a \(2\)-approximation for adjustable robust linear covering problems under right-hand side uncertainty. Bertsimas et al. [9] generalize the result to general convex compact uncertainty sets and show that the performance of the static solution depends on a measure of symmetry of the uncertainty set, introduced by Minkowski [21]. They also extend the approximation bounds to multi-stage problems. However, the models in [7] and [9] consider linear optimization problems with only covering constraints of the form \({\varvec{a}}^T {\varvec{x}} \ge b\), where \({\varvec{a}} \in {\mathbb R}^n\) and \(b \ge 0\), and the uncertainty appears only in the right-hand side of the constraints. Therefore, they do not capture many important applications, including revenue management and resource allocation problems, where we require packing constraints with uncertain constraint coefficients. For instance, in a typical revenue management problem, the goal is to allocate scarce resources to a demand with uncertain resource requirements such that the total revenue is maximized. The constraints corresponding to resource capacities are packing constraints, and the uncertainty related to resource requirements appears in the constraint coefficients.

1.1 Our models

We consider the following two-stage adjustable robust linear optimization problem \(\varPi _\mathsf{AR}\) under uncertain constraint coefficients.

$$\begin{aligned} z_\mathsf{AR} = \max _{{\varvec{x}}} \;&{\varvec{c}}^T {\varvec{x}} + \min _{{\varvec{B}}\in \mathcal{U}} \max _{{\varvec{y}}({\varvec{B}})} {\varvec{d}}^T {\varvec{y}}({\varvec{B}})\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}}({\varvec{B}}) \; \le \; {\varvec{h}}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}\nonumber \\&{\varvec{y}}({\varvec{B}}) \; \in \; {\mathbb R}^{n_2}_+,\nonumber \end{aligned}$$
(1.1)

where \({\varvec{A}} \in {\mathbb R}^{m {\times } n_1}, {\varvec{c}}\in {\mathbb R}^{n_1}, {\varvec{d}}\in {\mathbb R}^{n_2}_+\) and \({\varvec{h}} \in {\mathbb R}^m\). The second-stage constraint matrix \({\varvec{B}} \in {\mathbb R}^{m {\times } n_2}_+\) is uncertain and belongs to a full-dimensional compact convex uncertainty set \(\mathcal{U} \subseteq {\mathbb R}^{m\times n_2}_+\) in the non-negative orthant. The decision variables \({\varvec{x}}\) represent the first-stage decisions made before the constraint matrix \({\varvec{B}}\) is revealed, and \({\varvec{y}}({\varvec{B}})\) represent the second-stage or recourse decisions made after observing the uncertain constraint matrix \({\varvec{B}}\in \mathcal{U}\). Therefore, the (adjustable) second-stage decisions depend on the uncertainty realization. We can assume without loss of generality that \(\mathcal{U}\) is down-monotone (see “Appendix 1”).

We would like to emphasize that the second-stage objective coefficients \({\varvec{d}}\), constraint coefficients \({\varvec{B}}\), and the second-stage decision variables \({\varvec{y}}({\varvec{B}})\) are all non-negative. Also, the uncertainty set \(\mathcal U\) of second-stage constraint matrices is contained in the non-negative orthant. Therefore, the model is slightly restrictive and does not allow us to handle arbitrary two-stage linear programs. For instance, we cannot handle covering constraints involving second-stage variables, or lower bounds on second-stage decision variables. Note that there are no restrictions on the first-stage constraint coefficients \({\varvec{A}}\) or objective coefficients \({\varvec{c}}\). Also, the first-stage decision variables \({\varvec{x}}\) and the right-hand side \({\varvec{h}}\) are not necessarily non-negative.

Our model is still fairly general and captures important applications including resource allocation and revenue management problems. For instance, consider a resource allocation problem where we have fixed resource capacities and demand arrives with uncertain resource requirements. We can model this problem in our framework as follows: the second-stage constraint coefficient \(B_{ij}\) is the uncertain requirement of resource \(i\) by demand \(j\); the right-hand side \({\varvec{h}}\) denotes the resource capacities; and the first- and second-stage decision variables represent the allocation decisions. Note that the irrevocable first-stage allocation decisions are made before we observe the second-stage demand and resource requirements, and the goal is to maximize the worst-case profit assuming that an adversary chooses the second-stage resource requirements from the given uncertainty set.

Computing an optimal adjustable robust solution is intractable in general [16]. Therefore, we consider a static robust optimization approach to approximate (1.1). The corresponding static robust optimization problem \(\varPi _\mathsf{Rob}\) can be formulated as follows.

$$\begin{aligned} z_\mathsf{Rob} = \max _{{\varvec{x}}}\;&{\varvec{c}}^T {\varvec{x}} + {\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \forall {\varvec{B}} \in \mathcal{U}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}\nonumber \\&{\varvec{y}} \; \in \; {\mathbb R}^{n_2}_+.\nonumber \end{aligned}$$
(1.2)

Note that the second-stage solution \({\varvec{y}}\) is static and does not depend on the realization of uncertain \({\varvec{B}}\). Both first-stage and second-stage decisions \({\varvec{x}}\) and \({\varvec{y}}\) are selected before the second-stage uncertain constraint matrix is known and \(({\varvec{x}},{\varvec{y}})\) is feasible for all \({\varvec{B}}\in \mathcal U\). An optimal static robust solution to (1.2) can be computed efficiently if \(\mathcal{U}\) has an efficient separation oracle. In fact, Ben-Tal and Nemirovski [4] give compact formulations for solving (1.2) for polyhedral and conic uncertainty sets. Our goal is to compare the performance of an optimal static robust solution with respect to the optimal adjustable robust solution of (1.1).
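As a concrete illustration, the following is a minimal sketch (ours, not from the paper) of solving (1.2) in the simplest case of a hypercube uncertainty set \(\mathcal{U}=\{{\varvec{B}}\;|\;{\varvec{0}}\le {\varvec{B}}\le \bar{{\varvec{B}}}\}\): since \({\varvec{y}}\ge {\varvec{0}}\) and \(\mathcal{U}\) is down-monotone, the robust constraint holds for all \({\varvec{B}}\in \mathcal{U}\) if and only if it holds at \({\varvec{B}}=\bar{{\varvec{B}}}\), so (1.2) reduces to a single linear program. The instance data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: m = 2 constraints, n1 = 1 first-stage and
# n2 = 2 second-stage variables; U = [0, B_bar] entrywise (a hypercube).
A = np.array([[1.0], [0.0]])
B_bar = np.array([[0.5, 1.0],
                  [1.0, 0.5]])
c, d = np.array([1.0]), np.array([1.0, 1.0])
h = np.array([2.0, 2.0])

# For y >= 0, "A x + B y <= h for all B in [0, B_bar]" holds iff it holds
# at B = B_bar, so the static robust problem (1.2) is a single LP.
obj = -np.concatenate([c, d])                            # linprog minimizes
A_ub = np.hstack([A, B_bar])
bounds = [(None, None)] * len(c) + [(0, None)] * len(d)  # x free, y >= 0
res = linprog(obj, A_ub=A_ub, b_ub=h, bounds=bounds)
print("z_Rob =", -res.fun, "x =", res.x[:len(c)], "y =", res.x[len(c):])
```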

The above models have been considered in the literature. Ben-Tal and Nemirovski [4] consider the adjustable robust problem and show that a static solution is optimal if the uncertainty set \(\mathcal{U}\) is constraint-wise, i.e., each constraint \(i=1,\ldots ,m\) can be selected independently from a compact convex set \(\mathcal{U}_i\), so that \(\mathcal{U}\) is a Cartesian product of \(\mathcal{U}_i, i=1,\ldots ,m\). However, the authors do not discuss the performance of static solutions when the constraint-wise condition on \(\mathcal{U}\) is not satisfied. Bertsimas and Goyal [8] consider a general multi-stage convex optimization problem under uncertain constraints and objective functions and show that the performance of the static solution is related to the symmetry of the uncertainty set \(\mathcal{U}\). However, the symmetry bound in [8] can be quite loose in many instances. For example, consider the case when \(\mathcal{U}\) is constraint-wise and each \(\mathcal{U}_i\), \(i=1,\ldots ,m\) is a simplex, i.e.,

$$\begin{aligned} \mathcal{U}_i = \{{\varvec{x}}\in {\mathbb R}^n_+\;|\;{\varvec{e}}^T{\varvec{x}}\le 1\}. \end{aligned}$$

The symmetry of \(\mathcal{U}\) is \(O(1/n)\) [9], so the results in [8] only imply an approximation bound of \(\varOmega (n)\), while from Ben-Tal and Nemirovski [4] we know that a static solution is in fact optimal.

1.2 Our contributions

In this paper, we analyze the performance of static solutions as compared to an optimal fully-adjustable solution for two-stage adjustable robust linear optimization problems under constraint and objective uncertainty. Our main contributions are summarized below.

1.2.1 Optimality of static solution

We give a tight characterization of the conditions under which a static solution is optimal for the two-stage adjustable robust problem. The optimality of static solutions depends on the geometric properties of a transformation of the uncertainty set. In particular, we show that a static solution is optimal if the transformation of \(\mathcal{U}\) is convex. If \(\mathcal{U}\) is a constraint-wise set, we show that the transformation of \(\mathcal{U}\) is convex; Ben-Tal and Nemirovski [4] show that for such \(\mathcal{U}\), a static solution is optimal for the adjustable robust problem. Therefore, our result extends the result in [4] for the case where \(\mathcal{U}\) is contained in the non-negative orthant. We also present other families of uncertainty sets for which the transformation is convex.

This result is quite surprising: even if \(\mathcal{U}\) is constraint-wise, the worst-case choice of \({\varvec{B}} \in \mathcal{U}\) usually depends on the first-stage solution, unless \(\mathcal{U}\) is a hypercube. For a hypercube, each uncertain element can be selected independently from an interval, and the worst-case \({\varvec{B}}\) is independent of the first-stage decision; a general constraint-wise set, however, is a Cartesian product of arbitrary convex sets. We show that if the transformation of \(\mathcal{U}\) is convex, there is an optimal recourse solution for the worst-case choice of \({\varvec{B}} \in \mathcal{U}\) that is feasible for all \({\varvec{B}} \in \mathcal{U}\). Furthermore, we show that our result can also be interpreted as the following min-max theorem.

$$\begin{aligned} \min _{{\varvec{B}} \in \mathcal{U}} \; \max _{{\varvec{y}} \ge {\varvec{0}} } \; \{ {\varvec{d}}^T {\varvec{y}} \; |\; {\varvec{B}} {\varvec{y}} \le {\varvec{h}}\} = \max _{{\varvec{y}} \ge {\varvec{0}}} \; \min _{{\varvec{B}}\in \mathcal{U}} \; \{ {\varvec{d}}^T {\varvec{y}} \; |\; {\varvec{B}} {\varvec{y}} \le {\varvec{h}}\}. \end{aligned}$$

The inner minimization in the max-min problem implies that the solution \({\varvec{y}}\) must be feasible for all \({\varvec{B}} \in \mathcal{U}\) and, therefore, is a static robust solution. We would like to note that the above min-max result does not follow from the general saddle-point theorem [12].

1.2.2 Approximation bounds for the static solution

We give a tight approximation bound on the performance of the optimal static solution for the adjustable robust problem when the transformation of \(\mathcal{U}\) is not convex and the static solution is not optimal. We relate the performance of static solutions to a natural measure of non-convexity of the transformation of \(\mathcal{U}\). We also present a family of uncertainty sets and instances where we show that the approximation bound is tight, i.e., the ratio of the optimal objective value of the adjustable robust problem (1.1) and the optimal objective value of the robust problem (1.2) is exactly equal to the bound given by the measure of non-convexity.

We also compare our approximation bounds with the bound in Bertsimas and Goyal [8] where the authors relate the performance of the static solutions with the symmetry of the uncertainty set. We show that our bound is stronger than the symmetry bound in [8]. In particular, for any instance, we can show that our bound is at least as good as the symmetry bound, and in fact in many cases, significantly better. For instance, consider the following uncertainty set

$$\begin{aligned} \mathcal{U} = \left\{ {\varvec{B}} \in {\mathbb R}^{m \times n}_+ \; \left| \; \sum _{i,j} B_{ij} \le 1\right. \right\} . \end{aligned}$$

In this case, \(\mathsf{sym}(\mathcal{U}) = 1/mn\) [9] and the symmetry bound is \(\varOmega (mn)\). However, we show that a static solution is optimal for the adjustable robust problem (our bound is equal to one).

1.2.3 Models with both constraint and objective uncertainty

We extend our result to two-stage models where both constraint and objective coefficients are uncertain. In particular, we consider a two-stage model where the uncertainty in the second-stage constraint matrix \({\varvec{B}}\) is independent of the uncertainty in the second-stage objective \({\varvec{d}}\). Therefore, \(({\varvec{B}}, {\varvec{d}})\) belong to a convex compact uncertainty set \(\mathcal{U}\) that is a Cartesian product of the uncertainty set of constraint matrices \(\mathcal{U}^B\) and uncertainty set of second-stage objective \(\mathcal{U}^d\).

We show that our results for the model with only constraint coefficient uncertainty extend to this case of both constraint and objective uncertainty. In particular, we show that a static solution is optimal if the transformation of \(\mathcal{U}^B\) is convex. Furthermore, if the transformation is not convex, then the approximation bound on the performance of the optimal static solution is related to the measure of non-convexity of the transformation of \(\mathcal{U}^B\). Surprisingly, neither the approximation bound nor the optimality of a static solution depends on the uncertainty set of objectives \(\mathcal{U}^d\): if the transformation of \(\mathcal{U}^B\) is convex, a static solution is optimal for all convex compact uncertainty sets \(\mathcal{U}^d \subseteq {\mathbb R}^{n_2}_+\). We also present a family of examples to show that our bound is tight for this case as well.

We also consider a two-stage adjustable robust model where, in addition to the second-stage constraint matrix \({\varvec{B}}\) and objective \({\varvec{d}}\), the right-hand side \({\varvec{h}}\) of the constraints is also uncertain and

$$\begin{aligned} ({\varvec{B}},{\varvec{h}}, {\varvec{d}}) \in \mathcal{U} = \mathcal{U}^{B,h} \times \mathcal{U}^d, \end{aligned}$$

where \(\mathcal{U}\) is a convex compact set that is a Cartesian product of the uncertainty set for \(({\varvec{B}}, {\varvec{h}})\) and the uncertainty set for \({\varvec{d}}\). For this case, we give a sufficient condition for the optimality of a static solution that is related to the convexity of the transformation of the uncertainty set \(\mathcal{U}^{B,h}\). Note again that the optimality of a static solution does not depend on the uncertainty set of objectives \(\mathcal{U}^d\). However, our approximation bounds do not extend to this case if the transformation of \(\mathcal{U}^{B,h}\) is not convex.

1.2.4 Outline

The rest of the paper is organized as follows. In Sect. 2, we discuss the optimality of static solutions for the two-stage adjustable robust problem under constraint uncertainty and relate it to the convexity of an appropriate transformation of the uncertainty set. In Sect. 3, we introduce a measure of non-convexity for any compact set and present a tight approximation bound for the performance of an optimal static solution for the adjustable robust problem, which is related to the measure of non-convexity of the transformation of the uncertainty set. In Sect. 4, we extend our results to two-stage models where both the second-stage constraints and objective are uncertain.

2 Optimality of static solutions

In this section, we present a tight characterization of the conditions under which a static robust solution computed from (1.2) is optimal for the adjustable robust problem (1.1). We introduce a transformation of the uncertainty set \(\mathcal{U}\) and relate the optimality of a static solution to the convexity of the transformation.

An optimal static solution for (1.2) can be computed efficiently. Note that a static solution \(({\varvec{x}},{\varvec{y}})\) is feasible for all \({\varvec{B}}\in \mathcal{U}\). To observe that an optimal static robust solution can be computed in polynomial time, consider the separation problem: given a solution \({\varvec{x}},{\varvec{y}}\), we need to decide whether or not there exists \({\varvec{B}}\in \mathcal{U}\) and \(j \in \{1,\ldots ,m\}\) such that

$$\begin{aligned} {{\varvec{e}}_j^T({\varvec{A}}{\varvec{x}}+{\varvec{B}}{\varvec{y}})}>h_j, \end{aligned}$$

and find a separating hyperplane if \(({\varvec{x}},{\varvec{y}})\) is not feasible. Therefore, by solving \(m\) linear optimization problems over \(\mathcal{U}\), we can either decide that the given solution is feasible or obtain a separating hyperplane. From the equivalence of separation and optimization [18], we can compute an optimal static robust solution in polynomial time. In fact, there is a compact linear formulation to compute the optimal static solution for \(\varPi _\mathsf{Rob}\) [2, 4].
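To make the separation step concrete, here is a minimal sketch (our illustration, not the paper's code), assuming the budgeted uncertainty set \(\mathcal{U}=\{{\varvec{B}}\in [0,1]^{m\times n_2}\;|\;\sum _{i,j}B_{ij}\le 1\}\): for each row \(j\), we solve one linear program over \(\mathcal{U}\) maximizing \({\varvec{e}}_j^T{\varvec{B}}{\varvec{y}}\) and compare the value against \(h_j-{\varvec{e}}_j^T{\varvec{A}}{\varvec{x}}\).

```python
import numpy as np
from scipy.optimize import linprog

def violated_row(A, x, y, h):
    """Separation oracle for U = {B in [0,1]^{m x n} : sum_ij B_ij <= 1}:
    returns a row j whose robust constraint is violated by the candidate
    (x, y), or None if (x, y) is robust feasible."""
    m, n = len(h), len(y)
    for j in range(m):
        # Maximize e_j^T B y = sum_i B_ji y_i over B in U (variables: vec(B)).
        obj = np.zeros(m * n)
        obj[j * n:(j + 1) * n] = -y                # linprog minimizes
        res = linprog(obj, A_ub=np.ones((1, m * n)), b_ub=[1.0],
                      bounds=[(0.0, 1.0)] * (m * n))
        worst = -res.fun                           # worst-case value of e_j^T B y
        if A[j] @ x + worst > h[j] + 1e-9:
            return j                               # separating hyperplane from row j
    return None
```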

It is easy to see that the optimal value of the static robust problem is a lower bound on the optimal value of the adjustable robust problem. Suppose \(({\varvec{x}}^*, {\varvec{y}}^*)\) is an optimal solution for \(\varPi _\mathsf{Rob}\). Then \({\varvec{x}}={\varvec{x}}^*, {\varvec{y}}({\varvec{B}})={\varvec{y}}^*\) for all \({\varvec{B}}\in \mathcal{U}\) is feasible for \(\varPi _\mathsf{AR}\). Therefore,

$$\begin{aligned} z_\mathsf{AR}\ge z_\mathsf{Rob}. \end{aligned}$$
(2.1)

We would like to study the conditions under which \(z_\mathsf{AR}\le z_\mathsf{Rob}\). Suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}}))\) for all \({\varvec{B}} \in \mathcal{U}\) is a fully-adjustable optimal solution for \(\varPi _\mathsf{AR}\). Then

$$\begin{aligned} z_\mathsf{AR}&={\varvec{c}}^T{\varvec{x}}^*+\min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}({\varvec{B}})}\left\{ {\varvec{d}}^T{\varvec{y}}({\varvec{B}})\;\left| \;{\varvec{B}}{\varvec{y}}({\varvec{B}})\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right. \right\} \\ z_\mathsf{Rob}&\ge {\varvec{c}}^T{\varvec{x}}^*+\max _{{\varvec{y}}\ge {\varvec{0}}}\left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*, \forall {\varvec{B}}\in \mathcal{U}\right. \right\} . \end{aligned}$$

Note that \({\varvec{h}} - {\varvec{A}} {\varvec{x}}^* \ge {\varvec{0}}\), since otherwise the second-stage problem becomes infeasible for \(\varPi _\mathsf{AR}\). In fact, we can assume without loss of generality that \({\varvec{h}} - {\varvec{A}}{\varvec{x}}^* > {\varvec{0}}\). Otherwise, it is easy to see that \(z_\mathsf{AR}= z_\mathsf{Rob}\): suppose \(({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)_i=0\) for some \(i\). Since \(\mathcal{U}\) is a full-dimensional convex set, we can find \(\hat{{\varvec{B}}} \in \mathcal{U}\) such that \(\hat{B}_{ij} >0\) for all \(j=1,\ldots ,n_2\). Therefore,

$$\begin{aligned} \min _{{\varvec{B}}\in \mathcal{U}}\; \max _{{\varvec{y}}({\varvec{B}})\ge {\varvec{0}}} \; \left\{ {\varvec{d}}^T{\varvec{y}}({\varvec{B}}) \left| {\varvec{B}}{\varvec{y}}({\varvec{B}})\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^* \right. \right\} \le \max _{{\varvec{y}} \ge {\varvec{0}} } \; \left\{ {\varvec{d}}^T{\varvec{y}}\; |\; \hat{{\varvec{B}}} {\varvec{y}} \le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right\} \!=\! 0, \end{aligned}$$

which implies \(z_\mathsf{AR}= z_\mathsf{Rob}\). Therefore, we need to study conditions under which

$$\begin{aligned} \max _{{\varvec{y}}\ge {\varvec{0}}}\left\{ {\varvec{d}}^T{\varvec{y}}\left| {\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*, \forall {\varvec{B}}\in \mathcal{U}\right. \right\} \ge \min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}\ge {\varvec{0}}}\left\{ {\varvec{d}}^T{\varvec{y}}\left| {\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right. \right\} , \end{aligned}$$
(2.2)

where \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^* > {\varvec{0}}\).

2.1 One-stage models

Motivated by (2.2), we consider the following one-stage adjustable robust problem \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\):

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{{\varvec{B}}\in \mathcal{U}} \max _{{\varvec{y}}}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}\\&{\varvec{y}} \; \in \; {\mathbb R}^n_+,\nonumber \end{aligned}$$
(2.3)

where \({\varvec{h}}\in {\mathbb R}^m_+\) with \({\varvec{h}}>{\varvec{0}}\), \({\varvec{d}}\in {\mathbb R}^n_+\), and \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) is a convex, compact and down-monotone uncertainty set. The corresponding one-stage static robust problem \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) can be formulated as follows:

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \max _{{\varvec{y}}}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \forall {\varvec{B}} \in \mathcal{U}\\&{\varvec{y}} \; \in \; {\mathbb R}^{n}_+.\nonumber \end{aligned}$$
(2.4)

Ben-Tal and Nemirovski [4] study these one-stage models and show that the solution of \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) is optimal for \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) if the uncertainty set \(\mathcal U\) is constraint-wise.

Consider \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) as defined in (2.3). Taking the dual of the inner maximization problem, we can rewrite it as follows.

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{{\varvec{B}},{\varvec{\alpha }}} \{ {\varvec{h}}^T {\varvec{\alpha }} \;|\;{\varvec{B}}^T {\varvec{\alpha }} \ge {\varvec{d}}, {\varvec{B}}\in \mathcal{U},{\varvec{\alpha }}\in {\mathbb R}^{m}_+\}. \end{aligned}$$

Substituting \(\lambda = {\varvec{h}}^T {\varvec{\alpha }}\) and \({\varvec{\alpha }} = \lambda {\varvec{\mu }}\) (so that \({\varvec{h}}^T{\varvec{\mu }}=1\) whenever \(\lambda >0\)), we can reformulate \(z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) as follows.

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{\lambda , {\varvec{B}},{\varvec{\mu }}} \; \{ \lambda \;|\; \lambda {\varvec{B}}^T {\varvec{\mu }} \ge {\varvec{d}}, \; {\varvec{h}}^T {\varvec{\mu }} =1, \; {\varvec{B}}\in \mathcal{U}, {\varvec{\mu }}\in {\mathbb R}^{m}_+\}. \end{aligned}$$
(2.5)

2.2 Transformation of \(\mathcal U\)

Motivated by the formulation (2.5), we define the following transformation \(T(\mathcal{U},{\varvec{h}})\) of the uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) for \({\varvec{h}}>{\varvec{0}}\).

$$\begin{aligned} T(\mathcal{U},{\varvec{h}}) = \left\{ {\varvec{B}}^T {\varvec{\mu }} \; \left| \; {\varvec{h}}^T{\varvec{\mu }}=1, \; {\varvec{B}}\in \mathcal{U}, \; {\varvec{\mu }} \ge {\varvec{0}} \right. \right\} . \end{aligned}$$
(2.6)

For instance, if \({\varvec{h}}={\varvec{e}}\), then \(T(\mathcal{U},{\varvec{e}})\) is the set of all convex combinations of the rows of \({\varvec{B}}\), over all \({\varvec{B}}\in \mathcal{U}\). Note that \(T(\mathcal{U},{\varvec{e}})\) is not necessarily convex in general. We discuss several examples below to illustrate properties of \(T(\mathcal{U}, {\varvec{h}})\).

Example 1

(\(\mathcal{U}\) where \(T(\mathcal{U},{\varvec{h}})\) is non-convex). Consider the following uncertainty set \(\mathcal{U}\):

$$\begin{aligned} \mathcal{U} = \left\{ {\varvec{B}} \in [0,1]^{n\times n} \; \left| \; B_{ij}=0, \; \forall i\ne j, \; \sum _{j=1}^n B_{jj} \le 1\right. \right\} . \end{aligned}$$
(2.7)

\(T(\mathcal{U},{\varvec{e}})\) is non-convex. Figure 1 illustrates \(T(\mathcal{U}, {\varvec{e}})\) when \(n=3\). In fact, in Theorem 7, we prove that \(T(\mathcal{U},{\varvec{h}})\) is non-convex for all \({\varvec{h}}>{\varvec{0}}\).
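To see the non-convexity concretely, consider \(n=2\) and \({\varvec{h}}={\varvec{e}}\) (a short calculation we include for illustration). Both \({\varvec{e}}_1\) and \({\varvec{e}}_2\) belong to \(T(\mathcal{U},{\varvec{e}})\): take \({\varvec{B}}\) with \(B_{ii}=1\) and all other entries zero, and \({\varvec{\mu }}={\varvec{e}}_i\). However, their midpoint \(\frac{1}{2}{\varvec{e}}\) does not belong to \(T(\mathcal{U},{\varvec{e}})\), since \({\varvec{B}}^T{\varvec{\mu }}=\frac{1}{2}{\varvec{e}}\) with \(\mu _1+\mu _2=1\) would require

$$\begin{aligned} B_{11}\mu _1=B_{22}\mu _2=\frac{1}{2} \;\Rightarrow \; B_{11}+B_{22}=\frac{1}{2\mu _1}+\frac{1}{2(1-\mu _1)}\ge 2, \end{aligned}$$

contradicting \(B_{11}+B_{22}\le 1\). Therefore, \(\frac{1}{2}{\varvec{e}}\in \mathsf{conv}(T(\mathcal{U},{\varvec{e}}))\setminus T(\mathcal{U},{\varvec{e}})\).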

On the other hand, in the following two lemmas, we show that \(T(\mathcal{U},{\varvec{h}})\) can be convex for all \({\varvec{h}}>{\varvec{0}}\) for some interesting families of examples.

Fig. 1 The boundary of the set \(T(\mathcal{U}, {\varvec{e}})\) when \(n=3\)

Example 2

(Constraint-wise uncertainty set) Suppose the uncertainty set \(\mathcal{U}\) is constraint-wise where each constraint \(i=1,\ldots ,m\) can be selected independently from a compact convex set \(\mathcal{U}_i\). In other words, \(\mathcal{U}\) is a Cartesian product of \(\mathcal{U}_i, i=1,\ldots ,m\), i.e.,

$$\begin{aligned} \mathcal{U} = \mathcal{U}_1 \times \mathcal{U}_2 \times \cdots \times \mathcal{U}_m, \end{aligned}$$

In this case, \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\). In particular, we have the following lemma.

Lemma 1

Suppose the convex compact uncertainty set \(\mathcal{U}\) is constraint-wise:

$$\begin{aligned} \mathcal{U} =\{ {\varvec{B}}\;|\; {\varvec{B}}^T{\varvec{e}}_j \in \mathcal{U}_j, \; \forall j=1,\ldots ,m \}, \end{aligned}$$

where \(\mathcal{U}_j\) is a compact convex set in \({\mathbb R}^{n}_+\). Then \(T(\mathcal{U}, {\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\).

We provide a detailed proof of Lemma 1 in “Appendix 2”. Ben-Tal and Nemirovski [4] show that a static solution is optimal for the adjustable robust problem if \(\mathcal{U}\) is constraint-wise. In the later discussion, we show that a static solution is optimal whenever \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}} >{\varvec{0}}\), thereby extending the result in [4] for the case where \(\mathcal U\) is contained in the non-negative orthant. Note that constraint-wise uncertainty is analogous to independence of distributions in stochastic optimization problems.

Example 3

(Scaled projections) A compact convex set \(\mathcal{U} \subseteq {\mathbb R}^{m \times n}_+\) is said to satisfy the scaled projections property if the projections of \(\mathcal{U}\) onto each of its \(m\) rows are scalar multiples of each other. We show in the following lemma that if the uncertainty set \(\mathcal{U}\) satisfies the scaled projections property, then \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\).

Lemma 2

Consider any convex compact uncertainty set \(\mathcal{U} \subseteq {\mathbb R}^{m \times n}_+\). For any \(j=1,\ldots ,m\), let

$$\begin{aligned} \mathcal{U}_j =\left\{ {\varvec{b}} \; |\; \exists \; {\varvec{B}} \in \mathcal{U}, {\varvec{b}} = {\varvec{B}}^T {\varvec{e}}_j\right\} . \end{aligned}$$

Suppose \(\mathcal{U}_j = \alpha _j \mathcal{U}_1\) for all \(j=2,\ldots ,m\) where \(\alpha _j >0\). Then \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\).

We provide a proof of Lemma 2 in “Appendix 2”.

The family of permutation invariant sets is an important sub-class of sets with scaled projections. A set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) is permutation invariant if for any \({\varvec{B}}\in \mathcal{U}\) and any permutation \(\sigma \) of \(\{1,\ldots ,m\}\), the matrix obtained by permuting the rows of \({\varvec{B}}\), say \({\varvec{B}}^{\sigma }\) where \(B^{\sigma }_{ij}=B_{\sigma (i)j}\), also belongs to \(\mathcal{U}\). For example, consider the following set:

$$\begin{aligned} \mathcal{U} = \left\{ {\varvec{B}} \in {\mathbb R}^{m \times n}_+ \; \left| \; \sum _{i,j} B_{ij} \le 1\right. \right\} . \end{aligned}$$

It is easy to observe that \(\mathcal{U}\) is permutation invariant. Any permutation invariant set \(\mathcal{U}\) satisfies the scaled projections property since \(\mathcal{U}_i = \mathcal{U}_j\) for all \(i,j\in \{1,\ldots ,m\}\). In fact, all projections are equal for permutation invariant sets. Therefore, \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\) for any permutation invariant \(\mathcal{U}\).
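As a quick sanity check (ours), the transformation of the set above can be computed explicitly for \({\varvec{h}}={\varvec{e}}\). For any \({\varvec{b}}={\varvec{B}}^T{\varvec{\mu }}\) with \({\varvec{e}}^T{\varvec{\mu }}=1\), we have \({\varvec{e}}^T{\varvec{b}}=\sum _{j=1}^m \mu _j\,({\varvec{e}}^T{\varvec{B}}^T{\varvec{e}}_j)\le \max _j {\varvec{e}}^T{\varvec{B}}^T{\varvec{e}}_j\le 1\); conversely, any \({\varvec{b}}\ge {\varvec{0}}\) with \({\varvec{e}}^T{\varvec{b}}\le 1\) is obtained by taking \({\varvec{B}}={\varvec{e}}_1{\varvec{b}}^T\) and \({\varvec{\mu }}={\varvec{e}}_1\). Therefore,

$$\begin{aligned} T(\mathcal{U},{\varvec{e}})=\left\{ {\varvec{b}}\in {\mathbb R}^n_+\;\left| \;{\varvec{e}}^T{\varvec{b}}\le 1\right. \right\} , \end{aligned}$$

which is convex, consistent with Lemma 2.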

2.3 Main theorem

We now state the main theorem, which gives a tight characterization of the optimality of the static solution for the two-stage adjustable robust problem.

Theorem 1

(Optimality of Static Solutions) Let \(z_\mathsf{AR}\) be the optimal objective value of the two-stage adjustable robust problem \(\varPi _\mathsf{AR}\) defined in (1.1) and \(z_\mathsf{Rob}\) be that of \(\varPi _\mathsf{Rob}\) defined in (1.2). Then \(z_\mathsf{AR}=z_\mathsf{Rob}\) if \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\). Furthermore, if \(T(\mathcal{U},{\varvec{h}})\) is not convex for some \({\varvec{h}}>{\varvec{0}}\), then there exists an instance such that \(z_\mathsf{AR}>z_\mathsf{Rob}\).

Note that \(z_\mathsf{AR} = z_\mathsf{Rob}\) implies that the optimal static robust solution for \(\varPi _\mathsf{Rob}\) is also optimal for the adjustable robust problem \(\varPi _\mathsf{AR}\). In order to prove Theorem 1, we first reformulate \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) and \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) as optimization problems over \(T(\mathcal{U},{\varvec{h}})\). From (2.5) and the definition of \(T(\mathcal{U}, {\varvec{h}})\), we have the following lemma.

Lemma 3

Given \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), the one-stage adjustable robust problem \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (2.3) can be formulated as

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{\lambda , {\varvec{b}}} \; \{\lambda \;|\; \lambda {\varvec{b}} \; \ge \; {\varvec{d}},\; {\varvec{b}}\in T(\mathcal{U}, {\varvec{h}})\}. \end{aligned}$$
(2.8)

We can also reformulate \(\varPi _\mathsf{Rob}^I(\mathcal{U}, {\varvec{h}})\) as an optimization problem over \(\mathsf{conv}(T(\mathcal{U}, {\varvec{h}}))\) as follows.

Lemma 4

Given \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), the one-stage static robust problem \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (2.4) can be formulated as

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \min _{\lambda ,{\varvec{b}}} \; \{ \lambda \;|\; \lambda {\varvec{b}} \ge {\varvec{d}}, \; {\varvec{b}}\in \mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\}. \end{aligned}$$
(2.9)

Proof

Suppose

$$\begin{aligned} \mathcal{U} = \mathsf{conv}({\varvec{B}}_1, \ldots , {\varvec{B}}_K) \end{aligned}$$

where \({\varvec{B}}_j\in \mathcal{U}, j=1,\ldots ,K\) are the extreme points of \(\mathcal U\). Letting \({\varvec{b}}_i^j={\varvec{B}}_j^T{\varvec{e}}_i\), we can rewrite (2.4) as follows.

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\;=\;\max _{{\varvec{y}}} \left\{ {\varvec{d}}^T {\varvec{y}} \;\left| \; {\varvec{B}}_j{\varvec{y}}\le {\varvec{h}}, \;\forall j=1,\ldots ,K, {\varvec{y}}\in {\mathbb R}^{n}_+\right. \right\} . \end{aligned}$$

Again, by writing the dual problem, we have

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \min _{{\varvec{\alpha }}^j, j\in [K]} \left\{ \sum _{j=1}^K {\varvec{h}}^T {\varvec{\alpha }}^j \;\left| \;\sum _{j=1}^K {\varvec{B}}_j^T {\varvec{\alpha }}^j \ge {\varvec{d}}, {\varvec{\alpha }}^j\in {\mathbb R}^{m}_+\right. \right\} \end{aligned}$$

Denote \(\theta _j={\varvec{h}}^T{\varvec{\alpha }}^j\) and \(\lambda ={\varvec{e}}^T{\varvec{\theta }}\). Since \({\varvec{h}}>{\varvec{0}}\), \({\varvec{d}}\ge {\varvec{0}}\) and \({\varvec{d}}\ne {\varvec{0}}\), we can assume without loss of generality that \({\varvec{\alpha }}^j\ne {\varvec{0}}\) for all \(j\in [K]\). Therefore, \(\theta _j>0\) for all \(j\in [K]\) and \(\lambda >0\). We have

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})&\;=\;\min _{\lambda , {\varvec{\alpha }}^j, j\in [K]} \left\{ \lambda \; \left| \; \lambda \sum _{j=1}^K \frac{\theta _j}{\lambda } {\varvec{B}}_j^T\frac{{\varvec{\alpha }}^j}{\theta _j}\ge {\varvec{d}}, \; {\varvec{\alpha }}^j \ge {\varvec{0}}\right. \right\} \\&\;=\; \min _{\lambda , \hat{{\varvec{b}}}_j} \left\{ \lambda \;\left| \;\lambda \sum _{j=1}^K \frac{\theta _j}{\lambda } \hat{{\varvec{b}}}_j\ge {\varvec{d}}, \; \hat{{\varvec{b}}}_j\in T(\mathcal{U},{\varvec{h}}) \right. \right\} \\&\;=\;\min _{\lambda ,{\varvec{b}}} \{\lambda \;|\;\lambda {\varvec{b}}\ge {\varvec{d}}, \; {\varvec{b}}\in \mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\} \end{aligned}$$

where the second equality holds because \({\varvec{h}}^T\frac{{\varvec{\alpha }}^j}{\theta _j}=1\) for \(j=1,\ldots ,K\), and the last equality holds because \(\theta _1/\lambda +\cdots +\theta _K/\lambda =1\), so that \(\sum _{j=1}^K (\theta _j/\lambda )\hat{{\varvec{b}}}_j\) ranges over all of \(\mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\). \(\square \)

Note that the formulations (2.8) and (2.9) are bilinear optimization problems over \(T(\mathcal{U}, {\varvec{h}})\) and are not necessarily convex even if \(T(\mathcal{U},{\varvec{h}})\) is convex. However, the reformulations provide interesting geometric insights about the relation between the adjustable robust and static robust problems with respect to properties of \(\mathcal{U}\). Figure 2 illustrates the geometric interpretation of \(z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) and \(z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) in terms of the formulations in Lemmas 3 and 4. Now, we are ready to prove Theorem 1.

Fig. 2 A geometric illustration of \({z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})}\) and \({z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})}\) when \({\varvec{d}}=\frac{1}{2}{\varvec{e}}\): For \({z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})}\), the optimal solution \({\varvec{b}}\) is the point where the ray along \({\varvec{d}}\) intersects the boundary of \(T(\mathcal{U}, {\varvec{h}})\), while for \({z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})}\), the optimal solution is \({\varvec{b}}={\varvec{d}}\) since \({\varvec{d}}\in \mathsf{conv}(T(\mathcal{U}, {\varvec{h}}))\)

Proof of Theorem 1

Suppose \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\). Let \(({\varvec{x}}^*,{\varvec{y}}^*({\varvec{B}}), {\varvec{B}}\in \mathcal{U})\) be an optimal fully-adjustable solution to \(\varPi _\mathsf{AR}\). Then

$$\begin{aligned} z_\mathsf{AR}&= {\varvec{c}}^T{\varvec{x}}^*+ \min _{{\varvec{B}}\in \mathcal{U}} \max _{{\varvec{y}}({\varvec{B}})\ge {\varvec{0}}} \left\{ {\varvec{d}}^T{\varvec{y}}({\varvec{B}}) \; \left| \; {\varvec{B}}{\varvec{y}}({\varvec{B}})\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^* \right. \right\} \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*), \end{aligned}$$

where the second equation follows from (2.3). We can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\) as discussed earlier. Now,

$$\begin{aligned} z_\mathsf{Rob}&\ge {\varvec{c}}^T{\varvec{x}}^*+ \max _{{\varvec{y}}\ge {\varvec{0}}}\left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*, \forall {\varvec{B}}\in \mathcal{U}\right. \right\} \nonumber \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\nonumber \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\nonumber \\&= z_\mathsf{AR}, \end{aligned}$$
(2.10)

where the first inequality follows as \({\varvec{x}}^*\) is a feasible first-stage solution for the static robust problem. The second equation follows from (2.4). Equation (2.10) follows from Lemma 3 and Lemma 4 and the fact that \(T(\mathcal{U}, {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\) is convex. Also, from (2.1) we know that \(z_\mathsf{AR} \ge z_\mathsf{Rob}\) which implies \(z_\mathsf{AR}= z_\mathsf{Rob}\).

Conversely, suppose \(z_\mathsf{AR}=z_\mathsf{Rob}\). For the sake of contradiction, assume \(T(\mathcal{U},{\varvec{h}})\) is non-convex for some \({\varvec{h}}=\hat{{\varvec{h}}}\). Then, there must exist \({\varvec{\hat{b}}}\in {\mathbb R}^n_+\) such that \({\varvec{\hat{b}}}\not \in T(\mathcal{U},\hat{{\varvec{h}}})\) but \({\varvec{\hat{b}}}\in \mathsf{conv}(T(\mathcal{U},\hat{{\varvec{h}}}))\). Consider the following instance of \(\varPi _\mathsf{AR}\) and \(\varPi _\mathsf{Rob}\):

$$\begin{aligned} {\varvec{A}}={\varvec{0}},\; {\varvec{c}}={\varvec{0}}, \; {\varvec{h}}=\hat{{\varvec{h}}},\; {\varvec{d}}={\varvec{\hat{b}}}. \end{aligned}$$

Note that in this case, we have \(z_\mathsf{AR}=z_\mathsf{AR}^I(\mathcal{U},\hat{{\varvec{h}}})\) and \(z_\mathsf{Rob}=z_\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}})\). Therefore, by our assumption,

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},\hat{{\varvec{h}}})=z_\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}}). \end{aligned}$$

Since \({\varvec{\hat{b}}}\in \mathsf{conv}(T(\mathcal{U},\hat{{\varvec{h}}}))\), \(\lambda =1, {\varvec{b}}={\varvec{\hat{b}}}\) is a feasible solution for \(z_\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}})\) in (2.9). Therefore, \(z_\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}})\le 1\), which implies \(z_\mathsf{AR}^I(\mathcal{U},\hat{{\varvec{h}}})\le 1\). This further implies that there exists some \({\varvec{b}}_1\in T(\mathcal{U},\hat{{\varvec{h}}})\) such that \({\varvec{b}}_1\ge {\varvec{\hat{b}}}\): from (2.8), there exist \(\lambda \le 1\) and \({\varvec{b}}_1\in T(\mathcal{U},\hat{{\varvec{h}}})\) with \({\varvec{b}}_1\ge \lambda {\varvec{b}}_1\ge {\varvec{\hat{b}}}\). Since \(\mathcal{U}\) is down-monotone by our assumption, so is \(T(\mathcal{U},\hat{{\varvec{h}}})\) (see “Appendix 1”). Therefore, \({\varvec{\hat{b}}}\in T(\mathcal{U},\hat{{\varvec{h}}})\), which is a contradiction. \(\square \)

In Lemmas 1 and 2, we give families of \(\mathcal U\) for which \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\). We would like to note that, for a given \({\varvec{h}}>{\varvec{0}}\), it is not necessarily tractable to decide whether \(T(\mathcal{U},{\varvec{h}})\) is convex for an arbitrary \(\mathcal U\).

2.4 Min–Max theorem interpretation

We can interpret a special case of Theorem 1 as a min-max theorem. Consider the case where \({\varvec{A}}={\varvec{0}}, {\varvec{c}}={\varvec{0}}\), in which we have

$$\begin{aligned} z_\mathsf{AR}=z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}), z_\mathsf{Rob}=z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}). \end{aligned}$$

Recall:

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U}, {\varvec{h}})=\min _{B\in \mathcal{U}} \max _{{\varvec{y}}\ge {\varvec{0}}} \left\{ \left. {\varvec{d}}^T{\varvec{y}}\;\right| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}\right\} . \end{aligned}$$

We define the following function for \({\varvec{y}}\in {\mathbb R}^{n}_+, {\varvec{B}}\in \mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\):

$$\begin{aligned} f({\varvec{y}},{\varvec{B}})=\left\{ \begin{array}{l} {\varvec{d}}^T{\varvec{y}},\;\text {if}\;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}\\ -\infty ,\;\text {otherwise.}\end{array}\right. \end{aligned}$$

Now, we can express \(z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) and \(z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) as follows:

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})=\min _{B\in \mathcal{U}} \max _{{\varvec{y}}\ge {\varvec{0}}} f({\varvec{y}},{\varvec{B}}) \end{aligned}$$

and

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})=\max _{{\varvec{y}}\ge {\varvec{0}}} \min _{B\in \mathcal{U}} f({\varvec{y}},{\varvec{B}}). \end{aligned}$$

Therefore, from Theorem 1, we have:

$$\begin{aligned} \min _{B\in \mathcal{U}} \max _{{\varvec{y}}\ge {\varvec{0}}} f({\varvec{y}},{\varvec{B}})=\max _{{\varvec{y}}\ge {\varvec{0}}} \min _{B\in \mathcal{U}} f({\varvec{y}},{\varvec{B}}) \end{aligned}$$
(2.11)

if \(T(\mathcal{U},{\varvec{h}})\) is convex. We would like to note that the min-max equality (2.11) does not follow from the general Saddle-Point Theorem [12] since \(f({\varvec{y}},{\varvec{B}})\) is not a quasi-convex function of \({\varvec{B}}\).
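As a quick numeric illustration (ours, not from the paper), the following sketch evaluates both sides of (2.11) for the diagonal uncertainty set of Example 1 with \(n=2\) and \({\varvec{d}}={\varvec{h}}={\varvec{e}}\), where \(T(\mathcal{U},{\varvec{e}})\) is non-convex and the min-max equality indeed fails.

```python
import numpy as np

# Adjustable side of (2.11): the adversary picks B = diag(t, 1 - t) on the
# budget boundary (shrinking an entry only helps the maximizer), and
# max_y { y1 + y2 : t*y1 <= 1, (1 - t)*y2 <= 1 } = 1/t + 1/(1 - t).
ts = np.linspace(0.01, 0.99, 99)
min_max = min(1.0 / t + 1.0 / (1.0 - t) for t in ts)   # -> 4 at t = 1/2

# Static side: y must satisfy B11*y1 <= 1 and B22*y2 <= 1 for ALL B in U;
# taking B11 = 1 (resp. B22 = 1) forces y <= e, so the best static y is e.
max_min = 2.0

print(min_max, max_min)  # 4.0 > 2.0: the two sides of (2.11) differ here
```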

3 Measure of non-convexity and approximation bound

In this section, we introduce a measure of non-convexity for general down-monotone compact sets in the non-negative orthant and show that the performance of the optimal static solution is related to this measure of non-convexity of the transformation \(T(\mathcal{U}, {\varvec{h}})\) of the uncertainty set \(\mathcal{U}\). We also compare our bound with the symmetry bound introduced by Bertsimas and Goyal [8]. In particular, we show that our bound is at least as good as the symmetry bound, and is significantly better in many cases.

Definition 1

Given a down-monotone compact set \(\mathcal{S}\subseteq {\mathbb R}^n_+\) that is contained in the non-negative orthant, the measure of non-convexity \(\kappa (\mathcal{S})\) is defined as follows.

$$\begin{aligned} \kappa (\mathcal{S}) = \min \left\{ \alpha \left| \; \mathsf{conv}(\mathcal{S})\subseteq \alpha \mathcal{S}\right. \right\} . \end{aligned}$$
(3.1)

For any down-monotone compact set \(\mathcal{S} \subseteq {\mathbb R}^n_+\), \(\kappa (\mathcal{S})\) is the smallest factor by which \(\mathcal S\) must be scaled to contain the convex hull of \(\mathcal S\). If \(\mathcal S\) is convex, then \(\kappa (\mathcal{S})=1\). Therefore, if the uncertainty set \(\mathcal U\) is constraint-wise, then \(\kappa (T(\mathcal{U},{\varvec{h}}))=1\) for all \({\varvec{h}}>{\varvec{0}}\) (Lemma 1). On the other hand, if \(\mathcal S\) is non-convex, then \(\kappa (\mathcal{S})>1\). For instance, consider the following set:

$$\begin{aligned} \mathcal{S}^n=\left\{ {\varvec{x}}\in {\mathbb R}^n_+\;\left| \;\sum _{j=1}^n x_j^{\frac{1}{2}}\le 1\right. \right\} . \end{aligned}$$

Figure 3 illustrates set \(\mathcal{S}^n\) for \(n=2\) and its measure of non-convexity. We would like to emphasize that given an arbitrary down-monotone compact set \(\mathcal U\) and \({\varvec{h}}>{\varvec{0}}\), it is not necessarily tractable to compute \(\kappa (T(\mathcal{U},{\varvec{h}}))\).

Fig. 3 A geometric illustration of \(\kappa (\mathcal{S})\) when \(n=2\): \(\mathcal S\) is down-monotone and shaded with dotted lines, \(\mathsf{conv}(\mathcal{S})\) is marked with dashed lines, and the outermost curve is the boundary of \(\kappa \cdot \mathcal{S}\). Draw a ray from the origin intersecting the boundary of \(\mathcal S\) at \(v\) and the boundary of \(\mathsf{conv}(\mathcal{S})\) at \(u\); \(\kappa (\mathcal{S})\) is the largest ratio of the distances from the origin to \(u\) and to \(v\) over all such rays
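For this family, the measure of non-convexity can be computed in closed form (a short derivation we include for illustration). Since \({\varvec{e}}_j\in \mathcal{S}^n\) for all \(j\) and \(\sum _{j=1}^n x_j\le \big (\sum _{j=1}^n x_j^{1/2}\big )^2\le 1\) for any \({\varvec{x}}\in \mathcal{S}^n\), we have \(\mathsf{conv}(\mathcal{S}^n)=\{{\varvec{x}}\in {\mathbb R}^n_+\;|\;{\varvec{e}}^T{\varvec{x}}\le 1\}\). By the Cauchy-Schwarz inequality, every \({\varvec{x}}\) in this simplex satisfies \(\sum _{j=1}^n (x_j/\alpha )^{1/2}\le (n/\alpha )^{1/2}\), so \(\mathsf{conv}(\mathcal{S}^n)\subseteq \alpha \mathcal{S}^n\) whenever \(\alpha \ge n\), while the point \(\frac{1}{n}{\varvec{e}}\) shows that no smaller \(\alpha \) suffices. Therefore,

$$\begin{aligned} \kappa (\mathcal{S}^n)=n. \end{aligned}$$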

3.1 Approximation bounds

In this section, we relate the performance of the static solution for the two-stage adjustable robust problem to the measure of non-convexity of \(T(\mathcal{U},{\varvec{h}})\).

Additional assumption: For the analysis of the performance bound for static solutions, we make two additional assumptions in the model (1.1): the first-stage objective coefficients \({\varvec{c}}\) and the first-stage decision variables \({\varvec{x}}\) are both non-negative. We work with these assumptions for the rest of the paper.

Theorem 2

For any down-monotone, compact set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\), let

$$\begin{aligned} \rho (\mathcal{U})=\max \{\kappa (T(\mathcal{U},{\varvec{h}}))\;|\;{\varvec{h}}>{\varvec{0}}\}. \end{aligned}$$

Let \(z_\mathsf{AR}\) be the optimal value of \(\varPi _\mathsf{AR}\) in (1.1) and \(z_\mathsf{Rob}\) be the optimal value of \(\varPi _\mathsf{Rob}\) in (1.2) under the additional assumption that \({\varvec{x}} \ge {\varvec{0}}\) and \({\varvec{c}} \ge {\varvec{0}}\). Then,

$$\begin{aligned} z_\mathsf{AR}\le \rho (\mathcal{U})\cdot z_\mathsf{Rob}. \end{aligned}$$

Furthermore, we can show that the bound is tight.

Proof

Suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}}), {\varvec{B}}\in \mathcal{U})\) is an optimal fully-adjustable solution for \(\varPi _\mathsf{AR}\). Based on the discussion in the proof of Theorem 1, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\). Then

$$\begin{aligned} z_\mathsf{AR}&= {\varvec{c}}^T {\varvec{x}}^* + \min _{{\varvec{B}} \in \mathcal{U}} \; \max _{{\varvec{y}}({\varvec{B}}) \ge {\varvec{0}}} \left\{ {\varvec{d}}^T {\varvec{y}}({\varvec{B}}) \; |\; {\varvec{B}} {\varvec{y}}({\varvec{B}}) \le {\varvec{h}} - {\varvec{A}}{\varvec{x}}^* \right\} \\&= {\varvec{c}}^T {\varvec{x}}^* + z_\mathsf{AR}^I(\mathcal{U}, {\varvec{h}} - {\varvec{A}}{\varvec{x}}^*), \end{aligned}$$

and

$$\begin{aligned} z_\mathsf{Rob}&\ge {\varvec{c}}^T{\varvec{x}}^*+ \max _{{\varvec{y}}}\; \left\{ {\varvec{d}}^T{\varvec{y}}\;|\;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*,\;\forall {\varvec{B}}\in \mathcal{U} \right\} \nonumber \\&= {\varvec{c}}^T {\varvec{x}}^* + z_\mathsf{Rob}^I(\mathcal{U}, {\varvec{h}} - {\varvec{A}} {\varvec{x}}^*). \end{aligned}$$
(3.2)

Let \(\hat{{\varvec{h}}} = {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\) and \(\kappa = \kappa (T(\mathcal{U}, \hat{{\varvec{h}}}))\). From Lemma 4, we can reformulate \(z_\mathsf{Rob}^I(\mathcal{U}, \hat{{\varvec{h}}})\) as follows.

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U}, \hat{{\varvec{h}}}) = \min _{{\varvec{b}} \in \mathsf{conv}(T(\mathcal{U}, \hat{{\varvec{h}}}))} \; \{ \lambda \; |\; \lambda {\varvec{b}} \ge {\varvec{d}}, \lambda \ge 0\}. \end{aligned}$$
(3.3)

Let \((\hat{\lambda }, \hat{{\varvec{b}}})\) be a minimizer for \(z_\mathsf{Rob}^I(\mathcal{U}, \hat{{\varvec{h}}})\) in (3.3). Therefore,

$$\begin{aligned} \hat{{\varvec{b}}} \in \mathsf{conv}(T(\mathcal{U}, \hat{{\varvec{h}}})) \Rightarrow \frac{1}{\kappa } \cdot \hat{{\varvec{b}}} \in T(\mathcal{U}, \hat{{\varvec{h}}}). \end{aligned}$$

Now,

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U}, \hat{{\varvec{h}}})&= \min _{{\varvec{b}} \in T(\mathcal{U}, \hat{{\varvec{h}}})} \; \{ \lambda \; |\; \lambda {\varvec{b}} \ge {\varvec{d}}, \lambda \ge 0\}\nonumber \\&\le \kappa \cdot \hat{\lambda }\nonumber \\&= \kappa \cdot z_\mathsf{Rob}^I(\mathcal{U}, \hat{{\varvec{h}}}), \end{aligned}$$
(3.4)

where the first equation follows from the reformulation of \(z_\mathsf{AR}^I(\mathcal{U}, \hat{{\varvec{h}}})\) in Lemma 3. The second inequality follows as \((1/\kappa ) \hat{{\varvec{b}}} \in T(\mathcal{U}, \hat{{\varvec{h}}})\) and \(\hat{\lambda }\hat{{\varvec{b}}} \ge {\varvec{d}}\) and the last equality follows as \(z_\mathsf{Rob}^I(\mathcal{U}, \hat{{\varvec{h}}}) = \hat{\lambda }\). Therefore,

$$\begin{aligned} z_\mathsf{AR}&= {\varvec{c}}^T {\varvec{x}}^* + z_\mathsf{AR}^I(\mathcal{U}, {\varvec{h}} - {\varvec{A}}{\varvec{x}}^*)\nonumber \\&\le {\varvec{c}}^T {\varvec{x}}^* + \kappa \cdot z_\mathsf{Rob}^I(\mathcal{U}, {\varvec{h}} - {\varvec{A}}{\varvec{x}}^*)\end{aligned}$$
(3.5)
$$\begin{aligned}&\le \kappa \cdot \big ({\varvec{c}}^T {\varvec{x}}^* + z_\mathsf{Rob}^I(\mathcal{U}, {\varvec{h}} - {\varvec{A}}{\varvec{x}}^*)\big )\nonumber \\&\le \rho (\mathcal{U}) \cdot z_\mathsf{Rob}, \end{aligned}$$
(3.6)

where (3.5) follows from (3.4), the second inequality uses \(\kappa \ge 1\) and \({\varvec{c}}^T{\varvec{x}}^*\ge 0\) (by the additional assumption that \({\varvec{c}},{\varvec{x}}\ge {\varvec{0}}\)), and the last inequality follows from (3.2) and the fact that \(\kappa = \kappa (T(\mathcal{U}, \hat{{\varvec{h}}})) \le \rho (\mathcal{U})\).

Tightness of the bound. We can show that the bound is tight. In particular, given any scalar \(\mu >1\) and some \(n\in {\mathbb Z}_+\), take \({\varvec{A}}={\varvec{0}}, {\varvec{c}}={\varvec{0}}, {\varvec{d}}={\varvec{e}}, {\varvec{h}}={\varvec{e}}\) and \(\theta =\log _{\mu }n\). Consider the following uncertainty set:

$$\begin{aligned} \mathcal{U} = \left\{ {\varvec{B}} \in [0,1]^{n\times n} \; \left| \; B_{ij}=0, \; \forall i\ne j, \; \sum _{j=1}^n B_{jj}^{\theta } \le 1\right. \right\} . \end{aligned}$$

For \(\varPi _\mathsf{AR}\), we have

$$\begin{aligned} z_\mathsf{AR}&= \min _{{\varvec{B}}} \max _{{\varvec{y}}}\left\{ {\varvec{e}}^T {\varvec{y}} \;\left| \; B_{jj}y_j\le 1, j=1,\ldots ,n, \sum _{j=1}^n B_{jj}^\theta \le 1 \right. \right\} \\&=\min _{{\varvec{B}}}\left\{ \sum _{j=1}^n \frac{1}{B_{jj}}\;\left| \;\sum _{j=1}^n B_{jj}^\theta \le 1\right. \right\} . \end{aligned}$$

This is a convex problem and, by solving the KKT conditions, we obtain the optimal solution \(B_{jj}=n^{-\frac{1}{\theta }}\) for \(j=1,\ldots ,n\). Hence, \(z_\mathsf{AR}=n\cdot n^{\frac{1}{\theta }}=n^{1+\frac{1}{\theta }}\).

For \(\varPi _\mathsf{Rob}\), we have

$$\begin{aligned} z_\mathsf{Rob} = \max _{{\varvec{y}}} \left\{ \left. {\varvec{e}}^T {\varvec{y}}\; \right| \;B_{jj}y_j \le 1, \; \forall {\varvec{B}}\in \mathcal{U},\; j=1,\ldots ,n\right\} . \end{aligned}$$

The constraints essentially enforce \(B_{jj}y_j\le 1\) for all \(B_{jj}\le 1\), \(j=1,\ldots ,n\). We only need to consider the extreme case \(B_{jj}=1\), which yields \(y_j\le 1\). Therefore, \({\varvec{y}}={\varvec{e}}\) is optimal, \(z_\mathsf{Rob}=n\) and

$$\begin{aligned} \frac{z_\mathsf{AR}}{z_\mathsf{Rob}}=\frac{n^{1+\frac{1}{\theta }}}{n}=n^{\frac{1}{\theta }}=\mu . \end{aligned}$$

In “Appendix 3”, we show that \(\kappa (T(\mathcal{U},{\varvec{h}}))=n^{\frac{1}{\theta }}\) for all \({\varvec{h}}>{\varvec{0}}\). Therefore, \(\rho (\mathcal{U})=n^{\frac{1}{\theta }}=\mu \) and \(z_\mathsf{AR}=\rho (\mathcal{U})\cdot {z_\mathsf{Rob}}\). \(\square \)
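As a numeric sanity check (our sketch, not part of the paper), for \(n=4\) and \(\mu =2\) (so that \(\theta =\log _2 4=2\)), solving the convex problem above with an off-the-shelf solver recovers \(B_{jj}=n^{-1/\theta }=1/2\) and the gap \(z_\mathsf{AR}/z_\mathsf{Rob}=\mu =2\).

```python
import numpy as np
from scipy.optimize import minimize

n, theta = 4, 2.0                      # theta = log_mu(n) with mu = 2
res = minimize(
    lambda t: np.sum(1.0 / t),         # z_AR = min sum_j 1/B_jj over U
    x0=np.full(n, 0.3),
    bounds=[(1e-6, 1.0)] * n,
    constraints=[{"type": "ineq", "fun": lambda t: 1.0 - np.sum(t ** theta)}],
    method="SLSQP",
)
z_ar, z_rob = res.fun, float(n)        # z_Rob = n at y = e (see above)
print(res.x, z_ar, z_ar / z_rob)       # B_jj ~ 0.5, z_AR ~ 8, ratio ~ 2
```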

In Theorem 2, we prove a bound on the optimal objective value \(z_\mathsf{AR}\) of \(\varPi _\mathsf{AR}\) with respect to the optimal objective value \(z_\mathsf{Rob}\) of \(\varPi _\mathsf{Rob}\). Note that this also implies a bound on the performance of the optimal static robust solution for \(\varPi _\mathsf{Rob}\) in the adjustable robust problem \(\varPi _\mathsf{AR}\). Furthermore, in using a static robust solution \((\hat{{\varvec{x}}}, \hat{{\varvec{y}}})\) for the two-stage adjustable robust problem, we only implement the first-stage solution \(\hat{{\varvec{x}}}\) and recompute the optimal second-stage solution \({\varvec{y}}({\varvec{B}})\) after the uncertain constraint matrix \({\varvec{B}}\) is known. Therefore, the objective value of such a solution is at least \(z_\mathsf{Rob}\), which in turn is at least \(z_\mathsf{AR}/\rho (\mathcal{U})\). We would like to note that, given an arbitrary down-monotone uncertainty set \(\mathcal U\), it is not necessarily tractable to compute \(\kappa (T(\mathcal{U},{\varvec{h}}))\) or \(\rho (\mathcal{U})\). In Table 1, we compute \(\rho (\mathcal{U})\) for some commonly used uncertainty sets. Moreover, in the following theorem, we show that \(\kappa (T(\mathcal{U}, {\varvec{h}}))\) is at most \(m\) for any \(\mathcal{U}\subseteq {\mathbb R}^{m \times n}_+\) and \({\varvec{h}} > {\varvec{0}}\).

Table 1 A comparison between the non-convexity bound and the symmetry bound for various uncertainty sets

Theorem 3

For any down-monotone convex compact set \(\mathcal{U} \subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\),

$$\begin{aligned} \kappa (T(\mathcal{U},{\varvec{h}})) \le m. \end{aligned}$$

Proof

Note that

$$\begin{aligned} T(\mathcal{U},{\varvec{h}})= \left\{ {\varvec{B}}^T {\varvec{\mu }} \; \left| \; {\varvec{B}} \in \mathcal{U}, {\varvec{h}}^T {\varvec{\mu }} = 1, {\varvec{\mu }} \ge {\varvec{0}} \right. \right\} . \end{aligned}$$

For all \(j=1,\ldots ,m\), let

$$\begin{aligned} \mathcal{U}_j = \left\{ \left. \left( \frac{1}{h_j}\right) \cdot {\varvec{B}}^T {\varvec{e}}_j \; \right| \; {\varvec{B}} \in \mathcal{U} \right\} . \end{aligned}$$

We can show that

$$\begin{aligned} \bigcup _{j=1}^m \mathcal{U}_j \subseteq T(\mathcal{U}, {\varvec{h}}) \subseteq \mathsf{conv} \left( \bigcup _{j=1}^m \mathcal{U}_j \right) . \end{aligned}$$
(3.7)

For any \(j =1,\ldots ,m\), consider \({\varvec{\mu }}={\varvec{e}}_j/h_j\). Then \(\mathcal{U}_j = \{ {\varvec{B}}^T {\varvec{\mu }} \; |\; {\varvec{B}} \in \mathcal{U}\} \subseteq T(\mathcal{U}, {\varvec{h}})\) for all \(j=1,\ldots ,m\). Consider any \({\varvec{b}} \in T(\mathcal{U}, {\varvec{h}})\) where

$$\begin{aligned} {\varvec{b}} = \sum _{j=1}^m {\varvec{B}}^T {\varvec{e}}_j \mu _j, \end{aligned}$$

for some \({\varvec{B}} \in \mathcal{U}\) and \({\varvec{\mu }} \ge {\varvec{0}}\) and \({\varvec{h}}^T {\varvec{\mu }} =1\). Therefore,

$$\begin{aligned} {\varvec{b}} = \sum _{j=1}^m \left( \frac{1}{h_j} {\varvec{B}}^T {\varvec{e}}_j\right) \cdot (h_j \mu _j) = \sum _{j=1}^m {\varvec{b}}_j \cdot (h_j \mu _j), \end{aligned}$$

where \({\varvec{b}}_j \in \mathcal{U}_j\) for all \(j\in [m]\) and \(h_1\mu _1+ \ldots + h_m \mu _m =1\) which proves that \({\varvec{b}}\) belongs to the convex hull of the union of \(\mathcal{U}_j\), \(j\in [m]\). From (3.7), we have that

$$\begin{aligned} \mathsf{conv}(T(\mathcal{U}, {\varvec{h}})) = \mathsf{conv} \left( \bigcup _{j=1}^m \mathcal{U}_j \right) . \end{aligned}$$

Now consider any \({\varvec{b}} \in \mathsf{conv}(T(\mathcal{U}, {\varvec{h}}))\). Therefore, \({\varvec{b}}\) belongs to the convex hull of union of sets \(\mathcal{U}_j\), i.e.,

$$\begin{aligned} {\varvec{b}} = \sum _{j=1}^m {\varvec{b}}_j \lambda _j, \end{aligned}$$

for some \({\varvec{b}}_j \in \mathcal{U}_j\), \(j=1,\ldots ,m\) and some \({\varvec{\lambda }} \ge {\varvec{0}}\) such that \(\lambda _1 + \ldots + \lambda _m =1\). For all \(j=1,\ldots ,m\), let

$$\begin{aligned} {\varvec{B}}_j = h_j \cdot {\varvec{e}}_j {\varvec{b}}_j^T. \end{aligned}$$

Since \({\varvec{b}}_j \in \mathcal{U}_j\) and \(\mathcal{U}\) is down-monotone, \({\varvec{B}}_j \in \mathcal{U}\). Now, let

$$\begin{aligned} \hat{{\varvec{B}}} = \sum _{j=1}^m \frac{1}{m} {\varvec{B}}_j \; \in \mathcal{U}, \end{aligned}$$

as \(\hat{{\varvec{B}}}\) is a convex combination of elements in \(\mathcal{U}\). Also, let \(\hat{\mu }_j = \lambda _j/h_j\) for all \(j=1,\ldots ,m\). Therefore, \({\varvec{h}}^T \hat{{\varvec{\mu }}} = 1\) and \(\hat{{\varvec{b}}} = \hat{{\varvec{B}}}^T \hat{{\varvec{\mu }}} \in T(\mathcal{U}, {\varvec{h}})\). Also,

$$\begin{aligned} \hat{{\varvec{b}}} = \frac{1}{m} \cdot \left( \sum _{j=1}^m {\varvec{B}}_j^T \hat{{\varvec{\mu }}} \right) = \frac{1}{m} \cdot \left( \sum _{j=1}^m h_j {\varvec{b}}_j {\varvec{e}}_j^T \hat{{\varvec{\mu }}} \right) = \frac{1}{m} \sum _{j=1}^m {\varvec{b}}_j \lambda _j = \frac{1}{m} \cdot {\varvec{b}}. \end{aligned}$$

Therefore, \({\varvec{b}} = m\cdot \hat{{\varvec{b}}} \in m\cdot T(\mathcal{U},{\varvec{h}})\), which shows that \(\mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\subseteq m\cdot T(\mathcal{U},{\varvec{h}})\), i.e., \(\kappa (T(\mathcal{U},{\varvec{h}}))\le m\).

\(\square \)

3.2 Comparison with symmetry bound [8]

Bertsimas and Goyal [8] consider a general two-stage adjustable robust convex optimization problem with uncertain convex constraints and, under mild conditions, show that the performance of a static solution is related to the symmetry of the uncertainty set. In this section, we compare our bound \(\rho (\mathcal{U})\) defined in Theorem 2 with the symmetry bound of [8] for the case of the two-stage adjustable robust linear optimization problem under uncertain constraints. The notion of symmetry was introduced by Minkowski [21].

Definition 2

Given a nonempty convex compact set \(\mathcal{S}\subseteq {\mathbb R}^m\) and a point \({\varvec{s}}\in \mathcal{S}\), the symmetry of \({\varvec{s}}\) with respect to \(\mathcal S\) is defined as:

$$\begin{aligned} \mathsf{sym}(s,\mathcal{S}):=\max \{\alpha \ge 0\;|\;{\varvec{s}}+\alpha ({\varvec{s}}-\hat{{\varvec{s}}})\in \mathcal{S},\forall \hat{{\varvec{s}}}\in \mathcal{S}\}. \end{aligned}$$

The symmetry of the set \(\mathcal S\) is defined as:

$$\begin{aligned} \mathsf{sym}(\mathcal{S}):=\max \{\mathsf{sym}({\varvec{s}},\mathcal{S})\;|\;{\varvec{s}}\in \mathcal{S}\}. \end{aligned}$$
(3.8)

The maximizer of (3.8) is called the point of symmetry.
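For a polytope \(\mathcal{S}=\{{\varvec{x}}\;|\;{\varvec{A}}{\varvec{x}}\le {\varvec{b}}\}\), \(\mathsf{sym}({\varvec{s}},\mathcal{S})\) can be computed with one LP per constraint: for constraint \(i\), the worst case over \(\hat{{\varvec{s}}}\in \mathcal{S}\) is attained at \(L_i=\min \{{\varvec{a}}_i^T{\varvec{x}}\;|\;{\varvec{x}}\in \mathcal{S}\}\), which gives \(\mathsf{sym}({\varvec{s}},\mathcal{S})=\min _i (b_i-{\varvec{a}}_i^T{\varvec{s}})/({\varvec{a}}_i^T{\varvec{s}}-L_i)\). The following is a minimal sketch (our own, using scipy; the helper name sym_point is ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def sym_point(A, b, s):
    """sym(s, S) for the polytope S = {x : A x <= b} (our helper)."""
    ratios = []
    for a_i, b_i in zip(A, b):
        # worst case over s_hat in S for constraint i: minimize a_i^T x over S
        L_i = linprog(a_i, A_ub=A, b_ub=b, bounds=(None, None)).fun
        denom = a_i @ s - L_i
        if denom > 1e-12:
            ratios.append((b_i - a_i @ s) / denom)
    return min(ratios) if ratios else np.inf

# standard simplex {d >= 0, e^T d <= 1} in R^m with candidate point e/(m+1)
m = 3
A = np.vstack([np.ones(m), -np.eye(m)])
b = np.concatenate([[1.0], np.zeros(m)])
print(sym_point(A, b, np.full(m, 1.0 / (m + 1))))   # 1/m, consistent with [9]
```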

In Bertsimas and Goyal [8], the authors prove the following bound on the performance of a static solution for the two-stage adjustable robust convex optimization problem with uncertain constraints under some mild conditions:

$$\begin{aligned} z_\mathsf{AR}\le \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) \cdot z_\mathsf{Rob} \end{aligned}$$

We show that for the case of two-stage adjustable robust linear optimization under uncertain constraints, our approximation bound in Theorem 2 is at least as good as the symmetry bound for all instances.

Theorem 4

Consider uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\). Then,

$$\begin{aligned} \max \{\kappa (T(\mathcal{U},{\varvec{h}}))\;|\;{\varvec{h}}>{\varvec{0}}\}\le \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) . \end{aligned}$$

Proof

For a given \({\varvec{h}}>{\varvec{0}}\), from the definition of \(\kappa (\cdot )\) in (3.1), we have

$$\begin{aligned} \mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\subseteq \kappa (T(\mathcal{U},{\varvec{h}}))\cdot T(\mathcal{U},{\varvec{h}}). \end{aligned}$$

Therefore, it is sufficient to show

$$\begin{aligned} \mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\subseteq \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) \cdot T(\mathcal{U},{\varvec{h}}) \end{aligned}$$
(3.9)

for all \({\varvec{h}}>{\varvec{0}}\). Let

$$\begin{aligned} {\varvec{B}}_0=\arg \max \{\mathsf{sym}({\varvec{B}},\mathcal{U})\;|\;{\varvec{B}}\in \mathcal{U}\} \end{aligned}$$

be the point of symmetry. Then, from the result in [9], we have

$$\begin{aligned} \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) \cdot {\varvec{B}}_0\ge {\varvec{B}}, \forall {\varvec{B}}\in \mathcal{U}. \end{aligned}$$
(3.10)

Now, given any \({\varvec{h}}>{\varvec{0}}\), consider an arbitrary \({\varvec{b}}\in \mathsf{conv}(T(\mathcal{U},{\varvec{h}}))\). There exist \({\varvec{B}}_1,\ldots ,{\varvec{B}}_K\in \mathcal{U}\) such that

$$\begin{aligned} {\varvec{b}}=\sum _{j=1}^K \theta _j {\varvec{B}}_j^T{\varvec{\lambda }}^j,\;{\varvec{h}}^T{\varvec{\lambda }}^j=1,\;{\varvec{\lambda }}^j\in {\mathbb R}^m_+,\;j=1,\ldots ,K,\;{\varvec{e}}^T{\varvec{\theta }}=1,\;{\varvec{\theta }}\in {\mathbb R}^K_+. \end{aligned}$$

From (3.10), since \({\varvec{B}}_1,\ldots ,{\varvec{B}}_K\in \mathcal{U}\), we have

$$\begin{aligned} {\varvec{b}}&\le \sum _{j=1}^K \theta _j \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) {\varvec{B}}_0^T{\varvec{\lambda }}^j\\&=\left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) {\varvec{B}}_0^T\left( \sum _{j=1}^K\theta _j{\varvec{\lambda }}^j\right) \in \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) \cdot T(\mathcal{U},{\varvec{h}}). \end{aligned}$$

The final inclusion holds because

$$\begin{aligned} {\varvec{h}}^T\left( \sum _{j=1}^K\theta _j{\varvec{\lambda }}^j\right) =\left( \sum _{j=1}^K\theta _j{\varvec{h}}^T{\varvec{\lambda }}^j\right) ={\varvec{e}}^T{\varvec{\theta }}=1. \end{aligned}$$

Since \(\mathcal{U}\) is down-monotone by assumption, so is \(T(\mathcal{U},{\varvec{h}})\) (Appendix 1), and we have

$$\begin{aligned} {\varvec{b}}\in \left( 1+\frac{1}{\mathsf{sym}(\mathcal{U})}\right) \cdot T(\mathcal{U},{\varvec{h}}). \end{aligned}$$

\(\square \)

Theorem 4 states that our bound in Theorem 2 is at least as good as the symmetry bound and in many cases significantly better. For instance, consider the following example:

$$\begin{aligned} \mathcal{U} = \left\{ {\varvec{B}} \in [0,1]^{n\times n} \; \left| \; \sum _{i,j} B_{ij} \le 1\right. \right\} . \end{aligned}$$

In this case, \(\mathcal U\) satisfies the scaled projections property since all projections are equal. Therefore, from Lemma 2, \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\) and

$$\begin{aligned} \max \{\kappa (T(\mathcal{U},{\varvec{h}}))\;|\;{\varvec{h}}>{\varvec{0}}\}=1. \end{aligned}$$

On the other hand, \(\mathcal U\) is a simplex and \(\mathsf{sym}(\mathcal{U})=\frac{1}{n^2}\) [9]. Therefore,

$$\begin{aligned} 1+\frac{1}{\mathsf{sym}(\mathcal{U})}=n^2+1, \end{aligned}$$

which is a significantly worse bound. Table 1 compares our bound with the symmetry bound for several interesting uncertainty sets.

4 Two-stage model with constraint and objective uncertainty

In this section, we consider a two-stage adjustable robust linear optimization problem where both constraint and objective coefficients are uncertain. In particular, we consider the following two-stage adjustable robust problem \(\varPi _\mathsf{AR}^{(B,d)}\).

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}= \max _{{\varvec{x}}}\;&{\varvec{c}}^T {\varvec{x}} + \min _{({\varvec{B}},{\varvec{d}})\in \mathcal{U}}\;\max _{{\varvec{y}}({\varvec{B}},{\varvec{d}})} {\varvec{d}}^T {\varvec{y}}({\varvec{B}},{\varvec{d}})\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}}({\varvec{B}},{\varvec{d}}) \; \le \; {\varvec{h}}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}_+\nonumber \\&{\varvec{y}}({\varvec{B}},{\varvec{d}}) \; \in \; {\mathbb R}^{n_2}_+,\nonumber \end{aligned}$$
(4.1)

where \({\varvec{A}} \in {\mathbb R}^{m {\times } n_1}\), \({\varvec{c}}\in {\mathbb R}^{n_1}_+\), \({\varvec{h}} \in {\mathbb R}^m_+\), and \(({\varvec{B}},{\varvec{d}})\) denote the uncertain second-stage constraint matrix and objective coefficients, which belong to a convex compact uncertainty set \(\mathcal{U} \subseteq {\mathbb R}^{m \times n_2}_+ \times {\mathbb R}^{n_2}_+\). We consider the case where the uncertainty in the constraint matrix \({\varvec{B}}\) does not depend on the uncertainty in the objective coefficients \({\varvec{d}}\). Therefore,

$$\begin{aligned} \mathcal{U} = \mathcal{U}^B \times \mathcal{U}^d, \end{aligned}$$

where \(\mathcal{U}^B \subseteq {\mathbb R}^{m\times n_2}_+\) is a convex compact uncertainty set of constraint matrices and \(\mathcal{U}^d \subseteq {\mathbb R}^{n_2}_+\) is a convex compact uncertainty set of second-stage objective coefficients. As in previous sections, we can assume without loss of generality that \(\mathcal{U}^B\) is down-monotone.

We formulate the corresponding static robust problem \(\varPi _\mathsf{Rob}^{(B,d)}\) as follows.

$$\begin{aligned} z_\mathsf{Rob}^{(B,d)}= \max _{{\varvec{x}},{\varvec{y}}}\;\min _{{\varvec{d}}\in \mathcal{U}^{d}}\;&{\varvec{c}}^T {\varvec{x}} + {\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \forall {\varvec{B}}\in \mathcal{U}^{B}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}_+\nonumber \\&{\varvec{y}} \; \in \; {\mathbb R}^{n_2}_+.\nonumber \end{aligned}$$
(4.2)

We can compute an optimal static robust solution efficiently: it is easy to see that the separation problem for (4.2) can be solved in polynomial time, and we can also give a compact LP formulation similar to (1.2). Now, if \(({\varvec{x}}^*,{\varvec{y}}^*)\) is an optimal solution of \(\varPi _\mathsf{Rob}^{(B,d)}\), then \({\varvec{x}}={\varvec{x}}^*\), \({\varvec{y}}({\varvec{B}},{\varvec{d}})={\varvec{y}}^*\) for all \(({\varvec{B}},{\varvec{d}})\in \mathcal{U}\) is a feasible solution to \(\varPi _\mathsf{AR}^{(B,d)}\). Therefore,

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}\ge z_\mathsf{Rob}^{(B,d)}. \end{aligned}$$
(4.3)
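As an illustration of this computation, when \(\mathcal{U}^B\) and \(\mathcal{U}^d\) are polytopes given explicitly by their extreme points, (4.2) reduces to a single LP with one constraint block per extreme point. The following is a minimal sketch (our own construction using scipy; the helper static_robust is ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def static_robust(A, c, h, B_list, d_list):
    """Solve (4.2) when U^B = conv{B_k} and U^d = conv{d_l} (our sketch)."""
    n1, n2, m = len(c), B_list[0].shape[1], len(h)
    # variables [x (n1), y (n2), z (1)]; linprog minimizes, so negate c and z
    obj = np.concatenate([-np.asarray(c), np.zeros(n2), [-1.0]])
    A_ub, b_ub = [], []
    for B in B_list:              # A x + B_k y <= h for every extreme point B_k
        A_ub.append(np.hstack([A, B, np.zeros((m, 1))]))
        b_ub.append(h)
    for d in d_list:              # z <= d_l^T y for every extreme point d_l
        A_ub.append(np.hstack([np.zeros((1, n1)), -d[None, :], [[1.0]]]))
        b_ub.append([0.0])
    bounds = [(0, None)] * (n1 + n2) + [(None, None)]   # x, y >= 0; z free
    res = linprog(obj, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=bounds)
    return -res.fun

# tiny instance: A = 0, c = 0, h = e, U^d = {e}; y <= e is forced, value 2.0
A = np.zeros((2, 1)); c = np.zeros(1); h = np.ones(2)
print(static_robust(A, c, h, [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])],
                    [np.ones(2)]))
```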

We prove the following main theorem.

Theorem 5

Let \(z_\mathsf{AR}^{(B,d)}\) be the optimal objective value of \(\varPi _\mathsf{AR}^{(B,d)}\) in (4.1) defined over the uncertainty \(\mathcal{U}=\mathcal{U}^B\times \mathcal{U}^d\). Let \(z_\mathsf{Rob}^{(B,d)}\) be the optimal objective value of \(\varPi _\mathsf{Rob}^{(B,d)}\) in (4.2). Also, let

$$\begin{aligned} \rho (\mathcal{U}^B) = \max _{{\varvec{h}} > {\varvec{0}}} \kappa (T(\mathcal{U}^B, {\varvec{h}})). \end{aligned}$$

Then,

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}\le \rho (\mathcal{U}^B) \cdot z_\mathsf{Rob}^{(B,d)}. \end{aligned}$$

Furthermore, the bound is tight.

If \(T(\mathcal{U}^B, {\varvec{h}})\) is convex for all \({\varvec{h}} > {\varvec{0}}\), then \(\rho (\mathcal{U}^B) =1\) and \(z_\mathsf{AR}^{(B,d)}\le z_\mathsf{Rob}^{(B,d)}\). In this case, Theorem 5 implies that a static solution is optimal for the adjustable robust problem \(\varPi _\mathsf{AR}^{(B,d)}\). In particular, if \(\mathcal{U}^B\) is constraint-wise or has scaled projections, then \(T(\mathcal{U}^B, {\varvec{h}})\) is convex for all \({\varvec{h}} > {\varvec{0}}\) (Lemmas 1 and 2) and a static solution is optimal. In general, the performance of a static solution depends on the worst-case measure of non-convexity of \(T(\mathcal{U}^B, {\varvec{h}})\) over all \({\varvec{h}} >{\varvec{0}}\). Surprisingly, the approximation bound for the static solution does not depend on the uncertainty set \(\mathcal{U}^d\) of objective coefficients.

To prove Theorem 5, we introduce the following one-stage models, \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) and \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\), as in Sect. 2.

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{({\varvec{B}},{\varvec{d}})\in \mathcal{U}} \max _{{\varvec{y}}}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}\\&{\varvec{y}} \; \in \; {\mathbb R}^{n}_+,\end{aligned}$$
(4.4)
$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \max _{{\varvec{y}}}\;\min _{{\varvec{d}}\in \mathcal{U}^{d}}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \; \forall {\varvec{B}}\in \mathcal{U}^{B}\\&{\varvec{y}} \; \in \; {\mathbb R}^{n}_+,\nonumber \end{aligned}$$
(4.5)

where \(\mathcal{U} = \mathcal{U}^B \times \mathcal{U}^d\) and \({\varvec{h}}>{\varvec{0}}\). Similar to Lemmas 3 and 4, we can reformulate the above problems as optimization problems over the transformation set \(T(\mathcal{U}^B,{\varvec{h}})\).

Lemma 5

The one-stage adjustable robust problem \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (4.4) can be written as:

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}) = \min _{\lambda ,{\varvec{b}},{\varvec{d}}} \{\;\lambda \;|\;\lambda {\varvec{b}}\ge {\varvec{d}}, {\varvec{b}}\in T(\mathcal{U}^B,{\varvec{h}}),{\varvec{d}}\in \mathcal{U}^d\}. \end{aligned}$$

Proof

Consider \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\). By writing the dual of its inner maximization problem, we have

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}})&= \min _{{\varvec{B}},{\varvec{d}},{\varvec{\alpha }}} \{ {\varvec{h}}^T {\varvec{\alpha }} \;|\;{\varvec{B}}^T {\varvec{\alpha }} \ge {\varvec{d}}, ({\varvec{B}},{\varvec{d}})\in \mathcal{U},{\varvec{\alpha }}\in {\mathbb R}^{m}_+\} \\&= \min _{\lambda ,{\varvec{B}},{\varvec{d}},{\varvec{\alpha }}} \left\{ \lambda {\varvec{h}}^T \left( \frac{{\varvec{\alpha }}}{\lambda }\right) \; \left| \;\lambda {\varvec{B}}^T\left( \frac{{\varvec{\alpha }}}{\lambda }\right) \ge {\varvec{d}}, {\varvec{h}}^T{\varvec{\alpha }}=\lambda , ({\varvec{B}},{\varvec{d}})\in \mathcal{U}, {\varvec{\alpha }}\in {\mathbb R}^{m}_+\right. \right\} \\&= \min _{\lambda ,{\varvec{b}},{\varvec{d}}} \{\;\lambda \;|\;\lambda {\varvec{b}}\ge {\varvec{d}}, {\varvec{b}}\in T(\mathcal{U}^B,{\varvec{h}}),{\varvec{d}}\in \mathcal{U}^d\}, \end{aligned}$$

where the last equality holds by substituting \({\varvec{b}} = {\varvec{B}}^T({\varvec{\alpha }}/\lambda ) \in T(\mathcal{U}^B,{\varvec{h}})\) and because \(\mathcal{U} = \mathcal{U}^B \times \mathcal{U}^d\). \(\square \)

Lemma 6

The one-stage static robust problem \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (4.5) can be written as:

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \min _{\lambda ,{\varvec{b}},{\varvec{d}}} \{\;\lambda \; | \; \lambda {\varvec{b}} \ge {\varvec{d}}, {\varvec{b}}\in \mathsf{conv}(T(\mathcal{U}^B,{\varvec{h}})),{\varvec{d}}\in \mathcal{U}^d\}. \end{aligned}$$

Proof

Suppose

$$\begin{aligned} \mathcal{U} = \mathsf{conv}(({\varvec{B}}_1,{\varvec{d}}_1), \ldots , ({\varvec{B}}_K,{\varvec{d}}_K)) \end{aligned}$$

where \(({\varvec{B}}_j,{\varvec{d}}_j), j=1,\ldots ,K\) are the extreme points of \(\mathcal{U}\). We can rewrite (4.5) as follows.

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\;=\max _{z,{\varvec{y}}} \{ \;z\;|\;{\varvec{B}}_j{\varvec{y}}\le {\varvec{h}}, z-{\varvec{d}}_j^T {\varvec{y}}\le 0, \;\forall j=1,\ldots ,K, {\varvec{y}}\in {\mathbb R}^{n}_+\}. \end{aligned}$$

Again, by writing the dual problem, we have

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}) = \min _{{\varvec{\alpha }}_j\in {\mathbb R}^{m}_+,{\varvec{\beta }}\in {\mathbb R}^{K}_+} \left\{ \; \sum _{j=1}^K {\varvec{h}}^T {\varvec{\alpha }}_j \;\left| \;\sum _{j=1}^K {\varvec{B}}_j^T {\varvec{\alpha }}_j\ge \sum _{j=1}^K \beta _j{\varvec{d}}_j,{\varvec{e}}^T{\varvec{\beta }}=1\right. \right\} . \end{aligned}$$

Since \(\mathcal{U} = \mathcal{U}^B \times \mathcal{U}^d\), the objective \({\varvec{d}}\) can be chosen independently of \({\varvec{B}}\). Let \(\theta _j={\varvec{h}}^T{\varvec{\alpha }}_j\) and \(\lambda ={\varvec{e}}^T{\varvec{\theta }}\). Using arguments similar to the proof of Lemma 4, we have \({\varvec{\theta }}>{\varvec{0}}\) and \(\lambda >0\). Therefore,

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})&= \min _{\lambda ,{\varvec{\alpha }}_j \ge {\varvec{0}},{\varvec{d}}\in \mathcal{U}^d} \left\{ \; \sum _{j=1}^K \theta _j{\varvec{h}}^T\left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) \; \left| \; \lambda \sum _{j=1}^K \frac{\theta _j}{\lambda }{\varvec{B}}_j^T \left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) \ge {\varvec{d}} \right. \right\} \\&= \min _{\lambda ,\hat{{\varvec{b}}}_j,{\varvec{d}}} \left\{ \;\lambda \;\left| \;\lambda \sum _{j=1}^K \frac{\theta _j}{\lambda } \hat{{\varvec{b}}}_j\ge {\varvec{d}}, \hat{{\varvec{b}}}_j\in T(\mathcal{U}^B,{\varvec{h}}), {\varvec{d}}\in \mathcal{U}^d \right. \right\} \\&= \min _{\lambda ,{\varvec{b}},{\varvec{d}}} \{\; \lambda \; | \; \lambda {\varvec{b}} \ge {\varvec{d}}, {\varvec{b}}\in \mathsf{conv}(T(\mathcal{U}^B,{\varvec{h}})),{\varvec{d}}\in \mathcal{U}^d\}, \end{aligned}$$

where the second equality holds because \({\varvec{h}}^T\left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) =1, j=1,\ldots ,K\). \(\square \)

Now, we are ready to prove Theorem 5.

Proof of Theorem 5

Suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}},{\varvec{d}}),({\varvec{B}},{\varvec{d}})\in \mathcal{U})\) is a fully-adjustable optimal solution for \(\varPi _\mathsf{AR}^{(B,d)}\). As discussed earlier, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\). Then,

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}&= {\varvec{c}}^T{\varvec{x}}^*+\min _{({\varvec{B}},{\varvec{d}})\in \mathcal{U}} \max _{{\varvec{y}}({\varvec{B}},{\varvec{d}})\ge {\varvec{0}}} \left\{ {\varvec{d}}^T{\varvec{y}}({\varvec{B}},{\varvec{d}})\;\left| \;{\varvec{B}}{\varvec{y}}({\varvec{B}},{\varvec{d}})\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right. \right\} \nonumber \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*), \end{aligned}$$
(4.6)

and

$$\begin{aligned} z_\mathsf{Rob}^{(B,d)}&\ge {\varvec{c}}^T{\varvec{x}}^*+ \max _{{\varvec{y}}\ge {\varvec{0}}} \min _{{\varvec{d}}\in \mathcal{U}^d} \left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*,\forall {\varvec{B}}\in \mathcal{U}^B\right. \right\} \nonumber \\&={\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*). \end{aligned}$$
(4.7)

Let \(\hat{{\varvec{h}}}={\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\) and \(\kappa =\kappa (T(\mathcal{U}^B,\hat{{\varvec{h}}}))\). Suppose \((\hat{\lambda },\hat{{\varvec{b}}},\hat{{\varvec{d}}})\) is an optimal solution for \(\varPi _\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}})\). Therefore,

$$\begin{aligned} \hat{{\varvec{b}}}\in \mathsf{conv}(T(\mathcal{U}^B,\hat{{\varvec{h}}}))\Rightarrow \frac{1}{\kappa }\cdot \hat{{\varvec{b}}}\in T(\mathcal{U}^B,\hat{{\varvec{h}}}). \end{aligned}$$

Also,

$$\begin{aligned} \hat{\lambda }\cdot \hat{{\varvec{b}}}\ge \hat{{\varvec{d}}}\Rightarrow (\kappa \hat{\lambda })\cdot \left( \frac{1}{\kappa }\hat{{\varvec{b}}}\right) \ge \hat{{\varvec{d}}}, \end{aligned}$$

which implies that \((\kappa \hat{\lambda }, \hat{{\varvec{b}}}/\kappa , \hat{{\varvec{d}}})\) is a feasible solution to \(\varPi _\mathsf{AR}^I(\mathcal{U},\hat{{\varvec{h}}})\) and

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U},\hat{{\varvec{h}}})\le \kappa \cdot z_\mathsf{Rob}^I(\mathcal{U},\hat{{\varvec{h}}}). \end{aligned}$$

From (4.6), we have

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\nonumber \\&\le {\varvec{c}}^T{\varvec{x}}^*+\kappa \cdot z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\\&\le \kappa \cdot ({\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{Rob}^I(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*))\nonumber \\&\le \kappa \cdot z_\mathsf{Rob}^{(B,d)},\nonumber \end{aligned}$$
(4.8)

where the second inequality in (4.8) holds because \(\kappa \ge 1\) and \({\varvec{c}}^T{\varvec{x}}^*\ge 0\), and the last inequality follows from (4.7).

We can show that the bound is tight using the same family of uncertainty sets of matrices \(\mathcal{U}^B_{\theta }\) as in the discussion of Theorem 2:

$$\begin{aligned} \mathcal{U}^B_{\theta } = \left\{ {\varvec{B}} \in [0,1]^{n\times n} \; \left| \; B_{ij}=0, \; \forall i\ne j, \; \sum _{j=1}^n B_{jj}^{\theta } \le 1\right. \right\} . \end{aligned}$$

Consider the following instance of \(\varPi _\mathsf{AR}^{(B,d)}\) and \(\varPi _\mathsf{Rob}^{(B,d)}\):

$$\begin{aligned} {\varvec{A}}={\varvec{0}},{\varvec{c}}={\varvec{0}}, {\varvec{h}}={\varvec{e}}, \mathcal{U}^{d}=\{{\varvec{e}}\}. \end{aligned}$$

From the discussion following Theorem 2, the bound in Theorem 5 is tight. \(\square \)
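The tightness can also be observed numerically. For \(\mathcal{U}^B_{\theta }\) with \(\theta \ge 1\) and the instance above, the adversary's problem reduces to minimizing \(\sum _j 1/B_{jj}\) subject to \(\sum _j B_{jj}^{\theta }\le 1\), whose symmetric optimum \(B_{jj}=n^{-1/\theta }\) gives \(z_\mathsf{AR}^{(B,d)}=n^{1+1/\theta }\), while \({\varvec{y}}={\varvec{e}}\) gives \(z_\mathsf{Rob}^{(B,d)}=n\). A small sketch (ours, using scipy):

```python
import numpy as np
from scipy.optimize import minimize

# Adversary's problem for U^B_theta with A = 0, c = 0, h = e, U^d = {e}:
#   min sum_j 1/B_jj   s.t.   sum_j B_jj^theta <= 1,   0 < B_jj <= 1.
n, theta = 4, 2.0
res = minimize(
    lambda B: np.sum(1.0 / B),
    x0=np.full(n, 0.3),
    bounds=[(1e-6, 1.0)] * n,
    constraints=[{"type": "ineq", "fun": lambda B: 1.0 - np.sum(B ** theta)}],
)
print(res.fun, n ** (1 + 1 / theta))   # z_AR: both ~ 8.0 for n = 4, theta = 2
print(res.fun / n)                     # gap z_AR / z_Rob ~ n^(1/theta) = 2.0
```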

Note that, surprisingly, the bound depends only on the measure of non-convexity of \(\mathcal{U}^B\) and is unrelated to \(\mathcal{U}^d\). Therefore, if \(T(\mathcal{U}^B,{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\), then a static solution is optimal for the adjustable robust problem \(\varPi _\mathsf{AR}^{(B,d)}\) irrespective of \(\mathcal{U}^d\). In the special case where there is no uncertainty in \({\varvec{B}}\), i.e., \(\mathcal{U}^{B}=\{{\varvec{B}}^0\}\), and the only uncertainty is in \(\mathcal{U}^d\), \(T(\mathcal{U}^B,{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\) and a static solution is optimal. In fact, the optimality of a static solution in this case follows from von Neumann's min-max theorem [22]; we can therefore interpret our result as a generalization of von Neumann's theorem.

General Case when \(\mathcal{U}\) is not a Cartesian product. For the general case where the uncertainty set \(\mathcal U\) of constraint matrices \({\varvec{B}}\) and objective coefficients \({\varvec{d}}\) is not a Cartesian product of the respective uncertainty sets, our bound of Theorem 5 does not extend. Consider the following example:

$$\begin{aligned}&{\varvec{A}}={\varvec{0}},{\varvec{c}}={\varvec{0}}, {\varvec{h}}={\varvec{e}},\\&\quad \mathcal{U}=\left\{ ({\varvec{B}},{\varvec{d}})\in {\mathbb R}^{n\times n}_+\times {\mathbb R}^n_+\;\left| \;\sum _{i=1}^n B_{ii}\le \sum _{j=1}^n d_j\le 1, {\varvec{d}}\ge \frac{\epsilon }{n}{\varvec{e}},B_{ij}=0, \forall i\ne j\right. \right\} \!. \end{aligned}$$

Now,

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}&= \min _{({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}} \; \max _{{\varvec{y}}} \left\{ {\varvec{d}}^T {\varvec{y}} \;\left| \; B_{jj} y_j \le 1, \; \forall j=1,\ldots ,n, \; {\varvec{y}} \ge {\varvec{0}}\right. \right\} \\&= \min _{({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}} \; \sum _{j=1}^n \frac{d_j}{B_{jj}} \\&\ge 1, \end{aligned}$$

where the second equality follows from the fact that at an optimum of the outer minimization problem, \(B_{jj} > 0\) for all \(j=1,\ldots ,n\), and \(y_j = 1/B_{jj}\) at an optimum of the inner maximization problem; otherwise, if \(B_{jj}=0\) for some \(j\), then \(y_j\) and \(d_j y_j\) are both unbounded since \(d_j \ge \epsilon /n >0\). The last inequality follows since for any \(({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}\),

$$\begin{aligned} \sum _{j=1}^n B_{jj} \le \sum _{j=1}^n d_j, \end{aligned}$$

which implies that \(B_{jj} \le d_j\), and hence \(d_j/B_{jj} \ge 1\), for some \(j \in [n]\).

For the robust problem \({\varPi }^{(B,d)}_\mathsf{Rob}\), consider any static solution \({\varvec{y}} \ge {\varvec{0}}\). For all \(j=1,\ldots ,n\),

$$\begin{aligned} B_{jj} y_j \le 1, \; \forall ({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}. \end{aligned}$$

Since there exists \(({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}\) with \(B_{jj} = 1\), we have \(y_j \le 1\) for all \(j =1,\ldots ,n\). Moreover, \({\varvec{y}} = {\varvec{e}}\) is a feasible solution since \(B_{jj} \le 1\) for all \(({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}\) and all \(j\in [n]\). Therefore,

$$\begin{aligned} {z}^{(B,d)}_\mathsf{Rob} = \min _{({\varvec{B}}, {\varvec{d}}) \in \mathcal{U}} \; {\varvec{d}}^T {\varvec{e}} \le \epsilon , \end{aligned}$$

where the inequality follows by setting \(d_j = \epsilon /n\) for all \(j=1,\ldots ,n\). Therefore,

$$\begin{aligned} z_\mathsf{AR}^{(B,d)}\ge \frac{1}{\epsilon }\cdot {z}^{(B,d)}_\mathsf{Rob}, \end{aligned}$$

where \(\epsilon >0\) is arbitrary. Therefore, the performance of the optimal static robust solution as compared to the optimal fully adjustable solution cannot be bounded by the measure of non-convexity as in Theorem 5.
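The gap is easy to observe numerically: for small \(\epsilon \), the adjustable value stays at least 1 while the static value equals \(\epsilon \). A small sketch (ours, using scipy) of the adversary's problem \(\min \{\sum _j d_j/B_{jj}\}\) over the set above:

```python
import numpy as np
from scipy.optimize import minimize

# variables z = (B_11, ..., B_nn, d_1, ..., d_n) for the example with n = 2
n, eps = 2, 0.01
obj = lambda z: np.sum(z[n:] / z[:n])                # sum_j d_j / B_jj
cons = [
    {"type": "ineq", "fun": lambda z: np.sum(z[n:]) - np.sum(z[:n])},  # sum B <= sum d
    {"type": "ineq", "fun": lambda z: 1.0 - np.sum(z[n:])},            # sum d <= 1
]
bounds = [(1e-6, 1.0)] * n + [(eps / n, 1.0)] * n    # B > 0, d >= eps/n
z0 = np.concatenate([np.full(n, 0.1), np.full(n, 0.2)])
res = minimize(obj, z0, bounds=bounds, constraints=cons)
print(res.fun)   # z_AR stays >= 1 ...
print(eps)       # ... while z_Rob = eps (static y = e with d = (eps/n) e)
```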

4.1 Constraint, right-hand-side and objective uncertainty

In this section, we discuss the case where the constraint coefficients, the right-hand side and the objective coefficients are all uncertain. Consider the following adjustable robust problem \(\varPi _\mathsf{AR}^{(B,h,d)}\).

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}= \max _{{\varvec{x}}} \;&{\varvec{c}}^T {\varvec{x}} + \min _{({\varvec{B}},{\varvec{h}}, {\varvec{d}})\in \mathcal{U}^{B,h,d}}\max _{{\varvec{y}}({\varvec{B}},{\varvec{h}},{\varvec{d}})} {\varvec{d}}^T {\varvec{y}}({\varvec{B}},{\varvec{h}},{\varvec{d}})\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}}({\varvec{B}},{\varvec{h}},{\varvec{d}}) \; \le \; {\varvec{h}}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}_+\nonumber \\&{\varvec{y}}({\varvec{B}},{\varvec{h}},{\varvec{d}}) \; \in \; {\mathbb R}^{n_2}_+,\nonumber \end{aligned}$$
(4.9)

where \({\varvec{A}} \in {\mathbb R}^{m {\times } n_1}\) and \({\varvec{c}}\in {\mathbb R}^{n_1}_+\). In this case, \(({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d}\) are uncertain and \(\mathcal{U}^{B,h,d}\subseteq {\mathbb R}^{m\times n_2}_+\times {\mathbb R}^m_+ \times {\mathbb R}^{n_2}_+\) is convex and compact. We consider the case where the uncertainties in the constraint matrix \({\varvec{B}}\) and the right-hand-side vector \({\varvec{h}}\) are independent of the uncertainty in the objective coefficients \({\varvec{d}}\), i.e.,

$$\begin{aligned} \mathcal{U}^{B,h,d}=\mathcal{U}^{B,h}\times \mathcal{U}^d, \end{aligned}$$

where \(\mathcal{U}^{B,h}\subseteq {\mathbb R}^{m\times (n_2+1)}\) is the convex compact uncertainty set of constraint matrices and right-hand-side vectors, and \(\mathcal{U}^d\subseteq {\mathbb R}^{n_2}\) is the convex compact uncertainty set of objective coefficients.

The corresponding static robust version \(\varPi _\mathsf{Rob}^{(B,h,d)}\), can be formulated as follows.

$$\begin{aligned} z_\mathsf{Rob}^{(B,h,d)}= \max _{{\varvec{x}},{\varvec{y}}}\;\min _{{\varvec{d}}\in \mathcal{U}^d}\;&{\varvec{c}}^T {\varvec{x}} + {\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \forall ({\varvec{B}},{\varvec{h}})\in \mathcal{U}^{B,h}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}_+\nonumber \\&{\varvec{y}} \; \in \; {\mathbb R}^{n_2}_+.\nonumber \end{aligned}$$
(4.10)

We can compute an optimal solution for (4.10) efficiently by solving a compact LP formulation for its separation problem. Now, we study the performance of a static solution and show that it is optimal if \(\mathcal{U}^{B,h}\) is constraint-wise. In particular, we have the following theorem.

Theorem 6

Let \(z_\mathsf{AR}^{(B,h,d)}\) be the optimal value of \(\varPi _\mathsf{AR}^{(B,h,d)}\) defined in (4.9) and \(z_\mathsf{Rob}^{(B,h,d)}\) be the optimal value of \(\varPi _\mathsf{Rob}^{(B,h,d)}\) defined in (4.10). If \(\mathcal{U}^{B,h}\) is constraint-wise, then a static solution is optimal for \(\varPi _\mathsf{AR}^{(B,h,d)}\), i.e.,

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}=z_\mathsf{Rob}^{(B,h,d)}. \end{aligned}$$
(4.11)

To prove Theorem 6, we need to introduce the one-stage models. Consider the one-stage adjustable robust problem \(\varPi _\mathsf{AR}^I(\mathcal{U}^{B,h,d})\)

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U}^{B,h,d}) = \min _{({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d}} \max _{{\varvec{y}}}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}\\&{\varvec{y}} \; \in \; {\mathbb R}^n_+,\nonumber \end{aligned}$$
(4.12)

where \(\mathcal{U}^{B,h,d}=\mathcal{U}^{B,h}\times \mathcal{U}^d\). The corresponding one-stage static robust problem \(\varPi _\mathsf{Rob}^I(\mathcal{U}^{B,h,d})\) can be formulated as follows

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U}^{B,h,d}) = \max _{{\varvec{y}}}\;\min _{{\varvec{d}}\in \mathcal{U}^d}\;&{\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \; \forall ({\varvec{B}},{\varvec{h}})\in \mathcal{U}^{B,h}\\&{\varvec{y}} \; \in \; {\mathbb R}^{n}_+.\nonumber \end{aligned}$$
(4.13)

We can reformulate these models as optimization problems over \(T(\mathcal{U}^{B,h},{\varvec{e}})\).

Lemma 7

The one-stage adjustable robust problem \(\varPi _\mathsf{AR}^I(\mathcal{U}^{B,h,d})\) defined in (4.12) can be written as

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U}^{B,h,d}) = \min _{\lambda ,{\varvec{b}},t,{\varvec{d}}} \{\lambda t\;|\;\lambda {\varvec{b}} \ge {\varvec{d}},({\varvec{b}},t)\in T(\mathcal{U}^{B,h},{\varvec{e}}), {\varvec{d}}\in \mathcal{U}^d\}. \end{aligned}$$

We present the proof of Lemma 7 in “Appendix 4”.

Lemma 8

The one-stage static-robust problem \(\varPi _\mathsf{Rob}^I(\mathcal{U}^{B,h,d})\) defined in (4.13) can be written as

$$\begin{aligned} z_\mathsf{Rob}^I(\mathcal{U}^{B,h,d}) = \min _{\lambda ,{\varvec{b}},t,{\varvec{d}}} \{\lambda t\;|\;\lambda {\varvec{b}} \ge {\varvec{d}},({\varvec{b}},t)\in \mathsf{conv}(T(\mathcal{U}^{B,h},{\varvec{e}})), {\varvec{d}}\in \mathcal{U}^d\}. \end{aligned}$$

We present the proof of Lemma 8 in “Appendix 4”. Now, with the formulations in Lemma 7 and Lemma 8, we are ready to prove Theorem 6.

Proof of Theorem 6

Suppose \((\tilde{{\varvec{x}}},\tilde{{\varvec{y}}})\) is an optimal solution of \(\varPi _\mathsf{Rob}^{(B,h,d)}\). Then \({\varvec{x}}=\tilde{{\varvec{x}}}\), \({\varvec{y}}({\varvec{B}},{\varvec{h}},{\varvec{d}})=\tilde{{\varvec{y}}}\) for all \(({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d}\) is a feasible solution to \(\varPi _\mathsf{AR}^{(B,h,d)}\). Therefore,

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}\ge z_\mathsf{Rob}^{(B,h,d)}. \end{aligned}$$
(4.14)

On the other hand, suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}},{\varvec{h}},{\varvec{d}}),({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d})\) is a fully-adjustable optimal solution for \(\varPi _\mathsf{AR}^{(B,h,d)}\). As discussed earlier, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\) for all \({\varvec{h}}\) such that \(({\varvec{B}},{\varvec{h}})\in \mathcal{U}^{B,h}\) for some \({\varvec{B}}\). Then,

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}&= {\varvec{c}}^T{\varvec{x}}^*+\min _{({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d}} \max _{{\varvec{y}} \ge {\varvec{0}}} \left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}} \le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right. \right\} \nonumber \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^I(\mathcal{U}^{B,h-Ax^*,d}), \end{aligned}$$
(4.15)

and

$$\begin{aligned} z_\mathsf{Rob}^{(B,h,d)}&\ge {\varvec{c}}^T{\varvec{x}}^*+ \max _{{\varvec{y}}\ge {\varvec{0}}} \min _{{\varvec{d}}\in \mathcal{U}^d} \left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*,\forall ({\varvec{B}},{\varvec{h}})\in \mathcal{U}^{B,h}\right. \right\} \nonumber \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{Rob}^I(\mathcal{U}^{B,h-Ax^*,d}). \end{aligned}$$
(4.16)

Since \(\mathcal{U}^{B,h}\) is constraint-wise, so is \(\mathcal{U}^{B,h-Ax^*}\). Therefore, \(T(\mathcal{U}^{B,h-Ax^*},{\varvec{e}})\) is convex from Lemma 1 and \(T(\mathcal{U}^{B,h-Ax^*},{\varvec{e}})=\mathsf{conv}(T(\mathcal{U}^{B,h-Ax^*},{\varvec{e}}))\). From Lemma 7 and Lemma 8, this implies that

$$\begin{aligned} z_\mathsf{AR}^I(\mathcal{U}^{B,h-Ax^*,d})=z_\mathsf{Rob}^I(\mathcal{U}^{B,h-Ax^*,d}). \end{aligned}$$

Therefore, from (4.15) and (4.16), we have

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}\le z_\mathsf{Rob}^{(B,h,d)}. \end{aligned}$$

Together with (4.14), we have \(z_\mathsf{AR}^{(B,h,d)}=z_\mathsf{Rob}^{(B,h,d)}\). \(\square \)

We note that we cannot extend an approximation bound similar to Theorem 5 to this case. In fact, the measure of non-convexity of \(T(\mathcal{U}^{B,h},{\varvec{e}})\) is not even well defined here, since \(\mathcal{U}^{B,h}\) is not down-monotone.

5 Conclusion

In this paper, we study the performance of a static robust solution as an approximation for the two-stage adjustable robust linear optimization problem under uncertain constraint and objective coefficients. We give a tight characterization of the performance of the static solution and relate it to the measure of non-convexity of the transformation \(T(\mathcal{U},\cdot )\) of the uncertainty set \(\mathcal U\). In particular, we show that a static solution is optimal if \(T(\mathcal{U}, {\varvec{h}})\) is convex for all \({\varvec{h}} >{\varvec{0}}\). If \(T(\mathcal{U},\cdot )\) is not convex, its measure of non-convexity gives a tight bound on the performance of static solutions. For several interesting families of uncertainty sets, such as constraint-wise sets and sets with scaled projections, we show that \(T(\mathcal{U},{\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\), thereby extending the result of Ben-Tal and Nemirovski [4] for the case where \(\mathcal U\) is contained in the non-negative orthant. Moreover, our approximation bound is at least as good as the symmetry bound in Bertsimas and Goyal [8], and often significantly better.

We also extend our results to models where both constraint and objective coefficients are uncertain. We show that if \(\mathcal{U}= \mathcal{U}^B \times \mathcal{U}^d\), where \(\mathcal{U}^B\) is the set of uncertain second-stage constraint matrices \({\varvec{B}}\) and \(\mathcal{U}^d\) is the set of uncertain second-stage objective coefficients, then the performance of a static solution is related to the measure of non-convexity of \(T(\mathcal{U}^B, \cdot )\). In particular, a static solution is optimal if \(T(\mathcal{U}^B, {\varvec{h}})\) is convex for all \({\varvec{h}}>{\varvec{0}}\). Surprisingly, the performance of the static solution does not depend on the uncertainty set \(\mathcal{U}^d\). We also present several examples to illustrate this optimality and the tightness of our bound.

Our results develop new geometric intuition about the performance of static robust solutions for adjustable robust problems. The reformulations of the adjustable robust and static robust problems based on the transformation \(T(\mathcal{U}, \cdot )\) of the uncertainty set \(\mathcal{U}\) give interesting insights into the properties of \(\mathcal{U}\) for which a static robust solution does not perform well. Therefore, our results provide useful guidance in selecting uncertainty sets such that the adjustable robust problem can be well approximated by a static solution.