1 Introduction

Duality theory of linear programming and optimality principles of many classes of nonlinear optimization problems are often grounded in Farkas’ lemma or its generalizations. In 1901, Farkas [1] established a dual characterization for a system of linear inequalities. This characterization was later used to derive necessary optimality principles for nonlinear programming problems; these principles, known as the Kuhn–Tucker conditions, were published in the early 1950s by Kuhn and Tucker [2]. Importantly, Farkas’ lemma provides a numerically verifiable dual condition for non-negativity of a linear function over a system of linear inequalities, since the dual condition can be verified by solving a linear program. We refer the interested reader to [3,4,5,6,7,8] for generalizations of Farkas’ lemma and their applications to linear and nonlinear optimization problems.

In this paper, by establishing a new generalization of Farkas’ lemma, we present strong duality theorems for affinely adjustable two-stage robust linear programs with a general uncertainty set. Affinely adjustable two-stage robust linear programs appear in dynamic decision-making models in the face of evolving uncertainty (see Sect. 3 of this paper and also [9, 10]). Adjustable robust optimization is a powerful methodology, which was pioneered by Ben-Tal and his co-workers (see, e.g., [11]) for treating multi-stage robust optimization problems. It offers less conservative decisions than the classical (deterministic) robust optimization [9] as some of the decisions can be adjusted at later stages.

However, even two-stage robust optimization problems with adjustable variables are in general computationally intractable. Indeed, the second-stage adjustable decision is a mapping, and so it is generally very difficult to establish numerically verifiable solvability characterizations or approximations for such a system unless the mapping is restricted to some special classes. A great deal of recent research on two-stage adjustable robust linear programming assumes that the second-stage adjustable decision satisfies an affine decision rule, which restricts the second-stage mapping to depend affinely on the uncertain parameters (see, e.g., [9, 11,12,13,14,15,16] and the references therein).

Our main contributions in this paper include the following:

  1. (i)

    By introducing the notion of a parameterized affine rule, we first establish a new affinely adjustable two-stage version of Farkas’ lemma (see Theorem 2.1). This two-stage Farkas’ lemma provides a numerically verifiable dual condition for non-negativity of a linear function over a two-stage system of linear homogeneous inequalities with affinely adjustable variables, where the dual condition can be verified by solving a semidefinite linear program. We give a direct and self-contained proof for deriving the generalized Farkas’ lemma using the Hahn–Banach hyperplane separation theorem together with a variable transformation.

  2. (ii)

    Employing the adjustable Farkas’ lemma, we then establish strong duality between an affinely adjustable two-stage robust linear program and its dual semidefinite program. In the case of an ellipsoidal uncertainty set, we derive strong duality between a two-stage robust linear program and its dual second-order cone program. These strong duality theorems not only show that the primal and dual program values are equal, but also allow one to find the optimal value of a two-stage robust linear program by solving a conic linear program such as a semidefinite linear program or a second-order cone program.

Although we employ a traditional approach by making use of the Hahn–Banach hyperplane separation theorem to establish the new version of Farkas’ lemma, our use of a variable transformation permits us to obtain numerically verifiable dual conditions and consequently formulate a dual conic linear program for a two-stage robust linear program. We utilize a general closed cone regularity condition, which guarantees that strong duality holds between the primal and dual pairs, and hence obtain an exact conic linear program for the two-stage robust program.

The regularity condition is guaranteed by a Slater-type strict inequality condition. As an illustration, we obtain a semidefinite linear program as an exact dual problem for an adjustable two-stage lot-sizing problem on a network under a simple strict inequality condition and show how the optimal storage cost of an adjustable two-stage lot-sizing problem under a ball uncertainty set can be found by solving its dual semidefinite program, using a commonly available software package.

The outline of the paper is as follows. Section 2 presents generalizations of Farkas’ lemma for systems of parametric linear inequalities involving adjustable variables. Section 3 establishes strong duality between two-stage robust linear programs and their dual conic linear programs and provides exact conic linear programs for these two-stage programs under a regularity condition. Section 4 describes an application of our duality results to find the optimal storage cost of an adjustable two-stage lot-sizing problem with uncertain demand by solving its dual semidefinite program. Numerical examples are also given to illustrate how a two-stage program and a two-stage lot-sizing problem can be solved, using a MATLAB software package. The last section summarizes the main results and provides perspectives.

2 Adjustable Farkas’ Lemma

Let us first fix some notation and definitions. For each \(n\in {\mathbb {N}}\), the notation \({\mathbb {R}}^n\) signifies the n-dimensional Euclidean space, whose norm is denoted by \(\Vert \cdot \Vert \). The inner product in \({\mathbb {R}}^n\) is defined by \(\langle x,y\rangle :=x^\top y\) for all \(x, y\in {\mathbb {R}}^n.\) The origin of any space is denoted by 0, but we may use \(0_n\) for the origin of \({\mathbb {R}}^n\) in situations where confusion might otherwise be possible. For a non-empty set \(\varOmega \subset {\mathbb {R}}^n,\) \(\mathrm{conv\,}\varOmega \) denotes the convex hull of \(\varOmega \) and \(\mathrm{cl\,}\varOmega \) stands for the closure of \(\varOmega \), while \(\mathrm{coneco\,}\varOmega :={\mathbb {R}}_+\mathrm{conv\,}\varOmega \) stands for the convex conical hull of \(\varOmega \cup \{0_n\}\), where \({\mathbb {R}}_+:=[0,+\infty [\subset {\mathbb {R}}.\) As usual, the symbol \(I_n\) stands for the identity \((n\times n)\) matrix. A symmetric \((n\times n)\) matrix A is said to be positive semidefinite, denoted by \(A\succeq 0\), whenever \(x^\top Ax\ge 0\) for all \(x\in {\mathbb {R}}^n.\) If \(x^\top Ax> 0\) for all \(x\in {\mathbb {R}}^n\setminus \{0_n\}\), then A is called positive definite, denoted by \(A\succ 0\).

A general parametric (semi-infinite) linear homogeneous system with adjustable variables reads as follows:

$$\begin{aligned} x\in {\mathbb {R}}^n,\; x(\cdot ),\; (a^{0}_j+\sum ^{s}_{r=1}u_ra_{j}^r)^\top x+ \theta _{j}^\top x(u)\le 0 ,\; j=1,\ldots ,q, \; \forall u\in U, \end{aligned}$$
(1)

where \(a_j^r\in {\mathbb {R}}^n, r=0,1,\ldots ,s, j=1,\ldots ,q\) and \(\theta _j:=(\theta ^1_j,\ldots ,\theta _j^m)\in {\mathbb {R}}^m, j=1,\ldots ,q\) are fixed and the set U is assumed to be a spectrahedron (see, e.g., [17]) given by

$$\begin{aligned} U:=\{u:=(u_1,\ldots ,u_s)\in {\mathbb {R}}^s \,:\, A_0+\sum \limits _{r=1}^su_rA_r\succeq 0 \} \end{aligned}$$
(2)

with the symmetric square matrices \(A_r, r=0,1,\ldots ,s\). In what follows, we assume that U is non-empty and bounded.
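
As a simple illustration of the spectrahedral form (2) (recorded here only for orientation), the box \([-1,1]^s\) is obtained by choosing

$$\begin{aligned} A_0:=I_{2s},\quad A_r:= \left( \begin{array}{cc} \mathrm{diag}(e_r) &{} 0 \\ 0&{} -\mathrm{diag}(e_r) \\ \end{array} \right) ,\; r=1,\ldots ,s, \end{aligned}$$

where \(e_r\in {\mathbb {R}}^s\) is the vector whose rth component is one and all others are zero and \(\mathrm{diag}(e_r)\) is the \((s\times s)\) diagonal matrix with the entries of \(e_r\) on its diagonal. Indeed, \(A_0+\sum \limits _{r=1}^su_rA_r=\mathrm{diag}(1+u_1,\ldots ,1+u_s,1-u_1,\ldots ,1-u_s)\succeq 0\) if and only if \(|u_r|\le 1\) for all \(r=1,\ldots ,s\), and this set is clearly non-empty and bounded. An ellipsoidal uncertainty set admits a similar representation (see (41) in Sect. 3).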

Note that, in (1), the adjustable variable \(x(\cdot )\) is a mapping \(x:U\rightarrow {\mathbb {R}}^m\), and so it is generally hard to obtain numerically verifiable dual solvability characterizations of the system with a mapping \(x(\cdot )\) unless the system is restricted to some special classes of mappings. We now introduce a parameterized affine rule, where the mapping \(x(\cdot )\) satisfies the rule:

$$\begin{aligned} x(u):=\rho y+(1-\rho )Wu. \end{aligned}$$
(3)

Here, \(y\in {\mathbb {R}}^m\) and \(W\in {\mathbb {R}}^{m\times s}\) are non-adjustable variables and \(\rho \in [0,1]\) is a scale parameter. If \(\rho \in ]0,1[\) then, by considering \({{\widetilde{y}}}:=\rho y\) and \({{\widetilde{W}}}:=(1-\rho )W\), the second-stage decision rule in (3) becomes the standard affine rule \(x(u):={{\widetilde{y}}}+{{\widetilde{W}}}u\) (cf. [11, Page 356]) (see also, [10, Equation (8)]), where \({{\widetilde{y}}}\in {\mathbb {R}}^m\) and \({{\widetilde{W}}}\in {\mathbb {R}}^{m\times s}\) are non-adjustable variables. Other special cases of commonly used affine rules and their corresponding results are given later in this section.

We now consider a parametric linear homogeneous inequality system with affinely adjustable variables as follows:

$$\begin{aligned} x\in {\mathbb {R}}^n, y\in {\mathbb {R}}^m, W\in {\mathbb {R}}^{m\times s}, \;&(a^{0}_j+\sum ^{s}_{r=1}u_ra_{j}^r)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho y+(1-\rho )Wu, \; \forall u\in U, \end{aligned}$$
(RS)

where U is the parameter set given as in (2). In what follows, we use \(W_r, r=1,\ldots ,s\) as the columns of the matrix W.

Let \(a_j:{\mathbb {R}}^s\rightarrow {\mathbb {R}}^n, j=1,\ldots ,q,\) be given by

$$\begin{aligned}&a_j(u):=a^{0}_j+\sum ^{s}_{r=1}u_ra_{j}^r \; \text{ for } \; u:=(u_1,\ldots , u_s)\in {\mathbb {R}}^s \end{aligned}$$
(4)

with \(a_j^r\in {\mathbb {R}}^n, r=0,1,\ldots ,s, j=1,\ldots ,q\) fixed as above and denote

$$\begin{aligned} C:=\mathrm{coneco}\big \{\big (a_j(u), \rho \theta _j, (1-\rho )u_1\theta _j,\ldots , (1-\rho )u_s\theta _j\big )\,:\, j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$
(5)

Theorem 2.1

(Adjustable Farkas’ Lemma) Let \(\wp \in {\mathbb {R}}^n, \upsilon \in {\mathbb {R}}^m, \sigma _r\in {\mathbb {R}}^m, r=1,\ldots ,s\) and assume that the adjustable linear homogeneous inequality system (RS) has a solution; i.e.,

$$\begin{aligned} X:=\{(x,y,W)\in {\mathbb {R}}^{n+m+m\times s}\,:\,\;&a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho y+(1-\rho )Wu, \forall u\in U\}\ne \emptyset . \end{aligned}$$

Let \(W_r, r=1,\ldots ,s\) be the columns of the matrix W. Suppose that the cone C in (5) is closed. Then, the following statements are equivalent:

(a) The following implication holds:

$$\begin{aligned} \big [&x\in {\mathbb {R}}^n, y\in {\mathbb {R}}^m, W\in {\mathbb {R}}^{m\times s}, \; a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho y+(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow \wp ^\top x+\upsilon ^\top y+\sum _{r=1}^s\sigma _r^\top W_r \ge 0. \end{aligned}$$

(b) There exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

$$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \; \upsilon +\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\sigma _r+\sum \limits _{j=1}^q \lambda _j^r(1-\rho )\theta _j =0, r=1,\ldots ,s, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$

To prove this theorem, we need the following technical lemma.

Lemma 2.1

Consider the cone C given as in (5). Then, for each \(v^*\in C\), there exist \(\alpha _j\ge 0, j=1,\ldots ,q\) and \( u^j:=(u^j_1,\ldots ,u^j_s)\in U, j=1,\ldots ,q\) such that

$$\begin{aligned} v^*=\sum \limits _{j=1}^q\alpha _j\big (a_j(u^j),\rho \theta _j,(1-\rho )u^j_1\theta _j,\ldots , (1-\rho )u^j_s\theta _j\big ), \end{aligned}$$

where the affine maps \(a_j, j=1,\ldots ,q\) are given as in (4) and the convex set U is defined as in (2).

Proof

Let \(v^*\in C.\) By the definition of C and Carathéodory’s theorem, we find \(\alpha _{ji}\ge 0\) and \( u^{ji}:=(u^{ji}_1,\ldots ,u^{ji}_s)\in U\), \(j=1,\ldots ,q, i=1,\ldots ,n+m+ms+1\), such that

$$\begin{aligned} v^*=\sum \limits _{j=1}^q\sum \limits _{i=1}^{n+m+ms+1}\alpha _{ji}\big (a_j(u^{ji}),\rho \theta _j,(1-\rho )u^{ji}_1\theta _j,\ldots , (1-\rho )u^{ji}_s\theta _j\big ). \end{aligned}$$
(6)

For each \(j\in \{1,\ldots ,q\},\) let \(\alpha _{j}:=\sum \limits _{i=1}^{n+m+ms+1}\alpha _{ji}\ge 0.\) If \(\alpha _j=0,\) then we take \(u^j:=(u^{j1}_1,\ldots ,u^{j1}_s)\in U.\) If \(\alpha _j>0\), then we put \(u^j:=\frac{1}{\alpha _j}\sum \limits _{i=1}^{n+m+ms+1}\alpha _{ji}u^{ji}\in U,\) where the latter relation holds due to the convexity of U. As \(a_j\) is an affine map, it holds that \(\alpha _ja_j(u^j)=\sum \limits _{i=1}^{n+m+ms+1}\alpha _{ji}a_j(u^{ji}).\) Now, we get by (6) that

$$\begin{aligned} v^*=\sum \limits _{j=1}^q\alpha _j\big (a_j(u^j),\rho \theta _j,(1-\rho )u^j_1\theta _j,\ldots , (1-\rho )u^j_s\theta _j\big ), \end{aligned}$$

which completes the proof of the lemma. \(\square \)

Proof of Theorem 2.1 [(a) \(\Longrightarrow \) (b)] Assume that (a) holds. We first show that

$$\begin{aligned} -(\wp ,\upsilon ,\sigma _1,\ldots ,\sigma _s)\in C, \end{aligned}$$
(7)

where C is given by (5).

(Applying the separation theorem) Suppose by contradiction that \(-(\wp ,\upsilon ,\sigma _1,\ldots ,\sigma _s)\notin C.\) As C is closed and convex, we can invoke the strong separation theorem (see, e.g., [18, Theorem 2.2]) to find \((x^*,y^*,z^*_1,\ldots ,z^*_s)\in {\mathbb {R}}^{n+m+ms}\setminus \{0_{n+m+ms}\}\) such that

$$\begin{aligned} \sup {\big \{\langle (x^*,y^*,z^*_1,\ldots ,z^*_s),c\rangle \,:\, c\in C\big \}}<-\wp ^\top x^*-\upsilon ^\top y^*-\sum _{r=1}^s\sigma _r^\top z^*_r. \end{aligned}$$
(8)

Note by \(0_{n+m+ms}\in C\) that

$$\begin{aligned} \wp ^\top x^*+\upsilon ^\top y^*+\sum _{r=1}^s\sigma _r^\top z^*_r<0. \end{aligned}$$
(9)

We assert further that

$$\begin{aligned} \langle (x^*,y^*,z^*_1,\ldots ,z^*_s),c\rangle \le 0\; \text{ for } \text{ all } \; c\in C. \end{aligned}$$
(10)

Otherwise, there exists \(c_0\in C\) such that \(\langle (x^*,y^*,z^*_1,\ldots ,z^*_s),c_0\rangle > 0.\) This, together with the fact that \(\lambda c_0\in C\) for all \(\lambda >0\), implies that

$$\begin{aligned} \langle (x^*,y^*,z^*_1,\ldots ,z^*_s),\lambda c_0\rangle =\lambda \langle (x^*,y^*,z^*_1,\ldots ,z^*_s),c_0\rangle \rightarrow +\infty \; \text{ as } \; \lambda \rightarrow +\infty . \end{aligned}$$

This clearly contradicts (8), and hence, assertion (10) is valid. This, together with the fact that \( \big (a_j(u), \rho \theta _j, (1-\rho )u_1\theta _j,\ldots , (1-\rho )u_s\theta _j\big )\in C, j=1,\ldots ,q \) for all \(u\in U\), implies that

$$\begin{aligned} a_j(u)^\top x^*+\rho \theta _j^\top y^*+(1-\rho )\sum _{r=1}^su_r\theta _j^\top z^*_r\le 0, \; j=1,\ldots ,q,\; \forall u:=(u_1,\ldots ,u_s)\in U. \end{aligned}$$
(11)

By \(X\ne \emptyset ,\) we find \(({{\overline{x}}},{{\overline{y}}},{{\overline{W}}})\in {\mathbb {R}}^{n+m+m\times s}\) such that \(a_j(u)^\top {{\overline{x}}}+\rho \theta _j^\top \overline{y}+(1-\rho )\sum \limits _{r=1}^su_r\theta _j^\top {{\overline{W}}}_r\le 0, j=1,\ldots ,q\) for all \(u\in U\), where \({{\overline{W}}}_r, r=1,\ldots ,s\), are the columns of the matrix \({{\overline{W}}}\). This, together with (11), entails that

$$\begin{aligned} a_j(u)^\top x^\lambda + \rho \theta _j^\top y^\lambda +(1-\rho )\sum \limits _{r=1}^su_r\theta _j^\top W_{r}^{\lambda } \le 0,\; j=1,\ldots ,q,\; \forall u:=(u_1,\ldots ,u_s)\in U,\end{aligned}$$
(12)

where \(x^\lambda :={{\overline{x}}}+\lambda x^*, y^\lambda :={{\overline{y}}}+\lambda y^*, W_r^\lambda :={{\overline{W}}}_r+\lambda z^*_r, r=1,\ldots , s\) for \(\lambda >0.\) In other words, denoting by \(W^\lambda , \lambda >0,\) the matrix with columns \(W_r^\lambda , r=1,\ldots ,s\), we assert by (12) that \(x^\lambda \in {\mathbb {R}}^n, y^\lambda \in {\mathbb {R}}^m, W^\lambda \in {\mathbb {R}}^{m\times s} \) and \( a_j(u)^\top x^\lambda + \theta _{j}^\top x(u) \le 0, x(u)=\rho y^\lambda +(1-\rho )W^\lambda u, j=1,\ldots ,q \) for all \(u\in U\). On the one hand, since (a) holds, we arrive at

$$\begin{aligned} \wp ^\top x^\lambda +\upsilon ^\top y^\lambda +\sum _{r=1}^s\sigma _r^\top W^\lambda _r \ge 0\; \text{ for } \text{ all } \; \lambda >0. \end{aligned}$$
(13)

On the other hand, by (9), we obtain that

$$\begin{aligned} \wp ^\top x^\lambda +\upsilon ^\top y^\lambda +\sum _{r=1}^s\sigma _r^\top W^\lambda _r=\wp ^\top \overline{x}+\upsilon ^\top {{\overline{y}}}+\sum _{r=1}^s\sigma _r^\top \overline{W}_r+\lambda (\wp ^\top x^*+\upsilon ^\top y^*+\sum _{r=1}^s\sigma _r^\top z^*_r)\rightarrow -\infty \end{aligned}$$

as \(\lambda \rightarrow +\infty .\) This clearly contradicts (13), and consequently, (7) holds.

(Employing a variable transformation) By (7) and Lemma 2.1, we find \(\alpha _j\ge 0, j=1,\ldots ,q\) and \( u^j:=(u^j_1,\ldots ,u^j_s)\in U, j=1,\ldots ,q\) such that

$$\begin{aligned}&-\wp =\sum \limits _{j=1}^q\alpha _j\big (a^0_j+\sum ^{s}_{r=1}u^j_ra^r_j\big ),\;\\&-\upsilon =\sum \limits _{j=1}^q\alpha _j\rho \theta _j,\; -\sigma _r= \sum \limits _{j=1}^q\alpha _ju^j_r(1-\rho )\theta _j, r=1,\ldots , s.\end{aligned}$$

Letting \(\lambda ^0_j:=\alpha _j\ge 0, \lambda _j^r:=\alpha _ju^j_r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots , s\), we obtain that

$$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \;\upsilon +\sum \limits _{j=1}^q\lambda _j^0\rho \theta _j=0,\;\\&\sigma _r+\sum ^{q}_{j=1}\lambda ^r_j(1-\rho )\theta _j=0, r=1,\ldots , s. \end{aligned}$$

The relation \(u^j\in U\) means that \(A_0+\sum \limits _{r=1}^su^j_rA_r\succeq 0\) for \(j=1,\ldots ,q.\) We show that

$$\begin{aligned} \lambda _j^0A_0+\sum _{r=1}^s\lambda _j^rA_r\succeq 0,\; j=1,\ldots ,q. \end{aligned}$$
(14)

Let \(j\in \{1,\ldots ,q\}\) be arbitrary. If \(\lambda _j^0= 0\), then \(\lambda _j^r=0\) for all \(r=1,\ldots ,s,\) and hence, (14) holds trivially. If \(\lambda _j^0\ne 0,\) then

$$\begin{aligned} \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r=\lambda _j^0\left( A_0+\sum _{r=1}^s\frac{\lambda _j^r}{\lambda _j^0}A_r\right) = \lambda _j^0\left( A_0+\sum _{r=1}^su^j_rA_r\right) \succeq 0,\end{aligned}$$

showing (14) holds, too. So, we arrive at the conclusion that (b) holds.

[(b) \(\Longrightarrow \) (a)] Assume that (b) holds. Then, there exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

$$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \;\upsilon +\sum \limits _{j=1}^q\lambda _j^0\rho \theta _j=0, \end{aligned}$$
(15)
$$\begin{aligned}&\sigma _r+\sum ^{q}_{j=1}\lambda ^r_j (1-\rho )\theta _j=0, r=1,\ldots ,s, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda _j^rA_r\succeq 0, j=1,\ldots , q. \end{aligned}$$
(16)

(Employing an inverse variable transformation) Given \(j\in \{1,\ldots ,q\}\), we consider the following matrix inequality

$$\begin{aligned} \lambda _j^0A_0+\sum _{r=1}^s\lambda _j^rA_r\succeq 0. \end{aligned}$$
(17)

We claim by (17) and the boundedness of U that if \(\lambda _j^0=0\), then \(\lambda _j^r=0\) for all \(r=1,\ldots ,s.\) Suppose, on the contrary, that \(\lambda _j^0=0\) but there exists \(r_0\in \{1,\ldots ,s\}\) with \(\lambda _j^{r_0}\ne 0.\) In this case, (17) becomes \(\sum \limits ^{s}_{r=1}\lambda _j^rA_r\succeq 0\). Let \({{\overline{u}}}:= ({{\overline{u}}}_1,\ldots ,{{\overline{u}}}_s)\in U.\) It follows by definition that \(A_0+\sum \limits _{r=1}^s{{\overline{u}}}_rA_r\succeq 0\) and

$$\begin{aligned} A_0+\sum _{r=1}^s(\overline{u}_r+\gamma \lambda _j^r)A_r=\left( A_0+\sum _{r=1}^s\overline{u}_rA_r\right) +\gamma \sum _{r=1}^s\lambda _j^rA_r\succeq 0 \text{ for } \text{ all } \gamma >0. \end{aligned}$$

This guarantees that \(\overline{u}+\gamma (\lambda _j^1,\ldots ,\lambda _j^s)\in U\) for all \(\gamma >0\), which contradicts the fact that U is a bounded set. So, our claim must be true.

By \(U\ne \emptyset ,\) for each \(j\in \{1,\ldots ,q\}\), we can take \({{\widehat{u}}}^j:=({{\widehat{u}}}^j_1,\ldots ,{{\widehat{u}}}^j_s)\in U\) and define \(\widetilde{u}^j:=({{\widetilde{u}}}^j_1,\ldots ,{{\widetilde{u}}}^j_s)\), where

$$\begin{aligned} {\widetilde{u}}^j_r:={\left\{ \begin{array}{ll} {{\widehat{u}}}^j_r, &{} \text{ if } \lambda _j^0=0,\\ \frac{\lambda _j^r}{\lambda _j^0}, &{} \text{ if } \lambda _j^0\ne 0,\end{array}\right. } \; r=1,\ldots ,s. \end{aligned}$$

Then, it can be checked that

$$\begin{aligned} A_0+\sum _{r=1}^s{{\widetilde{u}}}^j_rA_r ={\left\{ \begin{array}{ll} A_0+\sum \limits _{r=1}^s{{\widehat{u}}}^j_rA_r, &{} \text{ if } \lambda _j^0=0,\\ \frac{1}{\lambda _j^0}(\lambda _j^0 A_0+\sum \limits _{r=1}^s\lambda _j^r A_r), &{} \text{ if } \lambda _j^0\ne 0,\end{array}\right. } \end{aligned}$$

which shows that \(A_0+\sum \limits _{r=1}^s{{\widetilde{u}}}^j_rA_r\succeq 0\), and so, \(\widetilde{u}^j\in U.\) Then, we have

$$\begin{aligned}&\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum \limits ^{s}_{r=1}\lambda ^r_ja^r_j)=\sum \limits _{j=1}^q\lambda _j^0\big (a^0_j+\sum \limits ^{s}_{r=1}\widetilde{u}^j_ra^r_j\big ), \end{aligned}$$
(18)
$$\begin{aligned}&\sum ^{q}_{j=1}\lambda _j^r\theta _j= \sum \limits _{j=1}^q\lambda _j^0 {{\widetilde{u}}}^j_r \theta _j, \; r=1,\ldots , s, \end{aligned}$$
(19)

where we note that if \(\lambda _j^0=0\), then \(\lambda _j^r=0\) for all \(r=1,\ldots ,s, j=1,\ldots , q\) as shown above.

To prove (a), let \((x,y,W)\in {\mathbb {R}}^{n+m+m\times s}\) be such that \(a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, j=1,\ldots ,q\) for all \(u\in U\), where \(x(u)=\rho y+(1-\rho )Wu \). It means that

$$\begin{aligned}&a_j(u)^\top x+\rho \theta _j^\top y+(1-\rho )\sum _{r=1}^su_r\theta _j^\top W_r \le 0,\;\nonumber \\&j=1,\ldots ,q,\;\forall u:=(u_1,\ldots ,u_s)\in U, \end{aligned}$$
(20)

where \(W_r, r=1,\ldots ,s\) are the columns of the matrix W. Observe by (15), (16), (18) and (19) that

$$\begin{aligned}&\wp ^\top x+\upsilon ^\top y+\sum _{r=1}^s\sigma ^\top _rW_r \\&\quad = -\left( \sum \limits _{j=1}^q\lambda ^0_j\Bigg (a_j({{\widetilde{u}}}^j)^\top x+\rho \theta _j^\top y+(1-\rho )\sum _{r=1}^s {{\widetilde{u}}}^j_r \theta _j^\top W_r \Bigg )\right) \ge 0, \end{aligned}$$

where the last inequality holds due to (20) and the fact that \({{\widetilde{u}}}^{j}\in U, j=1,\ldots ,q.\) So, (a) holds. The proof of the theorem is complete. \(\square \)

Let us now examine some particular cases of the adjustable linear homogeneous inequality system (RS). The first particular case is \(x(u):=\rho (x_1,\ldots ,x_m)+(1-\rho )Wu\) with \(1\le m<n\); i.e., the adjustable variable \(x(\cdot )\) depends on some components of the non-adjustable (first-stage) variable \(x:=(x_1,\ldots ,x_m,\ldots ,x_n).\)

Corollary 2.1

(Partially adjustable Farkas’ lemma) Let \(1\le m<n\) and denote \(x:=(\xi ,\zeta ),\) where \(\xi :=(x_1,\ldots ,x_m)\) and \(\zeta :=(x_{m+1},\ldots ,x_n)\) are partitions of \(x:=(x_1,\ldots ,x_m,\ldots ,x_n)\). Let \((\wp _1,\wp _2)\in {\mathbb {R}}^m\times {\mathbb {R}}^{n-m}, \sigma _r\in {\mathbb {R}}^m, r=1,\ldots , s\), and assume that the adjustable linear homogeneous inequality system (RS) with \(x(u):=\rho \xi +(1-\rho )Wu\) has a solution; i.e.,

$$\begin{aligned} X_2:=\{(x,W)\in {\mathbb {R}}^{n}\times {\mathbb {R}}^{m\times s}\,:\,&a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho \xi +(1-\rho )Wu, \forall u\in U\}\ne \emptyset . \end{aligned}$$

Suppose that the following cone

$$\begin{aligned} C_2:=\mathrm{coneco}\big \{&\big ( a_j^1(u)+\rho \theta ^1_j, \ldots , a_j^m(u)+\rho \theta ^m_j,a_j^{m+1}(u),\ldots , a_j^n(u),\nonumber \\&\quad (1-\rho )u_1\theta _j,\ldots , (1-\rho )u_s\theta _j\big )\;:\nonumber \\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \} \end{aligned}$$
(21)

is closed, where \(a_j(u)=(a_j^1(u),\ldots ,a_j^n(u))\) for \(u\in U.\) We also partition \(a^r_j \) as \(a^r_j:=(a^r_{j1},a^r_{j2})\) with \(a^r_{j1}\in {\mathbb {R}}^m\) and \(a^r_{j2}\in {\mathbb {R}}^{n-m}\) for \( r=0,1,\ldots ,s, j=1,\ldots , q. \) Then, the following statements are equivalent:

(i) The following implication holds:

$$\begin{aligned} \big [&x\in {\mathbb {R}}^n, W\in {\mathbb {R}}^{m\times s}, \; a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho \xi +(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow \left( \begin{array}{c} \wp _1\\ \wp _2 \\ \end{array} \right) ^\top x+ \sum _{r=1}^s\sigma _r^\top W_r \ge 0, \end{aligned}$$

where \(W_r, r=1,\ldots ,s\) are the columns of the matrix W.

(ii) There exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

$$\begin{aligned}&\wp _1+\sum \limits _{j=1}^q(\lambda _j^0a^0_{j1}+\lambda _j^0\rho \theta _j+\sum ^{s}_{r=1}\lambda ^r_ja^r_{j1})=0, \; \wp _2+\sum \limits _{j=1}^q(\lambda _j^0a^0_{j2}+\sum ^{s}_{r=1}\lambda ^r_ja^r_{j2})=0, \\&\sigma _r+\sum ^{q}_{j=1}\lambda _j^r(1-\rho )\theta _j=0,\; r=1,\ldots , s, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$

Proof

Let \({\widetilde{x}}:=(\xi ,\zeta ,W_1,\ldots ,W_s),\) \({\widetilde{a}}_j^0:=(a^0_{j1}+\rho \theta _j, a^0_{j2}, 0_{m},\ldots ,0_m), {\widetilde{a}}_j^r:=(a^r_{j1},a^r_{j2},\varTheta ^r_j), r=1,\ldots ,s, j=1,\ldots , q, \) where \(\varTheta ^r_j:=(0_m,\ldots ,(1-\rho )\theta _j,\ldots ,0_m)\in {\mathbb {R}}^{m\times s}\) with \(\theta _j\) standing at the rth position, and define affine maps \({\widetilde{a}}_j:{\mathbb {R}}^{s}\rightarrow {\mathbb {R}}^{n+m\times s}, j=1,\ldots ,q\) as

$$\begin{aligned}&{\widetilde{a}}_j(u):={\widetilde{a}}^{0}_j+\sum ^{s}_{r=1} u_r{\widetilde{a}}_{j}^r \; \text{ for } \; u:=(u_1,\ldots , u_s)\in {\mathbb {R}}^{s}. \end{aligned}$$

In what follows, we also use the notation \({\widetilde{a}}_j(u):=({\widetilde{a}}_j^1(u),\ldots ,{\widetilde{a}}_j^{n+m\times s}(u))\) for \(u\in U\) and \(j=1,\dots , q.\) Set \({\widetilde{\theta }}_{j}:=0_m, j=1,\ldots ,q\), and observe that \(X_2\ne \emptyset \) if and only if \({\widetilde{X}}\ne \emptyset \), where

$$\begin{aligned} {\widetilde{X}}:=\{({\widetilde{x}},{\widetilde{y}},{\widetilde{W}})\in {\mathbb {R}}^{n+m\times s}\times {\mathbb {R}}^m\times {\mathbb {R}}^{m\times s}\,:\,&{\widetilde{a}}_j(u)^\top {\widetilde{x}}+ {\widetilde{\theta }}_{j}^\top {\widetilde{x}}(u) \le 0, \; j=1,\ldots ,q,\\&{\widetilde{x}}(u)=\rho {\widetilde{y}}+(1-\rho ){\widetilde{W}}u, \forall u\in U\}. \end{aligned}$$

Assume (i) holds. Then, the following implication holds:

$$\begin{aligned} \big [&{\widetilde{x}}\in {\mathbb {R}}^{n+m\times s}, {\widetilde{y}}\in {\mathbb {R}}^{m}, {\widetilde{W}}\in {\mathbb {R}}^{m\times s}, \; {\widetilde{a}}_j(u)^\top {\widetilde{x}}+ {\widetilde{\theta }}_{j}^\top {\widetilde{x}}(u) \le 0, \; j=1,\ldots ,q,\nonumber \\&{\widetilde{x}}(u)= \rho {\widetilde{y}}+(1-\rho ){\widetilde{W}}u,\; \forall u\in U \big ]\Longrightarrow {\widetilde{\wp }}^\top {\widetilde{x}} \ge 0, \end{aligned}$$
(22)

where \( {\widetilde{\wp }}:=(\wp _1,\wp _2,\sigma _1,\ldots ,\sigma _s).\) Moreover, it can be verified that the closedness of \(C_2\) guarantees the closedness of the following cone \({\widetilde{C}}\), where

$$\begin{aligned} {\widetilde{C}}&:=\mathrm{coneco}\big \{\big ({\widetilde{a}}_j(u), \rho {\widetilde{\theta }}_j, (1-\rho )u_1{\widetilde{\theta }}_j, \ldots , (1-\rho )u_s{\widetilde{\theta }}_j\big )\,:\, j=1,\ldots ,q,\\&\quad u:=(u_1,\ldots ,u_s)\in U\big \}=\mathrm{coneco}\big \{\big ( a_j^1(u)+\rho \theta ^1_j, \ldots , a_j^m(u)+\rho \theta ^m_j,a_j^{m+1}(u),\\&\quad \quad \quad \;\; \ldots , a_j^n(u), (1-\rho )u_1\theta _j,\ldots , (1-\rho )u_s\theta _j,0_m, \ldots , 0_m\big ) \,:\, j=1,\ldots ,q,\\&\quad u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Applying the adjustable Farkas’ lemma given in Theorem 2.1, we find \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

$$\begin{aligned} {\widetilde{\wp }}+\sum \limits _{j=1}^{q}(\lambda _j^0{\widetilde{a}}^0_j+\sum ^{s}_{r=1}\lambda ^r_j{\widetilde{a}}^r_j)=0, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q, \end{aligned}$$

which shows that

$$\begin{aligned}&\wp _1+\sum \limits _{j=1}^q(\lambda _j^0a^0_{j1}+\lambda _j^0\rho \theta _j+\sum ^{s}_{r=1}\lambda ^r_ja^r_{j1})=0, \; \wp _2+\sum \limits _{j=1}^q(\lambda _j^0a^0_{j2}+\sum ^{s}_{r=1}\lambda ^r_ja^r_{j2})=0, \\&\sigma _r+\sum ^{q}_{j=1}\lambda _j^r(1-\rho )\theta _j=0, r=1,\ldots , s,\; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$

This means that (ii) is valid. In the same way, we can employ Theorem 2.1 to show that the converse implication also holds, and so the proof is complete. \(\square \)

Remark 2.1

For the case of \(x(u):=\rho x+(1-\rho )Wu\) (i.e., the adjustable variable depends on the entire components of the non-adjustable variable), we can employ the adjustable Farkas’ lemma in Theorem 2.1 to derive the following version of the Farkas’ lemma.

Let \(\wp \in {\mathbb {R}}^n, \sigma _r\in {\mathbb {R}}^n, r=1,\ldots , s\), and assume that the adjustable linear homogeneous inequality system (RS) with \(m=n\) and \(x(u):=\rho x+(1-\rho )Wu\) has a solution; i.e.,

$$\begin{aligned} X_1:=\{(x,W)\in {\mathbb {R}}^{n}\times {\mathbb {R}}^{n\times s}\,:\,\,&a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho x+(1-\rho )Wu, \forall u\in U\}\ne \emptyset . \end{aligned}$$

Suppose that the following cone \( C_1\) is closed, where

$$\begin{aligned} C_1&:=\mathrm{coneco}\big \{\big (a_j(u)+\rho \theta _j, (1-\rho )u_1\theta _j,\ldots , (1-\rho )u_s\theta _j\big ) \,:\,\\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Then, the following statements are equivalent:

  1. (i)

    The following implication holds:

    $$\begin{aligned} \big [&x\in {\mathbb {R}}^n, W\in {\mathbb {R}}^{n\times s}, \; a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho x+(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow \wp ^\top x+ \sum _{r=1}^s\sigma _r^\top W_r \ge 0, \end{aligned}$$

    where \(W_r, r=1,\ldots ,s\) are the columns of the matrix W.

  2. (ii)

    There exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

    $$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\lambda _j^0\rho \theta _j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \; \sigma _r+\sum ^{q}_{j=1}\lambda _j^r(1-\rho )\theta _j=0, r=1,\ldots ,s,\\&\lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$

In the second particular case, where \(\theta _j:=0_m, j=1,\ldots ,q\) (i.e., there are no adjustable variables in the system (RS)), the adjustable Farkas’ lemma reduces to the semi-infinite Farkas’ lemma (see, e.g., [19]) for a linear homogeneous inequality system.

Corollary 2.2

(Semi-infinite Farkas’ Lemma) Let \(\wp \in {\mathbb {R}}^n\), and assume that the adjustable linear homogeneous inequality system (RS) with \(\theta _j:=0_m, j=1,\ldots ,q\) has a solution; i.e.,

$$\begin{aligned} {\widetilde{X}}:=\{x\in {\mathbb {R}}^{n}\,:\,\;&a_j(u)^\top x \le 0, \; j=1,\ldots ,q,\; \forall u\in U\}\ne \emptyset . \end{aligned}$$

Suppose that the following cone \({{\widetilde{C}}}\) is closed, where

$$\begin{aligned} {{\widetilde{C}}}:=\mathrm{coneco}\big \{a_j(u) \,:\, j=1,\ldots ,q, u\in U\big \}. \end{aligned}$$
(23)

Then, the following statements are equivalent:

  1. (i)

    The following implication holds:

    $$\begin{aligned} \big [ x\in {\mathbb {R}}^n, \; a_j(u)^\top x \le 0, \; j=1,\ldots ,q,\; \forall u\in U \big ]\Longrightarrow \wp ^\top x \ge 0. \end{aligned}$$
  2. (ii)

    There exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

    $$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$

Proof

Observe first that \({{\widetilde{X}}}\ne \emptyset \) if and only if \(X\ne \emptyset \), where

$$\begin{aligned} X:=\{(x,y,W)\in {\mathbb {R}}^{n+m+m\times s}\,:\,\;&a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\\&x(u)=\rho y+(1-\rho )Wu, \forall u\in U\}, \end{aligned}$$

with \(\theta _j:=0_m, j=1,\ldots ,q.\)

Assume (i) holds. Then, by letting \(\upsilon :=\sigma _r:=0_m, r=1,\ldots ,s\), we get from (i) the following implication

$$\begin{aligned} \big [&x\in {\mathbb {R}}^n, y\in {\mathbb {R}}^m, W\in {\mathbb {R}}^{m\times s}, \; a_j(u)^\top x+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q,\nonumber \\&x(u)=\rho y+(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow \wp ^\top x+\upsilon ^\top y+\sum _{r=1}^s\sigma _r^\top W_r \ge 0, \end{aligned}$$
(24)

which is exactly of the form of statement (a) in Theorem 2.1. In this setting, it can be verified that the closedness of \({{\widetilde{C}}}\) implies the closedness of C in (5) as

$$\begin{aligned} C=\mathrm{coneco}\big \{\big (a_j(u), 0_m, 0_m, \ldots , 0_m\big ) \,:\, j=1,\ldots ,q, u\in U\big \}. \end{aligned}$$

Applying now Theorem 2.1, we assert that (24) is equivalent to saying that there exist \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q, r=1,\ldots ,s\) such that

$$\begin{aligned}&\wp +\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{r=1}\lambda ^r_ja^r_j)=0, \; \upsilon +\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0, \end{aligned}$$
(25)
$$\begin{aligned}&\sigma _r+\sum ^{q}_{j=1}\lambda _j^r(1-\rho )\theta _j=0, r=1,\ldots , s, \; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q. \end{aligned}$$
(26)

Note that conditions (25) and (26) are exactly (ii) because \(\upsilon =\sigma _r=0_m, r=1,\ldots , s\) and \(\theta _j:=0_m, j=1,\ldots ,q.\) In the same way, we can employ Theorem 2.1 to show that the converse implication also holds, and so the proof is complete. \(\square \)

We close this section with a remark that, in a special case where the parameter set U in (2) is a singleton, the adjustable Farkas’ lemma reduces to a classical form of the Farkas’ lemma for a linear homogeneous inequality system.
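
For the reader’s convenience, this classical form may be stated as follows: for given vectors \(c_1,\ldots ,c_p\) and c in \({\mathbb {R}}^d\), the implication

$$\begin{aligned} \big [ z\in {\mathbb {R}}^d,\; c_j^\top z\le 0,\; j=1,\ldots ,p \big ]\Longrightarrow c^\top z \ge 0 \end{aligned}$$

holds if and only if there exist \(\lambda _j\ge 0, j=1,\ldots ,p\), such that \(c+\sum \limits _{j=1}^p\lambda _jc_j=0\); equivalently, \(-c\in \mathrm{coneco\,}\{c_1,\ldots ,c_p\}\). Since finitely generated convex cones are automatically closed, no additional closedness assumption is needed in this classical setting.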

3 Exact SDPs for Two-Stage Robust Linear Programs

In this section, we apply the adjustable Farkas’ lemma obtained in Theorem 2.1 to derive strong duality relations for a two-stage robust linear optimization problem, where the dual problem is a conic linear program and so, the optimal value of the underlying two-stage robust linear optimization problem can be found by solving this conic linear program.

A linear program in the face of constraint data uncertainty can be captured by the program

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^n}{\{c^\top x\,:\, a_j(u)^\top x\le b_j(u), \; j=1,\ldots ,q\}}, \end{aligned}$$
(27)

where \(c\in {\mathbb {R}}^n\) is fixed, \(u\in U\), the uncertainty set

$$\begin{aligned} U:=\big \{u:=(u_1,\ldots ,u_s)\in {\mathbb {R}}^s \,:\, A_0+\sum _{i=1}^su_iA_i\succeq 0\big \} \end{aligned}$$

is a non-empty bounded spectrahedron defined as in (2), and \(a_j:{\mathbb {R}}^s\rightarrow {\mathbb {R}}^n, b_j:{\mathbb {R}}^s\rightarrow {\mathbb {R}}, j=1,\ldots ,q\) are affine mappings given, respectively, by

$$\begin{aligned} a_j(u):=a^{0}_j+\sum \limits ^{s}_{i=1}u_ia_{j}^i,\; b_j(u):= b^0_j+\sum \limits ^{s}_{i=1}u_ib_j^i \; \text{ for } \; u:=(u_1,\ldots , u_s)\in {\mathbb {R}}^s, \end{aligned}$$

where \(a_j^i\in {\mathbb {R}}^n, b_j^i\in {\mathbb {R}}, i=0,1,\ldots ,s, j=1,\ldots ,q\) are fixed.

We consider the following adjustable two-stage version of the uncertain linear program (27):

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^n, x(\cdot )}{\{c^\top x \,:\, a_j(u)^\top x+ \theta _{j}^\top x(u)\le b_j(u), \; j=1,\ldots ,q\}}, \end{aligned}$$
(UP)

where \(\theta _j:=(\theta ^1_j,\ldots ,\theta _j^m)\in {\mathbb {R}}^m, j=1,\ldots ,q\) are fixed, \(x:=(x_1,\ldots ,x_n)\) is the first-stage (here-and-now) decision vector and \(x(\cdot )\) is the second-stage (wait-and-see) adjustable decision vector given by

$$\begin{aligned} x(u):=\rho y+(1-\rho )Wu \end{aligned}$$

with the non-adjustable \(y\in {\mathbb {R}}^m\), \(W\in {\mathbb {R}}^{m\times s}\) and \(\rho \in [0,1]\) as in (3). The first-stage decisions \(x_1,\ldots ,x_n\) are made before the uncertain parameters \(u_1, \ldots ,u_s\) are realized, whereas the second-stage adjustable decisions can be made after some of the uncertain parameters have revealed their values.

The adjustable robust two-stage counterpart of (UP) reads as follows:

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^n,y\in {\mathbb {R}}^m,W\in {\mathbb {R}}^{m\times s}}{}\{c^\top x \,:\,\;&a_j(u)^\top x+ \theta _{j}^\top x(u)\le b_j(u),\\&x(u)=\rho y+(1-\rho )Wu,\; j=1,\ldots ,q,\;\forall u\in U\}. \end{aligned}$$
(RP)

We propose a semidefinite programming (SDP) dual problem for (RP) as

$$\begin{aligned} \sup _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\,:\,\;&c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0,\; \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, \\&\lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}. \end{aligned}$$
(DP)

The following theorem describes a strong duality relation between the adjustable robust two-stage linear problem (RP) and its semidefinite programming dual problem (DP).

Theorem 3.1

(Strong duality with SDP duals) Let the feasible set of problem (RP) be non-empty and let the following cone \(C_P\) be closed, where

$$\begin{aligned} C_P:=\mathrm{coneco}\big \{&\big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big )\,:\,\nonumber \\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$
(28)

Then, we have

$$\begin{aligned} \inf {(RP)}&=\max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\,:\,\; c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \\&\;\;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0, \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, \\&\lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}. \end{aligned}$$

Proof

(Verifying weak duality) Let us first show that

$$\begin{aligned} \inf {(RP)}\ge \sup {(DP)}. \end{aligned}$$
(29)

This holds trivially if the feasible set of problem (DP) is empty. Now, take any \(\lambda _j^0\ge 0, \lambda _j^i\in {\mathbb {R}}, j=1,\ldots ,q, i=1,\ldots ,s \) such that

$$\begin{aligned}&c+\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{i=1}\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0, \; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, i=1,\ldots , s, \end{aligned}$$
(30)
$$\begin{aligned}&\lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, j=1,\ldots ,q. \end{aligned}$$
(31)

Consider any \(j\in \{1,\ldots ,q\}\). By the compactness of U, as shown in the proof of Theorem 2.1, we conclude by (31) that if \(\lambda _j^0=0\), then \(\lambda _j^i=0\) for all \(i=1,\ldots ,s.\) Then, take \({{\widehat{u}}}^j:=({{\widehat{u}}}^j_1,\ldots ,{{\widehat{u}}}^j_s)\in U\) and define \({{\widetilde{u}}}^j:=({{\widetilde{u}}}^j_1,\ldots ,{{\widetilde{u}}}^j_s)\) with

$$\begin{aligned} {{\widetilde{u}}}_i^j:={\left\{ \begin{array}{ll} \widehat{u}^j_i, &{} \text{ if } \lambda _j^0=0,\\ \frac{\lambda _j^i}{\lambda _j^0}, &{} \text{ if } \lambda _j^0\ne 0,\end{array}\right. }\; i=1,\ldots , s. \end{aligned}$$

As in the proof of Theorem 2.1, it follows that \({{\widetilde{u}}}^j\in U.\) Then, we have

$$\begin{aligned} \sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum \limits ^{s}_{i=1}\lambda ^i_ja^i_j)&=\sum \limits _{j=1}^q\lambda _j^0\big (a^0_j+\sum \limits ^{s}_{i=1}\widetilde{u}^j_ia^i_j\big ),\nonumber \\&\;\sum \limits _{j=1}^q(\lambda _j^0b^0_j+\sum \limits ^{s}_{i=1}\lambda ^i_jb^i_j)=\sum \limits _{j=1}^q\lambda _j^0\big (b^0_j+\sum \limits ^{s}_{i=1}\widetilde{u}^j_ib^i_j\big ), \end{aligned}$$
(32)
$$\begin{aligned}&\sum ^{q}_{j=1}\lambda _j^i\theta _j= \sum \limits _{j=1}^q\lambda _j^0 {{\widetilde{u}}}^j_i \theta _j, i=1,\ldots ,s, \end{aligned}$$
(33)

where we note that if \(\lambda _j^0=0\), then \(\lambda _j^i=0\) for all \(i=1,\ldots ,s, j=1,\ldots , q\), as noted above.

Now, let \((x,y,W)\in {\mathbb {R}}^{n+m+m\times s}\) be a feasible point of problem (RP); i.e., \(a_j(u)^\top x+\theta _{j}^\top \big (\rho y+(1-\rho )Wu\big ) \le b_j(u), j=1,\ldots ,q\) for all \(u\in U\). Denote by \(W_i, i=1,\ldots ,s\) the columns of the matrix W. We get by (30), (32) and (33) that

$$\begin{aligned}&-c^\top x-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\\&=\sum \limits _{j=1}^q\lambda ^0_j\big [(a^0_j+\sum \limits _{i=1}^s\widetilde{u}^j_ia^i_j)^\top x+\rho \theta _j^\top y+(1-\rho )\sum _{i=1}^s \widetilde{u}^j_i\theta _j^\top W_i-(b^0_j+\sum \limits _{i=1}^s\widetilde{u}^j_ib^i_j)\big ]\\&=\sum \limits _{j=1}^q\lambda ^0_j\big [a_j(\widetilde{u}^j)^\top x+\theta _{j}^\top \big (\rho y+(1-\rho )W\widetilde{u}^j\big )-b_j({{\widetilde{u}}}^j)\big ]\le 0, \end{aligned}$$

where we recall that \({{\widetilde{u}}}^j\in U\) for \(j=1,\ldots ,q.\) It entails that \(c^\top x\ge -\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\), and so (29) is valid.

(Verifying strong duality) To prove the converse inequality of (29), we may assume \(\overline{\mu }:=\inf {(RP)}\in {\mathbb {R}}.\) Then, it holds that

$$\begin{aligned} \big [&x\in {\mathbb {R}}^n, y\in {\mathbb {R}}^m, W\in {\mathbb {R}}^{m\times s}, \; a_j(u)^\top x+ \theta _{j}^\top x(u) \le b_j(u), \; j=1,\ldots ,q,\nonumber \\&x(u)=\rho y+(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow c^\top x-{{\overline{\mu }}} \ge 0. \end{aligned}$$
(34)

We now verify that

$$\begin{aligned} \big [&x\in {\mathbb {R}}^n, y\in {\mathbb {R}}^m, W\in {\mathbb {R}}^{m\times s}, \; t\in {\mathbb {R}}, \nonumber \\&a_j(u)^\top x+ \theta _{j}^\top x(u) + b_j(u)t\le 0, \; j=1,\ldots ,q, \; t\le 0,\nonumber \\&x(u)=\rho y+(1-\rho )Wu ,\; \forall u\in U \big ]\Longrightarrow c^\top x+{{\overline{\mu }}} t \ge 0. \end{aligned}$$
(35)

To see this, assume on the contrary that there exist \(({{\widehat{x}}},{{\widehat{y}}},\widehat{W})\in {\mathbb {R}}^{n+m+m\times s}\) and \({{\widehat{t}}}\le 0\) such that

$$\begin{aligned}&a_j(u)^\top {{\widehat{x}}}+\rho \theta _j^\top {{\widehat{y}}}+(1-\rho )\sum _{i=1}^s u_i\theta _j^\top {{\widehat{W}}}_i+ b_j(u)\widehat{t}\le 0,\; j=1,\ldots ,q,\; \forall u\in U, \end{aligned}$$
(36)
$$\begin{aligned}&c^\top \widehat{x}+{{\overline{\mu }}}{{\widehat{t}}}<0, \end{aligned}$$
(37)

where \({{\widehat{W}}}_i, i=1,\ldots ,s\) are the columns of the matrix \({{\widehat{W}}}.\) Let us consider the following possibilities:

Case 1 (\({{\widehat{t}}}=0\)): We get by (36) and (37) that

$$\begin{aligned}&a_j(u)^\top \widehat{x}+\rho \theta _j^\top {{\widehat{y}}}+(1-\rho )\sum _{i=1}^s u_i\theta _j^\top {{\widehat{W}}}_i\le 0,\; j=1,\ldots ,q,\; \forall u\in U,\\&c^\top \widehat{x}<0. \end{aligned}$$

Let \(({{\overline{x}}},{{\overline{y}}},{{\overline{W}}})\in {\mathbb {R}}^{n+m+m\times s}\) be a feasible point of problem (RP); i.e., \(a_j(u)^\top {{\overline{x}}}+\rho \theta _j^\top \overline{y}+(1-\rho )\sum \limits _{r=1}^su_r\theta _j^\top {{\overline{W}}}_r\le b_j(u), j=1,\ldots ,q\) for all \(u\in U\), where \({{\overline{W}}}_r, r=1,\ldots ,s\) are the columns of the matrix \({{\overline{W}}}\). It entails that

$$\begin{aligned}&a_j(u)^\top x^\lambda + \rho \theta _j^\top y^\lambda +(1-\rho )\sum \limits _{r=1}^su_r\theta _j^\top W_{r}^{\lambda } \le b_j(u),\; j=1,\ldots ,q,\nonumber \\&\forall u:=(u_1,\ldots ,u_s)\in U, \end{aligned}$$
(38)

where \(x^\lambda :=\overline{x}+\lambda {{\widehat{x}}}, y^\lambda :={{\overline{y}}}+\lambda {{\widehat{y}}}, W_i^\lambda :={{\overline{W}}}_i+\lambda {{\widehat{W}}}_i, i=1,\ldots , s\) for \(\lambda >0.\) In other words, denoting by \(W^\lambda , \lambda >0,\) the matrix with columns \(W_i^\lambda , i=1,\ldots ,s\), we assert by (38) that \(x^\lambda \in {\mathbb {R}}^n, y^\lambda \in {\mathbb {R}}^m, W^\lambda \in {\mathbb {R}}^{m\times s} \) and \( a_j(u)^\top x^\lambda + \theta _{j}^\top x(u) \le b_j(u), x(u)=\rho y^\lambda +(1-\rho )W^\lambda u, j=1,\ldots ,q \) for all \(u\in U\). Then, by (34), we arrive at the assertion that \(c^\top x^\lambda \ge {{\overline{\mu }}}\) for all \(\lambda >0\). This is impossible, because \(c^\top {{\widehat{x}}}<0\) entails that

$$\begin{aligned} c^\top x^\lambda =c^\top {{\overline{x}}}+\lambda c^\top {{\widehat{x}}} \rightarrow -\infty \; \text{ as } \; \lambda \rightarrow +\infty . \end{aligned}$$

Case 2 (\({{\widehat{t}}}<0\)): We get by (36) that

$$\begin{aligned}&a_j(u)^\top \left( \frac{ {{\widehat{x}}}}{-\widehat{t}}\right) +\rho \theta _j^\top \left( \frac{ {{\widehat{y}}}}{-\widehat{t}}\right) +(1-\rho )\sum _{i=1}^s u_i\theta _j^\top \\&\qquad \times \left( \frac{\widehat{W}_i}{-{{\widehat{t}}}}\right) \le b_j(u),\;\; j=1,\ldots ,q,\; \forall u\in U.\end{aligned}$$

Similarly, by (34), we obtain that \(c^\top \left( \frac{{{\widehat{x}}}}{-{{\widehat{t}}}}\right) \ge {{\overline{\mu }}} \), which contradicts the inequality in (37). Consequently, (35) holds.

(Homogenization of (35)) Let \({\widetilde{x}}:=(x,t), {\widetilde{a}}_j^r:=(a^r_j,b^r_j), r=0,1,\ldots ,s, j=1,\ldots , q, {\widetilde{a}}_{q+1}^0:=(0_n,1), \) \({\widetilde{a}}_{q+1}^r:=(0_n,0), r=1,\ldots , s\) and \(\theta _{q+1}:=0_m\), and define affine maps \({\widetilde{a}}_j:{\mathbb {R}}^{s}\rightarrow {\mathbb {R}}^{n+1}, j=1,\ldots ,q+1\) as

$$\begin{aligned}&{\widetilde{a}}_j(u):={\widetilde{a}}^{0}_j+\sum ^{s}_{r=1} u_r{\widetilde{a}}_{j}^r \; \text{ for } \; u:=(u_1,\ldots , u_s)\in {\mathbb {R}}^{s}. \end{aligned}$$

In what follows, we also use the notation \({\widetilde{a}}_j(u):=({\widetilde{a}}_j^1(u),\ldots ,{\widetilde{a}}_j^{n+1}(u))\) for \(u\in U\) and \(j=1,\dots , q+1.\) From (35), we obtain the following implication:

$$\begin{aligned} \big [&{\widetilde{x}}\in {\mathbb {R}}^{n+1}, y\in {\mathbb {R}}^{m}, W\in {\mathbb {R}}^{m\times s}, \; {\widetilde{a}}_j(u)^\top {\widetilde{x}}+ \theta _{j}^\top x(u) \le 0, \; j=1,\ldots ,q+1,\nonumber \\&x(u)= \rho y+(1-\rho )W u,\; \forall u\in U \big ]\Longrightarrow {\widetilde{c}}^\top {\widetilde{x}} \ge 0, \end{aligned}$$
(39)

where \( {\widetilde{c}}:=(c,{{\overline{\mu }}}).\) Moreover, it can be verified that the closedness of \(C_P\) guarantees the closedness of the following cone \({\widetilde{C}}\), where

$$\begin{aligned} {\widetilde{C}}&:=\mathrm{coneco}\big \{\big ({\widetilde{a}}_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big ) \,:\,\\&j=1,\ldots ,q+1, u:=(u_1,\ldots ,u_s)\in U\big \}\\&=\mathrm{coneco}\big \{\big (0_n,1, 0_m,0_m,\ldots , 0_m\big ), \big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j,\ldots , \\&\quad \quad (1-\rho )u_s\theta _j\big )\,:\, j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Applying the adjustable Farkas’ lemma given in Theorem 2.1, we find \(\lambda _j^0\ge 0, \lambda _j^r\in {\mathbb {R}}, j=1,\ldots ,q+1, r=1,\ldots ,s\) such that

$$\begin{aligned}&{\widetilde{c}}+\sum \limits _{j=1}^{q+1}(\lambda _j^0{\widetilde{a}}^0_j+\sum ^{s}_{r=1}\lambda ^r_j{\widetilde{a}}^r_j)=0, \; \sum \limits _{j=1}^{q+1} \lambda _j^0\rho \theta _j=0,\\&\sum ^{q+1}_{j=1}\lambda _j^r(1-\rho )\theta _j=0, r=1,\ldots , s,\; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, \; j=1,\ldots ,q+1, \end{aligned}$$

which shows that

$$\begin{aligned}&c+\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{i=1}\lambda ^i_ja^i_j)=0, \; \; \sum \limits _{j=1}^{q} \lambda _j^0\rho \theta _j=0, \\&\sum ^{q}_{j=1}\lambda _j^r(1-\rho )\theta _j=0, r=1,\ldots , s,\; \lambda _j^0A_0+\sum _{r=1}^s\lambda ^r_jA_r\succeq 0, j=1,\ldots ,q,\\&{{\overline{\mu }}}+\lambda ^0_{q+1}+\sum \limits _{j=1}^q(\lambda _j^0b^0_j+\sum ^{s}_{i=1}\lambda ^i_jb^i_j)=0. \end{aligned}$$

The last equality entails that \({{\overline{\mu }}}\le -\sum \limits _{j=1}^q(\lambda _j^0b^0_j+\sum \limits ^{s}_{i=1}\lambda ^i_jb^i_j)\). So, \({{\overline{\mu }}}\le \sup {(DP)}\), which, together with (29), guarantees that \(\inf {(RP)}= \sup {(DP)}\) and the dual problem (DP) attains its solutions. The proof of the theorem is complete. \(\square \)
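
From a computational standpoint, (DP) is a standard semidefinite program in the multipliers \(\lambda ^0_j, \lambda ^i_j\) and can thus be passed to an off-the-shelf conic solver, as claimed above. The following is a minimal sketch of one possible way to set up (DP); it assumes the Python package cvxpy and a hypothetical data layout, and it is an illustration of the formulation only, not the implementation used for the numerical examples in Sect. 4 (which rely on a MATLAB software package).

```python
import cvxpy as cp


def solve_dp(c, a0, a, b0, b, theta, A, rho):
    """Solve the dual SDP (DP) for given problem data.

    Hypothetical data layout (an assumption of this sketch; all inputs NumPy arrays):
      c     : (n,)          objective vector of (RP)
      a0    : (q, n)        rows are the vectors a_j^0
      a     : (q, s, n)     a[j, i] is the vector a_j^i
      b0    : (q,)          scalars b_j^0
      b     : (q, s)        scalars b_j^i
      theta : (q, m)        rows are the vectors theta_j
      A     : list of s + 1 symmetric matrices A_0, A_1, ..., A_s
      rho   : scalar in [0, 1]
    """
    q, s, _ = a.shape
    lam0 = cp.Variable(q, nonneg=True)   # multipliers lambda_j^0 >= 0
    lam = cp.Variable((q, s))            # multipliers lambda_j^i (free)

    cons = [
        # c + sum_j (lambda_j^0 a_j^0 + sum_i lambda_j^i a_j^i) = 0
        c + lam0 @ a0 + sum(lam[:, i] @ a[:, i, :] for i in range(s)) == 0,
        # sum_j lambda_j^0 rho theta_j = 0
        rho * (lam0 @ theta) == 0,
    ]
    # sum_j lambda_j^i (1 - rho) theta_j = 0 for each i = 1,...,s
    cons += [(1 - rho) * (lam[:, i] @ theta) == 0 for i in range(s)]
    # linear matrix inequalities: lambda_j^0 A_0 + sum_i lambda_j^i A_i is PSD
    cons += [lam0[j] * A[0] + sum(lam[j, i] * A[i + 1] for i in range(s)) >> 0
             for j in range(q)]

    dual_obj = -(lam0 @ b0 + cp.sum(cp.multiply(lam, b)))
    prob = cp.Problem(cp.Maximize(dual_obj), cons)
    prob.solve(solver=cp.SCS)
    return prob.value   # equals inf (RP) when the cone C_P is closed (Theorem 3.1)
```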

Remark 3.1

For the case of \(x(u):=\rho (x_1,\ldots ,x_m)+(1-\rho )Wu\) with \(1\le m<n\) (i.e., the second-stage decision variable \(x(\cdot )\) depends on some components of the first-stage decision variable \(x:=(x_1,\ldots ,x_m,\ldots ,x_n)\)), we can employ the Farkas’ lemma in Corollary 2.1 instead of Theorem 2.1 to derive a strong duality relation between the adjustable robust two-stage linear problem (RP) and its corresponding dual semidefinite programming problem.

Let us consider a particular case of the adjustable robust two-stage linear problem (RP), where the uncertainty set U in (2) is replaced by the following ellipsoid

$$\begin{aligned} E:=\big \{u\in {\mathbb {R}}^{s} \,:\, u^\top Mu\le 1\big \} \end{aligned}$$
(40)

with a symmetric \(({s}\times {s})\) matrix \(M\succ 0\). Let B be an \(({s}\times {s})\) matrix such that \(M=B^\top B\).

In this case, we obtain strong duality with a second-order cone programming (SOCP) dual for the adjustable robust two-stage linear problem (RP) under ellipsoidal uncertainty.

Corollary 3.1

(Strong duality with SOCP duals) Consider the problem (RP) with U in (2) replaced by E in (40), and let the feasible set of problem (RP) be non-empty. Assume that the following cone \(C_P\) is closed, where

$$\begin{aligned}C_P:=\mathrm{coneco}\big \{&\big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big )\,:\, \\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in E\big \}. \end{aligned}$$

Then, we have

$$\begin{aligned} \inf {(RP)}&=\max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j) \,:\,\; \\&\quad c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\quad \; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0,\; \Vert B(\lambda _j^1,\ldots ,\lambda _j^s)\Vert \le \lambda _j^0, \\&\quad \lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}. \end{aligned}$$

Proof

Let \(A_i, i=0,1,\ldots ,s\) be \((s+1)\times (s+1)\) matrices given by

$$\begin{aligned} A_0:= \left( \begin{array}{cc} M^{-1} &{} 0 \\ 0&{} 1 \\ \end{array} \right) ,\quad A_i:= \left( \begin{array}{cc} 0 &{} e_i \\ e^\top _i&{} 0 \\ \end{array} \right) ,\; i=1,\ldots , s, \end{aligned}$$
(41)

where \(e_i\in {\mathbb {R}}^{s}\) is a vector whose ith component is one and all others are zero. Then, the ellipsoid E in (40) can be written in the form of the spectrahedron U in (2). Therefore, we invoke Theorem 3.1 to assert that

$$\begin{aligned} \inf {(RP)}&=\max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\,:\,\; \\&\quad c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\quad \; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0,\; \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, \\&\quad \lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}, \end{aligned}$$

where \(A_i, i=0,1,\ldots ,s\) are given as in (41). In this case, we can verify that

$$\begin{aligned} \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0 \Leftrightarrow \Vert B(\lambda _j^1,\ldots ,\lambda _j^s)\Vert \le \lambda _j^0,\; j=1,\ldots ,q. \end{aligned}$$

So, the proof is complete. \(\square \)
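
For completeness, the last equivalence can be checked by a Schur complement argument (a routine verification, sketched here with \(\lambda _j:=(\lambda _j^1,\ldots ,\lambda _j^s)\)): by (41),

$$\begin{aligned} \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i= \left( \begin{array}{cc} \lambda _j^0M^{-1} &{} \lambda _j \\ \lambda _j^\top &{} \lambda _j^0 \\ \end{array} \right) , \end{aligned}$$

and this matrix is positive semidefinite if and only if \(\Vert B\lambda _j\Vert \le \lambda _j^0\). Indeed, positive semidefiniteness forces \(\lambda _j^0\ge 0\); for \(\lambda _j^0>0\), the Schur complement condition \(\lambda _j^0-\frac{1}{\lambda _j^0}\lambda _j^\top M\lambda _j\ge 0\) amounts to \((\lambda _j^0)^2\ge \lambda _j^\top M\lambda _j=\Vert B\lambda _j\Vert ^2\) since \(M=B^\top B\), whereas for \(\lambda _j^0=0\) both sides of the equivalence force \(\lambda _j=0_s\) because \(M\succ 0\).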

As an application of Theorem 3.1, we derive, in the following corollary, a characterization of optimal solutions in terms of linear matrix inequalities for the adjustable robust two-stage linear problem (RP).

Corollary 3.2

(Characterization of solutions for (RP)) Let the optimal solution set of problem (RP) be non-empty and let the following cone \(C_P\) be closed, where

$$\begin{aligned}C_P:=\mathrm{coneco}\big \{&\big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big )\,:\, \\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Let \(({{\overline{x}}},\overline{y},{{\overline{W}}})\in {\mathbb {R}}^{n+m+m\times s}\) be a feasible point of problem (RP). Then, \(({{\overline{x}}},{{\overline{y}}},{{\overline{W}}})\) is an optimal solution of problem (RP) if and only if there exist \(\lambda _j^0\ge 0, \lambda _j^i\in {\mathbb {R}}, j=1,\ldots ,q, i=1,\ldots ,s \), such that

$$\begin{aligned}&c+\sum \limits _{j=1}^q(\lambda _j^0a^0_j+\sum ^{s}_{i=1}\lambda ^i_ja^i_j)=0, \; \sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0, \nonumber \\&\sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, i=1,\ldots ,s,\; \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, j=1,\ldots ,q, \nonumber \\&c^\top {{\overline{x}}} +\sum \limits _{j=1}^q(\lambda _j^0b^0_j+\sum ^{s}_{i=1}\lambda ^i_jb^i_j)= 0. \end{aligned}$$
(42)

Proof

[\(\Longrightarrow \)] Let \(({{\overline{x}}},{{\overline{y}}},{{\overline{W}}})\in {\mathbb {R}}^{n+m+m\times s}\) be an optimal solution of problem (RP). Since \(C_P\) is closed, we assert by Theorem 3.1 that

$$\begin{aligned} c^\top {{\overline{x}}}&=\max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\,:\,\;\\&\quad c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\quad \; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, \\&\quad \lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}. \end{aligned}$$

This shows that there exist \(\lambda _j^0\ge 0, \lambda _j^i\in {\mathbb {R}}, j=1,\ldots ,q, i=1,\ldots ,s \) such that

$$\begin{aligned}&c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\\&\; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, i=1,\ldots ,s,\; \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0,\; j=1,\ldots ,q, \\&c^\top \overline{x}=-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j), \end{aligned}$$

and so, we obtain (42).

[\(\Longleftarrow \)] Assume that there exist \(\lambda _j^0\ge 0, \lambda _j^i\in {\mathbb {R}}, j=1,\ldots ,q, i=1,\ldots ,s \) such that (42) holds. Under the closedness of \(C_P,\) we invoke Theorem 3.1 again to conclude that

$$\begin{aligned} \min {(RP)}&=\max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{-\sum \limits _{j=1}^q(\lambda ^0_jb^0_j+\sum \limits _{i=1}^s\lambda ^i_jb^i_j)\,:\,\;\nonumber \\&\quad c+ \sum \limits _{j=1}^q(\lambda ^0_ja^0_j+\sum \limits _{i=1}^s\lambda ^i_ja^i_j)=0, \;\sum \limits _{j=1}^q \lambda _j^0\rho \theta _j=0,\nonumber \\&\quad \; \sum ^{q}_{j=1}\lambda _j^i(1-\rho )\theta _j=0, \lambda _j^0A_0+\sum _{i=1}^s\lambda ^i_jA_i\succeq 0, \nonumber \\&\quad \lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,\ldots ,s, j=1,\ldots ,q\Big \}. \end{aligned}$$
(43)

Now, let \(({{\widehat{x}}},{{\widehat{y}}},\widehat{W})\in {\mathbb {R}}^{n+m+m\times s}\) be an arbitrary feasible point of problem (RP). Taking (42) and (43) into account, we see that

$$\begin{aligned} c^\top {{\overline{x}}}= -\sum \limits _{j=1}^q(\lambda _j^0b^0_j+\sum ^{s}_{i=1}\lambda ^i_jb^i_j)\le \min {(RP)}\le c^\top {{\widehat{x}}}. \end{aligned}$$

So, \(({{\overline{x}}},\overline{y},{{\overline{W}}})\) is an optimal solution of problem (RP), which finishes the proof. \(\square \)

We now provide some sufficient criteria, which guarantee that the cone \(C_P\) in Theorem 3.1 is closed.

Proposition 3.1

(Sufficiency for the closedness of \({\varvec{C}_P}\)) Assume that one of the following conditions holds:

(a) The uncertainty set U of problem (RP) is a polytope;

(b) The Slater condition holds for the problem (RP); i.e., there is \(({{\widehat{x}}},{{\widehat{y}}},\widehat{W})\in {\mathbb {R}}^{n+m+m\times s}\) such that

$$\begin{aligned} a_j(u)^\top {{\widehat{x}}}+ \theta _{j}^\top x(u)< b_j(u), \; x(u)=\rho \widehat{y}+(1-\rho ){{\widehat{W}}}u,\; j=1,\ldots ,q,\;\forall u\in U. \end{aligned}$$
(44)

Then, the following cone \(C_P\) is closed, where

$$\begin{aligned} C_P:=\mathrm{coneco}\big \{&\big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big )\,:\, \\&j=1,\ldots ,q, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Proof

For each \(j\in \{1,\ldots ,q\},\) let us define an affine map \(H_j:{\mathbb {R}}^s\rightarrow {\mathbb {R}}^n\times {\mathbb {R}}\times {\mathbb {R}}^m\times {\mathbb {R}}^{ms}\) as

$$\begin{aligned} H_j(u):= & {} \big (a_j(u),b_j(u), \rho \theta _j, (1-\rho )u_1\theta _j, \ldots , (1-\rho )u_s\theta _j\big ),\;\\ u:= & {} (u_1,\ldots , u_s)\in U. \end{aligned}$$

Then, \(H_j(U):=\bigcup \limits _{u \in U}H_j(u)\), and so it follows that

$$\begin{aligned} C_P=\mathrm{coneco}\left( \bigcup \limits _{j=1}^{q}H_j(U)\right) . \end{aligned}$$

(a) When U is a polytope, the set \(H_j(U)\) is also a polytope for each \(j\in \{1,\ldots ,q\}\) (see, e.g., [20, Proposition 1.29]), and so \(H_j(U)\) is the convex hull of some finite set (see, e.g., [20, Theorem 1.26]). This means that for each \(j\in \{1,\ldots ,q\},\) there exist \(k_j\in {\mathbb {N}}\) and \(u^i\in U, i=1,\ldots , k_j\) such that

\(H_j(U)=\mathrm{conv}\big \{\big (a_j(u^i),b_j(u^i), \rho \theta _j, (1-\rho )u^i_1\theta _j, \ldots , (1-\rho )u^i_s\theta _j\big ) \,:\, i=1,\ldots ,k_j\big \}.\) It follows that

$$\begin{aligned}&\mathrm{conv}\left( \bigcup \limits _{j=1}^{q}H_j(U)\right) \\&=\mathrm{conv}\big \{\big (a_j(u^i),b_j(u^i), \rho \theta _j, (1-\rho )u^i_1\theta _j, \ldots , (1-\rho )u^i_s\theta _j\big ) \,:\,\\&\quad i=1,\ldots ,k_j,\;j=1,\ldots ,q\big \}\end{aligned}$$

is a polytope. So, \(C_P=\mathrm{coneco}\left[ \mathrm{conv}\left( \bigcup \limits _{j=1}^{q}H_j(U)\right) \right] \) is a polyhedral cone and hence is closed (see, e.g., [18, Proposition 3.9]).

(b) As U is compact and \(H_j\) is affine (and thus continuous) for \(j\in \{1,\ldots ,q\}\), the set \(H_j(U)\) is a compact set. This entails that \(\left( \bigcup \limits _{j=1}^{q}H_j(U)\right) \) is also a compact set. Moreover, \(H_j(U)\) is a convex set for each \(j\in \{1,\ldots ,q\}\) due to the convexity of U (see, e.g., [18, Proposition 1.23]). Assume now that the Slater condition (44) holds. We assert that

$$\begin{aligned} (0_{n},0,0_m,\underbrace{0_m,\ldots ,0_m}_{s})\notin \mathrm{conv}\left( \bigcup \limits _{j=1}^{q}H_j(U)\right) . \end{aligned}$$
(45)

Assume on the contrary that there exist \( u^j\in U, {{\overline{\mu }}}_j\ge 0, j=1,\ldots ,q\) with \(\sum \limits _{j=1}^q{{\overline{\mu }}}_j=1\) such that

$$\begin{aligned}&(0_{n},0,0_m,\underbrace{0_m,\ldots ,0_m}_{s})\nonumber \\&\quad =\sum \limits _{j=1}^q{{\overline{\mu }}}_j\big (a_j(u^j),b_j(u^j), \rho \theta _j, (1-\rho )u^j_1\theta _j, \ldots , (1-\rho )u^j_s\theta _j\big ). \end{aligned}$$
(46)

Denote by \({{\widehat{W}}}_r, r=1,\ldots ,s\), the columns of the matrix \({{\widehat{W}}}\). From (44), we obtain that

$$\begin{aligned} \sum \limits _{j=1}^q{{\overline{\mu }}}_j\left( a_j( u^j)^\top \widehat{x} + \rho \theta _j^\top {{\widehat{y}}} +(1-\rho )\sum _{r=1}^su_r^j\theta _j^\top {{\widehat{W}}}_r\right) < \sum \limits _{j=1}^q{{\overline{\mu }}}_jb_j(u^j), \end{aligned}$$

which clearly contradicts (46), and so (45) holds. Invoking [7, Proposition 1.4.7], we conclude that \(C_P\) is closed. \(\square \)

We close this section with a simple example that shows how we can employ the strong duality obtained in Theorem 3.1 to verify the optimal value of an adjustable robust two-stage linear problem by solving its corresponding SDP dual problem.

Example 3.1

Consider an adjustable uncertain two-stage linear problem of the form:

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^3, x(\cdot )}{}\Big \{-x_1-2x_2-3x_3\,:\,\;&u_1x_1+u_2x_2+u_3x_3+\theta ^\top x(u)\le 8, \\&x_1+2x_2\le 1, x_3\le 1, \\&-3x_1-x_2-x_3+\theta ^\top x(u)\le 0 \Big \}, \end{aligned}$$
(EU)

where \(\theta :=(1,1,1)\), \(x:=(x_1,x_2,x_3)\in {\mathbb {R}}^3\) is the first-stage decision variable and \(x(\cdot )\) is the second-stage decision variable, i.e., an adjustable variable that depends on the uncertain parameter \(u:=(u_1,u_2,u_3)\in U.\) Here, we consider a linear decision rule \(x(\cdot )\) given by

$$\begin{aligned} x(u):=\rho y+(1-\rho )Wu,\; u\in U, \end{aligned}$$

where \(y\in {\mathbb {R}}^3\) and \(W\in {\mathbb {R}}^{3\times 3}\) are non-adjustable variables, \(\rho \in ]0,1[\) and the uncertainty set U is an elliptope (see, e.g., [21, Pages 255–256]) given by

$$\begin{aligned} U:=\{(u_1,u_2,u_3)\in {\mathbb {R}}^3\,:\,\;&2u_1u_2u_3-u_1^2-u_2^2-u_3^2+1\ge 0,\nonumber \\&1-u_1^2\ge 0, 1-u_2^2\ge 0, 1-u_3^2\ge 0\}. \end{aligned}$$
(47)

To treat this problem, let us consider a robust counterpart of (EU) as follows:

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^3,y\in {\mathbb {R}}^3,W\in {\mathbb {R}}^{3\times 3}}{}\Big \{&-x_1-2x_2-3x_3 \,:\, u_1x_1+u_2x_2+u_3x_3+\theta ^\top x(u)\le 8, \\&x_1+2x_2\le 1, x_3\le 1, -3x_1-x_2-x_3+\theta ^\top x(u)\le 0,\\&x(u):=\rho y+(1-\rho )Wu, \;\forall u\in U \Big \}. \end{aligned}$$
(ER)

Note that the uncertainty set U in (47) is a spectrahedron (it is depicted in Fig. 5.8 of [21, Page 232]), and so the problem (ER) can be expressed in the form of problem (RP), where \(c:=(-1,-2,-3), \theta _1:=\theta _4:=(1,1,1),\) \( \theta _2:=\theta _3:=0_3\), the spectrahedron U is described by

$$\begin{aligned}&A_0:= \left( \begin{array}{ccc} 1 &{} 0&{}0 \\ 0&{} 1 &{}0\\ 0&{} 0&{}1 \\ \end{array} \right) , A_1:= \left( \begin{array}{ccc} 0 &{} 1&{}0 \\ 1&{} 0 &{}0\\ 0&{} 0&{}0\\ \end{array} \right) , A_2:= \left( \begin{array}{ccc} 0 &{} 0&{}1 \\ 0&{} 0 &{}0\\ 1&{} 0&{}0\\ \end{array} \right) , A_3:= \left( \begin{array}{ccc} 0 &{} 0&{}0 \\ 0&{} 0 &{}1\\ 0&{} 1&{}0\\ \end{array} \right) , \end{aligned}$$

and the affine mappings \(a_j:{\mathbb {R}}^3\rightarrow {\mathbb {R}}^3, b_j:{\mathbb {R}}^3\rightarrow {\mathbb {R}}, j=1,2,3,4\) are defined by \(a^0_1:=(0,0,0), a^1_1:=(1,0,0), \) \(a^2_1:=(0,1,0), a^3_1:=(0,0,1), a^0_2:=(1,2,0), a^0_3:=(0,0,1), a^0_4:=(-3,-1,-1), a^i_j:=(0,0,0)\in {\mathbb {R}}^3, i=1,2,3,\) \( j=2,3,4, b^0_1:=8, b^0_j:=1, j=2,3, b^0_4:=0, b^i_j:=0\in {\mathbb {R}}, i=1,2,3, j=1,2,3,4.\)
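To see that these matrices indeed describe the set U in (47), note that, for every \(u:=(u_1,u_2,u_3)\in {\mathbb {R}}^3,\)

$$\begin{aligned} A_0+u_1A_1+u_2A_2+u_3A_3= \left( \begin{array}{ccc} 1 &{} u_1&{}u_2 \\ u_1&{} 1 &{}u_3\\ u_2&{} u_3&{}1 \\ \end{array} \right) , \end{aligned}$$

whose \(2\times 2\) principal minors are \(1-u_1^2, 1-u_2^2, 1-u_3^2\) and whose determinant is \(2u_1u_2u_3-u_1^2-u_2^2-u_3^2+1.\) As a symmetric matrix is positive semidefinite exactly when all of its principal minors are non-negative, the spectrahedron \(\{u\in {\mathbb {R}}^3\,:\, A_0+\sum _{i=1}^3u_iA_i\succeq 0\}\) coincides with the elliptope U in (47).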

By direct calculation, we see that the problem (ER) attains the optimal value \(\min {(ER)}=-4\); for instance, \(({{\overline{x}}},{{\overline{y}}}, {{\overline{W}}})\) with \({{\overline{x}}}:=(1,0,1), {{\overline{y}}}:=0_3\) and \( {{\overline{W}}}:=0_{3\times 3}\) is an optimal solution of problem (ER). Let us now verify this optimal value via duality. Since the Slater condition holds for the problem (ER), the corresponding cone \(C_P\) is closed by Proposition 3.1. So, we assert by Theorem 3.1 that the exact SDP duality between (ER) and (ED) holds, where (ED) is the following SDP dual problem:

$$\begin{aligned} \max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{&-\sum \limits _{j=1}^4(\lambda ^0_jb^0_j+\sum \limits _{i=1}^3\lambda ^i_jb^i_j)\,:\, (-1,-2,-3)+ \sum \limits _{j=1}^4(\lambda ^0_ja^0_j+\sum \limits _{i=1}^3\lambda ^i_ja^i_j)=0, \\&\sum \limits _{j=1}^4\lambda ^0_j\rho \theta _j=0, \sum \limits _{j=1}^4\lambda ^i_j(1-\rho )\theta _j=0, \lambda _j^0A_0+\sum _{i=1}^3\lambda ^i_jA_i\succeq 0, \\&\lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,2,3, j=1,2,3,4\Big \}, \end{aligned}$$
(ED)

which amounts to the following problem

$$\begin{aligned} \max _{(\lambda ^0_j,\lambda ^i_j)}{}\Big \{&-8\lambda ^0_1-\lambda ^0_2-\lambda ^0_3\,:\, \lambda ^1_1+\lambda ^0_2-3\lambda ^0_4-1=0, \lambda ^2_1+2\lambda ^0_2-\lambda ^0_4-2=0,\\&\lambda _1^3+\lambda _3^0-\lambda ^0_4-3=0, \lambda ^i_1+\lambda ^i_4=0, i=0,1,2,3,\\&\left( \begin{array}{ccc} \lambda ^0_j &{} \lambda ^1_j&{}\lambda ^2_j \\ \lambda ^1_j&{} \lambda ^0_j &{}\lambda ^3_j\\ \lambda ^2_j&{} \lambda ^3_j&{}\lambda ^0_j\\ \end{array} \right) \succeq 0, \lambda ^0_j\in {\mathbb {R}}_+, \lambda ^i_j\in {\mathbb {R}}, i=1,2,3, j=1,2,3,4\Big \}. \end{aligned}$$
(ES)

Using the MATLAB toolbox CVX (see, e.g., [22, 23]), we solve the problem (ES), and the solver returns the optimal value \(-4.\) This confirms that the exact SDP duality holds for the problem (ER), that is, \(\max {(ES)}=-4=\min {(ER)}\).
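For illustration, the problem (ES) can be passed to CVX along the following lines. This is only a minimal sketch prepared for this exposition; the variable names lam0 and lam, standing for \(\lambda ^0_j\) and \(\lambda ^i_j\), are illustrative and need not match the code used for the reported computation:

```matlab
% Minimal CVX sketch for the dual SDP (ES): lam0(j) plays the role of lambda^0_j
% and lam(i,j) the role of lambda^i_j, j = 1,...,4, i = 1,2,3.
cvx_begin sdp quiet
    variable lam0(4) nonnegative
    variable lam(3,4)
    maximize( -8*lam0(1) - lam0(2) - lam0(3) )
    subject to
        lam(1,1) + lam0(2) - 3*lam0(4) == 1;
        lam(2,1) + 2*lam0(2) - lam0(4) == 2;
        lam(3,1) + lam0(3) - lam0(4) == 3;
        lam0(1) + lam0(4) == 0;
        lam(:,1) + lam(:,4) == 0;
        for j = 1:4
            [lam0(j)  lam(1,j) lam(2,j); ...
             lam(1,j) lam0(j)  lam(3,j); ...
             lam(2,j) lam(3,j) lam0(j)] >= 0;   % semidefinite constraint (sdp mode)
        end
cvx_end
cvx_optval   % expected to be (approximately) -4
```

Independently of the solver, the value \(-4\) can be checked by hand: the constraint \(\lambda ^0_1+\lambda ^0_4=0\) together with \(\lambda ^0_1\ge 0\) and \(\lambda ^0_4\ge 0\) forces \(\lambda ^0_1=\lambda ^0_4=0\); the semidefinite constraint for \(j=1\) then has a zero diagonal, so \(\lambda ^i_1=0\) for \(i=1,2,3\); the first and third linear equations give \(\lambda ^0_2=1\) and \(\lambda ^0_3=3\); hence every feasible point of (ES) has objective value \(-8\cdot 0-1-3=-4.\)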

4 Adjustable Two-Stage Lot-Sizing Problems

In this section, we describe how the optimal storage cost of an adjustable two-stage lot-sizing problem with uncertain demand can be found by solving a corresponding semidefinite program in our duality scheme.

The lot-sizing on a network problem is to determine the stock allocation \(x_i\) at each of the stores \(i\in \{1,\ldots ,n\}\) prior to knowing the realization of the demand u at each location, where the demand u is uncertain and is assumed to reside in an uncertainty set \(U\subset {\mathbb {R}}^n.\) After we observe the realization of the demand u, we can transport stock \(y_{ij}, i=1,\ldots ,n,j=1,\ldots ,n,\) from store i to store j at unit cost \(\tau _{ij}\) in order to meet all demand. The goal is to minimize the sum of the storage costs, with unit costs \(c_i, i=1,\ldots ,n,\) and the worst-case cost arising from shifting the products from one store to another. This model can be rewritten as an adjustable two-stage linear program (see, e.g., [16, 24]).

We consider an adjustable uncertain two-stage lot-sizing on a network problem of the form:

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^n,t\in {\mathbb {R}}}{}\{c^\top x+t\,:\,\;&\forall u:=(u_1,\ldots ,u_n)\in U, \exists y:=(y_{ij})\in {\mathbb {R}}^{n^2},\\&y_{ij}\ge 0, i=1,\ldots ,n, j=1,\ldots ,n,\\&0\le x_i\le M_i, i=1,\ldots ,n, \\&u_i-x_i\le \sum _{j=1}^ny_{ji}-\sum _{j=1}^ny_{ij}, i=1,\ldots ,n, \\&\sum _{i=1}^n\sum _{j=1}^n\tau _{ij}y_{ij}\le t \}, \end{aligned}$$
(UL)

where \(c:=(c_1,\ldots ,c_n), c_i\in {\mathbb {R}}, i=1,\ldots ,n \) are the storage unit costs and \(M_i>0, i=1,\ldots ,n\) are the capacities of the stores, \(\tau _{ij}\ge 0, \tau _{ii}= 0, i=1,\ldots ,n,j=1,\ldots ,n\) are transport unit costs, u is uncertain and the uncertainty set \(U:=\big \{u:=(u_1,\ldots ,u_n)\in {\mathbb {R}}^n \,:\, A_0+\sum _{r=1}^nu_rA_r\succeq 0\big \}\) is defined as in (2) with \(s:=n.\)
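For instance, when \(n=2\) (written out here only to illustrate the notation), the constraints of (UL) require, for every realization \(u:=(u_1,u_2)\in U,\) the existence of \(y:=(y_{11},y_{12},y_{21},y_{22})\) satisfying

$$\begin{aligned}&y_{11}\ge 0,\; y_{12}\ge 0,\; y_{21}\ge 0,\; y_{22}\ge 0,\; 0\le x_1\le M_1,\; 0\le x_2\le M_2,\\&u_1-x_1\le y_{21}-y_{12},\; u_2-x_2\le y_{12}-y_{21},\; \tau _{12}y_{12}+\tau _{21}y_{21}\le t, \end{aligned}$$

where the last inequality uses \(\tau _{11}=\tau _{22}=0\), and the objective to be minimized is \(c_1x_1+c_2x_2+t.\)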

The adjustable robust two-stage lot-sizing on a network counterpart of (UL) reads as follows:

$$\begin{aligned} \inf _{x\in {\mathbb {R}}^n,t\in {\mathbb {R}},y:=(y_{ij})\in {\mathbb {R}}^{n^2}}{}\{c^\top x+t \,:\,&y_{ij}\ge 0, i=1,\ldots ,n, j=1,\ldots ,n,\\&0\le x_i\le M_i, i=1,\ldots ,n, \\&u_i-x_i\le \sum _{j=1}^ny_{ji}-\sum _{j=1}^ny_{ij}, i=1,\ldots ,n, \\&\sum _{i=1}^n\sum _{j=1}^n\tau _{ij}y_{ij}\le t, \forall u:=(u_1,\ldots ,u_n)\in U\}. \end{aligned}$$
(RL)

For a vector \(y\in {\mathbb {R}}^{n^2}\), we can label it as \(y:=(y_{11},\ldots ,y_{1n},y_{21},\ldots ,y_{2n},\ldots ,y_{n1},\ldots ,y_{nn}):=\sum _{i=1}^n\sum _{j=1}^ny_{ij}e_{ij},\) where \(e_{ij}\) is a vector in \({\mathbb {R}}^{n^2}\) whose ijth element is one and the others are zero for \(i,j\in \{1,\ldots ,n\}.\) In this vein, we may write \(y:=(y_{ij}) \in {\mathbb {R}}^{n^2}\) for short. We also use the notation \(e_k\) to denote the unit vector \(e_k\in {\mathbb {R}}^n, k\in \{1,\ldots ,n\}\), whose kth element is one and the others are zero. Let us denote \({{\widetilde{c}}}:=(c,1), {{\widetilde{x}}}:=(x,t)\in {\mathbb {R}}^n\times {\mathbb {R}}, \)

\( a^0_{ij}:=a^r_{ij}:=0_{n+1}, b^0_{ij}:=b^r_{ij}:=0, \theta _{ij}:=-e_{ij}, r=1,\ldots , s, i=1,\ldots ,n,j=1,\ldots ,n\) and set

$$\begin{aligned} a^0_k&:= \left\{ \begin{array}{ll} (-e_{k},0), &{} k=1,\ldots ,n, \\ (e_{k-n},0), &{} k=n+1,\ldots ,2n, \\ (-e_{k-2n},0), &{} k=2n+1,\ldots ,3n,\\ (0_n,-1), &{} k=3n+1, \end{array} \right. \quad a^r_k:= 0_{n+1}, k=1,\ldots , 3n+1, r=1,\ldots ,n, \\ b^0_k&:= \left\{ \begin{array}{ll} 0, &{} k=1,\ldots , n, \\ M_{k-n}, &{} k=n+1,\ldots ,2n, \\ 0, &{} k=2n+1,\ldots ,3n+1, \end{array} \right. b^r_k:= \left\{ \begin{array}{ll} 0, &{} k=1,\ldots , 2n, r=1,\ldots ,n, \\ -1, &{} k=2n+i, r=i, i=1,\ldots ,n, \\ 0, &{} k=2n+i, r\in \{1,\ldots ,n\}\setminus \{i\},\\ &{} i=1,\ldots ,n,\\ 0, &{} k=3n+1, r=1,\ldots , n, \end{array} \right. \\ \theta _k&:= \left\{ \begin{array}{ll} 0_{n^2}, &{} k=1,\ldots , 2n, \\ \sum _{i=1}^ne_{(k-2n)i}-\sum _{i=1}^ne_{i(k-2n)}, &{} k=2n+1,\ldots ,3n, \\ (\tau _{11},\ldots ,\tau _{1n},\tau _{21},\ldots ,\tau _{2n},\ldots ,\tau _{n1},\ldots ,\tau _{nn})\in {\mathbb {R}}^{n^2}, &{} k=3n+1. \end{array} \right. \end{aligned}$$

Then, the problem (RL) can be written as

$$\begin{aligned}&\inf _{{{\widetilde{x}}}\in {\mathbb {R}}^{n+1}, y\in {\mathbb {R}}^{n^2}}{}\{{{\widetilde{c}}}^\top {{\widetilde{x}}} \,:\, a_{ij}(u)^\top {{\widetilde{x}}}+ \theta _{ij}^\top x(u)\le b_{ij}(u), i=1,\ldots ,n, j=1,\ldots ,n,\\&a_{k}(u)^\top {{\widetilde{x}}}+ \theta _{k}^\top x(u)\le b_{k}(u), k=1,\ldots ,3n+1,\; x(u)=y,\; \forall u \in U\}, \end{aligned}$$
(AL)

where \( a_{ij}(u):=a^{0}_{ij}+\sum \limits ^{n}_{r=1}u_ra_{ij}^r,\; b_{ij}(u):= b^0_{ij}+\sum \limits ^{n}_{r=1}u_rb_{ij}^r, a_{k}(u):=a^{0}_{k}+\sum \limits ^{n}_{r=1}u_ra_{k}^r,\; b_{k}(u):= b^0_{k}+\sum \limits ^{n}_{r=1}u_rb_{k}^r\) for \( u:=(u_1,\ldots , u_n)\in {\mathbb {R}}^n.\)

The problem (AL) is of the form (RP) with \(\rho :=1\), and therefore, we obtain a corresponding SDP dual problem for (RL) as

$$\begin{aligned} \sup _{(\lambda ^0_{ij},\lambda ^r_{ij},\lambda ^0_k,\lambda ^r_k)}{}\Big \{&-\sum \limits _{i=1,j=1}^n(\lambda ^0_{ij}b^0_{ij}+\sum \limits _{r=1}^n\lambda ^r_{ij}b^r_{ij}) -\sum \limits _{k=1}^{3n+1}(\lambda ^0_kb^0_k+\sum \limits _{r=1}^{n}\lambda ^r_kb^r_k)\,:\,\\&\sum \limits _{i=1,j=1}^n\lambda ^0_{ij}\theta _{ij}+\sum \limits _{k=1}^{3n+1}\lambda ^0_k\theta _k=0, \\&{{\widetilde{c}}}+ \sum \limits _{i=1,j=1}^n(\lambda ^0_{ij}a^0_{ij}+\sum \limits _{r=1}^n\lambda ^r_{ij}a^r_{ij}) +\sum \limits _{k=1}^{3n+1}(\lambda ^0_ka^0_k+\sum \limits _{r=1}^{n}\lambda ^r_ka^r_k)=0,\\&\lambda _{ij}^0A_0+\sum _{r=1}^n\lambda ^r_{ij}A_r\succeq 0, \lambda _{k}^0A_0+\sum _{r=1}^n\lambda ^r_{k}A_r\succeq 0,\\&\lambda ^0_{ij}\in {\mathbb {R}}_+, \lambda ^r_{ij}\in {\mathbb {R}}, \lambda ^0_k\in {\mathbb {R}}_+, \lambda ^r_k\in {\mathbb {R}}, \\&r=1,\ldots ,n, i=1,\ldots , n, j=1,\ldots ,n, k=1,\ldots , 3n+1\Big \}, \end{aligned}$$

which amounts to the following problem:

$$\begin{aligned} \sup _{(\lambda ^0_{ij},\lambda ^r_{ij},\lambda ^0_k,\lambda ^r_k)}{}\Big \{&-\sum \limits _{k=n+1}^{2n}\lambda ^0_{k}M_{k-n}+\sum \limits _{r=1}^n\lambda ^r_{2n+r} \,:\, -\sum \limits _{i=1,j=1}^n\lambda ^0_{ij}e_{ij}\\&+\sum \limits _{k=2n+1}^{3n}\lambda ^0_k\big (\sum _{i=1}^ne_{(k-2n)i}-\sum _{i=1}^ne_{i(k-2n)}\big )\\&+\lambda _{3n+1}^0 (\tau _{11},\ldots ,\tau _{1n},\tau _{21},\ldots ,\tau _{2n},\ldots ,\tau _{n1},\ldots ,\tau _{nn})=0,\\&c- \sum \limits _{k=1}^n\lambda ^0_{k}e_{k}+\sum \limits _{k=n+1}^{2n}\lambda ^0_{k}e_{k-n} -\sum \limits _{k=2n+1}^{3n}\lambda ^0_ke_{k-2n}=0, 1-\lambda ^0_{3n+1}=0,\\&\lambda _{ij}^0A_0+\sum _{r=1}^n\lambda ^r_{ij}A_r\succeq 0, \lambda _{k}^0A_0+\sum _{r=1}^n\lambda ^r_{k}A_r\succeq 0,\\&\lambda ^0_{ij}\in {\mathbb {R}}_+, \lambda ^r_{ij}\in {\mathbb {R}}, \lambda ^0_k\in {\mathbb {R}}_+, \lambda ^r_k\in {\mathbb {R}}, \\&r=1,\ldots ,n, i=1,\ldots , n, j=1,\ldots ,n, k=1,\ldots , 3n+1\Big \}.\nonumber \\ \end{aligned}$$
(DL)

The following result provides a strong duality relation between the adjustable robust two-stage lot-sizing problem (RL) and its semidefinite programming dual problem (DL).

Corollary 4.1

(Strong SDP duality for (RL)) Assume that there exist \({{\widehat{x}}}:=({{\widehat{x}}}_1,\ldots ,{{\widehat{x}}}_n)\in {\mathbb {R}}^n\) and \( {{\widehat{y}}}:=({{\widehat{y}}}_{ij})\in {\mathbb {R}}^{n^2}\) such that \( {{\widehat{y}}}_{ij}\ge 0, i=1,\ldots ,n, j=1,\ldots ,n, 0\le {{\widehat{x}}}_i\le M_i, i=1,\ldots ,n, \) \( u_i-{{\widehat{x}}}_i< \sum _{j=1}^n{{\widehat{y}}}_{ji}-\sum _{j=1}^n{{\widehat{y}}}_{ij}, i=1,\ldots ,n,\) for all \(u:=(u_1,\ldots ,u_n)\in U\). Then, we have

$$\begin{aligned} \inf {(RL)}=\max {(DL)}. \end{aligned}$$
(48)

Proof

As mentioned above, it holds that

$$\begin{aligned} \inf {(RL)}=\inf {(AL)}, \end{aligned}$$
(49)

and the problem (AL) is in the form of (RP) with \(\rho :=1\). Under our assumption, we see that the feasible set of problem (AL) is non-empty and the following cone \(C_0\) is closed (cf. the proof of Proposition 3.1), where

$$\begin{aligned} C_0:=\mathrm{coneco}\big \{&\big (a_{ij}(u),b_{ij}(u), \theta _{ij}\big ),\big (a_{k}(u),b_{k}(u), \theta _{k}\big )\,:\, i=1,\ldots ,n, j=1,\ldots ,n,\\&k=1,\ldots ,3n+1, u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

The closedness of \(C_0\) implies the closedness of \(C_L\), where

$$\begin{aligned} C_L:=\mathrm{coneco}\big \{&\big (a_{ij}(u),b_{ij}(u), \theta _{ij},0_{n^2},\ldots , 0_{n^2}\big ),\big (a_{k}(u),b_{k}(u), \theta _{k},0_{n^2},\ldots , 0_{n^2}\big )\,:\, \\&i=1,\ldots ,n, j=1,\ldots ,n, k=1,\ldots ,3n+1,\\&u:=(u_1,\ldots ,u_s)\in U\big \}. \end{aligned}$$

Invoking now Theorem 3.1 with \(\rho :=1\), we arrive at \( \inf {(AL)}= \max {(DL)}.\) This, together with (49), proves that (48) is valid. \(\square \)

Now, let us apply Corollary 4.1 to find the optimal storage cost of the problem (RL) by solving its corresponding SDP dual problem (DL) in the case where the uncertainty set is \(U:=\{(u_1,\dots ,u_n)\in {\mathbb {R}}^n\,:\, u_1^2+\cdots +u^2_n\le 1\}.\) For simplicity, we choose the capacities of the stores \(M_i:=100, i=1,\ldots ,n\), the unit storage costs \(c_i=\mathrm{C}>0, i=1,\ldots ,n\) and the transport unit costs \(\tau _{ij}:=i+j\) if \(i\ne j\) and \(\tau _{ij}:=0\) if \(i=j\) for \( i=1,\ldots ,n, j=1,\ldots ,n.\)
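Before reporting the numerical results, we sketch in MATLAB how this problem data can be assembled. The Schur-complement description of the unit ball used below is our own illustrative choice of a spectrahedral representation of the form (2), and the variable names are likewise illustrative:

```matlab
% Sketch (illustrative assumption): represent U = { u : ||u||_2 <= 1 } via the
% Schur-complement spectrahedron  [1 u'; u I_n] >= 0, i.e., A_0 = I_{n+1} and
% A_r has ones in positions (1,r+1) and (r+1,1) and zeros elsewhere.
n  = 4;                          % number of stores
A0 = eye(n+1);
A  = cell(n,1);
for r = 1:n
    Ar = zeros(n+1);
    Ar(1,r+1) = 1;
    Ar(r+1,1) = 1;
    A{r} = Ar;
end
% Cost data of the test instance: capacities M_i = 100, storage costs c_i = C
% (C > 0 is left unspecified in the text; 1 below is only a placeholder) and
% transport costs tau_ij = i + j for i ~= j, tau_ii = 0.
C   = 1;
M   = 100*ones(n,1);
c   = C*ones(n,1);
tau = (1:n)' + (1:n);
tau(1:n+1:end) = 0;
% Each block lambda^0*A0 + sum_r lambda^r*A{r} appearing in (DL) is then an
% (n+1)x(n+1) matrix; for n = 4 there are n^2 + 3n + 1 = 29 such 5x5 blocks.
```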

More explicitly, we solve the SDP dual problem (DL) for \(n=2,3,4\) by using the MATLAB toolbox CVX (see, e.g., [22, 23]). The size of the SDP depends on the matrices \(A_r, r=1,\ldots ,n,\) describing the uncertainty set. For instance, when \(n=4\), our procedure involves 29 matrices of size \(5\times 5\). The numerical tests are conducted on a computer with a 3.60GHz Intel(R) Core(TM) i7-4790 and 16.0GB RAM, equipped with MATLAB R2017b. The following table of test results shows the worst-case total storage cost and the corresponding time in seconds for the problem (RL) (Table 1).

Table 1 Worst-case total storage costs

In passing, we note that in the case of a polytope uncertainty set U, the problem (RL) and its dual (DL) both reduce to linear programs. We refer the interested reader to [24] for a dual formulation test and to [16] for a Fourier–Motzkin elimination test on lot-sizing problems.

5 Conclusion and Future Work

In this paper, we have shown how strong duality between an affinely adjustable two-stage robust linear program with a spectrahedron uncertainty set and its dual conic linear program can be obtained by establishing, for the first time, a new version of Farkas’ lemma for a parametric linear inequality system with affinely adjustable variables. We have also established that in the case of an ellipsoidal uncertainty set, our strong duality yields a second-order cone program as an exact dual with the same optimal value as the robust linear program. We have evaluated the efficacy of our main duality theorem by applying it to find the optimal storage cost of an adjustable two-stage lot-sizing problem under a ball uncertainty set via its dual semidefinite program.

Recently, exact conic programming relaxations and duals have been given for various classes of (static) single-stage robust SOS-convex polynomial programs under suitable uncertainty sets [25,26,27]. This has allowed robust solutions of special classes of uncertain convex programs to be found by solving related conic programs. The approach presented in this paper may be extended to study affinely adjustable robust two-stage SOS-convex polynomial programs by way of examining suitable dual conic programs; this will be treated in a forthcoming study.