1 Introduction

We consider linear two-stage stochastic programs of the form

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle \varvec{c}^\top \varvec{x} + \mathbb {E} \left[ Q (\varvec{x}, \varvec{\tilde{\xi }}) \right] \\ \displaystyle \text {subject to} &{} \displaystyle \varvec{A} \varvec{x} = \varvec{b}, \,\, \varvec{x} \in \mathbb {R}^{n_1}_+, \end{array} \end{aligned}$$
(1a)

where the problem data \(\varvec{c} \in \mathbb {R}^{n_1}\), \(\varvec{A} \in \mathbb {R}^{m_1 \times n_1}\) and \(\varvec{b} \in \mathbb {R}^{m_1}\) are deterministic. For a fixed realization \(\varvec{\xi } \in \mathbb {R}^k\) of the random vector \(\varvec{\tilde{\xi }}\), \(Q (\varvec{x}, \varvec{\xi })\) denotes the optimal value of the second-stage problem

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle \varvec{q} (\varvec{\xi })^\top \varvec{y} \\ \displaystyle \text {subject to} &{} \displaystyle \varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} (\varvec{\xi }) \varvec{y} = \varvec{h} (\varvec{\xi }), \,\, \varvec{y} \in \mathbb {R}^{n_2}_+, \end{array} \end{aligned}$$
(1b)

where \(\varvec{q} : \mathbb {R}^k \rightarrow \mathbb {R}^{n_2}\), \(\varvec{T} : \mathbb {R}^k \rightarrow \mathbb {R}^{m_2 \times n_1}\), \(\varvec{W} : \mathbb {R}^k \rightarrow \mathbb {R}^{m_2 \times n_2}\) and \(\varvec{h} : \mathbb {R}^k \rightarrow \mathbb {R}^{m_2}\) depend affinely on \(\varvec{\xi }\). The stochastic program (1) has fixed recourse if the recourse matrix \(\varvec{W}\) does not depend on the realization of \(\varvec{\tilde{\xi }}\), and it displays random recourse otherwise. By convention, the optimal values of (1a) and (1b) are defined to be \(+ \infty \) if the respective problems are infeasible. For a survey of the vast stochastic programming literature, which dates back to the 1950s, we refer to [2, 10, 11].

While stochastic programming problems have long been believed to be difficult to solve, this suspicion was only recently confirmed by the seminal work of Dyer and Stougie [4]. Among other results, that paper argues that calculating the expected recourse value \(\mathbb {E} [Q (\varvec{x}, \varvec{\tilde{\xi }})]\) for a fixed first-stage decision \(\varvec{x}\) is #P-hard even if the stochastic program (1) exhibits fixed recourse and the random vector \({\tilde{\varvec{\xi }}}\) is governed by a uniform distribution supported on \([0, 1]^k\). This result has attracted significant attention in the stochastic programming and robust optimization communities, where it provides a formal justification for developing approximate solution schemes. The proof of this statement studies the expected recourse value \(\mathcal {Q} (\varvec{\alpha }, \beta ) = \mathbb {E} [Q (\varvec{\tilde{\xi }}; \varvec{\alpha }, \beta )]\) of the second-stage problem

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle \varvec{\xi }^\top \varvec{y} - \beta z \\ \displaystyle \text {subject to} &{} \displaystyle \varvec{y} \le \varvec{\alpha } z, \,\, \varvec{y} \in \mathbb {R}^k_+, \,\, z \in [0, 1] \end{array} \end{aligned}$$
(2)

under the uniform distribution supported on \([0,1]^k\), where \({\varvec{\alpha }}\in \mathbb R^k_+\) and \(\beta \in \mathbb R_+\) are fixed parameters. Note that problem (2) does not involve any first-stage decisions. It is claimed in [4] that \(1 - \mathcal {Q} (\varvec{\alpha }, \beta )\) coincides with the volume of the knapsack polytope \(P (\varvec{\alpha }, \beta ) = \{ \varvec{\xi } \in [0,1]^k \, : \, \varvec{\alpha }^\top \varvec{\xi } \le \beta \}\), the calculation of which is known to be #P-hard [5]. We now show that this claim is false.

Proposition 1

For \(\varvec{\alpha } = \mathbf {e}\) and \(\beta = 1\), where \(\mathbf {e} \in \mathbb {R}^k\) is the vector of all ones, the expected recourse value of problem (2) satisfies

$$\begin{aligned} \mathcal {Q} (\varvec{\alpha }, \beta ) = \frac{k-2}{2} + \frac{1}{(k+1)!}. \end{aligned}$$

Proof

It is shown in [4] that for a given realization \(\varvec{\xi }\) of the random vector \(\varvec{\tilde{\xi }}\), the unique optimal second-stage decisions are \(\varvec{y} = \varvec{0}\) and \(z = 0\) if \(\varvec{\alpha }^\top \varvec{\xi } \le \beta \) and \(\varvec{y} = \varvec{\alpha }\) and \(z = 1\) otherwise. We thus conclude that the expected recourse value of problem (2) satisfies

$$\begin{aligned} \mathcal {Q} (\varvec{\alpha }, \beta )&= \mathbb E\left[ \max \left\{ \mathbf e^\top \varvec{\tilde{\xi }}-1 ,0\right\} \right] \\&= \mathbb E\left[ \mathbf e^\top \varvec{\tilde{\xi }}-1 \right] + \mathbb E\left[ \max \left\{ 1- \mathbf e^\top \varvec{\tilde{\xi }},0\right\} \right] \\&= \frac{k-2}{2} +\int _0^1\mathrm{d}\xi _1\int _0^{1-\xi _1}\mathrm{d}\xi _2\cdots \int _0^{1-\sum _{i=1}^{k-1}\xi _i}\mathrm{d}\xi _k \left( 1- \mathbf e^\top \varvec{\xi }\right) \\&= \frac{k-2}{2} +\frac{1}{(k+1)!}, \end{aligned}$$

where the integral in the penultimate line can be viewed as the normalization constant of a Dirichlet distribution of order \(k+1\) with parameters \((\mathbf e^\top ,2)^\top \) and thus evaluates to \(1/(k+1)!\), see [1]. \(\square \)

Proposition 1 implies that \(1 - \mathcal {Q} (\mathbf {e}, 1)\) is strictly negative for \(k\ge 4\) and therefore cannot equal the volume of the polytope \(P(\mathbf e,1)\), which is non-negative.
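Proposition 1 is also easy to verify numerically. The following minimal sketch (ours, for illustration only; the dimension k and the sample size are arbitrary choices) compares the closed-form value with a Monte Carlo estimate of \(\mathbb E[ \max \{ \mathbf e^\top \varvec{\tilde{\xi }}-1 ,0\} ]\) under the uniform distribution on \([0,1]^k\):

```python
import math
import numpy as np

k, n_samples = 5, 1_000_000
rng = np.random.default_rng(seed=0)
xi = rng.random((n_samples, k))                      # uniform samples from [0, 1]^k
mc_estimate = np.maximum(xi.sum(axis=1) - 1.0, 0.0).mean()
closed_form = (k - 2) / 2 + 1 / math.factorial(k + 1)
print(mc_estimate, closed_form)                      # the two values agree up to sampling error
```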

The remainder of this paper develops as follows. In Sect. 2 we correct the proof of Dyer and Stougie and show that calculating the expected recourse value of a linear two-stage stochastic program with fixed recourse is indeed #P-hard. Moreover, we strengthen the original result by showing that approximating this expected recourse value within an exponentially small error remains #P-hard. Arguments similar to those in [4] then allow us to conclude that finding an approximately optimal solution is #P-hard as well. Section 3 shows that linear two-stage stochastic programs with random recourse are #P-hard to approximate within an exponentially small error, and NP-hard to approximate within an inverse polynomial error. Unless \(\mathrm{P} = \mathrm{NP}\), these stronger results preclude the existence of a fully polynomial-time approximation scheme (FPTAS) for the generic linear two-stage stochastic program (1), see [6]. We provide concluding remarks in Sect. 4.

Notation Vectors and matrices are denoted by bold lower-case and upper-case letters, respectively, while scalars are printed in regular font. We notationally highlight random objects by a tilde sign. We use \(\mathbf {e}_i\) and \(\mathbf {e}\) to refer to the i-th canonical basis vector and the vector of ones, respectively. In both cases, the dimension will be clear from the context. We define the indicator function \(\mathbb {I}_{[ \mathcal {E} ]}\) as \(\mathbb {I}_{[ \mathcal {E} ]} = 1\) if the logical expression \(\mathcal {E}\) holds true and 0 otherwise. For any matrix \({\varvec{M}}\in \mathbb R^{m\times n}\), we define its norm by \(\Vert {\varvec{M}}\Vert _1=\max _{{\varvec{x}}\in \mathbb R^n}\{\Vert {\varvec{M}}{\varvec{x}}\Vert _1~:~\Vert {\varvec{x}}\Vert _1=1\}\).

2 Stochastic programs with fixed recourse

In this section we correct the #P-hardness proof of Dyer and Stougie for linear two-stage stochastic programs with fixed recourse. To this end, we show first that calculating the volume of a knapsack polytope \(P (\varvec{\alpha }, \beta ) = \{ \varvec{\xi } \in [0,1]^k \, : \, \varvec{\alpha }^\top \varvec{\xi } \le \beta \}\) to within a sufficiently high accuracy is #P-hard.

Lemma 1

Computing the volume of the knapsack polytope \(P (\varvec{\alpha }, \beta )\) for \({\varvec{\alpha }}\in \mathbb R_+^k\) and \(\beta \in \mathbb R_+\) to within an absolute accuracy of \(\epsilon \) is #P-hard whenever

$$\begin{aligned} \epsilon <\frac{1}{2k!(\Vert {\varvec{\alpha }}\Vert _1+2)^k(k+1)^{k+1}\prod _{j=1}^k\alpha _j}. \end{aligned}$$
(3)

The proof of Lemma 1 relies on a reduction from the #Parity problem, which is known to be #P-hard [5, Lemma 1].

#Parity Problem. Instance: non-negative integers \(\alpha _1, \ldots , \alpha _k\) and \(\beta \). Goal: compute \(D = \sum _{\varvec{v} \in \{0,1\}^k : \, \varvec{\alpha }^\top \varvec{v} \le \beta } (-1)^{\mathbf {e}^\top \varvec{v}}\), that is, the number of binary vectors \(\varvec{v}\) with \(\varvec{\alpha }^\top \varvec{v} \le \beta \) and an even number of ones minus the number of such vectors with an odd number of ones.

Proof of Lemma 1

Consider an instance of the #Parity problem with input \(\varvec{\alpha } \in \mathbb {N}^{k}\) and \({\beta }\in \mathbb N\), and select any non-negative \(\epsilon \) satisfying (3). If \({\varvec{\alpha }}^\top \mathbf e< \beta \), then the knapsack polytope \(P({\varvec{\alpha }},\beta )\) reduces to the unit hypercube, in which case the solution to the #Parity problem is \(D=0\). From now on we may thus focus on the case \({\varvec{\alpha }}^\top \mathbf e\ge \beta \).

Introduce knapsack budgets \(\gamma _{j}=(\beta +\frac{j}{k+1})\), \( j=0,\dots ,k\), and define the Vandermonde matrix

$$\begin{aligned} {\varvec{F}}=\begin{bmatrix} (\gamma _0)^k&(\gamma _0)^{k-1}&\cdots&(\gamma _0)^0 \\ \vdots&\vdots&\ddots&\vdots \\ (\gamma _k)^k&(\gamma _k)^{k-1}&\cdots&(\gamma _k)^0 \end{bmatrix}. \end{aligned}$$

Note that

$$\begin{aligned} \text {det}({\varvec{F}})~=\prod _{0\le i<j\le k} (\gamma _j-\gamma _i)~ \ne ~ 0 \end{aligned}$$

because all knapsack budgets are mutually distinct. Thus, \({\varvec{F}}\) is invertible. We also find

$$\begin{aligned} \Vert {\varvec{F}}^{-1}\Vert _1 ~=~ \max _{i=0,\dots ,k} ~ \prod _{\begin{array}{c} j=0\\ j\ne i \end{array}}^k\frac{1+|\gamma _j|}{|\gamma _j-\gamma _i|} ~\le ~ \frac{(2+\beta )^k}{(\frac{1}{k+1})^k}~ \le ~(\Vert {\varvec{\alpha }}\Vert _1+2)^k(k+1)^k, \end{aligned}$$
(4)

where the first equality follows from [7, Theorem 1] and the observation that \(\Vert {\varvec{F}}^{-1}\Vert _1= \Vert (\varvec{F}^{-1})^\top \Vert _\infty \), which holds since the 1-norm of a matrix is the maximum of its column sums and the \(\infty \)-norm is the maximum of its row sums. The last inequality follows from the fact that \({\varvec{\alpha }}^\top \mathbf e\ge \beta \). Next, set \({\varvec{g}}=[\mathrm{Vol}(P({\varvec{\alpha }},\gamma _{0})), \ldots , \mathrm{Vol}(P({\varvec{\alpha }},\gamma _{k}))]^\top \), which contains the volumes of the \(k+1\) knapsack polytopes with budgets \(\gamma _0,\ldots ,\gamma _k\). As \({\varvec{F}}\) is invertible, the system of linear equations

$$\begin{aligned} {\varvec{F}}{\varvec{x}}=\bigg (k!\prod _{j=1}^k\alpha _j\bigg )\, {\varvec{g}} \end{aligned}$$

has a unique solution \({\varvec{x}}^\star \in \mathbb R^{k+1}\). We know from the proof of Theorem 1 in [5] that \({\varvec{x}}^\star \) is in fact an integer vector and that its first element coincides with the solution D of the #Parity problem.

In what follows we assume that the vector \({\varvec{g}}\) of the different knapsack polytopes’ exact volumes is not available but that we can compute an approximation \({\varvec{g}}_\epsilon ={\varvec{g}}+\epsilon {\varvec{d}}\) with \({\varvec{d}}\in [-1,1]^{k+1}\). Let \(\varvec{x}^\star _\epsilon \) be the solution of the perturbed system

$$\begin{aligned} {\varvec{F}}{{\varvec{x}}}=\bigg (k!\prod _{j=1}^k\alpha _j\bigg ) {\varvec{g}}_\epsilon , \end{aligned}$$
(5)

and note that \(\varvec{x}^\star _\epsilon \) can be computed efficiently by Gaussian elimination, for instance. By left-multiplying the above equation with \({\varvec{F}}^{-1}\) we obtain

$$\begin{aligned} {\varvec{x}}^\star _\epsilon =\bigg (k!\prod _{j=1}^k\alpha _j\bigg ){\varvec{F}}^{-1}{\varvec{g}} + \epsilon \bigg (k!\prod _{j=1}^k\alpha _j\bigg ){\varvec{F}}^{-1}{\varvec{d}} = {\varvec{x}}^\star + \epsilon \bigg (k!\prod _{j=1}^k\alpha _j\bigg ){\varvec{F}}^{-1}{\varvec{d}}, \end{aligned}$$

where the second equality follows from the definition of \({\varvec{x}}^\star \). Thus, we find

$$\begin{aligned} \left\| \varvec{x}^\star _\epsilon - {\varvec{x}}^\star \right\| _1 = \epsilon \bigg (k!\prod _{j=1}^k\alpha _j\bigg ) \left\| {\varvec{F}}^{-1}{\varvec{d}}\right\| _1 < \frac{1}{2}\frac{\Vert {\varvec{F}}^{-1}\Vert _1\Vert {\varvec{d}}\Vert _1}{(\Vert \varvec{\alpha }\Vert _1+2)^k(1+k)^{k+1}}\le \frac{\Vert {\varvec{d}}\Vert _1}{2(k+1)} \le \frac{1}{2}, \end{aligned}$$
(6)

where the first inequality holds due to our choice of \(\epsilon \) and the definition of the matrix norm, the second inequality follows from (4), and the third inequality holds because \({\varvec{d}}\in [-1,1]^{k+1}\). As any two corresponding components of \(\varvec{x}^\star _\epsilon \) and \({\varvec{x}}^\star \) differ by strictly less than \(\frac{1}{2}\) and as the first component of \({\varvec{x}}^\star \) coincides with D, we can compute D by rounding the first component of \(\varvec{x}^\star _\epsilon \) to the nearest integer. In summary, we have found the following procedure for solving the #Parity problem: (i) construct \({\varvec{F}}\), (ii) determine \({\varvec{g}}_\epsilon \) by calling \(k+1\) times an algorithm for computing the volume of a knapsack polytope to within accuracy \(\epsilon \), (iii) solve the system of linear equations (5) to obtain \(\varvec{x}^\star _\epsilon \), and (iv) round the first component of \(\varvec{x}^\star _\epsilon \) to the nearest integer. All operations with the exception of the volume calculations can be carried out in time polynomial in the bit length of \({\varvec{\alpha }}\) and \(\beta \). Thus, if we could approximate the volume of \(P({\varvec{\alpha }},\beta )\) for \({\varvec{\alpha }}\in \mathbb R^k_+\) and \(\beta \in \mathbb R_+\) in time polynomial in the bit length of \({\varvec{\alpha }}\) and \(\beta \), then we could solve the #Parity problem efficiently. We conclude that approximating the volume of a knapsack polytope is at least as hard as solving the #P-hard #Parity problem. \(\square \)
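The reduction described in this proof can be summarized in a few lines of code. The sketch below is merely illustrative; `approx_volume` stands for a hypothetical oracle that returns the volume of a knapsack polytope \(P({\varvec{\alpha }},\gamma )\) to within an absolute accuracy \(\epsilon \) satisfying (3).

```python
import math
import numpy as np

def solve_parity(alpha, beta, approx_volume):
    """Reduce the #Parity problem to approximate volume computations (proof of Lemma 1)."""
    k = len(alpha)
    gammas = [beta + j / (k + 1) for j in range(k + 1)]          # knapsack budgets
    F = np.vander(np.array(gammas), N=k + 1, increasing=False)   # Vandermonde matrix
    scale = math.factorial(k) * math.prod(alpha)
    g_eps = np.array([approx_volume(alpha, gamma) for gamma in gammas])
    x_eps = np.linalg.solve(F, scale * g_eps)                    # perturbed system (5)
    return round(x_eps[0])                                       # within 1/2 of D by (6), so rounding recovers D
```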

In the remainder we denote by \({\mathcal {Q}}(\varvec{\alpha },\beta )=\mathbb {E} [Q (\varvec{\tilde{\xi }}; \varvec{\alpha }, \beta )]\) the expected recourse value of the second-stage problem (2). We first show that the volume of the knapsack polytope \(P (\varvec{\alpha }, \beta )\) can be expressed in terms of the partial derivative \(\partial {\mathcal {Q}} (\varvec{\alpha }, \beta )/\partial \beta \), which we abbreviate as \({\mathcal {Q}}'(\varvec{\alpha },\beta )\).

Lemma 2

For every \(\varvec{\alpha }\in \mathbb R^k_+\) and \(\beta \in \mathbb R_+\) we have \(\mathrm{Vol}(P({\varvec{\alpha }},\beta ))= {\mathcal {Q}}' (\varvec{\alpha }, \beta )+1\).

Proof

Recall that the unique optimal second-stage decisions of problem (2) are \(\varvec{y} = \varvec{0}\) and \(z = 0\) if \(\varvec{\alpha }^\top \varvec{\xi } \le \beta \) and \(\varvec{y} = \varvec{\alpha }\) and \(z = 1\) otherwise, see [4]. Thus, we have \({\mathcal {Q}}({\varvec{\alpha }},\beta )= \mathbb E[ \max \{ {\varvec{\alpha }}^\top \varvec{\tilde{\xi }} -\beta ,0\}]\). As \(\mathbb E[\varvec{\tilde{\xi }}]= \mathbf e/2\) and because the relation

$$\begin{aligned} \max \{\gamma -\beta ,0\}=\gamma -\int _0^\beta {\mathbb I}_{[\gamma \ge y]}\, \mathrm{d}y \end{aligned}$$

holds for any \(\beta \in \mathbb R\) and \(\gamma \in \mathbb R_+\), we find

$$\begin{aligned} {\mathcal {Q}}({\varvec{\alpha }},\beta )~&= \frac{{\varvec{\alpha }}^\top \mathbf e}{2} - \mathbb E\left[ \int _0^\beta \mathbb I_{\left[ {\varvec{\alpha }}^\top \varvec{\tilde{\xi }} \ge y\right] }\,\mathrm{d} y \right] \\&= \frac{{\varvec{\alpha }}^\top \mathbf e}{2} - \int _0^\beta \mathbb E\left[ \mathbb I_{\left[ {\varvec{\alpha }}^\top \varvec{\tilde{\xi }}\ge y\right] }\right] \,\mathrm{d} y \\&= \frac{{\varvec{\alpha }}^\top \mathbf e}{2} - \int _0^\beta [1-\mathrm{Vol}(P({\varvec{\alpha }},y))] \,\mathrm{d} y, \end{aligned}$$

where the second equality follows from Fubini’s theorem. Note that the volume \(\mathrm{Vol}(P({\varvec{\alpha }},\beta ))\) of the knapsack polytope changes continuously with \(\beta \in \mathbb R\), and thus \({\mathcal {Q}}({\varvec{\alpha }},\beta )\) is continuously differentiable in \(\beta \). The claim then follows by differentiating both sides of the above relation with respect to \(\beta \). \(\square \)
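As a simple illustration of Lemma 2, consider \(\varvec{\alpha } = \mathbf {e}\) and \(\beta \in [0,1]\), so that \(\mathrm{Vol}(P(\mathbf e,\beta )) = \beta ^k/k!\). The relation derived in the proof then yields

$$\begin{aligned} {\mathcal {Q}}(\mathbf e,\beta ) = \frac{k}{2} - \int _0^\beta \left[ 1-\frac{y^k}{k!}\right] \,\mathrm{d} y = \frac{k}{2} - \beta + \frac{\beta ^{k+1}}{(k+1)!}, \end{aligned}$$

so that \({\mathcal {Q}}'(\mathbf e,\beta ) = \beta ^k/k! - 1 = \mathrm{Vol}(P(\mathbf e,\beta )) - 1\), in line with Lemma 2. Evaluating \({\mathcal {Q}}(\mathbf e,\beta )\) at \(\beta = 1\) also recovers the value reported in Proposition 1.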

We are now armed to prove the #P-hardness of computing \({\mathcal {Q}}(\varvec{\alpha },\beta )\) to high accuracy. To this end, we show that calculating \({\mathcal {Q}}(\varvec{\alpha },\beta )\) to high accuracy allows us to closely approximate its derivative \({\mathcal {Q}}' (\varvec{\alpha },\beta )\) and thus, by virtue of Lemma 2, the volume of the knapsack polytope \(\mathrm{Vol}(P({\varvec{\alpha }},\beta ))\). The #P-hardness then follows from Lemma 1.

Theorem 1

Computing the expected recourse value \({\mathcal {Q}}(\varvec{\alpha },\beta )\) of the second-stage problem (2) for \({\varvec{\alpha }}\in \mathbb R^k_+\) and \(\beta \in \mathbb R_+\) to within an absolute accuracy of \(\delta \) is #P-hard whenever

$$\begin{aligned} \delta < \left( \frac{\alpha _k}{2k!(1+\alpha _k)(\Vert {\varvec{\alpha }}\Vert _1+2)^k(k+1)^{k+1}\prod _{j=1}^k\alpha _j}\right) ^2. \end{aligned}$$
(7)

Proof

Consider a knapsack polytope \(P({\varvec{\alpha }},\beta )\) encoded by the parameters \({\varvec{\alpha }}\in \mathbb R^k_+\) and \(\beta \in \mathbb R_+\), where we assume without loss of generality that \(k \ge 2\). Select any non-negative \(\delta \) satisfying (7) and set \(h=2\sqrt{\delta }\). In the following, we denote by \({\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta )\) and \({\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta +h)\) any \(\delta \)-approximations of the expected recourse values \({\mathcal {Q}}({\varvec{\alpha }},\beta )\) and \({\mathcal {Q}}({\varvec{\alpha }},\beta +h)\), respectively. Then, we obtain

$$\begin{aligned}&\displaystyle \left| \frac{{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta )}{h}+1-\mathrm{Vol}(P({\varvec{\alpha }},\beta ))\right| \\&\quad = \displaystyle \left| \frac{{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta )}{h}-{\mathcal {Q}}'({\varvec{\alpha }},\beta )\right| \\&\quad \le \displaystyle \left| \frac{{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta )}{h}-\frac{{\mathcal {Q}}({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}({\varvec{\alpha }},\beta )}{h}\right| \\&\qquad +\left| \frac{{\mathcal {Q}}({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}({\varvec{\alpha }},\beta )}{h}-{\mathcal {Q}}'({\varvec{\alpha }},\beta )\right| \\&\quad \le \displaystyle \frac{2\delta }{h}+ \frac{h}{2}\sup _{\ell \in [0,h]} \left| {\mathcal {Q}}''({\varvec{\alpha }},\ell )\right| ~ = ~ \sqrt{\delta }\left( 1+\sup _{\ell \in [0,h]}\left| {\mathcal {Q}}''({\varvec{\alpha }},\ell )\right| \right) , \end{aligned}$$

where the first equality holds due to Lemma 2, and the second inequality follows from Taylor’s theorem. To derive an upper bound on the last expression, we observe that \({\mathcal {Q}}''({\varvec{\alpha }},\beta )=\partial \mathrm{Vol}(P({\varvec{\alpha }},\beta ))/\partial \beta \ge 0\) by virtue of Lemma 2. As shown in [5], the volume of the knapsack polytope \(P({\varvec{\alpha }},\beta )\) admits the following closed-form expression.

$$\begin{aligned} \mathrm{Vol}(P({\varvec{\alpha }},\beta ))=\frac{\sum _{{\varvec{v}}\in \{0,1\}^k} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}^\top {\varvec{v}}\}^k}{k!\prod _{j=1}^k\alpha _j} \end{aligned}$$
(8)

To simplify the following derivation, we introduce the shorthand notation \({\varvec{\alpha }}_{k-1}=(\alpha _1,\ldots , \alpha _{k-1})^\top \). Differentiating the right-hand side of (8) with respect to \(\beta \) yields

$$\begin{aligned} \begin{array}{cll} &{}\displaystyle \mathcal {Q}'' ({\varvec{\alpha }},\beta )\\ &{}\quad =\displaystyle \frac{\sum _{{\varvec{v}}\in \{0,1\}^k} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}^\top {\varvec{v}}\}^{k-1}}{(k-1)!\prod _{j=1}^k\alpha _j}\\ &{}\quad =\displaystyle \frac{\sum _{{\varvec{v}}\in \{0,1\}^k:v_k=0} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}^\top {\varvec{v}}\}^{k-1}+\sum _{{\varvec{v}}\in \{0,1\}^k:v_k=1} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}^\top {\varvec{v}}\}^{k-1}}{(k-1)!\prod _{j=1}^k\alpha _j}\\ &{}\quad =\displaystyle \frac{\sum _{{\varvec{v}}\in \{0,1\}^{k-1}} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}_{k-1}^\top {\varvec{v}}\}^{k-1}-\sum _{{\varvec{v}}\in \{0,1\}^{k-1}} (-1)^{\mathbf e^\top {\varvec{v}}}\max \{0, \beta -{\varvec{\alpha }}_{k-1}^\top {\varvec{v}}-\alpha _k\}^{k-1}}{(k-1)!\prod _{j=1}^k\alpha _j}\\ &{}\quad =\displaystyle \frac{1}{\alpha _k}\left( \mathrm{Vol}(P({\varvec{\alpha }}_{k-1},\beta ))-\mathrm{Vol}(P({\varvec{\alpha }}_{k-1},\beta -\alpha _k))\right) ~ \le ~ \frac{1}{\alpha _k}. \end{array} \end{aligned}$$

Since this upper bound is independent of \(\beta \), we find that \(\sup _{\ell \in [0,h]}{\mathcal {Q}}''({\varvec{\alpha }},\ell )\le \frac{1}{\alpha _k}\). In summary, we have thus shown that

$$\begin{aligned} \left| \frac{{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta +h)-{\mathcal {Q}}_\delta ({\varvec{\alpha }},\beta )}{h}+1-\mathrm{Vol}(P({\varvec{\alpha }},\beta ))\right| ~\le ~ \sqrt{\delta }\left( 1+\frac{1}{\alpha _k}\right) ~ <~ \epsilon , \end{aligned}$$

where \(\epsilon = 1/(2k!(\Vert {\varvec{\alpha }}\Vert _1+2)^k(k+1)^{k+1}\prod _{j=1}^k\alpha _j)\). Thus, we can compute the volume of the knapsack polytope \(P (\varvec{\alpha }, \beta )\) to within an absolute accuracy of \(\epsilon \) by computing the expected recourse values \({\mathcal {Q}}({\varvec{\alpha }},\beta )\) and \({\mathcal {Q}}({\varvec{\alpha }},\beta + h)\) to within accuracy \(\delta \), dividing their difference by h and adding 1. With the exception of computing the expected recourse values, all of these operations can be carried out in time polynomial in the bit length of the instance of (1) that corresponds to problem (2). We conclude that computing the expected recourse value \({\mathcal {Q}}({\varvec{\alpha }},\beta )\) to within accuracy \(\delta \) is at least as hard as computing the volume of a knapsack polytope \(P(\varvec{\alpha }, \beta )\) to within accuracy \(\epsilon \). The claim then follows from Lemma 1. \(\square \)
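The two ingredients of this argument are easily prototyped. The sketch below (ours, for illustration only) implements the closed-form expression (8) and the finite-difference recovery of the volume; `Q_delta` stands for a hypothetical oracle that returns \(\delta \)-accurate values of \({\mathcal {Q}}\).

```python
import math
from itertools import product

def knapsack_volume(alpha, beta):
    """Closed-form expression (8) for Vol(P(alpha, beta))."""
    k = len(alpha)
    total = sum(
        (-1) ** sum(v) * max(0.0, beta - sum(a * vi for a, vi in zip(alpha, v))) ** k
        for v in product((0, 1), repeat=k)
    )
    return total / (math.factorial(k) * math.prod(alpha))

def volume_from_Q_oracle(alpha, beta, Q_delta, delta):
    """Recover Vol(P(alpha, beta)) from two delta-accurate evaluations of Q (proof of Theorem 1)."""
    h = 2 * math.sqrt(delta)
    return (Q_delta(alpha, beta + h) - Q_delta(alpha, beta)) / h + 1.0
```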

Since the expected recourse value of problem (2) can be computed through a linear two-stage stochastic program (1) with fixed recourse, we conclude that approximating the optimal value of a linear two-stage stochastic program with fixed recourse to high accuracy is #P-hard as well.

Theorem 1 extends to the problem of determining an approximately optimal decision to problem (1). To this end, we call a feasible solution \(\varvec{x}\) to problem (1) \(\epsilon \)-optimal if \(| f^\star - ( \varvec{c}^\top \varvec{x} + \mathbb {E} [ Q (\varvec{x}, \varvec{\tilde{\xi }}) ])| \, / \max \{ |f^\star |, 1 \} \le \epsilon \), where \(f^\star \) denotes the optimal value of problem (1).

For fixed \(c \in \mathbb {R}\), we consider the following instance of problem (1) with fixed recourse:

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle -c x + \mathbb {E} \left[ Q (x, \varvec{\tilde{\xi }}; \varvec{\alpha }, \beta ) \right] \\ \displaystyle \text {subject to} &{} \displaystyle x \in [0,1], \end{array} \end{aligned}$$
(9)

where \(\varvec{\tilde{\xi }}\) follows a uniform distribution supported on \([0,1]^k\), and for a fixed realization \(\varvec{\xi }\) the recourse function \(Q (x, \varvec{\xi }; \varvec{\alpha }, \beta )\) is the optimal value of the second-stage problem

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle \varvec{\xi }^\top \varvec{y} - \beta z \\ \displaystyle \text {subject to} &{} \displaystyle \varvec{y} \le \varvec{\alpha } z, \,\, \varvec{y} \in \mathbb {R}^k_+, \,\, 0 \le z \le x. \end{array} \end{aligned}$$

As before, we restrict ourselves in the following to the nontrivial case where \(\varvec{\alpha }^\top \mathbf {e} \ge \beta \).

Theorem 2

Determining an \(\epsilon \)-optimal decision to the linear two-stage stochastic program (9) with fixed recourse is #P-hard whenever \(\epsilon < \delta / (8 \max \{ 1, \varvec{\alpha }^\top \mathbf {e} - \beta \})\) for \(\delta \) defined in Theorem 1.

Proof

Note that \(\mathbb {E} \left[ Q (x, \varvec{\tilde{\xi }}; \varvec{\alpha }, \beta ) \right] = \mathcal {Q} (\varvec{\alpha }, \beta ) x\), where \(\mathcal {Q} (\varvec{\alpha }, \beta )\) denotes the expected recourse value of the second-stage problem (2). Thus, the optimal solution \(x^\star \) and the optimal value \(f^\star \) of problem (9) satisfy \(x^\star = 0\) and \(f^\star = 0\) if \(c > \mathcal {Q} (\varvec{\alpha }, \beta )\), \(x^\star \in [0,1]\) and \(f^\star = 0\) if \(c = \mathcal {Q} (\varvec{\alpha }, \beta )\), and \(x^\star = 1\) and \(f^\star = \mathcal {Q} (\varvec{\alpha }, \beta ) - c\) if \(c < \mathcal {Q} (\varvec{\alpha }, \beta )\). Assume now that an \(\epsilon \)-optimal decision x to problem (9) could be found in polynomial time, and consider the following bisection search:

1. Set \([\underline{c}, \overline{c}] = [0, \varvec{\alpha }^\top \mathbf {e} - \beta ]\).

2. Find an \(\epsilon \)-optimal solution x to problem (9) with \(c = (\underline{c} + \overline{c}) / 2\).

3. Set \(\overline{c} = c\) if \(x < \frac{1}{2}\) and set \(\underline{c} = c\) otherwise.

4. Go back to Step 2 as long as \(\overline{c} - \underline{c} > \delta \).

We claim that \(\underline{c} - \frac{\delta }{2} \le \mathcal {Q} (\varvec{\alpha }, \beta ) \le \overline{c} + \frac{\delta }{2}\) in every iteration of this algorithm. This is certainly the case in the first iteration. Assume now that the condition is satisfied in the i-th iteration. We then distinguish the two cases \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | > \frac{\delta }{2}\) and \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | \le \frac{\delta }{2}\).

Assume first that \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | > \frac{\delta }{2}\), and let x be an \(\epsilon \)-optimal solution to problem (9). Since \(c, \mathcal {Q} (\varvec{\alpha }, \beta ) \in [0, \varvec{\alpha }^\top \mathbf {e} - \beta ]\), we have \({\max \{ 1, | \mathcal {Q} (\varvec{\alpha }, \beta ) - c | x^\star \}} \le \max \{ 1, \varvec{\alpha }^\top \mathbf {e} - \beta \}\) and thus

$$\begin{aligned} \frac{\left| (\mathcal {Q} (\varvec{\alpha }, \beta ) - c) (x^\star - x) \right| }{{\max \{ 1, | \mathcal {Q} (\varvec{\alpha }, \beta ) - c | x^\star \}}} \le \epsilon&\Longrightarrow \,\, \left| \mathcal {Q} (\varvec{\alpha }, \beta ) - c \right| \left| x^\star - x \right| \le \epsilon \max \{ 1, \varvec{\alpha }^\top \mathbf {e} - \beta \} \\&\Longleftrightarrow \,\, \left| x^\star - x \right| \le \frac{\epsilon \max \{ 1, \varvec{\alpha }^\top \mathbf {e} - \beta \}}{\left| \mathcal {Q} (\varvec{\alpha }, \beta ) - c \right| } \\&\Longrightarrow \,\, \left| x^\star - x \right| \le \frac{1}{4}, \end{aligned}$$

where the last implication follows from our assumption that \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | > \frac{\delta }{2}\) and the definition of \(\epsilon \). Thus, \(x < \frac{1}{2}\) implies that \(x^\star = 0\), that is, \(c > \mathcal {Q} (\varvec{\alpha }, \beta )\), whereas \(x \ge \frac{1}{2}\) implies that \(x^\star = 1\) and therefore \(c < \mathcal {Q} (\varvec{\alpha }, \beta )\). In either case, we have \(\underline{c} - \frac{\delta }{2} \le \mathcal {Q} (\varvec{\alpha }, \beta ) \le {\overline{c} + \frac{\delta }{2}}\) in the \((i+1)\)-th iteration. Note that the case \(c = \mathcal {Q} (\varvec{\alpha }, \beta )\) is excluded since we assumed that \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | > \frac{\delta }{2}\).

Assume now that \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - c | \le \frac{\delta }{2}\). If we set \(\overline{c} = c\) in the i-th iteration, then \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - \overline{c} | \le \frac{\delta }{2}\) in the \((i+1)\)-th iteration, that is, \(\mathcal {Q} (\varvec{\alpha }, \beta ) \le \overline{c} + \frac{\delta }{2}\), while \(\underline{c}\) remains unchanged. Likewise, if we set \(\underline{c} = c\), then \(| \mathcal {Q} (\varvec{\alpha }, \beta ) - \underline{c} | \le \frac{\delta }{2}\) in the next iteration, that is, \(\underline{c} - \frac{\delta }{2} \le \mathcal {Q} (\varvec{\alpha }, \beta )\), while \(\overline{c}\) remains unchanged. Thus, the condition \(\underline{c} - \frac{\delta }{2} \le \mathcal {Q} (\varvec{\alpha }, \beta ) \le \overline{c} + \frac{\delta }{2}\) is preserved in both cases. \(\square \)
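For concreteness, the bisection search used in this proof can be sketched as follows (illustrative only; `eps_optimal_x` stands for a hypothetical oracle returning an \(\epsilon \)-optimal first-stage decision of problem (9) for a given cost coefficient c):

```python
def bisect_recourse_value(alpha, beta, delta, eps_optimal_x):
    """Bisection search from the proof of Theorem 2."""
    lo, hi = 0.0, sum(alpha) - beta      # initial interval [0, alpha^T e - beta]
    while hi - lo > delta:
        c = (lo + hi) / 2
        x = eps_optimal_x(c)             # eps-optimal solution of (9) with cost coefficient c
        if x < 0.5:
            hi = c                       # preserves the invariant Q(alpha, beta) <= hi + delta/2
        else:
            lo = c                       # preserves the invariant lo - delta/2 <= Q(alpha, beta)
    return lo, hi                        # Q(alpha, beta) lies in [lo - delta/2, hi + delta/2]
```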

Another consequence of Theorem 1 is that computing the expected value of the non-negative part of a linear combination of uniformly distributed random variables is #P-hard.

Corollary 1

For fixed values \(\varvec{\alpha } \in \mathbb {R}^k_+\) and \(\beta \in \mathbb {R}_+\), approximating the expectation

$$\begin{aligned} \mathbb {E} \left[ \max \left\{ \varvec{\alpha }^\top \varvec{\tilde{\xi }} - \beta , 0 \right\} \right] \end{aligned}$$

to within an absolute accuracy \(\delta \) that satisfies the condition of Theorem 1 is #P-hard even if \(\varvec{\tilde{\xi }}\) follows the uniform distribution supported on \([0, 1]^k\).

3 Stochastic programs with random recourse

We now show that approximately computing the optimal value of the two-stage stochastic program (1) with random recourse is strongly #P-hard, that is, the problem remains #P-hard under a unary encoding [6]. Our proof relies on the auxiliary result that calculating the volume of an order polytope to within high accuracy is #P-hard. For a set \(S = \{ 1, \ldots , k \}\) and a partial order \(\preceq \) defined on S, the order polytope is \(P (S; \preceq ) = \{ \varvec{\xi } \in [0,1]^k \, : \, \xi _i \le \xi _j \,\, \forall i, j \in S, \, i \preceq j \}\).

Lemma 3

Computing the volume of the order polytope \(P (S; \preceq )\) to within an absolute accuracy of \(\epsilon \) is #P-hard whenever \(\epsilon < \frac{1}{2k!}\).

The proof of Lemma 3 constructs a reduction from the #LinearExtension problem, whose #P-hardness has been established in [3].

#LinearExtension Problem. Instance: a finite set \(S = \{1, \ldots , k\}\) and a partial order \(\preceq \) on S. Goal: compute the number \(N (S; \preceq )\) of linear extensions of S under \(\preceq \).

We remind the reader that a linear extension of S under \(\preceq \) is a permutation \(i_1, \ldots , i_k\) of the elements in S that satisfies \(\ell \le \ell '\) whenever \(i_{\ell } \preceq i_{\ell '}\).

Proof of Lemma 3

Fix an instance of the #LinearExtension problem with input S and \(\preceq \), and let \(\vartheta \) be an approximation of \(\mathrm{Vol} (P (S; \preceq ))\) to within an absolute accuracy of \(\epsilon \in [0, \frac{1}{2k!})\). It is shown in [3] that the number of linear extensions \(N (S; \preceq )\) of S under \(\preceq \) satisfies \(N (S; \preceq )=k! \, \mathrm{Vol} (P(S; \preceq ))\). Since \(k! \, | \vartheta - \mathrm{Vol} (P(S; \preceq )) | \le k! \, \epsilon < \frac{1}{2}\), we can therefore compute \(N (S; \preceq )\) by multiplying \(\vartheta \) by k! and rounding the result to the nearest integer. Since each step except for the volume calculation can be carried out in time polynomial in the bit length of \((S, \preceq )\), we conclude that approximating the volume of an order polytope is at least as hard as solving the #P-hard #LinearExtension problem. \(\square \)
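For instance, for \(S = \{1, 2, 3\}\) with \(1 \preceq 2\) as the only non-trivial relation, the linear extensions are the permutations (1, 2, 3), (1, 3, 2) and (3, 1, 2), so that \(N (S; \preceq ) = 3\); accordingly, \(\mathrm{Vol} (P(S; \preceq )) = \mathrm{Vol} (\{ \varvec{\xi } \in [0,1]^3 : \xi _1 \le \xi _2 \}) = \frac{1}{2} = N (S; \preceq ) / 3!\).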

Lemma 3 allows us to establish the first main result of this section. The result studies the expected recourse value \({\mathcal {Q}}(S, \preceq ) = \mathbb {E} [Q (\varvec{\tilde{\xi }}; S, \preceq )]\) of the second-stage problem

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {maximize}} &{} \displaystyle z \\ \displaystyle \text {subject to} &{} \displaystyle z \le \sum _{i, j \in S \, : \, i \preceq j} (\xi _i - \xi _j) y_{ij} \\ &{} \displaystyle \varvec{y} \in \mathbb {R}^{k \times k}_+, \,\, z \in [0, 1] \end{array} \end{aligned}$$
(10)

under the uniform distribution supported on \([0,1]^k\), where S and \(\preceq \) are fixed parameters. Note that problem (10) exhibits random recourse but does not involve any first-stage decisions.

Theorem 3

Computing the expected recourse value \({\mathcal {Q}}(S, \preceq )\) of the second-stage problem (10) for S and \(\preceq \) to within an absolute accuracy of \(\epsilon \) is strongly #P-hard whenever \(\epsilon < \frac{1}{2k!}\).

Proof

One readily verifies that for a fixed realization \(\varvec{\xi }\) of the random vector \(\varvec{\tilde{\xi }}\), the optimal value of the second-stage problem (10) is 0 if \(\varvec{\xi } \in P(S; \preceq )\) and 1 otherwise. We thus conclude that

$$\begin{aligned} {\mathcal {Q}}(S, \preceq ) \, = \, \mathbb {E} [Q (\varvec{\tilde{\xi }}; S, \preceq )] \, = \, \mathbb {E} \left[ \mathbb {I}_{[\varvec{\tilde{\xi }} \notin P (S; \preceq )]} \right] \, = \, 1 - \mathrm{Vol} (P(S; \preceq )), \end{aligned}$$

that is, computing the expected recourse value of (10) to within an absolute accuracy of \(\epsilon \) allows us to compute the volume of the associated order polytope to within the same accuracy. From Lemma 3 we know, however, that the latter problem is #P-hard.

It remains to show that computing the expected recourse value \({\mathcal {Q}}(S, \preceq )\) is strongly #P-hard. Assume, to the contrary, that there is a pseudo-polynomial time algorithm for computing \({\mathcal {Q}}(S, \preceq )\) to within an accuracy of \(\epsilon \), that is, an algorithm whose runtime can be bounded by a polynomial in the bit length of the description of the affine mappings \(\varvec{q}\), \(\varvec{T}\), \(\varvec{W}\) and \(\varvec{h}\) in (1) that correspond to problem (10), as well as the largest numeric value occurring in that description. Since the affine mappings in problem (10) only contain the coefficients \(-1\), 0 and 1, the runtime of such an algorithm would indeed be polynomial in the bit length of the description of problem (10). Thus, the existence of a pseudo-polynomial time algorithm for computing \({\mathcal {Q}}(S, \preceq )\) to within an accuracy of \(\epsilon \) would imply that the volume of the associated order polytope could be computed efficiently, which establishes the strong #P-hardness of the problem. \(\square \)
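The identity \({\mathcal {Q}}(S, \preceq ) = 1 - \mathrm{Vol} (P(S; \preceq ))\) is also easy to check by simulation. The following minimal sketch (ours, for illustration only) estimates \({\mathcal {Q}}(S, \preceq )\) by Monte Carlo sampling; the partial order is encoded as a list of pairs (i, j) with \(i \preceq j\), and the example uses the chain \(1 \preceq 2 \preceq \cdots \preceq 5\), whose order polytope has volume 1 / 5!.

```python
import numpy as np

def expected_recourse_mc(k, relations, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of Q(S, precedes) for a partial order given as pairs (i, j)."""
    rng = np.random.default_rng(seed)
    xi = rng.random((n_samples, k))
    inside = np.ones(n_samples, dtype=bool)
    for i, j in relations:                  # xi lies in P(S; precedes) iff xi_i <= xi_j for every pair
        inside &= xi[:, i - 1] <= xi[:, j - 1]
    return 1.0 - inside.mean()              # the optimal value of (10) is 0 inside the polytope, 1 outside

print(expected_recourse_mc(5, [(i, i + 1) for i in range(1, 5)]))   # close to 1 - 1/120
```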

Since the expected recourse value of problem (10) can be computed through a linear two-stage stochastic program (1) with random recourse, we conclude that approximating the optimal value of a linear two-stage stochastic program with random recourse to high accuracy is #P-hard as well.

We conclude this section by showing that determining an approximately optimal decision to problem (1) with random recourse is strongly NP-hard. This implies that unless the problems in NP admit an efficient solution scheme, there is no fully polynomial-time approximation scheme (FPTAS) for computing the maximum of problem (1) with random recourse, see [6].

To this end, we recall the definition of the strongly NP-hard Integer Feasibility Problem [6]:

Integer Feasibility Problem. Instance: a matrix \(\varvec{A} \in \mathbb {Z}^{m \times n}\) and a vector \(\varvec{b} \in \mathbb {Z}^m\). Question: is there a binary vector \(\varvec{y} \in \{ 0, 1 \}^n\) such that \(\varvec{A} \varvec{y} \le \varvec{b}\)?

The following lemma shows that we can round fractional solutions to the Integer Feasibility Problem if they are sufficiently close to a binary vector.

Lemma 4

Fix any \(\epsilon ^\prime < \min _i \{ (\sum _j | A_{ij} |)^{-1} \}\) that satisfies \(\epsilon ^\prime < \frac{1}{2}\). The Integer Feasibility Problem has an affirmative answer if and only if there is a vector \(\varvec{y} \in ([ 0, \epsilon ^\prime ] \cup [1 - \epsilon ^\prime , 1])^n\) such that \(\varvec{A} \varvec{y} \le \varvec{b}\).

Proof

The ‘only if’ direction is immediate. As for the ‘if’ direction, fix a fractional vector \(\varvec{y}\) as described in the statement and let \(\varvec{y}^\prime \) be the closest binary vector, that is, \(y_i^\prime = 1\) if \(y_i \ge 1 - \epsilon ^\prime \) and \(y_i^\prime = 0\) if \(y_i \le \epsilon ^\prime \). We then observe that \(\sum _j A_{ij} y_j^\prime \le \sum _j A_{ij} y_j + \sum _j |A_{ij}| \epsilon ^\prime < \sum _j A_{ij} y_j + 1 \le b_i + 1\) for all \(i = 1, \ldots , m\). Due to the integrality of \(\varvec{A}\), \(\varvec{y}^\prime \) and \(\varvec{b}\), we thus conclude that \(\varvec{A} \varvec{y}^\prime \le \varvec{b}\). \(\square \)
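The rounding step in this proof is straightforward to implement; the sketch below (ours, for illustration only) rounds a fractional vector to the nearest binary vector and checks the resulting inequalities.

```python
import numpy as np

def round_and_check(A, b, y):
    """Round y componentwise to the nearest binary vector y' and test A y' <= b (proof of Lemma 4)."""
    y_prime = (np.asarray(y) >= 0.5).astype(int)
    return bool(np.all(np.asarray(A) @ y_prime <= np.asarray(b))), y_prime
```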

In the following, we fix an instance \((\varvec{A}, \varvec{b})\) of the Integer Feasibility Problem and consider the following instance of problem (1) with random recourse:

$$\begin{aligned} \begin{array}{ll} \displaystyle {\text {minimize}} &{} \displaystyle x + \mathbb {E} \left[ Q (x, \varvec{\tilde{\xi }}) \right] \\ \displaystyle \text {subject to} &{} \displaystyle x \in \mathbb {R}, \end{array} \end{aligned}$$
(11)

where \(\varvec{\tilde{\xi }}\) follows a uniform distribution supported on \([0,1]^k\), and for a fixed realization \(\varvec{\xi }\) the recourse function \(Q (x, \varvec{\xi })\) amounts to the optimal value of the second-stage problem

$$\begin{aligned} \begin{array}{lll} \displaystyle {\text {minimize}} &{} \displaystyle \mathbf {e}^\top \varvec{y} \\ \displaystyle \text {subject to} &{} \displaystyle \varvec{y} \in \mathbb {R}^n_+, \,\, \varvec{\lambda } \in \mathbb {R}^m_+, \,\, x \ge \mathbf {e}^\top \varvec{y} \\ &{} \displaystyle \left. \begin{array}{l} \displaystyle y_i \ge \xi _i + (\varvec{b} - \varvec{A} \varvec{\xi })^\top \varvec{\lambda } \\ \displaystyle y_i \ge (1 - \xi _i) + (\varvec{b} - \varvec{A} \varvec{\xi })^\top \varvec{\lambda } \end{array} \,\, \right\} \,\, \displaystyle \forall i = 1, \ldots , m. \end{array} \end{aligned}$$

Note that this problem does not have relatively complete recourse, that is, for small values of x the second-stage problem may become infeasible with positive probability, which in turn implies that this choice of x is infeasible in the first-stage problem (11). Problems without relatively complete recourse are common in stochastic programming, see [2]. Also, the problem satisfies \(n = m = k\).

Theorem 4

Determining an \(\epsilon \)-optimal decision to the linear two-stage stochastic program (11) with random recourse is strongly NP-hard whenever \(\epsilon < \epsilon ^\prime / 4n\) for \(\epsilon ^\prime \) defined in Lemma 4.

Proof

If the second-stage problem is feasible for a fixed \(\varvec{\xi }\), then any optimal solution \((\varvec{y} (\varvec{\xi }), \varvec{\lambda } (\varvec{\xi }))\) to the second-stage problem satisfies \(y_i (\varvec{\xi }) = \max \{ \xi _i, 1 - \xi _i \}\), \(i = 1, \ldots , n\), if \(\varvec{A} \varvec{\xi } \le \varvec{b}\) and \(\varvec{y} (\varvec{\xi }) = \varvec{0}\) otherwise. Assuming that \(\{ \varvec{\xi } \in \mathbb {R}^n \, : \, \varvec{A} \varvec{\xi } \le \varvec{b} \} \ne \emptyset \), the optimal solution \(x^\star \) to problem (11) thus satisfies \(x^\star = \max \{ \sum _{i = 1}^m \max \{ \xi _i, 1 - \xi _i \} \, : \, \varvec{A} \varvec{\xi } \le \varvec{b} \}\). Lemma 4 then implies that the answer to the Integer Feasibility Problem is affirmative if and only if \(x^\star = n\), and it is negative if and only if \(x^\star < n - \epsilon ^\prime \).

Assume now that an \(\epsilon \)-optimal solution x to problem (11) could be found in polynomial time. In that case, we obtain

$$\begin{aligned}&\frac{\left| x^\star + \mathbb {E} \left[ Q (x^\star , \varvec{\tilde{\xi }}) \right] - x - \mathbb {E} \left[ Q (x, \varvec{\tilde{\xi }}) \right] \right| }{\max \left\{ 1, x^\star + \mathbb {E} \left[ Q (x^\star , \varvec{\tilde{\xi }}) \right] \right\} } \,\, \le \,\, \epsilon \\&\quad \Longleftrightarrow \left| x^\star - x \right| \,\, \le \,\, \epsilon \cdot \max \left\{ 1, x^\star + \mathbb {E} \left[ Q (x^\star , \varvec{\tilde{\xi }}) \right] \right\} \\&\quad \Longrightarrow \left| x^\star - x \right| \,\, \le \,\, 2n \epsilon \,\, < \,\, \frac{\epsilon ^\prime }{2}, \end{aligned}$$

where the equivalence follows from the fact that \(\mathbb {E} \left[ Q (x^\star , \varvec{\tilde{\xi }}) \right] = \mathbb {E} \left[ Q (x, \varvec{\tilde{\xi }}) \right] \) since \(x^\star \) and x are both feasible, the first implication holds since both \(x^\star \) and \(\mathbb {E} \left[ Q (x^\star , \varvec{\tilde{\xi }}) \right] \) do not exceed n by construction, and the last implication follows from the definition of \(\epsilon \). We thus conclude that \(x^\star = n\) whenever \(x > n - \frac{\epsilon ^\prime }{2}\) and \(x^\star < n - \epsilon ^\prime \) whenever \(x < n - \frac{\epsilon ^\prime }{2}\), that is, we could decide the NP-hard Integer Feasibility Problem in polynomial time. \(\square \)
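The resulting decision procedure is short; the sketch below (ours, for illustration only) assumes a hypothetical oracle `eps_optimal_x` that returns an \(\epsilon \)-optimal first-stage decision of problem (11) for the instance \((\varvec{A}, \varvec{b})\), and it presupposes that \(\{ \varvec{\xi } \in \mathbb {R}^n : \varvec{A} \varvec{\xi } \le \varvec{b} \} \ne \emptyset \).

```python
def integer_feasibility_via_oracle(A, b, eps_prime, eps_optimal_x):
    """Decide the Integer Feasibility Problem via an eps-optimal solution of (11) (proof of Theorem 4)."""
    n = A.shape[1]
    x = eps_optimal_x(A, b)
    return x > n - eps_prime / 2        # "yes" instances satisfy x > n - eps'/2, "no" instances do not
```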

4 Conclusion

Despite the hardness results established in this paper, there are many interesting two-stage stochastic programs that can be solved to within workable accuracy. In [12] it is argued, for instance, that two-stage stochastic programs with relatively complete recourse can be solved efficiently with the sample average approximation method to a moderate relative accuracy of 10 % or even 1 %. Recall that a two-stage stochastic program has relatively complete recourse if for every feasible first-stage decision and for every possible realization of the uncertain parameters there exists a feasible second-stage decision. We emphasize, however, that problem (1) was not assumed to have relatively complete recourse.

Modern methods of quasi-Monte Carlo (QMC) integration provide other effective tools for solving two-stage stochastic programs. Specifically, it is known that integration via QMC algorithms overcomes the curse of dimensionality in certain mixed Sobolev spaces [13]. This means that the number of function evaluations needed to guarantee a relative accuracy of \(\epsilon \) across all integrands in the underlying Sobolev space is at most polynomial in \(\epsilon ^{-1}\) and the dimension of \({\varvec{\xi }}\). This result is consistent with the complexity theorems discussed in this note because the optimal value functions \(Q({\varvec{x}},{\varvec{\xi }})\) of linear two-stage stochastic programs fail to belong to the relevant Sobolev spaces. Moreover, the critical accuracies given in Theorems 1 and 3, beyond which the evaluation of the expected value functions becomes intractable, are exponentially small in the input sizes of the corresponding stochastic programs. Under relatively complete recourse and other technical conditions, it can be shown, however, that \(Q({\varvec{x}},{\varvec{\xi }})\) can be approximated by appropriate Sobolev integrands, which in turn implies that \(Q({\varvec{x}},{\varvec{\xi }})\) can be integrated efficiently via QMC algorithms with attractive convergence guarantees independent of the dimension of \({\varvec{\xi }}\) [8].