1 Introduction

Assume we want to approximate an unknown real- or complex-valued function on a set D based on a finite number n of function values that may be evaluated at randomly and adaptively chosen points. In general, these function values do not determine the function uniquely, and so we cannot expect our approximation to be exact. We make an approximation error, which we measure in the space \(L^2(D,{\mathcal {A}},\mu )\) of square-integrable functions on D with respect to an arbitrary measure \(\mu \). In order to make any meaningful statement about this error, we need additional a priori knowledge of the unknown function. Here, we assume structural knowledge of the form that the function is contained in the unit ball \(F_\circ \) of a Hilbert space F that is compactly embedded in \(L^2(D,{\mathcal {A}},\mu )\). For instance, it may be bounded with respect to some Sobolev norm on a compact manifold D. The error of the randomized algorithm or Monte Carlo method \(A_n\) is the quantity

$$\begin{aligned} e^{\mathrm{ran}}(A_n) = \sup \limits _{f\in F_\circ } \left( {\mathbb {E}}\int _D \left| f-A_n(f)\right| ^2 ~\mathrm{d}\mu \right) ^{1/2} . \end{aligned}$$

The error of an optimal randomized algorithm that asks for at most n function values is denoted by

$$\begin{aligned} e(n)=\inf _{A_n} e^{\mathrm{ran}}(A_n). \end{aligned}$$

While it seems impossible to describe such optimal randomized algorithms explicitly, the optimal deterministic algorithm evaluating n arbitrary linear functionals is well known. It is given by the orthogonal projection \(P_n\) onto the span of the first n functions in the singular value decomposition of the embedding \(T:F\hookrightarrow L^2\). Its worst case error is the \((n+1)\)-st largest singular value or approximation number \(\sigma (n+1)\) of that embedding, that is, the square root of the \((n+1)\)-st largest eigenvalue of the operator \(W=T^*T\).

The algorithm \(P_n\) asks for the first n coefficients of f with respect to the singular value decomposition of the embedding T. In most applications, however, it is not possible to sample these coefficients, and we may only make use of function values. This leads to the following questions:

  • How does the error e(n) of optimal randomized algorithms using n function values compare to the error \(\sigma (n+1)\) of the orthogonal projection \(P_n\)?

  • If possible, can we find a randomized algorithm \(A_n\) whose error is close to \(\sigma (n+1)\)?

These are not new questions in the fields of Monte Carlo methods and information-based complexity. There are several results for particular spaces F where e(n) behaves similarly to the error of \(P_n\). See, for instance, Traub, Wasilkowski and Woźniakowski [20], Mathé [13] and Heinrich [5]. Results by Cohen, Davenport and Leviatan [2] and Cohen and Migliorati [3] contain a similar message, see Remark 3. In 1992, Novak [16] proved that

$$\begin{aligned} e(n) \ge \frac{\sigma (2n)}{\sqrt{2}} \end{aligned}$$

holds for arbitrary spaces F. This means that optimal randomized algorithms using n function values are never much better than the orthogonal projection \(P_n\). On the other hand, Wasilkowski and Woźniakowski [23] proved in 2006 that

$$\begin{aligned} \sigma (n) \preccurlyeq n^{-p} (\ln n)^q \quad \Rightarrow \quad e(n) \preccurlyeq n^{-p} (\ln n)^q (\ln \ln n)^{p+1/2} \end{aligned}$$

for all \(p>0\) and \(q\ge 0\). Here, we write \(x_n\preccurlyeq y_n\) if there is some \(C>0\) and \(n_0 \in {\mathbb {N}}\) such that \(x_n\le C y_n\) for all \(n \ge n_0\). If \(x_n \preccurlyeq y_n\) and \(y_n \preccurlyeq x_n\), we write \(x_n\asymp y_n\). This means that optimal randomized algorithms using function values are always almost as good as the orthogonal projection \(P_n\). The proof of this result is constructive. It raises the question whether the additional power of the double logarithm is necessary or not. In fact, Novak and Woźniakowski showed in 2012 that this is not the case for \(q=0\); that is,

$$\begin{aligned} \sigma (n) \preccurlyeq n^{-p} \quad \Rightarrow \quad e(n) \preccurlyeq n^{-p} \end{aligned}$$

for all \(p>0\). The proof of this result, however, is not constructive. Both proofs can be found in their monograph [18, Chapter 22]. In the present paper, we prove the corresponding statement for \(q>0\). More generally, we consider upper bounds with the following property. We say that the sequence \(L:{\mathbb {N}}\rightarrow (0,\infty )\) is regularly decreasing if there is some \(r\ge 0\) such that

$$\begin{aligned} L(m) \ge 2^{-r} L(n) \quad \hbox {whenever}\quad n \le m\le 2n. \end{aligned}$$
(1)

If there is some \(n_0\in {\mathbb {N}}\) such that L(n) is nonincreasing for \(n\ge n_0\), this is equivalent to \(L(2n) \asymp L(n)\). Property (1) is satisfied if \(L(n)n^r\) is nondecreasing. The sequence

$$\begin{aligned} L(n) = n^{-p} \left( 1+\log _2 n\right) ^q \end{aligned}$$

is regularly decreasing for any \(p>0\) and \(q\ge 0\). It satisfies (1) for \(r=p\), since \(m^{-p}\left( 1+\log _2 m\right) ^q \ge (2n)^{-p}\left( 1+\log _2 n\right) ^q = 2^{-p} L(n)\) whenever \(n \le m\le 2n\). Another example is

$$\begin{aligned} L(n) = \left( 1+\log _2 n\right) ^{-q} \end{aligned}$$

for any \(q>0\), which satisfies (1) for \(r=q\). A sequence is not regularly decreasing if it decays exponentially or has large jumps. We obtain the following result.

Theorem 1

If \(L:{\mathbb {N}}\rightarrow (0,\infty )\) is regularly decreasing, then

$$\begin{aligned} \sigma (n) \preccurlyeq L(n) \quad \Rightarrow \quad e(n) \preccurlyeq L(n). \end{aligned}$$

This solves Open Problem 99 as posed by Novak and Woźniakowski in [18]. One problem with this result is that it does not provide any algorithm; it only states the existence of good algorithms. Another problem is that the error bound is only asymptotic. The preasymptotic behavior of e(n) may, however, be very different from its asymptotic behavior. This is typically the case if the set D is a domain in high-dimensional Euclidean space.

These problems are tackled by Theorem 2. In Sect. 3, we provide a randomized algorithm \(A_n^r\) for any \(n\in {\mathbb {N}}\) and \(r\ge 0\). This algorithm is a refinement of the algorithm proposed by Wasilkowski and Woźniakowski [23]. It asks for at most n function values and satisfies the following error bound.

Theorem 2

Assume that \(L:{\mathbb {N}}\rightarrow (0,\infty )\) satisfies (1), and let \(c_r=2^{r\lceil 2r+3\rceil +1}\).

$$\begin{aligned}&\text {If}\quad&\sigma (n)&\le L(n) \quad&\text {for all}\qquad&n\in {\mathbb {N}},\\&\text {then}\quad&e^{\mathrm{ran}}(A_n^r)&\le c_r\, L(n) \quad&\text {for all}\qquad&n\in {\mathbb {N}}. \end{aligned}$$

The constant \(c_r\) only depends on the order r. If D is a domain in d-dimensional Euclidean space, this order is often independent of d or even strictly decreasing with d. See Sect. 3 for the definition of this algorithm and several examples.

We find that the error of randomized algorithms using n function values of the target function can get very close to the error of the orthogonal projection \(P_n\) and that this is achieved by the algorithm \(A_n^r\).

In Sect. 4, we use these algorithms for the integration of functions f in F with respect to probability measures \(\mu \). We simply exploit the relation

$$\begin{aligned} \int _D f~\mathrm{d}\mu = \int _D A_n^r f~\mathrm{d}\mu + \int _D (f- A_n^r f)~\mathrm{d}\mu . \end{aligned}$$

We compute the integral of \(A_n^r f\) exactly and use a direct simulation to approximate the integral of \(f- A_n^r f\), which has a small variance. This technique is called variance reduction and is widely used for Monte Carlo integration. See Heinrich [5, Theorem 5.3] for another example. Even if D is a high-dimensional domain, the resulting method can significantly improve on the error of a sole direct simulation already for a relatively small number of samples.

These results are based on the a priori knowledge that our target function is contained in the unit ball of the space F. In Sect. 5, we discuss how this assumption can be weakened.

2 The Setting

Let \((D,{\mathcal {A}},\mu )\) be a measure space and \({\mathbb {K}}\in \left\{ {\mathbb {R}},{\mathbb {C}}\right\} \). The space \(L^2=L^2(D,{\mathcal {A}},\mu )\) is the space of square-integrable \({\mathbb {K}}\)-valued functions on \((D,{\mathcal {A}},\mu )\), equipped with the scalar product

$$\begin{aligned} \left\langle f,g\right\rangle _2=\int _D f\cdot {\overline{g}}~\mathrm{d}\mu . \end{aligned}$$

Let F be a second Hilbert space and \(F_\circ \) be its unit ball. We assume that F is a subset of \(L^2\) and that

$$\begin{aligned} T: F \rightarrow L^2,\quad Tf=f, \end{aligned}$$

is compact. With the embedding T, we associate the positive semi-definite and compact operator \(W=T^*T\) on the space F. By the spectral theorem, there is a (possibly finite) orthogonal basis \({\mathcal {B}}=\left\{ b_1,b_2,\dots \right\} \) of F consisting of eigenvectors of W whose corresponding eigenvalues \((\lambda _n)_{n\in {\mathbb {N}}}\) form a nonincreasing sequence that tends to zero. Let N be the cardinality of \({\mathcal {B}}\). One can easily check that \({\mathcal {B}}\) is orthogonal in \(L^2\), as well. We take the eigenvectors \(b_n\) to be normalized in \(L^2\). We call this basis the singular value decomposition of T. The number \(\sigma (n)=\sqrt{\lambda _n}\) is called its n-th singular value or approximation number.

The worst case error of a deterministic algorithm \(A:F\rightarrow L^2\) is the quantity

$$\begin{aligned} e^{\mathrm{det}}(A)= \sup \limits _{f\in F_\circ } \left\| f- A(f)\right\| _2. \end{aligned}$$

The worst case error of a measurable randomized algorithm

$$\begin{aligned} A:F\times \Omega \rightarrow L^2, \quad (f,\omega )\rightarrow A^{\omega }(f), \end{aligned}$$

where \(\Omega \) is the sample space of some probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\), is the quantity

$$\begin{aligned} e^{\mathrm{ran}}(A)= \sup \limits _{f\in F_\circ } \left( {\mathbb {E}}_{\omega } \left\| f- A^{\omega }(f)\right\| _2^2\right) ^{1/2}. \end{aligned}$$

We usually skip the \(\omega \) in the notation. See Novak and Woźniakowski [17, Chapter 4] for a precise definition of such algorithms. We furthermore define the following minimal worst case errors within certain classes of algorithms.

The quantity

$$\begin{aligned} e^{\mathrm{det}}(n,T,\Lambda ^{\mathrm{all}}) = \inf \limits _{A\in {\mathcal {A}}_n^\mathrm{det, all}} e^{\mathrm{det}}(A) \end{aligned}$$

is the minimal worst case error within the class \({\mathcal {A}}_n^\mathrm{det, all}\) of all deterministic algorithms evaluating at most n linear functionals of the input function.

The quantity

$$\begin{aligned} e^{\mathrm{ran}}(n,T,\Lambda ^{\mathrm{all}}) = \inf \limits _{A\in {\mathcal {A}}_n^\mathrm{ran, all}} e^{\mathrm{ran}}(A) \end{aligned}$$

is the minimal worst case error within the class \({\mathcal {A}}_n^\mathrm{ran, all}\) of all measurable randomized algorithms evaluating at most n linear functionals.

The quantity

$$\begin{aligned} e^{\mathrm{det}}(n,T,\Lambda ^{\mathrm{std}}) = \inf \limits _{A\in {\mathcal {A}}_n^\mathrm{det, std}} e^{\mathrm{det}}(A) \end{aligned}$$

is the minimal worst case error within the class \({\mathcal {A}}_n^\mathrm{det, std}\) of all deterministic algorithms evaluating at most n function values of the input function.

The quantity

$$\begin{aligned} e(n) = e^{\mathrm{ran}}(n,T,\Lambda ^{\mathrm{std}}) = \inf \limits _{A\in {\mathcal {A}}_n^\mathrm{ran, std}} e^{\mathrm{ran}}(A) \end{aligned}$$

finally is the minimal worst case error within the class \({\mathcal {A}}_n^\mathrm{ran, std}\) of all measurable randomized algorithms evaluating at most n function values. This is the error to be analyzed. It was proved by Novak [16] that

$$\begin{aligned} e^{\mathrm{ran}}(n,T,\Lambda ^{\mathrm{std}}) \ge e^{\mathrm{ran}}(n,T,\Lambda ^{\mathrm{all}}) \ge \frac{1}{\sqrt{2}} e^{\mathrm{det}}(2n-1,T,\Lambda ^{\mathrm{all}}). \end{aligned}$$
(2)

The error \(e^{\mathrm{det}}(n,T,\Lambda ^{\mathrm{all}})\) is known to coincide with \(\sigma (n+1)\). We refer to Novak and Woźniakowski [17, Section 4.2.3]. The infimum is attained for the nonadaptive linear algorithm

$$\begin{aligned} P_n: F\rightarrow L^2, \quad P_n(f)=\sum _{k=1}^{n\wedge N} \left\langle f,b_k\right\rangle _2 b_k .\end{aligned}$$

Here, \(\log _2 x\) denotes the logarithm of \(x>0\) in base 2, whereas \(\ln x\) denotes its natural logarithm. The minimum of \(a\in {\mathbb {R}}\) and \(b\in {\mathbb {R}}\) is denoted by \(a\wedge b\). Recall that we write \(x_n\preccurlyeq y_n\) if there is a positive constant C and some \(n_0\in {\mathbb {N}}\) such that \(x_n\le C y_n\) for all \(n \ge n_0\). We write \(x_n\asymp y_n\) if \(x_n\preccurlyeq y_n\) and \(y_n\preccurlyeq x_n\).

3 A Method for Multivariate Approximation

Let us keep the notation of the previous section. For any \(m\in {\mathbb {N}}\) with \(m\le N\), we define

$$\begin{aligned} u_m=\frac{1}{m} \sum \limits _{j=1}^m \left| b_j\right| ^2. \end{aligned}$$

This is a probability density with respect to \(\mu \). We consider the probability measure

$$\begin{aligned} \mu _m: {\mathcal {A}}\rightarrow [0,1], \quad \mu _m(E)=\int _E u_m ~\mathrm{d}\mu \end{aligned}$$

on \((D,{\mathcal {A}})\). In view of optimal algorithms in \({\mathcal {A}}_n^\mathrm{det, all}\), we introduce the following family of algorithms in \({\mathcal {A}}_n^\mathrm{ran, std}\).

Algorithm

Let \({\varvec{n}}=\left( n_1,n_2,\dots \right) \) and \({\varvec{m}}=\left( m_1,m_2,\dots \right) \) be sequences of nonnegative integers such that \({\varvec{m}}\) is nondecreasing and bounded above by \(N=\left| {\mathcal {B}}\right| \). We define the algorithms \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}: L^2\rightarrow L^2\) for \(k\in {\mathbb {N}}_0\) as follows.

  • Set \(M^{(0)}_{{\varvec{n}},{\varvec{m}}} =0\).

  • For \(k\ge 1\) and \(f\in L^2\), let \(X_1^{(k)},\ldots ,X_{n_k}^{(k)}\) be random variables with distribution \(\mu _{m_k}\) that are independent of each other and of all random variables used in previous steps, and set

    $$\begin{aligned} M^{(k)}_{{\varvec{n}},{\varvec{m}}} f = M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f + \sum _{j=1}^{m_k} \left[ \frac{1}{n_k}\sum _{i=1}^{n_k} \frac{\left( f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f\right) \overline{b_j}}{u_{m_k}} \left( X_i^{(k)}\right) \right] b_j. \end{aligned}$$

Note that the expectation of each term in the inner sum is \(\langle f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f,b_j\rangle _2\). The algorithm \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}\) hence approximates f in k steps. In the first step, \(n_1\) function values of f are used for standard Monte Carlo type approximations of its \(m_1\) leading coefficients with respect to the orthonormal system \({\mathcal {B}}\). In the second step, \(n_2\) values of the residual are used for standard Monte Carlo type approximations of its \(m_2\) leading coefficients, and so on. In total, \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}\) uses \(\sum _{j=1}^k n_j\) function values of f. The total number of approximated coefficients is \(m_k\).
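To make this construction concrete, the following Python sketch mirrors the stepwise procedure. It is purely illustrative and not part of the analysis; the callables `f`, `basis`, `sample_mu_m`, and `u` are assumed to be given and vectorized, and all names are chosen for illustration only.

```python
import numpy as np

def stepwise_approximation(f, basis, sample_mu_m, u, n_seq, m_seq):
    """Sketch of M^{(k)}_{n,m}: returns the coefficients of the approximation
    with respect to b_1, ..., b_{m_k}.

    f            -- the target function, evaluated pointwise on arrays of nodes
    basis        -- list of callables b_1, ..., b_N, orthonormal in L^2(mu)
    sample_mu_m  -- sample_mu_m(m, size) draws `size` nodes from mu_m
    u            -- u(m, x) evaluates the density u_m at the nodes x
    n_seq, m_seq -- the sequences (n_j) and (m_j), both of length k
    """
    coeff = np.zeros(max(m_seq), dtype=complex)          # coefficients of M^{(0)} f = 0

    def residual(x):                                     # (f - M^{(j-1)} f)(x)
        return f(x) - sum(c * b(x) for c, b in zip(coeff, basis))

    for n_j, m_j in zip(n_seq, m_seq):
        if n_j == 0:
            continue
        x = sample_mu_m(m_j, n_j)                        # nodes X_1, ..., X_{n_j} ~ mu_{m_j}
        r = residual(x) / u(m_j, x)                      # importance-weighted residual
        for j in range(m_j):                             # Monte Carlo update of coefficient j
            coeff[j] += np.mean(r * np.conj(basis[j](x)))
    return coeff
```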

Algorithms of this type have already been studied by Wasilkowski and Woźniakowski in [23]. The simple but crucial difference between the above algorithms and those of [23] is the variable number \(n_j\) of nodes used in each approximation step. Note that this stepwise approximation is similar to several multilevel Monte Carlo methods as introduced by Heinrich in 1998, see [4].

The benefit from the kth step is controlled by \(m_k\) and \(n_k\) as follows.

Lemma 1

For all nondecreasing sequences \({\varvec{n}}\) and \({\varvec{m}}\) of nonnegative integers and all \(k\in {\mathbb {N}}\), we have

$$\begin{aligned} \sigma (m_k+1)^2 \le e^{\mathrm{ran}}\left( M^{(k)}_{{\varvec{n}},{\varvec{m}}}\right) ^2 \le \frac{m_k}{n_k} e^{\mathrm{ran}}\left( M^{(k-1)}_{{\varvec{n}},{\varvec{m}}}\right) ^2 + \sigma (m_k+1)^2. \end{aligned}$$

Lemma 1 corresponds to Theorem 22.14 by Novak and Woźniakowski [18]. The setting of the present paper is slightly more general, but the proof is the same. Since Lemma 1 is essential for the following investigation, we present its proof.

Proof

The lower bound holds true, since \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}(b_{m_k+1})\) is perpendicular to \(b_{m_k+1}\). To prove the upper bound, let \(f\in F_\circ \). By \({\mathbb {E}}_{I}\) we denote the expectation with respect to the random variables \(X_i^{(j)}\) for \(j\in I\) and \(i=1\ldots n_j\). We need to estimate

$$\begin{aligned} {\mathbb {E}}_{\left\{ 1\ldots k\right\} } \left\| f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f\right\| _2^2 = \sum _{j=1}^N {\mathbb {E}}_{\left\{ 1\ldots k\right\} } \left| \left\langle f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f,b_j\right\rangle _2\right| ^2 . \end{aligned}$$

On the one hand, we have

$$\begin{aligned}&\sum _{j=m_k+1}^N {\mathbb {E}}_{\left\{ 1\ldots k\right\} } \left| \left\langle f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f,b_j\right\rangle _2\right| ^2 = \sum _{j=m_k+1}^N \left| \left\langle f,b_j\right\rangle _2\right| ^2 = \sum _{j=m_k+1}^N \left| \left\langle f,W b_j\right\rangle _F\right| ^2 \nonumber \\&\quad = \sum _{j=m_k+1}^N \left| \left\langle f,\sigma (j) b_j\right\rangle _F\right| ^2 \sigma (j)^2 \ \le \ \sigma (m_k+1)^2 \left\| f\right\| _F^2 \ \le \ \sigma (m_k+1)^2. \end{aligned}$$

We use the abbreviation

$$\begin{aligned} g_j = \frac{\left( f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}}f\right) \overline{b_j}}{u_{m_k}} \end{aligned}$$

for each \(j\le m_k\). Note that \(u_{m_k}(x)=0\) implies \(b_j(x)=0\), and we set \(g_j(x)=0\) in this case. We then obtain, on the other hand, for each \(j\le m_k\), that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}_{\left\{ k\right\} } \left| \left\langle f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f,b_j\right\rangle _2\right| ^2 = {\mathbb {E}}_{\left\{ k\right\} } \left| \left\langle f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}}f,b_j\right\rangle _2 - \frac{1}{n_k}\sum _{i=1}^{n_k} g_j\left( X_i^{(k)}\right) \right| ^2\\&\quad = {\mathbb {E}}_{\left\{ k\right\} } \left| \int _D g_j(x) ~\mathrm{d}\mu _{m_k}(x) - \frac{1}{n_k}\sum _{i=1}^{n_k} g_j\left( X_i^{(k)}\right) \right| ^2\\&\quad \le \frac{1}{n_k} \int _D \left| g_j(x)\right| ^2 ~\mathrm{d}\mu _{m_k}(x) = \frac{1}{n_k} \int _D \left| g_j(x)\right| ^2 u_{m_k}(x) ~\mathrm{d}\mu (x), \end{aligned} \end{aligned}$$

and hence

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^{m_k} {\mathbb {E}}_{\left\{ k\right\} } \left| \left\langle f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f,b_j\right\rangle _2\right| ^2 \le \frac{1}{n_k} \int _D \sum _{j=1}^{m_k} \left| g_j(x)\right| ^2 u_{m_k}(x)~\mathrm{d}\mu (x)\\&\quad = \frac{m_k}{n_k} \int _D \left| \left( f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f\right) (x)\right| ^2 ~\mathrm{d}\mu (x) = \frac{m_k}{n_k} \left\| f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f\right\| _2^2 .\end{aligned} \end{aligned}$$

With Fubini’s theorem, this yields that

$$\begin{aligned} {\mathbb {E}}_{\left\{ 1\ldots k\right\} } \left\| f-M^{(k)}_{{\varvec{n}},{\varvec{m}}}f\right\| _2^2 \ \le \ \frac{m_k}{n_k} {\mathbb {E}}_{\left\{ 1\ldots k-1\right\} } \left\| f-M^{(k-1)}_{{\varvec{n}},{\varvec{m}}} f\right\| _2^2 + \sigma (m_k+1)^2, \end{aligned}$$

and the upper bound is proved. \(\square \)

We now define the algorithm of Theorem 2. We consider such algorithms \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}\), where the number of nodes \(n_j\) is doubled in each step and the ratio \(\frac{m_j}{n_j}\) of approximated coefficients to computed function values is constant, say \(2^{-\ell }\). This way, the total number \(m_k\) of approximated coefficients is linear in the total number n of computed function values. This is necessary to achieve an error of the same order as with optimal algorithms using arbitrary linear information, which compute exactly the first n coefficients. The algorithms by Wasilkowski and Woźniakowski [23] do not have this property. If the ratio is small enough, Lemma 1 ensures that \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}\) inherits optimal error bounds from \(M^{(k-1)}_{{\varvec{n}},{\varvec{m}}}\).

Algorithm

Given \(r\ge 0\), we set \(\ell _r=\lceil 2r+1 \rceil \) and define the sequences \({\varvec{n}}\) and \({\varvec{m}}\) by

$$\begin{aligned} n_j=\left\{ \begin{array}{lr} 0, &{} \quad \text {for }\; j\le \ell _r,\\ 2^{j-1}, &{} \quad \text {for }\; j> \ell _r, \end{array}\right. \quad \quad m_j=\left\{ \begin{array}{lr} 0, &{} \quad \text {for }\; j\le \ell _r,\\ 2^{j-1-\ell _r}\wedge N, &{} \quad \text {for }\; j> \ell _r. \end{array}\right. \end{aligned}$$

For \(n\in {\mathbb {N}}\), we choose \(k\in {\mathbb {N}}_0\) such that \(2^k \le n < 2^{k+1}\) and set

$$\begin{aligned} A_n^{r} = M^{(k)}_{{\varvec{n}},{\varvec{m}}}. \end{aligned}$$

The algorithm \(A_n^r\) clearly uses fewer than n function evaluations.
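The step sizes of \(A_n^r\) are easy to generate. The following short Python sketch (with illustrative names, not taken from the paper) computes the sequences \({\varvec{n}}\) and \({\varvec{m}}\) for given n and r and checks that fewer than n function values are used.

```python
import math

def schedule(n, r, N=math.inf):
    """Step sizes (n_j, m_j) of A_n^r for j = 1, ..., k, where 2^k <= n < 2^{k+1}."""
    ell = math.ceil(2 * r + 1)
    k = n.bit_length() - 1                 # the largest k with 2^k <= n
    n_seq = [0 if j <= ell else 2 ** (j - 1) for j in range(1, k + 1)]
    m_seq = [0 if j <= ell else min(2 ** (j - 1 - ell), N) for j in range(1, k + 1)]
    return n_seq, m_seq

n_seq, m_seq = schedule(n=1000, r=1.0)
assert sum(n_seq) < 1000                   # A_n^r uses fewer than n function values
```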

Proof of Theorem 2

Let \({\varvec{n}}\) and \({\varvec{m}}\) be defined as above and \(k\in {\mathbb {N}}_0\). We first show that

$$\begin{aligned} e^{\mathrm{ran}}\left( M^{(k)}_{{\varvec{n}},{\varvec{m}}}\right) \le {\bar{c}}_r\, L(2^k), \end{aligned}$$
(3)

where \({\bar{c}}_r=2^{r(\ell _r+1)+1}\). We use induction on k. If \(k\le \ell _r\), we have \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}=0\) and

$$\begin{aligned} e^{\mathrm{ran}}\left( M^{(k)}_{{\varvec{n}},{\varvec{m}}}\right) \,=\, \sigma (1) \,\le \, L(1) \,\le \, 2^{rk} L(2^k) \,\le \, {\bar{c}}_r\, L(2^k). \end{aligned}$$

For \(k>\ell _r\), we inductively obtain with Lemma 1 that

$$\begin{aligned} e^{\mathrm{ran}}\left( M^{(k)}_{{\varvec{n}},{\varvec{m}}}\right) ^2&\le 2^{-\ell _r} e^{\mathrm{ran}}\left( M^{(k-1)}_{{\varvec{n}},{\varvec{m}}}\right) ^2 + \sigma (m_k+1)^2\\&\le 2^{-\ell _r} {\bar{c}}_r^2\, L\left( 2^{k-1}\right) ^2 + L\left( 2^{k-\ell _r-1}\right) ^2\\&\le 2^{-\ell _r}{\bar{c}}_r^2\, 2^{2r} L(2^k)^2 + 2^{2r(\ell _r+1)} L(2^k)^2\\&= \left( 2^{2r-\ell _r} + 2^{-2}\right) {\bar{c}}_r^2\, L(2^k)^2, \end{aligned}$$

where the term in parentheses is smaller than 1, since \(\ell _r\ge 2r+1\) implies \(2^{2r-\ell _r}\le 1/2\). This shows (3). For \(n\in {\mathbb {N}}\), we choose \(k\in {\mathbb {N}}_0\) with \(2^k \le n < 2^{k+1}\) and obtain

$$\begin{aligned} e^{\mathrm{ran}}\left( A_n^r\right) \,=\, e^{\mathrm{ran}}\left( M^{(k)}_{{\varvec{n}},{\varvec{m}}}\right) \,\le \, {\bar{c}}_r\, L(2^k) \,\le \, 2^r {\bar{c}}_r\, L(n) \,=\, c_r\, L(n), \end{aligned}$$

as was to be proved. \(\square \)

Note that Theorem 1 is a direct consequence of Theorem 2. Of course, the best possible upper bound for \(\sigma (n)\) is \(\sigma (n)\) itself. If we combine Theorem 1 for \(L(n)=\sigma (n)\) with Novak’s lower bound (2), we obtain the following statement on the order of convergence.

Corollary 1

Assume that \(\sigma (2n) \asymp \sigma (n)\). Then

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \asymp e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) . \end{aligned}$$

Note that optimal deterministic algorithms based on function values may perform much worse; that is, the error \(e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \) may be significantly larger, as shown by Hinrichs, Novak and Vybíral [7]; see also Novak and Woźniakowski [18, Section 26.6.1]. It is a very interesting question whether the condition on the decay of the singular values can be relaxed. Note that we use this condition to prove both the upper and the lower bound of Corollary 1. On the other hand, if we combine Theorem 2 for \(L(n)=\sigma (n)\) with the lower bound (2), we obtain the following optimality result.

Corollary 2

Assume that there is some \(r\ge 0\) such that \(\sigma (2n) \ge 2^{-r} \sigma (n)\) holds for all \(n\in {\mathbb {N}}\). We set \({\tilde{c}}_r=2^{r\lceil 2r+4\rceil +3/2}\). Then we have

$$\begin{aligned} e^{\mathrm{ran}}\left( A_n^r\right) \le {\tilde{c}}_r\, e^{\mathrm{ran}}\left( n,T, \Lambda ^{\mathrm{std}}\right) \qquad \text {for all} \quad n\in {\mathbb {N}}. \end{aligned}$$

Let us now consider some examples. In each example, we first discuss the order of convergence of \(e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \). We then turn to explicit upper bounds.

Example 1

(Approximation of mixed order Sobolev functions on the torus). Let D be the d-dimensional torus \({\mathbb {T}}^d\), represented by the unit cube \([0,1]^d\), where opposite faces are identified. Let \({\mathcal {A}}\) be the Borel \(\sigma \)-algebra on \({\mathbb {T}}^d\) and \(\mu \) the Lebesgue measure. Let F be the Sobolev space of complex-valued functions on D with dominating mixed smoothness \(r\in {\mathbb {N}}\), equipped with the scalar product

$$\begin{aligned} \left\langle f,g\right\rangle _F = \sum _{\left\| \alpha \right\| _\infty \le r} \left\langle D^\alpha f,D^\alpha g\right\rangle _2 . \end{aligned}$$
(4)

We know that

$$\begin{aligned} e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp n^{-r}\ln ^{r(d-1)} n. \end{aligned}$$

This classical result goes back to Babenko [1] and Mityagin [14]. Corollary 1 yields

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \asymp n^{-r}\ln ^{r(d-1)} n. \end{aligned}$$

This is a new result. The optimal order is achieved by the algorithm \(A_n^r\) and the author does not know of any other algorithm with this property. It is still an open problem whether the same rate can be achieved with deterministic algorithms based on function values. So far, it is only known that

$$\begin{aligned} n^{-r}\ln ^{r(d-1)}n \preccurlyeq e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \preccurlyeq n^{-r}\ln ^{(r+1/2)(d-1)} n. \end{aligned}$$

The upper bound is achieved by Smolyak’s algorithm, see Sickel and Ullrich [19].

We now turn to explicit estimates. We know that there is some \(C_{r,d}>0\) such that

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \le C_{r,d}\, n^{-r}\ln ^{r(d-1)} n \quad \text {for all}\quad n\ge 2. \end{aligned}$$
(5)

This upper bound is optimal as n tends to infinity. However, it does not describe the error numbers well for small values of n. Simple calculus shows that the right-hand side in (5) is increasing for \(n\le e^{d-1}\). The error numbers, on the other hand, are decreasing. Moreover, the right-hand side attains its minimum for \(n=2\) if restricted to \(n\le (d-1)^{d-1}\) and is hence larger than \(e^{\mathrm{ran}}\left( 2,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \). This means that the trivial upper bound

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \le e^{\mathrm{ran}}\left( 2,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \quad \text {for all}\quad n\ge 2 \end{aligned}$$

is better than (5) for all \(n\le (d-1)^{d-1}\) and regardless of the value of \(C_{r,d}\). For these reasons, it is important to consider different error bounds, if the dimension d is large. See also the paper of Kühn, Sickel, and Ullrich [12]. Based on this paper, it is shown by the author [9] that

$$\begin{aligned} \sigma (n) \le \left( 2/n\right) ^p \quad \text {for all}\quad n\in {\mathbb {N}}, \quad \text {if}\quad p=\frac{r}{2+\ln d}. \end{aligned}$$

We obtain with Theorem 2 that

$$\begin{aligned} e^{\mathrm{ran}}\left( A^p_n\right) \le 2\cdot \left( 2^{\lceil 2p+4\rceil }/n\right) ^p \quad \text {for}\quad n\in {\mathbb {N}}. \end{aligned}$$
(6)

Example 2

(Approximation of mixed order Sobolev functions on the cube). Now, let D be the d-dimensional unit cube \([0,1]^d\) with the induced topology, and let \({\mathcal {A}}\) be the Borel \(\sigma \)-algebra and \(\mu \) the Lebesgue measure. Let F be the Sobolev space of complex-valued functions on \([0,1]^d\) with dominating mixed smoothness \(r\in {\mathbb {N}}\), equipped with the scalar product (4). Just like on the torus, we have

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \asymp e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp n^{-r}\ln ^{r(d-1)}n, \end{aligned}$$

where the optimal rate is achieved by \(A_n^r\). Like in Example 1, the corresponding upper bounds are bad for \(n\le (d-1)^{d-1}\). In this range, we need different estimates for the approximation numbers. It is known that

$$\begin{aligned} \sigma (n) \le (2/n)^{p} \quad \text {for}\quad n\in {\mathbb {N}}, \quad \text {if}\quad p=\frac{1.1929}{2+\ln d}. \end{aligned}$$

This estimate cannot be improved significantly for \(n\le 2^d\), even if \(r=\infty \). See the author’s paper [9] for more details. With Theorem 2, we obtain the upper bound

$$\begin{aligned} e^{\mathrm{ran}}\left( A^p_n\right) \le 2\cdot (2^6/n)^{p} \quad \text {for}\quad n\in {\mathbb {N}}. \end{aligned}$$

Example 3

(Approximation in tensor product spaces). This example is more general than the previous ones. By \(H_1\otimes H_2\) we denote the tensor product of two Hilbert spaces \(H_1\) and \(H_2\). For \(j=1\ldots d\), let \((D_j,{\mathcal {A}}_j,\nu _j)\) be a \(\sigma \)-finite measure space and \(F_j\) be a Hilbert space of \({\mathbb {K}}\)-valued functions that is compactly embedded in \(L^2(D_j,{\mathcal {A}}_j,\nu _j)\). The \(\sigma \)-finiteness of the measure spaces ensures that

$$\begin{aligned} L^2(D_1,{\mathcal {A}}_1,\nu _1) \otimes \dots \otimes L^2(D_d,{\mathcal {A}}_d,\nu _d) = L^2(D,{\mathcal {A}},\mu ), \end{aligned}$$

where D is the Cartesian product of the sets \(D_j\) and \(\mu \) is the unique product measure of the measures \(\nu _j\) on the tensor product \({\mathcal {A}}\) of the \(\sigma \)-algebras \({\mathcal {A}}_j\). The tensor product space

$$\begin{aligned} F=F_1 \otimes \dots \otimes F_d \end{aligned}$$

is compactly embedded in \(L^2(D,{\mathcal {A}},\mu )\). Assuming that the approximation numbers of the univariate embeddings \(F_j\hookrightarrow L^2(D_j,{\mathcal {A}}_j,\nu _j)\) are of polynomial decay, that is,

$$\begin{aligned} e^{\mathrm{det}}\left( n,F_j\hookrightarrow L^2(D_j,{\mathcal {A}}_j,\nu _j), \Lambda ^{\mathrm{all}}\right) \asymp n^{-r_j} \end{aligned}$$

for some \(r_j>0\), it can be derived from Mityagin [14] and Nikol’skaya [15] that

$$\begin{aligned} e^{\mathrm{det}}\left( n,F\hookrightarrow L^2(D,{\mathcal {A}},\mu ), \Lambda ^{\mathrm{all}}\right) \asymp n^{-r}\ln ^{r(d_0-1)}n, \end{aligned}$$

where r is the minimum among all numbers \(r_j\) and \(d_0\) is its multiplicity. Corollary 1 implies

$$\begin{aligned} e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2(D,{\mathcal {A}},\mu ), \Lambda ^{\mathrm{std}}\right) \asymp n^{-r}\ln ^{r(d_0-1)}n, \end{aligned}$$

where the optimal order is achieved by \(A_n^r\). We do not discuss explicit estimates in this abstract setting.

Example 4

(Approximation of isotropic Sobolev functions on the torus). Let D again be the d-torus, this time represented by \([0,2\pi ]^d\). Let F be the Sobolev space of complex-valued functions on D with isotropic smoothness \(r\in {\mathbb {N}}\), equipped with the scalar product

$$\begin{aligned} \left\langle f,g\right\rangle _F = \sum _{\left\| \alpha \right\| _1 \le r} \left\langle D^\alpha f,D^\alpha g\right\rangle _2 . \end{aligned}$$

This example is not a tensor product problem. For this classical problem, it is known that

$$\begin{aligned}&e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \asymp e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{std}}\right) \\&\quad \asymp e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp e^{\mathrm{ran}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp n^{-r/d} \end{aligned}$$

for \(r>d/2\). In the case \(r\le d/2\), where function values are only defined almost everywhere, the last three relations stay valid. See Jerome [8], Triebel [21], Mathé [13], and Heinrich [6]. For \(n\le 2^d\), however, the function \(n^{-r/d}\) is not suited to describe the behavior of \(\sigma (n)\). It has been proved by Kühn, Mayer, and Ullrich [11] that there are positive constants \(b_r\) and \(B_r\) that do not depend on d such that

$$\begin{aligned} b_r \left( \frac{\log _2\left( 1+d/\log _2 n\right) }{\log _2 n}\right) ^{r/2} \le \sigma (n) \le B_r \left( \frac{\log _2\left( 1+d/\log _2 n\right) }{\log _2 n}\right) ^{r/2} \end{aligned}$$
(7)

for all \(d>1\) and \(n\in {\mathbb {N}}\) with \(d\le n \le 2^d\). If we apply Relation (2) and Theorem 2, we obtain the existence of d-independent positive constants \({\tilde{b}}_r\) and \({\widetilde{B}}_r\) such that

$$\begin{aligned} {\tilde{b}}_r \left( \frac{\log _2\left( 1+d/\log _2 n\right) }{\log _2 n}\right) ^{r/2} \le e(n) \le {\widetilde{B}}_r \left( \frac{\log _2\left( 1+d/\log _2 n\right) }{\log _2 n}\right) ^{r/2} \end{aligned}$$

for all \(d>1\) and \(n\in {\mathbb {N}}\) with \(d\le n \le 2^{d-1}\). This optimal behavior is achieved by the algorithm \(A_n^r\).

Remark 1

(Implementation of these algorithms). The construction of the algorithms \(A_n^r\) is completely explicit. We are able to implement these algorithms if we know the singular value decomposition \({\mathcal {B}}\) of the embedding \(F\hookrightarrow L^2\) and if we are able to sample from the probability distributions \(\mu _m\). This task may be very hard. In Examples 1 and 4, however, it is not: there, \({\mathcal {B}}\) is the Fourier basis of \(L^2\), and all the random variables are independent and uniformly distributed on the underlying cube. The case of general tensor product spaces F and \(L^2\) can also be handled if the singular value decompositions \({\mathcal {B}}_j\) of the univariate embeddings \(F_j\hookrightarrow L^2(D_j,{\mathcal {A}}_j,\nu _j)\) are known. Then, the singular value decomposition of the embedding \(F\hookrightarrow L^2\) is given by

$$\begin{aligned} {\mathcal {B}}=\left\{ b^{(1)}\otimes \dots \otimes b^{(d)} \mid b^{(j)}\in {\mathcal {B}}_j \text { for }\;j=1\dots d\right\} , \end{aligned}$$

and the probability measure \(\mu _m\) is the average of m product measures; that is,

$$\begin{aligned} \mu _m=\frac{1}{m} \sum _{i=1}^m \bigotimes _{j=1}^d \eta _{i,j}, \end{aligned}$$

where \(\mathrm{d}\eta _{i,j}=|b_{i,j}|^2\mathrm{d}\nu _j\) with some \(b_{i,j}\in {\mathcal {B}}_j\). A random sample x from this distribution can be obtained as follows:

  1. Get i from the uniform distribution on \(\left\{ 1,\dots ,m\right\} \).

  2. Get \(x_1,\dots ,x_d\) independently from the probability distributions \(\eta _{i,1},\dots ,\eta _{i,d}\).

The second step can, for example, be done by rejection sampling, if the measures \(\eta _{i,j}\) have a bounded Lebesgue density. This way, the total sampling costs are linear in d. Another method of sampling from \(\mu _m\) is proposed by Cohen and Migliorati in [3, Section 5].
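This two-step scheme can be sketched in a few lines of Python. The sampler below is illustrative and not part of the paper; it assumes marginal densities on [0, 1] with known upper bounds, so that the second step can be carried out by rejection sampling as described above.

```python
import numpy as np

rng = np.random.default_rng()

def rejection_sample(density, bound):
    """Draw one sample on [0, 1] from a Lebesgue density that is bounded by `bound`."""
    while True:
        x = rng.uniform()
        if rng.uniform() * bound <= density(x):
            return x

def sample_mu_m(marginal_densities, bounds, size):
    """Two-step sampler for mu_m in the tensor product case.

    marginal_densities[i][j] is the density of eta_{i,j}, i.e. |b_{i,j}|^2 with
    respect to nu_j (here: Lebesgue measure on [0, 1]); bounds[i][j] bounds it.
    Returns an array of shape (size, d).
    """
    m, d = len(marginal_densities), len(marginal_densities[0])
    nodes = np.empty((size, d))
    for s in range(size):
        i = rng.integers(m)                              # step 1: pick one of the m products
        nodes[s] = [rejection_sample(marginal_densities[i][j], bounds[i][j])
                    for j in range(d)]                   # step 2: sample its d marginals
    return nodes
```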

4 A Method for Multivariate Integration

In this section, we require the measure \(\mu \) to be finite. This ensures that the integral operator

$$\begin{aligned} I: F \rightarrow {\mathbb {K}}, \quad I(f)=\int _D f \mathrm{d}\mu \end{aligned}$$

is well defined and continuous on F. Let us assume that \(\mu \) is a probability measure. We want to approximate I(f) for an unknown function \(f\in F_\circ \) by a randomized algorithm \(Q_n\) that evaluates at most n function values of f. The worst case error of \(Q_n\) is the quantity

$$\begin{aligned} e^{\mathrm{ran}}(Q_n)=\sup \limits _{f\in F_\circ } \left( {\mathbb {E}}\left| I(f)-Q_n(f)\right| ^2\right) ^{1/2} . \end{aligned}$$

The minimal worst case error among such algorithms is denoted by

$$\begin{aligned} e^{\mathrm{ran}}(n,I,\Lambda ^{\mathrm{std}}) = \inf \limits _{Q_n} e^{\mathrm{ran}}(Q_n) . \end{aligned}$$

Like any method for \(L^2\)-approximation, the algorithm \(A_n^r\) from Sect. 3 can also be used for numerical integration.

Algorithm

For all \(r>0\), any \(n\in {\mathbb {N}}\), and \(f\in L^2\), let

$$\begin{aligned} Q_{2n}^r(f)=I(A_n^r f) + \frac{1}{n}\sum _{j=1}^n \left( f-A_n^r f\right) (X_j), \end{aligned}$$

where \(X_1,\dots ,X_n\) are random variables with distribution \(\mu \) that are independent of each other and the random variables in \(A_n^r\).
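A minimal Python sketch of this estimator follows. It is illustrative only; the names `approx`, `integral_of_approx`, and `sample_mu` stand for quantities that are assumed to be available (the pointwise evaluation of \(A_n^r f\), its exact integral, and a sampler for \(\mu \)).

```python
import numpy as np

def variance_reduced_integral(f, approx, integral_of_approx, sample_mu, n):
    """Sketch of Q_{2n}^r.

    approx             -- callable x -> (A_n^r f)(x), built from n function values of f
    integral_of_approx -- the exact value of I(A_n^r f)
    sample_mu          -- sample_mu(n) draws n independent nodes from mu
    """
    x = sample_mu(n)                                # X_1, ..., X_n ~ mu
    correction = np.mean(f(x) - approx(x))          # direct simulation of I(f - A_n^r f)
    return integral_of_approx + correction
```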

It is easy to verify that \(Q_{2n}^r\) is unbiased, evaluates at most 2n function values of f, and satisfies

$$\begin{aligned} {\mathbb {E}}\left| I(f)-Q_{2n}^r(f)\right| ^2 \le \frac{1}{n}\, {\mathbb {E}}\left\| f-A_n^r f\right\| _2^2 \end{aligned}$$

for each f in \(L^2\): conditionally on the randomness in \(A_n^r\), the correction term is an average of n independent and unbiased estimates of \(I(f-A_n^r f)\), whose variance is bounded by \(\left\| f-A_n^r f\right\| _2^2\). We thus obtain the following corollary.

Corollary 3

Assume that \(L:{\mathbb {N}}\rightarrow (0,\infty )\) satisfies (1), and let \(c_r=2^{r\lceil 2r+3\rceil +1}\).

$$\begin{aligned}&\text {If}\quad&\sigma (n)&\le L(n) \quad&\text {for all}\qquad&n\in {\mathbb {N}},\\&\text {then}\quad&e^{\mathrm{ran}}(Q_{2n}^r)&\le c_r\,n^{-1/2} L(n) \quad&\text {for all}\qquad&n\in {\mathbb {N}}. \end{aligned}$$

In particular:

$$\begin{aligned}&e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \preccurlyeq n^{-p} \ln ^q n\\&\quad \Rightarrow \quad e^{\mathrm{ran}}\left( n,I, \Lambda ^{\mathrm{std}}\right) \preccurlyeq n^{-p-1/2} \ln ^q n. \end{aligned}$$

The result on the order of convergence is quite general but not always optimal. An example is given by integration with respect to the Lebesgue measure \(\mu \) on the Sobolev space F with dominating mixed smoothness r on the d-dimensional unit cube, as treated by Novak and the author [10] and Ullrich [22]. In this case, we have

$$\begin{aligned}&e^{\mathrm{det}}\left( n,F\hookrightarrow L^2, \Lambda ^{\mathrm{all}}\right) \asymp n^{-r} \ln ^{r(d-1)} n,\\&e^{\mathrm{ran}}\left( n,I, \Lambda ^{\mathrm{std}}\right) \asymp n^{-r-1/2}. \end{aligned}$$

The main strength of Corollary 3 is that it provides us with unbiased methods for high-dimensional integration achieving a small error with a modest number of function values.

Example 5

(Integration of mixed order Sobolev functions on the torus). As in Example 1, let F be the Sobolev space of dominating mixed smoothness r on the d-torus, and let \(\mu \) be the Lebesgue measure. Among all randomized algorithms for multivariate integration in F, the randomized Frolov algorithm \(Q_n^*\) is known to have the optimal error rate. It is shown by Ullrich [22] that there is some constant \(c>2^d\) such that

$$\begin{aligned} e^{\mathrm{ran}}\left( Q^*_n\right) \le c\,n^{-r-1/2} \quad \quad \text {for }\;n\in {\mathbb {N}}. \end{aligned}$$
(8)

However, this estimate is trivial if n is not exponentially large in d. For smaller values of n, an error less than one is guaranteed by the direct simulation

$$\begin{aligned} S_n(f)=\frac{1}{n} \sum _{j=1}^n f(X_j), \end{aligned}$$

with independent and uniformly distributed random variables \(X_j\). It satisfies

$$\begin{aligned} e^{\mathrm{ran}}\left( S_n\right) \le n^{-1/2} \quad \quad \text {for }\; n\in {\mathbb {N}}. \end{aligned}$$
(9)

However, this error bound decays only slowly as n tends to infinity and does not reflect the smoothness of the integrands at all. The method \(Q^p_{2n}\), in contrast, also guarantees nontrivial error bounds for small values of n, yet converges faster than \(S_n\). Relation (6) immediately yields that

$$\begin{aligned} e^{\mathrm{ran}}\left( Q^p_{2n}\right) \le C\,n^{-p-1/2} \quad \quad \text {for }\;n\in {\mathbb {N}}, \end{aligned}$$
(10)

with \(p=\frac{r}{2+\ln d}\) and \(C=2^{p \lceil 2p+4\rceil +1}\). For example, let \(d=500\) and \(r=8\). For one million function values, the estimate (8) for the Frolov algorithm is larger than one, the estimate (9) for the direct simulation gives the error \(10^{-3}\), and the estimate (10) for our new algorithm gives an error smaller than \(5\cdot 10^{-7}\).
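These three numbers can be reproduced by a plain evaluation of the bounds (8), (9), and (10) in Python (not part of the paper); one million function values corresponds to \(2n=10^6\) in (10), and for (8) we only use that the known constant exceeds \(2^d\).

```python
import math

d, r = 500, 8
budget = 10 ** 6                                   # total number of function values

print(budget ** -0.5)                              # (9), direct simulation: 1e-3

p = r / (2 + math.log(d))
C = 2 ** (p * math.ceil(2 * p + 4) + 1)
print(C * (budget // 2) ** (-p - 0.5))             # (10), our method with n = 5 * 10^5: ~4.6e-7

print(2 ** d * budget ** (-r - 0.5) > 1)           # (8) exceeds one here: True
```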

Remark 2

(Implementation of these algorithms). We are able to implement the algorithms \(Q_{2n}^r\) under the following assumptions:

  • We are able to implement \(A_n^r\). This issue is discussed in Remark 1.

  • We know the integrals \(I(b_j)\) of the eigenfunctions \(b_j\in {\mathcal {B}}\) for all \(j\le 2^{-\ell _r}n\).

  • We can sample from the probability distribution \(\mu \).

In the above example, the implementation is particularly easy, since \({\mathcal {B}}\) is the Fourier basis and all the random variables are independent and uniformly distributed on the unit cube.
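As an illustration of how simple the implementation becomes in this situation, here is a self-contained toy computation in one dimension. It is entirely illustrative and not taken from the paper: the basis is the Fourier basis on the 1-torus, so \(u_m\equiv 1\) and all nodes are uniform; the test function \(f(x)=1/(2-\cos 2\pi x)\) has the known integral \(1/\sqrt{3}\); and the step sizes follow \(A_n^r\) with \(r=1\).

```python
import numpy as np

rng = np.random.default_rng(0)

def freq(j):                                   # frequency of the j-th Fourier basis function
    return j // 2 if j % 2 == 0 else -(j // 2)

def b(j, x):                                   # b_j(x) = exp(2 pi i k_j x), normalized in L^2
    return np.exp(2 * np.pi * 1j * freq(j) * x)

f = lambda x: 1.0 / (2.0 - np.cos(2 * np.pi * x))
exact = 1.0 / np.sqrt(3.0)                     # the exact integral of f over [0, 1]

ell, k = 3, 14                                 # ell_r = 3 for r = 1; about 2^k nodes per part
coeff = {}                                     # coefficients of the current approximation

for step in range(ell + 1, k + 1):             # the nontrivial steps of A_n^r
    n_step, m_step = 2 ** (step - 1), 2 ** (step - 1 - ell)
    x = rng.random(n_step)                     # nodes from mu_m = uniform distribution
    resid = f(x) - sum(c * b(j, x) for j, c in coeff.items())
    for j in range(1, m_step + 1):             # Monte Carlo update of the leading coefficients
        coeff[j] = coeff.get(j, 0.0) + np.mean(resid * np.conj(b(j, x)))

approx = lambda x: sum(c * b(j, x) for j, c in coeff.items())
x = rng.random(2 ** k)                         # direct simulation of the correction term
estimate = coeff[1].real + np.mean(f(x) - approx(x)).real   # I(A f) is the coefficient of b_1

print(abs(estimate - exact), 2 ** (-k / 2))    # the error is far below the Monte Carlo rate
```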

5 A Weaker Type of A Priori Knowledge

In the previous sections, we assumed that the target function f is contained in the unit ball of a Hilbert space F that is compactly embedded into \(L^2\), that is,

$$\begin{aligned} \left\| f\right\| _F \le 1. \end{aligned}$$
(11)

As we have seen in Sect. 2, the space F induces a nonincreasing sequence \(\sigma \), the singular numbers

$$\begin{aligned} \sigma (1) \ge \sigma (2) \ge \cdots > 0 \end{aligned}$$

of the embedding \(F\hookrightarrow L^2\). This sequence is either finite or tends to zero. It also induces a nested sequence V of subspaces

$$\begin{aligned} V_0 \subset V_1 \subset V_2 \subset \cdots \subset L^2, \qquad \dim (V_m) = m, \end{aligned}$$

where \(V_m\) is spanned by the first m elements of the singular value decomposition.

In turn, any such pair \(\left( \sigma ,V\right) \) induces a Hilbert space F that is compactly embedded into \(L^2\). We choose \(b_m\) as an element of the orthogonal complement of \(V_{m-1}\) in \(V_{m}\) with \(\left\| b_m\right\| _2=1\) and define F by its orthonormal basis \(\{\sigma (1) b_1,\sigma (2) b_2,\ldots \}\). It has the scalar product

$$\begin{aligned} \left\langle f,g\right\rangle _F = \sum \sigma (k)^{-2} \left\langle f,b_k\right\rangle _2 \overline{\left\langle g,b_k\right\rangle }_2, \end{aligned}$$

where we take the sum over the whole sequence \(\sigma \). It is not hard to see that the correspondence between F and the pair \(\left( \sigma ,V\right) \) is bijective up to the choice of the spaces \(V_m\) for which we have \(\sigma (m+1)=\sigma (m)\).

Let \(P_m\) denote the orthogonal projection onto \(V_m\) in \(L^2\). It is readily verified that our assumption (11) on the target function f implies that

$$\begin{aligned} \left\| f-P_m f\right\| _2^2 \le \sigma (m+1)^2 \quad \text {for all} \quad m\in {\mathbb {N}}_0. \end{aligned}$$
(12)

In general, however, (12) is strictly weaker than (11). For example, if \(\sigma (k)=1/k\) for \(k\in {\mathbb {N}}\), the function

$$\begin{aligned} f=\sum (\sigma (k)^2-\sigma (k+1)^2)^{1/2}\, b_k \end{aligned}$$

satisfies (12) but is not even contained in the space F. Indeed, the sum \(\sum _{k>m} (\sigma (k)^2-\sigma (k+1)^2)\) telescopes to \(\sigma (m+1)^2\), so that \(\left\| f-P_m f\right\| _2^2 = \sigma (m+1)^2\), whereas \(\left\| f\right\| _F^2 = \sum _k \sigma (k)^{-2} (\sigma (k)^2-\sigma (k+1)^2) = \sum _k \frac{2k+1}{(k+1)^2} = \infty \). In Sect. 3, we constructed a randomized algorithm \(A_n^r: L^2 \rightarrow V_m\) and proved upper bounds on the mean square error \({\mathbb {E}}\left\| f- A_n^r(f)\right\| _2^2\) for any f from (11). In fact, the same error bounds hold for any f from (12). We state this as Theorem 3.

Theorem 3

Let \((D,{\mathcal {A}},\mu )\) be a measure space and \(L^2=L^2(D,{\mathcal {A}},\mu )\). For any \(m\in {\mathbb {N}}_0\), let \(V_m\) be an m-dimensional subspace of \(L^2\) such that \(V_m\subset V_{m+1}\), and let \(P_m: L^2\rightarrow V_m\) be the orthogonal projection onto \(V_m\). Assume that \(f\in L^2\) satisfies

$$\begin{aligned} \left\| f-P_m f\right\| _2^2 \le \varepsilon (m) \qquad \text {for all}\quad m\in {\mathbb {N}}_0, \end{aligned}$$
(13)

with some \(\varepsilon :{\mathbb {N}}_0\rightarrow (0,\infty )\). Then the randomized algorithm \(Q_m:L^2 \rightarrow V_m\) as defined below satisfies

$$\begin{aligned} {\mathbb {E}}\left\| f- Q_m f\right\| _2^2 \le 2\, \varepsilon (m) \end{aligned}$$

for any \(m=2^k\) and \(k\in {\mathbb {N}}_0\). The number of requested function values is at most

$$\begin{aligned} n\left( Q_m\right) \le 4\, m \cdot \max \limits _{0\le j \le k} \left\lceil \frac{\varepsilon (\lfloor 2^{j-1}\rfloor )}{\varepsilon (2^j)}\right\rceil . \end{aligned}$$
(14)

To define the algorithm \(Q_m\), we choose \(b_n\) in the orthogonal complement of \(V_{n-1}\) in \(V_n\) with \(\left\| b_n\right\| _2=1\) for all \(n\in {\mathbb {N}}\). For \(j\in {\mathbb {N}}\), we set

$$\begin{aligned} m_j=2^{j-1} \qquad \text {and}\qquad n_j=2^j \left\lceil \frac{\varepsilon (\lfloor 2^{j-2}\rfloor )}{\varepsilon (2^{j-1})} \right\rceil . \end{aligned}$$

Then the method \(M^{(k)}_{{\varvec{n}},{\varvec{m}}}: L^2\rightarrow V_{m_k}\) for \(k\in {\mathbb {N}}_0\) can be defined as in Sect. 3. Given \(m=2^k\) for some \(k\in {\mathbb {N}}_0\), we define \(Q_m=M^{(k+1)}_{{\varvec{n}},{\varvec{m}}}: L^2\rightarrow V_m\).

Proof

We only sketch the proof since it is very similar to the proof of Theorem 2. Just like in Lemma 1, we can show for any \(k\in {\mathbb {N}}_0\) that

$$\begin{aligned} {\mathbb {E}}\left\| f- M^{(k+1)}_{{\varvec{n}},{\varvec{m}}} f\right\| _2^2 \le \frac{m_{k+1}}{n_{k+1}}\, {\mathbb {E}}\left\| f- M^{(k)}_{{\varvec{n}},{\varvec{m}}} f\right\| _2^2 + \varepsilon (m_{k+1}). \end{aligned}$$

The statement follows by induction on \(k\in {\mathbb {N}}_0\). \(\square \)

Note that we did not impose any condition on the upper bound \(\varepsilon :{\mathbb {N}}_0\rightarrow (0,\infty )\). If \(\varepsilon \) is regularly decreasing, the maximum in (14) is bounded by a constant that does not depend on m. Roughly speaking, the algorithm \(Q_m\) admits a mean square error of order \(\varepsilon (m)\) with a sample size of order m for any f from (13).
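A short Python sketch of this bookkeeping (with illustrative names; `eps` is a user-supplied bound as in (13)) computes the step sizes of \(Q_m\) and the resulting sample count, which can be compared with (14).

```python
import math

def sample_budget(eps, k):
    """Total number of function values used by Q_m for m = 2^k."""
    total = 0
    for j in range(1, k + 2):                     # Q_m = M^{(k+1)} uses the steps j = 1, ..., k+1
        num = eps(2 ** (j - 2) if j >= 2 else 0)  # eps(floor(2^{j-2}))
        total += 2 ** j * math.ceil(num / eps(2 ** (j - 1)))
    return total

eps = lambda m: (m + 1) ** -2.0                   # an example of a regularly decreasing bound
k = 10
bound = 4 * 2 ** k * max(math.ceil(eps(2 ** (j - 1) if j >= 1 else 0) / eps(2 ** j))
                         for j in range(k + 1))
assert sample_budget(eps, k) <= bound             # the bound (14) on the sample size
```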

Remark 3

(Optimal approximation within a subspace). Let D be a Borel subset of \({\mathbb {R}}^d\) with positive Lebesgue measure, \({\mathcal {A}}\) be the Borel \(\sigma \)-algebra on D, and \(\mu \) be a probability measure on \((D,{\mathcal {A}})\). The best approximation of \(f\in L^2(D,{\mathcal {A}},\mu )\) within the subspace \(V_m\) is given by \(P_m f\). Its error is given by the number

$$\begin{aligned} e_m(f)=\inf \limits _{v\in V_m} \left\| f-v\right\| _2 = \left\| f-P_m f\right\| _2. \end{aligned}$$

In general, we cannot find \(P_m f\) by sampling only a finite number of function values of f. What we can provide is a random approximation \(v_m\) within \(V_m\) whose root mean square error

$$\begin{aligned} \left( {\mathbb {E}}\left\| f-v_m\right\| _2^2\right) ^{1/2} \end{aligned}$$

is close to \(e_m(f)\). If we know the numbers \(e_m(f)\) for all \(m\in {\mathbb {N}}\) (or some good upper bound) and if they are regularly decreasing, we can choose \(v_m\) as the output of the method \(Q_m\) from Theorem 3, which uses a sample size of order m. But even if we do not know anything about \(f\in L^2\), we can still find an approximation \(v_m\) like above. We only need the mild assumption that \(V_m\) consists of functions defined everywhere on D and that for each \(x\in D\), there is some \(v\in V_m\) with \(v(x)\ne 0\). We can then choose \(v_m\) as the output of a weighted least squares method, see Cohen and Migliorati [3, Theorem 2.1 (iv)]. The sample size of this method, however, is at least of order \(m\ln m\). In both cases, the involved proportionality constants are independent of the dimension of the domain D.