1 Introduction

Professor Kai-Tai Fang and his collaborators have demonstrated the effectiveness of low discrepancy points as space filling designs [4,5,6, 11]. They have promoted discrepancy as a quality measure for statistical experimental designs to the statistics, science, and engineering communities [7,8,9,10].

Low discrepancy uniform designs, \(\mathscr {U}= \{\varvec{u}_i\}_{i=1}^N\), are typically constructed so that their empirical distributions, \(F_\mathscr {U}\), approximate \(F_{unif }\), the uniform distribution on the unit cube, \((0,1)^d\). The discrepancy measures the magnitude of \(F_{unif }-F_\mathscr {U}\). The uniform design is a commonly used space filling design for computer experiments [5] and can be constructed using JMP® [20].

When the target probability distribution for the design, \(F\), defined over the experimental domain \(\varOmega \), is not the uniform distribution on the unit cube, then the desired design, \(\mathscr {X}\), is typically constructed by transforming a low discrepancy uniform design, i.e.,

$$\begin{aligned} \mathscr {X}= \{\varvec{x}_i\}_{i=1}^N = \{\varvec{\varPsi }(\varvec{u}_i)\}_{i=1}^N = \varvec{\varPsi }(\mathscr {U}), \qquad \varvec{\varPsi }: (0,1)^d\rightarrow \varOmega . \end{aligned}$$
(5.1)

Note that \(F\) may differ from \(F_{unif }\) because \(\varOmega \ne (0,1)^d\) and/or \(F\) is non-uniform. A natural transformation, \(\varvec{\varPsi }(\varvec{u})=\bigl (\varPsi _1(u_1),\ldots ,\varPsi _d(u_d) \bigr )\), when \(F\) has independent marginals, is the inverse distribution transformation:

$$\begin{aligned} \varPsi _j(u_j) = F_j^{-1}(u_j), \quad j =1, \ldots , d, \qquad \text {where } F(\varvec{x}) = F_1(x_1) \cdots F_d(x_d). \end{aligned}$$
(5.2)

A number of transformation methods for different distributions can be found in [2] and [11, Chap. 1].
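To make the construction concrete, the following is a minimal sketch of transformation (5.1)–(5.2) for a target with independent standard normal marginals; the use of scipy's qmc module for a scrambled Sobol' design, the sample size, and all variable names are our own choices.

```python
# Sketch: transform a low discrepancy uniform design into a normal design
# via the inverse distribution transformation (5.2).
import numpy as np
from scipy.stats import norm, qmc

d, N = 5, 64
U = qmc.Sobol(d=d, scramble=True, seed=7).random(N)  # uniform design U in (0,1)^d
X = norm.ppf(U)                                      # X = Psi(U), Psi_j = Phi^{-1}
```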

This chapter addresses the question of whether the design \(\mathscr {X}\) resulting from transformation (5.1) of a low discrepancy design, \(\mathscr {U}\), is itself low discrepancy with respect to the target distribution \(F\). In other words,

$$\begin{aligned} does\ small\ F_{unif }- F_\mathscr {U}\ imply\ small\ F- F_\mathscr {X}? \end{aligned}$$
(Q)

We show that the answer may be yes or no, depending on how the question is understood. We discuss both cases. For illustrative purposes, we consider the situation where \(F\) is the standard multivariate normal distribution, \(F_{normal }\).

In the next section, we define the discrepancy and motivate it from three perspectives. In Sect. 5.3 we give a simple condition under which the answer to (Q) is yes. But, in Sect. 5.4 we show that under more practical assumptions the answer to (Q) is no. An example illustrates what can go wrong. Section 5.5 provides a coordinate exchange algorithm that improves the discrepancy of a candidate design. Simulation results illustrate the performance of this algorithm. We conclude with a brief discussion.

2 The Discrepancy

Experimental design theory based on discrepancy assumes an experimental region, \(\varOmega \), and a target probability distribution, \(F:\varOmega \rightarrow [0,1]\), which is known a priori. We assume that \(F\) has a probability density, \(\varrho \). It is convenient to also work with measures, \(\nu \), defined on \(\varOmega \). If \(\nu \) is a probability measure, then the associated probability distribution is given by \(F(\varvec{x}) = \nu ((-\varvec{\infty },\varvec{x}])\). The Dirac measure, \(\delta _{\varvec{x}}\), assigns unit measure to the set \(\{\varvec{x}\}\) and zero measure to sets not containing \(\varvec{x}\). A design, \(\mathscr {X}= \{\varvec{x}_i\}_{i=1}^N\), is a finite set of points with empirical distribution \(F_{\mathscr {X}} = N^{-1} \sum _{i=1}^N \mathbbm {1}_{(-\varvec{\infty },\varvec{x}_i]}\) and empirical measure \(\nu _{\mathscr {X}} = N^{-1} \sum _{i=1}^N \delta _{\varvec{x}_i}\).

Our notation for discrepancy takes the form of

$$ D(F_{\mathscr {X}},F,K), \ D(\mathscr {X},F,K), \ D(\mathscr {X},\varrho ,K), \ D(\mathscr {X},\nu ,K), \ D(\nu _{\mathscr {X}},\nu ,K), \text { etc.}, $$

all of which mean the same thing. The first argument always refers to the design, the second argument always refers to the target, and the third argument is a symmetric, positive definite kernel, which is explained below. We abuse the discrepancy notation because sometimes it is convenient to refer to the design as a set, \(\mathscr {X}\), other times by its empirical distribution, \(F_{\mathscr {X}}\), and other times by its empirical measure, \(\nu _{\mathscr {X}}\). Likewise, sometimes it is convenient to refer to the target as a probability measure, \(\nu \), other times by its distribution function, F, and other times by its density function, \(\varrho \).

Table 5.1 Three interpretations of the discrepancy

In the remainder of this section we provide three interpretations of the discrepancy, summarized in Table 5.1. These results are presented in various places, including [14, 15]. One interpretation of discrepancy is the norm of \(\nu - \nu _{\mathscr {X}}\). The second and third interpretations consider the problem of evaluating the mean of a random variable \(Y=f(\varvec{X})\), or equivalently a multidimensional integral

$$\begin{aligned} \mu = \mathbb {E}(Y) = \mathbb {E}[f(\varvec{X})] = \int _{\varOmega } f(\varvec{x}) \, \varrho (\varvec{x}) \, \mathrm {d}\varvec{x}, \end{aligned}$$
(5.3)

where \(\varvec{X}\) is a random vector with density \(\varrho \). The second interpretation of the discrepancy is the worst-case cubature error for integrands, f, in the unit ball of a Hilbert space. The third interpretation is the root mean squared cubature error for integrands, f, which are realizations of a stochastic process.

2.1 Definition in Terms of a Norm on a Hilbert Space of Measures

Let \((\mathscr {M}, \left\langle \cdot , \cdot \right\rangle _{\mathscr {M}})\) be a Hilbert space of measures defined on the experimental region, \(\varOmega \). Assume that \(\mathscr {M}\) includes all Dirac measures. Define the kernel function \(K:\varOmega \times \varOmega \rightarrow \mathbb {R}\) in terms of inner products of Dirac measures:

$$\begin{aligned} K(\varvec{t},\varvec{x}) := \left\langle \delta _{\varvec{t}}, \delta _{\varvec{x}} \right\rangle _{\mathscr {M}}, \qquad \forall \varvec{t}, \varvec{x}\in \varOmega . \end{aligned}$$
(5.4)

The squared distance between two Dirac measures in \(\mathscr {M}\) is then

$$\begin{aligned} \left\Vert \delta _{\varvec{x}} - \delta _{\varvec{t}} \right\Vert _{\mathscr {M}}^2 = K(\varvec{t},\varvec{t}) - 2K(\varvec{t},\varvec{x}) + K(\varvec{x},\varvec{x}), \qquad \forall \varvec{t}, \varvec{x}\in \varOmega . \end{aligned}$$
(5.5)

It is straightforward to show that K is symmetric in its arguments and positive-definite, namely:

$$\begin{aligned} K(\varvec{x}, \varvec{t}) = K(\varvec{t}, \varvec{x}) \qquad \forall \varvec{t}, \varvec{x}\in \varOmega ,\end{aligned}$$
(5.6a)
$$\begin{aligned} \sum \limits _{i, k=1}^N c_i c_k K(\varvec{x}_i,\varvec{x}_k) > 0, \qquad \forall N\in \mathbb {N}, \ \varvec{c}\in \mathbb {R}^N \setminus \{\varvec{0}\}, \ \mathscr {X}\subset \varOmega . \end{aligned}$$
(5.6b)

The inner product of arbitrary measures \(\lambda , \nu \in \mathscr {M}\) can be expressed in terms of a double integral of the kernel, K:

$$\begin{aligned} \left\langle \lambda , \nu \right\rangle _{\mathscr {M}} = \int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x}) \, \lambda (\mathrm {d}\varvec{t}) \nu (\mathrm {d}\varvec{x}). \end{aligned}$$
(5.7)

This can be established directly from (5.4) for \(\mathscr {M}_0\), the vector space spanned by all Dirac measures. Letting \(\mathscr {M}\) be the closure of the pre-Hilbert space \(\mathscr {M}_0\) then yields (5.7).

The discrepancy of the design \(\mathscr {X}\) with respect to the target probability measure \(\nu \) using the kernel K can be defined as the norm of the difference between the target probability measure, \(\nu \), and the empirical probability measure for \(\mathscr {X}\):

$$\begin{aligned} \nonumber D^2(\mathscr {X},\nu ,K)&:= \Vert \nu -\nu _\mathscr {X} \Vert _{\mathscr {M}}^2\\ \nonumber&= \int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x}) \, (\nu - \nu _{\mathscr {X}})(\mathrm {d}\varvec{t}) (\nu - \nu _{\mathscr {X}})(\mathrm {d}\varvec{x}) \\ \nonumber&= \int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x}) \, \nu (\mathrm {d}\varvec{t}) \nu (\mathrm {d}\varvec{x}) - \frac{2}{N} \sum _{i=1}^N \int _{\varOmega } K(\varvec{t},\varvec{x}_i) \, \nu (\mathrm {d}\varvec{t})\\&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{x}_i,\varvec{x}_k). \end{aligned}$$
(5.8a)

The formula for the discrepancy may be written equivalently in terms of the probability distribution, F, or the probability density, \(\varrho \), corresponding to the target probability measure, \(\nu \):

$$\begin{aligned} \nonumber D^2(\mathscr {X},F,K)&= \int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x}) \, \mathrm {d}F( \varvec{t}) \mathrm {d}F(\varvec{x}) - \frac{2}{N} \sum _{i=1}^N \int _{\varOmega } K(\varvec{t},\varvec{x}_i) \, \mathrm {d}F(\varvec{t}) \\&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{x}_i,\varvec{x}_k), \\ \nonumber&= \int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x}) \, \varrho (\varvec{t}) \varrho (\varvec{x}) \, \mathrm {d}\varvec{t}\mathrm {d}\varvec{x}- \frac{2}{N} \sum _{i=1}^N \int _{\varOmega } K(\varvec{t},\varvec{x}_i) \, \varrho (\varvec{t}) \, \mathrm {d}\varvec{t}\end{aligned}$$
(5.8b)
$$\begin{aligned}&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{x}_i,\varvec{x}_k). \end{aligned}$$
(5.8c)

Typically the computational cost of evaluating \(K(\varvec{t},\varvec{x})\) for any \((\varvec{t},\varvec{x}) \in \varOmega ^2\) is \(\mathscr {O}(d)\), where \(\varvec{t}\) is a d-vector. Assuming that the integrals above can be evaluated at a cost of \(\mathscr {O}(d)\), the computational cost of evaluating \(D(\mathscr {X},\nu ,K)\) is \(\mathscr {O}(dN^2)\).

The formulas for the discrepancy in (5.8) depend inherently on the choice of the kernel K. That choice is key to answering question (Q). An often used kernel is

$$\begin{aligned} K(\varvec{t},\varvec{x}) = \prod \limits _{j=1}^d\left[ 1+ \frac{1}{2} \left( |t_j|+ |x_j|- |x_j-t_j| \right) \right] . \end{aligned}$$
(5.9)

This kernel is plotted in Fig. 5.1 for \(d=1\). The distance between two Dirac measures by (5.5) for this kernel in one dimension is

$$\begin{aligned} \Vert \delta _{x}-\delta _{t}\Vert _{\mathscr {M}} = \sqrt{\left|x-t \right|}. \end{aligned}$$
Fig. 5.1 The kernel defined in (5.9) for \(d=1\)

The discrepancy for the uniform distribution on the unit cube defined in terms of the above kernel is expressed as

$$\begin{aligned} \nonumber D^2(\mathscr {U},F_{unif },K)&= \int _{(0,1)^d \times (0,1)^d} K(\varvec{t},\varvec{x}) \, \mathrm {d}\varvec{t}\mathrm {d}\varvec{x}- \frac{2}{N} \sum _{i=1}^N \int _{(0,1)^d} K(\varvec{t},\varvec{u}_i) \, \mathrm {d}\varvec{t}\\ \nonumber&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{u}_i,\varvec{u}_k) \\ \nonumber&= \left( \frac{4}{3} \right) ^d - \frac{2}{N} \sum _{i=1}^N \prod _{j=1}^d \left[ 1 + u_{ij} - \frac{u_{ij}^2}{2} \right] \\ \nonumber&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N \prod _{j=1}^d \left[ 1+ \min (u_{ij},u_{kj})\right] . \end{aligned}$$
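As a check on the closed form above, the squared discrepancy can be evaluated directly from a design matrix. The sketch below is one possible vectorized implementation; the function name and array layout are our own choices.

```python
import numpy as np

def uniform_discrepancy_sq(U):
    """Squared discrepancy of U (N x d, points in (0,1)^d) with respect to
    the uniform distribution, using the kernel (5.9) and the closed form above."""
    N, d = U.shape
    term1 = (4.0 / 3.0) ** d
    term2 = np.prod(1.0 + U - U**2 / 2.0, axis=1).mean()
    # pairwise products over coordinates of 1 + min(u_ij, u_kj)
    pair = np.prod(1.0 + np.minimum(U[:, None, :], U[None, :, :]), axis=2)
    return term1 - 2.0 * term2 + pair.sum() / N**2
```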

2.2 Definition in Terms of a Deterministic Cubature Error Bound

Now let \((\mathscr {H}, \left\langle \cdot , \cdot \right\rangle _{\mathscr {H}})\) be a reproducing kernel Hilbert space (RKHS) of functions [1], \(f: \varOmega \rightarrow \mathbb {R}\), which appear as the integrand in (5.3). By definition, the reproducing kernel, K, is the unique function defined on \(\varOmega \times \varOmega \) with the properties that \(K(\cdot , \varvec{x})\in \mathscr {H}\) for any \(\varvec{x}\in \varOmega \) and \(f(\varvec{x})=\left\langle K(\cdot ,\varvec{x}), f \right\rangle _{\mathscr {H}}\). This second property implies that K reproduces function values via the inner product. It can be verified that K is symmetric in its arguments and positive definite as in (5.6).

The integral \(\mu = \int _{\varOmega } f(\varvec{x}) \, \varrho (\varvec{x}) \, \mathrm {d}\varvec{x}\), which was identified as \(\mathbb {E}[f(\varvec{X})]\) in (5.3), can be approximated by a sample mean:

$$\begin{aligned} \widehat{\mu }=\frac{1}{N}\sum _{i=1}^N f(\varvec{x}_i). \end{aligned}$$
(5.10)

The quality of this approximation to the integral, i.e., this cubature, depends in part on how well the empirical distribution of the design, \(\mathscr {X}= \{\varvec{x}_i\}_{i=1}^N\), matches the target distribution F associated with the density function \(\varrho \).

Define the cubature error as

$$\begin{aligned} \nonumber {{\,\mathrm{err}\,}}(f,\mathscr {X})&= \mu - \widehat{\mu } = \int _{\varOmega } f(\varvec{x})\, \varrho (\varvec{x}) \mathrm {d}\varvec{x}-\frac{1}{N}\sum _{i=1}^N f(\varvec{x}_i) \\&=\int _\varOmega f(\varvec{x}) \, \mathrm {d}[F(\varvec{x})-F_{\mathscr {X}}(\varvec{x})]. \end{aligned}$$
(5.11)

Under modest assumptions on the reproducing kernel, \({{\,\mathrm{err}\,}}(\cdot , \mathscr {X})\) is a bounded, linear functional. By the Riesz representation theorem, there exists a unique representer, \(\xi \in \mathscr {H}\), such that

$$ {{\,\mathrm{err}\,}}(f, \mathscr {X})=\left\langle \xi , f \right\rangle _{\mathscr {H}}, \quad \forall f\in \mathscr {H}. $$

The reproducing kernel allows us to write down an explicit formula for that representer, namely, \(\xi (\varvec{x})=\left\langle K(\cdot ,\varvec{x}), \xi \right\rangle _{\mathscr {H}}=\left\langle \xi , K(\cdot ,\varvec{x}) \right\rangle _{\mathscr {H}}={{\,\mathrm{err}\,}}(K(\cdot ,\varvec{x}),\mathscr {X})\). By the Cauchy-Schwarz inequality, there is a tight bound on the squared cubature error, namely

$$\begin{aligned} \left|{{\,\mathrm{err}\,}}(f,\mathscr {X}) \right|^2=\left\langle \xi , f \right\rangle _{\mathscr {H}}^2\le \left\Vert \xi \right\Vert _{\mathscr {H}}^2 \left\Vert f \right\Vert _{\mathscr {H}}^2 . \end{aligned}$$
(5.12)

The first term on the right describes the contribution made by the quality of the cubature rule, while the second term describes the contribution to the cubature error made by the nature of the integrand.

The square norm of the representer of the error functional is

$$\begin{aligned} \left\Vert \xi \right\Vert _{\mathscr {H}}^2&=\left\langle \xi , \xi \right\rangle _{\mathscr {H}}={{\,\mathrm{err}\,}}(\xi ,\mathscr {X}) \quad \text {since}\, \xi \,\text {represents the error functional}\\&= {{\,\mathrm{err}\,}}({{\,\mathrm{err}\,}}(K(\cdot ,\cdot \cdot ),\mathscr {X}),\mathscr {X}) \quad \text {since } \xi (\varvec{x})={{\,\mathrm{err}\,}}(K(\cdot ,\varvec{x}),\mathscr {X})\\&=\int _{\varOmega \times \varOmega } K(\varvec{t},\varvec{x})\, \mathrm {d}[F(\varvec{t})-F_{\mathscr {X}}(\varvec{t})] \mathrm {d}[F(\varvec{x})-F_{\mathscr {X}}(\varvec{x})] . \end{aligned}$$

We can equate this formula for \(\left\Vert \xi \right\Vert _{\mathscr {H}}^2\) with the formula for \(D^2(\mathscr {X},F,K)\) in (5.8). Thus, the tight, worst-case cubature error bound in (5.12) can be written in terms of the discrepancy as

$$\begin{aligned} \left|{{\,\mathrm{err}\,}}(f,\mathscr {X}) \right| \le \left\Vert f \right\Vert _{\mathscr {H}} D(\mathscr {X},F,K). \end{aligned}$$

This implies our second interpretation of the discrepancy in Table 5.1.

We now identify the RKHS for the kernel K defined in (5.9). Let \((\varvec{a},\varvec{b})\) be some d dimensional box containing the origin in the interior or on the boundary. For any \(\mathfrak {u}\subseteq \{1, \ldots , d\}\), define \(\partial ^\mathfrak {u}f(\varvec{x}_\mathfrak {u}) : = \partial ^{|\mathfrak {u}|}f(\varvec{x}_\mathfrak {u},\varvec{0})/\partial \varvec{x}_\mathfrak {u}\), the mixed first-order partial derivative of f with respect to the \(x_j\) for \(j\in \mathfrak {u}\), while setting \(x_j=0\) for all \(j \notin \mathfrak {u}\). Here, \(\varvec{x}_{\mathfrak {u}} = (x_j)_{j \in \mathfrak {u}}\), and \(|\mathfrak {u}|\) denotes the cardinality of \(\mathfrak {u}\). By convention, \(\partial ^{\emptyset }f := f(\varvec{0})\). The inner product for the reproducing kernel K defined in (5.9) is defined as

$$\begin{aligned} \langle f,g \rangle _{\mathscr {H}}&:= \sum _{\mathfrak {u}\subseteq \{1,...,d\}}\int _{(\varvec{a},\varvec{b})}\partial ^{\mathfrak {u}}f(\varvec{x}_\mathfrak {u})\partial ^{\mathfrak {u}}g(\varvec{x}_\mathfrak {u}) \, \mathrm {d}\varvec{x}_\mathfrak {u}\\ \nonumber&= f(\varvec{0})g(\varvec{0}) + \int _{a_1}^{b_1} \partial ^{\{1\}}f(x_1)\partial ^{\{1\}}g(x_1) \, \mathrm {d}x_1 \\ \nonumber&\qquad + \int _{a_2}^{b_2} \partial ^{\{2\}}f(x_2)\partial ^{\{2\}}g(x_2) \, \mathrm {d}x_2 + \cdots \\ \nonumber&\qquad + \int _{a_2}^{b_2} \int _{a_1}^{b_1} \partial ^{\{1,2\}}f(x_1,x_2)\partial ^{\{1,2\}}g(x_1,x_2) \, \mathrm {d}x_1 \mathrm {d}x_2+ \cdots \\ \nonumber&\qquad + \int _{(\varvec{a},\varvec{b})} \partial ^{\{1,\ldots , d\}}f(\varvec{x})\partial ^{\{1, \ldots , d\}}g(\varvec{x}) \, \mathrm {d}\varvec{x}. \end{aligned}$$
(5.13)

To establish that the inner product defined above corresponds to the reproducing kernel K defined in (5.9), we note that

$$\begin{aligned} \partial ^{\mathfrak {u}} K((\varvec{x}_u,\mathbf{0}),\varvec{t})&=\prod _{j \in \mathfrak {u}}\frac{1}{2} \left[ {{\,\mathrm{sign}\,}}(x_j) - {{\,\mathrm{sign}\,}}(x_j - t_j) \right] \\&= \prod \limits _{j \in \mathfrak {u}}{{\,\mathrm{sign}\,}}(t_j) \mathbbm {1}_{(\min (0,t_j),\max (0,t_j))}(x_j). \end{aligned}$$

Thus, \(K(\cdot ,\varvec{t})\) possesses sufficient regularity to have finite \(\mathscr {H}\)-norm. Furthermore, K exhibits the reproducing property for the above inner product because

$$\begin{aligned} {\langle K(\cdot ,\varvec{t}),f \rangle }_{\mathscr {H}} \\&= \sum _{\mathfrak {u}\subseteq \{1,...,d\}}\int _{(\varvec{a},\varvec{b})} \partial ^{\mathfrak {u}}K((\varvec{x}_u,\mathbf{0}),\varvec{t})\partial ^{\mathfrak {u}}f(\varvec{x}_u,\mathbf{0}) \, \mathrm {d}\varvec{x}_u \\&= \sum _{\mathfrak {u}\subseteq \{1,...,d\}}\int _{(\varvec{a},\varvec{b})} \prod \limits _{j \in \mathfrak {u}}{{\,\mathrm{sign}\,}}(t_j) \mathbbm {1}_{(\min (0,t_j),\max (0,t_j))}(x_j) \partial ^{\mathfrak {u}}f(\varvec{x}_u,\mathbf{0}) \, \mathrm {d}\varvec{x}_u \\&= \sum _{\mathfrak {u}\subseteq \{1,...,d\}}\sum _{\mathfrak {v}\subseteq \mathfrak {u}}(-1)^{|\mathfrak {u}|-|\mathfrak {v}|}f(\varvec{t}_\mathfrak {v},\mathbf{0})= f(\varvec{t}). \end{aligned}$$

2.3 Definition in Terms of the Root Mean Squared Cubature Error

Assume \(\varOmega \) is a measurable subset of \(\mathbb {R}^d\) and F is the target probability distribution on \(\varOmega \), as defined earlier. Now, let \(f: \varOmega \rightarrow \mathbb {R}\) be a stochastic process with a constant pointwise mean, i.e.,

$$ \mathbb {E}_{f\in \mathscr {A}}[f(\varvec{x})]=m, \qquad \forall \varvec{x}\in \varOmega , $$

where \(\mathscr {A}\) is the sample space for this stochastic process. Now we interpret K as the covariance kernel for the stochastic process:

$$ K(\varvec{t},\varvec{x}):=\mathbb {E}_{f\in \mathscr {A}}\left( [f(\varvec{t})-m][f(\varvec{x})-m]\right) =\text {cov}(f(\varvec{t}),f(\varvec{x})),\qquad \forall \varvec{t}, \varvec{x}\in \varOmega . $$

It is straightforward to show that the kernel function is symmetric and positive definite.

Define the error functional \({{\,\mathrm{err}\,}}(\cdot ,\mathscr {X})\) in the same way as in (5.11). Now, the mean squared error is

$$\begin{aligned} \mathbb {E}_{f\in \mathscr {A}}\left\{ {{\,\mathrm{err}\,}}(f,\mathscr {X})\right\} ^2&= \mathbb {E}_{f\in \mathscr {A}} \left\{ \int _{\varOmega }f(\varvec{x}) \,\mathrm {d}F(\varvec{x})-\frac{1}{N}\sum _{i=1}^N f(\varvec{x}_i) \right\} ^2\\&=\mathbb {E}_{f\in \mathscr {A}}\left\{ \int _{\varOmega }(f(\varvec{x})-m) \, \mathrm {d}F(\varvec{x})-\frac{1}{N}\sum _{i=1}^N (f(\varvec{x}_i)-m) \right\} ^2\\&=\int _{\varOmega ^2}\mathbb {E}_{f\in \mathscr {A}}[ (f(\varvec{t})-m)(f(\varvec{x})-m)] \, \mathrm {d}F(\varvec{t})\mathrm {d}F(\varvec{x})\\&\qquad -\frac{2}{N}\sum _{i=1}^N \int _{\varOmega } \mathbb {E}_{f\in \mathscr {A}}[(f(\varvec{x})-m)(f(\varvec{x}_i)-m)]\,\mathrm {d}F(\varvec{x})\\&\qquad +\frac{1}{N^2}\sum _{i,k=1}^N\mathbb {E}_{f\in \mathscr {A}}[(f(\varvec{x}_i)-m)(f(\varvec{x}_k)-m)]\\&=\int _{\varOmega ^2}K(\varvec{t},\varvec{x}) \, \mathrm {d}F(\varvec{t})\mathrm {d}F(\varvec{x})-\frac{2}{N}\sum _{i=1}^N \int _{\varOmega } K(\varvec{x}, \varvec{x}_i)\, \mathrm {d}F(\varvec{x}) \\&\qquad +\frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{x}_i,\varvec{x}_k). \end{aligned}$$

Therefore, we can identify the discrepancy \(D(\mathscr {X}, F, K)\) defined in (5.8) with the root mean squared error:

$$ D(\mathscr {X}, F, K) =\sqrt{\mathbb {E}_{f\in \mathscr {A}}\left\{ {{\,\mathrm{err}\,}}(f,\mathscr {X})\right\} ^2}=\sqrt{\mathbb {E}\left|\int _{\varOmega }f(\varvec{x})\varrho (\varvec{x})\mathrm {d}\varvec{x}-\frac{1}{N}\sum _{i=1}^N f(\varvec{x}_i) \right|^2}. $$
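To illustrate this third interpretation, the following is a small Monte Carlo check in \(d=1\) for the uniform target and the kernel (5.9): the root mean squared cubature error over simulated Gaussian process realizations with covariance K should approximately reproduce \(D(\mathscr {X},F_{unif },K)\). The grid, the jitter added before the Cholesky factorization, the design, and the number of realizations are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.001, 0.999, 500)                 # quadrature grid on (0,1)
K = lambda t, x: 1.0 + 0.5 * (np.abs(t) + np.abs(x) - np.abs(x - t))
# Cholesky factor of the covariance matrix (small jitter for numerical stability)
L = np.linalg.cholesky(K(grid[:, None], grid[None, :]) + 1e-8 * np.eye(grid.size))

X = (np.arange(8) + 0.5) / 8                          # an 8-point design on (0,1)
idx = np.searchsorted(grid, X)                        # nearest grid indices

errs = []
for _ in range(2000):
    f = L @ rng.standard_normal(grid.size)            # one GP realization (m = 0)
    errs.append(f.mean() - f[idx].mean())             # err(f, X) = mu - muhat
rms = np.sqrt(np.mean(np.square(errs)))

# closed-form D(X, F_unif, K) in d = 1, from the formula in Sect. 5.2.1
D2 = 4/3 - 2*np.mean(1 + X - X**2/2) \
     + np.mean(1 + np.minimum(X[:, None], X[None, :]))
print(rms, np.sqrt(D2))                               # the two should roughly agree
```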

3 When a Transformed Low Discrepancy Design Also Has Low Discrepancy

Having motivated the definition of discrepancy in (5.8) from three perspectives, we now turn our attention to question (Q), namely, whether a transformation of low discrepancy points with respect to the uniform distribution yields low discrepancy points with respect to the new target distribution. In this section, we show a positive result, yet recognize some qualifications.

Consider some symmetric, positive definite kernel, \(K_{unif }: (0,1)^d \times (0,1)^d \rightarrow \mathbb {R}\), some uniform design \(\mathscr {U}\), some other domain, \(\varOmega \), some other target distribution, F, and some transformation \(\varvec{\varPsi }:(0,1)^d \rightarrow \varOmega \) as defined in (5.1). Then the squared discrepancy of the uniform design can be expressed according to (5.8) as follows:

$$\begin{aligned} \nonumber&{D^2(\mathscr {U},F_{{unif }},K_{unif })} \\ \nonumber&= \int _{(0,1)^d \times (0,1)^d} K_{unif }(\varvec{u},\varvec{v}) \, \mathrm {d}\varvec{u}\mathrm {d}\varvec{v}- \frac{2}{N} \sum _{i=1}^N \int _{(0,1)^d} K_{unif }(\varvec{u},\varvec{u}_i) \, \mathrm {d}\varvec{u}\\ \nonumber&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K_{unif }(\varvec{u}_i,\varvec{u}_k) \\ \nonumber&= \int _{\varOmega \times \varOmega } K_{unif }(\varvec{\varPsi }^{-1}(\varvec{t}),\varvec{\varPsi }^{-1}(\varvec{x})) \, \left|\frac{\partial \varvec{\varPsi }^{-1}(\varvec{t})}{\partial \varvec{t}} \right| \left|\frac{\partial \varvec{\varPsi }^{-1}(\varvec{x})}{\partial \varvec{x}} \right| \,\mathrm {d}\varvec{t}\mathrm {d}\varvec{x}\\ \nonumber&\qquad \qquad - \frac{2}{N} \sum _{i=1}^N \int _{\varOmega } K_{unif }(\varvec{\varPsi }^{-1}(\varvec{t}),\varvec{\varPsi }^{-1}(\varvec{x}_i)) \, \left|\frac{\partial \varvec{\varPsi }^{-1}(\varvec{t})}{\partial \varvec{t}} \right| \, \mathrm {d}\varvec{t}\\ \nonumber&\qquad \qquad + \frac{1}{N^2} \sum _{i,k=1}^N K_{unif }(\varvec{\varPsi }^{-1}(\varvec{x}_i),\varvec{\varPsi }^{-1}(\varvec{x}_k)) \\ \nonumber&= D^2(\mathscr {X},F,K) \end{aligned}$$

where the kernel K is defined as

$$\begin{aligned} K(\varvec{t},\varvec{x}) = K_{{unif }}(\varvec{\varPsi }^{-1}(\varvec{t}),\varvec{\varPsi }^{-1}(\varvec{x})), \end{aligned}$$
(5.14a)

and provided that the density, \(\varrho \), corresponding to the target distribution, F, satisfies

$$\begin{aligned} \varrho (\varvec{x}) = \left|\frac{\partial \varvec{\varPsi }^{-1}(\varvec{x})}{\partial \varvec{x}} \right|. \end{aligned}$$
(5.14b)

The above argument is summarized in the following theorem.

Theorem 5.1

Suppose that the design \(\mathscr {X}\) is constructed by transforming the design \(\mathscr {U}\) according to the transformation (5.1). Also suppose that conditions (5.14) are satisfied. Then \(\mathscr {X}\) has the same discrepancy with respect to the target distribution, F, defined by the kernel K, as the original design \(\mathscr {U}\) has with respect to the uniform distribution, defined by the kernel \(K_{unif }\). That is,

$$\begin{aligned} D(\mathscr {X},F,K) = D(\mathscr {U},F_{{unif }},K_{unif }). \end{aligned}$$

As a consequence, under conditions (5.14), question (Q) has a positive answer.

Condition (5.14b) may be easily satisfied. For example, it is automatically satisfied by the inverse cumulative distribution transform (5.2). Condition (5.14a) is simply a matter of definition of the kernel, K, but this definition has consequences. From the perspective of Sect. 5.2.1, changing the kernel from \(K_{unif }\) to K means changing the definition of the distance between two Dirac measures. From the perspective of Sect. 5.2.2, changing the kernel from \(K_{unif }\) to K means changing the definition of the Hilbert space of integrands, f, in (5.3). From the perspective of Sect. 5.2.3, changing the kernel from \(K_{unif }\) to K means changing the definition of the covariance kernel for the integrands, f, in (5.3).

To illustrate this point, consider a cousin of the kernel in (5.9), which places the reference point at \(\varvec{0.5} = (0.5, \ldots , 0.5)\), the center of the unit cube \((0,1)^d\):

$$\begin{aligned} K_{{unif }}(\varvec{u},\varvec{v})&= \prod _{j=1}^d\left[ 1+\frac{1}{2}\left( \left| u_j-1/2\right| + \left| v_j- 1/2 \right| -\left| u_j-v_j \right| \right) \right] \\ \nonumber&= K(\varvec{u}- \varvec{0.5}, \varvec{v}- \varvec{0.5}) \qquad \text {for}\, K \, \text {defined in (5.9)}. \end{aligned}$$
(5.15)

This kernel defines the centered \(L^2\)-discrepancy [13]. Consider the standard multivariate normal distribution, \(F_{normal }\), and choose the inverse normal transformation,

$$\begin{aligned} \varvec{\varPsi }(\varvec{u}) = (\varPhi ^{-1}(u_1), \ldots , \varPhi ^{-1}(u_d)), \end{aligned}$$
(5.16)

where \(\varPhi \) denotes the standard normal distribution function. Then condition (5.14b) is automatically satisfied, and condition (5.14a) is satisfied by defining

$$\begin{aligned} \nonumber K(\varvec{t},\varvec{x})&= K_{{unif }}(\varvec{\varPsi }^{-1}(\varvec{t}),\varvec{\varPsi }^{-1}(\varvec{x})) \\ \nonumber&= \prod _{j=1}^d\left[ 1+\frac{1}{2}\left( \left| \varPhi (t_j)-1/2\right| + \left| \varPhi (x_j)- 1/2 \right| \right. \right. \\ \nonumber&\qquad \qquad \left. \left. -\left| \varPhi (t_j)-\varPhi (x_j)\right| \right) \right] . \end{aligned}$$

In one dimension, the distance between two Dirac measures defined using the kernel \(K_{unif }\) above is \(\Vert \delta _{x} - \delta _{t} \Vert _{\mathscr {M}} = \sqrt{|x-t|}\), whereas the distance defined using the kernel K above is \(\Vert \delta _{x} - \delta _{t} \Vert _{\mathscr {M}} = \sqrt{|\varPhi (x)-\varPhi (t)|}\). Under kernel K, the distance between two Dirac measures is bounded, even though the domain of the distribution is unbounded. Such an assumption may be unpalatable.

4 Do Transformed Low Discrepancy Points Have Low Discrepancy More Generally?

The discussion above indicates that condition (5.14a) can be too restrictive. We would like to compare the discrepancies of designs under kernels that do not satisfy that restriction. In particular, we consider the centered \(L^2\)-discrepancy for uniform designs on \((0,1)^d\) defined by the kernel in (5.15):

$$\begin{aligned} {D^2(\mathscr {U}, F_{unif }, K_{unif })} \\&= \left( \frac{13}{12}\right) ^d - \frac{2}{N}\sum _{i=1}^N \prod _{j=1}^d \left[ 1+\frac{1}{2}\left( |u_{ij}-1/2|-|u_{ij}-1/2|^2\right) \right] \\&\qquad + \frac{1}{N^2}\sum _{i,k=1}^N\prod _{j=1}^d\left[ 1+\frac{1}{2}\left( |u_{ij}-1/2|+|u_{kj}-1/2|-|u_{ij}-u_{kj}| \right) \right] , \end{aligned}$$

where again, \(F_{unif }\) denotes the uniform distribution on \((0,1)^d\), and \(\mathscr {U}\) denotes a design on \((0,1)^d\).
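The formula above can be evaluated directly from a design matrix; the following sketch (the function name and array layout are ours) is reused in the comparison sketched further below.

```python
import numpy as np

def centered_l2_discrepancy_sq(U):
    """Squared centered L2-discrepancy of U (N x d, points in (0,1)^d),
    i.e., D^2(U, F_unif, K_unif) for the kernel (5.15)."""
    N, d = U.shape
    V = np.abs(U - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = np.prod(1.0 + 0.5 * (V - V**2), axis=1).mean()
    cross = 1.0 + 0.5 * (V[:, None, :] + V[None, :, :]
                         - np.abs(U[:, None, :] - U[None, :, :]))
    return term1 - 2.0 * term2 + np.prod(cross, axis=2).sum() / N**2
```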

Changing perspectives slightly, if \(F_{unif }'\) denotes the uniform distribution on the cube of volume one centered at the origin, \((-0.5,0.5)^d\), and the design \(\mathscr {U}'\) is constructed by subtracting \(\varvec{0.5}\) from each point in the design \(\mathscr {U}\):

$$\begin{aligned} \mathscr {U}' = \{\varvec{u}-\varvec{0.5} : \varvec{u}\in \mathscr {U}\}, \end{aligned}$$
(5.17)

then

$$\begin{aligned} D(\mathscr {U}',F'_{unif },K) = D(\mathscr {U}, F_{unif }, K_{unif }), \end{aligned}$$

where K is the kernel defined in (5.9).

Recall that the origin is a special point in the definition of the inner product for the Hilbert space with K as its reproducing kernel in (5.13). Therefore, this K from (5.9) is appropriate for defining the discrepancy for target distributions centered at the origin, such as the standard normal distribution, \(F_{normal }\). Such a discrepancy is

$$\begin{aligned}&{D^2(\mathscr {X}, F_{normal }, K) = \left( 1+\sqrt{\frac{2}{\pi }}\right) ^d} \nonumber \\&- \frac{2}{N}\sum _{i=1}^N \prod \limits _{j=1}^d\left[ 1+\frac{1}{\sqrt{2\pi }}+\frac{1}{2}|x_{ij}|-x_{ij}\left( \varPhi (x_{ij})-\frac{1}{2} \right) -\phi (x_{ij})\right] \nonumber \\&+\frac{1}{N^2}\sum _{i,k=1}^N \prod _{j=1}^d \left[ 1+\frac{1}{2}\left( |x_{ij}|+|x_{kj}|-|x_{ij}-x_{kj}|\right) \right] . \end{aligned}$$
(5.18)

Here, \(\phi \) is the standard normal probability density function. The derivation of (5.18) is given in the Appendix.
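Formula (5.18) can likewise be evaluated directly from a design matrix; a possible vectorized sketch (the function name and array layout are ours) is:

```python
import numpy as np
from scipy.stats import norm

def normal_discrepancy_sq(X):
    """Squared discrepancy (5.18) of X (N x d) with respect to the standard
    multivariate normal distribution, using the kernel (5.9)."""
    N, d = X.shape
    term1 = (1.0 + np.sqrt(2.0 / np.pi)) ** d
    h = (1.0 / np.sqrt(2.0 * np.pi) + 0.5 * np.abs(X)
         - X * (norm.cdf(X) - 0.5) - norm.pdf(X))
    term2 = np.prod(1.0 + h, axis=1).mean()
    A = np.abs(X)
    cross = 1.0 + 0.5 * (A[:, None, :] + A[None, :, :]
                         - np.abs(X[:, None, :] - X[None, :, :]))
    return term1 - 2.0 * term2 + np.prod(cross, axis=2).sum() / N**2
```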

We numerically compare the discrepancy of a uniform design, \(\mathscr {U}'\) given by (5.17) and the discrepancy of a design constructed by the inverse normal transformation, i.e., \(\mathscr {X}= \varvec{\varPsi }(\mathscr {U})\) for \(\varvec{\varPsi }\) in (5.16), where the \(\mathscr {U}\) leading to both \(\mathscr {U}'\) and \(\mathscr {X}\) is identical. We do not expect the magnitudes of the discrepancies to be the same, but we ask

$$\begin{aligned} \text {Does } D(\mathscr {U}'_1,F'_{unif }, K)&\le D(\mathscr {U}_2',F_{unif }', K)\\&\text {imply } D(\varvec{\varPsi }(\mathscr {U}_1),F_{normal }, K)\le D(\varvec{\varPsi }(\mathscr {U}_2),F_{normal }, K)? \end{aligned}$$
(Q′)

Again, K is given by (5.9). So we are actually comparing discrepancies defined by the same kernels, but not kernels that satisfy (5.14a).

Let \(d=5\) and \(N=50\). We generate \(B=20\) independent and identically distributed (IID) uniform designs, \(\mathscr {U}\) with \(N=50\) points on \((0,1)^5\) and then use the inverse distribution transformation to obtain IID random \(N(\mathbf{0}, {\mathsf I}_5)\) designs, \(\mathscr {X}= \varvec{\varPsi }(\mathscr {U})\). Figure 5.2 plots the discrepancies for normal designs, \(D(\varvec{\varPsi }(\mathscr {U}),F_{normal }, K)\), against the discrepancies for the uniform designs, \(D(\mathscr {U},F_{unif },K_{unif }) = D(\mathscr {U}',F_{unif }',K)\) for each of the \(B=20\) designs. Question (Q′) has a positive answer if and only if the lines passing through any two points on this plot all have non-negative slopes. However, that is not the case. Thus (Q′) has a negative answer.

Fig. 5.2 Normal discrepancy versus uniform discrepancy for transformed designs

We further investigate the relationship between the discrepancy of a uniform design and the discrepancy of the same design after inverse normal transformation. Varying the dimension d from 1 to 10, we calculate the sample correlation between \(D(\varvec{\varPsi }(\mathscr {U}),F_{normal }, K)\) and \(D(\mathscr {U},F_{unif },K_{unif }) = D(\mathscr {U}',F_{unif }',K)\) for \(B=500\) IID designs of size \(N=50\). Figure 5.3 displays the correlation as a function of d. Although the correlation is positive, it degrades with increasing d.
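A sketch of this experiment, reusing the centered_l2_discrepancy_sq and normal_discrepancy_sq functions sketched above (the seed and loop structure are our choices), is:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
B, N = 500, 50
for d in range(1, 11):
    du, dn = [], []
    for _ in range(B):
        U = rng.random((N, d))                                  # IID uniform design
        du.append(np.sqrt(centered_l2_discrepancy_sq(U)))       # D(U, F_unif, K_unif)
        dn.append(np.sqrt(normal_discrepancy_sq(norm.ppf(U))))  # D(Psi(U), F_normal, K)
    print(d, np.corrcoef(du, dn)[0, 1])                         # correlation vs dimension
```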

Fig. 5.3 Correlation between the uniform and normal discrepancies for different dimensions

Example 5.1

A simple cubature example illustrates that an inverse transformed low discrepancy design, \(\mathscr {U}\), may yield a large \(D(\varvec{\varPsi }(\mathscr {U}),F_{normal }, K)\) and also a large cubature error. Consider the integration problem in (5.3) with

$$\begin{aligned} \varvec{X}\sim N(\varvec{0}, {\mathsf I}_d), \qquad f(\varvec{x}) = \frac{x_1^2+\cdots +x^2_d}{1+10^{-8}(x_1^2+\cdots +x_d^2)}, \qquad Y = f(\varvec{X}),\end{aligned}$$
(5.19a)
$$\begin{aligned} \mu = \mathbb {E}(Y) = \int _{\mathbb {R}^d} \frac{x_1^2+\cdots +x^2_d}{1+10^{-8}(x_1^2+\cdots +x_d^2)} \phi (\varvec{x}) \, \mathrm {d}\varvec{x}, \end{aligned}$$
(5.19b)

where \(\phi \) is the probability density function for the standard multivariate normal distribution. The function \(f:\mathbb {R}^d \rightarrow \mathbb {R}\) is constructed to asymptote to a constant as \(\left\Vert \varvec{x}\right\Vert _2\) tends to infinity to ensure that f lies inside the Hilbert space corresponding to the kernel K defined in (5.9). Since the integrand in (5.19) is a function of \(\left\Vert \varvec{x}\right\Vert _2\), \(\mu \) can be written as a one-dimensional integral. For \(d=10\), \(\mu = 10\) to at least 15 significant digits using quadrature.

We can also approximate the integral in (5.19) using a \(d=10\), \(N=512\) cubature (5.10). We compare cubatures using two designs. The design \(\mathscr {X}_1\) is the inverse normal transformation of a scrambled Sobol’ sequence, \(\mathscr {U}_1\), which has a low discrepancy with respect to the uniform distribution on the d-dimensional unit cube. The design \(\mathscr {U}_2\) takes the point in \(\mathscr {U}_1\) that is closest to \(\varvec{0}\) and moves it to \(\left( 10^{-15}, \ldots , 10^{-15}\right) \), which is very close to \(\varvec{0}\). As seen in Table 5.2, the two uniform designs have quite similar, small discrepancies. However, the transformed designs, \(\mathscr {X}_j = \varvec{\varPsi }(\mathscr {U}_j)\) for \(j=1,2\), have very different discrepancies with respect to the normal distribution. This is due to the point in \(\mathscr {X}_2\) that has large negative coordinates. Furthermore, the cubatures, \(\widehat{\mu }\), based on these two designs have significantly different errors. The first design has both a smaller discrepancy and a smaller cubature error than the second. This could not have been inferred by looking at the discrepancies of the original uniform designs.

Table 5.2 Comparison of Integral Estimate
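A sketch of how the two designs in this example might be constructed and compared, reusing the normal_discrepancy_sq function sketched earlier (the seed is our choice), is:

```python
import numpy as np
from scipy.stats import norm, qmc

d, N = 10, 512
f = lambda x: np.sum(x**2, axis=1) / (1.0 + 1e-8 * np.sum(x**2, axis=1))

U1 = qmc.Sobol(d=d, scramble=True, seed=1).random(N)
X1 = norm.ppf(U1)                          # design 1: transformed scrambled Sobol'

U2 = U1.copy()
i0 = np.argmin(np.sum(U2**2, axis=1))      # point of U1 closest to the origin
U2[i0] = 1e-15                             # move it to (1e-15, ..., 1e-15)
X2 = norm.ppf(U2)                          # its image has large negative coordinates

for X in (X1, X2):
    # normal discrepancy and cubature estimate mu-hat (the true mu is 10)
    print(np.sqrt(normal_discrepancy_sq(X)), f(X).mean())
```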

5 Improvement by the Coordinate-Exchange Method

In this section, we propose an efficient algorithm that improves a design’s quality in terms of the discrepancy for the target distribution. We start with a low discrepancy uniform design, such as a Sobol’ sequence, and transform it into a design that approximates the target distribution. Following the optimal design approach, we then apply a coordinate-exchange algorithm to further improve the discrepancy of the design.

The coordinate-exchange algorithm was introduced in [18], and then applied widely to construct various kinds of optimal designs [16, 19, 21]. The coordinate-exchange algorithm is an iterative method. It finds the “worst” coordinate \(x_{ij}\) of the current design and replaces it to decrease the loss function, in this case the discrepancy. The most appealing advantage of the coordinate-exchange algorithm is that at each step one need only solve a univariate optimization problem.

First, we define the point deletion function, \(\mathfrak {d}_p\), as the change in squared discrepancy resulting from removing a point from the design:

$$\begin{aligned} \mathfrak {d}_p(i) = D^2(\mathscr {X})-\left( \frac{N-1}{N}\right) ^2 D^2(\mathscr {X}\backslash \{\varvec{x}_i\}). \end{aligned}$$
(5.20)

Here, the design \(\mathscr {X}\backslash \{\varvec{x}_i\}\) is the \(N-1\) point design with the point \(\{\varvec{x}_i\}\) removed. We suppress the choice of target distribution and kernel in the above discrepancy notation for simplicity. We then choose

$$\begin{aligned} i^*=\text{ argmax }_{i=1,\ldots ,N} \mathfrak {d}_p(i). \end{aligned}$$

The definition of \(i^*\) means that removing \(\varvec{x}_{i^*}\) from the design \(\mathscr {X}\) results in the smallest discrepancy among all possible deletions. Thus, \(\varvec{x}_{i^*}\) is helping the least, which makes it a prime candidate for modification.

Next, we define a coordinate deletion function, \(\mathfrak {d}_{c}\), as the change in the squared discrepancy resulting from removing a coordinate from the calculation of the discrepancy:

$$\begin{aligned} \mathfrak {d}_c(j) = D^2(\mathscr {X})-D^2(\mathscr {X}_{-j}). \end{aligned}$$
(5.21)

Here, the design \(\mathscr {X}_{-j}\) still has N points but now only \(d-1\) dimensions, the jth coordinate having been removed. For this calculation to be feasible, the target distribution must have independent marginals. Also, the kernel must be of product form. To simplify the derivation, we assume a somewhat stronger condition, namely that the marginals are identical and that each term in the product defining the kernel is the same for every coordinate:

$$\begin{aligned} \varOmega = \widetilde{\varOmega }\times \cdots \times \widetilde{\varOmega }, \qquad K(\varvec{t},\varvec{x}) = \prod _{j=1}^d [1+ \widetilde{K}(t_j,x_j)], \qquad \widetilde{K}:\widetilde{\varOmega }\times \widetilde{\varOmega }\rightarrow \mathbb {R}. \end{aligned}$$
(5.22)

We then choose

$$\begin{aligned} j^*=\text{ argmax }_{j=1, \ldots , d} \mathfrak {d}_c(j). \end{aligned}$$

For reasons analogous to those given above, the \(j^*\)th coordinate seems to be the best candidate for change.

Let \(\mathscr {X}^*(x)\) denote the design that results from replacing \(x_{i^*j^*}\) by x. We now define \(\varDelta (x)\) as the improvement in the squared discrepancy resulting from replacing \(\mathscr {X}\) by \(\mathscr {X}^*(x)\):

$$\begin{aligned} \varDelta (x) = D^2(\mathscr {X})-D^2(\mathscr {X}^*(x)). \end{aligned}$$
(5.23)

We can reduce the discrepancy by finding an x such that \(\varDelta (x)\) is positive. The coordinate-exchange algorithm outlined in Algorithm 1 improves the design by maximizing \(\varDelta (x)\) for one chosen coordinate in each iteration. The algorithm terminates when it exhausts the maximum allowed number of iterations or the optimal improvement \(\varDelta (x^*)\) is so small that it becomes negligible (\(\varDelta (x^*)\le \text {TOL}\)). Algorithm 1 is a greedy algorithm, and thus it can stop at a locally optimal design. We recommend multiple runs of the algorithm with different initial designs to obtain a design with the lowest discrepancy possible. Alternatively, users can include stochasticity in the choice of the coordinate that is to be exchanged, similarly to [16].

Algorithm 1
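The following is a naive sketch of the iteration described above. It recomputes the squared discrepancy directly via a user-supplied disc_sq function (e.g. the normal_discrepancy_sq sketch above) rather than the \(\mathscr {O}(dN^2)\) update formulas derived below, and it restricts the one-dimensional search to a bounded interval; both simplifications are ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_exchange(X, disc_sq, max_iter=200, tol=1e-10, bound=5.0):
    """Greedy coordinate exchange: pick the worst point and coordinate,
    then replace that single coordinate to reduce the squared discrepancy."""
    X = X.copy()
    N, d = X.shape
    for _ in range(max_iter):
        D2 = disc_sq(X)
        # point deletion function (5.20): which point helps the least?
        dp = [D2 - ((N - 1) / N) ** 2 * disc_sq(np.delete(X, i, axis=0))
              for i in range(N)]
        i_star = int(np.argmax(dp))
        # coordinate deletion function (5.21): which coordinate helps the least?
        dc = [D2 - disc_sq(np.delete(X, j, axis=1)) for j in range(d)]
        j_star = int(np.argmax(dc))
        # one-dimensional search for the exchange value x, cf. (5.23)
        def neg_delta(x):
            Xs = X.copy()
            Xs[i_star, j_star] = x
            return disc_sq(Xs) - D2            # = -Delta(x)
        res = minimize_scalar(neg_delta, bounds=(-bound, bound), method='bounded')
        if -res.fun <= tol:                    # Delta(x*) negligible: stop
            break
        X[i_star, j_star] = res.x
    return X
```

For the standard normal target, one might call, e.g., coordinate_exchange(X, normal_discrepancy_sq) with the discrepancy function sketched earlier.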

For kernels of product form, (5.22), and target distributions with independent and identical marginals, the formula for the squared discrepancy in (5.8) becomes

$$\begin{aligned} D^2(\mathscr {X},\rho ,K)&= (1+c)^d - \frac{2}{N} \sum _{i=1}^N H(\varvec{x}_{i}) + \frac{1}{N^2} \sum _{i,k=1}^N K(\varvec{x}_{i},\varvec{x}_{k}), \end{aligned}$$

\(\text {where}\)

$$\begin{aligned} h(x)&= \int _{\widetilde{\varOmega }} \widetilde{K}(t,x) \, \widetilde{\varrho }(t) \, \mathrm {d}t, \end{aligned}$$
(5.24a)
$$\begin{aligned} c&= \int _{\widetilde{\varOmega }\times \widetilde{\varOmega }} \widetilde{K}(t,x) \, \widetilde{\varrho }(t) \widetilde{\varrho }(x) \, \mathrm {d}t\mathrm {d}x = \int _{\widetilde{\varOmega }} h(x) \, \widetilde{\varrho }(x) \, \mathrm {d}x, \end{aligned}$$
(5.24b)
$$\begin{aligned} H(\varvec{x})&= \prod _{j=1}^d [1+ h(x_{j})]. \end{aligned}$$
(5.24c)

Evaluating h(x) or \(\widetilde{K}(t,x)\) requires \(\mathscr {O}(1)\) operations, while evaluating \(H(\varvec{x})\) or \(K(\varvec{t},\varvec{x})\) requires \(\mathscr {O}(d)\) operations. The computation of \(D(\mathscr {X},\rho ,K)\) requires \(\mathscr {O}(dN^2)\) operations because of the double sum. For a standard multivariate normal target distribution and the kernel defined in (5.9), we have

$$\begin{aligned} c&= \sqrt{\frac{2}{\pi }}, \\ h(x)&= \frac{1}{\sqrt{2\pi }}+\frac{1}{2}|x|-x[\varPhi (x)-1/2]-\phi (x),\\ \widetilde{K}(t,x)&= \frac{1}{2} (|t|+ |x|- |x-t|). \end{aligned}$$

The point deletion function defined in (5.20) then can be expressed as

$$\begin{aligned} \nonumber \mathfrak {d}_p(i)&= \frac{(2N-1)(1+c)^d}{N^2} - \frac{2}{N}\biggl [ \frac{1}{N} \sum _{k=1}^N H(\varvec{x}_{k}) + \left( 1 - \frac{1}{N} \right) H(\varvec{x}_{i}) \biggr ] \\ \nonumber&\qquad \qquad + \frac{1}{N^2}\biggl [2 \sum _{k=1}^N K(\varvec{x}_{i},\varvec{x}_{k})- K(\varvec{x}_{i},\varvec{x}_{i})\biggr ]. \end{aligned}$$

The computational cost for \(\mathfrak {d}_p(1), \ldots , \mathfrak {d}_p(N)\) is then \(\mathscr {O}(dN^2)\), which is the same order as the cost of the discrepancy of a single design.

The coordinate deletion function defined in (5.21) can be expressed as

$$\begin{aligned} \mathfrak {d}_c(j) = c(1+c)^{d-1} -\frac{2}{N}\sum _{i=1}^N \frac{h(x_{ij})H(\varvec{x}_i)}{1+h(x_{ij})} +\frac{1}{N^2}\sum _{i,k=1}^N \frac{\widetilde{K}(x_{ij},x_{kj}) K(\varvec{x}_i,\varvec{x}_k)}{1+\widetilde{K}(x_{ij},x_{kj})} . \end{aligned}$$

The computational cost for \(\mathfrak {d}_c(1), \ldots , \mathfrak {d}_c(d)\) is also \(\mathscr {O}(dN^2)\), which is the same order as the cost of the discrepancy of a single design.

Finally, the function \(\varDelta \) defined in (5.23) is given by

$$\begin{aligned} \nonumber \varDelta (x)&= -\frac{2\left[ h(x_{i^*j^*})-h(x) \right] H(\varvec{x}_{i^*})}{N[1 + h(x_{i^*j^*})]} \\ \nonumber&\qquad \qquad +\frac{1}{N^2}\left( 2\sum _{\begin{array}{c} i=1 \\ i \ne i^* \end{array}}^N \frac{[\widetilde{K}(x_{i^*j^*},x_{ij^*})-\widetilde{K}(x,x_{ij^*})] K(\varvec{x}_{i^*},\varvec{x}_i)}{1 + \widetilde{K}(x_{i^*j^*},x_{ij^*})} \right. \\ \nonumber&\qquad \qquad \left. + \frac{[\widetilde{K}(x_{i^*j^*},x_{i^*j^*})-\widetilde{K}(x,x)] K(\varvec{x}_{i^*},\varvec{x}_{i^*})}{1 + \widetilde{K}(x_{i^*j^*},x_{i^*j^*})} \right) \end{aligned}$$

If we drop the terms that are independent of x, then we can maximize the function

$$\begin{aligned} \varDelta '(x) = Ah(x) - \frac{1}{N}\sum _{\begin{array}{c} i=1 \\ i \ne i^* \end{array}}^N B_i \widetilde{K}(x,x_{ij^*}) - C \widetilde{K}(x,x) \end{aligned}$$

where

$$\begin{aligned} A = \frac{2H(\varvec{x}_{i^*})}{1 + h(x_{i^*j^*})}, \quad B_i = \frac{2K(\varvec{x}_{i^*},\varvec{x}_i)}{1 + \widetilde{K}(x_{i^*j^*},x_{ij^*})}, \quad C = \frac{K(\varvec{x}_{i^*},\varvec{x}_{i^*})}{N[1 + \widetilde{K}(x_{i^*j^*},x_{i^*j^*})]}. \end{aligned}$$

Note that \(A, B_1, \ldots , B_N, C\) only need to be computed once for each iteration of the coordinate exchange algorithm.
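A sketch of this one-dimensional objective for the standard normal target and the kernel (5.9), transcribing the formulas for \(\varDelta '\), A, \(B_i\), and C above (function names and array layout ours), is:

```python
import numpy as np
from scipy.stats import norm

def h(x):                          # h(x) for the standard normal target
    return (1.0 / np.sqrt(2.0 * np.pi) + 0.5 * np.abs(x)
            - x * (norm.cdf(x) - 0.5) - norm.pdf(x))

def ktilde(t, x):                  # K~(t, x) for the kernel (5.9)
    return 0.5 * (np.abs(t) + np.abs(x) - np.abs(x - t))

def delta_prime(x, X, i_star, j_star):
    """Delta'(x): the x-dependent part of the improvement (5.23)."""
    N, d = X.shape
    Kt = 1.0 + ktilde(X[i_star][None, :], X)   # (N, d) factors 1 + K~(x_{i*j}, x_{ij})
    Krow = Kt.prod(axis=1)                     # K(x_{i*}, x_i) for i = 1, ..., N
    A = 2.0 * np.prod(1.0 + h(X[i_star])) / (1.0 + h(X[i_star, j_star]))
    B = 2.0 * Krow / Kt[:, j_star]             # B_i
    C = Krow[i_star] / (N * Kt[i_star, j_star])
    others = np.delete(np.arange(N), i_star)
    return (A * h(x)
            - (B[others] * ktilde(x, X[others, j_star])).sum() / N
            - C * ktilde(x, x))
```

Maximizing delta_prime over x, e.g. with a bounded univariate optimizer, gives the exchange value for the chosen coordinate.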

The coordinate-exchange algorithm we have developed is greedy and deterministic. The coordinate chosen for exchange is the one with the largest point and coordinate deletion function values, and the exchange is always made as long as the new optimal coordinate improves the objective function. Such a deterministic, greedy algorithm is likely to return a design whose discrepancy attains only a local minimum. To overcome this, we can either run the algorithm with multiple random initial designs, or combine coordinate exchange with stochastic optimization algorithms, such as simulated annealing (SA) [17] or threshold accepting (TA) [12]. For example, we can add a random selection scheme when choosing the coordinate to exchange, and we can incorporate a random decision, following the SA or TA method, on whether to accept the exchange. Tuning parameters must be carefully chosen to make the SA or TA method effective. Interested readers can refer to [22] to see how TA can be applied to the minimization of discrepancy.

6 Simulation

To demonstrate the performance of the d-dimensional standard normal designs proposed in Sect. 5.5, we compare four families of designs: (1) RAND: inverse transformed IID uniform random numbers; (2) SOBOL: inverse transformed Sobol’ set; (3) E-SOBOL: inverse transformed scrambled Sobol’ set where the one-dimensional projections of the Sobol’ set have been adjusted to be \(\left\{ 1/(2N), 3/(2N), \ldots , (2N-1)/(2N) \right\} \); and (4) CE: improved E-SOBOL via Algorithm 1. We have tried different combinations of dimension, d, and sample size, N. For each (d, N) and each algorithm we generate 500 designs and compute their discrepancies (5.18).

Figure 5.4 contains the boxplots of normal discrepancies corresponding to the four generators with \(d=2\) and \(N=32\). It shows that SOBOL, E-SOBOL, and CE all outperform RAND by a large margin. To better present the comparison among the remaining generators, we exclude RAND from Fig. 5.5.

We also report the average execution times for the four generators in Table 5.3. All codes were run on a MacBook Pro with a 2.4 GHz Intel Core i5 processor. The maximum number of iterations allowed is \(M_{\max } = 200\). Algorithm 1 converges within 20 iterations in all simulation examples.

Fig. 5.4 Performance comparison of designs

We summarize the results of our simulation as follows.

1. Overall, CE produces the smallest discrepancy.

2. When the design is relatively dense, i.e., N/d is large, E-SOBOL and CE have similar performance.

3. When the design is more sparse, i.e., N/d is smaller, SOBOL and E-SOBOL have similar performance, but CE is superior to both in terms of the discrepancy, not only in the mean but also in the range over the 500 generated designs.

4. CE requires the longest computational time to construct a design, but this time is moderate. When the cost of obtaining function values is substantial, the cost of constructing the design may be insignificant.

Fig. 5.5 Performance comparison of designs

Table 5.3 Execution Time of Generators (in seconds)

7 Discussion

This chapter summarizes the three interpretations of the discrepancy. We show that for kernels and variable transformations satisfying conditions (5.14), variable transformations of low discrepancy uniform designs yield low discrepancy designs with respect to the target distribution. However, for more practical choices of kernels, this correspondence may not hold. The coordinate-exchange algorithm can improve the discrepancies of candidate designs that may be constructed by variable transformations.

While discrepancies can be defined for arbitrary kernels, we believe that the choice of kernel can be important, especially for small sample sizes. If the distribution has a symmetry, e.g. \(\varrho (\varvec{T}(\varvec{x})) = \varrho (\varvec{x})\) for some probability preserving bijection \(\varvec{T}:\varOmega \rightarrow \varOmega \), then we would like our discrepancy to remain unchanged under such a bijection, i.e., \(D(\varvec{T}(\mathscr {X}),\varrho ,K) = D(\mathscr {X},\varrho ,K)\). This can typically be ensured by choosing kernels satisfying \(K(\varvec{T}(\varvec{t}),\varvec{T}(\varvec{x})) = K(\varvec{t},\varvec{x})\). The kernel \(K_{unif }\) defined in (5.15) satisfies this assumption for the standard uniform distribution and the transformation \(\varvec{T}(\varvec{x}) = \varvec{1}- \varvec{x}\). The kernel K defined in (5.9) satisfies this assumption for the standard normal distribution and the transformation \(\varvec{T}(\varvec{x}) = -\varvec{x}\).

For target distributions with independent marginals and kernels of product form as in (5.22), coordinate weights [3, Sect. 4] are used to determine which projections of the design, denoted by \(\mathfrak {u}\subseteq \{1, \ldots , d\}\), are more important. The product form of the kernel given in (5.22) can be generalized as

$$\begin{aligned} K_{\varvec{\gamma }}(\varvec{t},\varvec{x}) = \prod _{j=1}^d\left[ 1+ \gamma _j \widetilde{K}(t_j,x_j) \right] . \end{aligned}$$

Here, the positive coordinate weights are \(\varvec{\gamma }= (\gamma _1, \ldots , \gamma _d)\). The squared discrepancy corresponding to this kernel may then be written as

$$\begin{aligned} D^2(\mathscr {X},F,K_{\varvec{\gamma }})&= \sum _{\begin{array}{c} \mathfrak {u}\subseteq \{1, \ldots , d\} \\ \mathfrak {u}\ne \emptyset \end{array}}\gamma _\mathfrak {u}D^2_{\mathfrak {u}} (\mathscr {X},\rho ,K), \qquad \gamma _{\mathfrak {u}} = \prod _{j \in \mathfrak {u}} \gamma _j\\ D^2_{\mathfrak {u}} (\mathscr {X}_\mathfrak {u},F_\mathfrak {u},K)&= c^{\left|\mathfrak {u} \right|} - \frac{2}{N} \sum _{i=1}^N \prod _{j \in \mathfrak {u}} h(x_{ij}) + \frac{1}{N^2} \sum _{i,k=1}^N \prod _{j \in \mathfrak {u}} \widetilde{K}(x_{ij},x_{kj}), \end{aligned}$$

where c and h are defined in (5.24). Here, \(\mathscr {X}_\mathfrak {u}\) denotes the projection of the design into the coordinates contained in \(\mathfrak {u}\), and \(F_\mathfrak {u}= \prod _{j \in \mathfrak {u}} F_j\) is the \(\mathfrak {u}\)-marginal distribution. Each discrepancy piece, \(D_{\mathfrak {u}} (\mathscr {X}_\mathfrak {u},F_\mathfrak {u},K)\), measures how well the projected design \(\mathscr {X}_\mathfrak {u}\) matches \(F_\mathfrak {u}\).
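For the standard normal target and the kernel (5.9), this coordinate-weighted discrepancy can also be computed directly from (5.8) with the kernel \(K_{\varvec{\gamma }}\), which is equivalent to summing the decomposition above over all nonempty \(\mathfrak {u}\); a sketch (names and layout ours) is:

```python
import numpy as np
from scipy.stats import norm

def weighted_normal_discrepancy_sq(X, gamma):
    """D^2(X, F_normal, K_gamma) with K_gamma(t, x) = prod_j [1 + gamma_j K~(t_j, x_j)],
    for the standard normal target and K~ from the kernel (5.9)."""
    N, d = X.shape
    c = np.sqrt(2.0 / np.pi)
    h = (1.0 / np.sqrt(2.0 * np.pi) + 0.5 * np.abs(X)
         - X * (norm.cdf(X) - 0.5) - norm.pdf(X))
    A = np.abs(X)
    kt = 0.5 * (A[:, None, :] + A[None, :, :] - np.abs(X[:, None, :] - X[None, :, :]))
    term1 = np.prod(1.0 + gamma * c)
    term2 = np.prod(1.0 + gamma * h, axis=1).mean()
    term3 = np.prod(1.0 + gamma * kt, axis=2).sum() / N**2
    return term1 - 2.0 * term2 + term3
```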

The values of the coordinate weights can be chosen to reflect the user’s belief as to the importance of the design matching the target for various coordinate projections. A large value of \(\gamma _j\) relative to the other \(\gamma _{j'}\) places more importance on the \(D_{\mathfrak {u}} (\mathscr {X}_\mathfrak {u},F_\mathfrak {u},K)\) with \(j \in \mathfrak {u}\). Thus, \(\gamma _j\) is an indication of the importance of coordinate j in the definition of \(D(\mathscr {X},F,K_{\varvec{\gamma }})\).

If \(\varvec{\gamma }\) is one choice of coordinate weights and \(\varvec{\gamma }'=C\varvec{\gamma }\) is another choice of coordinate weights where \(C > 1\), then \(\gamma '_\mathfrak {u}= C^{\left|\mathfrak {u} \right|} \gamma _\mathfrak {u}\). Thus, \(D(\mathscr {X},F,K_{\varvec{\gamma }'})\) emphasizes the projections corresponding to the \(\mathfrak {u}\) with large \(\left|\mathfrak {u} \right|\), i.e., the higher order effects. Likewise, \(D(\mathscr {X},F,K_{\varvec{\gamma }})\) places relatively more emphasis on lower order effects. Again, the choice of coordinate weights reflects the user’s belief as to the relative importance of the design matching the target distribution for lower order effects or higher order effects.