Polynomial Chaos Methodology

The polynomial chaos methodology (PCM) is a relatively recent approach that offers great potential for computational fluid dynamics (CFD) related non-deterministic simulations, as it allows the treatment of a large variety of stochastic variables and properties that can be described by probability density functions (PDFs). The method is based on a spectral representation of the uncertainty where the basis polynomials contain the randomness, described by random variables \({\pmb {\xi }}\) with values in a set \(\varGamma \), and the unknown expansion coefficients are deterministic, resulting in deterministic equations. More specifically, if u is a random variable indexed by a spatial variable \(\mathbf {x}\in {\mathscr {D}}\subseteq {\mathbb R}^{d}\) (typically, \(d=3\) in physical space) and time \(t\ge 0\), the so-called polynomial chaos expansion (PCE) reads:

$$\begin{aligned} u(\mathbf {x},t,{\pmb {\xi }})\simeq {\mathbb P}^P[u](\mathbf {x},t,{\pmb {\xi }})=\sum _{i=0}^Pu^i(\mathbf {x},t) \psi _i({\pmb {\xi }})\,. \end{aligned}$$
(1)

In the above, \(u^i\) are the deterministic unknown expansion coefficients and represent the random mode i of the random variable u. \(\psi _i\) are \(N\)-variate polynomials which are functions of \({\pmb {\xi }}=(\xi _1,\xi _2,\dots ,\xi _N)\) where \(\xi _j\) is a random variable with values in a set \(\varGamma _j\). \(N\) is the number of input uncertainties, which is also the number of random dimensions. It is assumed that these variables are independent and real valued, and hence \(\varGamma =\smash {\varGamma _1\times \varGamma _2\times \cdots \times \varGamma _N}\subseteq \smash {{\mathbb R}^N}\). Input uncertainties could, e.g., be associated with uncertain operational conditions or uncertainty in the geometry. For an external flow around an airplane, the inlet Mach number, angle of attack, inlet pressure, etc., are examples of operational conditions. Geometrical uncertainties are then uncertainties in the shape of the airplane due to manufacturing tolerances. It is clear that, because of the uncertain input, any flow variable, say u, also becomes uncertain and can therefore be described as in Eq. (1). The total number of terms \(P+1\) used in (1) depends on the highest order of the polynomial that is used (denoted by \(p\)) and on the number of random dimensions. One has, see [1]:

$$\begin{aligned} P+1=\frac{(N+p)!}{N!p!}\,. \end{aligned}$$
(2)
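As an illustration, Eq. (2) can be evaluated directly; the Python sketch below (not part of the original text; standard library only) shows how quickly the number of terms \(P+1\) grows with \(N\) and \(p\):

```python
from math import comb

def pce_terms(N: int, p: int) -> int:
    """Total number of terms P+1 in a total-order-p PCE with N random dimensions, Eq. (2)."""
    return comb(N + p, p)   # (N+p)! / (N! p!)

# The term count grows quickly with both N and p:
assert pce_terms(1, 3) == 4       # one variable: 1, xi, xi^2, xi^3
assert pce_terms(3, 3) == 20
assert pce_terms(10, 4) == 1001
```

This growth is the main driver of the cost of PC methods in high stochastic dimensions.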

The methodology was originally formulated by Wiener [2] and was much later rediscovered and used for CFD applications by several groups, e.g., Xiu and Karniadakis [3], Lucor et al. [4], Le Maître et al. [5], Mathelin et al. [6], and Walters and Huyse [7] among others.

In the original method of Wiener [2], the projection basis \(\smash {\psi _i}\) consists of Hermite polynomials. These are optimal for random variables with a Gaussian distribution. Optimal means that, for increasing polynomial order, the expansion will quickly converge in the mean-square sense. The condition for optimality is that the polynomials are orthogonal with respect to a weighting function \({\pmb {\xi }}\mapsto \smash {W_N}({\pmb {\xi }})\) which is exactly the PDF of the set of random variables, i.e.:

$$\begin{aligned} \int _\varGamma \psi _i({\pmb {\xi }})\psi _j({\pmb {\xi }})W_N({\pmb {\xi }}) d{\pmb {\xi }}:=\langle \psi _i,\psi _j\rangle =\gamma _j\delta _{ij}\,, \end{aligned}$$
(3)

where \(\smash {\delta _{ij}}\) is the Kronecker symbol, and \(\smash {\gamma _j}\) is a normalization constant. With a proper scaling though, one can always normalize the polynomial basis such that \(\smash {\gamma _j}:=\smash {\langle \psi _j,\psi _j\rangle }=1\) \(\forall j\). In the case of a multivariate Gaussian distribution, the Hermite polynomials satisfy the condition above with \(\smash {W_N}\) given by:

$$\begin{aligned} W_N({\pmb {\xi }}) \equiv \frac{1}{\sqrt{(2\pi )^N}}\exp (-\frac{1}{2}{\pmb {\xi }}\cdot {\pmb {\xi }})\,, \end{aligned}$$
(4)

where \({\pmb {\xi }}\cdot {\pmb {\xi }}=\sum _{j=1}^N\xi _j^2\) is the standard Euclidean scalar product in \(\smash {{\mathbb R}^N}\). Note that because of the independence of the uncertainties, the joint PDF is the product of the PDFs of the individual uncertainties, i.e., \(\smash {W_N}({\pmb {\xi }})=\smash {\prod _{j=1}^NW_1(\xi _j)}\) as defined above for Gaussian uncertainties.

For uncertainties with other distributions, the orthogonality condition (3) yields adapted polynomials, see e.g. [3], leading to the so-called Askey scheme: as already mentioned, Hermite polynomials for Gaussian distributions, and further Charlier polynomials for Poisson distributions, Laguerre polynomials for Gamma distributions, Jacobi polynomials for Beta distributions, etc. In the case of less common distributions, an optimal PCM can always be found by constructing the polynomials via a Gram-Schmidt procedure so as to satisfy (3); see Witteveen and Bijl [8]. It should be noted that, if the optimal polynomials are not used, the PCM will still converge (with increasing order) in the mean-square sense, but much more slowly than the exponential convergence obtained with optimal polynomials; see [1].
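The orthogonality condition (3) can be verified numerically; the following Python sketch (an illustrative example using NumPy's probabilists' Hermite module, not code from the chapter) checks that the Hermite polynomials are orthogonal under the Gaussian weight (4) in one dimension, with \(\gamma _j=j!\):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Gauss-Hermite(e) nodes/weights for the weight exp(-xi^2/2);
# dividing the weights by sqrt(2*pi) normalizes them to the Gaussian PDF W_1.
x, w = hermegauss(20)
w = w / sqrt(2 * pi)

def He(i, x):
    """Probabilists' Hermite polynomial He_i evaluated at x."""
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c)

# Check the orthogonality condition (3): <He_i, He_j> = gamma_j * delta_ij, gamma_j = j!.
for i in range(5):
    for j in range(5):
        inner = np.sum(w * He(i, x) * He(j, x))
        expected = factorial(j) if i == j else 0.0
        assert abs(inner - expected) < 1e-8
```

The 20-point quadrature integrates polynomials up to order 39 exactly, so all inner products up to \(i=j=4\) are computed without quadrature error.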

In cases where the response of the system shows a localized sharp variation or a discontinuous change, local expansions may be more efficient than expansions with global polynomials, whose convergence will deteriorate due to the Gibbs phenomenon. This has led to developments using wavelet expansions [9] and to multi-element polynomial chaos [10, 11]. In the latter case, the random space is subdivided into smaller elements in which new random variables are defined with associated orthogonal polynomials that are constructed numerically.

As already mentioned, the dimension of the problem \(N\) is determined by the number of independent random input variables. In the case of a random process (as opposed to a random variable), a Karhunen-Loève expansion (also known as Principal Component Analysis or Proper Orthogonal Decomposition) [12, 13] can be applied to the correlation function \(R(\mathbf {x},\mathbf {y})\) of the random process \(u(\mathbf {x})\) indexed by \(\mathbf {x}\in {\mathscr {D}}\), to decompose the random input process into a set of uncorrelated random variables. Assuming \(\smash {\int _{\mathscr {D}}R(\mathbf {x},\mathbf {x})d\mathbf {x}}<+\infty \) (which does not hold for a stationary process with \({\mathscr {D}}\equiv {\mathbb R}^d\)) and solving the eigenvalue problem:

$$\begin{aligned} \int _{\mathscr {D}}R(\mathbf {x},\mathbf {y}) \phi _i(\mathbf {y}) d\mathbf {y}= \lambda _i \phi _i(\mathbf {x}) \end{aligned}$$
(5)

with \(\phi _i(\mathbf {x})\) the eigenfunctions and \(\lambda _i\) the eigenvalues, the Karhunen-Loève expansion of the random field \(u(\mathbf {x})\) becomes:

$$\begin{aligned} u(\mathbf {x}) - \overline{u(\mathbf {x})} = \sum _i \sqrt{\lambda _i}\xi _i\phi _i(\mathbf {x})\,, \end{aligned}$$
(6)

where the \(\smash {\xi _i}\)s are uncorrelated random variables, and \(\smash {\overline{u(\mathbf {x})}}\) is the mean value at the indexation point \(\mathbf {x}\). Note that if the process u is Gaussian, the random variables \(\smash {\xi _i}\) are Gaussian as well, and hence, they are mutually independent.

A geometrical uncertainty is typically a random process where the coordinates of a geometry are uncertain with some specific correlation length. Depending on the correlation length of the process, the eigenvalues \(\smash {\lambda _i}\) may decay quickly, so that only a few terms in the summation above have to be kept. This is not the case, however, for a very short correlation length (e.g., white noise), which results in a high-dimensional chaos expansion for such processes. Non-Gaussian random processes are much more difficult to treat than Gaussian ones [14]: in that case, mean and covariance are far from sufficient to completely specify the process. This remains an active area of research.
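The effect of the correlation length on the eigenvalue decay can be illustrated numerically. The Python sketch below (an assumed example, not from the chapter) discretizes the eigenvalue problem (5) for a hypothetical exponential covariance \(R(x,y)=\exp (-|x-y|/\ell )\) on \([0,1]\) with a simple Nyström (midpoint) rule:

```python
import numpy as np

def kl_eigenvalues(ell: float, n: int = 200):
    """Approximate the KL eigenvalues of Eq. (5) for the exponential covariance
    R(x, y) = exp(-|x - y| / ell) on [0, 1], via a midpoint Nystrom discretization."""
    x = (np.arange(n) + 0.5) / n          # midpoint nodes
    R = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
    lam = np.linalg.eigvalsh(R / n)       # scale by the quadrature weight 1/n
    return np.sort(lam)[::-1]             # descending order

# A long correlation length concentrates the variance in very few modes;
# a short one spreads it over many modes, i.e. a high-dimensional chaos expansion.
lam_long = kl_eigenvalues(ell=2.0)
lam_short = kl_eigenvalues(ell=0.05)
frac = lambda lam, m: lam[:m].sum() / lam.sum()
assert frac(lam_long, 3) > 0.9    # 3 modes already capture most of the variance
assert frac(lam_short, 3) < 0.5   # many more modes are needed
```

The sum of all eigenvalues approximates the total variance \(\int _0^1 R(x,x)dx=1\), so the fractions above measure how much variance a truncated expansion (6) retains.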

The PCM can be implemented either in an intrusive or in a non-intrusive way as follows.

Intrusive Polynomial Chaos

In an intrusive PCM, the polynomial expansion of the unknown variables, Eq. (1), is introduced in the model, e.g., for CFD applications, the Navier–Stokes equations. Each unknown u is therefore replaced with its expansion coefficients \(\smash {u^i}\). The number of unknowns is thus basically multiplied by a factor \(P+1\), which can be quite high for high stochastic dimensions and/or high polynomial orders. In addition, the model, e.g., a CFD code, has to be adapted. The required effort for extending a deterministic CFD code with the intrusive PCM depends on the characteristics of the code: computer language, structured/unstructured, handling of data storage, etc. In the framework of the NODESIM-CFD EU project, an intrusive PCM was implemented in the commercial code Fine/Turbo of NUMECA. This has led to one of the first applications of intrusive PCM to three-dimensional turbulent Navier–Stokes flows [15]. The number of additional lines of code is very limited compared to the length of the original, deterministic code. However, the changes are not restricted to a local part of the code. This increases the risk of introducing bugs and requires someone who is very familiar with all aspects of the code. This is a major disadvantage compared to non-intrusive PCM and the main reason why the application of intrusive PCM in commercial codes is very limited.

Nonetheless, intrusive methods are more flexible and in general more accurate than non-intrusive methods; see Alekseev et al. [16]. This is also confirmed by Xiu [14], who mentions that the intrusive method offers the most accurate solutions involving the least number of equations in multi-dimensional random spaces, even though the resulting equations are coupled.

It is to be noted that the treatment of geometrical uncertainties requires a different approach than that of operational uncertainties. One possibility is to use a transformation such that the deterministic problem in a stochastic domain becomes a stochastic problem in a deterministic domain; see, e.g., Xiu and Tartakovsky [17]. Alternatives are the use of a so-called fictitious domain method [18, 19], or the introduction of the uncertainty directly in the surface normals within a control volume approach [20, 21].

Non-intrusive Polynomial Chaos

In the UMRIDA EU project, all PCM contributions relate to non-intrusive approaches. Basically, two different classes of approaches have been formulated: (i) the so-called projection method, which is based on a numerical evaluation of the Galerkin integrals; see Le Maître et al. [5, 22, 23] and Nobile et al. [24]; (ii) regression methods based on a selected set of sample points; see Berveiller et al. [25], and Hosder et al. [26,27,28].

In the projection methods, starting from Eq. (1), the projection on \(\psi _j\) yields:

$$\begin{aligned} \begin{aligned} \int _\varGamma u(\mathbf {x},t,{\pmb {\xi }})\psi _j({\pmb {\xi }})W_N({\pmb {\xi }})d{\pmb {\xi }}&= \sum _{i=0}^Pu^i(\mathbf {x},t)\int _\varGamma \psi _i({\pmb {\xi }})\psi _j({\pmb {\xi }})W_N({\pmb {\xi }})d{\pmb {\xi }}\\&= \gamma _j u^j(\mathbf {x},t) \end{aligned} \end{aligned}$$
(7)

The last equation results from the orthogonality condition (3) and can be considered as an equation for the unknown expansion coefficient \(\smash {u^j}\). It requires the evaluation of the integral on the left-hand side, for which a numerical quadrature formula is used. For a single variable parameter, it reads:

$$\begin{aligned} \int _{\varGamma _1} u(\mathbf {x},t,\xi )\psi _j(\xi )W_1(\xi )d\xi \simeq \sum _{l=1}^qw^l u(\mathbf {x},t,\xi ^l)\psi _j(\xi ^l)\,. \end{aligned}$$
(8)

The evaluation of the sum in the right-hand side requires an evaluation of the unknown u at \(q\) sample points \(\smash {\{\xi ^l\}_{1\le l\le q}}\) in \(\smash {\varGamma _1}\) associated with \(q\) weights \(\smash {\{w^l\}_{1\le l\le q}}\). Depending on the weighting function (PDF) \(\smash {W_1}\), adapted Gaussian quadrature formulas exist for an accurate evaluation: with q sample points, a polynomial of order \(2q-1\) is integrated exactly in one dimension. Examples are the Gauss-Legendre quadrature (\(\smash {W_1=1/2}\) corresponding to a uniform distribution), the Gauss-Hermite quadrature (\(\smash {W_1}\) given by Eq. (4) in one dimension), etc. For a PCM of order \(p\), one takes \(q=p+1\). This guarantees exact quadrature if \(u(\mathbf {x},t,\xi )\) can be described by a polynomial of maximum order \(p+1\).
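The one-dimensional projection (8) can be sketched concretely. The following Python example (illustrative only; NumPy's probabilists' Hermite module is assumed, and the toy response \(u(\xi )=\xi ^2\) is not from the chapter) recovers the PC modes of a Gaussian input by Gauss-Hermite quadrature with \(q=p+1\) nodes:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pc_coeffs_1d(u, p: int):
    """Project a 1-D response u(xi), xi ~ N(0,1), onto probabilists' Hermite
    polynomials He_0..He_p using the quadrature rule (8) with q = p + 1 nodes."""
    q = p + 1
    x, w = hermegauss(q)
    w = w / sqrt(2 * pi)                  # normalize weights to the Gaussian PDF W_1
    coeffs = []
    for j in range(p + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        gamma_j = factorial(j)            # <He_j, He_j> under the Gaussian weight
        coeffs.append(np.sum(w * u(x) * hermeval(x, c)) / gamma_j)
    return np.array(coeffs)

# u(xi) = xi^2 = He_0(xi) + He_2(xi), so the exact modes are (1, 0, 1):
modes = pc_coeffs_1d(lambda xi: xi**2, p=2)
assert np.allclose(modes, [1.0, 0.0, 1.0])
```

Each call to `u` plays the role of one deterministic solver run at a sample point \(\xi ^l\).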

This extends to multiple stochastic dimensions by using a full-tensor product quadrature with \(Q=q^N\) sample points. This approach quickly becomes very expensive for high orders and high stochastic dimensions, which has led to the use of sparse grid sampling techniques avoiding the full-tensorial sampling, e.g., the Smolyak scheme [29]. Sparse grid schemes can be combined with the non-nested Gaussian quadratures invoked above, as well as with nested quadratures, e.g., Clenshaw-Curtis, Gauss-Patterson [30,31,32,33]. More recently, adaptive algorithms have been developed that further reduce the cost [34,35,36]. The choice of quadrature sets is discussed further in section “Choices of Interpolation Set” in relation to the stochastic collocation method. Alternatively, the numerical quadrature can also be carried out using Monte Carlo simulation [37, 38] or Latin Hypercube sampling [39]. All in all, the evaluation of the left-hand side of Eq. (7) using \(Q\) sampling points \(\smash {\{{\pmb {\xi }}^l\}_{1\le l\le Q}}\) in \(\varGamma \) associated with \(Q\) weights \(\smash {\{w^l\}_{1\le l\le Q}}\) yields:

$$\begin{aligned} u(\mathbf {x},t,{\pmb {\xi }})\simeq {\mathbb P}^P_Q[u](\mathbf {x},t,{\pmb {\xi }})=\sum _{i=0}^P\left( \frac{1}{\gamma ^i}\sum _{l=1}^Qw^lu(\mathbf {x},t,{\pmb {\xi }}^l)\psi _i({\pmb {\xi }}^l)\right) \psi _i({\pmb {\xi }})\,. \end{aligned}$$
(9)

In linear regression methods, the stochastic problem is solved at S sample points in stochastic space. For each sample s, Eq. (1) can be written as:

$$\begin{aligned} u(\mathbf {x},t,{\pmb {\xi }}^s)=\sum _{i=0}^Pu^i(\mathbf {x},t) \psi _i({\pmb {\xi }}^s)\,. \end{aligned}$$
(10)

This leads to S equations for the \(P+1\) unknowns \(\smash {u^i}\). Note that this forms a linear system. In order to make the solution less dependent on the choice of the samples, oversampling is used and the system is solved with regression (i.e., the least squares method); see Berveiller et al. [25] and Hosder et al. [26,27,28]. As a rule of thumb, \(S=2(P+1)\) is a good choice; see [27]. Different sampling techniques can be used such as Random, Latin Hypercube, Hammersley [27], roots of Hermite polynomials of order \(p+1\) (for PCM of order \(p\) with Gaussian uncertainties) [25], Sobol’ quasi-random sampling [40], etc.
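The regression approach can be sketched in a few lines. The Python example below (an assumed illustration, not from the chapter) fits a one-dimensional order-3 Legendre PCE of a toy response \(u(\xi )=\xi ^3\) for a uniform input on \([-1,1]\), with the rule-of-thumb oversampling \(S=2(P+1)\):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
p = 3                       # polynomial order
P1 = p + 1                  # P + 1 unknown modes (one random dimension)
S = 2 * P1                  # rule-of-thumb oversampling S = 2(P + 1), see [27]

# Random samples of a uniform input on [-1, 1] and the corresponding model runs:
xi = rng.uniform(-1.0, 1.0, S)
u = xi**3                   # stand-in for an expensive CFD evaluation

# Solve the overdetermined system (10) in the least-squares sense:
A = legvander(xi, p)        # S x (P+1) matrix of Legendre polynomials psi_i(xi^s)
modes, *_ = np.linalg.lstsq(A, u, rcond=None)

# xi^3 = 0.6 P_1(xi) + 0.4 P_3(xi) exactly, so the regression recovers it:
assert np.allclose(modes, [0.0, 0.6, 0.0, 0.4])
```

Other sampling strategies (Latin Hypercube, Hammersley, Sobol', etc.) would only change how `xi` is generated; the least-squares step is identical.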

In case of geometrical uncertainties, each of the different samples, both in the projection and in the regression method, will correspond to a different geometry. Geometrical uncertainties therefore require no special treatment, in contrast with the intrusive method.

The Collocation Method

The stochastic collocation (SC) method based on Lagrange interpolation has been introduced in [41] and further developed in, e.g., [24, 42,43,44,45]. Examples of applications can be found in [46,47,48,49,50,51] among others. Along the same lines as Eq. (1), the SC expansion is formed as a sum of multi-dimensional Lagrange interpolation polynomials with respect to the \(N\)–dimensional random input variable \({\pmb {\xi }}\). Lagrange polynomials interpolate a set of points in one dimension \(\{\xi ^l_1\}_{1\le l\le {q_1}}\) in a bounded interval \(\varGamma _1\) by the following functional form:

$$\begin{aligned} L_l(\xi )=\prod _{{\begin{array}{c}k=1\\ k\ne l\end{array}}}^{q_1}\frac{\xi -\xi ^k_1}{\xi ^l_1-\xi ^k_1}\,, \end{aligned}$$
(11)

such that \(L_l(\xi ^k_1)=\delta _{kl}\), \(1\le k,l\le \smash {q_1}\); in addition, all \(L_l\)’s have order \(\smash {q_1-1}\). For interpolation in multiple dimensions, the tensor product of one-dimensional Lagrange polynomials can be formed. At this stage, it is assumed that the interpolation set is formed by tensorization of one-dimensional sets. In other words, structured interpolation sets are considered, since multivariate Lagrange interpolation on unstructured, arbitrary sets of nodes still raises numerous theoretical and practical difficulties. Letting \({\mathbf l}=(l_1,l_2\dots l_N)\) be a multi-index in \({\mathbb N}^N\setminus \{{\mathbf 0}\}\), the multi-dimensional Lagrange polynomial \(\smash {L_{\mathbf l}}\) reads:

$$\begin{aligned} L_{\mathbf l}({\pmb {\xi }})=L_{l_1}(\xi _1)\otimes L_{l_2}(\xi _2)\otimes \cdots \otimes L_{l_N}(\xi _N)\,, \end{aligned}$$
(12)

where different interpolation sets \(\smash {\{\xi ^l_j\}_{1\le l\le q_j}}\) in different intervals \(\varGamma _j\) may possibly be used for each different dimension j. If \(Q\) is now the total number of such multi-dimensional interpolation points counted by a single index l, \(\smash {\{{\pmb {\xi }}^l\}_{1\le l\le Q}}\), the SC expansion of the random field u reads:

$$\begin{aligned} u(\mathbf {x},t,{\pmb {\xi }})\simeq {\mathbb I}^Q[u](\mathbf {x},t,{\pmb {\xi }})=\sum _{l=1}^{Q}u(\mathbf {x},t,{\pmb {\xi }}^l)L_l({\pmb {\xi }})\,, \end{aligned}$$
(13)

where the expansion coefficients are the random field evaluated at \({\pmb {\xi }}^l\).
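A one-dimensional SC expansion is straightforward to sketch. The Python example below (illustrative, with Gauss-Legendre collocation points as one possible choice) implements the Lagrange basis (11) and the expansion (13), and checks the defining property \(L_l(\xi ^k)=\delta _{kl}\):

```python
import numpy as np

def lagrange_basis(nodes, l, xi):
    """One-dimensional Lagrange polynomial L_l of Eq. (11) for the given nodes."""
    L = np.ones_like(xi, dtype=float)
    for k, xk in enumerate(nodes):
        if k != l:
            L *= (xi - xk) / (nodes[l] - xk)
    return L

def sc_interpolate(nodes, values, xi):
    """Stochastic collocation expansion (13) in one dimension:
    the expansion coefficients are just the model evaluations at the nodes."""
    return sum(v * lagrange_basis(nodes, l, xi) for l, v in enumerate(values))

nodes, _ = np.polynomial.legendre.leggauss(5)   # 5 Gauss-Legendre collocation points

# L_l(xi^k) = delta_kl by construction:
for l in range(5):
    assert np.allclose(lagrange_basis(nodes, l, nodes), np.eye(5)[l])

# A degree-4 polynomial is reproduced exactly by 5-point interpolation:
f = lambda x: 1 + x + x**4
xi = np.linspace(-1, 1, 11)
assert np.allclose(sc_interpolate(nodes, f(nodes), xi), f(xi))
```

In multiple dimensions, the basis (12) is simply the product of such one-dimensional factors, one per random variable.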

Choices of Interpolation Set

The key issue of the SC method is the choice of appropriate interpolation sets. A natural, straightforward choice is quadrature nodes and weights as in Eq. (8). Multi-dimensional quadrature sets \({\pmb \varTheta }(N,Q)=\smash {\{{\pmb {\xi }}^l,w^l\}_{1\le l\le Q}}\), where \(\smash {{\pmb {\xi }}^l}\) is the l-th node in \(\varGamma =\smash {\prod _{j=1}^N\varGamma _j}\) and \(\smash {w^l}\) is the corresponding weight, may be constructed from one-dimensional (univariate) quadrature sets by full tensorization or sparse tensorization, using Smolyak’s algorithm [29] as already invoked above.

Univariate Gauss quadratures \(\varTheta (1,\smash {q_1})\) based on \(\smash {q_1}\) integration points are tailored to integrate a smooth function \(\xi \mapsto f(\xi )\) on \(\smash {\varGamma _1}\equiv [a,b]\):

$$\begin{aligned} \int _{\varGamma _1} f(\xi )W_1(\xi )d\xi \simeq \sum _{l=1}^{q_1-r}w^l f(\xi ^l)+\sum _{m=1}^{r}w^{q_1-r+m} f(\xi ^{q_1-r+m})\,, \end{aligned}$$
(14)

such that this rule turns out to be exact for univariate polynomials up to the order \(\smash {2q_1-1-r}\). Here, r is the number of fixed nodes of the rule, typically the bounds a, b. Depending on the choice of r, different terminologies are used:

  • \(r=0\) is the classical Gauss rule;

  • \(r=1\) is the Gauss-Radau (GR) rule, choosing \(\smash {\xi ^{q_1}}=a\) or \(\smash {\xi ^{q_1}}=b\) for instance;

  • \(r=2\) is the Gauss-Lobatto (GL) rule, choosing \(\smash {\xi ^{q_1-1}}=a\) and \(\smash {\xi ^{q_1}}=b\) for instance.

Multivariate quadratures may subsequently be obtained by full or sparse tensorization of these one-dimensional rules. Firstly, a fully tensorized grid is obtained by the straightforward product rule:

$$\begin{aligned} {\pmb \varTheta }(N,Q)=\bigotimes _{j=1}^N\varTheta (1,q_j)\,, \end{aligned}$$
(15)

which contains \(Q=\prod _{j=1}^Nq_j\) grid points in \(\varGamma \). Secondly, a sparse quadrature rule can be derived thanks to the Smolyak algorithm [29]. The so-called \(k\)–th level, \(N\)-dimensional Smolyak sparse grid \(\smash {\widehat{{\pmb \varTheta }}(N,k)}\) is obtained by the following linear combination of product formulas [52]:

$$\begin{aligned} \widehat{{\pmb \varTheta }}(N,k)=\sum _{l=k-N}^{k-1}\sum _{q_1+\cdots +q_N=N+l}\varTheta (1,q_1)\otimes \cdots \otimes \varTheta (1,q_N)\,. \end{aligned}$$
(16)

Clearly, the above sparse grid is a subset of the full-tensor product grids. It typically contains \(Q\sim \smash {(2N)^{k-1}/(k-1)!}\) nodes in \(\varGamma \) whenever \(N\gg 1\) and \(k\) is fixed. By a direct extension of the arguments devised in [31, 33], it can be shown that, provided the univariate quadrature rules \(\varTheta (1,q)\) are exact for all univariate polynomials of order up to \(2q-1\) (Gauss rules) or \(2q-3\) (GL rules), the foregoing rule is exact for all \(N\)–variate polynomials of total order up to \(2k-1\) or \(2k-3\), respectively. Figure 1 displays, for example, the two-dimensional full and sparse rules for an underlying univariate GL quadrature (14) with \(q=9\) nodes and \(W_1(\xi )=(1-\xi ^2)^3\), \(\smash {\varGamma _1}=[-1,1]\). For this example:

Fig. 1

Two-dimensional (\(N=2\)) nodes based on a non-nested, one-dimensional Gauss-Lobatto quadrature rule with \(q=9\) nodes. Left: fully tensorized grid (\(Q=81\)). Right: sparse tensorized grid from Smolyak’s algorithm with \(k=9\) (\(Q=193\))

$$\begin{aligned} \begin{aligned} \widehat{{\pmb \varTheta }}(2,9)&=\varTheta (1,2)\otimes \varTheta (1,7)+\varTheta (1,3)\otimes \varTheta (1,6)+\varTheta (1,4)\otimes \varTheta (1,5) \\&\quad +\varTheta (1,2)\otimes \varTheta (1,8)+\varTheta (1,3)\otimes \varTheta (1,7)+\varTheta (1,4)\otimes \varTheta (1,6) \\&\quad +\varTheta (1,5)\otimes \varTheta (1,5) +\text {perm.} \end{aligned} \end{aligned}$$

Here, \(Q=193\), compared to \(Q=81\) with the fully tensorized rule (15). It has been observed in [53], though, that sparse quadratures outperform fully tensorized quadratures with non-nested underlying one-dimensional rules whenever \(N\ge 4\). If \(\varTheta (1,q_i)\) is now the Clenshaw-Curtis (CC) univariate quadrature of i-th level for \(i>1\), such that:

$$\begin{aligned} \xi ^l=-\cos \frac{(l-1)\pi }{q_i-1}\,,\quad 1\le l\le q_i=2^{i-1}+1\,, \end{aligned}$$

then the associated third-level bivariate sparse rule as constructed in, e.g., [32] for, say, \(q=9\) is:

$$\begin{aligned} \begin{aligned} \widehat{{\pmb \varTheta }}(2,3)&=\varTheta (1,1)\otimes \varTheta (1,5)+\varTheta (1,3)\otimes \varTheta (1,3) \\&\quad +\varTheta (1,1)\otimes \varTheta (1,9)+\varTheta (1,3)\otimes \varTheta (1,5) +\text {perm.} \end{aligned} \end{aligned}$$
(17)

The underlying univariate CC rules \(\varTheta (1,q_i)\) are nested, that is, \(\varTheta (1,q_i)\subset \varTheta (1,q_{i+1})\), and consequently, the multivariate rules are nested as well, \(\smash {\widehat{{\pmb \varTheta }}(N,k)}\subset \smash {\widehat{{\pmb \varTheta }}(N,k+1)}\). They are in addition exact at least for all multivariate polynomials of total order k [32]. Figure 2 displays the two-dimensional full rule (15) and third-level sparse rule (17) corresponding to the univariate CC quadrature with \(q=9\) nodes. The total number of nodes is significantly reduced with such a nested rule.

Fig. 2

Two-dimensional (\(N=2\)) nodes based on a nested, one-dimensional Clenshaw-Curtis quadrature rule with \(q=9\) nodes. Left: fully tensorized grid (\(Q=81\)). Right: sparse tensorized grid from Smolyak’s algorithm with \(k=3\) (\(Q=29\))
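The nestedness of the CC rules is easy to verify numerically. The Python sketch below (illustrative only) generates the nodes \(\xi ^l=-\cos ((l-1)\pi /(q_i-1))\) and checks that each level's node set is contained in the next:

```python
import numpy as np

def cc_nodes(i: int):
    """Clenshaw-Curtis nodes of level i > 1 on [-1, 1]:
    xi^l = -cos((l-1) pi / (q_i - 1)), with q_i = 2^(i-1) + 1 nodes."""
    q = 2 ** (i - 1) + 1
    return -np.cos(np.arange(q) * np.pi / (q - 1))

# Nestedness: every level-i node reappears at level i+1,
# i.e. Theta(1, q_i) is a subset of Theta(1, q_{i+1}):
for i in range(2, 6):
    coarse, fine = cc_nodes(i), cc_nodes(i + 1)
    assert all(np.isclose(fine, x).any() for x in coarse)

# Level 4 has q = 9 nodes, as in the univariate rule underlying Fig. 2:
assert len(cc_nodes(4)) == 9
```

This nesting is what allows the sparse grids built from CC rules to reuse lower-level model evaluations, reducing the total number of nodes.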

Link with Polynomial Chaos

The multi-dimensional Lagrange polynomials may be expanded on the multi-dimensional polynomial chaos basis \(\{\psi _i\}_{0\le i\le P}\) as in Eq. (1):

$$\begin{aligned} L_l({\pmb {\xi }})=\sum _{i=0}^P\langle L_l,\psi _i\rangle \psi _i({\pmb {\xi }})\,,\quad 1\le l\le Q\,, \end{aligned}$$

where \(P\) is given by Eq. (2) with polynomial total order \(p=\smash {\sum _{j=1}^Nq_j-N}\). The expansion coefficients \(\ell _{li}:=\langle L_l,\psi _i\rangle \) can be evaluated with the quadrature rule \(\smash {\{{\pmb {\xi }}^l,w^l\}_{1\le l\le Q}}\) also used as the interpolation set:

$$\begin{aligned} \ell _{li}\simeq \frac{1}{\gamma _i^Q}\sum _{m=1}^Qw^mL_l({\pmb {\xi }}^m)\psi _i({\pmb {\xi }}^m)=\frac{1}{\gamma _i^Q}w^l\psi _i({\pmb {\xi }}^l)\,, \end{aligned}$$

where the second equality stems from the very definition of Lagrange polynomials. Here, \(\smash {\gamma _i^Q}=\smash {\sum _{l=1}^Qw^l(\psi _i({\pmb {\xi }}^l))^2}\) is the normalization constant for the polynomial chaos, which is simply \(\smash {\gamma _i^Q}=\smash {\gamma _i}\) if the quadrature rule integrates exactly polynomials of total order \(2p\). Consequently, the SC expansion (13) of the random field u reads:

$$\begin{aligned} \begin{aligned} {\mathbb I}^Q[u](\mathbf {x},t,{\pmb {\xi }})\simeq {\mathbb I}^Q_P[u](\mathbf {x},t,{\pmb {\xi }})&=\sum _{l=1}^Qu(\mathbf {x},t,{\pmb {\xi }}^l)\sum _{i=0}^P\frac{1}{\gamma _i^Q}w^l\psi _i({\pmb {\xi }}^l)\psi _i({\pmb {\xi }}) \\&=\sum _{i=0}^P\left( \frac{1}{\gamma _i^Q}\sum _{l=1}^Qw^l u(\mathbf {x},t,{\pmb {\xi }}^l)\psi _i({\pmb {\xi }}^l)\right) \psi _i({\pmb {\xi }})\,. \end{aligned} \end{aligned}$$
(18)

The bracketed sum above is the evaluation of the PC expansion coefficients \(u^i\) by the quadrature rule at hand. Hence, both PC and SC expansions are mathematically equivalent, \({\mathbb I}^Q_P\equiv {\mathbb P}^P_Q\), though they are numerically slightly different [54].
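The equivalence (18) can be checked numerically in one dimension. The Python sketch below (an illustrative example with a hypothetical smooth response \(u(\xi )=\tanh \xi \); NumPy's probabilists' Hermite module is assumed) compares the pseudo-spectral PC expansion (9) and the Lagrange interpolant (13) built on the same Gauss-Hermite nodes, with \(p=q-1\):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

q = 5
nodes, w = hermegauss(q)
w = w / sqrt(2 * pi)               # weights normalized to the Gaussian PDF
u_l = np.tanh(nodes)               # arbitrary smooth response at the nodes

def pc_pseudospectral(xi):
    """Eq. (9): discrete projection with p = q - 1, coefficients from the quadrature."""
    out = np.zeros_like(xi, dtype=float)
    for i in range(q):
        c = np.zeros(i + 1); c[i] = 1.0
        u_i = np.sum(w * u_l * hermeval(nodes, c)) / factorial(i)
        out += u_i * hermeval(xi, c)
    return out

def sc_lagrange(xi):
    """Eq. (13): Lagrange interpolation through the same nodes."""
    out = np.zeros_like(xi, dtype=float)
    for l in range(q):
        L = np.ones_like(xi, dtype=float)
        for k in range(q):
            if k != l:
                L *= (xi - nodes[k]) / (nodes[l] - nodes[k])
        out += u_l[l] * L
    return out

# The two expansions coincide, as stated by Eq. (18):
xi = np.linspace(-2, 2, 9)
assert np.allclose(pc_pseudospectral(xi), sc_lagrange(xi))
```

Both expansions are degree-\(q-1\) polynomials built from the same \(q\) nodal values, which is exactly the identity \({\mathbb I}^Q_P\equiv {\mathbb P}^P_Q\).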

Application to Uncertainty Quantification (UQ)

Once the polynomial expansion (1) or (13) has been derived, the first moments and/or cumulants of the random field u can be computed using a quadrature rule \(\smash {{\pmb \varTheta }(N,Q)}\) and associated evaluations \(\smash {u(\mathbf {x},t,{\pmb {\xi }}^l)}\), \(1\le l\le Q\). Indeed, for a regular function \(u\mapsto f(u)\), one can estimate a mean output functional by:

$$\begin{aligned} {\mathbb {E}}\{f(u)\}(\mathbf {x},t)=\int _\varGamma f(u(\mathbf {x},t,{\pmb {\xi }}))W_N({\pmb {\xi }})d{\pmb {\xi }}\simeq \sum _{l=1}^Qw^l f(u(\mathbf {x},t,{\pmb {\xi }}^l))\,. \end{aligned}$$

The mean \(\mu \) is obtained for \(f(u)=u\), the variance \(\sigma ^2\) is obtained for \(f(u)=\smash {(u-\mu )^2}\), the skewness \(\smash {\beta _1}\) for \(f(u)=\smash {(\frac{u-\mu }{\sigma })^3}\), the kurtosis \(\beta _2\) for \(f(u)=\smash {(\frac{u-\mu }{\sigma })^4}\), etc. More generally, the j-th moment \(\smash {m_j}\) is obtained for \(f(u)=\smash {u^j}\) and may be used to compute the characteristic function \(\smash {\varPhi _U}\):

$$\begin{aligned} \varPhi _U(V)=\int {\mathrm e}^{\mathrm{i}U\cdot V}W_U(dU)=\sum _{j=0}^{+\infty }\frac{m_j}{j!}(\mathrm{i}V)^j\,, \end{aligned}$$

where by the causality principle (or transport of PDFs) for the random variable \(U\sim u(\cdot ,{\pmb {\xi }})\) one has:

$$\begin{aligned} W_U(dU)=\left| \frac{du^{-1}}{dU}\right| W_N(u^{-1}(dU))\,. \end{aligned}$$
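These moment estimates amount to weighted sums over the quadrature samples. The Python sketch below (illustrative; the toy response \(u(\xi )=\xi ^2\) with a standard Gaussian input, so that u follows a chi-square law with one degree of freedom, is an assumption) computes the mean, variance, and skewness this way:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from math import sqrt, pi

x, w = hermegauss(8)
w = w / sqrt(2 * pi)               # quadrature rule for the standard Gaussian PDF

u = x**2                           # model evaluations u(xi^l) at the quadrature nodes

mu = np.sum(w * u)                          # f(u) = u
var = np.sum(w * (u - mu) ** 2)             # f(u) = (u - mu)^2
skew = np.sum(w * ((u - mu) / sqrt(var)) ** 3)

# For u = xi^2 with xi ~ N(0,1): mean 1, variance 2, skewness sqrt(8):
assert np.isclose(mu, 1.0) and np.isclose(var, 2.0)
assert np.isclose(skew, sqrt(8.0))
```

With 8 nodes the rule is exact for polynomials up to order 15, so all moments used here are computed without quadrature error.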

Sobol’ sensitivity indices or global sensitivity indices may be computed alike; see [14, 53, 55,56,57] and references therein. Denoting by \({\mathscr {I}}_j\) the set of indices corresponding to the polynomials \(\psi _k\) depending only on the j-th variable parameter \(\xi _j\), the main-effect PCE-based Sobol’ indices are given by (see e.g. Sudret [57]):

$$\begin{aligned} S_j(\mathbf {x},t)=\frac{1}{\sigma ^2}\sum _{k\in {\mathscr {I}}_j}\gamma _k(u^k(\mathbf {x},t))^2\,, \end{aligned}$$

owing to the normalization condition (3). More generally, if \(\smash {{\mathscr {I}}_{j_1j_2\dots j_s}}\) is the set of indices corresponding to the polynomials \(\psi _k\) depending only on the parameters \(\smash {\xi _{j_1}},\smash {\xi _{j_2}},\dots \smash {\xi _{j_s}}\), the s-fold joint PCE-based Sobol’ indices are:

$$\begin{aligned} S_{j_1j_2\dots j_s}(\mathbf {x},t)=\frac{1}{\sigma ^2}\sum _{k\in {\mathscr {I}}_{j_1j_2\dots j_s}}\gamma _k(u^k(\mathbf {x},t))^2\,. \end{aligned}$$
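These index formulas reduce to sums of squared PC modes. The Python sketch below (a toy example on an assumed orthonormal basis, \(\gamma _k=1\), with invented mode values) computes the main-effect and joint Sobol' indices of a two-dimensional PCE:

```python
import numpy as np

# Modes of a toy 2-D PCE on an orthonormal basis (gamma_k = 1), keyed by the
# per-variable polynomial orders: u = 1*psi_(0,0) + 2*psi_(1,0) + 1*psi_(0,1) + 0.5*psi_(1,1)
modes = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5}

# Variance = sum of squared modes over all non-constant polynomials:
var = sum(c**2 for k, c in modes.items() if k != (0, 0))

# Main-effect index of xi_j: sum over polynomials depending ONLY on xi_j;
# joint index S_12: sum over polynomials depending on both variables.
S1 = sum(c**2 for (i, j), c in modes.items() if i > 0 and j == 0) / var
S2 = sum(c**2 for (i, j), c in modes.items() if i == 0 and j > 0) / var
S12 = sum(c**2 for (i, j), c in modes.items() if i > 0 and j > 0) / var

assert np.isclose(S1 + S2 + S12, 1.0)          # indices sum to one
assert np.isclose(S1, 4.0 / 5.25) and np.isclose(S12, 0.25 / 5.25)
```

For a non-normalized basis, each squared mode would simply be weighted by its \(\gamma _k\), as in the formulas above.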

Conclusions

In this chapter, we have outlined the main ingredients of polynomial expansion methods for the pseudo-spectral analysis of random variables and fields, using either projections on orthonormal polynomials (the generalized polynomial chaos method) or interpolations on Lagrange polynomials (the stochastic collocation method). We have also shown how both approaches are actually intimately connected by a proper choice of the integration/interpolation nodal sets used to compute the polynomial expansion coefficients. However, alternative strategies have recently been considered in order to evaluate them, which are detailed in the following chapters “Generalized Polynomial Chaos for Non-intrusive Uncertainty Quantification in Computational Fluid Dynamics” through “Screening Analysis and Adaptive Sparse Collocation Methods”. Applications to uncertainty quantification and robust design optimization for industrial challenges are given in Parts III and IV.