Introduction

Uncertainty in the design and operation of engineering systems may arise from various sources. Uncertainties in the physical properties of materials, inevitable randomness in boundary conditions and geometries, and physical model uncertainties are a few examples that can significantly restrict the reliability of deterministic designs. Gas, steam, wind, and hydraulic turbines are examples of engineering devices whose operating conditions and geometry may be uncertain. The design of these turbomachines using deterministic computations may fail in the presence of uncertainties. For a reliable design based on computational fluid dynamics (CFD) predictions, it is necessary to include all sources of uncertainty in the analysis and design process. However, CFD simulation of flows in real-world engineering problems requires a fine 3D computational mesh, a small time step, and, in the case of a large number of random variables, a high-dimensional stochastic space. These requirements dramatically increase the computational cost, which is not desirable for design purposes, highlighting the need for robust numerical schemes for the stochastic analysis of complex industrial flows. While efficient numerical methods for the spatial and temporal discretization of the Navier–Stokes equations are well developed, effective numerical schemes for stochastic discretization are still rare (see, e.g., [1, 2]).

In the literature, various techniques have been proposed for uncertainty quantification (UQ). The Monte Carlo (MC) approach [3] is widely used for UQ because of its simplicity and its superior property that the convergence rate does not depend on the number of stochastic dimensions. Unfortunately, conventional MC methods converge slowly and often require a large number of realizations to achieve reasonable accuracy, and thus they are impractical for problems with a large number of uncertainties. In recent years, a number of other UQ approaches have been developed to represent and propagate uncertainties in engineering problems with many uncertain parameters. Commonly used UQ methods include the multi-level Monte Carlo method [4], the method of moments or perturbation method [5], and polynomial chaos expansion (PCE) [6, 7]. All these techniques have strengths and weaknesses, and no single technique is optimal for all applications. Following our previous work on UQ [8, 9], we focus on the PCE approach to model uncertainty propagation. PCE methods have been successfully applied to various structural and solid mechanics problems by several researchers [6, 10]. Polynomial chaos (PC) schemes have also been applied to fluid flow and heat transfer problems [7, 8, 11,12,13]. The polynomial chaos representation can be implemented through either intrusive or non-intrusive methods. The intrusive approach involves the substitution of all uncertain variables in the governing equations with polynomial expansions consisting of \(P+1={{(p+n_s)!}/{p!n_s!}}\) unknown coefficients, where \(n_s\) is the number of stochastic dimensions and p is the polynomial order. Taking the inner product of the equations with each basis polynomial yields \(P+1\) times the number of original equations, which can be solved by the same numerical schemes applied to the original deterministic system.
This requires the modification of CFD codes, which may be difficult, expensive, and time-consuming for many CFD problems. Moreover, the sources of most commercial codes are not accessible, and thus it is not feasible to implement the intrusive PC approach in such deterministic codes. For these reasons, we focus here on the non-intrusive polynomial chaos (NIPC) methodology for UQ. The NIPC method performs repeated simulations with the deterministic solver on a limited number of properly chosen samples. The polynomial chaos expansion of the output is then constructed from these deterministic solver evaluations. The two main NIPC approaches used for UQ in CFD are spectral projection (sampling-based and quadrature-based) and regression-based methods. Applications of these NIPC schemes to stochastic problems can be found in [14, 15]. In the present study, the regression-based NIPC scheme, introduced in section “Regression-Based Polynomial Chaos Expansion,” is used for the evaluation of the PCE coefficients. The main weakness of all NIPC methods is the curse of dimensionality. In recent years, alternative methodologies such as sparse polynomial chaos [16], sparse grid techniques [17], compressive sampling [18], and reduced models [1, 2] have been developed to break the curse of dimensionality. In the framework of the EU FP7 project UMRIDA, this study focuses on the development of an efficient reduced basis model for UQ. Several model reduction techniques have been proposed for uncertainty quantification; two examples of such works are [1, 2]. In [2], a generalized spectral decomposition (GSD) was proposed that yields a reduced basis independent of the stochastic discretization scheme. In this method, the solution of the stochastic problem is first approximated as a sum of products of deterministic functions and random variables.
The reduced basis functions then appear as the solutions of a pseudo-eigenvalue problem whose dominant eigenspace is associated with the desired optimal basis. In the final form of GSD, only a few uncoupled deterministic problems and a few stochastic algebraic equations must be solved to compute the deterministic functions and random variables. As shown in [2], applying GSD to a class of stochastic partial differential equations (SPDEs) leads to drastic computational savings, although it does not circumvent the curse of dimensionality. In [1], an intrusive model reduction technique was proposed for the chaos representation of an SPDE to tackle the curse of dimensionality. The authors applied it successfully to a 2D solid mechanics problem with randomness in the elastic modulus, where for a third-order PC (\(p=3\)) they could reduce the number of basis functions to 5, as compared to \(P=165\) in the “standard PCE” using a basis of the classical polynomials of the Askey scheme.

In this study, a regression-based non-intrusive reduced basis technique is developed. The model can be interpreted as a multi-level/multi-fidelity approach, where many low-fidelity model evaluations are combined with few high-fidelity evaluations to ensure accurate results at a lower CPU cost. In the framework of polynomial chaos, such ideas are also explored by Palar et al. [19] and Ng and Eldred [20].

The remaining part of this paper is organized as follows: In section “Regression-Based Polynomial Chaos Expansion,” the regression-based polynomial chaos expansion is described. The model reduction methodology is presented in section “Reduced Basis Methodology.” In section “Results and Discussion,” the numerical results are presented and discussed. Finally, the main findings of the present paper are summarized in section “Conclusions.”

Regression-Based Polynomial Chaos Expansion

Let us assume that \(u(\pmb {x};\pmb {\xi })\) is the response of a stochastic system with \(n_s\) random variables \(\pmb {\xi }=\{\xi _i\}_{i=1}^{n_s}\). In PCE, the uncertain output \(u(\pmb {x};\pmb {\xi })\) is decomposed into separable deterministic and stochastic components as:

$$\begin{aligned} u(\pmb {x};\pmb {\xi })=\sum _{i=0}^{P} u_{i}({\pmb {x}})\psi _{i}(\pmb {\xi }), \end{aligned}$$
(1)

where the total number of output modes, \(P+1={{(p+n_s)!}/{p!n_s!}}\), is a function of the order of polynomial chaos (p) and the number of random dimensions (\(n_s\)).
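The growth of the number of expansion terms with p and \(n_s\) can be checked with a few lines; the helper names below are illustrative only, and the tensor-quadrature count \((p+1)^{n_s}\) is the one discussed later in this section:

```python
from math import comb

def pce_terms(p, ns):
    """Number of terms P+1 = (p+ns)!/(p! ns!) in a total-order-p PCE with ns dimensions."""
    return comb(p + ns, ns)

def tensor_quadrature_points(p, ns):
    """Model evaluations for a full tensor-product quadrature with p+1 nodes per dimension."""
    return (p + 1) ** ns

# Third-order PCE in ten dimensions (the RAE2822 case later in the paper):
print(pce_terms(3, 10))                 # 286 basis functions
print(2 * pce_terms(3, 10))             # 572 regression samples with twofold oversampling
print(tensor_quadrature_points(3, 10))  # 1048576 tensor-quadrature points
```

The 286 polynomials and 572 regression samples match the numbers quoted for the RAE2822 test case below, while a full tensor-product quadrature would already require over a million evaluations.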

The \(\psi _i(\pmb {\xi })\)’s are orthogonal polynomials with respect to the probability density function (PDF) of input random variables \(\pmb {\xi }\):

$$\begin{aligned} \langle \psi _i\psi _j\rangle = \langle \psi _i^2\rangle \delta _{ij}. \end{aligned}$$
(2)

The quadrature-based NIPC scheme may be used for the evaluation of the polynomial chaos expansion. However, the tensor-product quadrature approach for multi-dimensional problems suffers from the curse of dimensionality, since the required number of model evaluations, \((p+1)^{n_s}\), grows exponentially with the number of random dimensions \(n_s\). Although sparse quadrature rules are more efficient, they are still impractical for stochastic problems with high dimensions. A more affordable NIPC scheme for finding the response surface of the output is the regression method. The regression-based NIPC method starts with Eq. (1). To establish a closed system, \(P + 1\) sample points (\(\pmb {\xi }^s, s = 1, 2,\ldots ,P+1\)) are generated in the stochastic space for a given PCE with \(P +1\) unknown coefficients, and the stochastic function, \(u(\pmb {x};{\pmb \xi })\), is evaluated at these sampling points. This yields the following linear system of equations:

$$\begin{aligned} \underbrace{\begin{pmatrix} \psi _{0}({\pmb \xi }^1) &{} \cdots &{} \psi _{i}({\pmb \xi }^1)&{} \cdots &{} \psi _{P}({\pmb \xi }^1) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ \psi _{0}({\pmb \xi }^s) &{} \cdots &{} \psi _{i}({\pmb \xi }^s) &{} \cdots &{}\psi _{P}({\pmb \xi }^s) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ \psi _{0}({\pmb \xi }^{P+1}) &{} \cdots &{} \psi _{i}({\pmb \xi }^{P+1}) &{} \cdots &{}\psi _{P}({\pmb \xi }^{P+1})\\ \end{pmatrix}}_{\varPsi ({\pmb \xi }^s)} \underbrace{ \begin{pmatrix} u_0(\pmb {x}) \\ \vdots \\ u_i(\pmb {x}) \\ \vdots \\ u_P(\pmb {x})\\ \end{pmatrix}}_{U} = \underbrace{ \begin{pmatrix} u(\pmb {x};{\pmb \xi }^1) \\ \vdots \\ u(\pmb {x};{\pmb \xi }^s) \\ \vdots \\ u(\pmb {x};{\pmb \xi }^{P+1}) \\ \end{pmatrix}}_{b}, \end{aligned}$$
(3)

or

$$\begin{aligned} \varPsi U = b. \end{aligned}$$
(4)

The least squares solution of the linear system (3) is \(U = (\varPsi ^T\varPsi )^{-1} \varPsi ^T b\).

Consistent with the literature (e.g., Hosder et al. [21]), we found that oversampling with \(2(P+1)\) model evaluations is necessary to obtain satisfactory results for the PCE. In principle, the sample points can be chosen freely. However, while random sampling is the simplest option, its major disadvantage is that the sample points may not be space-filling, which degrades the accuracy of the results. An alternative to random sampling is Latin hypercube sampling (LHS), which offers better space-filling characteristics. The basic idea is to divide the range of each random variable into N bins of equal probability and then to generate N samples such that, for each random variable, no two values lie in the same bin. However, LHS suffers from a major difficulty: the accuracy of LHS-based estimates cannot be increased incrementally by adding new sample points to an existing LHS sample set, since the augmented set is no longer a Latin hypercube. An efficient method to build an adaptive space-filling design is quasi-random sampling (e.g., Hammersley, Halton, Sobol), in which a deterministic sequence of points is generated so as to reduce the discrepancy of the point sets. In the present work, the coefficients of the PCEs are estimated by the regression-based NIPC, using the Sobol sampling scheme [22].

Due to the orthogonality of the basis, it is straightforward to show that the mean is \(\langle u(\pmb {x};{\pmb {\xi }})\rangle ={u}_0\), and the variance of the response reads as:

$$\begin{aligned} \sigma ^2= Var \left( \sum _{i=0}^{P} u_{i}({\pmb {x}})\psi _{i}({\pmb {\xi }}) \right) =\sum _{i=1}^{P}u_i^2 \langle \psi _i \psi _i\rangle . \end{aligned}$$
(5)
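As a minimal sketch of Eqs. (3)–(5), the following fits a one-dimensional Legendre PCE by least squares to a toy quadratic response, using oversampled Sobol points; `nipc_regression` and `model` are illustrative names, not from the original study, and numpy/scipy are assumed available:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import qmc

def nipc_regression(model, p, n_samples, seed=0):
    """Fit a 1D Legendre PCE of order p to `model` by least squares (Eq. (3))."""
    sampler = qmc.Sobol(d=1, scramble=True, seed=seed)
    # Sobol points in [0,1), mapped to xi in [-1,1]
    xi = 2.0 * sampler.random_base2(int(np.log2(n_samples))).ravel() - 1.0
    Psi = legendre.legvander(xi, p)        # Psi[s, i] = psi_i(xi^s)
    b = model(xi)
    U, *_ = np.linalg.lstsq(Psi, b, rcond=None)
    return U

# Toy response whose exact Legendre PCE is u = 2*psi_0 + 2*psi_1 + 2*psi_2
model = lambda xi: 1.0 + 2.0 * xi + 3.0 * xi**2
p = 3
U = nipc_regression(model, p, 2 * (p + 1))     # 2(P+1) = 8 samples (oversampling)

norms = 1.0 / (2.0 * np.arange(p + 1) + 1.0)   # <psi_i^2> for Legendre, xi ~ U[-1,1]
mean = U[0]                                     # <u> = u_0
variance = np.sum(U[1:]**2 * norms[1:])         # Eq. (5)
print(mean, variance)                           # 2.0 and 32/15 (to machine precision)
```

Since the toy response is itself a polynomial of degree two, the least-squares fit recovers the expansion coefficients exactly, and the mean and variance follow directly from the coefficients and the basis norms.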

Reduced Basis Methodology

The above classical PCE (i.e., Eq. (1)) does not represent an optimal PC representation of \(u(\pmb {x};\pmb {\xi })\). The optimal chaos expansion is the Karhunen–Loève (KL) expansion (also known as proper orthogonal decomposition (POD)). However, this requires knowledge of the covariance of the solution, which is unknown. Assuming that the behavior in spatial and random space can be decoupled, the covariance can be obtained via inexpensive calculations on a coarse grid. The size of the coarse grid necessary for the estimation of the optimal basis can be identified through mesh adaptation in the spatial domain of the problem. Next, the problem can be solved on a fine mesh using the previously defined optimal basis \(\{z_i(\pmb \xi )\}_{i=0}^m\), where m is the number of dominant eigenvalues. This indicates that the dimensionality of the KL expansion can be reduced.

The first step in the model reduction scheme is to find an optimal basis using POD, a well-known procedure for extracting basis functions using an ensemble of realizations. To this end, suppose, on a fine grid, expression (6) represents an optimal chaos expansion of the stochastic field \(u(\pmb {x};\pmb {\xi })\):

$$\begin{aligned} u(\pmb {x};\pmb {\xi })-\langle u(\pmb {x};\pmb {\xi })\rangle =\sum _{i=1}^m \hat{u}_{i}({\pmb {x})} z_{i}(\pmb {\xi }), \end{aligned}$$
(6)

where the mean function is the coefficient of the zeroth-order basis (i.e., \(\langle u(\pmb {x};\pmb {\xi })\rangle =\hat{u}_0\)) and \(\{z_i(\pmb \xi )\}_{i=0}^m\) are the \(m+1\) dominant modes, forming the optimal basis.

On the coarse grid, the covariance matrix \(C(\pmb {x}_i,\pmb {x}_j)\) of the stochastic field can be obtained from:

$$\begin{aligned} C(\pmb {x}_i,\pmb {x}_j)= & {} \sum _{k=1}^P u_k(\pmb {x}_i)u_k(\pmb {x}_j)\langle \psi _k^2\rangle , \end{aligned}$$
(7)

where \(u_k\)’s are the classical PCE coefficients obtained using Eq. (3) on the coarse grid.

The corresponding eigenvalues \(\nu _i\) and eigenfunctions \(\phi _i(\pmb {x})\) are the solution of the following eigenvalue problem:

$$\begin{aligned} C \phi _i=\nu _i \phi _i. \end{aligned}$$
(8)

For a coarse mesh with n grid nodes, the \(n\times n\) covariance matrix has the following form:

$$\begin{aligned} C= \begin{pmatrix} \sum _{k=1}^P u_k^2(\pmb {x}_1)\langle \psi _k^2\rangle &{} \cdots &{} \sum _{k=1}^P u_k(\pmb {x}_1)u_k(\pmb {x}_n)\langle \psi _k^2\rangle \\ \vdots &{} \vdots &{} \vdots \\ \sum _{k=1}^P u_k(\pmb {x}_i)u_k(\pmb {x}_1)\langle \psi _k^2\rangle &{} \cdots &{} \sum _{k=1}^P u_k(\pmb {x}_i)u_k(\pmb {x}_n)\langle \psi _k^2\rangle \\ \vdots &{} \vdots &{} \vdots \\ \sum _{k=1}^P u_k(\pmb {x}_n)u_k(\pmb {x}_1)\langle \psi _k^2\rangle &{} \cdots &{}\sum _{k=1}^P u_k^2(\pmb {x}_n)\langle \psi _k^2\rangle \\ \end{pmatrix}. \end{aligned}$$
(9)

For \(n\gg P\), the solution of the above eigenvalue problem is time-consuming and requires a large amount of memory. To overcome this problem, one can note that the covariance matrix C is symmetric and thus can be decomposed as:

$$\begin{aligned} C= \underbrace{ \begin{pmatrix} u_{1}(\pmb {x}_1)\sqrt{\langle \psi _1^2\rangle } &{} \cdots &{}u_{P}(\pmb {x}_1)\sqrt{\langle \psi _P^2\rangle } \\ \vdots &{} \vdots &{} \vdots \\ u_{1}(\pmb {x}_i)\sqrt{\langle \psi _1^2\rangle } &{} \cdots &{}u_{P}(\pmb {x}_i)\sqrt{\langle \psi _P^2\rangle } \\ \vdots &{} \vdots &{} \vdots \\ u_{1}(\pmb {x}_n)\sqrt{\langle \psi _1^2\rangle } &{} \cdots &{}u_{P}(\pmb {x}_n)\sqrt{\langle \psi _P^2\rangle }\\ \end{pmatrix}}_{Y(n\times P)} \end{aligned}$$
$$\begin{aligned} \otimes \underbrace{ \begin{pmatrix} u_{1}(\pmb {x}_1)\sqrt{\langle \psi _1^2\rangle } &{} \cdots &{} u_{1}(\pmb {x}_i)\sqrt{\langle \psi _1^2\rangle }&{} \cdots &{}u_{1}(\pmb {x}_n) \sqrt{\langle \psi _1^2\rangle } \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ u_{P}(\pmb {x}_1)\sqrt{\langle \psi _P^2\rangle } &{} \cdots &{} u_{P}(\pmb {x}_i)\sqrt{\langle \psi _P^2\rangle }&{} \cdots &{}u_{P}(\pmb {x}_n)\sqrt{\langle \psi _P^2\rangle } \end{pmatrix},}_{Y^T(P\times n)} \end{aligned}$$
(10)

where the \(P\times n\) matrix \(Y^T\) is the transpose of the \(n\times P\) matrix Y.

Substitution of the above decomposition in Eq. (8) and multiplication by \(Y^T\) yields:

$$\begin{aligned} Y^T Y(Y^T \phi _i)=\nu _i (Y^T \phi _i), \end{aligned}$$
(11)

This indicates that \(Y^T Y\) has eigenfunctions \(Y^T\phi _i\) and the same eigenvalues as C. However, \(Y^T Y\) is only a \(P\times P\) matrix, and thus it is less expensive to find its eigenvalues and corresponding eigenfunctions than those of the original covariance matrix C. This makes the size of the eigenvalue problem independent of the coarse grid size. By computing the eigenvalues from Eq. (11), the upper limit m in Eq. (6) can be determined from the size of the dominant eigenspace such that \({{\sum _{i=1}^{m}} \nu _i/{\sum _{i}\nu _i}}\) is sufficiently close to unity. In this work, the upper limit m is chosen to be the minimum integer such that \({{\sum _{i=1}^{m}} \nu _i/{\sum _{i}\nu _i}}\ge \varepsilon \) for a given \(\varepsilon \) (for instance, \(\varepsilon =0.99\)).
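The saving offered by Eq. (11) can be illustrated with synthetic data: the nonzero eigenvalues of the \(n\times n\) covariance \(C=YY^T\) coincide with those of the small \(P\times P\) matrix \(Y^TY\), and the truncation size m follows from the cumulative eigenvalue ratio. The matrix Y below is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, P = 500, 6                       # n grid nodes >> P PCE modes
Y = rng.standard_normal((n, P))     # stands in for Y[i,k] = u_k(x_i)*sqrt(<psi_k^2>)

# Small P x P eigenproblem (Eq. (11)) instead of the n x n covariance C = Y Y^T
nu, W = np.linalg.eigh(Y.T @ Y)
nu, W = nu[::-1], W[:, ::-1]        # sort eigenpairs in descending order

# The nonzero eigenvalues of C coincide with those of Y^T Y:
nu_full = np.linalg.eigvalsh(Y @ Y.T)[::-1][:P]
print(np.allclose(nu, nu_full))     # True
# (the eigenvectors of C are recovered as phi_i = Y @ W[:, i] / sqrt(nu[i]))

# Truncation: smallest m capturing a fraction eps of the total variance
eps = 0.99
ratio = np.cumsum(nu) / np.sum(nu)
m = int(np.searchsorted(ratio, eps) + 1)
```

Only the \(P\times P\) problem is ever solved, so the cost is independent of the coarse-grid size n, as stated above.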

Having obtained \(u_i(\pmb {x})\) from the regression-based NIPC (Eq. (3)) on the coarse grid and the eigenfunctions \(\phi _i(\pmb {x})\) from the solution of the eigenvalue problem (11), the set of optimal basis functions \(\{z_i(\pmb \xi )\}_{i=0}^m\) can now be recast as a linear combination of the set of classical polynomial chaos functions, \(\{\psi _i(\pmb {\xi })\}_{i=1}^P\), using the following scalar product:

$$\begin{aligned} z_i (\pmb {\xi }) = [u(\pmb {x};\pmb {\xi })-\langle u(\pmb {x})\rangle ,\phi _i(\pmb {x})] = \sum _{j=1}^P \alpha _{ij}\psi _j(\pmb {\xi }), \end{aligned}$$
(12)

where the coefficients \(\alpha _{ij}\) are obtained via the scalar product:

$$\begin{aligned} \alpha _{ij}=\int _R u_{j}(\pmb {x}) \phi _{i} (\pmb {x})\,d\pmb {x}. \end{aligned}$$
(13)
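A small numerical check of Eqs. (12)–(13), again on synthetic coarse-grid coefficients: projecting the classical PCE coefficients onto the covariance eigenvectors yields coefficients \(\alpha _{ij}\) for which the optimal basis is orthogonal with \(\langle z_i,z_i\rangle =\nu _i\) (consistent with Eq. (16) below). Unit grid weights are assumed for the discrete inner product:

```python
import numpy as np

rng = np.random.default_rng(2)
n, P = 200, 5
u = rng.standard_normal((n, P))              # stands in for the coefficients u_j(x_i), j = 1..P
w = 1.0 / (2.0 * np.arange(1, P + 1) + 1.0)  # basis norms <psi_j^2> (Legendre-like, illustrative)

Y = u * np.sqrt(w)                           # C = Y Y^T  (Eq. (10))
nu, V = np.linalg.eigh(Y.T @ Y)
nu, V = nu[::-1], V[:, ::-1]                 # descending eigenvalues
phi = Y @ V / np.sqrt(nu)                    # orthonormal eigenvectors of C (n x P)

# alpha_ij = discrete inner product of u_j with phi_i  (Eq. (13), unit grid weights)
alpha = phi.T @ u                            # row i holds alpha_{i1}, ..., alpha_{iP}

# Orthogonality of the optimal basis: sum_j alpha_ij alpha_lj <psi_j^2> = nu_i delta_il
G = (alpha * w) @ alpha.T
print(np.allclose(G, np.diag(nu)))           # True
```

The Gram matrix G of the \(z_i\)'s is diagonal with the POD eigenvalues on the diagonal, which is exactly the property used later to compute the variance from the reduced expansion.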

The \(m+1\) unknowns \(\hat{u}_{i}\)’s in the optimal expansion can be obtained by substitution of \(m+1\) random vectors (\({\pmb \xi }^s,s = 1,\ldots ,m+1\)) and the corresponding stochastic outputs \(u(\pmb {x};\pmb {\xi }^s)\) in Eq. (6). This yields the following linear system of equations:

$$\begin{aligned} \underbrace{ \begin{pmatrix} z_{0}({\pmb \xi }^1)&{} \cdots &{} z_{i}({\pmb \xi }^1)&{} \cdots &{} z_{m}({\pmb \xi }^1) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_{0}({\pmb \xi }^s)&{} \cdots &{} z_{i}({\pmb \xi }^s) &{} \cdots &{}z_{m}({\pmb \xi }^s) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_{0}({\pmb \xi }^{m+1})&{} \cdots &{} z_{i}({\pmb \xi }^{m+1}) &{} \cdots &{}z_{m}({\pmb \xi }^{m+1})\\ \end{pmatrix}}_{Z({\pmb \xi }^s)} \begin{pmatrix} \hat{u}_0(\pmb {x}) \\ \vdots \\ \hat{u}_i(\pmb {x}) \\ \vdots \\ \hat{u}_m(\pmb {x})\\ \end{pmatrix} = \begin{pmatrix} u({\pmb {x};\pmb \xi }^1) \\ \vdots \\ u({\pmb {x};\pmb \xi }^s) \\ \vdots \\ u({\pmb {x};\pmb \xi }^{m+1}) \\ \end{pmatrix}. \end{aligned}$$
(14)

Using Eq. (12), Eq. (14) can be re-expressed as:

$$\begin{aligned} \underbrace{ \begin{pmatrix} z_{0}({\pmb \xi }^1)&{} \cdots &{} \sum _{j=1}^P \alpha _{ij}\psi _j({\pmb \xi }^1)&{} \cdots &{} \sum _{j=1}^P \alpha _{mj}\psi _j({\pmb \xi }^1) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_{0}({\pmb \xi }^s)&{} \cdots &{} \sum _{j=1}^P \alpha _{ij}\psi _j({\pmb \xi }^s) &{} \cdots &{}\sum _{j=1}^P \alpha _{mj}\psi _j({\pmb \xi }^s) \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_{0}({\pmb \xi }^{m+1})&{} \cdots &{} \sum _{j=1}^P \alpha _{ij}\psi _j({\pmb \xi }^{m+1}) &{} \cdots &{}\sum _{j=1}^P \alpha _{mj}\psi _j({\pmb \xi }^{m+1})\\ \end{pmatrix}}_{Z({\pmb \xi }^s)} \begin{pmatrix} \hat{u}_0(\pmb {x}) \\ \vdots \\ \hat{u}_i(\pmb {x}) \\ \vdots \\ \hat{u}_m(\pmb {x})\\ \end{pmatrix} = \begin{pmatrix} u({\pmb {x};\pmb \xi }^1) \\ \vdots \\ u({\pmb {x};\pmb \xi }^s) \\ \vdots \\ u({\pmb {x};\pmb \xi }^{m+1}) \\ \end{pmatrix}. \end{aligned}$$
(15)

The matrix Z, containing the optimal basis, is already known from Eqs. (3) and (13), and the right-hand side of Eq. (15) can be found from \(m+1\) runs of the deterministic solver at \({\pmb \xi }^1,\ldots ,{\pmb \xi }^s,\ldots ,{\pmb \xi }^{m+1}\) on the fine mesh. Thus, the expansion coefficients \(\hat{u}_{i}(\pmb {x})\) are obtained by the solution of the above linear system. Here, again oversampling is required. Following the approach used in the regression-based NIPC analysis, \(2(m+1)\) sample points were found adequate to give acceptable results. As pointed out, the coefficient of the zeroth-order basis (\(z_0(\pmb \xi )\)) is the mean output (i.e., \(\langle u(\pmb {x};{\pmb \xi })\rangle =\hat{u}_0\)), while the variance is expressed as:

$$\begin{aligned} \sigma ^2= \sum _{i=1}^m {\hat{u}_{i}}^2\langle z_i,z_i\rangle , \end{aligned}$$
(16)

where \(\langle z_i,z_i\rangle =\nu _i\).

Results and Discussion

In the following subsections, numerical results for three benchmark stochastic problems, namely (I) Ackley function, (II) 2D RAE2822 transonic airfoil, and (III) 3D NASA rotor 37, are presented and discussed.

Highly Irregular Ackley Function

The 2D Ackley function is a challenging test function for the validation of the developed reduced basis methodology due to its complex structural distribution. The stochastic Ackley function is defined as:

$$\begin{aligned} u(x,y;\pmb {\xi })=-a(1+0.1\xi _1)\exp \left( -b(1+0.1\xi _2)\sqrt{\tfrac{1}{2}\left( x^2+y^2\right) }\right) -\exp \left( \tfrac{1}{2}\left[ \cos \left( c(1+0.1\xi _3)x\right) +\cos \left( c(1+0.1\xi _3)y\right) \right] \right) +a(1+0.1\xi _1)+e, \end{aligned}$$
(17)

where the function coefficients \(a=20\), \(b=0.2\), and \(c=2\pi \) are uncertain and the associated random variables \(\pmb {\xi }=\{\xi _i\}_{i=1}^{3}\) are uniformly distributed over \([-1,1]^{3}\), giving each coefficient a CoV of 0.0577.

Fig. 1: Representation of the Ackley function on different grids

Fig. 2: Normalized eigenvalues using different coarse mesh sizes for the stochastic Ackley function: (a) linear scale; (b) semi-log scale

Figure 1 shows the deterministic Ackley function (i.e., \(\pmb {\xi }=0\)) on different grids. As expected, the Ackley function is highly irregular in the 2D spatial domain and is characterized by a nearly flat outer region and a large hole at the center. Mesh refinement from \(5\times 5\) to \(160\times 160\) reveals more details of the function. It was found that a finer mesh with \(400\times 400\) nodes is necessary to reproduce the fine-scale structures of the Ackley function, and thus such a fine mesh is employed for the fine-scale analysis. Figure 2 shows the distribution of the normalized eigenvalues in linear and semi-log scales when different grids are used for the coarse-scale analysis. A high polynomial order (\(p=13\)) is used for the coarse grid analysis, because a regression-based NIPC analysis indicated that such a high polynomial order is necessary to reproduce the details of the mean, variance, and skewness fields. As shown in Fig. 2, the eigenvalues decay rapidly. Thus, only a limited number of modes (or eigenvalues) is needed in the KL expansion. The number of retained eigenvalues depends on the required accuracy of the statistics: for higher accuracy, a larger number of modes should be taken into account. As expected, Fig. 2b shows that the normalized eigenvalue distributions decay more slowly on the finer grids. Results show that for this nonlinear test case, an accurate solution is obtained when a \(40\times 40\) mesh is used for the coarse grid analysis. In Fig. 3, the distributions of the mean, variance, and skewness fields returned by the reduced basis method are compared with the distributions obtained using the regression-based NIPC. It is observed that with a reduced basis of dimension \(m+1=15\) (corresponding to \(\varepsilon =0.99999999\)), the fine-scale results are very close to those of the full NIPC.
With a reduced basis of size \(m+1=15\), the average relative error (\(\bar{\varepsilon }_r\)) in the mean, variance, and skewness is of the order of \(10^{-5}\), \(10^{-3}\), and \(10^{-2}\), respectively. Note, however, that for this case the full PC analysis needs \(2(P+1)=1120\) expensive function evaluations. Further analysis (not presented here for the sake of brevity) shows that the reduced basis methodology is more efficient than the classical PCE by more than one order of magnitude. Further efficiency improvement can be achieved by using a smaller \(\varepsilon \) (e.g., \(\varepsilon =0.99\)) and increasing the allowable relative error in the statistical quantities. More details can be found in [23].

Fig. 3: Comparison of the mean (first row), variance (second row), and skewness (third row) fields for the Ackley function

2D Transonic RAE2822 Airfoil

The 2D transonic flow around the RAE2822 airfoil represents a challenging configuration to investigate the performance of the developed reduced-order model due to the shock formation. The nominal flow conditions (free-stream Mach number \(M=0.734\), angle of attack \(\alpha =2.79^\circ \), and Reynolds number \(Re=6.5\times 10^6\)) are considered for this test case. For the deterministic solution of the RAE2822 using Ansys Fluent, the second-order upwind scheme is employed for the approximation of the nonlinear convective terms in all transport equations, and the Spalart–Allmaras turbulence model is used for the predictions. To assess the accuracy of the results, a grid study was performed with four different C-type meshes with, respectively, \(7.5\times 10^2\), \(3.0\times 10^3\), \(1.1 \times 10^4\), and \(4.4 \times 10^4\) grid nodes. The coarse mesh with \(3.0\times 10^3\) and the finest mesh with \(4.4 \times 10^4\) grid nodes are shown in Fig. 4. It was found that the predictions on the finest mesh are grid independent, and this mesh is therefore used for the fine-scale analysis. The geometry of the airfoil is assumed to be subject to random deformations, and variations of the airfoil boundary are modeled using the following Gaussian-shaped covariance:

$$\begin{aligned} Cov(s_i,s_j)= & {} \sigma (s_i) \sigma (s_j) \exp \left[ -\frac{(s_i-s_j)^2}{2b^2}\right] , \end{aligned}$$
(18)
Fig. 4: The coarse and fine C-type meshes with \(3.0\times 10^3\) and \(4.4 \times 10^4\) grid nodes, respectively

where \(s_i\) and \(s_j\) are positions along the airfoil, b is the correlation length, and \(\sigma \) is the standard deviation. For the RAE2822 airfoil, \(0 \le s \le 2.032\); position \(s=0\) corresponds to the leading edge, and s increases along the upper surface. A constrained standard deviation, \(\sigma (s)={\sigma '}S(s)\), is considered to freeze the leading and trailing edges of the airfoil. The constraint functions on the upper and lower walls of the airfoil are, respectively, expressed as:

$$\begin{aligned} S(s) = \left\{ \begin{array}{l l} \sin \left( \frac{\pi s}{s_u}\right) ~\quad \qquad 0\le s<s_u\\ \\ \sin \left[ \frac{\pi (s-s_u)}{s_l}\right] \quad s_u\le s<s_u+s_l \end{array} \right. \end{aligned}$$
(19)

where \(s_u=\int _{upper} ds\) and \(s_l=\int _{lower} ds\).

Using the KL expansion, a stochastic process with a given covariance function can always be approximated by a finite sum of products of deterministic spatial functions and uncorrelated random variables. The geometrical uncertainty at the airfoil surface can then be expressed as:

$$\begin{aligned} \mathbf {X}(s,\pmb {\xi })\approx \bar{X}(s)+\sum _{k=1}^{n_s}\sqrt{\lambda _k}\, \phi _k(s)\,\xi _k\, \hat{n}, \end{aligned}$$
(20)

where \(\mathbf {X}(s,\pmb {\xi })\) is the airfoil coordinate at sample point \(\pmb \xi \), \(\bar{X}(s)\) is the mean airfoil coordinate, \(\hat{n}\) is the normal vector, and \(\phi _k(s)\) and \(\lambda _k\) are the eigenfunctions and eigenvalues of the covariance kernel, respectively.
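The discretized KL construction of Eqs. (18)–(20) can be sketched numerically, restricted to the upper surface and treating the perturbation as a scalar amplitude along \(\hat{n}\); the grid size and the arc length \(s_u=1.016\) (taken here as half of 2.032) are illustrative assumptions:

```python
import numpy as np

n, s_u = 400, 1.016            # grid size and upper-surface arc length (assumed values)
b, sigma_p = 0.05, 0.001       # correlation length and sigma' from the text
s = np.linspace(0.0, s_u, n)
sigma = sigma_p * np.sin(np.pi * s / s_u)   # constrained standard deviation, Eq. (19)

# Gaussian-shaped covariance kernel, Eq. (18)
Cov = np.outer(sigma, sigma) * np.exp(-(s[:, None] - s[None, :])**2 / (2.0 * b**2))

lam, phi = np.linalg.eigh(Cov)
lam, phi = lam[::-1], phi[:, ::-1]          # descending eigenvalues

n_s = 10                                    # retained KL modes
rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, n_s)            # one draw of the uniform random variables
dX = (phi[:, :n_s] * np.sqrt(lam[:n_s])) @ xi   # normal-direction perturbation, Eq. (20)

print(lam[:n_s].sum() / lam.sum())          # variance fraction captured by the first 10 modes
```

Because \(\sigma (0)=0\), the covariance vanishes at the leading edge, so every retained eigenfunction, and hence the sampled perturbation, is pinned to zero there, which is exactly the "frozen edges" behavior described above.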

Fig. 5: Normalized eigenvalues obtained from the analysis on four different grid sizes

A case with correlation length \(b = 0.05\) and standard deviation \(\sigma ^\prime = 0.001\) is considered for the present analysis. The random variables are uniformly distributed over the stochastic space \([-1, 1]^{n_s}\), where \(n_s\) is the number of independent random variables. The ten highest modes of the KL expansion are considered as uncertain for the UQ of the RAE2822 test case. The coarse-scale analysis is performed on a mesh with \(3.0\times 10^3\) nodes (shown in Fig. 4), a grid roughly fourteen times smaller than the finest grid. The criterion for selecting the coarse grid is based on the analysis presented for the Ackley function: starting from a POD analysis on a very coarse mesh, the mesh size is gradually increased until sufficient convergence of the POD eigenvalues is reached. This is illustrated in Fig. 5, where the normalized eigenvalues are shown for four different mesh sizes with \(7.5\times 10^2\), \(3.0\times 10^3\), \(1.1 \times 10^4\), and \(4.4 \times 10^4\) grid nodes. It is observed that the eigenvalues have already converged on the \(3.0\times 10^3\) grid. A classical PC analysis of third order using the regression-based NIPC is performed on the coarse grid to obtain the covariance of the solution in stochastic space; in this analysis, the covariance matrix is built using all primitive variables (\(\rho \); \(\rho U\); \(\rho V\); \(\rho E\)). In the regression approach, a total of 572 samples is needed, as the classical PCE contains 286 polynomials. The Sobol quasi-random sequence is used to generate these sample points. For \(\varepsilon = 0.99\), the size of the reduced basis is 22, requiring 44 deterministic CFD calculations on the fine grid. In Fig. 6, the results (pressure coefficient) obtained with the reduced-order model and with the full PC are compared.
It is observed that the results of the reduced-order model are in acceptable agreement with those of the full model. On average, the errors in the mean \(C_p\) and its variance are less than \(0.2\%\) and \(5.0\%\), respectively. As shown in [24], for the present test case, the reduced basis method (using \(\varepsilon = 0.99\)) is almost 6–7 times more efficient than the classical PC method. A detailed discussion of the effect of the criterion \(\varepsilon \) on the accuracy of the reduced basis method is presented in [24]. A case where the covariance matrix in the reduced basis approach is built using only one primitive variable (e.g., \(\rho U\)) was also analyzed, and similar results were obtained.

Fig. 6: Comparison of the pressure coefficient (\(\varepsilon = 0.99\)): mean and standard deviation using the classical PC and the model reduction method

Fig. 7: Meridional view of the rotor 37 blade with tip gap

Fig. 8: Mean and standard deviation of the pressure distribution around the rotor blade using the reduced model for PC order 2

3D Transonic Rotor 37

For the validation of the developed reduced basis approach, uncertainty quantification of the rotor 37, shown in Fig. 7, is further considered. The rotational speed of the rotor is 17188 rpm, and the outlet static pressure is fixed at 110000 Pa. A combination of geometrical and operational uncertainties is considered for this test case. The geometry of the rotor blade is parameterized into three sections of 2D airfoils (at 25, 50, and 75% of the blade height) using AutoBlade of NUMECA. For each airfoil section, the leading and trailing edge angles are considered as uncertain. To model the geometrical uncertainty around the blade, uncertainty is also imposed on four half-thickness parameters (coefficients of the half-thickness Bezier curve) of each airfoil section. In addition to these geometrical uncertainties, the tip clearance, the inlet total pressure profile, and the outlet static pressure are also considered uncertain. As a result, a total of 21 uncertain parameters is used for the uncertainty quantification of the NASA rotor 37. Symmetric beta distributions (\(\alpha =\beta =4\)) are chosen for all uncertain variables. The details of this test case are given in [24]. Based on the experience with the previous test cases, a coarse grid with \(1.04\times 10^5\) cells is chosen. With a fine grid of \(7.66\times 10^5\) cells, the fine-to-coarse grid ratio is almost 7.5. Using a PC order of 2, a total of 506 samples on the coarse grid is used to obtain the covariance matrix. Based on the results of the previous test case, only the static pressure is used to construct the covariance matrix. Similar to the previous test case, a very fast decay of the eigenvalues is observed. The threshold \(\varepsilon \) is set to 0.9999 to capture most of the stochastic information from the coarse grid. The size of the reduced basis is then 21, requiring only 42 deterministic CFD simulations on the fine grid.
Figure 8 compares the pressure distribution around the blade at mid-span height obtained with the reduced basis method and with the classical polynomial chaos method for the second-order PC. The mean (left) and the standard deviation (right) of the static pressure are shown. It can be observed that both methods produce similar results. Further analysis of the present test case in [24] indicates that the reduced basis method is almost 5 times more efficient than the regression-based NIPC.

Conclusions

In this paper, an efficient non-intrusive model reduction technique for PCE is presented and discussed. The proposed algorithm relies on the fact that the ideal basis for a stochastic field follows from its POD decomposition. This, however, requires the covariance structure, which in the present approach is obtained from a PCE on a coarse grid, thereby assuming that the stochastic behavior is largely independent of the spatial scales. The size of the resulting ideal basis depends on the number of POD modes that are retained but is always significantly smaller than the full PCE basis, especially for high stochastic dimensions. The reduced basis approach was successfully applied to: (1) a highly irregular analytical function, (2) the 2D transonic RAE2822 airfoil with ten geometrical uncertainties, and (3) the 3D transonic NASA rotor 37 with 21 geometrical and operational uncertainties. The numerical results show that the reduced basis method is able to produce acceptable results for the statistical quantities. The computation time of the reduced-order model is found to be much lower than that of the classical PCE.