1 Introduction

The finite element method (FEM) is a widely accepted numerical method for solving problems in science and engineering. Its adaptability offers a straightforward way to tackle complex problems in structural analysis, heat transfer, fluid mechanics and electromagnetic fields, among other applications. The advantages of the FEM are well known: it can be applied to complex geometries with mixed material and boundary conditions, and it is suitable for time-dependent problems and nonlinear material behaviour. However, the FEM is deterministic by nature and is therefore limited to describing the general characteristics of a system. In particular, it cannot directly and reliably study a system in which some degree of uncertainty exists.

Different sources of uncertainty arise in the study of complex phenomena. These include human error [1], dynamic loading [2], inherent randomness of the material [3] and lack of data [4]. In a system with large disorder, quantifying the sources of uncertainty within a deterministic framework is problematic. In practice, a researcher using the deterministic FEM is typically restricted to average values of loads and material properties applied to a model with an idealized geometry, which reduces the physical significance of the model. When variations and randomness are significant, the average values of the properties of a physical system are only a rough representation of that system.

A common practice in engineering for overcoming the effects of uncertainty within a deterministic approach is to use safety factors [5, 6]. Several types of safety factor are used to take the uncertainty of a system into account, and they are valuable tools for assessing the reliability of a structure. However, safety factors cannot quantify or predict the influence and sources of randomness in a system [4]. Nuclear reactors, dams, bridges and other complex structures require the engineer to have a deeper understanding of the physical phenomena in order to assess reliability.

The classic FEM has been combined with other methodologies to create a new type of analysis for studying systems with random variations and/or uncertainty in their parameters. It has been given several names: the stochastic finite element method (SFEM) [7, 8], the random finite element method (RFEM) [9, 10] and the probabilistic finite element method (PFEM) [11, 12]. To represent the stochastic nature of the system, random fields are introduced into the classic FEM to capture and create different stochastic scenarios. The influence of the random fluctuations is evaluated by calculating the statistical information of the response variables and evaluating the probability of an outcome of the system, such as failure. The SFEM has grown in importance over time. Many articles have been written that cover the mathematical background and extensions of this technique. Several disciplines, such as geotechnical, mechanical and civil engineering, have started to adopt the SFEM as a tool to ensure the reliability of foundations and structures.

Extensive general reviews of the mathematical framework of the SFEM have been presented elsewhere [2, 13–16]. A useful reference that covers the topic is a state-of-the-art report on computational stochastic mechanics [17]. Software that implements the SFEM has also been reviewed [18]. An overview of the types and modelling of uncertainties in computational structural dynamics is given by Soize [19]. However, there are no review articles that focus on recent applications and engineering procedures. The main aim of this article is to bridge that gap, complementing the aforementioned articles with a different perspective: applications of the SFEM in engineering and science.

This article is structured as follows: Sect. 2 contains a brief description of the SFEM, techniques to represent the randomness of a model and a summary of the main approaches of this technique (important references that serve as a guideline for each method are included); Sect. 3 covers the available software for the SFEM and stochastic mechanics; Sect. 4 presents a selection of recent applications of the SFEM in science and engineering. The final section provides a summary of the SFEM and the advantages and disadvantages of each of the branches covered.

2 A Brief Review of the Stochastic Finite Element Method (SFEM)

The stochastic finite element method (SFEM) is an extension of the FEM that incorporates random parameters. The SFEM can represent randomness in one or more of the main components of the classic FEM, namely geometry, external forces and material properties. To study the uncertainty and inherent randomness of a system the SFEM adopts different approaches. Each uses the mean, variance and correlation coefficients of the response variables to assess a quantity of interest, such as the probability of failure of a system. Several variants of the SFEM have been developed. Three of these are the most used and accepted: Monte Carlo simulation (MCS) [20], Perturbation Method [11], and the spectral stochastic finite element method (SSFEM) [21]. Each method adopts a different approach to represent, solve and study the randomness of a system.

The inclusion and construction of the random parameters increases the computational power required to build and solve a single “realization” with the SFEM. Furthermore, many realizations are usually required to characterize a physical model. Each branch of the SFEM takes a different approach to manipulating the random parameters and solving the equilibrium equations. The main methodology of each variant will be shown in the following subsections and the general structure of each method will be covered. For a concise evaluation of the advantages and disadvantages of each method, the reader is referred to the report by Sudret and Der Kiureghian [13].

2.1 Random Fields

The three variants of the SFEM share a common component that describes the inherent randomness of a system, namely random fields. An ideal random field should capture the main attributes of the random system while taking into account the minimum number of meaningful and measurable parameters of that system. A random field can be described as a set of indexed random variables that depict the random nature of a system; the index represents the position of the random variable in space, in time or in both [22]. Random fields are characterized by the main statistical information of the variable of interest, such as the mean, variance, probability distribution and autocorrelation function, among other statistical parameters. Several random field representation methods that determine the properties of a material can be found in the literature, namely the local average method [23], the turning-bands method (TBM) [24], the Fourier Transform Method (FTM) [25] and the Local Average Subdivision (LAS) method [26]. There are a number of useful guides available in the literature that focus on random fields and their usage, including overviews and comparisons of discretization methods [10, 13].
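To make the idea concrete, the following minimal Python sketch samples a stationary 1D Gaussian random field by factorizing its covariance matrix; the exponential autocorrelation model and all parameter values are illustrative assumptions, not a prescription from the methods cited above.

```python
import numpy as np

# Minimal sketch: one realization of a 1D stationary Gaussian random
# field, built by Cholesky factorization of its covariance matrix.
n = 200                                  # grid points along the domain
x = np.linspace(0.0, 10.0, n)            # spatial index of the field
mean, std, corr_len = 30.0, 3.0, 2.0     # assumed field statistics

# Covariance from the autocorrelation rho(tau) = exp(-|tau| / corr_len)
tau = np.abs(x[:, None] - x[None, :])
cov = std**2 * np.exp(-tau / corr_len)

# The Cholesky factor maps i.i.d. standard normals to correlated values
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # jitter for stability
rng = np.random.default_rng(42)
realization = mean + L @ rng.standard_normal(n)   # one sample of the field
```

Each call with a fresh normal vector yields a new realization, which is exactly what the Monte Carlo variants of the SFEM consume; for large meshes, the specialized generators cited above replace the O(n³) Cholesky factorization.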

A review paper by Sriramula and Chryssanthopoulos [27] provides an overview of how to treat uncertainty and calibrate random fields for fibre reinforced polymer-matrix composites. The article contains useful information on how to characterize and identify different types of uncertainty at different length scales. Moreover, Representative Volume Element (RVE), homogenization and digital image-based characterization techniques are described with several examples, to help researchers integrate these methodologies when modelling the uncertainty of composites. A detailed compilation of experimental data and characteristics of several model parameters is provided, including the correlation length of several composites and references to studies that show the influence of this parameter in stochastic modelling. Additional material regarding the criteria adopted in stochastic failure models and reliability assessment is also included.

Despite the tremendous development and variety of random field generators, the connection between experimental research and random fields is poor in several areas of study and has been an ongoing concern since the early stages of the SFEM (Vanmarcke et al. [15]). Several assumptions are used to calibrate the parameters of the random fields to overcome the lack of experimental data (Stefanou [16]). Moreover, experimental techniques for the acquisition of meaningful data for the calibration of random fields have not been fully developed. This issue has been noted by several researchers. A sensitivity study of the parameters that characterize a random field, together with useful references describing how to calibrate random fields, is provided by Charmpis et al. [28].

In geotechnical engineering, the influence of the type of statistical distribution used to represent the random parameters of a random field in the analysis of foundation settlements has been covered elsewhere [29]. Section 4 gives some examples of calibrated random fields.

2.2 Random Media

This section focuses on techniques used to study two-phase random media (porous materials) using the random texture models of Jeulin [30] and Torquato [31]. Heterogeneous media with complex geometries and patterns can be modelled probabilistically using information derived from 2D and 3D images. Several types of algorithm have been developed to generate porous media. One of the most general approaches was proposed by Yeong and Torquato [32]. Their method allows the addition of as many correlation functions (descriptors of the morphology of the material) as required. It uses a modified version of simulated annealing and can be used to describe multiphase anisotropic media. In their paper, they give an example in which digitization is used to reconstruct a sample of Fontainebleau sandstone. The general procedure is as follows: (1) 2D images or micrographs are used to obtain the volume fractions of material and voids by counting the pixels of the image; (2) an initial configuration with the given volume fractions is created; (3) the initial configuration is then manipulated to create a new reconstruction, which is tested to see whether it reduces the “energy” of the system. The energy of the system is given by the squared difference between the reference and simulated correlation functions. These correlation functions represent the morphology and macroscopic properties of the material, such as pore size, volume fractions of the voids and material, degree of connectedness between the voids, and permeability.
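A minimal sketch of this annealing loop is given below, assuming a single two-point correlation function measured along the two axes of a periodic 2D image; the synthetic reference medium, cooling schedule and all parameter values are illustrative stand-ins for the digitized micrograph data used in [32].

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def s2(img, rmax):
    """Two-point correlation S2(r) of the solid phase, averaged over the
    two axes with periodic boundaries (a common simplification)."""
    return np.array([0.5 * (np.mean(img * np.roll(img, r, axis=0)) +
                            np.mean(img * np.roll(img, r, axis=1)))
                     for r in range(rmax)])

n, rmax = 48, 12

# A smoothed, thresholded noise field stands in for a real 2D micrograph.
ref = (uniform_filter((rng.random((n, n)) < 0.5).astype(float),
                      size=5, mode="wrap") > 0.5).astype(float)
target = s2(ref, rmax)

# Initial guess: shuffle the reference pixels (same volume fraction).
img = ref.flatten()
rng.shuffle(img)
img = img.reshape(n, n)

energy = np.sum((s2(img, rmax) - target) ** 2)
T = 1e-4                                    # initial "temperature" (assumed)
for step in range(20000):
    # Swap one solid and one void pixel, preserving the volume fraction.
    solid, void = np.argwhere(img == 1), np.argwhere(img == 0)
    i = tuple(solid[rng.integers(len(solid))])
    j = tuple(void[rng.integers(len(void))])
    img[i], img[j] = 0.0, 1.0
    new_energy = np.sum((s2(img, rmax) - target) ** 2)
    # Metropolis rule: always accept improvements, sometimes accept worse.
    if new_energy > energy and rng.random() > np.exp((energy - new_energy) / T):
        img[i], img[j] = 1.0, 0.0           # reject: undo the swap
    else:
        energy = new_energy
    T *= 0.9995                             # geometric cooling schedule
```

Adding further descriptors (lineal-path or cluster functions, as in [32]) only changes the energy definition; the swap-and-cool loop stays the same.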

Different algorithms have been suggested for other materials such as Gilsocarbon nuclear graphite [33], and isotropic silica [34]. A monograph by Ostoja-Starzewski [35] provides the theory and background of both random media models and random fields.

2.3 Monte Carlo Simulation

Monte Carlo simulation (MCS) is the most general and direct approach for the SFEM [3, 36]. This methodology merges the Monte Carlo simulation technique with the deterministic FEM. This methodology proceeds as follows: (1) Determine the set of random and deterministic variables; (2) Characterize the density function and correlation parameters of the random variables; (3) Use a random field generator to produce a set of random fields; (4) Calculate the solution of each realization with the deterministic FEM; (5) Gather and analyse the information of the simulations and (6) Verify the accuracy of the procedure (Fig. 1) [37].

Fig. 1

Schematic of the general procedure of the Monte Carlo simulation method

General formulae for the MCS approach to calculate the mean \(\mu _{r_i}\) (Eq. 1) and variance \(\sigma _{r_i}^2\) (Eq. 2) of the response variables, and one estimator of the probability of failure \(p_f\) (Eq. 3), are given by:

$$\begin{aligned} \mu _{r_i }&\approx \frac{1}{N}\sum _{j=1}^N r_i^{(j)} \end{aligned}$$
(1)
$$\begin{aligned} \sigma _{r_i }^2&\approx \frac{1}{N-1}\sum _{j=1}^N \left( r_i^{(j)} -\mu _{r_i } \right) ^{2} \end{aligned}$$
(2)
$$\begin{aligned} p_f&\approx \hat{p}_f =\frac{1}{N}\sum _{i=1}^N 1_f \left( \theta ^{(i)} \right) \end{aligned}$$
(3)

where N is the sample size, the index j = 1, 2, ..., N runs over the realizations, \(r_i^{(j)}\) is the response variable of interest, \(\theta ^{(i)}\) denotes the i-th realization and \(1_f(\theta ^{(i)})\) is an indicator that takes the value 1 if the response for that realization is above a given threshold or condition and 0 otherwise [2]. Like any other method based on Monte Carlo sampling, the estimators of the response variables depend on the number of realizations used to calculate them. Equations (1), (2) and (3) confirm that the accuracy of this method is governed by the number of realizations (N).
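A minimal sketch of steps (3)-(5) and the estimators of Eqs. (1)-(3) follows; the closed-form bar displacement stands in for a full FE solve, and the load, geometry, material statistics and failure threshold are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                        # number of realizations

# Toy "finite element" model: tip displacement of an axial bar,
# u = F L / (E A). A real SFEM study would call an FE solver here.
F, L_bar, A = 1e3, 2.0, 1e-4      # load [N], length [m], area [m^2]
E = rng.lognormal(mean=np.log(70e9), sigma=0.1, size=N)   # random modulus

u = F * L_bar / (E * A)           # solve every realization (step 4)

mu_u = u.mean()                   # Eq. (1): sample mean
sigma_u = u.std(ddof=1)           # square root of Eq. (2)
threshold = 3.2e-4                # assumed serviceability limit [m]
p_f = np.mean(u > threshold)      # Eq. (3): average of the indicator
print(f"mean={mu_u:.3e} m  std={sigma_u:.3e} m  p_f={p_f:.4f}")
```

Because \(\hat{p}_f\) is a sample average, its own standard error scales as \(\sqrt{p_f(1-p_f)/N}\), which is why small failure probabilities demand the very large sample sizes discussed below.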

The MCS is the simplest and most direct approach. In general, of all the methods, it requires the most computational power, especially for systems with great variability and complex models that include several random variables. Even with this disadvantage, MCS is widely accepted and is often used to validate the Perturbation Method and the SSFEM; in several cases the latter two approaches are complemented or merged with the MCS. Furthermore, alternative procedures have been proposed to reduce the computational effort of calculating the response variables and the probability of failure of a system by reducing the required sample population [3]. An extensive comparison of procedures used to decrease the number of realizations required by the MCS can be found in a benchmarking exercise [38]. In addition to these techniques, the parallelization of the solution methods of the MCS has been explored by some authors [39–41].

2.4 Perturbation Method

The Perturbation Method is another popular branch of the SFEM [42–44]. This method uses Taylor series expansions to introduce randomness into the system. When the spatial variation of the material properties is selected as a random variable, the stiffness matrix takes the form of Eq. (4).

$$\begin{aligned} \hbox {K}=\hbox {K}^{0}+\sum _{i=1}^N {\hbox {K}_i^\mathrm{I} \alpha _i} +\frac{1}{2}\sum _{i=1}^N {\sum _{j=1}^N {\hbox {K}_{ij}^{\mathrm{II}} \alpha _{i} \alpha _{j} } } +\cdots \end{aligned}$$
(4)

where \(\alpha _{i}\) (i = 1, 2, ..., N) are the random variables that represent the spatial variation of the material properties. These variables are assumed to have zero mean and to be small in comparison with the mean value of the input parameter (\(\alpha _{i} \ll 1\)). Furthermore, \(\hbox {K}^{0}\) is the stiffness matrix evaluated at the mean values of the input parameters, and \(\hbox {K}^{\mathrm{I}}_{\mathrm{i}}\) and \(\hbox {K}^{\mathrm{II}}_{\mathrm{ij}}\) are calculated from the first and second derivatives evaluated at \(\alpha = 0\), as in Eqs. (5) and (6).

$$\begin{aligned}&\left. {\hbox {K}_i^\mathrm{I} = \frac{\partial \hbox {K}}{\partial \alpha _i }} \right| _{\alpha =0}\end{aligned}$$
(5)
$$\begin{aligned}&\left. {K_{ij}^{\mathrm{II}} = \frac{\partial ^{2}\hbox {K}}{\partial \alpha _i \partial \alpha _j }} \right| _{\alpha =0} \end{aligned}$$
(6)

When the external force, F, is considered as random, the force vector is expanded as in Eq. (7), and the terms \(\hbox {F}^{\mathrm{I}}_{\mathrm{i}}\) and \(\hbox {F}^{\mathrm{II}}_{\mathrm{ij}}\) are obtained from the partial derivatives calculated in Eqs. (8) and (9). \(\hbox {F}^{0}\) is the mean value of the force vector.

$$\begin{aligned}&\hbox {F} = \hbox {F}^{0}+\sum _{i=1}^N {\hbox {F}_i^\mathrm{I} \alpha _i} +\frac{1}{2}\sum _{i=1}^N {\sum _{j=1}^N {\hbox {F}_{ij}^{\mathrm{II}} \alpha _{i} \alpha _{j}}} +\cdots \end{aligned}$$
(7)
$$\begin{aligned}&F_i^\mathrm{I} = \left. \frac{\partial F}{\partial \alpha _i } \right| _{\alpha =0} \end{aligned}$$
(8)
$$\begin{aligned}&F_{ij}^{\mathrm{II}} =\left. \frac{\partial ^{2}F}{\partial \alpha _{i} \partial \alpha _{j}} \right| _{\alpha =0} \end{aligned}$$
(9)

If the force vector F is chosen to be deterministic, the partial derivatives of F become zero. Similarly, the displacement vector U is represented in the form of the equation below (Eq. 10):

$$\begin{aligned} \hbox {U}=\hbox {U}^{0}+\sum _{i=1}^N {\hbox {U}_i^\mathrm{I} \alpha _i} +\frac{1}{2}\sum _{i=1}^N {\sum _{j=1}^N {\hbox {U}_{ij}^{\mathrm{II}} \alpha _{i} \alpha _{j}}} +\cdots \end{aligned}$$
(10)

The terms \(\hbox {U}^{0},\, \hbox {U}^{\mathrm{I}}_{\mathrm{i}}\) and \(\hbox {U}^{\mathrm{II}}_{\mathrm{ij}}\) of Eq. (10) can be alternatively represented by the following recursive equations:

$$\begin{aligned}&\hbox {U}^{0}=\left( {\hbox {K}^{0} } \right) ^{-1}\hbox {F}^{0} \end{aligned}$$
(11)
$$\begin{aligned}&\hbox {U}_i^\mathrm{I} =\left( {\hbox {K}^{0} } \right) ^{-1}\left( {\hbox {F}_i^\mathrm{I} -\hbox {K}_i^\mathrm{I} \hbox {U}^{0}} \right) \end{aligned}$$
(12)
$$\begin{aligned}&\hbox {U}_{ij}^{\mathrm{II}} =\left( {\hbox {K}^{0}}\right) ^{-1}\left( {\hbox {F}_{ij}^{\mathrm{II}} -\hbox {K}_i^\mathrm{I} \hbox {U}_j^\mathrm{I} -\hbox {K}_j^\mathrm{I} \hbox {U}_i^\mathrm{I} -\hbox {K}_{ij}^{\mathrm{II}} \hbox {U}^{0}} \right) \end{aligned}$$
(13)

Equation (11) gives the deterministic nodal displacements, Eq. (12) gives the first-order perturbation of the displacement vector, and Eq. (13) provides the second-order perturbation of the displacement vector. For a detailed explanation of these equations and the calculation of stress and strain the reader is referred to [11, 42]. The accuracy of the Perturbation Method increases with the number of terms used to calculate the response variables. Higher-order moments can be obtained using a similar procedure to the one shown previously, but the calculation of moments of order greater than two is rarely found in the literature because of the high computational cost.

Using the previous expressions, the mean E[U] and covariance Cov[U, U] of the nodal displacements are given by Eqs. (14) and (15).

$$\begin{aligned}&\hbox {E}\left[ \hbox {U} \right] \approx U^{0}+\frac{1}{2}\sum _{i=1}^N {\sum _{j=1}^N {U_{ij}^{\mathrm{II}} Cov\left[ {\alpha _i ,\alpha _j } \right] } }\end{aligned}$$
(14)
$$\begin{aligned}&Cov\left[ {U,U} \right] \approx \sum _{i=1}^N {\sum _{j=1}^N {U_i^I \cdot \left( {U_j^I } \right) ^{T}Cov\left[ {\alpha _i ,\alpha _j } \right] } } \end{aligned}$$
(15)

In general, the Perturbation Method is limited to random variables whose variability is not large in comparison with their mean values; the coefficient of variation is usually kept around 10 to 15 per cent of the mean value of the variable of interest, although studies using higher coefficients of variation do exist [43]. The Perturbation Method is a popular and simple approach that can generate reasonable estimates of the statistical moments of the response variables, offering a balance between complexity and computational effort when estimating the influence of the mean, standard deviation and covariance of the response variables on the behaviour of a structure.
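The following scalar sketch illustrates Eqs. (4)-(15) for a single degree of freedom with \(K(\alpha ) = K^{0}(1+\alpha )\) and a deterministic load, comparing the second-order perturbation estimates against a brute-force Monte Carlo reference; all numerical values are assumed.

```python
import numpy as np

# Scalar illustration of Eqs. (4)-(15): K(alpha) = K0 * (1 + alpha) with a
# deterministic load F, so the exact solution is U(alpha) = F / K(alpha).
K0, F, sigma = 1e6, 5e3, 0.10        # mean stiffness, load, std of alpha

U0 = F / K0                          # Eq. (11): deterministic solution
KI, KII = K0, 0.0                    # Eqs. (5), (6) for this K(alpha)
UI = (0.0 - KI * U0) / K0            # Eq. (12) with deterministic F
UII = (0.0 - 2.0 * KI * UI - KII * U0) / K0    # Eq. (13)

mean_pert = U0 + 0.5 * UII * sigma**2          # Eq. (14)
var_pert = UI**2 * sigma**2                    # Eq. (15)

# Brute-force Monte Carlo on the exact solution as a reference
rng = np.random.default_rng(0)
alpha = sigma * rng.standard_normal(200_000)
U_mc = F / (K0 * (1.0 + alpha))
print(mean_pert, U_mc.mean())        # close for small sigma
print(var_pert, U_mc.var())          # first-order variance is slightly low
```

For \(\sigma = 0.10\) the second-order mean matches the reference to within a fraction of a per cent, while the first-order variance underestimates slightly; inflating \(\sigma \) quickly degrades both, which is the limitation noted above.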

2.5 The Spectral Stochastic Finite Element Method (SSFEM)

The spectral stochastic finite element method (SSFEM) was introduced by Roger G. Ghanem and Pol D. Spanos in their textbook [21]. Developments are described in a number of key articles [45–47]. The SSFEM is mainly concerned with representing the random material properties of a structure. To introduce the random parameters, the method uses the Karhunen-Loève expansion; representing the random parameters in this form seeks to reduce the computational power required by other methodologies such as MCS. To increase the efficiency of the SSFEM the solution space is mapped with Fourier-type series of the form:

$$\begin{aligned} E\left( {x,\theta } \right) =\bar{{E}}\left( x \right) +\sum _{i=1}^\infty {\sqrt{\lambda _i }\xi _i \left( \theta \right) } \psi _i \left( x \right) \end{aligned}$$
(16)

where \(\bar{E}(x)\) is the mean of the random process of interest, \(\xi _{i}(\theta )\) is a set of orthogonal random variables, x and y represent spatial coordinates and \(\theta \) denotes the random nature of each quantity. The other two terms of the equation are the eigenvalues (\(\lambda _{i}\)) and eigenfunctions (\(\psi _{i}(x)\)) of a covariance kernel. Both can be obtained by solving the integral equation (Eq. 17):

$$\begin{aligned} \int \limits _D {C\left( {x,y} \right) \psi _i (y)dy=\lambda _i } \psi _{i} (x) \end{aligned}$$
(17)

where D is the spatial domain of the process \(E(x,\theta )\). If the process \(E(x,\theta )\) is Gaussian, the random variables \(\{\xi _{i}\}\) form part of an orthonormal Gaussian vector. The covariance kernel defined in Eq. (17) is bounded, symmetric and positive. The integral eigenvalue problem of Eq. (17) can be solved numerically with recent approaches [48–50]. An alternative expression of Eq. (17) for discrete random fields reads as Eq. (18).

$$\begin{aligned} \xi \left( \theta \right) =\frac{1}{\sqrt{\lambda _i }}\int _D {E\left( {x,\theta } \right) \psi _i \left( x \right) dD} \end{aligned}$$
(18)
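A minimal numerical sketch of Eqs. (16)-(18) is given below: the Fredholm eigenproblem of Eq. (17) is discretized by collocation (a Nyström-type approximation) for an assumed exponential covariance kernel, and one realization of the truncated expansion of Eq. (16) is then sampled.

```python
import numpy as np

# Discretized Karhunen-Loeve expansion on [0, 1] for the assumed kernel
# C(x, y) = std^2 * exp(-|x - y| / corr_len), truncated after M terms.
n, corr_len, std, M = 400, 0.2, 1.0, 10
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

C = std**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Collocation form of Eq. (17): (C * dx) psi = lambda * psi
lam, psi = np.linalg.eigh(C * dx)
order = np.argsort(lam)[::-1]               # largest eigenvalues first
lam, psi = lam[order][:M], psi[:, order][:, :M]
psi /= np.sqrt(dx)                          # so that int psi_i^2 dx = 1

# Truncated Eq. (16) with zero mean and standard normal xi_i
rng = np.random.default_rng(7)
xi = rng.standard_normal(M)
field = psi @ (np.sqrt(lam) * xi)           # one realization E(x, theta)
print("variance captured by M terms:", lam.sum() / (std**2 * 1.0))
```

The fraction of variance captured grows quickly when the correlation length is comparable to the domain size, which is what makes the truncation effective; for short correlation lengths, many more terms M are needed.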

The introduction of the Karhunen-Loève expansion leads to another modification of the formulation of the static equilibrium equation. In the same manner as the material properties, the nodal displacements could in principle be represented with a Karhunen-Loève expansion; however, the covariance function of the solution is unknown a priori and depends on the material properties, so an alternative way of representing the nodal displacements has to be found. For this purpose, the SSFEM uses polynomial chaos expansions. If the random variables are Gaussian, the nodal displacements take the form of Eq. (19):

$$\begin{aligned} u\left( \theta \right)&= a_0 \Gamma _0 +\sum _{i_1 =1}^\infty {a_{i_1 } } \Gamma _1 \left( {\xi _{i_1 } \left( \theta \right) } \right) \nonumber \\&\quad +\sum _{i_1 =1}^\infty {\sum _{i_2 =1}^{i_1 } {a_{i_1 i_2 } } } \Gamma _2 \left( {\xi _{i_1 } \left( \theta \right) ,\xi _{i_2 } \left( \theta \right) } \right) +\cdots \end{aligned}$$
(19)

where \(\Gamma _{n}(\xi _{i_1}, \ldots , \xi _{i_n})\) is the Polynomial Chaos of order n in the variables \((\xi _{i_1}, \ldots , \xi _{i_n})\). This series represents the nodal displacement \(u(\theta )\) as a nonlinear functional of the random variables \(\xi _{i}(\theta )\) that describe the material properties. The expression for generating the terms of Eq. (19) is given by Eq. (20):

$$\begin{aligned} \Gamma _n \left( {\xi _{i_1 } ,\ldots ,\xi _{i_n } } \right) =\left( {-1} \right) ^{n}e^{\frac{1}{2}\xi ^{T}\xi }\frac{\partial ^{n}}{\partial \xi _{i_1 } \cdots \partial \xi _{i_n } }e^{-\frac{1}{2}\xi ^{T}\xi } \end{aligned}$$
(20)

where \(\xi \) is the vector that contains the n random variables \((\xi _{i_1}, \ldots , \xi _{i_n})\) [51]. For convenience, an alternative expression to Eq. (19) can be obtained by truncating the series after the p-th term and introducing a one-to-one mapping to a set with ordered indices, represented by \(\{\gamma _{j}(\theta )\}\). The resulting equation is (Eq. 21):

$$\begin{aligned} u\left( \theta \right) =\sum _{j=0}^p {u_j \gamma _j \left( \theta \right) } \end{aligned}$$
(21)

The accuracy of the method depends on the number of random variables \(\xi _{i}\) and increases with the number of terms used in the Polynomial Chaos expansion. Methods have recently been proposed to reduce the number of coefficients of the Polynomial Chaos expansion, thus reducing the computations of the SSFEM [52–54]. For the formulation of the equilibrium equation, the reader should consult the original textbook [21] and a later paper [45]. An efficient strategy to reduce the solution time of the equilibrium equations generated by the SSFEM was developed by Ghanem and Kruger [45].
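The following sketch illustrates the projection behind Eqs. (19)-(21) in the simplest setting of a single Gaussian variable: an assumed scalar response \(u(\xi ) = \exp (0.3\xi )\), standing in for a nodal displacement, is projected onto probabilists' Hermite polynomials by Gauss quadrature, and the mean and variance recovered from the coefficients are checked against the known lognormal values.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# One-dimensional polynomial chaos projection: expand u(xi) in
# probabilists' Hermite polynomials He_j of a standard normal xi.
u = lambda xi: np.exp(0.3 * xi)       # assumed response; exact mean e^0.045

P = 6                                 # truncation order, cf. Eq. (21)
pts, wts = hermegauss(40)             # Gauss nodes for weight exp(-x^2/2)
wts = wts / np.sqrt(2.0 * np.pi)      # rescale weights to a Gaussian pdf

coeffs = []
for j in range(P + 1):
    sel = np.zeros(j + 1)
    sel[j] = 1.0                      # series coefficients selecting He_j
    He_j = hermeval(pts, sel)
    # u_j = E[u(xi) He_j(xi)] / E[He_j(xi)^2], where E[He_j^2] = j!
    coeffs.append(np.sum(wts * u(pts) * He_j) / math.factorial(j))

mean_pc = coeffs[0]                   # the zeroth coefficient is the mean
var_pc = sum(c**2 * math.factorial(j)
             for j, c in enumerate(coeffs) if j > 0)
print(mean_pc, np.exp(0.045))         # agree to many digits
print(var_pc, np.exp(0.09) * (np.exp(0.09) - 1.0))
```

In the full SSFEM the coefficients \(u_j\) are vectors of nodal displacements determined by a Galerkin projection of the equilibrium equations rather than by quadrature on a known response, but the orthogonality relations used here are the same.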

The SSFEM and the spectral representation of random variables have received increasing attention recently because the methodology aims to reduce the computational power required to analyse a stochastic process in comparison with the MCS. Since its inception, further developments in efficient algorithms have improved the capabilities and performance of the original SSFEM.

3 Software for the SFEM

Software for the SFEM was scarce until recently. Several software developers have incorporated SFEM algorithms or created specialized SFEM solvers and reliability tools to study systems with random variations. A review of software packages was given in a Special Issue of Structural Safety [18]. All the software packages considered for the Special Issue were “general-purpose” packages capable of handling a wide range of applications, with additional tools to study the reliability of a system. These packages offer a wide range of procedures to calculate the reliability of a system, such as MCS, Advanced Monte Carlo simulation (Ad. MCS) [55], the Response Surface Method [56], First-Order Reliability Methods (FORM) and Second-Order Reliability Methods (SORM). A complete and concise explanation of FORM, SORM and the response surface method can be found elsewhere [37, 57]. A more recent account of the capabilities of the general-purpose software COSSAN is available [58].
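To give a flavour of what such reliability procedures compute, the sketch below implements a bare-bones FORM iteration (the Hasofer-Lind-Rackwitz-Fiessler scheme) for an assumed two-variable limit state; it illustrates the method only and is not the API of any package reviewed in [18].

```python
import numpy as np
from scipy.stats import norm

# FORM sketch: locate the design point of a limit state g(u) = 0 in
# standard normal space, then approximate p_f by Phi(-beta).
muR, sdR, muS, sdS = 12.0, 1.5, 8.0, 2.0   # assumed resistance/load stats

def g(u):
    r = muR + sdR * u[0]                   # resistance, mapped from u1
    s = muS + sdS * u[1]                   # load effect, mapped from u2
    return r - s                           # failure when g < 0

def grad_g(u, h=1e-6):                     # finite-difference gradient
    return np.array([(g(u + h * e) - g(u - h * e)) / (2.0 * h)
                     for e in np.eye(2)])

u = np.zeros(2)
for _ in range(20):                        # HL-RF fixed-point iteration
    gv, gr = g(u), grad_g(u)
    u = (gr @ u - gv) * gr / (gr @ gr)     # project onto linearized g = 0

beta = np.linalg.norm(u)                   # reliability index
print("beta =", beta, " p_f ~", norm.cdf(-beta))
```

For this linear Gaussian limit state the result is exact (beta = 4 / sqrt(1.5² + 2²) = 1.6); for nonlinear limit states FORM linearizes at the design point, which is where SORM and the sampling methods listed above take over.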

Fenton and Griffiths merged previously developed finite element analysis software [59] with specialized subroutines for the SFEM using an MCS approach, calling the resulting application RFEM [10]. Some of the RFEM functionality has been incorporated into the parallel finite element package ParaFEM [60–62]. Kleiber and Hien released SFESTA and SFEDYN, for 3D trusses and frames, with their book [44]; these programs are based on the Perturbation Method. Shen Shang and Gun Jin Yun introduced the Karhunen-Loève expansion merged with MCS into ABAQUS [63] with a subroutine called SFEQ8 (stochastic Q8 finite element) written in FORTRAN [64]. A review paper by Eiermann et al. [65] provides an overview of the computational aspects of the SFEM.

Table 1 lists the software packages described in this section, together with the key references.

Table 1 Software for the stochastic finite element method and reliability software (updated from [18])

4 Applications of the SFEM

In this section, the authors present a number of examples of how the SFEM is used in a range of different disciplines in science and engineering. The articles reviewed show how researchers adapt the SFEM and combine it with other techniques. Special attention is given to articles that show how experimental data are used to inform the generation of the random fields and the validation of the models. Table 2 summarizes the applications that are reviewed in detail in this paper.

Table 2 Summary of reviewed applications

4.1 Materials Science

In Materials Science, the SFEM is used to investigate the behaviour of complex materials such as composites and fibrous structures. Several techniques have been developed to depict the random properties of these specialized materials, six of which are presented here. In the first example, a multiscale methodology is used to link the complex microstructure of Polymer Nanocomposites (PNC) with their bulk properties. In the second, the effective material properties of the composite Ni–YSZ are investigated by combining the random generation of microstructure with the deterministic FEM. The third example provides a methodology for obtaining the elastic properties of porous materials by combining stochastic homogenization with the perturbation method. The fourth application models material imperfections in honeycomb structures by introducing geometric and material imperfections. The fifth application provides a framework to study the material properties of metal foams using computed tomography. The final application of this section studies the ultimate tensile strength of pine wood strands.

Polymer nanocomposites (PNC) are a mix of polymers and single-walled carbon nanotubes (SWCNT). PNC is considered a promising material for aerospace applications: the addition of SWCNT to the polymer mix improves several physical properties that are valuable to the aerospace industry in comparison with other carbon-fiber-reinforced polymeric composites [93]. PNC exhibits complex mechanical behaviour due to the random formation of bundles, agglomerates and clusters during its manufacture; the random locations of these structures make the mechanical properties of PNC difficult to study by conventional means. Spanos and Kontsos proposed a multiscale methodology to fully characterize the material using stochastic finite element analysis [76]. In their study, PNC was considered a random heterogeneous medium in which the heterogeneity is introduced by the non-uniform spatial distribution of SWCNT inclusions, making the behaviour of the composite difficult to predict. The random locations and structures of SWCNT in their models are generated using a random field. Their methodology involved: (1) defining a representative material region; (2) characterizing randomness to create a random field model; (3) homogenization and (4) solution using the Monte Carlo finite element method (Fig. 2).

Fig. 2

Multiscale model based on Monte Carlo simulation to estimate the mechanical properties of PNC

A Representative Volume Element (RVE) [94] was used in this research to obtain a portion of material that captures the essential characteristics and bulk properties in a reproducible way. Microscopy images and experiments were used to identify the types of SWCNT inclusions in the PNC and their effect on behaviour. The lack of precise data on the actual distribution of the different formations of SWCNT forced the researchers to propose three different types of probability distribution, namely uniform, beta and log-normal. A numerical solver developed in MATLAB was used to solve each realization of the random field for a thin plate of PNC subjected to a static load under plane stress conditions. Each element of the proposed mesh was characterized with a different volume fraction of SWCNT, hence each element had a different value of Young's modulus. The characterization of each element was complemented by a homogenization procedure using the Mori-Tanaka (MT) method [95, 96]; for this homogenization scheme, the mechanical properties of SWCNT were obtained using a model proposed by Odegard et al. [97]. Once the whole model was constructed, the Young's modulus and Poisson's ratio of the PNC were estimated using an MCS approach. The proposed model shows good agreement with the experimental data presented in the article.

Another study that incorporates a multiscale model of a complex fibrous material is described by Hatami-Marbini, Shahsavari et al. [98]. Yavari and Kadivar also study fibre composites [99].

In our second example for Materials Science, Johnson and Qu use the SFEM to estimate the effective elastic Young's modulus and coefficient of thermal expansion (CTE) of the Ni–YSZ anode of a planar solid oxide fuel cell [77]. Solid oxide fuel cells (SOFCs) convert chemical fuels into electricity and are generally considered more efficient and more environmentally friendly than conventional energy systems [100]. SOFCs are usually composed of layers of porous metal composites and solid ceramics [101, 102]. In Johnson and Qu's paper, the Ni–YSZ anode provides structural support, needs to provide enough space for the transport of fuels and must resist drastic changes in temperature. To represent this porous composite the authors created several digital realizations of the material that mimic the microstructure and volume fractions of each phase. These realizations were then used to obtain the effective elastic modulus and coefficient of thermal expansion.

The realizations of random media were based on a modified version of the simulated annealing methodology proposed by Yeong and Torquato (Sect. 2.2) and used by others [103, 104]. Each phase of the Ni–YSZ was assigned the experimental values of elastic modulus and Poisson's ratio for nickel [105] and for YSZ [106]. The coefficient of thermal expansion was considered temperature dependent, so the equation proposed by Faisst was used to estimate the behaviour of this material property [107]. The general procedure to generate the realizations of porous Ni–YSZ consists of six steps: (1) randomly generate a realization of the multiphase material matching the volume fraction of each phase; (2) estimate the probability distribution of the realization; (3) exchange voxels so as to change the energy of the system, while maintaining the volume fraction of each phase; (4) recalculate the probability distribution and obtain the current energy of the realization; (5) verify whether the energy of the realization fulfils a given criterion; (6) repeat the process until the criterion is satisfied.

The realizations were imported into the finite element software ABAQUS to calculate the effective elastic modulus at room temperature. The CTE was calculated over the range 0 to 1,000 \(^{\circ }\hbox {C}\). Two different procedures were used to measure the convergence of the FE models: the discretization error and the representative volume element (RVE) size. Finally, both convergence measures were compared with experimental results.

The third example describes a framework that combines stochastic homogenization with the perturbation method to analyse the elastic properties of porous materials. This paper by Sakata et al. [78] provides the mathematical background for computing the elastic properties of porous materials and compares the results with experimental data. A manufacturing process known as rapid prototyping was used to create samples that recreate the geometries used in the computational experiments. The approach considers porous materials with periodic voids, which enables the size, shape and volume fraction of the pores to be controlled. The elastic properties of the material are estimated by combining a stochastic homogenization scheme with the perturbation method; the homogenization theory of this article is based on reference [108]. Numerical examples are provided to illustrate the influence of the shape, volume fraction and size of the pores on the coefficient of variation of the equivalent elastic properties. In these examples the elastic material properties are those of an epoxy resin with a void volume fraction of 0.2. The accuracy of the proposed perturbation method was measured by comparison with results obtained using MCS. Test probes manufactured with a rapid prototyping system, containing a 2D distribution of voids, were subjected to uniaxial tensile tests for comparison with the numerical results.

An alternative procedure for analysing materials with random porosity in composites can be found in the study by Yu et al. [109].

The fourth application, by Asprone et al. [79], analyses the mechanical performance under compression of honeycomb structures made of phenolic resin-impregnated aramid paper (Nomex). The computational simulations aim to reproduce the buckling, compression and crushing response of the honeycomb structure observed in experimental compression tests.

A single unit cell of honeycomb is used for the model. Common manufacturing defects are simulated by varying the Young's modulus and material thickness; the authors provide references showing evidence of the material variability of honeycomb structures made of Nomex [110, 111]. The imperfections of the honeycomb structure are modelled using a normal distribution for the variation in Young's modulus and material thickness. The constitutive behaviour of the model is that of an isotropic, linear elastic-perfectly plastic material. The geometry was discretized using 9000 S4 shell elements. Several cases are analysed, investigating the responses to variation of the thickness and Young's modulus with coefficients of variation of 5, 10, 15 and 20 %. The analysis is also extended to a similar honeycomb structure made of Hexcel AL-2052-H39, an aluminium honeycomb; in this case coefficients of variation of 5 and 20 % are used. The numerical results for both the Nomex and aluminium honeycomb structures are compared with experimental results. For the aluminium structure the results behave similarly to the experiments when the coefficient of variation is 5 %, reflecting the lower variability of aluminium honeycomb structures. The numerical results for the Nomex structure show good agreement with the experimental results when the Young's modulus and thickness are included with coefficients of variation in the range 10 to 15 %.

Two similar computational studies that include random variables in the analysis of honeycomb structures have been published. The first studies honeycomb structures with random geometric parameters, using the FEM together with a Gaussian process emulator to reduce the computational expense [112]. The second, by Sotomayor et al. [113], creates honeycomb structures with different levels of randomness, controlled by the regularity of the cells created using Voronoi tessellations.

In the fifth paper of this subsection, Geißendörfer et al. [80] describe a multiscale approach in which computer models are generated from tomographic images of metal foams. The results obtained using this methodology are compared against experimental results for Duocel, a copper foam.

The proposed methodology consists of seven steps: (1) data collection via tomographic imaging; (2) digital image reconstruction using the software MAVI [114]; (3) extraction of key geometric parameters, such as volume fraction, from the 3D images using minus-sampling edge correction [115]; (4) generation of geometric representations of the microstructure using a power-tessellation generator [116]; (5) finite element analysis of the generated microstructures; (6) use of the SFEM to analyse the microstructure and (7) statistical analysis to obtain the material properties from the results of the SFEM. Figure 3 shows the general steps used.

Fig. 3

Methodology steps of the proposed multiscale approach to study metal foams

After the generation of the foam microstructures, a stochastic volume element methodology is used to obtain statistical information on the elastic material properties of the foam model. In this case a set of 100 models is created using the power-tessellation generator [116] to determine the Young's modulus and its influence on the natural frequencies of the material. Two types of boundary condition are applied: the kinematic uniform boundary condition (KUBC) and the static uniform boundary condition (SUBC). Additional analyses are performed on 15 computationally generated beam structures to determine the correlation functions of the linear elastic material properties. The statistical information from the previous steps is used to generate random fields; the selected generator produces non-Gaussian random fields discretized with a truncated Karhunen-Loève expansion. These random fields are then integrated in an MCS procedure to calculate the bending frequencies of the realizations. The comparison between the experimental values and the mean of the computational predictions of the natural frequencies gives a difference of 3 %.

An alternative method for generating stochastic foams with Voronoi cells is described in [117].

The final example in this subsection is an SFEM study of loblolly pine strands. In this study by Jeong and Hindman [81] the strength of wood is assumed to be a stochastic variable linked to the variability of material properties in wood. Wood can be considered a composite made of volume fractions of juvenile and mature wood; different cell thicknesses and fibril angles are found in juvenile and mature wood, so there is a distinct type of material variability in each phase. A deterministic analysis cannot fully account for the material variability present in wood, which is why the authors implemented the SFEM in their analysis. The objective of this study is to determine the ultimate tensile strength of four strand orientation models. Four wood patterns with different strand orientations containing juvenile and mature wood are proposed: radial, tangential, angled and homogeneous (single) grain. The homogeneous grain is modelled deterministically and the other three are modelled stochastically. In all the models the content of juvenile and mature wood is distributed equally (50 and 50 %). A 2D tensile computational experiment was set up to test the ultimate tensile strength of the pine wood. The geometry of the model is a square prism. Boundary conditions are set at the top and bottom of the prism: the base is fixed in the x and y directions, and a uniform displacement is applied at the top end of the geometry to simulate a tensile test under linear elastic conditions. Experimental data were used by the researchers to calibrate the random values of strength for each type of wood considered in the modelling [118]. An MCS scheme is selected to obtain the ultimate tensile strength of the pine wood with a Tsai-Hill failure criterion [119].

Computational results for average strengths and stress distributions are compared against experimental results. The average strengths calculated with the SFEM were accurate for the range of proposed cases. A discussion of the stress distributions for each case is presented by the authors, and a sensitivity analysis of the ultimate tensile strength for different grain orientations is also provided.

4.2 Biomechanics

The SFEM is starting to take a prominent position in the field of Biomechanics due to the great variability of biological structures. Material properties and geometry in biological structures are strongly dependent on factors such as genetics, age, environment and other external phenomena. Consideration of random variations is therefore crucial to a full understanding of a biological structure.

A review article that covers the use of SFEM in biomechanics is presented by Laz and Browne [120]. This gives a summary of the analysis methods and provides some examples on how a probabilistic approach including the SFEM is used in several types of application. The areas covered by the application examples include structural reliability, kinematics, joint mechanics, musculoskeletal modelling and patient-specific representation. Challenges such as the appropriate selection of input parameters, bounds and the difficulties in measuring several variables in biological systems are also addressed.

Here, six recent examples are reviewed that typify the use of the SFEM in biomechanics. The first is an article that concerns the variability of the geometry of the human spine. The second investigates the effect of variability of material properties in a craniofacial skeletal structure. In the third and fourth papers, the key parameters and factors that create an ageing effect on a hip implant and a knee implant, respectively, are determined with the SFEM. The fifth application uses the SFEM to simulate the damage and bonding between different types of bone phase. The application that closes this section proposes a stress-based criterion, determined using the SFEM, to predict the rupture of aneurysms in human aortas.

The human spine is a complex structure formed of several components and tissues with different classes of materials. Many FEM models have been implemented to investigate different types of normal motion, and factors such as disease or accident that could affect the normal function of the human spine [121]. In the most recent work covered here, Niemeyer et al. study the influence of variability in spine component geometry on how the spine behaves [82]. Additionally, they establish the degree of influence of each parameter studied. The leading sources of variability considered by the authors include natural variations in geometry and errors in measurement. The study uses Probabilistic Sensitivity Analysis (PSA) combined with MCS. A model generator implemented in ANSYS Parametric Design Language (APDL) was used to create the models and their random variations. The geometric complexity of the human spine was replaced with simplified models built from geometric primitives; ligaments and other structures were also considered. The material properties used in this model were obtained from the literature. The spine was subjected to compression, extension, flexion, lateral bending to the right and left, and rotation to the left and right. The selected response variables were the intradiscal pressure, range of motion and contact forces. An a priori power analysis was performed to estimate the number of models required to obtain a significant sample. The research confirmed that geometry strongly affects the response variables.

In the second Biomechanics example, the influence of random material properties was studied in a macaque cranium using MCS [83]. A number of hypotheses were proposed to test whether the randomness of material properties had a significant effect on the selected response variables. For example, one hypothesis tested whether the variability of the material properties of bone between individuals leads to changes in the moderate-to-high stress regions occurring in the cranium. Another considered whether the variability of the material property values increased stress as the degree of anisotropy in the material increases.

Six different models, based on the model of reference [122], were created to verify each hypothesis. In the models, the material properties and coefficients of variation of the different types of material were varied. The randomized material properties were the modulus of elasticity, shear modulus and Poisson's ratio, sampled from Gaussian distributions. Each model was run up to 144 times using ANSYS APDL 13.0. Strain measures in 35 regions of the cranium were used as response variables. Furthermore, the simulations were divided into two sets, one using only empirical data [123] and the other using the same empirical data but with a coefficient of variation of 0.20. To reduce the number of simulations required for the MCS, Latin hypercube sampling was performed (sketched below). The six models were assigned different types of material properties: two models were considered isotropic and were divided into trabecular bone, cortical bone and teeth; a further two were divided into 35 regions with different isotropic material properties; and in the final two models, the material properties of the 35 regions were considered orthotropic. The results for the response variables were analysed using ANSYS and compared with in-vivo data [124]. Finally, the researchers tested each hypothesis with a statistical analysis of the response variables of each model.
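A minimal sketch of the Latin hypercube sampling step is shown below; the number of runs echoes the 144 used in the study, but the three inputs and their statistics are illustrative placeholders rather than the values from [83].

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin hypercube sampling (LHS) stratifies every input dimension, so a
# given number of FE runs covers the input space better than plain
# random sampling, which is the variance-reduction idea used here.
n_runs, n_vars = 144, 3                    # e.g. E, G and nu for one region
sampler = qmc.LatinHypercube(d=n_vars, seed=3)
u = sampler.random(n=n_runs)               # stratified points in (0, 1)^d

means = np.array([17.0e9, 6.0e9, 0.28])    # assumed property means
cov = 0.20                                 # coefficient of variation
samples = norm.ppf(u, loc=means, scale=cov * means)   # Gaussian marginals
# each row of `samples` parameterizes one deterministic ANSYS run
```

The inverse-CDF mapping in the last step is what turns the stratified unit-cube points into samples with the desired marginal distributions.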

The third application investigates the parameters and factors that contribute significantly to the deterioration of a hip implant. This particular hip implant is used during total hip arthroplasty, the replacement of the hip joint. In this example by Donaldson et al. [84] the SFEM is used to introduce random variations into the geometry, material properties and loading conditions. Two types of model are considered: a deterministic model that serves as a baseline to validate the numerical calculations against mechanical tests, and stochastic models used to determine the variables that contribute to the damage processes in the hip implant. Two parts of the hip implant are considered, a taper (head sleeve) and a trunnion (neck post), with surface-to-surface contact assigned between them. The mechanical properties of both parts are considered linear elastic. The study is carried out using the software ANSYS and consists of two simulations for the deterministic case and 400 realizations for the stochastic case.

The geometry for the deterministic analysis is a hip implant at 3:1 scale with the mechanical properties of 6061 aluminium; this size and material were chosen to match the test specimens. The two sets of hip implants were manufactured with different angular mismatches. Larger hip implant specimens were selected because of the difficulty of measuring micro-motions and taper angles in full-size samples. The validation against experimental results showed trends and values similar to the deterministic numerical analysis.

In the stochastic analysis, the material properties are those of CoCrMo/TiAlV, taken from an ASM materials database, and the geometry is assumed to be that of a standard-size hip implant. The geometry, loading and material properties of the hip implant are stochastically distributed. Firstly, the taper and trunnion are assigned seven stochastic geometric parameters, whose values are selected from the range within which most implants for total hip arthroplasty are designed. Secondly, the loading conditions consist of two parts: an impact composed of five parameters that follow a statistical distribution, and two twin gait loading cycles that are stochastically distributed; several experimental sources were combined and adapted to calibrate the loading parameters against common patient data. Thirdly, the elastic modulus, Poisson's ratio and coefficient of friction of the CoCrMo/TiAlV implant are treated as random variables, with values for both materials obtained from experimental data. The stochastic variables in the model are shown in Fig. 4.

Fig. 4

Stochastic variables considered in the modelling of a hip implant

The damage produced in the implant is tracked through three key response variables: the contact pressure, the micro-motion and the fretting work done. A statistical analysis is performed to determine which stochastic variables have the most influence on these variables of interest. The stochastic variables that were correlated with the response variables were the taper-trunnion angular mismatch, patient weight and center offset.

Research carried out by Arsene and Gabrys [85] serves as our fourth application example, in which the authors implement the SFEM to investigate the importance of 77 stochastic input variables for total knee replacement surgery. The aim of this study is to help surgeons make informed decisions in their surgical plan and to predict the peak pressure generated by the changes produced by the surgery. Key geometric and material property parameters are determined from a statistical analysis based on an MCS methodology and the response surface method.

The boundary and loading conditions aim to reproduce the motions and conditions of stair ascent by a patient. Bar elements are used to represent the muscles and apply the forces. Two software packages, PamOpt [125] and PamCrash [126], are used to perform the analyses; both are capable of parallel computation.

The SFEM study is divided into four stages, using two different methodologies. The first methodology uses MCS with 77 stochastic input parameters; the second uses a response surface method with the same number of input parameters. The same two methodologies are then repeated with a reduced set of 22 key stochastic parameters. The 77 input parameters are obtained from experimental and computational sources found in the literature. The 8 response variables defined include the kinematics and peak contact pressures of the patella-femoral and tibio-femoral joints; these response variables are considered to be the ones that determine the comfort of the patient after surgery. In the first stage of the study, the MCS with 77 stochastic variables, run over 800 realizations, generates results that serve as a baseline for comparison with the remaining stages. A sketch of the response surface idea is given below.
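The sketch below illustrates the response surface idea in miniature: a quadratic surrogate is fitted by least squares to a handful of solver evaluations and is then sampled cheaply in place of the solver. The two-input response function is a stand-in, not the PamCrash model of [85].

```python
import numpy as np

rng = np.random.default_rng(9)

def fe_response(X):
    """Placeholder for an expensive solver call (one row = one design)."""
    return 50.0 + 4.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1]

X = rng.uniform(-1.0, 1.0, size=(30, 2))   # 30 design points, 2 inputs
y = fe_response(X)                         # 30 "FE runs"

def basis(X):
    """Quadratic basis [1, x1, x2, x1^2, x2^2, x1*x2]."""
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # least-squares fit

# Monte Carlo on the surrogate costs almost nothing per sample
Xmc = 0.3 * rng.standard_normal((100_000, 2))
response = basis(Xmc) @ coef
print(response.mean(), response.std())
```

Fitting once and then sampling the surrogate is what allows the later stages of such a study to explore many input combinations without rerunning the solver.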

The results of the four stages are compared in the study, and the key parameters that have the strongest influence on the response variables are reported. The authors highlight the possibility of obtaining good quality results with a reduced number of random variables, allowing surgeons to make quick and informed decisions during the planning of the surgery.

The fifth example combines a cohesive element technique with the SFEM to analyse the damage produced at three types of interfacial interaction in bone [86]. Micro-damage accumulation serves as a principal mechanism for releasing energy in bone. The direct experimental study of micro-fractures in bone is difficult to carry out, so researchers have opted to analyse this phenomenon with the FEM.

The lamellar bone tissue is represented by a 2D plane-strain model composed of layers of mineral and collagen. The layers are composed of intercalated mineral and collagen sections; each layer has a thickness of one unit and a length of 300 units. The layers of bone tissue are subjected to a tensile load. In order to propagate the damage, an initial imperfection is placed at the centre of the model, where a mineral layer is located. Linear elastic behaviour is assigned to the mineral sections, while nonlinear behaviour is assigned to the collagen sections. Three types of mineral-collagen interface are proposed in this study: strong, intermediate and weak. The interfaces are characterized by the type of structures and the opening and sliding modes between the mineral and collagen. For the strong interface, an electrostatic interaction is modelled for the opening and sliding modes; the structures where the interactions occur are anionic macromolecules present in collagen and cationic mineral crystals. In the intermediate interface, a thin layer of water is considered to be present between the mineral and collagen phases, causing a hydrogen-bond opening mode and a van der Waals sliding mode. The weak interface introduces a thick water layer, which causes a van der Waals opening mode and a viscous shear sliding mode. The interfaces between the layers of mineral and collagen are simulated with cohesive elements that represent the connections holding the phases together. Random fields are used to model the variability of the elastic modulus of the mineral phase; the random field generator and its calibration are based on the work carried out by Dong et al. [127] and Tai et al. [128], and the generated values follow a Gaussian normal distribution. The realizations are analysed with MCS, and a failure analysis is performed for each realization.

The three types of interface produce different types of damage and defects. The strong interface shows partial damage of the cohesive elements, so the phases remain together. The intermediate interface experiences propagation of damage across the simulated geometry. The weak interface produces larger damaged regions at a faster rate than the other two interfaces. Through this study the researchers revealed some of the mechanisms by which bone accumulates micro-damage.

The final paper on Biomechanics is by Celi and Berti [87], in which the SFEM is used to predict the possible rupture of an aneurysm in a human aorta. The maximum diameter of an aneurysm is the principal criterion used to predict the risk of rupture; nevertheless, there is evidence that the risk of rupture depends on other factors. An alternative way to predict the rupture of the human aorta is therefore proposed: the estimation of the mechanical stress in the arch segment produced by the aneurysm. The authors propose two methodologies to create the geometry of study: first, a patient-specific 3D model generated from computed tomography, and second, a method that uses a bank of morphological features obtained from the computed tomography to reproduce the aneurysm in the arch segment region.

A data bank of 18 electrocardiographic CT datasets was processed with an in-house Matlab code to obtain the patient-specific models and the key geometrical parameters of the thoracic aortic aneurysms. From these parameters, three morphological variables are used to represent a thoracic aortic aneurysm: the maximum diameter ratio, the lesion extension ratio and the lesion position along the thoracic arch. Upper and lower bound values for each variable are determined so as to represent the morphological parameters of the 3D models.

In this research, the 18 patient-specific cases constitute the deterministic FEM analyses. The mechanical properties are isotropic, hyperelastic and constant, and the wall thickness is also taken as constant. Pressure is applied to the inner surface of the models to replicate the loading produced by the human heart.

The stochastic analysis treats the three geometric parameters, the maximum diameter ratio, the lesion extension ratio and the lesion position, as random. An MCS is used to study the influence of the randomized geometric parameters of the aneurysm on the stresses generated in the region of study.

The authors concluded that the maximum diameter and eccentricity ratio have the strongest correlation with the stresses measured in the models. They further concluded that FEM and SFEM modelling can help to predict possible aneurysm rupture through stress analysis.

4.3 Engineering

The final example applications are related to Engineering. In this discipline, the SFEM is used to estimate the reliability and performance of materials and structures such as soils, bridges, machine components and concrete. Several examples are provided to illustrate the use of the SFEM in this discipline. The first concerns the use of random fields to study geomaterials with voids, and the second studies the impact of projectiles on randomly generated rock-rubble concrete. In the third example, random initial imperfections of tubes are introduced into the mechanical modelling of shell structures. The fourth application investigates the effects of air voids on the mechanical performance of hot mix asphalt. The final application analyses the damage produced by random traffic loading on steel bridges.

Foundations are constructed in very diverse types of material, such as clay, soluble rocks, limestone and dolomite. The last three may contain a significant quantity of large voids that can have a considerable effect on the settlement of a foundation. An MCS approach has been used to obtain the effective elastic parameters of materials with voids [88, 129]. To represent materials with voids, these researchers adapted random field generators from the RFEM [10]. The same geometry and boundary conditions were used for each realization. A cube formed of 50 × 50 × 50 8-node hexahedra was taken as the basic geometry of the model. A tied-freedom approach was applied to ensure that the faces of the cube deformed uniformly, maintaining a geometry similar to the initial state. The boundary conditions consist of a vertical force that compresses the cube at the top face, with the base of the cube constrained in the z direction (the direction of the force). With this set of boundary conditions, the effective Young's modulus and Poisson's ratio can be calculated directly from linear elastic theory. After the boundary conditions were set in the 3D cube models, voids were assigned randomly to the model using the Local Average Subdivision (LAS) method [26]. The voids of the random field were characterized mainly through the correlation length, which represents the distance over which property values remain strongly correlated. Here a large correlation length produces a few voids, while smaller values create more widespread void regions. The elements that were not voids were assigned constant material properties, whereas the voids were assigned a Young's modulus 100 times smaller than that of the intact elements. The Poisson's ratio was taken as equal for all elements. All realizations of the model were then solved using the preconditioned conjugate gradient (PCG) method [62]. Together, the random fields, boundary conditions and FEM form an MCS process. The outputs of the elastic analysis were the vertical and horizontal deformations of the block. The results of the 2D simulations considered in previous work [129] were compared with independent theoretical results, and a comparison between the 2D and 3D simulations is given. Finally, a similar methodology combined with a probabilistic study was used to interpret the influence of voids on the settlement of a strip footing.
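
A hedged sketch of the realization loop is given below. The LAS generator [26] and the PCG finite element solve [62] are replaced by simple stand-ins (smoothed white noise for the correlated field, and a harmonic average for the effective stiffness); the correlation length and void fraction are illustrative assumptions, not values from the cited work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
N = 50                       # 50 x 50 x 50 element cube, as in the paper
CORR_LEN = 5.0               # correlation length in element widths (assumed)
VOID_FRACTION = 0.1          # target void fraction (assumed)
E_INTACT = 1.0
E_VOID = E_INTACT / 100.0    # voids are 100x softer, as described

def one_realization():
    # smoothed white noise as a stand-in for an LAS-generated field
    field = gaussian_filter(rng.standard_normal((N, N, N)), sigma=CORR_LEN)
    threshold = np.quantile(field, VOID_FRACTION)
    E = np.where(field < threshold, E_VOID, E_INTACT)
    # harmonic (Reuss) average as a crude placeholder for the FE estimate
    return 1.0 / np.mean(1.0 / E)

samples = [one_realization() for _ in range(50)]
print(f"effective E: mean={np.mean(samples):.3f}, std={np.std(samples):.3f}")
```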

A compilation of several applications of the SFEM in Geotechnical Engineering can be found in Fenton and Griffiths' book [10]. Another example from Geotechnical Engineering uses the Perturbation Method for modelling groundwater flow (Yao et al. [130]). There are many applications of the SFEM in Civil Engineering; two examples for bridges are described by Cavdar et al. [131, 132].

The second example in this section addresses the performance of rock-rubble overlays subjected to the impact of projectiles [89]. The authors produced several algorithms to generate rock-rubble particles with random geometries. These particles are then embedded in grouted concrete to generate FE models. A parametric study shows the influence of the volume fraction and size of the rock-rubble on performance.

The generation of rock-rubble particles starts with the random generation of a quadrilateral that is then modified until an octahedron is created. A series of points is calculated from the octahedron to create a random polyhedron, which is then modified until a number of conditions are fulfilled. Several rock-rubble particles are generated for integration into an FE model. The authors devised an algorithm to simulate the dropping and compacting of the rock-rubble and grouted concrete, which can be summarized as follows (a simplified sketch is given below): (1) randomly drop the rock-rubble particles; (2) check that each particle has been placed inside the bursting layer, and reposition it if not; (3) repeat step 2 until all the particles are placed inside the bursting layer; (4) finally, inspect the particles for overlap, rotating any overlapping particles until the condition is fulfilled. After the rock-rubble particles are placed, the compaction process starts. While the rock-rubble particles are being dropped, the volume percentage is continuously computed. If the desired volume fraction is not obtained, a sinusoidal forced vibration is simulated at the bottom of the bursting layer to rearrange the particles and reach the desired rock-rubble volume fraction. At the end of the process the position of each particle is recorded. These positions are used to map the boundaries of each phase of the material and then to generate the mesh of the FE model. When the mapping ends, each element is assigned the material properties of either the rock-rubble particles or the grouted concrete.
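
The following simplified 2-D sketch illustrates the drop-and-check logic of steps (1)-(4), using circular particles and rejection sampling instead of random polyhedra; the layer dimensions, particle radii and target fraction are assumptions, and the vibration-based compaction step is only noted as a comment.

```python
import numpy as np

rng = np.random.default_rng(2)
LAYER_W, LAYER_H = 100.0, 30.0   # bursting layer size (assumed)
TARGET_FRACTION = 0.35           # target area fraction (assumed)
R_MIN, R_MAX = 1.0, 3.0          # particle radii (assumed)

placed = []          # list of (x, y, r)
area = 0.0
while area / (LAYER_W * LAYER_H) < TARGET_FRACTION:
    r = rng.uniform(R_MIN, R_MAX)
    for _ in range(1000):                     # step 2: reposition on failure
        x = rng.uniform(r, LAYER_W - r)       # inside the bursting layer
        y = rng.uniform(r, LAYER_H - r)
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
               for px, py, pr in placed):     # step 4: no overlap
            placed.append((x, y, r))
            area += np.pi * r * r
            break
    else:
        break  # cannot pack further; the real algorithm vibrates/compacts

print(f"{len(placed)} particles, fraction = {area / (LAYER_W * LAYER_H):.2f}")
```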

The authors used the LS-DYNA hydrocode [133], together with FORTRAN and APDL (ANSYS Parametric Design Language), to analyse the resistance of the rock-rubble overlays to the impact of projectiles. Additional constitutive models were considered to simulate the materials of the projectiles and the target [134, 135]. Furthermore, the contact and friction effects between the rock-rubble, grouted concrete and the projectile were added to the model through the eroding surface-to-surface contact option in LS-DYNA. Some of the finite elements become highly distorted due to the impact, and an erosion technique is used to remove these elements. Two erosion criteria were selected for the target materials, the concrete and the rock-rubble: the criterion for the concrete depends on thresholds of maximum principal strain and shear strain, while for the rock-rubble elements the criterion depends on a maximum principal strain value.
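
The erosion step can be illustrated with a short sketch: an element is flagged for deletion when its maximum principal strain exceeds a threshold. The threshold value and the strain tensor below are invented for illustration; the actual criteria are applied inside LS-DYNA's material and erosion options.

```python
import numpy as np

MAX_PRINCIPAL_STRAIN = 0.15   # assumed threshold for rock-rubble elements

def should_erode(strain_tensor):
    """strain_tensor: symmetric 3x3 array of small strains; the element is
    eroded when the largest principal strain exceeds the threshold."""
    principal = np.linalg.eigvalsh(strain_tensor)   # sorted ascending
    return principal[-1] > MAX_PRINCIPAL_STRAIN

eps = np.array([[0.20, 0.01, 0.0],
                [0.01, 0.05, 0.0],
                [0.0,  0.0,  0.02]])
print(should_erode(eps))   # True: this element would be deleted
```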

The results obtained from the simulations were compared with experimental results [136]. Furthermore, a parametric study was performed to analyse the influence of the size of the rock-rubble and the rock volume fraction.

The third example studies the limit load of steel tubes with random initial geometric imperfections. The research by Vryzidis et al. [90] implements the method of separation combined with a spectral representation method to introduce initial geometric imperfections into the modelling of steel tubes. Figure 5 summarizes the fundamental steps and the products of each step.

Fig. 5 General steps for the study of tubes with random initial imperfections

Figure 5 starts with the acquisition of the input values for the random field from a data bank that contains information about geometric imperfections in steel tubes [137]. Next, the evolutionary power spectrum is calculated using the method of separation [138]. Once the evolutionary power spectrum is determined, the spectral representation method [139] is used to create the random field that represents the initial defects. The geometry of the tubes with imperfections is obtained from an initially perfect geometry using a mean function that represents the imperfections plus a zero-mean non-homogeneous Gaussian stochastic field. The authors selected the MCS to analyse the random fields, so a mesh sensitivity analysis was performed to balance the accuracy of the analyses against computation time. Once the mesh is determined, the imperfect geometries of the steel tubes are analysed under several loading conditions. Fifty nonlinear analyses are run on tubes with initial geometric imperfections under axial load, lateral pressure, and combined axial load and lateral pressure. Each loading condition is compared with its deterministic counterpart, showing that geometric imperfections cause premature failure of the steel tubes.
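
As a minimal illustration of the spectral representation step, the sketch below generates one sample of a zero-mean 1-D Gaussian field from an assumed analytic power spectrum. The paper instead uses the evolutionary, non-homogeneous spectrum estimated from the imperfection data bank [137, 138]; the spectrum, wavenumber cutoff and domain length here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def power_spectrum(k, sigma=1.0, b=1.0):
    # assumed analytic one-sided spectrum, standing in for the
    # data-derived evolutionary spectrum of [138]
    return sigma**2 * b / np.pi / (1.0 + (b * k)**2)

N_WAVES, K_MAX = 256, 8.0
dk = K_MAX / N_WAVES
k = (np.arange(N_WAVES) + 0.5) * dk
phi = rng.uniform(0.0, 2.0 * np.pi, size=N_WAVES)   # random phases

x = np.linspace(0.0, 300.0, 1001)
amplitudes = np.sqrt(2.0 * power_spectrum(k) * dk)
# spectral representation: f(x) = sqrt(2) * sum_n A_n cos(k_n x + phi_n)
f = np.sqrt(2.0) * (amplitudes * np.cos(np.outer(x, k) + phi)).sum(axis=1)

print(f"sample mean {f.mean():.3f}, sample std {f.std():.3f}")
```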

Other methodologies similar to this application can be found in the literature: examples include Combescure [140] for thin-walled structures, Kamiński and Świta [141] for steel tanks, and Chryssanthopoulos and Poggi [142] and Papadopoulos [143] for imperfect shells.

The fourth example concerns flexible pavements [91]. These are manufactured from hot mix asphalt, comprising asphalt binder, aggregates and air voids. The properties of the asphalt depend on three factors: the properties of each phase, the material properties of the mixture as a whole and the manufacturing process. The presence of air voids largely determines the performance of hot mix asphalt, and the objective of this study is to take their influence into account in the modelling of the material. Two probabilistic models were used: the first approach represents the quantity and distribution of air through a single randomized value of the material properties, while the second considers the spatial variability of the material properties using values generated from random fields.

The authors provide experimental evidence, obtained from several sources, of the distribution and volume fraction of air voids calculated using X-ray computed tomography data. The distribution of voids in hot mix asphalt depends on the depth within the material: large voids represent 11 % of the volume fraction of the top layer, the middle section contains 5 % air voids, and voids make up around 5–9 % of the bottom layer. The air void content is directly related to the material properties of hot mix asphalt and to deterioration processes such as oxidation and moisture damage.

The model considers a pavement structure composed of four layers: an asphalt course, an unbound granular base layer, an unbound granular sub-base and a sub-grade layer. Only the top asphalt layer is represented by a single random value or by a random field. Both approaches assume that the top layer has stochastic linear viscoelastic material behaviour, while the remaining layers are considered linear elastic, homogeneous and deterministic. A mechanical load is applied at the top of the geometry to determine the performance of the layered structure. Boundary conditions are applied at the edges of the structure in the X and Y directions. The SFEM analysis was carried out using the software Abaqus. In total, 100 realizations are analysed for each procedure, following an MCS scheme.
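
The two uncertainty representations can be contrasted with a short sketch: one stiffness value for the whole top layer per realization, versus a spatially correlated stiffness field over the layer's elements. The distribution parameters, element count and the smoothed-noise field generator are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
N_REALIZATIONS, N_ELEMENTS = 100, 200
E_MEAN, E_COV = 3000.0, 0.2            # MPa and coefficient of variation
CORR_LEN = 10.0                        # in element widths (assumed)

# approach 1: a single random stiffness value per realization
E_single = rng.normal(E_MEAN, E_COV * E_MEAN, size=N_REALIZATIONS)

# approach 2: a spatially correlated stiffness field per realization
# (smoothed white noise as a simple stand-in for a random field generator)
def field_realization():
    w = gaussian_filter1d(rng.standard_normal(N_ELEMENTS), CORR_LEN)
    w /= w.std()                       # rescale to unit variance
    return E_MEAN * (1.0 + E_COV * w)

E_fields = np.array([field_realization() for _ in range(N_REALIZATIONS)])
print("approach 1 stiffness std:", round(float(E_single.std()), 1))
print("approach 2 stiffness std:", round(float(E_fields.std()), 1))
```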

The response variable that measures the mechanical performance of the hot mix asphalt is the horizontal strain. The variability of the single-random-variable approach is smaller than that of the spatial-variability approach with random fields. Furthermore, the maximum horizontal displacement values of both approaches are used to generate probability density functions, which show larger variability for the random field approach. The authors thus showed that the spatial variability of air voids influences the computed response and performance of hot mix asphalt.

The final example in this subsection is an analysis of fatigue in steel bridges due to random vehicle loading [92]. Steel bridges are subjected to continuous traffic loading that, over time, generates fatigue in the components of the bridge. Several types of instrument can be placed on a bridge to track the effects of traffic loading, e.g. strain gauges, displacement sensors and accelerometers. The authors selected the 50-year-old Throgs Neck Bridge for their study because it has been instrumented to track its fatigue; existing cracks had already been registered at several locations on the bridge. Data generated by weigh-in-motion devices record the overall weight of a vehicle and its axle weights. Furthermore, camera recordings allow researchers to confirm the type of vehicle crossing the bridge.

The FEM modelling is divided into two stages: a deterministic stage to validate and calibrate the parameters of the model, and a stochastic stage in which the random loadings are included. The deterministic analysis is computed with the FEM software ANSYS: a linear elastic analysis with a deterministic moving unit load applied over several load steps. The model is then recalibrated against strains recorded at the bridge to validate the deterministic results. For the stochastic case, two random number generators produce uniformly distributed values that determine which lane of the bridge is loaded and which type of vehicle is used for the loading in each analysis. Another set of random numbers, generated with Latin Hypercube Sampling, characterizes the vehicle (axle weights and axle spacings). A total of 50 realizations are created to evaluate the fatigue and reliability of the bridge. The SFEM is computed in Matlab using in-house software created by the authors.
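
The sampling stage can be sketched with SciPy's Latin Hypercube sampler as follows. The parameter bounds and lane count are invented for illustration; the original study conditions the vehicle parameters on the weigh-in-motion records for each vehicle type.

```python
import numpy as np
from scipy.stats import qmc

N_REALIZATIONS = 50
# dimensions: [axle 1 weight (kN), axle 2 weight (kN), axle spacing (m)]
lower = [20.0,  40.0, 3.0]     # assumed bounds
upper = [80.0, 120.0, 6.0]

sampler = qmc.LatinHypercube(d=3, seed=5)
unit_samples = sampler.random(n=N_REALIZATIONS)      # points in [0, 1)^3
vehicles = qmc.scale(unit_samples, lower, upper)     # scaled to bounds

rng = np.random.default_rng(6)
lanes = rng.integers(0, 4, size=N_REALIZATIONS)      # uniform lane choice
print(vehicles[0], "in lane", lanes[0])
```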

The authors propose this methodology as a means to monitor bridges, to predict possible damage caused by the fatigue of particular elements, and to reduce the frequency of direct bridge inspections.

5 Summary

This section of the paper summarizes the main features that need to be considered when using the SFEM in practical applications. It also reviews the advantages and disadvantages of each of the main branches of the SFEM, guiding the reader in selecting a method for their own applications.

5.1 Considerations for the SFEM

The SFEM requires a deep understanding of FEM theory. Several alterations and modifications of the FEM formulation are required to construct the system of equations and to select an appropriate solution strategy. In addition to the formulation of the SFEM, several experimental procedures and mathematical tools are required to analyse a stochastic model.

Generating the finite element mesh for an SFEM analysis involves additional factors that do not arise in the deterministic FEM. The description of a random field is governed by the number of random variables, the element size, the scale of fluctuation and the correlation length [7]. Careful selection of the number of random variables and elements is required to avoid creating simulations that are unnecessarily large. The use of small elements may cause high correlation between neighbouring elements. The scale of fluctuation should be smaller than the distance between element centroids, again to avoid high correlation between nearby elements [144].
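
A small numerical example makes the point. Assuming an exponential autocorrelation rho(d) = exp(-2d/theta), where theta is the scale of fluctuation, centroid spacings well below theta yield correlations close to one, so refining the mesh adds random variables without adding independent information. The 1-D spacings and theta value below are illustrative assumptions.

```python
import numpy as np

theta = 2.0                          # scale of fluctuation (assumed)
for h in (0.5, 2.0, 8.0):            # centroid spacings to compare
    rho = np.exp(-2.0 * h / theta)   # exponential autocorrelation model
    print(f"spacing {h:4.1f} -> neighbour correlation {rho:.3f}")
# small spacings (h << theta) give correlations near 1: extra random
# variables add cost without adding independent information
```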

Realistic SFEM applications require additional techniques to support the assumptions made about the variability of the system. Techniques such as image-based model characterization, representative volume elements (RVE), multiscale approaches, homogenization and random media techniques would greatly benefit the understanding, representation and treatment of stochastic models.

The lack of experimental procedures to measure the spatial variability of the mechanical properties of materials remains one of the open issues of the SFEM. Only a few studies in the literature use experimental data to justify the assumed variability of the material properties. The quality of research that implements the SFEM depends largely on the experimental data and on the assumed characteristics of the model. Furthermore, simulations are rarely validated against experimental data.

Another problem in the acquisition of data is that some variables are difficult to measure; this is particularly true in Biomechanics [120]. In areas such as engineering, where material properties tend to be better known, repositories of materials data do not always contain enough information to justify the type of random field to be used.

5.2 Comparison Between the Different SFEM Techniques

The MCS has proved to be the most general and the simplest SFEM approach. It is suitable for a wide range of applications and can be used for nonlinear problems. Accurate approximations can be calculated whenever the deterministic solution of the problem is available [17]. Moreover, the interfaces of modern FEM software allow the implementation of the SFEM, and in some packages the MCS is already a built-in feature. Because the realizations are computed independently of one another, the MCS is considered embarrassingly parallel; a scheme for its parallelization is proposed in [40]. Nevertheless, the MCS requires more computational power than either the Perturbation Method or the SSFEM for the same problem, as indicated by a comparison of computation times for a foundation settlement analysis [13]. Even so, the MCS remains the most general SFEM procedure.
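
The embarrassingly parallel structure can be sketched in a few lines: each realization is an independent deterministic solve, so realizations map directly onto worker processes. The one-parameter "model" below is a placeholder for a full FE solve, and the distribution parameters are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def solve_realization(seed):
    # each worker draws its own input and runs an independent solve
    rng = np.random.default_rng(seed)
    E = rng.lognormal(mean=0.0, sigma=0.2)   # one random input (assumed)
    return 1.0 / E                            # placeholder "displacement"

if __name__ == "__main__":
    with Pool(processes=4) as pool:           # realizations run in parallel
        results = pool.map(solve_realization, range(1000))
    print(f"mean={np.mean(results):.4f}, var={np.var(results):.4f}")
```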

The Perturbation Method is an efficient method to calculate the mean, variance and correlation coefficients of a stochastic model, and the computational cost of estimating these quantities is low in comparison with the MCS. The Perturbation Method is suitable for linear, nonlinear and eigenvalue problems [17]. Another valuable advantage of this method is that the calculated results are distribution-free [13]. Its main disadvantage is that it is limited to low coefficients of variation, around 10 to 15 per cent of the mean value of the variable of interest [3]; this limitation is more severe when the problem is nonlinear [2]. Moreover, the accuracy of the method also depends on the number of terms retained in the Taylor series expansion of the response variables.
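
A minimal sketch of a first-order perturbation estimate, with a scalar toy response standing in for a stochastic FE solve, illustrates the idea: expanding u(a) about the input mean gives E[u] ≈ u(mu) and Var[u] ≈ (du/da)² sigma². The response function and input statistics are assumptions, and an MCS run is included only as a check.

```python
import numpy as np

def response(a):
    return 1.0 / a          # toy response, e.g. displacement ~ 1/stiffness

mu, sigma = 10.0, 1.0       # input mean and std (CoV = 10 %, within range)
h = 1e-6
dudA = (response(mu + h) - response(mu - h)) / (2.0 * h)  # numerical slope

mean_u = response(mu)                 # first-order mean estimate
var_u = (dudA ** 2) * sigma ** 2      # first-order variance estimate

# crude MCS check of the approximation
rng = np.random.default_rng(7)
samples = response(rng.normal(mu, sigma, size=100_000))
print(f"perturbation: mean={mean_u:.5f}, var={var_u:.2e}")
print(f"MCS check:    mean={samples.mean():.5f}, var={samples.var():.2e}")
```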

The newest branch of the SFEM, the SSFEM, performs well for linear analysis and can also be used for nonlinear analysis, although some researchers consider it impractical in the nonlinear case [16]. In its early stages this method could not be used for reliability analysis, though this has recently changed [145]. The size of the linear system of equations in the SSFEM is determined by the number of degrees of freedom of the model (N) and the number of terms of the expansion used to calculate the response (P); the coupled system is of size NP × NP. A common value of P is around 10–35, so for large problems the computational cost can be prohibitive [145]. Nevertheless, this method has attracted the attention of several researchers, and hybrid methods combining it with the MCS have been developed to extend its use.
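
The growth of the system size is easy to quantify. If P is taken as the number of polynomial chaos terms for M random variables and polynomial order p, a standard count is P = (M + p)!/(M! p!), and the coupled SSFEM system is NP × NP. The values of M, p and N below are examples chosen so that P falls in the 10–35 range quoted above.

```python
from math import comb

def pc_terms(M, p):
    # number of polynomial chaos terms: P = (M + p)! / (M! p!) = C(M + p, p)
    return comb(M + p, p)

N = 10_000                     # FE degrees of freedom (example)
for M, p in [(4, 2), (4, 3)]:
    P = pc_terms(M, p)
    print(f"M={M}, p={p}: P={P}, system size = {N * P:,} x {N * P:,}")
```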

This paper has drawn together and complemented several previous review papers. It summarizes the main branches of the SFEM, lists supporting software and gives a number of examples illustrating how researchers apply the methodology in their respective disciplines.