Abstract
Assessing the reliability of a complex system with uncertainty propagation consists in estimating its probability of failure. Common sampling strategies for such tasks are notably based on Monte Carlo sampling. Such methods are well suited to characterize events whose associated probabilities are not too low with respect to the simulation budget. However, for critical systems such as aerospace vehicles, the reliability specifications often induce very low failure probabilities (say, below \(10^{-4}\)). In this case, Monte Carlo-based methods are inefficient, inducing unaffordable costs with regard to the available simulation budget. In this chapter, we review the main simulation techniques to estimate low failure probabilities with accuracy.
1 Introduction
Analyzing the reliability of a complex system often corresponds to the estimation of its failure probability. A significant number of specific methods, such as fault trees or formal methods, have been proposed to estimate this probability when the system uncertainties are not considered. In this case, the global system failure probability may be estimated with some cascade of conditional probabilities. If the uncertainties of the system inputs may be modeled as random variables, then the system failure probability is expressed with an integral. Within the context of complex system design and optimization under uncertainty, the constraints of the optimization problem are often formulated through the probability that a constraint exceeds a given threshold, resulting in the need to estimate a probability of failure (see Chapter 5 for more details on optimization under uncertainty). For the design of aerospace vehicles, these failure probabilities might be low with respect to the affordable simulation budget, and efficient reliability analysis techniques are therefore required.
Let us assume that the considered complex system is a deterministic black-box function g(⋅) with a d-dimensional random input U with known joint PDF ϕ(⋅) of support \(\mathbb {R}^d\). For the sake of clarity, we assume the system failure is only an output threshold exceedance, that is g(u) > T with T a scalar. The methods described in this chapter remain valid for more intricate failure models. Reliability engineering techniques thus aim at estimating the failure probability \(\mathbb {P}\left (g(\mathbf {U})>T\right )\), that is (Figure 4.1):
$$\displaystyle \begin{aligned} P_f=\mathbb{P}\left(g(\mathbf{U})>T\right)=\int_{\mathbb{R}^d} \mathbb{1}_{g(\mathbf{u})>T}\,\phi(\mathbf{u})\,d\mathbf{u}, \end{aligned} $$(4.1)
with \(\mathbb {1}_{g(\mathbf {u})>T}\) the indicator function equal to 1 if g(u) > T and to 0 otherwise. Due to the critical nature of a system failure, P f generally takes low values (as an indication, one may consider P f < \(10^{-4}\)). Nevertheless, the accuracy required for the estimation of P f is very high because the consequences of a misestimation, and particularly of an underestimation of P f, may be catastrophic.
The estimation of P f corresponds to an integral computation. However, numerical deterministic integration methods such as Gaussian quadrature (Novak 1988) or sparse grids (Smolyak 1963; Gerstner and Griebel 2003) are not adapted in this context. They require some smoothness of the function to be integrated: that is not the case here, since we integrate an indicator function. Moreover, these methods are efficient only when the dimension d is small (as an indication, one should have d < 5), which is typically not the case for a complex system.
One may sort the different techniques of failure probability estimation into four main categories:
- the simulation techniques, which consist of input uncertainty propagation,
- the statistical methods, which enable, from a set of output samples, to estimate P f,
- the reliability-based approaches, which take advantage of geometrical considerations on the function g(⋅), sometimes combined with sampling,
- the use of surrogate models, which are of interest for computationally expensive systems g(⋅).
An exhaustive analysis of the main available techniques has already been carried out in Morio and Balesdent (2015). Thus, for the sake of brevity, we only focus on the most well-known techniques for reliability assessment.
The relative deviation or coefficient of variation CV of an estimator \(\widehat {P}\) of P f is given by the following ratio:
$$\displaystyle \begin{aligned} CV\left(\widehat{P}\right)=\frac{\sqrt{\operatorname{Var}\left(\widehat{P}\right)}}{\mathbb{E}\left(\widehat{P}\right)}. \end{aligned}$$
We will use this metric in the following to compare the different probability estimates for a given simulation budget. The toy case considered in this chapter is the following three-dimensional function g(⋅):
The random input U follows a Gaussian distribution with zero mean and covariance matrix 0.7I 3 with I 3 the identity matrix. The threshold T is set to 150.9, so that a Crude Monte Carlo estimation of P f with a budget of \(10^7\) samples gives 2.92 × \(10^{-4}\). It is considered as the reference for this chapter to compare the alternative techniques on this toy case.
2 Simulation Methods for Reliability Analysis
2.1 Crude Monte Carlo
As the failure probability of Equation (4.1) is an expectation, the law of large numbers may be applied. Crude Monte Carlo (CMC) (Silverman 1986; Sobol 1994; Kroese et al. 2014) takes advantage of this feature to estimate the probability P f. For that purpose, one generates M independent and identically distributed (i.i.d.) samples u (1), …, u (M) with PDF ϕ(⋅) and computes their outputs through the function g(⋅): \(g\left ({\mathbf {u}}_{(1)}\right ),\ldots ,g\left ({\mathbf {u}}_{(M)}\right )\). The failure probability P f is then estimated with:
$$\displaystyle \begin{aligned} \widehat{P}_f^{CMC}=\frac{1}{M}\sum_{i=1}^{M} \mathbb{1}_{g\left({\mathbf{u}}_{(i)}\right)>T}. \end{aligned}$$
This estimate \(\widehat {P}_f^{CMC}\) converges almost surely when M → +∞ to the failure probability P f due to the law of large numbers (Sobol 1994). The coefficient of variation of the estimator \(\widehat {P}_f^{CMC}\) is given by
$$\displaystyle \begin{aligned} CV\left(\widehat{P}_f^{CMC}\right)=\sqrt{\frac{1-P_f}{M P_f}}. \end{aligned}$$
Since we consider rare event estimation, P f may take low values and thus
$$\displaystyle \begin{aligned} \lim_{P_f \to 0} CV\left(\widehat{P}_f^{CMC}\right)=+\infty. \end{aligned}$$
The coefficient of variation is consequently unbounded. To estimate a probability P f of order \(10^{-4}\) with a 10% relative deviation, at least \(10^6\) samples are required, which is often not affordable in real applications.
The results of probability estimation with CMC for the toy case are proposed in Table 4.1. The variability of the CMC estimate decreases at rate \(1/\sqrt {M}\). CMC samples u (i) that lead to a failure, that is \(g\left({\mathbf{u}}_{(i)}\right)>T\), are given in Figure 4.2. Their distribution is far from the initial Gaussian distribution. More than \(10^5\) samples are required to get a CV lower than 10%. CMC is thus not adapted to rare event probability estimation, since the simulation budget needed to get a low CV is often not available for complex systems.
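The CMC estimator and its coefficient of variation can be sketched in a few lines of Python. The 1-D failure model below (g(u) = u with U ∼ N(0, 1)) is an illustrative assumption, not the chapter's toy function; it is chosen so that the exact probability 1 − Φ(T) is available for comparison:

```python
import math, random

def crude_monte_carlo(g, sample_input, T, M, seed=0):
    """Estimate P(g(U) > T) by CMC, together with the coefficient of variation."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(M) if g(sample_input(rng)) > T)
    p_hat = hits / M
    # CV of the CMC estimator: sqrt((1 - p)/(M p)); infinite if no failure was seen
    cv = math.sqrt((1 - p_hat) / (M * p_hat)) if hits else float("inf")
    return p_hat, cv

# Illustrative 1-D case (not the chapter's toy function): U ~ N(0, 1), g(u) = u,
# so that P(g(U) > T) = 1 - Phi(T) is known in closed form
T = 2.0
p_hat, cv = crude_monte_carlo(lambda u: u, lambda rng: rng.gauss(0.0, 1.0), T, M=100_000)
p_true = 0.5 * math.erfc(T / math.sqrt(2))  # 1 - Phi(T)
```

Lowering T toward rarer events in this sketch makes `hits` collapse and the CV blow up, which is exactly the limitation discussed above.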
2.2 Importance Sampling
The idea of importance sampling (IS) (Bucklew 2004) is to rewrite the integral of Equation (4.1) with an auxiliary distribution ψ(⋅) such that
$$\displaystyle \begin{aligned} P_f=\int_{\mathbb{R}^d} \mathbb{1}_{g(\mathbf{u})>T}\,\frac{\phi(\mathbf{u})}{\psi(\mathbf{u})}\,\psi(\mathbf{u})\,d\mathbf{u}=\mathbb{E}_\psi\left(\mathbb{1}_{g(\mathbf{U})>T}\,\frac{\phi(\mathbf{U})}{\psi(\mathbf{U})}\right). \end{aligned}$$
The notation \(\mathbb {E}_\psi \) means that mathematical expectation is done under the distribution ψ(⋅). This notation will only be used in this section to avoid some confusions.
The support of the distribution ψ(⋅) must contain the support of ϕ(⋅) to avoid biased estimates. One may also notice that if ψ = ϕ, IS reduces to CMC. The principle of IS is then similar to CMC since the failure probability is still described as an expectation and thus the law of large numbers may be applied. One generates M i.i.d. samples u (1), …, u (M) with the PDF ψ(⋅) and computes their outputs through the function g(⋅): \(g\left ({\mathbf {u}}_{(1)}\right ),\ldots ,g\left ({\mathbf {u}}_{(M)}\right )\). The failure probability P f is then estimated with
$$\displaystyle \begin{aligned} \widehat{P}_f^{IS}=\frac{1}{M}\sum_{i=1}^{M} \mathbb{1}_{g\left({\mathbf{u}}_{(i)}\right)>T}\,\frac{\phi\left({\mathbf{u}}_{(i)}\right)}{\psi\left({\mathbf{u}}_{(i)}\right)}, \end{aligned}$$
where \(\frac {\phi (\cdot )}{\psi (\cdot )}\) is called the likelihood ratio. Like \(\widehat {P}_f^{CMC}\), the estimate \(\widehat {P}_f^{IS}\) converges almost surely when M → +∞ to the failure probability P f. Nevertheless, the choice of ψ(⋅) is of high importance to reduce the variance of the estimator; a bad choice of ψ(⋅) may even increase it. Let us consider the variance of the IS probability estimate:
$$\displaystyle \begin{aligned} \operatorname{Var}\left(\widehat{P}_f^{IS}\right)=\frac{1}{M}\left(\mathbb{E}_\psi\left(\mathbb{1}_{g(\mathbf{U})>T}\,\frac{\phi(\mathbf{U})^2}{\psi(\mathbf{U})^2}\right)-P_f^2\right). \end{aligned} $$(4.8)
As variances are non-negative quantities, the optimal auxiliary density ψ opt(⋅) is determined by cancelling the variance in Equation (4.8). It may be shown that ψ opt(⋅) is then defined with (Bucklew 2004)
$$\displaystyle \begin{aligned} \psi_{opt}(\mathbf{u})=\frac{\mathbb{1}_{g(\mathbf{u})>T}\,\phi(\mathbf{u})}{P_f}. \end{aligned}$$
The optimal auxiliary density ψ opt(⋅) depends on P f and is thus unusable in practice. Let us notice that the CMC samples of Figure 4.2 follow ψ opt(⋅) as this distribution may be seen as a conditional distribution. A valuable auxiliary distribution ψ(⋅) will be close to the shape of ψ opt(⋅). The framework to determine an efficient auxiliary distribution is rather large. Unless an initial guess is possible thanks to some a priori knowledge on g, the auxiliary distribution has to be learnt iteratively for a given range of models (parametric or nonparametric) for ψ(⋅). Recent advances on this subject are the cross-entropy optimization of importance sampling parametric distribution ψ λ(⋅) (De Boer et al. 2005) and nonparametric adaptive importance sampling (NAIS) where ψ(⋅) is modeled with kernel density estimator (Zhang 1996; Morio 2012).
NAIS has been applied to the toy case. With a simulation budget of 6000 samples, the estimated failure probability with NAIS is 2.68 × \(10^{-4}\) with a CV of 6.5%. The variance reduction is consequently significant compared to CMC. All the NAIS results are gathered in Table 4.2 for different simulation budgets. Figure 4.3 also shows samples generated during some iterations of the NAIS algorithm.
NAIS is particularly suited to the toy case because the dimension d is low. When d is greater than 10, finding an efficient IS auxiliary distribution becomes more difficult even in the parametric case.
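The mechanics of IS can be sketched with the simplest parametric auxiliary density: a Gaussian shifted toward the failure region. This is a much cruder choice than the kernel-based NAIS densities discussed above; the 1-D case U ∼ N(0, 1), g(u) = u and the shift value are assumptions for illustration only:

```python
import math, random

def norm_pdf(x, mu=0.0):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def importance_sampling(T, M=20_000, shift=0.0, seed=0):
    """Estimate P(U > T) for U ~ N(0, 1), sampling from the auxiliary N(shift, 1)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(M):
        u = rng.gauss(shift, 1.0)                 # sample from psi
        w = norm_pdf(u) / norm_pdf(u, shift)      # likelihood ratio phi/psi
        val = w if u > T else 0.0                 # indicator times likelihood ratio
        total += val
        total_sq += val * val
    p_hat = total / M
    var = (total_sq / M - p_hat ** 2) / M         # empirical variance of the estimator
    cv = math.sqrt(var) / p_hat if p_hat > 0 else float("inf")
    return p_hat, cv

T = 4.0                                       # P(U > 4) is about 3.2e-5: rare for plain CMC
p_hat, cv = importance_sampling(T, shift=T)   # centre psi on the failure region
p_true = 0.5 * math.erfc(T / math.sqrt(2))
```

With the auxiliary density centred near the failure region, a budget of 20 000 samples gives a CV of roughly 1%, whereas CMC would need millions of samples for this probability level.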
2.3 Subset Simulation
Subset simulation (SS), sometimes also called importance splitting, subset sampling, or sequential Monte Carlo, aims at decomposing the failure probability P f as a product of conditional probabilities that can be estimated with a classical CMC approach. Different variants have been proposed in recent years (Au and Beck 2001; Cérou et al. 2012; Botev and Kroese 2012).
Let us consider the set \(\mathbf {A}=\{\mathbf {u} \in \mathbb {R}^d|g(\mathbf {u})> T\}\), the probability P f may be rewritten with \(P_f=\mathbb {P}(\mathbf {U}\in \mathbf {A})\). The principle of SS is to iteratively estimate supersets of A and then to estimate \(\mathbb {P}(\mathbf {U}\in \mathbf {A})\) with conditional probabilities.
Let us define \({\mathbf {A}}_{0}=\mathbb {R}^d \supset {\mathbf {A}}_{1} \supset \ldots \supset {\mathbf {A}}_{p-1} \supset {\mathbf {A}}_{p}=\mathbf {A}\), a decreasing sequence of subsets of \(\mathbb {R}^d\) with smallest element A = A p. The easiest way to get such subsets A i is to choose a sequence of thresholds T = T p > T p−1 > … > T i > … > T 0 = −∞ and then consider \({\mathbf {A}}_i=\{\mathbf {u} \in \mathbb {R}^{d}|g(\mathbf {u})> T_i\}\) for i = 0, …, p. This definition is indeed well adapted to threshold exceedance probability estimation. The probability P f is then defined with Bayes’ theorem as
$$\displaystyle \begin{aligned} P_f=\prod_{i=1}^{p} \mathbb{P}(\mathbf{U} \in {\mathbf{A}}_{i}|\mathbf{U} \in {\mathbf{A}}_{i-1}), \end{aligned}$$
where \(\mathbb {P}(\mathbf {U} \in {\mathbf {A}}_{i}|\mathbf {U} \in {\mathbf {A}}_{i-1})\) is the probability that U ∈A i knowing that U ∈A i−1. The probabilities \(\mathbb {P}(\mathbf {U} \in {\mathbf {A}}_{i}|\mathbf {U} \in {\mathbf {A}}_{i-1})\) may then be estimated through Monte Carlo simulations with samples from the densities h i−1(⋅), the distributions of U restricted to the sets A i−1. The knowledge of h i−1(⋅) is rarely available in general, but let us assume first that it is the case here. The different stages of SS to estimate P f are thus the following ones when all the thresholds T i are inputs of the algorithm:
1. Set i = 1 and h 0 = ϕ.
2. Generate M samples \({\mathbf {u}}_{(1)}^{i},\ldots ,{\mathbf {u}}_{(M)}^{i}\) from h i−1(⋅) and apply the function g(⋅) to get \(g\left ( {\mathbf { u}}_{(1)}^{i}\right ),\ldots ,g\left ( {\mathbf {u}}_{(M)}^{i}\right )\).
3. Estimate the probability \({\mathbb {P}(\mathbf {U} \in {\mathbf {A}}_{i}|\mathbf {U} \in {\mathbf {A}}_{i-1})}\) with CMC by \(\widehat {P}_i^{CMC}\).
4. Set i = i + 1 and, if i < p + 1, go back to stage 2. Otherwise, estimate the probability P f with
$$\displaystyle \begin{aligned}\widehat{P}^{SS}=\prod_{i=1}^{p} \widehat{P}_i^{CMC}. \end{aligned}$$
CMC is suitable to estimate the conditional probabilities since they do not correspond to rare events and thus \(\widehat {P}_i^{CMC}\) may have a low variance with a reasonable simulation budget M. The variance of \(\widehat {P}^{SS}\) may then be lower than that of a direct CMC approach when the event g(U) > T is particularly rare, that is when P f < \(10^{-4}\) (Cérou et al. 2012).
As stated before, the complete knowledge of h i−1(⋅) is not available in most cases, but this density is known up to a constant as
$$\displaystyle \begin{aligned} h_{i-1}(\mathbf{u})=\frac{\mathbb{1}_{g(\mathbf{u})>T_{i-1}}\,\phi(\mathbf{u})}{\mathbb{P}\left(g(\mathbf{U})>T_{i-1}\right)} \propto \mathbb{1}_{g(\mathbf{u})>T_{i-1}}\,\phi(\mathbf{u}). \end{aligned}$$
Markov chain Monte Carlo (MCMC) is a well-known class of algorithms to sample from a density that is known up to a constant. The Metropolis–Hastings algorithm or Gibbs sampling (Robert and Casella 2005) are typical MCMC methods that may be used in that case. Moreover, the difficulty of initializing these approaches is negligible here since the samples \({\mathbf {u}}_{(j)}^{i-1}\) such that \(g\left({\mathbf{u}}_{(j)}^{i-1}\right)>T_{i-1}\) are distributed exactly with h i−1(⋅) and may be used to start the Markov chain. When ϕ(⋅) is Gaussian, the Crank–Nicolson technique is another method to sample from h i−1(⋅).
An a priori choice of thresholds is tricky since one has to control that the different conditional probabilities are not too low and may be estimated accurately with Monte Carlo. A significant variance for a given conditional probability highly impacts the whole performance of SS. In fact, an optimal choice of the sequence A i, and thus of T i, is obtained when P(U ∈A i|U ∈A i−1) = ρ, where ρ is a constant, that is when all the conditional probabilities are equal. The variance of \(\widehat {P}^{SS}\) is indeed minimized in this configuration, as shown in Lagnoux (2006) and Cérou et al. (2006). In practice, the values of T i for i = 0, …, p are set in an adaptive manner (Cérou et al. 2012), using the ρ-quantile of samples generated with the PDF h i−1(⋅).
In practice, the different stages of SS to estimate P f are the following ones:
1. Set i = 1, ρ ∈ ]0, 1[, and h 0 = ϕ.
2. Generate M samples \({\mathbf {u}}_{(1)}^{i},\ldots ,{\mathbf {u}}_{(M)}^{i}\) from h i−1(⋅) and apply the function g(⋅) in order to have \(g\left ( {\mathbf {u}}_{(1)}^{i}\right ),\ldots ,g\left ( {\mathbf {u}}_{(M)}^{i}\right )\).
3. Estimate the ρ-quantile \(\gamma _\rho ^{i}\) of the samples \(g\left ( {\mathbf {u}}_{(1)}^{i}\right ),\ldots ,g\left ( {\mathbf {u}}_{(M)}^{i}\right )\).
4. Generate M samples \({\mathbf {u}}_{(1)}^{i+1},\ldots ,{\mathbf {u}}_{(M)}^{i+1}\) that follow h i(⋅) with MCMC, starting from the samples \({\mathbf {u}}_{(j)}^{i}\) whose outputs \(g\left ( {\mathbf {u}}_{(j)}^{i}\right )\) are greater than \(\gamma _\rho ^{i}\).
5. If \(\gamma _\rho ^{i}<T\), set i = i + 1 and go back to stage 3. Otherwise, estimate the probability with
$$\displaystyle \begin{aligned}\widehat{P}^{SS}=\rho^{\,i-1}\times\frac{1}{M}\sum_{j=1}^{M} \mathbb{1}_{g\left({\mathbf{u}}_{(j)}^{i}\right)>T}. \end{aligned}$$
SS is not really efficient on the toy case, as the failure probability is above \(10^{-4}\), as shown in Table 4.3. The performance of SS is of the same order as CMC in that case. Figure 4.4 shows the SS samples for different iterations when M = 500.

SS gives its best results relative to the other methods when the failure probability is lower than \(10^{-4}\) and when the dimension d is greater than 10. More than \(10^4\) samples are often necessary to get a probability estimation, so that it is hard to apply SS directly to time-costly applications.
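The adaptive algorithm above can be sketched compactly. The setting is an assumption for illustration (1-D standard Gaussian input, g(u) = u, and a plain random-walk Metropolis kernel for the MCMC step), not the chapter's toy case:

```python
import math, random

def subset_simulation(T, M=2000, rho=0.1, seed=0):
    """Adaptive SS for P(U > T), U ~ N(0, 1), with a random-walk Metropolis kernel."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(M)]
    p = 1.0
    for _ in range(50):                             # safety cap on the number of levels
        samples.sort()
        gamma = samples[int((1 - rho) * M)]         # empirical rho-quantile (g = identity)
        if gamma >= T:
            # last level: fraction of the current conditional samples beyond T
            return p * sum(1 for u in samples if u > T) / M
        p *= rho
        seeds = [u for u in samples if u > gamma]
        # repopulate h(u) proportional to 1_{u > gamma} * phi(u) by MCMC
        samples = []
        for k in range(M):
            u = seeds[k % len(seeds)]
            for _ in range(10):                     # a few Metropolis steps per chain
                cand = u + rng.gauss(0.0, 1.0)
                # accept with min(1, phi(cand)/phi(u)) if cand stays above gamma
                if cand > gamma and rng.random() < math.exp(0.5 * (u * u - cand * cand)):
                    u = cand
            samples.append(u)
    return p

T = 4.0
p_hat = subset_simulation(T)
p_true = 0.5 * math.erfc(T / math.sqrt(2))          # about 3.2e-5
```

With ρ = 0.1, each level only has to estimate a probability of order 0.1, so the event U > 4 is reached after a handful of levels instead of requiring millions of direct samples.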
3 Statistical Approaches for Reliability Analysis
The principle of the following approaches is to approximate the maximum of g(U) or its tail distribution with a parametric model. One assumes in this section that a finite set of i.i.d. samples g(u (1)), …, g(u (M)) of the output is available, but also that one cannot generate new samples of g(U). The i.i.d. hypothesis may be relaxed for stationary samples.
3.1 Extreme Value Theory
The Fisher–Tippett–Gnedenko theorem (Embrechts and Schmidli 1994) shows that the maximum S M of the sequence \(g\left (\mathbf { u}_{(1)}\right ),\ldots ,g\left ({\mathbf {u}}_{(M)}\right )\) converges in distribution to a generalized extreme value (GEV) distribution F ξ: if there exist a M and b M, with a M > 0, such that, for all \(y \in \mathbb {R}\),
$$\displaystyle \begin{aligned} \lim_{M\to +\infty}\mathbb{P}\left(\frac{S_M-b_M}{a_M}\leq y\right)=F(y), \end{aligned}$$
where F is a non-degenerate CDF, then F is a GEV distribution F ξ. In this case, F is said to belong to the maximum domain of attraction (MA) of ξ, F ∈ MA(ξ). The set of GEV distributions is composed of three distinct types, characterized by ξ = 0, ξ > 0, and ξ < 0, that correspond to the Gumbel, Fréchet, and Weibull distributions, respectively. The expressions of a M and b M are known for usual PDFs (Embrechts and Schmidli 1994). An approximation of P f (Embrechts and Schmidli 1994) for large values of M is also available:
$$\displaystyle \begin{aligned} P_f\approx 1-F_\xi\left(\frac{T-b_M}{a_M}\right)^{\frac{1}{M}}. \end{aligned} $$(4.11)
Equation (4.11) is often difficult to use in practice because the estimates of a M, b M, and ξ have a high variance. This approach is notably considered when only samples of maxima are available. The samples of a monthly river maximum height correspond, for instance, to this situation. When it is not the case, it is required to group the samples \(g\left ({\mathbf {u}}_{(1)}\right ),\ldots ,g\left ({\mathbf {u}}_{(M)}\right )\) into blocks and fit the GEV using the maximum of each block.
3.2 Peak Over Threshold Approach
Instead of grouping the samples into block maxima, peak over threshold (POT) considers the largest samples \(g\left ({\mathbf {u}}_{(1)}\right ),\ldots ,g\left (\mathbf { u}_{(M)}\right )\) to estimate the probability P f. The Pickands–Balkema–de Haan theorem (Embrechts and Schmidli 1994) links EVT and threshold exceedance:
Let us assume that the distribution function F of the i.i.d. samples \(g\left ({\mathbf {u}}_{(1)}\right ),\ldots ,g\) \(\left ({\mathbf {u}}_{(M)}\right )\) is continuous. Set \(y^*=\sup \{y,\,F(y)<1\}\). Then, the two following assertions are equivalent:
1. F ∈ MA(ξ),
2. there exists a positive and measurable function t↦β(t) such that
$$\displaystyle \begin{aligned}\underset{t\mapsto y^*}{\lim}\underset{0<y<y^*-t}{\sup}\vert F^t(y)-H_{\xi,\beta(t)}(y)\vert =0,\end{aligned}$$
where \(F^t(y)=\mathbb {P}(g(\mathbf {U})-t\leq y\vert g(\mathbf {U})>t)\), and H ξ,β(t) is the CDF of a generalized Pareto distribution with shape parameter ξ and scale parameter β(t).
The expression of the generalized Pareto CDF is the following:
$$\displaystyle \begin{aligned} H_{\xi,\beta}(y)=\begin{cases} 1-\left(1+\frac{\xi y}{\beta}\right)^{-\frac{1}{\xi}} & \text{if } \xi \neq 0,\\ 1-\exp\left(-\frac{y}{\beta}\right) & \text{if } \xi = 0. \end{cases} \end{aligned}$$
This theorem may be applied to estimate the probability of exceedance \(\mathbb {P}(g(\mathbf {U})>T)\), as the distribution of g(U) knowing g(U) > t is modeled with a parametric PDF. Indeed, the probability P f may be rewritten as
$$\displaystyle \begin{aligned} P_f=\mathbb{P}(g(\mathbf{U})>t)\,\mathbb{P}(g(\mathbf{U})>T|g(\mathbf{U})>t), \end{aligned}$$
for T > t. A natural estimate of \(\mathbb {P}(g(\mathbf {U})>t)\) is given by the Monte Carlo estimate
$$\displaystyle \begin{aligned} \widehat{\mathbb{P}}(g(\mathbf{U})>t)=\frac{1}{M}\sum_{i=1}^{M} \mathbb{1}_{g\left({\mathbf{u}}_{(i)}\right)>t}=\frac{N_t}{M}, \end{aligned}$$
with N t the number of samples that exceed t. With the Pickands–Balkema–de Haan theorem and for large values of t, one has
$$\displaystyle \begin{aligned} \mathbb{P}(g(\mathbf{U})>T|g(\mathbf{U})>t)\approx 1-H_{\xi,\beta(t)}(T-t). \end{aligned}$$
The parameters (ξ, β) of the Pareto distribution H have to be estimated with the samples g(u (i)) such that g(u (i)) > t. The estimate of P f with POT is then built with
$$\displaystyle \begin{aligned} \widehat{P}_f^{POT}=\frac{N_t}{M}\left(1-H_{\widehat{\xi},\widehat{\beta}(t)}(T-t)\right). \end{aligned} $$(4.15)
Three parameters have to be determined in the POT probability estimate of Equation (4.15): the threshold t and the couple (ξ, β(t)). The choice of t is sensitive since it determines the samples that are used in the estimation of (ξ, β(t)). Indeed, a high threshold leads to considering only a small number of samples in the estimation of (ξ, β(t)), whose estimates can then be spoiled by a large variance, whereas a low threshold introduces a bias in the probability estimate (Dekkers and De Haan 1999). There are several methods to determine a suitable threshold t knowing the samples. The most well-known ones are the Hill plot and the mean excess plot (Embrechts and Schmidli 1994). These methods are nevertheless very empirical since they are based on graphical interpretation. Automatic threshold selection has also been proposed, such as in Thompson et al. (2009), based on the distribution of the difference of the parameter estimates when the threshold is changed. It is often necessary in practice to compare the estimates of t given by the different methods. Once the value of t is set, the parameters (ξ, β(t)) are often estimated by maximum likelihood (Coles 2001) or, more occasionally, by the method of moments (Hosking and Wallis 1987). An overview of these different methods can be found in Neves and Fraga Alves (2004). The statistical POT approach for different budgets M is applied on the toy case and the corresponding results are given in Table 4.4. The POT threshold is evaluated with the method of Thompson et al. (2009) and the parameters of the Pareto distribution are estimated with maximum likelihood. One notices an overestimation of the failure probability whatever the considered simulation budget.
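A compact sketch of the POT estimate is given below, with (ξ, β) fitted by the method of moments (Hosking and Wallis 1987) rather than maximum likelihood, for simplicity. The exponential output stream is an assumption chosen because its excesses are exactly generalized Pareto with ξ = 0, so the exact tail is known:

```python
import math, random, statistics

def pot_estimate(outputs, t, T):
    """POT estimate of P(Y > T) from i.i.d. output samples, with threshold t < T and
    (xi, beta) fitted to the excesses by the method of moments."""
    excesses = [y - t for y in outputs if y > t]
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1 - m * m / v)            # GPD moment estimators, inverted from
    beta = 0.5 * m * (1 + m * m / v)      # m = beta/(1-xi), v = beta^2/((1-xi)^2 (1-2 xi))
    y = T - t
    if abs(xi) < 1e-12:                   # exponential limit of the GPD
        tail = math.exp(-y / beta)
    else:
        base = 1 + xi * y / beta
        tail = base ** (-1 / xi) if base > 0 else 0.0
    return (len(excesses) / len(outputs)) * tail   # (N_t / M) * (1 - H(T - t))

# Illustrative check on exponential outputs, whose excesses are exactly GPD with xi = 0
rng = random.Random(0)
outputs = [rng.expovariate(1.0) for _ in range(200_000)]
p_hat = pot_estimate(outputs, t=5.0, T=9.0)
p_true = math.exp(-9.0)                   # exact tail, about 1.2e-4
```

Raising `t` in this sketch quickly degrades the fit: only a few hundred excesses remain, and the (ξ, β) estimates become very noisy, which is the bias-variance trade-off discussed above.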
When resampling is not possible, extreme value theory and POT are the only solutions to estimate a failure probability. Conversely, if resampling is possible, specific sampling methods such as IS or SS should be preferred to estimate a failure probability accurately.
4 Reliability Based Approaches
4.1 First-Order Reliability Method (FORM)/Second-Order Reliability Method (SORM)
First-order and second-order reliability methods (FORM/SORM) (Madsen et al. 1986; Bjerager 1991; Yan-Gang and Tetsuro 1999; Lassen and Recho 2006) are probabilistic computational approaches for structural reliability. FORM gives an analytic approximation of the failure probability when the inputs are standard normal by estimating the most probable failure point (MPP). The limit state surface is defined as the input region where g(U) = T. The MPP, also called the design point, is then the point with the minimum distance from the origin to the limit state surface. This design point is determined with optimization methods. The failure probability is then estimated with the evaluation of a standard normal CDF at the design point. In that case, the limit state surface is approximated by a linear function at the design point. Thus, accuracy problems may occur when the performance function is strongly non-linear or if the most probable failure point is not unique (Sudret 2012). The second-order reliability method (SORM) (Kiureghian et al. 1987) has been established as an attempt to improve the accuracy of FORM, as it approximates the limit state surface at the design point by a second-order surface. When the most probable failure point is not unique, FORM and SORM with multiple design points have been proposed in Kiureghian and Dakessian (1998). The FORM/SORM method is applied in three stages to estimate P f:
1. Apply a transformation V (⋅) on the input U such that R = V (U) with R a standard normal variable. Depending on the available information on the PDF of U, several transformations may be considered (Nataf 1962; Hasofer and Lind 1974; Pei-Ling and Kiureghian 1991; Rosenblatt 1952; Lebrun and Dutfoy 2009a,b).
2. Evaluate the most probable failure point β such that
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \boldsymbol{\beta}=\underset{\mathbf{R}}{\operatorname{argmin}} \mid\mid \mathbf{R} \mid\mid \end{array} \end{aligned} $$(4.16)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \text{s.t. } T-g(V^{-1}(\mathbf{R}))=0, \end{array} \end{aligned} $$(4.17)
where ||⋅|| is the Euclidean norm. The constraint T − g(V −1(R)) = 0 defines the limit of the failure space for the variable R. The parameter β is the design point and ||β|| is the reliability index. Several algorithms have been proposed to solve this optimization problem, as described in Hasofer and Lind (1974), Pei-Ling and Kiureghian (1991), Rackwitz and Flessler (1978), and Dietlevsen and Madsen (1996).
3. Estimate the failure probability, in the case of FORM, with
$$\displaystyle \begin{aligned} \widehat{P}_f^{FORM}=\Phi(-||\boldsymbol{\beta}||), \end{aligned} $$(4.18)
where Φ is the CDF of the standard normal variable. In the case of SORM, the failure probability is given by Breitung (1984):
$$\displaystyle \begin{aligned} \widehat{P}_f^{SORM}=\Phi(-||\boldsymbol{\beta}||) \prod_{i=1}^{d-1} (1-||\boldsymbol{\beta}|| \kappa_i)^{-\frac{1}{2}} , \end{aligned} $$(4.19)
where κ i denotes the ith principal curvature of T − g(V −1(R)) at the minimum distance point β. The term κ i is defined with
$$\displaystyle \begin{aligned} \kappa_i= \left.\frac{\partial^2\left(T-g\left(V^{-1}(\mathbf{R})\right)\right)}{\partial {R^{(i)}}^2}\right|{}_{\mathbf{R}=\boldsymbol{\beta}}, \end{aligned} $$(4.20)
with R (i), i = 1, …, d, a component of the vector R. These methods do not need a large simulation budget to obtain a valuable result. Nevertheless, the different assumptions require that one has to be careful when applying FORM/SORM to a realistic case of function g(⋅). There is also no control of the error in FORM/SORM. However, it is possible, from the FORM/SORM design point, to propose an importance sampling auxiliary density and then to sample with it to estimate the rare event probability.
Standard FORM, FORM with multiple design points, and SORM have been applied to the toy case and give the results summarized in Table 4.5. The standard FORM design point is estimated to β = [3.07, 0, −5.53 × 10−2] with 1096 samples as shown in Figure 4.5. The corresponding \(\widehat {P}_f^{FORM}\) does not lead to an accurate estimation as the limit state surface is non-linear and the design point is not unique in this toy case. FORM with multiple design points finds a second design point located at [−3.10, 0.09, −3.22] that doubles the probability given by FORM but still underestimates the true failure probability.
FORM/SORM are very interesting in general to get a rough approximation of the failure probability with limited simulation budgets, but as no control of the error is available, it is often better to combine FORM/SORM with IS for instance.
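To make stage 2 concrete, here is a small sketch of FORM using the classical Hasofer–Lind–Rackwitz–Fiessler (HL-RF) iteration with a finite-difference gradient. The linear limit state g(u) = u1 + u2 is an assumption chosen so that the design point and Φ(−||β||) are known exactly; real limit states are of course non-linear:

```python
import math

def grad(f, u, h=1e-6):
    """Central finite-difference gradient of f at u."""
    g = []
    for i in range(len(u)):
        up, um = u[:], u[:]
        up[i] += h
        um[i] -= h
        g.append((f(up) - f(um)) / (2 * h))
    return g

def form(g, T, d, iters=50):
    """FORM in standard normal space via the HL-RF iteration.
    Limit state G(u) = T - g(u) = 0; returns (design point, P_f approximation)."""
    u = [0.1] * d                        # starting point away from the origin
    G = lambda x: T - g(x)
    for _ in range(iters):
        dG = grad(G, u)
        n2 = sum(c * c for c in dG)
        s = (sum(c * x for c, x in zip(dG, u)) - G(u)) / n2
        u = [s * c for c in dG]          # HL-RF update: project onto the linearized surface
    beta = math.sqrt(sum(x * x for x in u))
    return u, 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)

# Linear limit state g(u) = u1 + u2 with T = 4: design point (2, 2),
# beta = 2*sqrt(2), and P_f = Phi(-2*sqrt(2)) exactly
u_star, p_form = form(lambda u: u[0] + u[1], T=4.0, d=2)
```

For this linear case FORM is exact; for a curved limit state the same iteration converges to the design point, but the returned probability is only the first-order approximation discussed above.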
4.2 Directional Sampling
The principle of directional sampling (DS) (Bjerager 1988), also known as directional simulation, is to sample directions aiming at the limit-state surface and then estimate the failure probability as the mean of the failure probabilities over all the directions. DS may be considered when the input U follows a standard Gaussian distribution. If it is not the case, iso-probabilistic transformations should be applied to the inputs; several transformations may be considered (Nataf 1962; Hasofer and Lind 1974; Pei-Ling and Kiureghian 1991; Rosenblatt 1952; Lebrun and Dutfoy 2009a,b). DS takes advantage of the rewriting of the standard Gaussian vector U as a product of random variables
$$\displaystyle \begin{aligned} \mathbf{U}=R\,\mathbf{A}, \end{aligned}$$
where R 2 is a chi-squared random variable with d degrees of freedom with density ϕ R and A is a random unit vector uniformly distributed on the d-dimensional unit sphere Λd with density ϕ A. R 2 and A are independent random variables. The failure probability P f and the integral of Equation (4.1) may then be expressed as follows:
$$\displaystyle \begin{aligned} P_f=\mathbb{E}\left(\mathbb{P}(g(R \mathbf{A})>T|\mathbf{A}=\mathbf{a})\right)=\int_{\Lambda_d} \mathbb{P}(g(R \mathbf{a})>T)\,\phi_{\mathbf{A}}(\mathbf{a})\,d\mathbf{a}. \end{aligned}$$
The DS estimation corresponds in fact to a CMC estimation of \(\mathbb {E}\left (\mathbb {P}(g(R \mathbf {A})>T|\mathbf {A}=\mathbf {a})\right )\). In practice, a sequence of M i.i.d. random direction vectors A j for j = 1, …, M is generated and then one determines r j such that g(r jA j) = T, by dichotomy for instance. An estimate of \(\mathbb {P}(g(R \mathbf {A})>T|\mathbf {A}={\mathbf {A}}_j)\) is given by \(1-F_{R^2}\left (r_j^2\right )\), where \(F_{R^2}(\cdot )\) is the CDF of a chi-squared random variable with d degrees of freedom. This approximation is only valid if there is a single intersection point between the input failure region and the chosen sampling direction. The DS probability estimate \(\widehat {P}_f^{DS}\) is then obtained with
$$\displaystyle \begin{aligned} \widehat{P}_f^{DS}=\frac{1}{M}\sum_{j=1}^{M}\left(1-F_{R^2}\left(r_j^2\right)\right). \end{aligned}$$
DS has been applied to the toy case and all the estimated failure probabilities are given in Table 4.6 for different number of directions. Samples that have been generated when M = 100 are proposed in Figure 4.6. DS is very efficient on this example as the dimension d is quite low and the function g(⋅) is regular and differentiable.
The tuning of the number of directions M may be complicated in complex systems. For that purpose, the choice of the directions A j could be done adaptively instead of randomly to focus on input regions that have the highest \(\mathbb {P}(g(R \mathbf {A})>T|\mathbf {A}={\mathbf {A}}_j)\) as shown in Zuniga et al. (2011) to reduce the estimate variance.
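A sketch of DS in dimension d = 2, where the chi-squared tail has the closed form exp(−r²/2) and no special function is needed. The circular failure domain g(u) = u1² + u2² is an assumption giving P_f = exp(−T/2) exactly, so every direction contributes the same conditional probability:

```python
import math, random

def directional_sampling(g, T, M=200, r_max=20.0, seed=0):
    """DS estimate of P(g(U) > T) for U ~ N(0, I_2): draw a direction a, find the
    root of g(r a) = T by bisection, then add the chi-square(2) tail P(R^2 > r^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(M):
        theta = rng.uniform(0.0, 2 * math.pi)
        a = (math.cos(theta), math.sin(theta))    # uniform direction on the unit circle
        if g((r_max * a[0], r_max * a[1])) <= T:
            continue                              # no crossing along this ray: contributes 0
        lo, hi = 0.0, r_max
        for _ in range(60):                       # bisection (dichotomy) for g(r a) = T
            mid = 0.5 * (lo + hi)
            if g((mid * a[0], mid * a[1])) > T:
                hi = mid
            else:
                lo = mid
        total += math.exp(-0.5 * hi * hi)         # 1 - F_{chi2(2)}(r^2) = exp(-r^2 / 2)
    return total / M

# g(u) = ||u||^2: the failure domain is the outside of a disc, so P_f = exp(-T/2) exactly
T = 18.0
p_hat = directional_sampling(lambda u: u[0] ** 2 + u[1] ** 2, T)
p_true = math.exp(-T / 2)
```

The dichotomy step assumes a single crossing per ray, exactly the validity condition stated above; for a star-shaped failure domain the sketch would need to account for multiple roots.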
5 Use of Surrogate Models in Rare Event Probability Estimation
The key point is to build an efficient surrogate model that reduces the number of calls to the expensive input–output function g(⋅) while keeping a reasonable accuracy of the probability estimate. Surrogate models are particularly interesting in the case of rare event probability estimation, as the surrogate model has to be accurate not over all the support of U but only in the input region where g(u) = T. The use of the exact function g(⋅) and of its surrogate \(\widehat {g}(\cdot )\) in the probability calculation will lead to the same result if \(\mathbb{1}_{\widehat{g}(\mathbf{u})>T}=\mathbb{1}_{g(\mathbf{u})>T}\) for all u. In other words, the surrogate model might not be representative of the exact function outside the zones of interest, as those zones do not take part in the probability estimation. \(\widehat {g}(\cdot )\) has to be a good classifier of samples for the failure prediction.
A great number of methods have been proposed and compared in recent years. Polynomial chaos expansions (PCE) have been associated with Monte Carlo sampling to estimate failure probabilities (Li and Xiu 2010). Support vector machines have also been employed to estimate the domains of failure (Basudhar et al. 2008) and have been coupled to rare event estimators such as subset sampling (Bourinet et al. 2011a). Kriging has been extensively used with the classical Monte Carlo estimator (Echard et al. 2011), importance sampling methods (Janusevskis and Le Riche 2012; Schueremans and Van Gemert 2005; Balesdent et al. 2013; Dubourg et al. 2011), importance sampling with control variates (Cannamela et al. 2008), or subset simulation (Vazquez and Bect 2009; Li et al. 2012; Bect et al. 2012). All these approaches are mainly based on three main ingredients:
- the type of surrogate model (PCE, Kriging, SVM, etc.),
- the surrogate model refinement strategy, that is how to choose the samples that are evaluated on the exact function g(⋅),
- the associated sampling strategy (CMC, IS, SS, etc.).
We will not review all the different combinations that have been proposed in the literature but focus on two well-known methods for the sake of brevity.
5.1 Subset Sampling by Support Vector Margin Algorithm for Reliability esTimation (2SMART)
The 2SMART method (Deheeger and Lemaire 2007; Bourinet et al. 2011a) is dedicated to the use of the adaptive subset sampling technique (SS) and consists in defining one SVM model at each adaptive threshold involved in SS. For each intermediate threshold, an SVM model is built using a three-stage refinement approach (localization, stabilization, and convergence) which makes it possible to accurately represent the regions corresponding to the involved thresholds. At the ith stage of SS, the main steps of 2SMART are as follows:
1. A first set of samples is generated to build an SVM model \(\widehat {g}_{T_i}(\cdot )\) in the region corresponding to the ith level of SS, and some of these samples are used to determine the current intermediate threshold T i, using the ρ-quantile level of SS and the SVM model in regression.
2. The SVM \(\widehat {g}_{T_i}(\cdot )\) is refined iteratively with the three-stage approach, involving resampling (by MCMC) and clustering of the generated samples. For that purpose, three populations of samples of different sizes are generated and used to refine the current SVM model.
3. The last step consists in evaluating the conditional probability \(\mathbb {P}(\widehat {g}_{T_i}(\mathbf {U})>T_i|\widehat {g}_{T_i}(\mathbf {U})>T_{i-1})\), corresponding to the current threshold T i.
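The scheme above can be sketched in a few lines of Python. This is a deliberately simplified stand-in, not the 2SMART algorithm itself: every sample is evaluated exactly when the per-level SVM is trained, the three-stage refinement with three populations is replaced by a single crude MCMC move, and the toy limit-state function g and the parameters are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def g(u):
    # Hypothetical toy limit-state function (not the chapter's toy case)
    return u[:, 0]**2 + u[:, 1]**2

T_final, rho, N = 9.0, 0.1, 1000   # failure event: g(U) > T_final

U = rng.standard_normal((N, 2))
p_f = 1.0
for _ in range(10):
    y = g(U)                       # exact calls; 2SMART would evaluate only
                                   # a small refined subset at each level
    svm = SVR(C=100.0).fit(U, y)   # one SVM surrogate per threshold
    y_hat = svm.predict(U)
    T_i = min(np.quantile(y_hat, 1 - rho), T_final)
    p_f *= np.mean(y_hat > T_i)    # conditional probability at this level
    if T_i >= T_final:
        break
    # Crude MCMC resampling above T_i (stand-in for the three-stage step)
    seeds = U[y_hat > T_i]
    idx = rng.integers(0, len(seeds), N)
    prop = 0.8 * seeds[idx] + 0.6 * rng.standard_normal((N, 2))
    keep = svm.predict(prop) > T_i # accept only moves staying above T_i
    U = np.where(keep[:, None], prop, seeds[idx])

print(p_f)
```

The product of the per-level conditional probabilities yields the failure probability estimate, while the expensive function is only queried to train each level's SVM.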
2SMART has been applied to the toy case. The estimated probability is 2.82 × 10−4 with a CV of 17% for only 800 calls to the function g(⋅). The reduction in the number of calls compared to SS is very significant. Figure 4.7 shows the limit-state function obtained with the 2SMART algorithm from the open-source FERUM 4.1 Matlab® toolbox.
5.2 Active Learning Reliability Method Combining Kriging and Probability Estimation Method
From the initial training set \(\mathcal {U}\), the Kriging properties are valuable to determine the additional samples that have to be evaluated on g(⋅) to refine its surrogate model. Active learning (Echard 2012) determines a new sample point u to add to the training set \(\mathcal {U}\) by solving the following optimization problem:

\(\mathbf {u}^{(\mathrm {new})} = \operatorname {arg\,max}_{\mathbf {u}}\; \Phi \left (-\dfrac {|\widehat {g}(\mathbf {u})-T|}{\sigma (\mathbf {u})}\right )\)     (4.21)
where Φ(⋅) is the CDF of the standard Gaussian distribution and σ is the Kriging standard deviation. This criterion generates a sample for which the Kriging prediction is close to the threshold (numerator of Equation (4.21)) and which presents a high prediction error (denominator of Equation (4.21)). Due to the monotonicity of the involved CDF, the optimization problem of Equation (4.21) is equivalent to:

\(\mathbf {u}^{(\mathrm {new})} = \operatorname {arg\,min}_{\mathbf {u}}\; \dfrac {|\widehat {g}(\mathbf {u})-T|}{\sigma (\mathbf {u})}\)
This criterion has been coupled with CMC (Echard et al. 2011), IS (Echard et al. 2013), and SS (Echard 2012). In practice, the optimization problem is not solved exactly; given a sample set {u (1), …, u (M)} provided by CMC, IS, or SS, the new sample added to the training set is determined by

\(\mathbf {u}^{(\mathrm {new})} = \operatorname {arg\,min}_{\mathbf {u}\in \{\mathbf {u}^{(1)},\dots ,\mathbf {u}^{(M)}\}}\; \dfrac {|\widehat {g}(\mathbf {u})-T|}{\sigma (\mathbf {u})}\)
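This discrete selection step can be sketched as follows, assuming a hypothetical toy function g, a Gaussian process from scikit-learn as the Kriging model, and arbitrary budget parameters; this is an illustration of the learning function, not the full AK-MCS algorithm with its stopping criterion.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def g(u):
    # Hypothetical toy limit-state function (cheap here, expensive in practice)
    return u[:, 0] + u[:, 1]

T = 3.0                                    # failure event: g(U) > T
U_mc = rng.standard_normal((10_000, 2))    # CMC population {u^(1), ..., u^(M)}
train = U_mc[:10].copy()                   # initial training set
y = g(train)                               # exact (expensive) calls

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
for _ in range(20):                        # active-learning iterations
    gp.fit(train, y)
    mu, sigma = gp.predict(U_mc, return_std=True)
    score = np.abs(mu - T) / np.maximum(sigma, 1e-12)   # learning function
    new = U_mc[np.argmin(score)]           # close to threshold, uncertain
    train = np.vstack([train, new])
    y = np.append(y, g(new[None, :]))

gp.fit(train, y)
p_f = np.mean(gp.predict(U_mc) > T)        # CMC estimate on the surrogate
print(p_f)
```

Only the selected training points are evaluated on g(⋅); the final probability is computed by classifying the whole Monte Carlo population with the surrogate.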
Active Kriging combined with CMC (10^5 samples) has been applied to the toy case. With a limited simulation budget of 54 calls to the exact function g(⋅), the estimated failure probability is 2.68 × 10−4 with a CV of 24%. The variance reduction compared to CMC is therefore significant. Figure 4.8 shows the CMC samples, with the samples that have actually been evaluated on g(⋅) highlighted.
Surrogate models are particularly well adapted to rare event estimation as they offer a low variance of the probability estimate with few calls to g(⋅). Nevertheless, the complexity required to set up and tune these algorithms is significant and may be case dependent. Moreover, surrogate models suffer from the curse of dimensionality. Thus, it is often hard to apply them to the estimation of failure probabilities of high-dimensional complex systems.
6 Brief Overview of Reliability Analysis with Both Aleatory and Epistemic Uncertainties
Several works have proposed to extend reliability analysis, mainly developed within the probability theory framework presented in the previous sections, to alternative uncertainty description frameworks. This section presents a brief overview of reliability analysis dealing with both aleatory and epistemic uncertainties.
Among the existing approaches for reliability analysis with both aleatory variables modeled with probability theory and epistemic variables described by an alternative modeling framework (e.g., evidence theory, interval formalism, possibility theory, Pboxes), two families may be distinguished. The first family transforms the epistemic uncertain variables in order to describe them with probability theory and to use adaptations of the reliability analysis techniques presented in the previous sections. The second family handles both formalisms, resulting in a nested loop for the estimation of the bounds on the probability of failure; some approaches remove the nested loop to decrease the reliability analysis cost.
In reliability analysis, different techniques have been proposed for the case where aleatory input distributions with interval uncertain parameters are considered (Hurtado 2013; Balesdent et al. 2016), for instance, when the input uncertainty is described by a Gaussian distribution whose mean value is only known within an interval. Hurtado (2013) proposed to extend a technique called reliability plot to account for interval uncertainty on the aleatory distribution hyperparameters. Balesdent et al. (2016) proposed to combine Kriging, importance sampling, and a dedicated refinement strategy of the surrogate model to find the upper and lower bounds on the probability of failure due to interval uncertainty. Within the more general context of Pboxes, the probability of failure is defined within bounds given by:

\(\underline {P}_f = \min _{\phi }\, \mathbb {P}_{\phi }(g(\mathbf {U})>T) \leq P_f \leq \max _{\phi }\, \mathbb {P}_{\phi }(g(\mathbf {U})>T) = \overline {P}_f\)
where \(\min \) and \(\max \) mean that the optimization is carried out over all PDFs ϕ(⋅) that satisfy the definition of the input Pbox. Schöbi and Sudret (2017) proposed a reliability analysis approach in which the input variables are modeled by Pboxes (in both the parametric and the general case). The authors developed a two-level approach using Kriging surrogate models with adaptive experimental designs at the different levels of the reliability analysis. The input uncertainties are represented using convex normal membership functions.
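In the parametric case, the nested structure reduces to an outer optimization over the interval-valued parameter around an inner probability estimation. A minimal sketch, assuming a hypothetical Gaussian input whose mean is only known within an interval (the inner probability is then available in closed form, so no inner sampling loop is needed):

```python
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Parametric Pbox: U ~ N(mu, 1) with the mean only known in [mu_lo, mu_hi]
# (hypothetical numbers). Failure event: g(U) = U > T, hence
# P_f(mu) = 1 - Phi(T - mu) in closed form.
T, mu_lo, mu_hi = 3.0, -0.5, 0.5

def p_f(mu):
    return norm.sf(T - mu)         # survival function of N(0, 1)

# Outer loop: optimize the inner failure probability over the interval parameter
lo = minimize_scalar(p_f, bounds=(mu_lo, mu_hi), method="bounded")
hi = minimize_scalar(lambda m: -p_f(m), bounds=(mu_lo, mu_hi), method="bounded")
pf_min, pf_max = lo.fun, -hi.fun
print(pf_min, pf_max)              # bounds on the failure probability
```

In the general (non-parametric) Pbox case, or when the inner probability requires sampling, this outer optimization wraps a full reliability analysis, which is why surrogate-based strategies are attractive there.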
Similarly to the concept of Most Probable Point (MPP) of failure in reliability analysis with the probability formalism, the concept of Most Probable Focal Element (MPFE) has been proposed by Jiang et al. (2013). Due to the discrete nature of the basic mass assignment, the MPFE is defined as a region rather than a point, which has the maximum contribution to the failure probability (Figure 4.9). Du (2008) proposed to extend FORM to evidence theory in order to account for both aleatory and epistemic uncertainties in a unified framework. Let us consider an input space divided between the aleatory space \(\mathcal {U}\) and the epistemic space \(\mathcal {X}\). Consider also a state function \((\mathbf {U},\mathbf {X})\in \mathcal {U}\times \mathcal {X} \mapsto g(\mathbf {U},\mathbf {X})\) and \({\mathbf {C}}_{{\mathbf {X}}_i}\) for i ∈ {1, …, n} the focal elements of X, with n the number of focal elements and m(⋅) the basic mass assignment. It is possible to define the belief and plausibility of the probability of failure as follows:

\(\mathrm {Bel}(P_f) = \sum _{i=1}^{n} m({\mathbf {C}}_{{\mathbf {X}}_i})\, \mathbb {P}(G_{\min }(\mathbf {U},\mathbf {X})>T), \quad \mathrm {Pl}(P_f) = \sum _{i=1}^{n} m({\mathbf {C}}_{{\mathbf {X}}_i})\, \mathbb {P}(G_{\max }(\mathbf {U},\mathbf {X})>T)\)
with G min(U, X) and G max(U, X) the global minimum and maximum values of g(⋅) over the subset \({\mathbf {C}}_{{\mathbf {X}}_i}\). The calculation of the belief or plausibility measures is thus transformed into the calculation of the minimum or maximum probability of failure on each focal element of X. In order to compute Bel(P f) and Pl(P f), it is necessary to carry out interval analysis (IA) to determine G min and G max and probability analysis (PA) to estimate the corresponding probability of failure. Du proposed FORM-UUA to solve this nested problem: FORM is used for the PA and non-linear optimization for the IA. To overcome the nested loop problem, FORM-UUA provides a sequential approach in which the interval variables are fixed during the MPP search in the aleatory space, and the optimization to find the maximum and minimum of the state function g(⋅) is performed with the aleatory variables fixed at the MPP. The values of both evidence theory measures indicate the effect of aleatory uncertainty on a response, while the gap between them reflects the effect of epistemic uncertainty. Yao et al. (2013) extended FORM-UUA by reformulating the double-loop optimization problem into an equivalent single-level optimization (SLO) problem.
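The double loop above can be sketched for a single epistemic variable with interval focal elements. In this sketch the interval analysis is done by monotonicity of a hypothetical state function (FORM-UUA would use non-linear optimization), and the probability analysis by crude Monte Carlo rather than FORM; the masses and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4.0                                    # failure event: g(U, X) > T

def g(u, x):
    # Hypothetical state function, increasing in the epistemic variable x,
    # so G_min/G_max over a focal element are reached at its endpoints
    return u + x

# Evidence structure on X: interval focal elements with BPA masses summing to 1
focal = [(-1.0, 0.0), (0.0, 1.0), (1.0, 2.0)]
mass = [0.3, 0.5, 0.2]

U = rng.standard_normal(200_000)           # aleatory samples, U ~ N(0, 1)
bel = pl = 0.0
for (x_lo, x_hi), m in zip(focal, mass):
    g_min, g_max = g(U, x_lo), g(U, x_hi)  # interval analysis (IA) step
    bel += m * np.mean(g_min > T)          # failure for every x in the cell
    pl += m * np.mean(g_max > T)           # failure for some x in the cell
print(bel, pl)                             # Bel(P_f) <= P_f <= Pl(P_f)
```

By construction Bel(P_f) ≤ Pl(P_f), and the gap between the two measures reflects the epistemic uncertainty, as noted above.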
Xiao et al. (2015) proposed to transform the variables described with evidence theory into random variables. Support vector regression (SVR, see Chapter 3) is used to build an approximation of the limit-state function. Based on the surrogate model, the MPP of the approximate reliability problem with only random variables is found. Then, the MPFE of the original problem with evidence variables is determined. Using the MPFE and the monotonicity of the limit-state function, the contributions of some focal elements to the Belief and Plausibility measures are estimated. The number of focal elements involved in the calculation of the extreme values of the limit-state function is therefore reduced, facilitating the reliability analysis for mixed uncertain variables. The challenges of this approach are the proper transformation of evidence variables into random variables and the control of the SVR uncertainty.
Nannapaneni and Mahadevan (2016) developed a probabilistic framework to account for both aleatory and epistemic uncertainty in reliability analysis. It is based on FORM, extended to include uncertain distribution parameters, distribution types, uncertain correlations, and model errors using auxiliary variables, based on the probability integral transform and the theorem of total probability. This formalism makes it possible to avoid the nested loop of uncertainty propagation when accounting for different uncertainty formalisms, resulting in a single-loop approach combining MCS and FORM for reliability analysis in the presence of aleatory and epistemic uncertainties.
In the case where the epistemic variables are modeled with intervals instead of evidence theory, Yang et al. (2015) proposed to combine Kriging (in a classification mode instead of a regression approach), surrogate refinement, and the DIRECT optimization algorithm (Finkel 2003) to find the maximum and minimum values of g(⋅) for each aleatory sample, together with CMC to estimate the bounds on the probability of failure.
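A minimal sketch of the underlying hybrid random/interval idea, without the Kriging classifier and with a simple grid search standing in for DIRECT; the limit-state function and the interval are hypothetical. For each aleatory sample, the min and max of g over the interval variable bound the failure indicator, which yields CMC bounds on the failure probability.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2.5                                    # failure event: g(U, X) > T

def g(u, x):
    # Hypothetical limit-state, non-monotonic in the interval variable x
    return u[:, None] + np.sin(3.0 * x)[None, :]

U = rng.standard_normal(50_000)            # aleatory CMC samples
x_grid = np.linspace(0.0, 2.0, 101)        # interval variable X in [0, 2]

# Per-sample interval analysis on a grid (Yang et al. use the DIRECT
# optimizer with an adaptively refined Kriging classifier instead)
G = g(U, x_grid)
pf_lo = np.mean(G.min(axis=1) > T)         # failure for every x: lower bound
pf_hi = np.mean(G.max(axis=1) > T)         # failure for some x: upper bound
print(pf_lo, pf_hi)
```

The grid search is only affordable because g is cheap here; for expensive models, the surrogate-plus-DIRECT machinery is precisely what keeps the per-sample optimizations tractable.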
Within the framework of possibility theory, Mourelatos and Zhou (2005) proposed a reliability estimation method for cases with insufficient data. Instead of using classical discretization methods (Akpan et al. 2002) or vertex methods (Penmetsa and Grandhi 2002) to propagate the fuzzy uncertainty, the authors proposed a hybrid global-local optimization method. The optimization approach is used to calculate the confidence level of the fuzzy response. The DIRECT (DIviding RECTangles) global optimizer is first used, followed by a local optimizer based on sequential quadratic programming.
Mixed aleatory and epistemic reliability analyses are more complex to set up than traditional probability estimation, due to the need to combine different mathematical formalisms to account for the diverse natures of uncertainty and to mix different numerical analysis tools such as sampling, optimization, and combinatorial calculations.
7 Summary
In this chapter, an overview of the main algorithms that may be applied to the estimation of rare event probabilities, modeled by a threshold exceedance of an input–output function g(⋅), has been presented. Their domain of applicability may vary a lot from one algorithm to another, and thus the choice of a given simulation technique depends on practical characteristics of the reliability problem, such as the ability to resample, the knowledge available on the density of g(U), the computational cost and the non-linearity of g(⋅), the dimension of U, or the complexity of the limit-state surface. Figure 4.10 proposes a classification of the reliability analysis techniques presented in this chapter.
In addition to the uncertainty propagation presented in Chapter 3 and the reliability analysis presented in this chapter, solving a single-discipline optimization problem in the presence of uncertainty requires formulating the optimization problem and choosing a suitable optimization algorithm. The next chapter focuses on these key issues.
References
Akpan, U., Rushton, P., and Koko, T. (2002). Fuzzy probabilistic assessment of the impact of corrosion on fatigue of aircraft structures. In 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Denver, CO, USA.
Au, S. K. and Beck, J. L. (2001). Estimation of small failure probabilities in high dimensions by subset simulations. Probabilistic Engineering Mechanics, 16(4):263–277.
Balesdent, M., Morio, J., and Brevault, L. (2016). Rare event probability estimation in the presence of epistemic uncertainty on input probability distribution parameters. Methodology and Computing in Applied Probability, 18(1):197–216.
Balesdent, M., Morio, J., and Marzat, J. (2013). Kriging-based adaptive importance sampling algorithms for rare event estimation. Structural Safety, 13:1–10.
Basudhar, A., Missoum, S., and Sanchez, A. (2008). Limit state function identification using Support Vector Machines for discontinuous responses and disjoint failure domains. Probabilistic Engineering Mechanics, 23:1–11.
Bect, J., Ginsbourger, D., Li, L., Picheny, V., and Vazquez, E. (2012). Sequential design of computer experiments for the estimation of a probability of failure. Statistics and Computing, 22(3):773–793.
Bjerager, P. (1988). Probability integration by directional simulation. Journal of Engineering Mechanics, 114(8):1288–1302.
Bjerager, P. (1991). Methods for structural reliability computation, pages 89–136. Springer Verlag, New York.
Botev, Z. I. and Kroese, D. P. (2012). Efficient Monte Carlo simulation via the generalized splitting method. Statistics and Computing, 22(1):1–16.
Bourinet, J.-M., Deheeger, F., and Lemaire, M. (2011a). Assessing small failure probabilities by combined subset simulation and support vector machines. Structural Safety, 33:343–353.
Breitung, K. (1984). Asymptotic approximation for multinormal integrals. Journal of Engineering Mechanics, 110(3):357–366.
Bucklew, J. A. (2004). Introduction to Rare Event Simulation. Springer.
Cannamela, C., Garnier, J., and Iooss, B. (2008). Controlled stratification for quantile estimation. Annals of Applied Statistics, 2(4):1554–1580.
Cérou, F., Del Moral, P., Furon, T., and Guyader, A. (2012). Sequential Monte-Carlo for rare event estimation. Statistics and Computing, 22(3):795–808.
Cérou, F., Del Moral, P., Le Gland, F., and Lezaud, P. (2006). Genetic genealogical models in rare event analysis. INRIA report, 1797:1–30.
Coles, S. G. (2001). An introduction to statistical modeling of extreme values. Springer, New York.
De Boer, P.-T., Kroese, D. P., Mannor, S., and Rubinstein, R. Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19–67.
Deheeger, F. and Lemaire, M. (2007). Support vector machine for efficient subset simulations: 2SMART method. In 10th International Conference on Application of Statistics and Probability in Civil Engineering, Tokyo, Japan.
Dekkers, A. L. M. and De Haan, L. (1989). On the estimation of the extreme-value index and large quantile estimation. The Annals of Statistics, 17(4):1795–1832.
Ditlevsen, O. and Madsen, H. (1996). Structural Reliability Methods. John Wiley and Sons, New York.
Du, X. (2008). Unified uncertainty analysis by the first order reliability method. Journal of Mechanical Design, 130(9):091401.
Dubourg, V., Deheeger, F., and Sudret, B. (2011). Metamodel-based importance sampling for the simulation of rare events. In Faber, M., Köhler, J., and Nishijima, K. (Eds.), Proceedings of the 11th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP2011), Zurich, Switzerland.
Echard, B. (2012). Kriging-based reliability assessment of structures submitted to fatigue. PhD thesis, Université Blaise Pascal.
Echard, B., Gayton, N., and Lemaire, M. (2011). AK-MCS : An active learning reliability method combining Kriging and Monte-Carlo Simulation. Structural Safety, 33:145–154.
Echard, B., Gayton, N., Lemaire, M., and Relun, N. (2013). A combined importance sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliability Engineering & System Safety, 111:232–240.
Embrechts, P. and Schmidli, H. (1994). Modelling of extremal events in insurance and finance. Zeitschrift für Operations Research, 39(1):1–34.
Finkel, D. E. (2003). DIRECT optimization algorithm user guide. Center for Research in Scientific Computation, North Carolina State University, 2:1–14.
Gerstner, T. and Griebel, M. (2003). Dimension-adaptive tensor-product quadrature. Computing, 71(1):65–87.
Hasofer, A. and Lind, N. (1974). An exact and invariant first-order reliability format. Journal of Engineering Mechanics, 100:111–121.
Hosking, J. and Wallis, J. (1987). Parameter and quantile estimation for the generalized Pareto distribution. Technometrics, 29(3):339–349.
Hurtado, J. E. (2013). Assessment of reliability intervals under input distributions with uncertain parameters. Probabilistic Engineering Mechanics, 32:80–92.
Janusevskis, J. and Le Riche, R. (2012). Simultaneous Kriging-based estimation and optimization of mean response. Journal of Global Optimization, 55(2):313–336.
Jiang, C., Zhang, Z., Han, X., and Liu, J. (2013). A novel evidence-theory-based reliability analysis method for structures with epistemic uncertainty. Computers & Structures, 129:1–12.
Kiureghian, A. D. and Dakessian, T. (1998). Multiple design points in first and second-order reliability. Structural Safety, 20(1):37–49.
Kiureghian, A. D., Lin, H.-Z., and Hwang, S.-J. (1987). Second-order reliability approximations. Journal of Engineering Mechanics, 113(8):1208–1225.
Kroese, D. P., Brereton, T., Taimre, T., and Botev, Z. I. (2014). Why the Monte Carlo method is so important today. Wiley Interdisciplinary Reviews: Computational Statistics, 6(6):386–392.
Kroese, D. P. and Rubinstein, R. Y. (2012). Monte-Carlo methods. Wiley Interdisciplinary Reviews: Computational Statistics, 4(1):48–58.
Lagnoux, A. (2006). Rare event simulation. Probability in the Engineering and Informational Sciences, 20(1):45–66.
Lassen, T. and Recho, N. (2006). Fatigue Life Analyses of Welded Structures. ISTE Wiley, New York.
Lebrun, R. and Dutfoy, A. (2009a). A generalization of the Nataf transformation to distributions with elliptical copula. Probabilistic Engineering Mechanics, 24(2):172–178.
Lebrun, R. and Dutfoy, A. (2009b). An innovating analysis of the Nataf transformation from the copula viewpoint. Probabilistic Engineering Mechanics, 24(3):312–320.
Li, J. and Xiu, D. (2010). Evaluation of failure probability via surrogate models. Journal of Computational Physics, 229:8966–8980.
Li, L., Bect, J., and Vazquez, E. (2012). Bayesian Subset Simulation: a Kriging-based subset simulation algorithm for the estimation of small probabilities of failure. In 11th International Probabilistic Assessment and Management Conference (PSAM11) and The Annual European Safety and Reliability Conference (ESREL 2012), Helsinki, Finland.
Madsen, H., Krenk, S., and Lind, N. C. (1986). Methods of structural safety. Springer-Verlag.
Morio, J. (2012). Extreme quantile estimation with nonparametric adaptive importance sampling. Simulation Modelling Practice and Theory, 27(0):76–89.
Morio, J. and Balesdent, M. (2015). Estimation of Rare Event Probabilities in Complex Aerospace and Other Systems: A Practical Approach. Woodhead Publishing.
Mourelatos, Z. P. and Zhou, J. (2005). Reliability estimation and design with insufficient data based on possibility theory. AIAA Journal, 43(8):1696–1705.
Nannapaneni, S. and Mahadevan, S. (2016). Reliability analysis under epistemic uncertainty. Reliability Engineering & System Safety, 155:9–20.
Nataf, A. (1962). Détermination des distributions dont les marges sont données (in French). Comptes rendus de l’Académie des Sciences, 225:42–43.
Neves, C. and Fraga Alves, M. (2004). Reiss and Thomas’ automatic selection of the number of extremes. Computational Statistics and Data Analysis, 47(4):689–704.
Novak, E. (1988). Deterministic and Stochastic Error Bounds in Numerical Analysis, volume 1349 of Lecture Notes in Mathematics. Springer, Berlin, Germany.
Pei-Ling, L. and Kiureghian, A. D. (1991). Optimization algorithms for structural reliability. Structural Safety, 9(3):161–177.
Penmetsa, R. and Grandhi, R. (2002). Estimating membership response function using surrogate models. In 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Denver, CO, USA.
Rackwitz, R. and Flessler, B. (1978). Structural reliability under combined random load sequences. Computers and Structures, 9(5):489–494.
Robert, C. and Casella, G. (2005). Monte Carlo Statistical Methods. Springer, New York.
Rosenblatt, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics, 23:470–472.
Schöbi, R. and Sudret, B. (2017). Structural reliability analysis for P-boxes using multi-level meta-models. Probabilistic Engineering Mechanics, 48:27–38.
Schueremans, L. and Van Gemert, D. (2005). Use of Kriging as Meta-model in simulation procedures for structural reliability. In 9th International conference on structural safety and reliability, Rome, pages 2483–2490.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, London.
Smolyak, S. (1963). Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Mathematics, Doklady, 4:240–243.
Sobol, I. M. (1994). A Primer for the Monte Carlo Method. CRC Press, Boca Raton, Fl.
Sudret, B. (2012). Meta-models for structural reliability and uncertainty quantification. In 5th Asian-Pacific Symposium on Structural Reliability and its Applications, Singapore, Singapore.
Thompson, P., Cai, Y., Reeve, D., and Stander, J. (2009). Automated threshold selection methods for extreme wave analysis. Coastal Engineering, 56(10):1013–1021.
Vazquez, E. and Bect, J. (2009). A Sequential Bayesian algorithm to estimate a probability of failure. In 15th IFAC Symposium on System Identification, Saint-Malo, France.
Xiao, M., Gao, L., Xiong, H., and Luo, Z. (2015). An efficient method for reliability analysis under epistemic uncertainty based on evidence theory and support vector regression. Journal of Engineering Design, 26(10–12):340–364.
Zhao, Y.-G. and Ono, T. (1999). A general procedure for first/second-order reliability method (FORM/SORM). Structural Safety, 21(2):95–112.
Yang, X., Liu, Y., Gao, Y., Zhang, Y., and Gao, Z. (2015). An active learning Kriging model for hybrid reliability analysis with both random and interval variables. Structural and Multidisciplinary Optimization, 51(5):1003–1016.
Yao, W., Chen, X., Huang, Y., and van Tooren, M. (2013). An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization. Reliability Engineering & System Safety, 116:28–37.
Zhang, P. (1996). Nonparametric importance sampling. Journal of the American Statistical Association, 91(434):1245–1253.
Zuniga, M. M., Garnier, J., Remy, E., and de Rocquigny, E. (2011). Adaptive directional stratification for controlled estimation of the probability of a rare event. Reliability Engineering & System Safety, 96(12):1691–1712.
Morio, J., Brevault, L., Balesdent, M. (2020). Reliability Analysis. In: Aerospace System Analysis and Optimization in Uncertainty. Springer Optimization and Its Applications, vol 156. Springer, Cham. https://doi.org/10.1007/978-3-030-39126-3_4