Abstract
Time-variant reliability is often evaluated by Rice’s formula combined with the First Order Reliability Method (FORM). To improve the accuracy and efficiency of the Rice/FORM method, this work develops a new simulation method with the first order approximation and series expansions. The approximation maps the general stochastic process of the response into a Gaussian process, whose samples are then generated by the Expansion Optimal Linear Estimation if the response is stationary or by the Orthogonal Series Expansion if the response is non-stationary. As the computational cost largely comes from estimating the covariance of the response at expansion points, a cheaper surrogate model of the covariance is built and allows for significant reduction in computational cost. In addition to its superior accuracy and efficiency over the Rice/FORM method, the proposed method can also produce the failure rate and probability of failure with respect to time for a given period of time within only one reliability analysis.
1 Introduction
Reliability is the ability of a system or component to fulfill its intended function under given circumstances over a specified time period (Choi et al. 2007). It is usually measured by the probability of such ability. If a limit-state function, which indicates the state of success or failure, is available, the reliability can be estimated computationally. If the limit-state function is dependent on time, so is the reliability.
In the past decades, various time-variant reliability analysis methodologies were developed. Even though much progress has been made, maintaining both the accuracy and efficiency of reliability analysis remains a challenge and an ongoing research topic. Examples of the recent work include the nested extreme response surface approach (Wang and Wang 2012b), the composite limit state approach (Singh et al. 2010), the importance sampling approach (Singh et al. 2011a), and other residual life prediction methods based on testing data (Hu et al. 2012), prognostics and health management (Youn et al. 2011), and degradation knowledge (Gebraeel et al. 2009).
Amongst existing methods, the most dominating one is the upcrossing rate method based on Rice’s formula (Rice 1944, 1945). An upcrossing is the event that a limit-state function passes (upcrosses) its failure threshold from the safe region. For a given period of time, there might be many upcrossings. Rice’s formula assumes that all the upcrossings are independent. With this assumption, the upcrossing rate can be obtained for a limit-state function that follows a Gaussian process. The upcrossing rate is the rate of change in the probability of upcrossing with respect to time. The combined First Order Reliability Method (FORM) and Rice’s formula (Rice/FORM) (Hu and Du 2012; Hagen and Tvedt 1991) has been widely used because FORM can transform a time-variant limit-state function into a Gaussian process; as a result, the upcrossing rate is available with Rice’s formula. Knowing the upcrossing rate, we can easily calculate the time-variant reliability.
Using the first order approximation, the Rice/FORM method can efficiently estimate time-variant reliability, but its accuracy is not satisfactory when upcrossings are strongly dependent, especially when the reliability is low, when the probability of failure is high, or when there are many dependent upcrossings. The estimated probability of failure is always higher than the true value, and the result is therefore conservative.
It is possible to improve the accuracy of the Rice/FORM method by removing Rice’s formula, or the independent upcrossing assumption. After the limit-state function is approximated at its limit state, the response becomes a general Gaussian process. (The approximated limit-state function may not be a Gaussian process everywhere; it is a Gaussian process only at the limit state.) It is a challenging task to calculate the probability that the limit-state function upcrosses its limit state for the first time, which is the time-variant probability of failure. Even today, there is no explicit formula for the time-variant probability of failure of a general Gaussian process (Lovric 2011). Approximations exist, but they are accurate only when the limit state is very large or approaches infinity (Lovric 2011).
In this work we propose a simulation method to estimate the time-variant probability of failure. The stochastic process of the response is mapped into an equivalent Gaussian stochastic process by integrating FORM with stochastic process characterization methods. Sampling is then performed on the equivalent stochastic process to estimate the time-dependent probability of failure. Since the proposed sampling method is built on FORM, we call it the First Order Simulation Approach (FOSA). Specifically, FOSA is built upon series expansion methods that include the Expansion Optimal Linear Estimation (EOLE) and the Orthogonal Series Expansion (OSE). Both methods were developed to approximate random fields (Sudret and Kiureghian 2000) and have been used to simulate Gaussian processes.
The contributions and significance of this work include the following aspects: (1) This work develops a numerical procedure that integrates FORM and series expansion methods so that it can handle general time-variant limit-state functions with random variables, non-stationary stochastic processes, and time. (2) This work explores an efficient way to approximate the time-dependent functions of the reliability index function and auto-covariance function of the expanded stochastic process obtained from FORM by employing the Kriging regression method. Both of the functions call FORM repeatedly. Reducing the time of evaluating the two functions is the key to the improvement of the efficiency. (3) The new method predicts not only the reliability defined in a period of time [0,T] , but also the reliability function and failure rate function with respect to all the time intervals [0,τ] (τ<T) within [0,T] using only one reliability analysis. The reliability or failure rate function is vital to life-cycle cost optimization (Singh et al. 2010; Hu and Du 2013c), maintenance scheduling (Wang et al. 2011), and warranty decision making.
The remainder of this paper is organized as follows. In Section 2, we review the basics of time-variant reliability analysis and its major methodologies. The new method, FOSA, is discussed in Section 3. It is then applied to two examples in Section 4. Conclusions are given in Section 5.
2 Review of time-variant reliability analysis
In this section, we review the definition of time-variant reliability and several commonly used reliability methods.
2.1 Time-variant reliability
A general limit-state function is given by G(t)=g(X, Y(t), t), where X=[X_1, X_2, ⋯, X_n] is a vector of random variables, Y(t)=[Y_1(t), Y_2(t), ⋯, Y_m(t)] is a vector of stochastic processes, and t is time. A failure occurs if

\(G(t)=g(\mathbf{X},\mathbf{Y}(t),t)>e\)  (1)

in which e is a failure threshold. The time-variant probability of failure over a time interval [0, T] is defined by

\(p_f(0,T)=\Pr\{\exists\, t\in[0,T],\ g(\mathbf{X},\mathbf{Y}(t),t)>e\}\)  (2)

where ∃ stands for “there exists”.
As shown in Fig. 1, the first-passage failure occurs when G(t) passes its threshold e for the first time. The event that G(t) passes its threshold is called an upcrossing. Figure 1 also indicates that there may be a number of crossings for a given period of time.
The widely used methodologies for time variant reliability analysis include upcrossing rate methods, other approximation methods without using an upcrossing rate, and simulation methods. In the subsequent sections, we briefly review these three categories of methods.
2.2 Upcrossing rate methods
The upcrossing rate v(t) is defined by (Zhang and Du 2011)

\(v(t)=\lim\limits_{\Delta t\to 0^{+}} \frac{\Pr\{G(t)<e\cap G(t+{\Delta} t)>e\}}{\Delta t}\)  (3)

It is the rate of change in the probability of upcrossing, \(\Pr \{G(t)<e\cap G(t+{\Delta } t)>e\}\). The most commonly used approach is Rice’s formula (Rice 1944, 1945), which has been further developed in many other studies. These methods can be roughly divided into two groups: those based on the assumption of independent upcrossings and those that relax this assumption by considering dependent upcrossings.
The methods in the first group assume that upcrossings are independent and that their occurrences follow a Poisson distribution. With this assumption, the reliability over [0, T] is

\(R(0,T)=R(0)\exp\left(-{\int_{0}^{T}} v(t)\,dt\right)\)  (4)

in which R(0) is the reliability at the initial time instant t=0.
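Under the independent-upcrossing (Poisson) assumption, the reliability thus follows from the integrated upcrossing rate as R(0, T)=R(0)exp(−∫ v(t)dt). A minimal numerical sketch is given below; the upcrossing rate v(t) used here is a made-up illustrative function, not one derived from an actual limit-state analysis.

```python
import numpy as np

def reliability_poisson(v, R0, T, n=1000):
    """Poisson-assumption reliability: R(0, T) = R(0) * exp(-int_0^T v(t) dt).

    The integral is evaluated with the trapezoidal rule on n grid points.
    """
    t = np.linspace(0.0, T, n)
    vt = v(t)
    integral = np.sum(0.5 * (vt[1:] + vt[:-1]) * np.diff(t))
    return R0 * np.exp(-integral)

# Hypothetical upcrossing rate, chosen only for illustration.
v = lambda t: 0.02 * (1.0 + 0.5 * np.sin(np.pi * t))

R = reliability_poisson(v, R0=0.999, T=2.0)
pf = 1.0 - R  # time-variant probability of failure over [0, T]
```

For this particular v(t), the sine term integrates to zero over [0, 2], so the integral is 0.04 and R ≈ 0.999·e^(−0.04).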
The approximation of v(t) has been extensively studied. For instance, Lindgren (1984) and Breitung (1984, 1988) derived expressions for the asymptotic upcrossing rate of stationary Gaussian processes. Ditlevsen (1983) gave bounds on the upcrossing rate of a non-stationary Gaussian process. Hagen and Tvedt (1991, 1992) later proposed a parallel system approach to solve general time-variant reliability problems in which binomial cumulative distributions are involved. Based on Hagen and Tvedt’s work, the PHI2 method was proposed by Sudret (Andrieu-Renaud et al. 2004); this method transforms dependent responses at two successive time instants into independent ones by introducing a new random variable (Andrieu-Renaud et al. 2004). Later, Zhang and Du derived equations for the upcrossing rates of function generator mechanisms based on the First Order Second Moment method (Zhang and Du 2011). Du and Hu (2012) approximated the upcrossing rate based on FORM and Rice’s formula in the application of hydrokinetic turbine blades.
The methods in the first group are accurate when there is only one upcrossing or a small number of upcrossings, which happens when the threshold is at a very high level. When there are many dependent upcrossings, these methods are not accurate.
The methods in the second group relax the independent upcrossing assumption by accounting for dependent upcrossings. To consider the fact that the first passage may be followed by more upcrossings as shown in Fig. 1, Vanmarcke (1975) modified the Poisson formula according to the bandwidth parameter of a stochastic process. Several empirical formulas have also been suggested to modify the outcrossing rate according to simulation results (Dahlberg 1988; Gusev 1996; Preumont 1985). Bernard and Shipley (1972), and Madsen and Krenk (1984) derived an integral equation for the first-passage probability density from different approaches. The integral equation was then extended to the approximation of the first passage rate of general problems with random variables and non-stationary processes (Hu and Du 2013b; Hu et al. 2013). Many other methods have also been developed with different principles. For example, the Markov process method was proposed by considering the correlation between two successive time instants (Yang and Shinozuka 1971). The methods in the second group can significantly improve the accuracy, especially when there are multiple upcrossings (Hu and Du 2013b; Hu et al. 2013). But their computational cost is also increased. Their implementation is in general not as easy as those in the first group.
Among the above methods, the most commonly used method is the Rice/FORM method, which belongs to the first group. The method combines FORM and Rice’s formula and has many advantages. For example, it is efficient and easy to use. As discussed above, however, its accuracy is not good for many applications. The purpose of this work is to improve its accuracy by eliminating the use of Rice’s formula.
2.3 Methods without using upcrossing rate
As discussed above, upcrossing rate methods may not be accurate due to the assumption of independent upcrossings. Many methods without using an upcrossing rate have been developed. Based on the fact that the time-variant reliability is determined by the global extreme values of the response, Chen and Li (2007) proposed an approach for the evaluation of the extreme value distribution of dynamic systems based on the probability density evolution method. Singh and Mourelatos (Singh et al. 2010; Li et al. 2012) developed a composite limit state method to transform the time-variant problem into a time-invariant one. It is accurate for special limit-state functions in the form of G(t)=g(X, t).
Surrogate models have also been used to approximate the extreme values of the limit-state function. For a special limit-state function G(t)=g(X, t),

\(p_f(0,T)=\Pr\{G_{\max}>e\}\)  (5)

where \(G_{\max}=\max\limits_{t\in[0,T]} g(\mathbf{X},t)\) is the global maximum response on [0, T].
After a surrogate model for \(G_{\max}\) is established (Žilinskas 1992; Wang et al. 2001; Eldred and Burkardt 2009; Sudret 2008; Eldred 2009; Richard et al. 2012), the time-variant problem is transformed into a time-invariant one. For example, a nested extreme response surface approach has been developed by Wang and Wang (2012a) using a Kriging-based method. The surrogate model methods must rely on effective global optimization algorithms, and their accuracy and efficiency deteriorate when stochastic processes Y(t) are involved in the limit-state function. Another drawback is that they can only obtain the reliability R(0, T) for a given period of time [0, T]; the reliability function R(0, t) for t<T is not available.
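As a quick illustration of the extreme-value formulation p_f(0, T)=Pr{G_max>e}, the sketch below brute-forces the global maximum on a time grid for a toy limit state g(X, t)=X sin(πt); the limit state, the distribution of X, and the threshold are all hypothetical and chosen only to make the counting concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
N, e, T = 50_000, 2.0, 2.0

X = rng.normal(1.0, 0.5, size=N)             # samples of the random variable
t = np.linspace(0.0, T, 101)                 # time grid for the maximum search
G = X[:, None] * np.sin(np.pi * t)[None, :]  # toy response g(X, t) = X sin(pi t)

G_max = G.max(axis=1)                        # per-sample global maximum on [0, T]
pf = np.mean(G_max > e)                      # p_f(0, T) = Pr{G_max > e}
```

Because sin(πt) reaches both +1 and −1 on [0, 2], G_max here equals |X|, so the estimate should be close to Pr{|X|>2} ≈ 0.023 for X~N(1, 0.5²). In practice the brute-force grid is replaced by a global optimization over t, which is exactly where these methods become expensive.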
2.4 Sampling methods
The most direct method is Monte Carlo simulation (MCS). It generates a large number of samples of the input random variables and stochastic processes and then evaluates the limit-state function at the sample points. MCS is computationally expensive, and advanced sampling methods have been developed for better efficiency. For example, an adaptive importance sampling method was developed by Mori and Ellingwood (1993). González-Fernández and Leite da Silva (2011) proposed a sequential cross-entropy MCS method. Hu and Du (2013a) presented a sampling approach to the extreme value distribution for problems with only one stochastic process. An importance sampling method was also developed by Singh and Mourelatos (Singh et al. 2011a).
In sum, the Rice/FORM method is efficient and easy to implement, but its accuracy is poor for problems with strongly dependent upcrossings. On the other hand, MCS is accurate but not efficient. In this work, we take advantage of both the Rice/FORM method and MCS by integrating them seamlessly through the use of series expansions. This improves both the accuracy and efficiency of the Rice/FORM method.
3 First order simulation approach (FOSA)
In this section, we first give the general principle of FOSA and then discuss its detailed steps. We assume that all the random variables in X and all the stochastic processes in Y(t) are independent.
3.1 Overview of FOSA
The purpose of this work is to improve both the efficiency and accuracy of the first order time-dependent reliability method. Instead of computing outcrossing rates at many time instants on [0, T] after G(t) is linearized at the Most Probable Points (MPPs), FOSA simulates the linearized process of G(t) on [0, T]. The simulation takes place on the output side (G(t)) instead of the input side (X and Y(t)). This improves accuracy because the independent upcrossing assumption is eliminated. More specifically, by using FORM, FOSA maps G(t) into an equivalent Gaussian process H(t), which is then simulated without evaluating the original limit-state function G(t). High efficiency is achieved by the efficient construction of the mean and covariance of H(t) on [0, T], which requires only a minimal number of FORM analyses or MPP searches.
Simulating a stochastic process usually requires characterizing or expanding the process. The methods for characterizing a stochastic process can be roughly divided into two types (Grigoriu 2003; Itoh 2007). The first type simulates a stochastic process with only one sample trajectory using time-series modeling. These methods employ the Auto-Regressive (AR) model, the Moving Average (MA) model, the Auto-Regressive Moving Average (ARMA) model, and the Auto-Regressive Integrated Moving Average (ARIMA) model (Crato and Ray 1996; Newbold et al. 1994). They play a vital role in weather forecasting, financial risk assessment, and flood risk prediction. In engineering, the AR model has been used to model the road height stochastic process (Li et al. 2012; Singh et al. 2011b). The second type includes the spectral representation methods (Bergman et al. 1997). Amongst them, the Karhunen-Loeve (KL) method (Ghanem and Spanos 1991) is widely used. It requires calculating an integral to obtain the eigenvalues and eigenfunctions for the expansion; the analytical solution of the integral is available only for simple cases, and most of the time finite-element analysis (Ghanem and Spanos 1991) or mesh-free methods (Rahman and Xu 2005) need to be employed. Later, as extensions of the KL method, the Orthogonal Series Expansion (OSE) (Zhang and Ellingwood 1994) and the Expansion Optimal Linear Estimation (EOLE) (Sudret and Der Kiureghian 2002) methods were developed. This work uses the EOLE and OSE methods, which are reviewed in Appendixes A and B, respectively.
Since it is impossible to simulate the general response G(t) directly, we convert it into an equivalent Gaussian stochastic process H(t) such that

\(\Pr\{G(t)>e\}=\Pr\{H(t)>0\},\quad \forall\, t\in[0,T]\)  (6)

For a given G(t), there exist many Gaussian processes that satisfy the probability equivalency in (6), and identifying an equivalent Gaussian process is difficult. One possible way is to employ FORM at every time instant on [0, T], as FORM is capable of transforming a non-Gaussian random variable into a standard Gaussian random variable. Performing FORM, or the MPP search, at every time instant, however, is computationally expensive. In this work, we explore an efficient way to reduce the number of MPP searches.
The equivalent Gaussian process H(t) associated with FORM is given by (Andrieu-Renaud et al. 2004)

\(H(t)=L(t)-\beta(t)\)  (7)

where L(t) is a standard Gaussian process, −β(t) is the mean of H(t), and β(t) is also the reliability index at t.
FOSA characterizes (expands) H(t). For this purpose, the EOLE and OSE methods are employed. Once the statistical characteristics of H(t) are available from EOLE and OSE, MCS is implemented to estimate the time-variant reliability. Figure 2 shows a brief procedure of FOSA.
In the subsequent sections, we explain the details of FOSA for stationary and non-stationary responses. How do we distinguish whether a given problem is stationary? If the limit-state function is not an explicit function of time and there are no non-stationary processes among its input variables, the problem is stationary; otherwise, it is non-stationary. We focus on non-stationary responses as they are more complicated and more general.
3.2 Stationary H(t)
If time t is not explicitly involved and the input stochastic processes Y(t) are stationary, the limit-state function has the following form:

\(G(t)=g\left(T(\mathbf{U}_{\mathrm{X}}),\,T(\mathbf{U}_{\mathrm{Y}}(t))\right)\)  (8)

where T(⋅) stands for the transformation operator from U_X to X or from U_Y(t) to Y(t).
G(t) is then a stationary process, and so is H(t). The mean −β of H(t) is time independent, and (7) is rewritten as

\(H(t)=L(t)-\beta\)  (9)
Since L(t) is a standard stationary Gaussian process, the auto-correlation function, or auto-covariance function, of L(t) is given by (Hu and Du 2012)

\(\rho_L(t,\tau)={\boldsymbol{\alpha}}_{\mathrm{X}}{\boldsymbol{\alpha}}_{\mathrm{X}}^{T}+{\boldsymbol{\alpha}}_{\mathrm{Y}}\,C(t,\tau)\,{\boldsymbol{\alpha}}_{\mathrm{Y}}^{T}\)  (10)

in which \({\boldsymbol{\alpha}}_{\mathrm{X}}=\mathbf{u}_{\mathrm{X}}^{*}/\beta\), \({\boldsymbol{\alpha}}_{\mathrm{Y}}=\mathbf{u}_{\mathrm{Y}}^{*}/\beta\), and \(\beta=\left\|\mathbf{u}^{*}\right\|\), where \(\mathbf{u}^{*}=[\mathbf{u}_{\mathrm{X}}^{*},\,\mathbf{u}_{\mathrm{Y}}^{*}]\) is the MPP obtained from FORM, and C(t, τ) is given by

\(C(t,\tau)=\mathrm{diag}\left[\rho_1(t,\tau),\,\rho_2(t,\tau),\cdots,\rho_m(t,\tau)\right]\)  (11)

where ρ_i(t, τ), i=1, 2, ⋯, m, are the auto-correlation coefficient functions of \(U_{Y_{i}}(t)\).
Since α_X, α_Y, and β are constant over time, β and the auto-correlation function of L(t) can be fully determined with only one MPP search. We then employ EOLE (reviewed in Appendix A) to expand H(t) by

\(H(t)\approx -\beta+\sum\limits_{i=1}^{p} \frac{Z_i}{\sqrt{\eta_i}}\,{\boldsymbol{\varphi}}_{i}^{T}\,{\boldsymbol{\rho}}_{L}(t)\)  (12)

where Z_i are independent standard Gaussian random variables, η_i and \({\boldsymbol{\varphi}}_{i}\) are the eigenvalues and eigenvectors of the covariance matrix of L(t) at p time instants, p is the expansion order, and ρ_L(t)=[ρ_L(t, t_1), ρ_L(t, t_2), …, ρ_L(t, t_p)]^T gives the correlations between t and the p time instants. Details are available in Appendix A. For the error analysis of EOLE, please refer to Sudret and Der Kiureghian (2002).
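A minimal sketch of the EOLE simulation in (12) is given below, assuming a stationary case where a single MPP search has already produced the reliability index and a squared-exponential auto-correlation for L(t); both the index value and the correlation function are made up for illustration.

```python
import numpy as np

beta, T, p, r = 3.0, 2.0, 10, 5            # index, horizon, expansion points, truncation
tk = np.linspace(0.0, T, p)                # expansion points t_1, ..., t_p
rho = lambda t, s: np.exp(-((t - s) / 0.5) ** 2)  # assumed rho_L from one MPP search

C = rho(tk[:, None], tk[None, :])          # p x p covariance matrix of L(t)
eta, phi = np.linalg.eigh(C)               # eigenpairs in ascending order
eta, phi = eta[-r:][::-1], phi[:, -r:][:, ::-1]   # keep the r largest eigenpairs

def sample_H(t_grid, N, rng):
    """Generate N trajectories of H(t) = L(t) - beta on t_grid via EOLE."""
    rho_t = rho(t_grid[:, None], tk[None, :])     # W x p correlation vectors rho_L(t)
    Z = rng.standard_normal((N, r))               # independent standard normals
    coeff = (phi / np.sqrt(eta)).T                # rows: phi_i^T / sqrt(eta_i)
    return -beta + Z @ (coeff @ rho_t.T)

H = sample_H(np.linspace(0.0, T, 50), 1000, np.random.default_rng(1))
```

Truncating to the r largest eigenvalues is the usual EOLE practice; the trajectories have mean −β and approximately unit variance at the expansion points.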
3.3 Non-stationary H(t)
3.3.1 Problem statement
For a general limit-state function G(t)=g(X, Y(t), t), the associated H(t) may be a non-stationary Gaussian process. The two terms of H(t) are both time-dependent: the mean function −β(t) is a deterministic function of time, and L(t) may be a non-stationary standard Gaussian process with the auto-correlation function given by

\(\rho_L(t,\tau)={\boldsymbol{\alpha}}_{\mathrm{X}}(t)\,{\boldsymbol{\alpha}}_{\mathrm{X}}^{T}(\tau)+{\boldsymbol{\alpha}}_{\mathrm{Y}}(t)\,C(t,\tau)\,{\boldsymbol{\alpha}}_{\mathrm{Y}}^{T}(\tau)\)  (13)

where t, τ∈[0, T], and the α vectors now come from the MPPs at t and τ.
Equation (13) indicates that the MPP search has to be performed twice for each pair of time instants on [0, T] to obtain ρ_L(t, τ). EOLE is therefore not suitable for the non-stationary problem because it requires MPP searches for all pairs of time instants on [0, T], which is extremely computationally expensive. Instead, we use the OSE method (Zhang and Ellingwood 1994), which is reviewed in Appendix B. The critical step of OSE is the computation of the matrix Γ, whose elements are defined by

\({\Gamma}_{ij}={\int_{0}^{T}}{\int_{0}^{T}} \rho_L(t,\tau)\,h_i(t)\,h_j(\tau)\,dt\,d\tau\)  (14)

where h_i(t) and h_j(τ) are the i-th order and j-th order orthogonal functions, respectively. There are many ways to evaluate the integral in (14), such as the adaptive Simpson’s rule, adaptive Gaussian quadrature, and other adaptive integration methods (Press 2007). Direct numerical integration, however, is not efficient here: the integral is two dimensional and requires many evaluations of the integrand, the auto-correlation function ρ_L(t, τ), and each evaluation of ρ_L(t, τ) calls the MPP search twice. To reduce the computational cost, we create a surrogate model for ρ_L(t, τ) and then evaluate the integrals using the surrogate model. We also build a surrogate model for β(t). The use of surrogate models makes the expansion of H(t) much more efficient.
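Once a cheap surrogate is available, the double integral in (14) can be evaluated with an inexpensive product quadrature. The sketch below uses a Gauss-Legendre product rule; rho_hat is a stand-in analytical correlation (in FOSA it would be the Kriging surrogate), and the h_j are Legendre-based orthogonal functions of the kind used later by OSE.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

T, M = 2.0, 6
rho_hat = lambda t, tau: np.exp(-np.abs(t - tau) / 0.8)  # stand-in surrogate

def h(j, t):
    """j-th orthonormal function on [0, T]: scaled Legendre polynomial of degree j-1."""
    tr = 2.0 * t / T - 1.0
    return np.sqrt((2 * j - 1) / T) * Legendre.basis(j - 1)(tr)

x, w = leggauss(40)                   # Gauss-Legendre nodes/weights on [-1, 1]
t = 0.5 * T * (x + 1.0)               # map nodes to [0, T]
w = 0.5 * T * w

Hf = np.array([h(j, t) for j in range(1, M + 1)])   # M x 40 function values
R = rho_hat(t[:, None], t[None, :])                 # 40 x 40 kernel values
Gamma = Hf @ (w[:, None] * R * w[None, :]) @ Hf.T   # Gamma_ij by product quadrature
```

Since the surrogate is cheap, the node count can be raised freely without any extra MPP searches.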
3.3.2 Surrogate models of ρ L (t, τ) and β(t)
In this work, we use the Kriging, or DACE, model (Lophaven et al. 2002) to construct surrogate models for ρ_L(t, τ) and β(t). The Kriging model has been widely used in various areas (Grogan et al. 2013; Lockwood and Mavriplis 2013; Raghavan and Breitkopf 2013; Steponaviče et al. 2014). It provides not only predictions, but also probabilistic errors (mean square errors) of the predictions (Lophaven et al. 2002). The advantages of the Kriging model for the present problem are twofold: it is accurate for the nonlinear functions ρ_L(t, τ) and β(t), and it is efficient for the two-dimensional function ρ_L(t, τ) and the one-dimensional function β(t).
For a to-be-predicted function z(x), the Kriging model is given by

\(z(\mathbf{x})=f(\mathbf{x})+\varepsilon(\mathbf{x})\)  (15)

where f(x) includes polynomial terms with unknown coefficients and ε(x) is the error term, which is assumed to be a Gaussian stochastic process with zero mean and variance σ² (Lophaven et al. 2002). For our problem, z(x) is ρ_L(t, τ) or β(t), and x is (t, τ) or t. Details of the Kriging model are available in Lophaven et al. (2002); here we focus only on the application of the Kriging model to the modeling of ρ_L(t, τ) and β(t).
We maintain high efficiency by carefully exploiting the features of β(t) and ρ_L(t, τ). Even though ρ_L(t, τ) is a two-dimensional function, we do not always need to generate samples for t and τ separately; we do so only when necessary. We can generate samples for t and reuse them for τ. We therefore construct the surrogate models for β(t) and ρ_L(t, τ) simultaneously, because both models share the common input variable t, and the MPP search result at t can be used for both models.
An algorithm is developed for building the Kriging models. It is efficient because the number of MPP searches is kept to a minimum. The algorithm is plotted in Fig. 3 and explained in the flowchart in Table 1, where the MPP searches are highlighted.
The algorithm uses the fact that ρ_L(t, τ)=1 when t=τ. For any instant with t=τ, we obtain ρ_L(t, τ)=1 without calling the MPP search again. With the pairs of time instants (t, τ) added in Step 4, we obtain additional sample points where ρ_L(t, t)=1 and ρ_L(τ, τ)=1. We thus have many more sample points at the highest value ρ_L(t, τ)=1, which significantly increases the accuracy of the surrogate model \(\hat {{\rho }}_{L} (t,\tau )\).
The algorithm calls the MPP search in Steps 2 and 6. Using a good starting point for the MPP search also helps reduce the number of function evaluations. Our strategy is using the MPP of the time instant that is closest to the current time instant as the starting point.
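The surrogate-building step can be sketched with a minimal simple-Kriging (constant-mean) interpolator standing in for the DACE toolbox. The functions beta_true and rho_true below are stand-ins for the expensive FORM results (each evaluation would cost one or two MPP searches in FOSA); note how the diagonal samples ρ_L(t, t)=1 are added for free.

```python
import numpy as np

def krige(Xs, ys, Xq, length=0.5, nugget=1e-8):
    """Minimal simple-Kriging interpolator with a Gaussian correlation kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) / length) ** 2
        return np.exp(-d2.sum(axis=-1))
    K = k(Xs, Xs) + nugget * np.eye(len(Xs))
    w = np.linalg.solve(K, ys - ys.mean())
    return ys.mean() + k(Xq, Xs) @ w

# Stand-ins for the FORM results (hypothetical, for illustration only).
beta_true = lambda t: 2.5 + 0.5 * np.sin(np.pi * t)
rho_true = lambda t, tau: np.exp(-((t - tau) / 0.6) ** 2)

T = 2.0
t0 = np.linspace(0.0, T, 7)          # 7 initial MPP-search instants

# Surrogate for beta(t): one training sample per MPP search.
Xb, yb = t0[:, None], beta_true(t0)

# Surrogate for rho_L(t, tau): reuse the same instants pairwise; the
# diagonal samples rho_L(t, t) = 1 cost no additional MPP search.
tt, ss = [g.ravel() for g in np.meshgrid(t0, t0)]
Xr = np.column_stack([tt, ss])
yr = rho_true(tt, ss)
yr[tt == ss] = 1.0                   # exact (free) diagonal values

beta_hat = krige(Xb, yb, np.array([[0.37]]))
rho_hat = krige(Xr, yr, np.array([[0.3, 0.9]]))
```

A full implementation would also return the prediction mean square error and add sample points where that error is largest, as the algorithm in Table 1 does.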
With \(\hat {{\rho }}_{L} (t,\tau )\) available, the elements of the matrix Γ in (14) are computed by numerical integration based on \(\hat {{\rho }}_{L} (t,\tau )\). Since the numerical integration no longer calls the original limit-state function, we can use any numerical algorithm and call \(\hat {{\rho }}_{L} (t,\tau )\) as many times as needed to ensure high accuracy. Next, we discuss how the stochastic process H(t) is expanded with \(\hat {{\beta }}(t)\) and Γ.
3.3.3 Orthogonal series expansion (OSE) for H(t)
As mentioned previously, we use the OSE method to expand H(t). Suppose the expansion order is M; Γ is then an M×M matrix. H(t) given in (7) is approximated by

\(H(t)\approx -\beta(t)+\sum\limits_{j=1}^{M}\gamma_j\,h_j(t)=-\beta(t)+\sum\limits_{i=1}^{M} Z_i\sqrt{\lambda_i}\sum\limits_{j=1}^{M}{P_{j}^{i}}\,h_j(t)\)  (16)

where γ_i, i=1, 2, ⋯, M, are correlated zero-mean Gaussian random variables, λ_i is the i-th eigenvalue of Γ, \({P_{j}^{i}}\) is the j-th element of the i-th eigenvector of Γ, Z_i are independent standard Gaussian random variables, and h_j(t) is the j-th orthogonal function given by

\(h_j(t)=\sqrt{\frac{2j-1}{T}}\,Le_{j-1}(t_r)\)  (17)

where Le_{j−1}(⋅) is the Legendre polynomial of degree j−1 (Wan and Zudilin 2013; Zhang and Gao 2012), and t_r is given by

\(t_r=\frac{2t}{T}-1\)  (18)
Details about how the correlated variables γ_i are transformed into the independent Z_i can be found in Appendix B and in Zhang and Ellingwood (1994). Expanding H(t) with OSE is now independent of the MPP search, so increasing the expansion order does not increase the computational cost: the higher M is, the higher the accuracy. Since M does not affect the number of MPP searches, we can always use a large value of M. The appropriate value of M may be problem dependent; for a specific problem, we can gradually increase M until the result stabilizes. For the same reason, checking convergence does not affect the computational efficiency. This will be demonstrated in the example section.
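The OSE expansion (16) is cheap to evaluate for any M once Γ and the β surrogate are in hand. In the sketch below, Γ is a small made-up symmetric positive-definite matrix and beta_hat a made-up index function, since both would come from the surrogate step in FOSA:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

T, M = 2.0, 6
beta_hat = lambda t: 2.5 + 0.5 * np.sin(np.pi * t)   # stand-in Kriging model

def h(j, t):
    """Orthogonal function (17): scaled Legendre polynomial of degree j-1 on [0, T]."""
    tr = 2.0 * t / T - 1.0
    return np.sqrt((2 * j - 1) / T) * Legendre.basis(j - 1)(tr)

rng = np.random.default_rng(2)
A = rng.standard_normal((M, M))
Gamma = A @ A.T / M + 0.1 * np.eye(M)     # made-up SPD Gamma for illustration

lam, P = np.linalg.eigh(Gamma)            # eigenvalues lam_i, eigenvectors P[:, i]

t_grid = np.linspace(0.0, T, 50)
hj = np.array([h(j, t_grid) for j in range(1, M + 1)])   # M x W values h_j(t)

N = 5000
Z = rng.standard_normal((N, M))
# (16): H(t) = -beta(t) + sum_i Z_i sqrt(lam_i) sum_j P_j^i h_j(t)
H = -beta_hat(t_grid) + (Z * np.sqrt(lam)) @ (P.T @ hj)
```

Raising M only grows these matrix products; no further MPP searches are needed, which is why the convergence check on M is essentially free.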
3.4 Reliability analysis
The next step is to use the expanded processes to calculate reliability. We use simulation for this task because the computational cost is no longer a concern.
3.4.1 Sampling of H(t)
After H(t) is expanded in terms of standard Gaussian random variables, as given in (12) and (16), we can easily obtain samples of H(t). We first discretize the time interval [0, T] into W time instants, t_1, t_2, ⋯, t_W. As the procedures for EOLE and OSE are similar, we discuss only OSE.
We plug the time instants t_1, t_2, ⋯, t_W into the M orthogonal functions and obtain the following matrix:

\(\mathbf{h}_{M\times W}=\left[{\begin{array}{cccc} h_{1}(t_{1}) & h_{1}(t_{2}) & \cdots & h_{1}(t_{W})\\ h_{2}(t_{1}) & h_{2}(t_{2}) & \cdots & h_{2}(t_{W})\\ \vdots & \vdots & \ddots & \vdots \\ h_{M}(t_{1}) & h_{M}(t_{2}) & \cdots & h_{M}(t_{W}) \end{array}}\right]\)  (19)
Plugging t_1, t_2, ⋯, t_W into \(\hat {{\beta }}(t)\), we have

\({\boldsymbol{\beta}}_{1\times W}=\left[\hat{\beta}(t_1),\,\hat{\beta}(t_2),\cdots,\hat{\beta}(t_W)\right]\)  (20)

and

\(\mathbf{B}_{N\times W}=\mathbf{I}_{N\times 1}\,{\boldsymbol{\beta}}_{1\times W}\)  (21)

where I_{N×1}=[1; 1; ⋯; 1]_{N×1} and N is the number of samples at each time instant.
After obtaining the eigenvalues and eigenvectors of Γ, we have the eigenvector matrix P_{M×M} and the vector of eigenvalues λ_{M×1}=[λ_1; λ_2; ⋯; λ_M]. Multiplying the transposed eigenvector matrix with the matrix given in (19), we have

\(\mathbf{Q}_{M\times W}=\mathbf{P}_{M\times M}^{T}\,\mathbf{h}_{M\times W}\)  (22)

We then have

\(\mathbf{S}_{N\times M}=\left[\sqrt{\lambda_1}\,\mathbf{Z}_1,\ \sqrt{\lambda_2}\,\mathbf{Z}_2,\cdots,\sqrt{\lambda_M}\,\mathbf{Z}_M\right]\)  (23)

in which \({\mathbf {Z}}_{i} =[{Z_{i}^{1}} ,\;{Z_{i}^{2}},\cdots ,{Z_{i}^{N}} ]^{T}\) is an N×1 vector of random samples of Z_i, i=1, 2, ⋯, M. With (23), the samples of H(t) are then generated by

\(\tilde{\mathbf{H}}_{N\times W}=\mathbf{S}_{N\times M}\,\mathbf{Q}_{M\times W}-\mathbf{B}_{N\times W}\)  (24)
\(\tilde {{H}}_{N\times W} \) is two dimensional. One dimension, described by W, is for the discretized time instants. The other dimension, described by N, is for the samples of the standard Gaussian variables at each time instant.
We then obtain the following samples in \(\tilde {{H}}_{N\times W} \):

\(\tilde{\mathbf{H}}_{N\times W}=\left[{\begin{array}{cccc} \tilde{h}^{(1)}(t_{1}) & \tilde{h}^{(1)}(t_{2}) & \cdots & \tilde{h}^{(1)}(t_{W})\\ \tilde{h}^{(2)}(t_{1}) & \tilde{h}^{(2)}(t_{2}) & \cdots & \tilde{h}^{(2)}(t_{W})\\ \vdots & \vdots & \ddots & \vdots \\ \tilde{h}^{(N)}(t_{1}) & \tilde{h}^{(N)}(t_{2}) & \cdots & \tilde{h}^{(N)}(t_{W}) \end{array}}\right]\)  (25)

where \(\tilde{h}^{(i)}(t_j)\) is the i-th sample of H(t) at time instant t_j.
Next we discuss how to calculate the time-variant reliability and the failure rate based on the samples.
3.4.2 Reliability and failure rate
After obtaining the samples \(\tilde {{H}}_{N\times W} \) of the response H(t), we estimate p_f(0, T). We first define an indicator function I(i, t_j), where \(\tilde{h}^{(i)}(t_j)\) denotes the i-th sample of H(t) at t_j, as follows:

\(I(i,t_j)=\left\{{\begin{array}{ll} 1, & \tilde{h}^{(i)}(t_j)>0\\ 0, & \text{otherwise} \end{array}}\right.\)  (26)
As discussed in Section 2.1, a failure occurs if the response passes the threshold for the first time. We then define a first-passage failure indicator I⁺(i) as follows:

\(I^{+}(i)=\left\{{\begin{array}{ll} 1, & \exists\, j\in\{1,2,\cdots,W\}\ \text{such that}\ I(i,t_j)=1\\ 0, & \text{otherwise} \end{array}}\right.\)  (27)
p_f(0, T) is then approximated by

\(p_f(0,T)\approx \frac{1}{N}\sum\limits_{i=1}^{N} I^{+}(i)\)  (28)
The proposed method can also estimate the failure rate v_1(t) on [0, T]. The failure rate is an important concept in reliability engineering and is widely used in cost analysis, maintenance scheduling, and warranty policy. It is the derivative of the cumulative distribution function of the first time to failure (FTTF) T_F (Singh et al. 2010; Hu and Du 2013c; Wang et al. 2011). v_1(t) can also be viewed as the first-time-upcrossing rate and is given by

\(v_1(t)=\lim\limits_{\Delta t\to 0^{+}}\frac{\Pr\{t<T_F\le t+{\Delta} t\}}{\Delta t}\)  (29)
To calculate v_1(t), we also define a first-passage indicator I_1(i, t_j) as

\(I_1(i,t_j)=\left\{{\begin{array}{ll} 1, & I(i,t_j)=1\ \text{and}\ I(i,t_k)=0\ \text{for all}\ k<j\\ 0, & \text{otherwise} \end{array}}\right.\)  (30)

v_1(t) is then estimated at t_j by

\(v_1(t_j)\approx \frac{1}{N\,{\Delta} t}\sum\limits_{i=1}^{N} I_1(i,t_j)\)  (31)

where Δt is the step between two successive time instants.
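The counting in (26) through (31) vectorizes directly over the sample matrix. The sketch below applies it to synthetic trajectories (a simple random walk standing in for the expanded samples of H(t)), estimating p_f(0, t) at every instant and the failure rate v_1(t) in one pass:

```python
import numpy as np

rng = np.random.default_rng(3)
N, W, T = 50_000, 60, 2.0
dt = T / (W - 1)

# Synthetic trajectories standing in for the expanded H(t) samples.
H_tilde = -3.0 + np.cumsum(rng.standard_normal((N, W)) * 0.4, axis=1)

I = H_tilde > 0                            # indicator I(i, t_j): response above threshold
failed_by = I.cumsum(axis=1) >= 1          # sample i has failed at or before t_j
pf_t = failed_by.mean(axis=0)              # p_f(0, t_j) for every instant at once
pf_0T = pf_t[-1]                           # p_f(0, T)

# First-passage indicator I_1: 1 only at the instant of the first upcrossing.
I1 = np.diff(failed_by.astype(np.int8), axis=1, prepend=np.int8(0))
v1 = I1.mean(axis=0) / dt                  # failure rate v_1(t_j)
```

By construction the failure rate integrates back to the probability of failure: sum(v1)·dt equals pf_0T up to floating-point rounding, and pf_t is non-decreasing, so the whole reliability function over [0, T] comes from one simulation.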
3.5 Numerical procedure
The numerical procedures of FOSA for stationary and non-stationary responses are summarized in this section.
3.5.1 Stationary response
- Step 1: Perform the MPP search at t=0: transform the random variables X and Y(0) into standard Gaussian random variables U_X and U_Y(0), and search for the MPP \(\mathbf{u}^{*}\) at t=0.
- Step 2: Compute the covariance matrix: discretize [0, T] into time instants and compute the covariance or correlation between each pair of time instants using (10).
- Step 3: Conduct the EOLE expansion: expand H(t) using (12) based on the eigenvalues and eigenvectors of the covariance matrix.
- Step 4: Generate samples of H(t): obtain the two-dimensional sampling matrix for H(t) based on (12).
- Step 5: Approximate p_f(0, T): estimate p_f(0, T) with the samples generated in Step 4 using (26) through (31).
3.5.2 Non-stationary response
- Step 1: Create the surrogate models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\): use the algorithm in Section 3.3.2 to build \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\) by performing MPP searches at the sampling points needed by the surrogate models.
- Step 2: Solve for the matrix Γ: compute the elements of Γ given in (14) based on \(\hat {{\rho }}_{L} (t,\;\tau )\).
- Step 3: Use OSE for H(t): solve for the eigenvalues and eigenvectors of Γ, and then expand H(t) using OSE as given in (16).
- Step 4: Generate samples of H(t): obtain the matrix of samples of H(t) using (19) through (25).
- Step 5: Perform the reliability analysis: calculate the reliability and failure rates with (26) through (31).
4 Numerical examples
In this section, two examples are used to demonstrate the effectiveness of FOSA. They are reliability analyses for a crank-slider mechanism system subjected to manufacturing imprecision and a beam subjected to stochastic loadings. There are no stochastic processes in the input variables in the first example, but the response is a non-stationary process because it is a function of time. In the second example, random variables, stochastic processes, and time all appear in the limit-state function.
4.1 A mechanism example
A two-slider-crank mechanism is shown in Fig. 4. The link with lengths \(R_{1}\) and \(R_{3}\) rotates with an angular velocity of ω=π rad/s. The motion output is the difference between the displacements of the two sliders A and B. The mechanism is supposed to work with small motion errors during the period of time [0, T]=[0, 2] seconds; a failure occurs when the motion error is larger than e=0.94 mm. The motion error is defined as the difference between the desired motion output and the actual motion output. The time-variant probability of failure in one motion cycle is to be determined.
The limit-state function for the mechanism is given by
in which
Table 2 shows the random variables and other parameters.
Since the response of the mechanism is non-stationary, following the procedure presented in Section 3.5.2, we first constructed Kriging models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\tau )\) using the algorithm given in Section 3.3.2. The convergence criterion was ε_MSE = 10^−4. MPP searches were performed at seven evenly distributed initial time instants. Figures 5 and 6 show the initial surrogate model \(\hat {{\beta }}(t)\) constructed with the seven samples {t_i, β(t_i)}, i = 1, …, 7, and the mean square errors of \(\hat {{\beta }}(t)\), respectively. The initial surrogate model \(\hat {{\rho }}_{L} (t,\;\tau )\) and its mean square errors are plotted in Figs. 7 and 8. Since the maximum mean square error was large, we gradually added more time instants and performed more MPP searches until convergence. Figures 9, 10, 11 and 12 give the final surrogate models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\), their mean square errors, as well as the initial and added sample points.
After the surrogate models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\) were constructed, we characterized the equivalent stochastic process H(t) with the OSE method given in Section 3.3.3. To determine an appropriate value for the expansion order M, we performed reliability analysis with different values of M, starting from two. As discussed in Section 3.3.3, the expansion of H(t) with OSE is independent of the MPP search; performing reliability analysis with different values of M therefore does not require evaluating the original limit-state function. To check the convergence, we used the percentage change, which is defined by

$$\varepsilon (M)=\frac{\left| {p_{f}^{t}} (M)-{p_{f}^{t}} (M-1) \right|}{{p_{f}^{t}} (M-1)}\times 100\,\% $$

where \({p_{f}^{t}} (M)\) is p_f(0, T) with an expansion order of M.
Figure 13 shows the percentage change. It indicates that the percentage change converged at M = 10, which implies that M ≥ 10 is an appropriate expansion order; we therefore used M = 10. We obtained not only p_f(0, 2) for the time interval [0, 2] seconds, but also the function p_f(0, t) for the time interval [0, t], where t < 2 seconds. To evaluate the accuracy and efficiency of FOSA, we compared its results with those from the Rice/FORM method and MCS. For MCS, the time interval [0, 2] seconds was divided into 60 time instants, and 5×10^6 samples were generated at each time instant. The comparison is given in Table 3 and depicted in Fig. 14, which shows the high accuracy of FOSA.
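The convergence study on M can be automated, since changing M reuses the same surrogate models and requires no new limit-state evaluations. Below is a sketch with a hypothetical stand-in `estimate_pf(M)`; the real OSE-based estimate of p_f(0, T) would take its place.

```python
import numpy as np

# Sketch of the convergence study on the expansion order M: sweep M and stop
# when the percentage change in p_f(0, T) falls below a tolerance.
# `estimate_pf` is a hypothetical stand-in that converges with M.
def estimate_pf(M: int) -> float:
    return 0.05 * (1.0 - np.exp(-0.8 * M))   # assumed, converges as M grows

def select_order(tol_percent: float = 1.0, M_max: int = 50) -> int:
    pf_prev = estimate_pf(2)                 # start from M = 2 as in the text
    for M in range(3, M_max + 1):
        pf = estimate_pf(M)
        change = abs(pf - pf_prev) / pf_prev * 100.0   # percentage change
        if change < tol_percent:
            return M
        pf_prev = pf
    return M_max

M_star = select_order()
print(M_star)
```

Because each sweep is essentially free, a conservative order above the stabilization point can be chosen at no extra cost.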
From FOSA, we also obtained the first-passage failure rate at every time instant. The failure rates from FOSA and MCS are plotted in Fig. 15. Table 4 then shows the numbers of function evaluations required by the three methods.
The results show that FOSA improved both the accuracy and efficiency of the Rice/FORM method significantly for the mechanism problem.
4.2 A beam under stochastic loads
The problem in Andrieu-Renaud et al. (2004) was modified for our second example. The problem involves a beam under two stochastic loads, as shown in Fig. 16. The cross section A-A is rectangular with initial width a_0 and height b_0. Both dimensions decrease at a rate of k due to corrosion. A random load F acts at the midpoint of the beam. The beam is also subjected to a constant weight and another stochastic load q, which is uniformly distributed on the top surface of the beam.
A failure occurs when the stress exceeds the ultimate strength. The limit-state function is given by
where σ_u is the ultimate strength, ρ_st is the density, and L is the length of the beam.
Table 5 provides all the random variables and parameters. The auto-correlation coefficient functions of the stochastic processes F(t) and q(t) are given by
and
respectively.
We considered two cases. In case one, k = 0 and the mean μ_F of F(t) is constant over time, meaning that the width and height do not decrease with time. As the load is stationary, the response of the beam is also stationary. In case two, k = 1×10^−4 m/yr and μ_F varies with time; the stress response of the beam is then non-stationary. μ_F(t) over [0, 15] years is plotted in Fig. 17 and given by
Case 1: Stationary response

When the response is stationary, as discussed in Section 3.2, we used EOLE to calculate p_f(0, T) over [0, 15] years. We also compared the results with those from the Rice/FORM method and MCS, which are provided in Tables 6 and 7, respectively. The time step size, \({\Delta } t=\frac {T}{s}\), used for FOSA was 0.1 years.
The results show that FOSA is much more accurate and efficient than the Rice/FORM method.
Since the accuracy of EOLE may be affected by the time step size, we studied the effect of the time step size. The results are plotted in Fig. 18.
Figure 18 shows that refining the step size improves the accuracy. To achieve good accuracy, we can therefore use a small step size, which does not increase the computational cost. Sudret and Der Kiureghian (2002) have investigated the selection criterion for the step size. Note that there is some noise in Fig. 18; it comes from MCS, and increasing the number of MCS samples would reduce it.
Case 2: Non-stationary response

For this case, the method presented in Section 3.3 was employed. Following the procedure given in Section 3.5, we first constructed surrogate models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\). MPP searches were performed at 16 initial time instants t_i = (i − 1)(15 − 0)/15 years, i = 1, 2, ⋯, 16. Figures 19, 20, 21, and 22 show the constructed surrogate models \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\) and the associated mean square errors, respectively.
The figures indicate that the maximum mean square errors of \(\hat {{\beta }}(t)\) and \(\hat {{\rho }}_{L} (t,\;\tau )\) satisfied the convergence criterion ε_MSE = 10^−4. We then used the constructed surrogate models to characterize the equivalent stochastic process H(t). As in the first example, we also performed a convergence study for choosing M. Figure 23 shows the percentage change with different values of M; it illustrates that the percentage change stabilized at M = 12. Since M does not affect the computational cost, we used M = 25 for this problem.
Table 8 and Fig. 24 present the comparison between the three methods. For MCS, the time interval was divided into 600 time instants, and 8×10^6 samples were generated at each time instant. Table 9 shows the numbers of function evaluations required by the different methods. Figure 25 shows the failure rates obtained at each time instant.
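For reference, the brute-force MCS used for validation can be sketched as follows, with a hypothetical limit-state g (failure when g < 0) and deliberately smaller settings than the 600 time instants and 8×10^6 samples used above.

```python
import numpy as np

# Brute-force MCS reference: discretize [0, T], sample the random inputs,
# evaluate the limit-state on the whole grid, and count any-time failures.
# `g` is a hypothetical limit-state; the real beam limit-state from the
# paper would take its place.
rng = np.random.default_rng(3)

T, n_t, n_mc = 15.0, 200, 20_000        # reduced settings for speed
t = np.linspace(0.0, T, n_t)

def g(X, t):
    # hypothetical limit-state: random capacity minus a demand growing with time
    return X[:, None] - 0.05 * t[None, :]

X = rng.normal(3.0, 1.0, n_mc)          # samples of the random input
G = g(X, t)                             # limit-state values, shape (n_mc, n_t)
p_f = (G < 0.0).any(axis=1).mean()      # fail if g < 0 at any time in [0, T]
print(p_f)
```

The cost is one limit-state evaluation per sample per time instant, which is why MCS serves only as the accuracy benchmark.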
The results show that the accuracy and efficiency of FOSA are good for a problem that involves non-stationary stochastic processes, random variables, and time.
5 Conclusions
The Rice/FORM method is widely used for time-variant reliability analysis, but its accuracy is poor for many problems. This work improves the accuracy of the Rice/FORM method based on series expansions and simulations. Different from traditional simulation methods, the proposed method performs sampling at the response level; the time-variant probability of failure is then estimated from the samples of the response. During the simulation process, no upcrossing rate needs to be approximated, no derivative of the response is required, and no independent-upcrossing assumption is made. As a result, the accuracy of the proposed method is higher than that of the Rice/FORM method. For problems with stationary responses, the proposed method is as efficient as the time-independent FORM and much more efficient than the Rice/FORM method. For non-stationary responses, it is much more efficient and accurate than the Rice/FORM method. Two numerical examples have illustrated the accuracy and efficiency of the proposed method.
The Kriging model is employed to construct surrogate models for the mean and auto-correlation functions of the equivalent Gaussian stochastic process. As a surrogate model technique, it may have convergence problems. In the future, we will investigate how to select appropriate surrogate model techniques for different problems. We will also study the application of other surrogate model methods to the construction of these two functions.
As the new method is based on FORM, it shares the disadvantages of FORM. For example, the accuracy may be poor if the limit-state function is highly nonlinear with respect to the random and stochastic-process input variables in the transformed space. Another error source is the use of Kriging models for the mean and auto-correlation functions of the equivalent Gaussian process obtained from FORM. The series expansions of the equivalent Gaussian process also produce some error. The last two errors, however, can easily be reduced with more sample points and expansion terms. The most significant error is from FORM. Thus, an important future research task is to extend the proposed method to the Second Order Reliability Method (SORM) for higher accuracy. Other future work may include applying the proposed method to reliability-based design optimization and developing more efficient sampling methods to further improve the efficiency.
References
Andrieu-Renaud C, Sudret B, Lemaire M (2004) The PHI2 method: a way to compute time-variant reliability. Reliab Eng Syst Saf 84(1):75–86
Bergman LA, Shinozuka M, Bucher CG, Sobczyk K, Dasgupta G, Spanos PD, Deodatis G, Spencer BF, Ghanem RG, Sutoh A, Grigoriu M, Takada T, Hoshiya M, Wedig WV, Johnson EA, Wojtkiewicz SF, Naess A, Yoshida I, Pradlwarter HJ, Zeldin BA, Schuëller GI, Zhang R (1997) A state-of-the-art report on computational stochastic mechanics. Probabilistic Eng Mech 12(4):197–321
Bernard MC, Shipley JW (1972) The first passage problem for stationary random structural vibration. J Sound Vib 24(1):121–132
Breitung K (1984) Asymptotic crossing rates for stationary Gaussian vector processes. Tech. Report, 1, Dept. of Math, and Statistics, Univ. of Lund, Lund, Sweden
Breitung K (1988) Asymptotic approximations for the outcrossing rates of stationary vector processes. Stochast Process Appl 13:195–207
Chen JB, Li J (2007) The extreme value distribution and dynamic reliability analysis of nonlinear structures with uncertain parameters. Struct Saf 29(2):77–93
Choi SK, Grandhi RV, Canfield RA (2007) Reliability-based structural design. Springer, pp 1–7
Crato N, Ray BK (1996) Model selection and forecasting for long-range dependent processes. J Forecast 15(2):107–125
Dahlberg T (1988) The peak factor of a short sample of a stationary Gaussian process. J Sound Vib 122(1):1–10
Ditlevsen O (1983) Gaussian outcrossings from safe convex polyhedrons. J Eng Mech 109(1):127–148
Eldred MS (2009) Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design. In: Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, art. no. 2009–2274
Eldred MS, Burkardt J (2009) Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification. In: Proceedings 47th AIAA aerospace sciences meeting including the new horizons forum and aerospace exposition, art. no. 2009–0976
Gebraeel N, Elwany A, Pan J (2009) Residual life predictions in the absence of prior degradation knowledge. IEEE Transactions on Reliab 58(1):106–117
Ghanem RG, Spanos PD (1991) Stochastic finite element analysis: a spectral approach. Springer, New York, pp 67–99
González-Fernández RA, Leite Da Silva AM (2011) Reliability assessment of time-dependent systems via sequential cross-entropy Monte Carlo simulation. IEEE Trans Power Syst 26(4):2381–2389
Grigoriu M (2003) A class of models for non-stationary Gaussian processes. Probabilistic Eng Mech 18(3):203–213
Grogan JA, Leen SB, McHugh PE (2013) Optimizing the design of a bioabsorbable metal stent using computer simulation methods. Biomaterials 34(33):8049–8060
Gusev AA (1996) Peak factors of Mexican accelerograms: evidence of a non-Gaussian amplitude distribution. J Geophys Res B: Solid Earth 101(9):20083–20090
Hagen O, Tvedt L (1991) Vector process out-crossing as parallel system sensitivity measure. J Eng Mech 117(10):2201–2220
Hagen O, Tvedt L (1992) Parallel system approach for vector out-crossing. J Offshore Mech Arctic Eng 114(2):122–128
Hu C, Youn BD, Wang P, Taek Yoon J (2012) Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life. Reliab Eng & Syst Saf 103:120–135
Hu Z, Du X (2012) Reliability analysis for hydrokinetic turbine blades. Renew Energy 48:251–262
Hu Z, Du X (2013) A sampling approach to extreme value distribution for time-dependent reliability analysis. J Mech Des 135(7):071003
Hu Z, Du X (2013) Time-dependent reliability analysis with joint upcrossing rates. Struct Multidiscip Optim 48(5):893–907
Hu Z, Du X (2013) Lifetime cost optimization with time-dependent reliability. Engineering optimization (ahead-of-print), pp 1–22
Hu Z, Li H, Du X, Chandrashekhara K (2013) Simulation-based time-dependent reliability analysis for composite hydrokinetic turbine blades. Struct Multidiscip Optim 47(5):765–781
Itoh Y (2007) A class of Gaussian hybrid processes for modeling financial markets. Asia-Pacific Finan Markets 14(3):185–199
Li J, Mourelatos Z, Singh A (2012) Optimal preventive maintenance schedule based on lifecycle cost and time-dependent reliability. SAE Int J Mater Manuf 5(1):87–95
Lindgren G (1984) Extremal ranks and transformation of variables or extremes of functions of multivariate Gaussian processes. Stoch Process Appl 17:285–312
Lockwood B, Mavriplis D (2013) Gradient-based methods for uncertainty quantification in hypersonic flows. Comput Fluids 85:27–38
Lophaven SN, Nielsen HB, Søndergaard J (2002) DACE-A MATLAB Kriging toolbox. Technical University of Denmark
Lovric M (2011) International encyclopedia of statistical science. Springer, pp 496–497
Madsen PH, Krenk S (1984) Integral equation method for the first-passage problem in random vibration. J Appl Mech Trans ASME 51(3):674–679
Mori Y, Ellingwood BR (1993) Time-dependent system reliability analysis by adaptive importance sampling. Struct Saf 12(1):59–73
Newbold P, Agiakloglou C, Miller J (1994) Adventures with ARIMA software. Int J Forecast 10(4):573–581
Preumont A (1985) On the peak factor of stationary Gaussian processes. J Sound Vib 100(1):15–34
Press WH (2007) Numerical recipes 3rd edition: the art of scientific computing. Cambridge university press
Raghavan B, Breitkopf P (2013) Asynchronous evolutionary shape optimization based on high-quality surrogates: application to an air-conditioning duct. Eng Comput 29(4):467–476
Rahman S, Xu H (2005) A meshless method for computational stochastic mechanics. Int J Comput Methods Eng Sci Mech 6:41–58
Rice SO (1944) Mathematical analysis of random noise. Bell Syst Techn J 23:282–332
Rice SO (1945) Mathematical analysis of random noise. Bell Syst Tech J 24:146–156
Richard B, Cremona C, Adelaide L (2012) A response surface method based on support vector machines trained with an adaptive experimental design. Struct Saf 39:14–21
Singh A, Mourelatos ZP, Li J (2010) Design for lifecycle cost using time-dependent reliability. J Mech Des Trans ASME 132(9):0910081–09100811
Singh A, Mourelatos ZP, Nikolaidis E (2011) An importance sampling approach for time-dependent reliability. In: Proceedings of the ASME design engineering technical conference, Washington, DC, I.C. pp 1077–1088
Singh A, Mourelatos Z, Nikolaidis E (2011) Time-dependent reliability of random dynamic systems using time-series modeling and importance sampling. SAE Int J Mater Manuf 4(1):929–946
Steponaviče I, Ruuska S, Miettinen K (2014) A solution process for simulation-based multiobjective design optimization with an application in the paper industry. CAD Comput Aided Des 47:45–58
Sudret B (2008) Global sensitivity analysis using polynomial chaos expansions. Reliab Eng Syst Saf 93(7):964–979
Sudret B, Der Kiureghian A (2002) Comparison of finite element reliability methods. Probabilistic Eng Mech 17(4):337–348
Sudret B, Kiureghian AD (2000) Stochastic finite element methods and reliability: a state-of-the-art report. A report on research supported by Electricité de France under Award Number D56395-T6L29-RNE861, Report No. UCB/SEMM-2000/08. University of California, Berkeley
Vanmarcke EH (1975) On the distribution of the first-passage time for normal stationary random processes. J Appl Mech 42:215–220
Wan J, Zudilin W (2013) Generating functions of Legendre polynomials: a tribute to Fred Brafman. J Approx Theory:198–213
Wang Z, Wang P (2012) Reliability-based product design with time-dependent performance deterioration. In: Proceedings 2012 IEEE international conference on prognostics and health management: enhancing safety, efficiency, availability, and effectiveness of systems through PHM technology and application, PHM 2012, Denver, CO, 18-21 June, 2012, pp 1–12. doi:10.1109/ICPHM.2012.6299541
Wang GG, Dong Z, Aitchison P (2001) Adaptive response surface method - A global optimization scheme for approximation-based design problems. Eng Optim 33(6):707–733
Wang Z, Mourelatos ZP, Li J, Singh A, Baseski I (2013) Time-dependent reliability of dynamic systems using subset simulation with splitting over a series of correlated time intervals. In: Proceedings ASME 2013 international design engineering technical conferences and computers and information in engineering conference, American society of mechanical engineers, pp V03BT03A048-V003BT003A048
Yang JN, Shinozuka M (1971) On the first excursion probability in stationary narrow- band random vibration. J Appl Mech Trans ASME 38 Ser E (4):1017–1022
Youn BD, Hu C, Wang P (2011) Resilience-driven system design of complex engineered systems. J Mech Des 133:10101011
Žilinskas A (1992) A review of statistical models for global optimization. J Global Optim 2(2):145–153
Zhang J, Ellingwood B (1994) Orthogonal series expansions of random fields in reliability analysis. J Eng Mech 120(12):2660–2677
Zhang J, Du X (2011) Time-dependent reliability analysis for function generator mechanisms. ASME J Mech Des 133(3):031005.
Zhang HQ, Gao LB (2012) Application of legendre polynomial in predicting of energy consumption per capital. In: World automation congress (WAC), pp 1–3
Acknowledgments
This material is based upon work supported by the National Science Foundation through grant CMMI 1234855. We would also like to acknowledge the support from the Intelligent Systems Center at the Missouri University of Science and Technology.
Appendices
Appendix A: Expansion optimal linear estimation (EOLE)
EOLE (Sudret and Der Kiureghian 2002) is used to generate samples of a Gaussian stochastic process Y(t) on [0, T], which is divided into s intervals with a step size \({\Delta } t=\frac {T}{s}\). The s time instants are t_i = (i − 1)Δt, where i = 1, 2, ⋯, s. The covariance matrix Σ is given by

$$\boldsymbol{\Sigma} =\left[ c_{Y} (t_{i},\; t_{j}) \right]_{i,j=1,2,\cdots,s} $$

where c_Y(t_i, t_j) is the covariance of Y(t) at t_i and t_j.
Let the eigenvalues and eigenvectors of Σ be η_i and φ_i, i = 1, 2, ⋯, s, respectively. Then Y(t) is approximated by the following series expansion (Sudret and Der Kiureghian 2002):

$$\hat{Y}(t)=\mu_{Y} (t)+\sigma_{Y} (t)\sum\limits_{i=1}^{p} \frac{Z_{i}}{\sqrt{\eta_{i}}} \boldsymbol{\varphi}_{i}^{T} \boldsymbol{\rho}_{Y} (t) $$

in which Z_i (i = 1, 2, ⋯, p) are independent standard Gaussian random variables, ρ_Y(t) = [ρ_Y(t, t_1), ρ_Y(t, t_2), …, ρ_Y(t, t_p)]^T, and μ_Y(t) and σ_Y(t) are the mean and standard deviation of Y(t), respectively. Here p ≤ s is the number of expansion terms; when p = s, no truncation is made and the error is minimum.
The accuracy of EOLE is affected by the size of the finite element mesh Δt, and the selection of the size depends on the correlation length of the stochastic process (Sudret and Der Kiureghian 2002). The shorter the mesh length, the more accurate the results.
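Under these definitions, EOLE sampling can be sketched as follows, assuming a stationary process with a hypothetical exponential correlation function of length l; on the grid of time instants, the correlation vector ρ_Y(t_k) coincides with the k-th row of Σ.

```python
import numpy as np

# Minimal EOLE sketch: one sample path of a stationary Gaussian process Y(t)
# on [0, T], assuming an exponential correlation with length l (hypothetical).
rng = np.random.default_rng(1)

T, s = 15.0, 150                   # period and number of intervals
dt = T / s
t = np.arange(s) * dt              # t_i = (i - 1) * dt
mu_Y, sigma_Y, l = 0.0, 1.0, 2.0

rho = lambda a, b: np.exp(-np.abs(a - b) / l)    # assumed correlation function
Sigma = rho(t[:, None], t[None, :])              # correlation matrix at (t_i, t_j)

eta, phi = np.linalg.eigh(Sigma)                 # eigenvalues/eigenvectors of Sigma
order = np.argsort(eta)[::-1]
eta, phi = eta[order], phi[:, order]

p = 20                                           # truncation, p <= s terms
Z = rng.standard_normal(p)

# EOLE evaluated on the grid:
# Y(t) ~ mu_Y(t) + sigma_Y(t) * sum_i (Z_i / sqrt(eta_i)) * phi_i^T rho_Y(t)
Y = mu_Y + sigma_Y * (Sigma @ (phi[:, :p] * (Z / np.sqrt(eta[:p])))).sum(axis=1)
print(Y.shape)
```

Refining dt (larger s) improves the discretization at the cost of a larger eigenproblem, consistent with the step-size discussion above.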
Appendix B: Orthogonal series expansion (OSE)
Different from EOLE, OSE does not need finite element meshes, and its accuracy is therefore not affected by the mesh size. OSE approximates a Gaussian process Y(t) as (Zhang and Ellingwood 1994)

$$Y(t)\approx \mu_{Y} (t)+\sum\limits_{i=1}^{M} \gamma_{i} h_{i} (t) $$

in which Γ = [γ_1, γ_2, …, γ_M] is a vector of correlated zero-mean Gaussian random variables, and h_i(t), i = 1, 2, ⋯, M, are orthogonal functions.
The correlated random variables γ i are then transformed into independent standard Gaussian random variables with the following steps.
Construct an M×M square matrix Γ by

$${\Gamma}_{ij} ={\int}_{0}^{T} {\int}_{0}^{T} h_{i} (t)\, C_{YY} (t,\;\tau)\, h_{j} (\tau)\, dt\, d\tau $$

where C_YY(t, τ) is the auto-covariance of Y(t).
Then obtain the eigenvalues and eigenvectors of Γ, and compute a lower triangular matrix Λ from

$$\boldsymbol{\Lambda} \boldsymbol{\Lambda}^{T} =\mathbf{P} \boldsymbol{\lambda} \mathbf{P}^{T} =\boldsymbol{\Gamma} $$

where λ is a diagonal matrix with its diagonal elements being the eigenvalues of Γ, and P is an M×M square matrix whose i-th column is the i-th eigenvector of Γ. Then Γ = [γ_1, γ_2, …, γ_M] is transformed into independent standard Gaussian random variables Z = [Z_1, Z_2, …, Z_M] through

$$\boldsymbol{\Gamma} =\boldsymbol{\Lambda} \mathbf{Z} $$
The Legendre polynomials may be chosen as the orthogonal functions h_i(t). When the eigenfunctions and covariance functions are approximated using the same orthogonal functions, OSE can be regarded as an approximation to the K−L expansion (Zhang and Ellingwood 1994).
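A minimal sketch of this construction follows, with Legendre polynomials scaled to [0, T] and an assumed exponential auto-covariance. The lower triangular Λ is obtained here via the Cholesky factorization, which satisfies ΛΛ^T = Γ.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Sketch of OSE with Legendre polynomials as the orthogonal functions h_i(t),
# for a zero-mean Gaussian process with an assumed exponential auto-covariance.
T, M = 15.0, 12
n_q = 200                                      # quadrature grid on [0, T]
t = (np.arange(n_q) + 0.5) * (T / n_q)         # midpoint-rule nodes
w = np.full(n_q, T / n_q)                      # quadrature weights

# h_i(t): Legendre polynomials mapped to [0, T], normalized so that
# int_0^T h_i(t) h_j(t) dt = delta_ij
x = 2.0 * t / T - 1.0
H = np.stack([Legendre.basis(i)(x) * np.sqrt((2 * i + 1) / T) for i in range(M)])

C = np.exp(-np.abs(t[:, None] - t[None, :]))   # assumed auto-covariance C_YY(t, tau)

# Gamma_ij = int_0^T int_0^T h_i(t) C_YY(t, tau) h_j(tau) dt dtau
Gamma = (H * w) @ C @ (H * w).T

# Correlated gamma = Lambda @ Z with lower triangular Lambda (Cholesky of Gamma),
# so that Cov(gamma) = Lambda Lambda^T = Gamma
Lam = np.linalg.cholesky(Gamma)
rng = np.random.default_rng(2)
Z = rng.standard_normal(M)
gamma = Lam @ Z

# One sample path of the process: Y(t) = sum_i gamma_i h_i(t)
Y = gamma @ H
print(Y.shape)
```

No time mesh enters the expansion itself; only the quadrature used to evaluate the double integral, which can be refined independently of M.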
Hu, Z., Du, X. First order reliability method for time-variant problems using series expansions. Struct Multidisc Optim 51, 1–21 (2015). https://doi.org/10.1007/s00158-014-1132-9